The RISKS Digest
Volume 7 Issue 73

Wednesday, 9th November 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

The Computer Jam — How it came about
John Markoff via Geoff Goodfellow
Single-bit error transmogrifications
Robert D. Houk
New news from Hacker attack on Philips France, 1987
Klaus Brunnstein
Re: Telephone answering machines
William Curtiss
Fly by Light
Martyn Thomas
WORM/VIRUS DISCUSSION:
Decompiled viruses
Dave Pare
Worms/viruses/moles/etc. and the risk of nuclear war
Clifford Johnson
The Worm
Vince Manis
Info on RISKS (comp.risks)

NYT/Markoff: The Computer Jam — How it came about

the tty of Geoff Goodfellow <geoff@fernwood.mpk.ca.us>
Tue, 8 Nov 88 21:40:00 PST
THE COMPUTER JAM: HOW IT CAME ABOUT
By JOHN MARKOFF
c.1988 N.Y. Times News Service, 8-Nov-88

   Computer scientists who have studied the rogue program that crashed through
many of the nation's computer networks last week say the invader actually
represents a new type of helpful software designed for computer networks.
   The same class of software could be used to harness computers spread around
the world and put them to work simultaneously.
   It could also diagnose malfunctions in a network, execute large computations
on many machines at once and act as a speedy messenger.
   But it is this same capability that caused thousands of computers in
universities, military installations and corporate research centers to stall
and shut down the Defense Department's Arpanet system when an illicit version
of the program began interacting in an unexpected way.
   ``It is a very powerful tool for solving problems,'' said John F. Shoch, a
computer expert who has studied the programs. ``Like most tools it can be
misused, and I think we have an example here of someone who misused and abused
the tool.''
   The program, written as a ``clever hack'' by Robert Tappan Morris, a
23-year-old Cornell University computer science graduate student, was
originally meant to be harmless.  It was supposed to copy itself from computer
to computer via Arpanet and merely hide itself in the computers. The purpose?
Simply to prove that it could be done.
   But by a quirk, the program instead reproduced itself so frequently that the
computers on the network quickly became jammed.
   Interviews with computer scientists who studied the network shutdown and
with friends of Morris have disclosed the manner in which the events unfolded.
   The program was introduced last Wednesday evening at a computer in the
artificial intelligence laboratory at the Massachusetts Institute of
Technology. Morris was seated at his terminal at Cornell in Ithaca, N.Y., but
he signed onto the machine at MIT.  Both his terminal and the MIT machine were
attached to Arpanet, a computer network that connects research centers,
universities and military bases.
   Using a feature of Arpanet, called Sendmail, to exchange messages among
computer users, he inserted his rogue program. It immediately exploited a
loophole in Sendmail at several computers on Arpanet.
   Typically, Sendmail is used to transfer electronic messages from machine to
machine throughout the network, placing the messages in personal files.
   However, the programmer who originally wrote Sendmail three years ago had
left a secret ``backdoor'' in the program to make his own work easier. It
permitted any program written in the computer language known as C to be mailed
like any other message.
   So instead of a program being sent only to someone's personal files, it
could also be sent to a computer's internal control programs, which would start
the new program. Only a small group of computer experts, among them Morris,
knew of the backdoor.
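   For illustration only (this is a toy sketch, not sendmail's code and not
part of the Times article), the danger of such a backdoor is easy to picture:
a mailer that accepts a command in place of a mailbox hands the message body,
and effectively control of the machine, to that command.

    /* Toy mailer, for illustration only -- not sendmail.  If the
       "recipient" begins with '|', the message body is piped into a
       program instead of being appended to a mailbox. */
    #include <stdio.h>

    static void deliver(const char *rcpt, const char *body)
    {
        if (rcpt[0] == '|') {                 /* recipient is a command */
            FILE *p = popen(rcpt + 1, "w");   /* body becomes its input */
            if (p != NULL) {
                fputs(body, p);
                pclose(p);
            }
        } else {
            printf("append to mailbox %s: %s", rcpt, body);
        }
    }

    int main(void)
    {
        deliver("alice", "hello\n");                       /* normal mail */
        deliver("|cat", "this line reaches a program, not a mailbox\n");
        return 0;
    }

   No password is involved; whoever can send mail can start a program.
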
   As they dissected Morris's program later, computer experts found that it
elegantly exploited the Sendmail backdoor in several ways, copying itself from
computer to computer and tapping two additional security provisions to enter
new computers.
   The invader first began its journey as a program written in the C language.
But it also included two ``object'' or ``binary'' files — programs that could
be run directly on Sun Microsystems machines or Digital Equipment VAX computers
without any additional translation, making it even easier to infect a computer.
   One of these binary files had the capability of guessing the passwords of
users on the newly infected computer. This permitted wider dispersion of the
rogue program.
   To guess the password, the program first read the list of users on the
target computer and then systematically tried using their names, permutations
of their names or a list of commonly used passwords. When successful in
guessing one, the program then signed on to the computer and used the
privileges involved to gain access to additional computers in the Arpanet
system.
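   For illustration only (this is not the worm's code and is not part of the
Times article), a minimal C sketch of the kind of guessing loop described
above, assuming the classic getpwent() and crypt() interfaces; the names and
the little word list are made up.

    /* Sketch of the password-guessing strategy described above.
       Not the worm's code; the word list is illustrative only.
       crypt() is declared in <unistd.h> on old BSD systems; on
       modern systems it lives in <crypt.h> and needs -lcrypt. */
    #include <stdio.h>
    #include <string.h>
    #include <pwd.h>

    extern char *crypt(const char *key, const char *salt);

    static const char *common[] = { "password", "wizard", "computer", 0 };

    static int try_guess(const struct passwd *pw, const char *guess)
    {
        char *hash = crypt(guess, pw->pw_passwd);  /* salt from stored hash */
        return hash != NULL && strcmp(hash, pw->pw_passwd) == 0;
    }

    int main(void)
    {
        struct passwd *pw;
        char buf[64];
        int i;

        while ((pw = getpwent()) != NULL) {
            if (try_guess(pw, pw->pw_name))          /* the account name */
                printf("%s: guessed (own name)\n", pw->pw_name);
            snprintf(buf, sizeof buf, "%s%s", pw->pw_name, pw->pw_name);
            if (try_guess(pw, buf))                  /* a simple permutation */
                printf("%s: guessed (name doubled)\n", pw->pw_name);
            for (i = 0; common[i] != 0; i++)         /* a tiny dictionary */
                if (try_guess(pw, common[i]))
                    printf("%s: guessed (%s)\n", pw->pw_name, common[i]);
        }
        endpwent();
        return 0;
    }
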
   Morris's program was also written to exploit another loophole. A program on
Arpanet called Finger lets users on a remote computer know the last time that a
user on another network machine had signed on. Because of a bug, or error, in
Finger, Morris was able to use the program as a crowbar to further pry his way
through computer security.
   The defect in Finger, which was widely known, gives a user access to a
computer's central control programs if an excessively long message is sent to
Finger. So by sending such a message, Morris's program gained access to these
control programs, thus allowing the further spread of the rogue.
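   The defect in Finger is the classic fixed-size-buffer overrun. For
illustration only (this is not fingerd's source and is not part of the Times
article), the vulnerable pattern looks like this:

    /* The vulnerable pattern, for illustration -- not fingerd's source.
       gets() has no way of knowing how big the buffer is, so a request
       longer than the buffer overruns it and tramples adjacent memory. */
    #include <stdio.h>

    int main(void)
    {
        char line[512];

        gets(line);        /* unbounded read: the hole */
        /* fgets(line, sizeof line, stdin) would bound the read */
        printf("finger request: %s\n", line);
        return 0;
    }
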
   The rogue program did other things as well. For example, each copy
frequently signaled its location back through the network to a computer at the
University of California at Berkeley. A friend of Morris said that this was
intended to fool computer researchers into thinking that the rogue had
originated at Berkeley.
   The program contained another signaling mechanism that became its Achilles'
heel and led to its discovery. It would signal a new computer to learn whether
it had been invaded. If not, the program would copy itself into that computer.
   But Morris reasoned that another expert could defeat his program by sending
the correct answering signal back to the rogue. To parry this, Morris
programmed his invader so that once every 10 times it sent the query signal it
would copy itself into the new machine regardless of the answer.
   The choice of 1 in 10 proved disastrous because it was far too frequent. It
should have been one in 1,000 or even one in 10,000 for the invader to escape
detection.
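   Expressed as a sketch (the names are hypothetical, the real code was more
involved, and this is not part of the Times article), the fatal decision
amounts to this:

    /* Sketch of the 1-in-10 decision described above; illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    static int should_copy(int other_copy_answered)
    {
        if (!other_copy_answered)
            return 1;                 /* no copy there yet: move in */
        return rand() % 10 == 0;      /* 1 time in 10, copy in anyway */
    }

    int main(void)
    {
        int i, copies = 0;

        srand(1);
        for (i = 0; i < 1000; i++)    /* probe 1000 already-infected hosts */
            copies += should_copy(1);
        printf("re-infections: %d of 1000\n", copies);   /* roughly 100 */
        return 0;
    }

   Changing the 10 to 1,000 or 10,000 is the difference described above.
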
   But because the speed of communications on Arpanet is so fast, Morris's
illicit program echoed back and forth through the network in minutes, copying
and recopying itself hundreds or thousands of times on each machine, eventually
stalling the computers and then jamming the entire network.
   After introducing his program Wednesday night, Morris left his terminal for
an hour. When he returned, the nationwide jamming of Arpanet was well under
way, and he could immediately see the chaos he had started. Within a few hours,
it was clear to computer system managers that something was seriously wrong
with Arpanet.
   By Thursday morning, many knew what had happened, were busy ridding their
systems of the invader and were warning colleagues to unhook from the network.
They were also modifying Sendmail and making other changes to their internal
software to thwart another invader.
   The software invader did not threaten all computers in the network. It was
aimed only at the Sun and Digital Equipment computers running a version of the
Unix operating system written at the University of California at Berkeley.
Other Arpanet computers using different operating systems escaped.
   These rogue programs have in the past been referred to as worms or, when
they are malicious, viruses. Computer science folklore has it that the first
worms written were deployed on the Arpanet in the early 1970s.
   Researchers tell of a worm called ``creeper,'' whose sole purpose was to
copy itself from machine to machine, much the way Morris's program did last
week. When it reached each new computer it would display the message: ``I'm the
creeper. Catch me if you can!''
   As legend has it, a second programmer wrote another worm program that was
designed to crawl through the Arpanet, killing creepers.
   Several years later, computer researchers at the Xerox Corp.'s Palo Alto
Research Center developed more advanced worm programs.  Shoch and Jon Hupp
developed ``town crier'' worm programs that acted as messengers and
``diagnostic'' worms that patrolled the network looking for malfunctioning
computers.
   They even described a ``vampire'' worm program. It was designed to run very
complex programs late at night while the computer's human users slept. When the
humans returned in the morning, the vampire program would go to sleep, waiting
to return to work the next evening.

      [Please keep any responses short and to the point. PGN]


Single-bit error transmogrifications

Robert D. Houk <houk@sli>
Tue, 8 Nov 88 13:50:34 EST
  [This started out with a question from Robert on mutations.  My answer
  to him (not in RISKS) included this fragment:
    In general, the difference between killer and benign could be one bit.  The
    ARPANET crash of 27 Oct 1980 resulted from bits accidentally being dropped
    in the time stamp of one status word. ...  PGN]

Another example of a one-bit error that escaped all error-detection to wreak
havoc: A DQ-11 [am I dating myself?] would very occasionally (every month or
so) drop a leading "0" bit at the beginning of a message it was transmitting
(this being a DDCMP protocol point-to-point network called "ANF-10" running
under the TOPS-10 operating system). If this bit stream happened to also end
in a "0" bit, the net effect was to "left-shift" the entire message by one
bit. This not only slipped by the CRC checks, it resulted in a
legally-formatted message as well! In particular, a random data message was
transmogrified into a network protocol START message. The receiving node saw
the START, and did the normal communications restart, including sending the
START-ACK to the other side, which promptly went ??WHAT??, took its line
down, and restarted communications with a real START message. The net effect
(for this particular customer/network) was an out-of-the-blue transient
network partitioning (they had no alternate routing paths, so all user
connections were lost in the partitioning), with no errors detected or
reported anywhere to account for it!
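
For the curious, the transformation Robert describes, an entire message
sliding left by one bit, can be pictured with a few lines of C (purely
illustrative; this has nothing to do with the DQ-11 hardware or DDCMP
framing):

    /* Illustration only: shift a whole byte buffer left by one bit, the
       way a dropped leading 0 effectively realigns a serial bit stream. */
    #include <stdio.h>

    static void shift_left_one_bit(unsigned char *buf, int len)
    {
        int i;

        for (i = 0; i < len; i++) {
            buf[i] = (unsigned char)(buf[i] << 1);
            if (i + 1 < len)
                buf[i] |= (buf[i + 1] >> 7) & 1;   /* borrow next byte's MSB */
        }
    }

    int main(void)
    {
        unsigned char msg[] = { 0x41, 0x42, 0x43 };          /* "ABC" */

        shift_left_one_bit(msg, (int)sizeof msg);
        printf("%02x %02x %02x\n", msg[0], msg[1], msg[2]);  /* 82 84 86 */
        return 0;
    }

Every byte changes, yet in the ANF-10 case the shifted frame still carried a
valid CRC and parsed as a legal START message.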

Now, while we were all pretty amused (yea, even amazed) at such arcana,
I recall the customer being more of the opinion that it was a "killer"
and not all that amusing. A new DQ-11 fixed the problem.
                                                    -RDH


New news from Hacker attack on Philips France, 1987

Klaus Brunnstein <brunnstein%rz.informatik.uni-hamburg.dbp.de@RELAY.CS.NET>
07 Nov 88 12:49 GMT+0100
A German TV magazine reported last week that the German hackers who attacked
several computer systems and networks in the summer of 1987 (including NASA,
the SPANET, the CERN computers which have been labeled the `European hacker
center', as well as computers of Philips France and Thompson-Brandt/France)
had transferred design and construction plans of the MegaBit chip developed
in the Philips laboratories.  The only information available is that the
reporters have detailed graphics showing details of the MegaBit design.

Evidently it is very difficult to prosecute this data theft, since German law
does not apply to enterprises based in France.  Moreover, German law may not
be applicable at all, since its prerequisite that PHILIPS' computer system
have `special protection mechanisms' may not be met: evidently, the system
was protected only with UID and password, which may not be sufficient
protection (and was not).

Evidently, the attackers had much more knowledge, as well as equipment (e.g.,
sophisticated graphics terminals and plotters, special software), than a
`normal hacker' has.  There is speculation that these attackers were spies
rather than hackers of the CCC, which was blamed for the attack.  Moreover,
the leading members of the CCC, one of whom was arrested for the attack,
evidently do not have enough knowledge to work with such systems.

Klaus Brunnstein, Hamburg, FRG


Re: Telephone answering machines

<Curtiss@DOCKMASTER.ARPA>
Wed, 9 Nov 88 09:18 EST
In RISKS-7.69 Vince Manis states that his answering machine has an 8-bit
octal security code for a search space of 256.  He is not worried about
someone breaking into his machine since "people aren't patient enough to
call a number 256 times just to break in." Well, someone who is really
determined would and with a computer and War Games dialer it even wouldn't
be that terrible.  However, even that is not necessary.  Only TWO phone
calls should be sufficient.  Most answering machines do not care about wrong
access codes or noise around a correct one — they just ignore the extra
digits.  As long as the incoming stream contains the code somewhere you are
given access.  Since 256 in octal needs three digits, a carefully
constructed string of 258 digits will contain every possible combination
(for example, if the code is a triplet composed of just the numbers 1 and 2
then the string 1211122212 contains all eight triplets).  The normal
touch-tone phone generates a pulse 70 milliseconds long with inter-digit
spacing of the same (some phones continue generating the tone as long as the
digit is held down, but these figures are near the minimum — perhaps someone
more knowledgeable in the matter can provide us with the guaranteed minimum),
or 7.14 digits per second.  So I only need just over 36 seconds to try all
codes.  Since your machine might not be this patient, I could do it in two
18 second calls.  Of course, I can't dial this fast, but my modem can.
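
For illustration only (not part of William's note), here is a C sketch of one
way to build such a coverage string: the standard recursive construction of a
de Bruijn sequence over the eight octal digits with window length three.  It
covers all 512 octal triplets; the 258-digit figure above assumes a space of
256 codes.

    /* Build a de Bruijn sequence B(8,3): 512 octal digits in which every
       three-digit code appears exactly once (cyclically).  Appending the
       first two digits gives a linear dialing string covering all codes.
       Standard recursive construction; illustrative only. */
    #include <stdio.h>

    #define K 8                        /* alphabet: octal digits 0-7 */
    #define N 3                        /* code length */

    static int a[N + 1];
    static char seq[K * K * K + N];
    static int seqlen;

    static void db(int t, int p)
    {
        int j;

        if (t > N) {
            if (N % p == 0)
                for (j = 1; j <= p; j++)
                    seq[seqlen++] = (char)('0' + a[j]);
        } else {
            a[t] = a[t - p];
            db(t + 1, p);
            for (j = a[t - p] + 1; j < K; j++) {
                a[t] = j;
                db(t + 1, t);
            }
        }
    }

    int main(void)
    {
        db(1, 1);
        printf("%d digits, starting: %.32s...\n", seqlen, seq);
        return 0;
    }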

William Curtiss

    [And there is a large literature on polynomially generated shift-register
    sequences that have the desired property.  I would say that is a rather
    vulnerable design for use in sensitive applications, but perhaps adequate
    if you have no secrets to hide and would not be concerned with maliciously 
    deleted messages.  Note that denials of service — e.g., total saturation
    of our answering machine tape — can be achieved without access to the 
    security code.  PGN]


Fly by Light

Martyn Thomas <mct@praxis.UUCP>
Mon, 7 Nov 88 13:02:42 BST
Flight International (November 5th 1988) reports the successful maiden
flight on October 23rd of the Airship Industries Skyship 600-04 - which uses
the world's first full-authority fly-by-light flight control system.

The fly-by-light system, developed by GEC Avionics, uses fibre-optics, which
are highly resistant to electromagnetic interference and lightning strikes.

Martyn Thomas               !uunet!mcvax!ukc!praxis!mct


Decompiled viruses

Dave Pare <mr-frog@fxgrp.UUCP>
Fri, 4 Nov 88 17:22:33 PST
Last night a massive effort to decompile the "internet virus" was made
by half a dozen people working at the CSRG in Berkeley.  Most of the code
is complete now, and most of the guesses that people have made were right
on insofar as how it works, what it targets, and how it is distributed.

The author took precautions to protect the program from detection and
identification: cleaning up intermediate files, XOR-masking program strings,
forking repeatedly, declaring many procedures static and using the linker to
remove (presumably) suspicious-looking procedure names, naming the program
"sh", and so on.  This was definitely deliberate; quite a number of steps
were taken to hide the process.
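
As an illustration of one of those precautions (the key and the string below
are made up, not taken from the rogue), XOR masking keeps telltale strings
out of a casual scan of the binary:

    /* XOR string masking as an anti-detection measure; illustrative only. */
    #include <stdio.h>
    #include <string.h>

    static void xor_mask(char *s, size_t len, char key)
    {
        size_t i;

        for (i = 0; i < len; i++)
            s[i] ^= key;               /* the same call masks and unmasks */
    }

    int main(void)
    {
        char secret[] = "sendmail";    /* in a real case, stored masked */
        size_t len = strlen(secret);

        xor_mask(secret, len, 0x15);   /* mask: no longer greps cleanly */
        xor_mask(secret, len, 0x15);   /* unmask just before use */
        printf("%s\n", secret);
        return 0;
    }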

The program was also fairly sophisticated in concept.  It doesn't appear
that the primary *motivation* of the author was the overloading of the
target machine.  Several bugs were encountered during decompilation
(most notably a "bzero(foo, sizeof(foo))" where foo is a struct), which
may have accounted for the program's obnoxious and apparently unintentional
behavior.

In my opinion, had the author tested this program more completely,
it would have been quite a while longer before it was detected.  While
this incarnation of the program was a serious nuisance, a correctly-working
version would have been far more insidious.  It would have required a
very curious system manager to notice an "innocuous" daemon listening to
an unusual internet port number, or strange-looking messages in the
sendmail syslog.

When someone who really knows how to code finally writes one of these
things, we may never find out about it until weeks or months later.
Although this program doesn't modify existing programs on affected systems,
future ones might — heck, they may have already done so.

Dave Pare


Worms/viruses/moles/etc. and the risk of nuclear war

"Clifford Johnson" <GA.CJJ@Forsythe.Stanford.EDU>
Mon, 7 Nov 88 18:44:33 PST
>  From: Brad Templeton <brad%looking.uucp@RELAY.CS.NET>
>    People are talking as though there's some
>  surprising end-of-the-world potential in this event...
>  University systems are designed, and should be designed as
>  low-caution, high convenience systems...  The press
>  will want sensationalistic answers, but if you're talking to
>  them, try to steer them away from all the comments about
>  "War Games."

I was interviewed specifically on whether the worm illustrated a potential
cause of nuclear Armageddon.  (Bay Area TV Channel 7 (ABC), 11pm news Nov.4,
lead story.)  My answer was a qualified affirmative, and I think an unqualified
negative irresponsible.  Directly asked whether "War Games" was a good analogy,
I responded that things were generally headed that way but that the Vincennes
shootdown was a closer analogy to the present parlous state.  (The Pentagon
having concluded that no person was at fault for the mistaken shootdown, the
cause was by default computer-related error.)

In some circumstances, a computer virus/worm could be a determinative factor
precipitating a nuclear strike.  Nuclear command and control computerization
involves two functionally distinct sets of systems: (a) those for *executing* a
nuclear attack a la SIOP recipes; (b) those for *prompting* a strike by
informing the military (or political) leaders that an attack should be
launched, which includes notice of (predefined) preemptive and launch on
warning contingencies.

Execute command systems could be relatively closed; and hence are, or could be
managed so as to be, relatively secure.  Nevertheless, it gives pause that the
system requires that underground officers respond immediately if valid launch
codes are suddenly received through electronic communication channels.  Note
well that message traffic processed in the launch control capsules has greatly
increased in recent years.

Re the strategic and tactical warning systems, they are already based on
networked/workstation systems, i.e. the IDHS (Intelligence Data Handling
System), and DEFSMAC, which link the CIA, NSA, DIA, JCS, NORAD, and SIOP-CINCS.
These are of the same ilk as Arpanet/Milnet, and access is very wide, albeit
not public.  Intelligence-gathering/data-fusion by its function requires a
network open to diverse and disparate inputs, and so is vulnerable to
disruption/confusion by viruses/worms et alia.  While such systems may be
impervious to telenet attack, and while published "bugs" like the UNIX debug
option may be foreclosed in them, the potential for implanting subtle
bugs/traps in the software is plain, and that is but one potential source of
error.  Infiltration of software companies/military programming seems easy and
suggests *real* espionage possibilities that are frightening in their scope and
multiplicity.  Also frightening is that the apparent failure of nodes in the
tactical warning net is interpreted as notice that those nodes may have been
destroyed in a nuclear attack.  (NORAD and other testimony admits this.)  There
is manifestly the potential for accidental/malicious net malfunctions causing a
catastrophic nuclear panic.

As for the trends, despite first appearances, the "Star Wars" system would
greatly add to U.S. vulnerability rather than to security by resting U.S.
strategic execution (as well as warning) upon a huge network of systems, much
harder to secure than the present execution system.  The warning system also
becomes much more complex.  The funded National Test Bed is in essence the
development of such vulnerable networks for strategic warning and execution.

The pace of technology is not the *imperative* that the military claim must be
followed so as to sustain deterrence.  The probable futility of plugging holes
in vast computer networks is at least as vital a message as the message that
known holes can be effectively patched.  The fact that the military widely use
complex networks in nuclear planning means that a most urgent lesson is that we
must avoid critical reliance on them, not merely that we should carry on under
the placebo of plugging what holes are identified.


The Worm

Vince Manis <manis@grads.cs.ubc.ca>
Mon, 7 Nov 88 07:56:14 PST
Our site apparently didn't get hit, because our newly installed NSFnet router
has been so flaky that it has been unusable. Just goes to show, I guess.

I was struck by the fact that the biggest hole was the `debug' option in
sendmail. Since it has been well-known for about 15 years that allowing
anonymous remote execution is a tremendous danger, and since (I assume)
sendmail, in the standard distribution, comes with it disabled, one has to ask
why SA's were enabling it. I'm not entirely sure that all the blame should go
to the worm's author (putatively Mr Morris).

If the assumption in the previous paragraph is false, then perhaps UCB,
Sun, and Mt Xinu (among others) are at least morally culpable, too. 
