The RISKS Digest
Volume 7 Issue 71

Sunday, 6th November 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Send us your Arpanet Virus War Stories
Cliff Stoll
Suspect in Virus Case
Brian M. Clapper
Internet Virus
Mark W. Eichin
RISKS of getting opinions from semi-biased sources
Brad Templeton
PGN
Worm/virus mutations
David A. Honig
PGN
Worm sending messages to ernie.berkeley.edu?
Jacob Gore
Re: "UNIX" Worm/virus
Peter da Silva
Comments on vote counting
"Bill Stewart and/or Shelley Rosenbaum"
Re: A320 update
Henry Spencer
Info on RISKS (comp.risks)

Send us your Arpanet Virus War Stories

Cliff Stoll <cliff@Csa4.LBL.Gov>
Sun, 6 Nov 88 04:29:26 PST
 COLLECTING ARPANET VIRUS WAR STORIES

I'm collecting information about the Nov 3 Arpanet virus, trying to determine:
     > How many sites were infected
     > How many were not
     > How quickly it spread

SO:  If you were infected, please send me a note describing your experiences. 
Please include:
     > Where are you?  What type of computers?
     > What times were stamped on the /usr/tmp/x files?
     > Which of your computers were infected?  All of them?

Please send your anecdotes & stories, such as:  
     >  What time did you discover it?
     >  What tipped you off?
     >  How did you and your colleagues respond?
     >  What would you do differently?
     >  Did you call anyone?  Or did anyone call you?
     >  Where would you turn for information next time?
     >  When did you finally eradicate it?  
     >  Any weird wrinkles or strange effects?
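
One way to gather the time stamps asked about above is a small C sketch that
lists each /usr/tmp entry whose name begins with "x", together with its
modification time.  (The x* pattern is taken from the question above; exact
names varied from site to site, and newer systems may use /var/tmp instead.)

  #include <dirent.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <time.h>

  /* List every /usr/tmp entry whose name begins with "x", with its
     last-modification time, as one way of collecting the time stamps
     asked about above.                                                */
  int main(void)
  {
      const char *dir = "/usr/tmp";    /* /var/tmp on many newer systems */
      DIR *dp = opendir(dir);
      struct dirent *de;
      struct stat st;
      char path[1024];

      if (dp == NULL) {
          perror(dir);
          return 1;
      }
      while ((de = readdir(dp)) != NULL) {
          if (de->d_name[0] != 'x')            /* only the x* droppings */
              continue;
          snprintf(path, sizeof path, "%s/%s", dir, de->d_name);
          if (stat(path, &st) == 0)
              printf("%-24s %s", de->d_name, ctime(&st.st_mtime));
      }
      closedir(dp);
      return 0;
  }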

I'm interested in hearing from you even if you were not infected!

Please pass this message on to others:
I would rather have multiple responses from a site than none.

Thank you very much for your time & trouble.
In return, I'll mail summaries to everyone that contributes.
If you want this, please include your address.

Cliff Stoll                Harvard/Smithsonian Center for Astrophysics
617/495-7147               60 Garden Street,  Cambridge, MA 02138
Cliff@cfa200.harvard.edu   ( or on bitnet,  Cliff@lbl )   [Nov 5, '88]

   [Cliff is presumably referring to the relevant UNIX systems only.  I doubt
   he is interested in responses saying that your TOPS-20 or your PRIME 
   or whatever was not hit, since the worm/virus was specific to certain DEC
   and Sun versions.  (Thanks to Cliff for offering this service, although
   it further distracts him from his attempted pursuits of astronomy.)  PGN]


Suspect in Virus Case

Brian M. Clapper <clapper@NADC.ARPA>
Sun, 6 Nov 88 14:55:20 EST
Reprinted (without permission) from the Philadelphia Inquirer, Sunday,
November 6, 1988:

(From Inquirer Wire Services)
ITHACA, N.Y. - A Cornell University graduate student whose father is a top
government computer-security expert is suspected of creating the "virus" that
slowed thousands of computers nationwide, school officials said yesterday.
   The Ivy League university announced that it was investigating the computer
files of 23-year-old Robert T. Morris, Jr., as experts across the nation
assessed the unauthorized program that was injected Wednesday into a military
and university system, closing it for 24 hours.  The virus slowed an estimated
6,000 computers by replicating itself and taking up memory space, but it is not
believed to have destroyed any data.
   M. Stuart Lynn, Cornell vice president for information technologies, said
yesterday that Morris' files appeared to contain passwords giving him
unauthorized access to computers at Cornell and Stanford Universities.
   "We also have discovered that Morris' account contains a list of
passwords substantially similar to those found in the virus," he said at a
news conference.
   Although Morris "had passwords he certainly was not entitled to," Lynn
stressed, "we cannot conclude from the existence of those files that he was
responsible."
   FBI spokesman Lane Betts said the agency was investigating whether any
federal laws were violated.
   Morris, a first-year student in a doctoral computer-science program, has
a reputation as an expert computer hacker and is skilled enough to have written
the rogue program, Cornell instructor Dexter Kozen said.

   [ ... omitted details concerning Morris' unavailability for comment ... ]

   Reached at his home yesterday in Arnold, Md., Robert T. Morris, Sr., chief
scientist at the National Computer Security Center in Bethesda, Md., would not
say where his son was or comment on the case.
   The elder Morris has written widely on the security of the Unix operating
system, the target of the virus program.  He is widely known for writing a
program to decipher passwords, which give users access to computers.

   [ The remainder of the article basically restates information which has
     already been reported. ]


Internet Virus

"Mark W. Eichin" <eichin@ATHENA.MIT.EDU>
Fri, 4 Nov 88 21:52:58 EST
A team at MIT and a team at UCB worked from Thursday evening through Friday
morning, both examining the Virus in isolation and reverse-engineering it into
C code that could reproduce the binary we had in hand.

MIT held a press conference at 12 noon on Friday, 4 November; about 20 minutes
earlier, we had determined that, with the modules we had received from Berkeley
and the work we had done at MIT, we had complete knowledge of the inner
workings of the Virus, permitting us to declare that there was no code in the
virus designed to harm files.

The Berkeley group was led by Keith Bostic (I don't have details on his
group); the MIT group was a collection of programmers from various
organizations including Project Athena, LCS, SIPB, and Telecom. Stan Zanarotti
and I led a group of around 6 in the reverse engineering effort, while others
worked on using Netwatch on an isolated testbed machine.

The Virus uses three possible paths to transmit itself from one
machine to another:
    1) finger (via a bug in /etc/fingerd which turned out to be
difficult for the Virus to exploit)
    2) sendmail (via the `debug' command, which should be turned
off in a production server, but apparently was turned on by default in
the binary BSD distribution)
    3) password guessing and shell/rexec/rsh/telnet logins.

Whichever method it used, it attempted to run a /bin/sh on the remote
machine and then feed it a set of commands that caused it to build a
new program and suck over an unlinked VAX or Sun image.  It then linked
this with the system's local libraries and executed it.
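
Of the three paths, the sendmail `debug' path is the easiest for a site to
check on its own mailer.  A minimal C sketch follows, assuming the server
speaks SMTP on port 25 and that a vulnerable sendmail acknowledges `debug'
with a 2xx reply while a patched one refuses it; please probe only machines
you administer.

  #include <netdb.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Ask an SMTP server whether it still accepts the "debug" command.
     A reply beginning with '2' suggests the vulnerable debug support
     is enabled; a patched sendmail should refuse the command.        */
  int main(int argc, char **argv)
  {
      struct addrinfo hints, *res;
      char buf[512];
      int s, n;

      if (argc != 2) {
          fprintf(stderr, "usage: %s hostname\n", argv[0]);
          return 2;
      }
      memset(&hints, 0, sizeof hints);
      hints.ai_socktype = SOCK_STREAM;
      if (getaddrinfo(argv[1], "smtp", &hints, &res) != 0) {
          fprintf(stderr, "cannot resolve %s\n", argv[1]);
          return 2;
      }
      s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0) {
          perror("connect");
          return 2;
      }
      n = read(s, buf, sizeof buf - 1);       /* 220 greeting banner */
      write(s, "debug\r\n", 7);
      n = read(s, buf, sizeof buf - 1);       /* reply to "debug"    */
      if (n > 0) {
          buf[n] = '\0';
          printf("%s replied: %s", argv[1], buf);
          printf(buf[0] == '2' ? "=> debug appears to be ENABLED\n"
                               : "=> debug refused (good)\n");
      }
      write(s, "quit\r\n", 6);
      close(s);
      freeaddrinfo(res);
      return 0;
  }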

Once the virus was running on the new site, it chose a variety of
paths to find new hosts to propagate to:
    1) routing tables
    2) interface tables
    3) user .forward files
    4) user .rhosts files
    5) /etc/hosts.equiv

Note that it did *not* make any use of the inherent security problems
commonly associated with .rhosts files; it merely used them as a source
of hostnames.
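
A minimal sketch of that last point: reading /etc/hosts.equiv or a .rhosts
file purely as a list of candidate hostnames, which is all the program did
with them.  (The file names come from the list above; the parsing here is
deliberately simple-minded.)

  #include <stdio.h>

  /* Print the first field (a hostname) of every non-comment line in a
     hosts.equiv- or .rhosts-style file named on the command line.      */
  static void hostnames(const char *file)
  {
      FILE *fp = fopen(file, "r");
      char line[256], host[256];

      if (fp == NULL)
          return;                      /* silently skip missing files */
      while (fgets(line, sizeof line, fp) != NULL) {
          if (line[0] == '#' || sscanf(line, "%255s", host) != 1)
              continue;
          printf("%s\n", host);        /* hostname only; user field ignored */
      }
      fclose(fp);
  }

  int main(int argc, char **argv)
  {
      int i;

      if (argc < 2) {
          hostnames("/etc/hosts.equiv");
          return 0;
      }
      for (i = 1; i < argc; i++)
          hostnames(argv[i]);
      return 0;
  }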

[I'll cut this short now, I need the sleep...]

Project Athena was not vulnerable to the finger attack at all; one or
two private machines were vulnerable to the `debug' attack, but at
least one was an IBM RT/PC (which the Virus could not `live' on).  What did
hit several Athena machines was the use of password guessing; this is
really more of a Human Security problem than a Computer Security
problem. Other MIT machines were hit by various combinations of the
several attacks.
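
Since password guessing is what actually hit those machines, here is a
minimal audit sketch in the same spirit: it flags accounts whose password is
empty, equal to the login name, or the login name reversed.  The guess list
is a small assumption (the real dictionary was far larger), it needs the
password hashes to be readable (not true under shadow passwords), and many
systems require linking with -lcrypt.

  #include <crypt.h>   /* crypt(); on some systems it lives in <unistd.h> */
  #include <pwd.h>
  #include <stdio.h>
  #include <string.h>

  /* Flag accounts whose password is empty, the login name, or the login
     name reversed -- the sort of trivial guesses involved here.          */
  static int matches(const char *guess, const char *hash)
  {
      const char *h = crypt(guess, hash);      /* hash doubles as salt */
      return h != NULL && strcmp(h, hash) == 0;
  }

  int main(void)
  {
      struct passwd *pw;
      char rev[64];
      int i, len;

      while ((pw = getpwent()) != NULL) {
          if (pw->pw_passwd[0] == '\0') {
              printf("%s: EMPTY password\n", pw->pw_name);
              continue;
          }
          if (matches(pw->pw_name, pw->pw_passwd))
              printf("%s: password is the login name\n", pw->pw_name);

          len = (int)strlen(pw->pw_name);
          if (len > 63)
              len = 63;
          for (i = 0; i < len; i++)            /* reverse the login name */
              rev[i] = pw->pw_name[len - 1 - i];
          rev[len] = '\0';
          if (matches(rev, pw->pw_passwd))
              printf("%s: password is the login name reversed\n",
                     pw->pw_name);
      }
      endpwent();
      return 0;
  }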

There were several bugs in the Virus itself, which Keith Bostic
suggested posting patches for. It also seems clear that the original
design did not intend for it to hog resources as it did, but merely to
propagate quietly, which would have certainly been interesting.

Very little effort was made to actually hide the behavior of the code
(it even had a reasonably large symbol table, making it easier to
identify subroutines.) It *did* attempt to hide at a higher level, for
example by calling itself "sh" and destroying its argument list (to
make it appear in the process table as ``some random shell script'').
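
A minimal sketch of that particular disguise, assuming a system where ps
reports the argument strings out of the process's own memory (true of the
4.3BSD systems involved, and of Linux today); overwriting them in place is
all it takes.

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Disguise this process in the process table by wiping its argument
     strings and renaming argv[0] to "sh", then pause so the result can
     be inspected with ps from another terminal.                        */
  int main(int argc, char **argv)
  {
      size_t room = strlen(argv[0]);   /* space available in argv[0]    */
      int i;

      for (i = 0; i < argc; i++)       /* destroy the argument list     */
          memset(argv[i], 0, strlen(argv[i]));
      strncpy(argv[0], "sh", room);    /* what the worm pretended to be */

      printf("pid %d now claims to be \"sh\"; try: ps -p %d\n",
             (int)getpid(), (int)getpid());
      sleep(60);
      return 0;
  }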

I will try and post more details as I have time to write them up.

                Mark Eichin
            <eichin@athena.mit.edu>
        SIPB Member & Project Athena ``Watchmaker''


RISKS of getting opinions from semi-biased sources

Brad Templeton <brad%looking.uucp@RELAY.CS.NET>
Sun Nov 6 15:50:07 1988
It's rare for the net to make the evening news, but I'm noting some
interesting things.

Most of the media are jumping to "computer security experts" for comment
on the matter.  No offense, folks, but people who make their living from
consulting on computer security are bound to perceive (and expound on) this
as worse than it actually is.   People are talking as though there's some
surprising end-of-the-world potential in this event, when it really comes
as no surprise to any RISKS reader, Internet or UNIX user.

University systems are designed, and should be designed, as low-caution,
high-convenience systems.  Such flaws *will* exist, and events like this only
make us wiser.  Remember the original UNIX documentation, which detailed "how
to bring UNIX to a halt if you're joe user" on page 1?  Funny, but it never
happens.
                                       [  "What, Never?"  "No, Never."
                                          "What, Never?"  "Hardly ever."
                                                (Thanks to W.S.Gilbert)  PGN  ]

The press will want sensationalistic answers, but if you're talking to them,
try to steer them away from all the comments about "War Games."  And clear
up the use of the word "hacker" now that things are in the public eye!

Brad Templeton, Looking Glass Software Ltd.  —  Waterloo, Ontario 519/884-7473


Semi-biased sources

Peter G. Neumann <Neumann@KL.SRI.COM>
Sun, 6 Nov 88 22:01:17 PST
This episode has given system administrators and users the opportunity to
recall that there are many lurking vulnerabilities.  But to assume that
university computing should be relatively wide open would be a serious mistake.
Unethical and other abuses are not uncommon.  There is plenty of proprietary
research, and there are serious integrity problems.  NASA contemplates
permitting a professor sitting at his workstation to up-link via network to the
space station and control his experiments in real-time.  So can anyone else who
happens to subvert the local system or its communications.  Intruders might
even be able to bring down the space station itself by penetration techniques.
Also, students writing theses would not like to see their files deleted and
their backups made unretrievable by premeditated contamination resulting from a
long-standing but not yet detected Trojan horse.  Note that many problems could
arise accidentally rather than maliciously.  In some cases accidental damage
can be just as severe as intentional damage.  (See my comments on the 1980
ARPANET fiasco, below.)  It seems that the valuable lessons that should be
derived from the worm/virus would be totally lost if we continue with business
as usual (although presumably at least the flaws already exposed will have been
patched, now that they are suddenly more widely known).  Furthermore,
significant improvements in security are in the offing — including various
considerably more secure versions of UNIX.  (And amazingly, performance and
ease of use need not be seriously compromised!)  I think that a university
computing administrator would be strung up after an installation known *a
priori* to be badly flawed was attacked, especially if much better systems or
better operational procedures had been available and commonly used elsewhere.
Perhaps I am flogging a straw herring in mid-stream, but in the light of what
is known about the ubiquity of security vulnerabilities, it seems vastly too
dangerous for university folks to run with their heads in the sand.  [It is not
ostrich of the imagination to contemplate future attacks that are really
malicious.  And yes, they do happen and have happened.]  So, despite the
worm/virus having caused considerable pain, it may yet prove to have been a
useful exercise — if anyone is listening and thinking.


Worm/virus mutations

"David A. Honig" <honig@BONNIE.ICS.UCI.EDU>
Sun, 06 Nov 88 14:24:09 -0800
The resource-hungry arpanet worm was supposedly a mistake: it was not supposed
to use resources as fast as it did.  This was a design (programmer's) error, I
think.

I'm interested in how difficult it would have been for the intended mild worm
to "mutate" into the system-stopping one.  In particular, what would a mild
version look like, and what kind of error in reproduction would transform the
mild version into the resource-hungry one we actually saw?

Would simply removing a line of text from one of the scripts have worked?
Would a single undetected bit error in one character have made a difference?
(What if that error commented-out an entire line of a file?) I expect most
mutations would be fatal or sterilizing at least, but if a net-infection were
to persist uncured for a while, it is possible (and as time progresses,
probable) that mutated strains would occur, and some fraction of these would be
nastier than their ancestors.
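
To make the single-bit question concrete: every character has exactly eight
one-bit neighbors, and some of them change a line's meaning entirely.  A
small sketch that prints the neighbors of any character given on the command
line:

  #include <ctype.h>
  #include <stdio.h>

  /* For each character given on the command line, print the eight
     characters reachable by flipping exactly one bit, to show how
     small a "mutation" can be.                                     */
  int main(int argc, char **argv)
  {
      int i, bit;

      for (i = 1; i < argc; i++) {
          unsigned char c = (unsigned char)argv[i][0];
          printf("'%c' (0x%02x):", c, (unsigned)c);
          for (bit = 0; bit < 8; bit++) {
              unsigned char m = (unsigned char)(c ^ (1u << bit));
              if (isprint(m))
                  printf("  '%c'", m);
              else
                  printf("  0x%02x", (unsigned)m);
          }
          printf("\n");
      }
      return 0;
  }

Several printable characters ('+', '!', '"', 'c', '3') are a single flip away
from '#', which at the start of a script line would comment out everything
that follows it.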


Re: Worm/virus mutations

Peter G. Neumann <Neumann@KL.SRI.COM>
Sun, 6 Nov 88 21:27:39 PST
In general, the difference between killer and benign could be one bit.  The
ARPANET crash of 27 Oct 1980 resulted from bits accidentally being dropped in
the time stamp of one status word.  The resulting multiplicity of three
versions (two corrupted) of the same status word (with different time stamps)
broke the garbage collection algorithm, and degraded the ARPANET to ZERO.  (See
ACM SIGSOFT Software Engineering Notes Jan 1981 for discussion of the data
retrovirus.)
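
The broken comparison is easy to reconstruct.  A minimal sketch, assuming
the modulo-64 version stamps and the half-range "newer than" rule described
in Rosen's SEN account; the values 8, 40, and 44 are the ones usually cited
and are meant as illustration rather than gospel.

  #include <stdio.h>

  /* One good status update plus two bit-dropped copies of it, compared
     with a half-range "newer than" rule over a 64-value stamp space.   */
  static int newer(int a, int b)               /* is a "newer" than b?  */
  {
      if (a > b)
          return a - b <= 32;
      if (a < b)
          return b - a > 32;
      return 0;
  }

  int main(void)
  {
      int v[3] = { 44, 40, 8 };  /* 44 = intact; 40 and 8 = dropped bits */
      int i, j;

      for (i = 0; i < 3; i++)
          for (j = 0; j < 3; j++)
              if (i != j && newer(v[i], v[j]))
                  printf("%2d is \"newer\" than %2d\n", v[i], v[j]);

      /* Prints 44 > 40, 40 > 8, and 8 > 44: a cycle, so no version can
         ever be discarded and all three keep being reflooded.           */
      return 0;
  }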

The UNIX worm/virus would have been relatively harmless (and much less
detectable) had its creator not attempted to make it survivable even after
detection and removal.  The choice of a parameter invoking a one-in-ten
reinfection was what made it degrade each attacked system.  
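
A back-of-the-envelope sketch of why that parameter mattered: if the first
copy on a host stays, and every later arrival that notices it nevertheless
stays with probability 1/N, the expected load after K further infection
attempts is roughly 1 + K/N copies.  (The 1/10 figure is the one given
above; published analyses quote slightly different values.)

  #include <stdio.h>
  #include <stdlib.h>

  /* Rough model of the reinfection parameter: the first copy on a host
     stays, and each later arrival that notices an existing copy stays
     anyway with probability 1/N instead of exiting.                    */
  int main(void)
  {
      const int N = 10;          /* the "one in ten" figure cited above */
      const int TRIALS = 1000;   /* Monte Carlo repetitions             */
      int attempts, t, k;

      srand(1);
      for (attempts = 0; attempts <= 500; attempts += 100) {
          long total = 0;
          for (t = 0; t < TRIALS; t++) {
              int copies = 1;                  /* the first copy        */
              for (k = 0; k < attempts; k++)
                  if (rand() % N == 0)         /* 1-in-N: stay resident */
                      copies++;
              total += copies;
          }
          printf("%4d later attempts -> about %.1f resident copies\n",
                 attempts, (double)total / TRIALS);
      }
      return 0;
  }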


Worm sending messages to ernie.berkeley.edu?

Jacob Gore <gore@eecs.nwu.edu>
Sun, 6 Nov 88 16:36:52 CST
From The New York Times (National), Sunday, Nov. 6:

  Mr. Morris learned of his replication error through a monitoring mechnanism
  [sic] he had built into his program.  Each second each virus broadcast its
  location to a computer named Ernie at the University of California at
  Berkeley, said a computer researcher who has analyzed the virus.

Is this true?  If so, what account were the logs sent to?

Jacob Gore              Gore@EECS.NWU.Edu
Northwestern Univ., EECS Dept.      {oddjob,gargoyle,att}!nucsrl!gore


Re: "UNIX" Worm/virus (RISKS DIGEST 7.70)

Peter da Silva <peter@sugar.uu.net>
5 Nov 88 19:31:58 CST (Sat)

I realise that for most of the people in the Internet UNIX == 4.2, but people
should be more careful about calling these bugs in the UNIX operating system.
While there may be bugs in any operating system, this virus didn't exploit
any UNIX bugs.

    First, the actual bug is implicit in a mailer program, sendmail. This
isn't "The UNIX operating system", and it's not even found on most systems.

    Secondly, the other "bugs" are security holes deliberately left open
to make network operations more convenient when dealing with other trusted
machines. Again, this isn't a bug in UNIX.

    Finally, a channel like this can't be used to infect non-BSD systems
without the debug version of sendmail, unless individual users choose to set
up "shell deamons" to watch their mailboxes. This falls under case 2 above.

Referring to this as a UNIX virus is going to give naive users the idea that
UNIX is particularly susceptible to penetration over a network. Our management
has expressed concern, for example, that our own Usenet feed could be used to
infect us.

Yes, it could... given a sufficiently subtle trojan horse hidden in, say, a
comp.sources distribution. But that's not a *UNIX* or *Network* problem...
we're more susceptible to people bringing in diskettes.

The last thing we need now is the UNIX equivalent of the "Audi sudden
acceleration" panic.

        Peter da Silva  `-_-'  peter@sugar.uu.net


Comments on vote counting

<wcs@alice.att.com>
Sat, 5 Nov 88 23:24:33 EST
Several people have commented on the risks of computerized vote tabulation,
but there's another RISK here - inaccurate and fraudulent reporting.

The National Election Service, which is the joint election reporting 
group funded 20% each by CBS, NBC, ABC, AP, and UPI, has announced that
they will not report any third-party results for the presidential
elections this year.  Ron Paul, the Libertarian Party candidate, asked
how they would report a 45%-45%-10% vote (if he gets 10% of the vote in
Alaska, an LP stronghold) - they replied "We'll call that 50-50".
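
For concreteness, dropping the third party and renormalizing does exactly
what the reply implies: 45/(45+45) is reported as 50 percent for each major
party, and the 10 percent simply disappears.  A trivial sketch of the
arithmetic:

  #include <stdio.h>

  /* What a 45-45-10 result looks like once the third party is dropped
     and the remaining two shares are renormalized to 100 percent.      */
  int main(void)
  {
      double a = 45.0, b = 45.0, c = 10.0;       /* actual shares       */
      double rep_a = 100.0 * a / (a + b);        /* third party ignored */
      double rep_b = 100.0 * b / (a + b);

      printf("actual:   %.0f%% / %.0f%% / %.0f%%\n", a, b, c);
      printf("reported: %.0f%% / %.0f%%  (third party omitted)\n",
             rep_a, rep_b);
      return 0;
  }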

This sort of thing has a major effect on voters' perceptions of the
elections - if they never hear about alternatives to the status-quo
parties, they're unlikely to vote for them in the future.  This could
get especially interesting if Lenora Fulani's New Alliance Party
succeeds in their lawsuit to keep the Democratic and Republican parties
off the Indiana ballot (where they filed late) - they may get all the
electoral votes unless there is substantial write-in voting.

# Bill Stewart, att!ho95c!wcs, AT&T Bell Labs Holmdel NJ 1-201-949-0705
# and/or
# Shelley Rosenbaum, att!ho95c!slr, 1-201-949-3615   ho95c.att.com


Re: A320 update

<attcan!utzoo!henry@uunet.UU.NET>
Sun, 6 Nov 88 02:08:09 EST
>Henry Spencer's recent article on the A320's first six months in
>service states that the fly-by-wire system has "behaved perfectly."
>It should be noted, however, that the article he was referring
>to clearly pointed out that there were failures of the primary 
>flight guidance computer, which were rectified by backup systems.

Hardware failures, dealt with by backup systems, happen even in non-
computerized aircraft.  With substantial frequency, in fact.  This did
not seem worth mentioning.  As nearly as I can tell from that article,
and the later ones, there have been no major *software* problems in the
flight-control software... which is what everyone was worried about.
Hardware failures are to be expected.

>I find Ziegler's rationale for the failures of the A320 somewhat 
>disturbing.  With only a handful of airplanes in service, for any
>significant percentage of in-flight or on-ground failures to occur,
>and then say it should be compared to the massive fleets of existing
>aircraft, is to obfuscate the issue.

How so?  Note that he is citing *percentages of flights* delayed, not
absolute counts; fleet size is irrelevant except insofar as statistics over
a small fleet are less precise than over a large fleet.  His comments
about media attention are more dubious in this regard, since occasional
failures in a small fleet are indeed more significant than the same
failures-per-day rate would be in a large fleet, but even there I think
he's got a point:  if ten 747s fail per day, nobody cares, but if an
A320 fails once every two weeks, it's a scandal.
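
The distinction is easy to quantify with made-up numbers (the fleet sizes
below are illustrative assumptions, not real figures): ten failures a day
spread over a few hundred 747s is a higher per-aircraft rate than one A320
failure every two weeks spread over a couple of dozen aircraft, yet only the
latter makes headlines.  A sketch of the comparison:

  #include <stdio.h>

  /* Failures per aircraft-day for two hypothetical fleets.  The fleet
     sizes are illustrative assumptions, not actual 1988 figures.       */
  int main(void)
  {
      double b747_fleet = 400.0, b747_failures_per_day = 10.0;
      double a320_fleet = 20.0,  a320_failures_per_day = 1.0 / 14.0;

      printf("747:  %.4f failures per aircraft-day\n",
             b747_failures_per_day / b747_fleet);
      printf("A320: %.4f failures per aircraft-day\n",
             a320_failures_per_day / a320_fleet);
      return 0;
  }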

>His confidence in the A320's backup electrical systems is also rather
>odd, considering the airplane's susceptibility to transient controls,
>and his company's failure to provide even a mediocre cabin lighting
>control system.

Notice that the transient problems are (as far as I've heard) all in
non-critical support systems, and the cabin-lighting-control problem is with
a subcontractor, presumably not the same people who did the main electrical
system.  Agreed that Airbus is responsible in the end, but the implication
that these problems spill over into more critical systems seems unjustified.

Henry Spencer at U of Toronto Zoology               uunet!attcan!utzoo!henry
