The RISKS Digest
Volume 20 Issue 32

Tuesday, 20th April 1999

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Airbus Autopilot Failure?
Chuck Weinstock
Another old-fashioned bug comes back to byte
Jeremy Epstein
Risks of running a PKI
Steve Bellovin
New paper on Simulating Cyber Attacks, Defenses, and Consequences
Fred Cohen
Re: Ghost trains
Peter Campbell Smith
Re: GUIDs and Melissa
David M. Chess
JDean
Nick Brown
Russ Cooper
Re: Mainframe viruses
David M. Chess
Otto Stolz
Re: Microsoft reschedules Memorial Day
Bernard Sufrin
Re: Overzealous applications
Mark Brader
Re: Overzealous criticism
Peter da Silva
Calendar problem with old Calvin and Hobbes comics strips
Michael Cook
AT&T PINs
e
Ameritech calling card ready to use!
Nathan Brindle
High-Integrity System Specification and Design book
Jonathan Bowen
Info on RISKS (comp.risks)

Airbus Autopilot Failure?

Chuck Weinstock <weinstock@sei.cmu.edu>
Mon, 19 Apr 1999 16:49:28 -0400
I imagine it is too soon to know for sure whether it was an autopilot
failure.  We've seen enough finger pointing in other incidents for us not to
believe the first reports without some additional evidence.  Chuck

  On 19 Apr 1999, an Air India Airbus 320 en route from Singapore to Bombay
  via New Delhi apparently had an autopilot failure at 27,000 feet,
  resulting in a dive that injured three crew members (two seriously) and an
  infant.  The pilot was able to regain control, and manually flew the jet
  to Bombay.  [Source: AFP, 19 Apr 1999; PGN-ed]


Another old-fashioned bug comes back to byte

"Epstein, Jeremy" <Jeremy_Epstein@NAI.com>
Mon, 19 Apr 1999 14:11:50 -0700
Wired reports in "'EBayla' Bug Strikes eBay" (see
http://www.wired.com/news/news/technology/story/19207.html) that eBay users
can enter an HTML description of the item being auctioned.  However, the
description provided by the seller can also include JavaScript, thus allowing
the seller to create a fairly simple web page that, when accessed by an
unsuspecting bidder, captures the bidder's eBay username and password and
sends them to the seller (or anyone else).

This is a new version of an old bug: if you allow users to specify input
that can be used by others, make sure there's enough filtering that it can't
be harmful.
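
  [For illustration, a minimal sketch of the kind of filtering that lesson
  calls for, written in Python; the escape-everything approach shown here is
  an assumption for the example, not eBay's actual fix:

     import html

     def render_item_description(user_supplied):
         # Escape <, >, &, and quotes so that seller-supplied markup
         # (including <script> blocks) is displayed as text in the
         # bidder's browser rather than executed there.
         return html.escape(user_supplied, quote=True)

     print(render_item_description('<script>steal(document.cookie)</script>'))
     # &lt;script&gt;steal(document.cookie)&lt;/script&gt;

  Escaping everything is the bluntest remedy; allowing only a whitelisted
  subset of harmless HTML tags is the other common approach.]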

Perhaps the scariest part was the reaction from eBay, as reported by Wired:
"EBay's senior director of corporate communications characterized the hole
as an 'occasional byproduct' of the service's user-focused design."  eBay
downplayed the severity of the exploit, noting that "If somebody had indeed
used your password as well as your username and started bidding on a bunch
of items, you'd be the first person to be contacted by eBay through e-mail,
and we'd be able to backtrack on that to make sure that we could take care
of that situation."

Gee thanks.  After it happens, you'll let me know I just bought a velvet
Elvis and a set of matching pink lawn flamingos :-)


Risks of running a PKI

Steve Bellovin <smb@research.att.com>
Mon, 19 Apr 1999 10:42:02 -0400
There are many problems involved in setting up a robust public-key
infrastructure.  An oft-overlooked issue is bookkeeping — just keeping
track of what has been done.  Today, even the most sophisticated companies
can't get this right.

The other day, my daughter clicked on "Windows 98 Update" on her machine.
Up popped the usual request to accept an ActiveX control.  When she clicked
on "Microsoft", a nice red "X" showed up — the certificate had expired.
Now, this has to be one of the most important non-root certificates in
existence, since it controls updates to an extremely popular platform.  And
it is owned by one of the most technologically sophisticated companies in
existence.  And it
was mishandled.

This isn't the first occurrence of this problem — I reported similar
incidents to Microsoft at least 9 months ago.  And the problem isn't that
the code had been tampered with, simply that some certificate in the chain
had expired.  But the administrative issues — keeping track of all of the
certificates — are formidable.
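
  [The bookkeeping itself need not be elaborate.  Here is a minimal sketch in
  Python of the kind of expiry tracking that would have flagged this in
  advance; the inventory contents and the 30-day warning horizon are
  assumptions for the example, not anything X.509 or Microsoft prescribes:

     from datetime import datetime, timedelta

     # Hypothetical inventory: certificate name -> notAfter date (UTC)
     inventory = {
         "code-signing certificate":    datetime(1999, 4, 16),
         "intermediate CA certificate": datetime(2000, 1, 31),
     }

     def expiring(inventory, now, horizon=timedelta(days=30)):
         # Report anything already expired or due to expire within the horizon.
         return [(name, not_after) for name, not_after in inventory.items()
                 if not_after <= now + horizon]

     for name, when in expiring(inventory, datetime(1999, 4, 1)):
         print("renew soon:", name, "expires", when.date())

  The hard part, as noted above, is administrative: making sure every
  certificate in every chain actually appears in such an inventory.]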

There are also human factors issues here.  The initial pop-up box doesn't
indicate that there may be a problem.  The secondary pop-up doesn't give the
user any guidance on how to evaluate the seriousness of the issue: should
the control be accepted or not?  And there was timezone and notational
confusion.  This occurred late on April 16, the day the certificate expired.
If the certificate expired at midnight UTC, it was indeed invalid; if it
expired at midnight Eastern time, where we were, or Pacific time, where
Microsoft is, the checking code is buggy.  There was also the usual question
of 12am/12pm vs. noon or midnight.
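
  [A small illustration of that ambiguity, in Python; the 21:00 Eastern
  timestamp is an assumed example, not the actual time of the incident:

     from datetime import datetime, timezone, timedelta

     EDT = timezone(timedelta(hours=-4))              # US Eastern daylight time
     now = datetime(1999, 4, 16, 21, 0, tzinfo=EDT)   # "late on April 16"

     end_of_apr16_utc = datetime(1999, 4, 17, 0, 0, tzinfo=timezone.utc)
     end_of_apr16_edt = datetime(1999, 4, 17, 0, 0, tzinfo=EDT)

     print(now >= end_of_apr16_utc)  # True:  expired, if "April 16" meant UTC
     print(now >= end_of_apr16_edt)  # False: still valid, if it meant local midnight

  Whether the red "X" was correct thus depends entirely on which midnight the
  expiry date was meant to name.]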

My point here is not to pick on Microsoft.  Rather, I'm asserting that the
real difficulties in utilizing a PKI aren't solved by things like X.509 or
SPKI; rather, there are serious generic issues involving keeping track of
certificates and presenting the information to users in a useful form.


New paper on Simulating Cyber Attacks, Defenses, and Consequences

Fred Cohen <fc@all.net>
Tue, 20 Apr 1999 10:33:14 -0700 (PDT)
This morning, I posted a new paper to the all.net Web site that I think you
might be very interested in.  It will soon appear in print as an invited
paper in "Computers and Security", it will be briefly covered in one of my
short presentations at the IEEE Oakland conference, and it will be given in
its entirety at the upcoming CSI conference.  The paper is titled:

    "Simulating Cyber Attacks, Defenses, and Consequences"

and it can be found at the all.net web site under:

    Feature Articles: InfoSec Baseline Studies

I think that this is one of the most interesting papers I have written in
the last two years, and I ask that those of you who are interested in such
things take the time to read and comment on it.  I thank you for your time.

FC
  Fred Cohen & Associates: http://all.net - fc@all.net - tel/fax:925-454-0171
[Fred's disclaimer omitted, distinguishing his FC&A and Sandia roles.  PGN]


Re: Ghost trains (PGN, RISKS-20.31)

"Campbell Smith, Peter" <CampbellP@logica.com>
Tue, 20 Apr 1999 14:23:13 +0100
PGN reported that recently a ghost train appeared in the BART (San Francisco
metro) computers and he was surprised that 567 such incidents occurred in a
two-year period.

Very many rail systems, BART included, use track circuits to detect trains -
ie they rely on the train itself making an electrical connection between the
two running rails.  There are other detection methods used instead of or in
conjunction with track circuits, but track circuits have been used for
decades, are acceptably reliable in hostile environments and are generally
accepted as safe.  If they fail, they are much more likely to falsely report
a train than falsely report the track as vacant.

One snag with track circuits is that they can only localise a train to
somewhere within a stretch of track.  To track trains reliably in a busy
rail system like BART it is necessary to apply some additional heuristics,
starting with the first law of train control - trains cannot be created or
destroyed.  This can still leave ambiguities: for example if three trains
occupy adjacent track sections and then a gap opens up, is it behind or in
front of the middle train?  BART and other systems have a hierarchy of
automatic algorithms followed by manual procedures and operating rules to
overcome these sorts of problems, but however this is done there is always
the possibility of ghost trains - ie track sections which appear to be
occupied but which cannot be associated with a known train.
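
  [A toy illustration of that reconciliation step, in Python; the section
  layout and the set-difference rule are invented for the example and are
  not BART's actual algorithm:

     # Sections the track circuits report as occupied, and the sections where
     # the control system believes its known trains currently are.
     occupied_sections = {"A3", "A4", "A7"}
     known_train_positions = {"train 101": "A3", "train 102": "A4"}

     accounted_for = set(known_train_positions.values())
     ghosts = occupied_sections - accounted_for   # occupied, but no known train
     lost   = accounted_for - occupied_sections   # known train, section reads clear

     print("ghost sections:", ghosts)  # {'A7'}: faulty circuit, or a mistracked train
     print("lost trains in:", lost)    # set(): every known train matches a section

  Everything interesting happens when the "ghost sections" set is non-empty
  and the heuristics and operating rules described above have to decide what
  it means.]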

There are two main causes of ghost trains: failure to track a real train
correctly, in which case the controller (human or computer) needs some
independent corroboration that the train is in the section; and faulty track
circuits, in which case the controller similarly needs corroboration that the
section is indeed vacant.  All rail systems, BART included, have operating
procedures which allow trains to continue running, subject to certain rules,
when track circuits fail.

Computerised control systems, however, have their limits, and if a series of
track circuits fails, especially in a congested area such as the one
reported, the best option is to switch to manual control.  The reason for
this is that the only safe way to operate automatically would be to allow
only one train at a time into the 'blind' section and wait for it to
reappear at the other side before letting the next one in.  Under manual
control, however, trains can be made to go very slowly, to be driven with the
driver in radio contact with control, or to operate with visual signals or
even flagmen.

So in short, it isn't very surprising, at least to those who design train
control systems, that ghost trains occur so often.  They have been foreseen
by those who built the system, and it has been designed to operate safely,
if not perhaps very rapidly, in their presence.

Peter Campbell Smith


Re: GUIDs and Melissa (PGN, RISKS-20.31)

David M. Chess <chess@us.ibm.com>
Mon, 19 Apr 1999 11:34:08 -0400
> [Incidentally, there is confusion among the various news reports of
> Melissa as to exactly when and how the GUID entered into the
> identification of the perpetrator.  Can anyone who really knows
> enlighten us?  PGN]

Well, I won't claim to really really know, but I may know as much as anyone
else who's free to reveal.  From what I can tell, the GUID in the original
LIST.DOC was the same as the GUID in a document used to distribute an
earlier virus (by "Vcodin ES"), and the same as the GUID in an even earlier
document used to distribute an even earlier virus by someone with an even
sillier "handle".  Since the GUID isn't updated when an existing module is
modified, it seems likely that this provides evidence only that virus
writers tend to build on earlier viruses; something everyone already knew.
It could provide evidence against the author of Melissa only rather
indirectly; if for instance a number of documents were found containing
early drafts of Melissa, or showing the progressive modification of the
earlier VicodinES virus into Melissa, the continuity of GUIDs would help
round out the story.  But it is likely not the case that the GUID directly
implicates the author; the GUID of the original Melissa LIST.DOC is probably
not the same as the GUID placed in brand-new documents created on the virus
author's system.

DC

  [The reason for my original note that Dave has reproduced (above) was my
  suspicion that media reportage falsely implied that the GUID explicitly
  implicated the person arrested.  But remember, digital evidence should
  always be considered suspect, especially with sloppy system security and
  sloppy administration, but even in the best of circumstances.  PGN]


Re: GUIDs and Melissa (Graham, RISKS-20.31)

<jdean1@nomvs.lsumc.edu>
19 Apr 99 06:36:46 -0500
> If you open the Melissa document in BINARY mode (not in Word!), you can see
> the following string:
>
> PID_GUID {572858EA-36DD-11D2-885F-004033E0078E}

Assuming this conforms to Internet Draft
<draft-leach-uuids-guids-0.txt>, the date portion decodes to
8/18/1998 20:52:22.510 UTC — long before Melissa appeared.
(ftp://colbleep.ocs.lsumc.edu/pub/utility/win95/decoguid.zip
contains my program to decode GUIDs).
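
  [For readers who want to check the arithmetic, a minimal sketch of that
  decoding in Python; it follows the version-1 UUID layout in the cited
  draft, and is not the DECOGUID program mentioned above:

     from datetime import datetime, timedelta

     # {572858EA-36DD-11D2-885F-004033E0078E}
     time_low, time_mid, time_hi_and_version = 0x572858EA, 0x36DD, 0x11D2

     # 60-bit timestamp: 100-nanosecond ticks since the Gregorian reform,
     # 1582-10-15 00:00 UTC.
     ticks = ((time_hi_and_version & 0x0FFF) << 48) | (time_mid << 32) | time_low
     print(datetime(1582, 10, 15) + timedelta(microseconds=ticks // 10))
     # 1998-08-18 20:52:22.510001 (UTC), matching the date quoted above

  The remaining fields hold the clock sequence and the node (NIC) address
  discussed in the following items.]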


Re: GUIDs and Melissa (Graham, RISKS-20.31)

BROWN Nick <Nick.BROWN@coe.fr>
Mon, 19 Apr 1999 12:52:25 +0200
The NIC address that is placed in the GUID is not obtained directly from
the network card's hardware.  It is obtained from the driver.  And on NT at
least (and I imagine on 95/98), you can change this.  On NT it's at
(assuming a NIC of type <card>):
  HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\<card>1\Parameters
Add a string named "NetworkAddress" containing 12 hex digits, reboot, and
away you go.

Of course, this is hugely powerful.  I get my colleague's NIC address with
ping and arp.  Then I disconnect my PC from the network (to avoid
conflicts), change my NIC address, create my Word document saying "the boss
sucks", and leave the document lying around... (I'm assuming that if I just
edit the document with Notepad to change the GUID, something might break in
a checksum or something, but even that isn't clear).  When the network
administrator is called in to help in the witch-hunt, he'll find a smoking
gun leading to my colleague's PC... in the office which he keeps locked at
all times...

If you have a NIC but you're not currently on a LAN, I recommend changing
your NIC address to 4675636B4D24, but you can use this mechanism to send any
6-character ASCII message you want...

Nick Brown, Strasbourg, France


RE: GUIDs and Melissa (Graham, RISKS-20.31)

Russ <Russ.Cooper@rc.on.ca>
Tue, 20 Apr 1999 01:27:53 -0400
Robert Graham made some reasonable observations about GUIDs in the 20.31
digest; I would add...

1. If the machine does not have an Ethernet adapter, but say, has a
modem instead (or no adapter), the MAC address component of the
machine's base GUID is randomly generated at OS install time. It then
appears to be re-used by anything that generates a GUID. There is no
indication that it attempts to exclude adapter vendor assigned address
space when generating this random value, meaning it may well duplicate a
*real* MAC address.

2. GUIDs are based on the owner of the document. So in the case of virus
databases and such, they would have to know they have a source document,
not some document which had been infected, to know they had the
original. In the case of Melissa, because it spread so quickly, the
parties (Richard Smith of Phar Lap et al.) believed they had the original
from the newsgroups it first appeared in. While it's possible, there's no
way to prove it.

3. GUIDs don't identify the person who keyed the document, only the
machine on which it was keyed. Anyone fearing repercussions from the
circumstantial evidence merely has to do a fresh install of their OS to
replace their GUID. In today's computing environment, it actually makes
more sense to simply take the drive out to the road and run over it with
your car a few times and throw it off a bridge, then for $150 replace it
with a new one (and during the new OS install generate an entirely new
GUID). Cutting and pasting the contents of one document (from one
machine) to another (from another OS install or different machine)
allows the contents to obtain a new GUID (or the GUID of someone you
want to implicate if you have any other document from them).

While the press picked up on the GUID issue quickly (largely due to
recent discussions about it and Richard's penchant for talking with the
media), fact is if it were ever introduced and accepted as any sort of
evidence we'd all be in for some bad times.

I remember the furor that was caused when an email, purported to be from
the former Premier of Ontario Bob Rae, was put into the hands of the
media. He was blasted for its contents, when in fact its author was
merely identified by a FROM: address in an SMTP message. GUIDs in an MS
document represent about the same level of assurance as to who the
author was.

The obvious risk is that we need to ensure we're relying on reliable
information as evidence of anything, as I'm sure Fred Cohen would
agree...;-]

Russ - NTBugtraq moderator  http://ntbugtraq.ntadvice.com


Re: Mainframe viruses (Kabay, RISKS-20.31)

<chess@us.ibm.com>
Mon, 19 Apr 1999 11:50:54 -0400
>The Christmas Tree "virus" affected VM systems some 10+ years ago.  (IIRC
>it really affected PROFS.)  I think it is  reasonably comparable to Melissa.

In fact the CHRISTMA EXEC worm (back in 1987) was a rather generic VM/CMS mail
exploiter; it didn't require PROFS (one popular CMS mail client), nor as I
recall did it specifically target the PROFS address book when looking for
addresses to spam itself to.  The similarities to Melissa are many: victims
would receive a CHRISTMA EXEC in the mail from someone they knew (someone whose
address book, roughly, they were in), and have to perform an explicit action
(receive and run it) to have the worm actually execute.

>  Microsoft engineers decided to dispense with a security kernel ...
>If a computer has an integrated word processor in its mail software, then it
>very well might not take supervisor privileges for a word processor macro to
>send mail.  I think this was the source of the PROFS vulnerability.

This is a very good point.  Of course it does not require supervisor
privileges either to send e-mail or to alter one's own documents.  In
general, viruses do not need to violate access controls or obtain root
privilege or alter the operating system in order to spread.  I think in the
original piece Mr. Kabay overemphasizes the lack of security in DOS-heritage
systems when thinking about the causes of virus vulnerabilities.  In fact,
until the rise of network connectivity, PC security was very very strong:
you, sitting at your PC over there, would have a much *harder* time altering
files on my PC over here than the typical multi-user-system user would have
altering files belonging to someone else on the same system.  Since viruses
spread via *authorized* communication channels (programs and documents that
I intentionally share, or email that I am permitted to send), preventing
programs from exercising unauthorized features isn't terribly relevant.
Certainly there are things that current office systems could do better,
security-wise, such as recognizing that a program or document is from an
untrusted source and limiting its function appropriately, but it's not
realistic to imply that if only Microsoft had put a security kernel into
Windows, viruses would never have become a problem.  Whenever people use
general-purpose programmable systems, and intentionally share programs and
active content, viruses will continue to be a threat we must guard against;
simply applying traditional access-control security to a system does not
render viruses on that system either impossible or harmless.

DC


CHRISTMA EXEC (was: Mainframe virus; RISKS 20.30 and 20.31)

Otto Stolz <Otto.Stolz@uni-konstanz.de>
Tue, 20 Apr 1999 10:38:22 -0600
Some corrections are due (unlike the recent contributions, this is not
"IIRC": I've been there, seen it, and even kept records).

On Fri, 9 Apr 1999 21:27:19 -0400 (EDT), hes@unity.ncsu.edu said:
> The Christmas Tree "virus" affected VM systems some 10+ years ago.
> (IIRC it really affected PROFS.)

It propagated only on VM/CMS systems, but it affected the whole EARN/Bitnet/
Netnorth and IBM's internal network Vnet, due to network overloading, from
1987-12-09 through 1987-12-14. PROFS was not needed (cf. infra).

> I think it is  reasonably comparable to Melissa.

Melissa propagates in two ways, both as a Word Macro Virus, and by sending
mail through a particular e-mail user agent. In contrast, CHRISTMA EXEC was
not a virus, and exploited CMS's SENDFILE command (a predecessor of FTP);
PROFS, a mail service, was not used, at all.  This type of code was then
dubbed "Rabbit".

On Fri, 16 Apr 1999 19:01:25 -0400, jt5555@epix.net (Julian Thomas) said:
> IIRC the program was called CHRISTMA EXEC that came in a message that
> said something to the effect of "Don't worry, just run this".

Not quite so. It came as a bare EXEC file (akin to a .Bat file in the PC
world, or to a shell script, in the Unix world), i. e. as a source file
anybody could read (and everybody was supposed to understand :-) ).

It contained, after several screens full of rather boring REXX statements of
the sort "display so-and-so many blanks and then so-and-so many
asterisks...", a screen-sized comment block saying "Browsing a dull program
like this one is no fun; just type CHRISTMAS to run it" or something to this
effect. Actually, the addressee had to explicitly RECEIVE the file (i.e.,
move it from the system spool area to a local disk) before typing CHRISTMA
would work. The blanks and asterisks would, of course, generate an ASCII-art
image of a Christmas tree, on the user's screen.
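
  [For readers who never saw it, a rough impression of what those display
  statements produced; a few lines of Python standing in for the original
  screens of REXX, with the tree shape invented for the example:

     WIDTH = 11
     for stars in range(1, WIDTH + 1, 2):
         # so-and-so many blanks, then so-and-so many asterisks
         print(" " * ((WIDTH - stars) // 2) + "*" * stars)
     print(" " * (WIDTH // 2) + "*")   # the trunk

  The address harvesting described in the next paragraph was done by the
  same EXEC.]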

> It then proceeded to send itself to everyone in the recipient's address
> list.  It wasn't exclusive to PROFS - also affected users of NOTE.

It harvested the addresses from two files: an audit trail of former file
transfers (including e-mail sent and received), and the user's personal
address book. Both were used by the CMS commands NOTE and SENDFILE; I do not
know whether PROFS used these same files.

The incident was discussed in RISKS: issues 5.72, 5.74, and 5.79 carry only
folklore and open questions; in issue 5.80, item 1, Ross Patterson
comprehensively discusses CHRISTMA EXEC; issues 5.81 through 6.1 carry minor
additions, and even more folklore (viz. a press item meant as a deterrent
example of journalists not understanding the issues they are reporting
about); issues 6.1 through 6.7 discuss whether source-code availability
implies any security.

Five years later, CHRISTMA EXEC was discussed in Virus-L, issues 5-176
through 5-206 (and I had to correct, in issue 5-178, the same errors now
re-told in RISKS); to get the archive files of this latter discussion, send
e-mail to listproc@Lehigh.EDU
containing the following lines:
  search Virus-L -all "CHRISTMA EXEC"
  get virus-l/Digests 5-176
  get virus-l/Digests 5-178
(and so on; use the results of the search command mailed to you).

CHRISTMA EXEC "inspired" several epigones [*], who later produced the
Rama, ZebraTell, Game2, and other VM rabbits [*].

Otto Stolz

  [* Must have been epigonadotrophic.  PGN]


Re: Microsoft reschedules Memorial Day

<bernard.sufrin@comlab.ox.ac.uk>
Mon, 19 Apr 1999 11:56:36 +0100
Outlook 98's view of the dates of at least two of the UK bank holidays
this year is also wrong by a week.


Re: Overzealous applications (Cargill, Risks-20.31)

Mark Brader <msbrader@interlog.com>
Mon, 19 Apr 1999 03:23:33 -0400 (EDT)
> Thus if you enter 2/4/99, it will be interpreted as 2 April 99 in the
> UK, 4 Feb 99 in the US, etc.  Fine so far, but what if you enter an
> invalid date, such as 29/2/02?  ... in fact, Access quietly accepts this
> as a valid date - 29 Feb 2002!!  The trouble seems to be that Access
> tries too zealously to find a 'correct' interpretation, so if dd/mm/yy
> (or mm/dd/yy) doesn't work, then it tries yy/mm/dd.

If that's the reason, I hope it interprets it as 2 Feb 2029, not 29 Feb 2002!!
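
  [A naive format cascade behaves exactly that way.  Here is a sketch in
  Python of the suspected fallback; this is an illustration, not Access's
  actual parsing code:

     from datetime import datetime

     def parse_date(text):
         for fmt in ("%d/%m/%y", "%m/%d/%y", "%y/%m/%d"):  # UK, US, then yy/mm/dd
             try:
                 return datetime.strptime(text, fmt).date()
             except ValueError:
                 continue
         raise ValueError("not a date: " + text)

     print(parse_date("29/2/02"))  # 2029-02-02: rejected as dd/mm/yy and as
                                   # mm/dd/yy, silently accepted as yy/mm/dd

  Which surprising answer you get depends only on the order in which the
  formats are tried.]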

Mark Brader, Toronto, msbrader@interlog.com


Re: Overzealous criticism (Re: Cargill, RISKS-20.31)

Peter da Silva <peter@baileynm.com>
19 Apr 1999 16:36:32 GMT
>Fine so far, but what if you enter an invalid date, such as 29/2/02?

I assume you mean something like 2/2/29, because I would expect 29/2/02 to
be accepted as 29 Feb 2002. Or did it come out as 2 Feb 1929?

> The trouble seems to be that Access tries too zealously to find a
> 'correct' interpretation, so if dd/mm/yy (or mm/dd/yy) doesn't work, then
> it tries yy/mm/dd.

Which is the normal date format in Japan.

> Of course, the problem goes away if you require users to enter four-digit
> years, but it is still unwarrantedly aggressive behaviour.

Much as I like to lob bombs at Microsoft, I can't legitimately fault them
for this. It's the social issues behind the Year 2000 problem that caught
them up.

Peter da Silva <peter@baileynm.com>


Calendar problem with old Calvin and Hobbes comics strips

Michael Cook <mlcook@cca.rockwell.com>
Tue, 20 Apr 1999 12:56:21 -0500
The "Calvin and Hobbes" comic strip web site has a calendar problem!

Universal Press Syndicate is re-running old C&H comic strips.  Each day they
run the strip from that date 11 years ago.

Another example of calendar problems in general, and a pre-Y2K Y2K problem!

The note on the C&H web pages:

  Calvin is back! Each day we are reissuing an original Calvin and Hobbes
  strip, 11 years after it was first published. Start your adventure with
  the first Calvin and Hobbes Strip, published November 18, 1985. Please
  note: 1988 was a leap year, therefore, Sunday comics will appear on
  Saturday through February 2000.

http://www.calvinandhobbes.com/strips/88/04/ch880417.html
  [change date to wherever you want to start. ^^^^^^
  But the 2-digit year field is not a problem here!  PGN]
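
  [The weekday shift is easy to verify in Python, using the strip date from
  the URL above:

     from datetime import date

     d1988 = date(1988, 4, 17)   # the strip being re-run
     d1999 = date(1999, 4, 17)   # the day it reappears, 11 years later
     print(d1988.strftime("%A"), "->", d1999.strftime("%A"))  # Sunday -> Saturday
     print((d1999 - d1988).days % 7)   # 6: the weekday slips back by one

  Eleven years spanning only two leap days (1992 and 1996) make 4017 days,
  one short of a whole number of weeks, hence Sunday strips landing on
  Saturdays until 29 Feb 2000 restores the alignment.]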


AT&T PINs (Re: RISKS-20.31)

e <erisks@gmx.net>
Mon, 19 Apr 1999 10:57:45 -0400
AT&T does this all the time. My so-called PIN is actually printed onto the
card as the last four digits of the long number.

I can only assume that telcos don't really care if "cards" (all you need is
the n-digit number that's printed on it) are used by others, as long as
someone pays the bill. And I admit that I never worried too much about it
because my employer pays the bill; I would not accept this method for
something I was paying for personally.


Ameritech calling card ready to use! (Re: RISKS-20.31)

Nathan Brindle <nbrindle@netdirect.net>
Mon, 19 Apr 1999 17:16:24 -0500
I received a new calling card from Ameritech in today's mail.  Conveniently
enough they placed two little peel-off stickers--one with my telephone
number, the other with my PIN--right next to the calling card, on the
folder it came in.  I'm sure they consider this to be less RISKy than
actually printing the numbers on the card for me.  Awfully nice of them,
don't you think? :)

This is the kind of thing that makes me glad my mailbox has a lock :)

Nathan


High-Integrity System Specification and Design book

Jonathan Bowen <jpb@csres.cs.reading.ac.uk>
Tue, 20 Apr 1999 14:13:48 +0100 (BST)
The following book may be of interest to RISKS readers. It provides an
introduction to computer-based system specification and design, paying
particular attention to structured and formal methods, method integration,
concurrency and safety-critical systems. The book consists of both original
material and reprints of classic papers in the field of system specification
and design.

    J.P. Bowen and M.G. Hinchey. High-Integrity System Specification
    and Design. Springer-Verlag, London, FACIT series, April 1999.
    xviii+701 pages, 65 UK pounds. ISBN 3-540-76226-4.

Paper authors include Brooks, Craigen, Harel, Hoare, Lamport and Leveson.

For further on-line information, see:
    URL: http://www.fmse.cs.reading.ac.uk/hissd/

This includes a full list of papers and an electronic copy of the
preface and table of contents.

Jonathan Bowen,  The University of Reading,  Dept of Computer Science
Whiteknights,  PO Box 225,  Reading,  Berks RG6 6AY,  England
Tel: +44-118-931-6544 (direct) -8611 (enquiries)  Fax: +44-118-975-1994
Email: J.P.Bowen@reading.ac.uk URL: http://www.cs.rdg.ac.uk/people/jpb/
