The RISKS Digest
Volume 19 Issue 50

Sunday, 14th December 1997

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Programmable defibrillator bug
Steve Bellovin
Vandal posts ransom note on Yahoo
Edupage
Computerized test failure
Steve Bellovin
Insanely insulting spelling checker
Martin Bonner
On Weak RSA-keys produced from Pretty Good Privacy
Jean-Jacques Quisquater
Retraction on weak RSA-keys produced from PGP
Jean-Jacques Quisquater
Computer crash impacts Washington DC Metro
Epstein Family
Risks of new Motorola system
Matthew Healy
Re: Potential software nightmare for ISS
[name withheld]
Mars Pathfinder priority inversion
Bob Rahe
Automated translation from AltaVista
Seth David Schoen
Re: Beware of HTML Mail
Martin Minow
Software Fault Injection
Gary McGraw
7th USENIX Security Symposium - Conference Program
Jackson Dodd
Info on RISKS (comp.risks)

Programmable defibrillator bug

Steve Bellovin <smb@research.att.com>
Wed, 10 Dec 1997 11:20:32 -0500
The AP reports that a particular brand of implantable defibrillator has a
programming bug — it can cause the patient's heart to race.  A fix has been
developed, and doctors will be able to use a small radio transmitter to load
in the new code.  Of course, that raises other issues, such as how the code
is authenticated, how easily a more distant transmitter could load new code
into someone else's body, etc....
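
The story gives no detail on the update mechanism, so the following is only a
hypothetical sketch of the kind of check one would hope for: the device
accepts new code only if it verifies a digital signature made with the
vendor's private key.  The library and function names here are illustrative
assumptions, not anything reported about the actual device.

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding
  from cryptography.hazmat.primitives.serialization import load_pem_public_key

  def firmware_is_authentic(image, signature, vendor_pubkey_pem):
      """Accept an over-the-air code image only if it carries a valid
      RSA-PSS/SHA-256 signature from the vendor's private key."""
      pubkey = load_pem_public_key(vendor_pubkey_pem)
      try:
          pubkey.verify(signature, image,
                        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                                    salt_length=padding.PSS.MAX_LENGTH),
                        hashes.SHA256())
          return True
      except InvalidSignature:
          return False

Even with such a check, replaying an old signed (buggy) image, and the range
question raised above, remain open issues.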


Vandal posts ransom note on Yahoo

Edupage Editors <educom@educom.unc.edu>
Thu, 11 Dec 1997 11:42:52 -0500
A network vandal broke into the Yahoo Web site for several minutes Monday
night to post a note instructing the government to release the prisoner
Kevin Mitnick, who is serving time for having used phones and computers to
break into corporate, government and university computer systems.  Although
the vandal claimed to have implanted a "logic bomb/worm" on the Yahoo site,
no virus was found, and the security breach was discovered and patched
immediately.  [*The New York Times* Cybertimes, 7 Dec 1997; Edupage, 11 Dec
1997]


Computerized test failure

Steve Bellovin <smb@research.att.com>
Thu, 11 Dec 1997 18:30:35 -0500
The Graduate Management Admission Test (GMAT) is now entirely computerized
-- candidates take it via an interactive system.  The trouble is, on
Thursday it broke — "The screens would freeze, the tests wouldn't launch or
start".  The organization that administers that test is trying to expand its
hours so that people can reschedule.

The AP article did note that the computerized system does allow for much
more flexibility in scheduling, and that the paper-based system has its own
outage potentials — sites have been closed by fires and by the noise from
marching bands at football games...


Insanely insulting spelling checker

Martin Bonner <martin.bonner@pigroup.co.uk>
Thu, 11 Dec 1997 09:30:11 -0000
No, this is not another "<name> becomes <rude-word>" story...

Cambridge City Council (in England) wrote to a number of residents.
Being careful people, they spell-checked the letter before sending it.
The problem was that the spell checker couldn't see anything wrong with
a letter that began "Dear Sir or Madman...".

The RISKS?  Assuming that once a computer has checked it, it must be OK.

(Reported in the *Cambridge Evening News*, 10 December 1997.)
(Note for non-native English speakers: the conventional start to such a
letter would be "Dear Sir or Madam...".)
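
The failure mode is easy to reproduce: a word-by-word spelling checker only
asks whether each word appears in its dictionary, not whether it is the word
the writer intended.  A toy sketch (purely illustrative, nothing to do with
the council's actual software):

  # "Madman" is a perfectly good dictionary word, so nothing is flagged.
  DICTIONARY = {"dear", "sir", "or", "madam", "madman"}

  def misspelled_words(text):
      """Return the words not found in the dictionary (case-insensitive)."""
      words = [w.strip(".,!?") for w in text.split()]
      return [w for w in words if w and w.lower() not in DICTIONARY]

  print(misspelled_words("Dear Sir or Madman"))   # [] -- the letter passes
  print(misspelled_words("Dear Sir or Madamm"))   # ['Madamm'] -- only this is caught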

Martin Bonner, Pi Technology Ltd, Milton Hall, Church Lane, Milton,
Cambridge, CB4 6AB  +44 (0)1223 203894  martin.bonner@pigroup.co.uk


On Weak RSA-keys produced from Pretty Good Privacy

jjq <jjq@dice.ucl.ac.be>
Wed, 10 Dec 1997 08:31:46 +0100
is the title of a paper presented at ICICS '97 (Beijing, Nov. 11-14, 1997)
by Y. Sakai, K. Sakurai and I. Ishizuka, pp. 314-324, LNCS 1334,
Springer-Verlag.  (Don't forget to read the paper just before this one :-).

From the abstract:
"... Our obtained results show that bad primes are generated in PGP and
induced weak keys can be easily breakable via P + 1 factoring method.
For instance, in the case of RSA-key with 512-bit, we could attack
1. 0.3% users' systems with only 15 hours single PC-computations
   (very weak keys!!),
2. 2% users' systems with 50 days single PC-computations (weak keys?)."

Notice that they only claim a possible problem for key lengths around 512 bits.

I'm sure that many other commercial implementations have similar
deficiencies.

My questions:

1. Any public evaluation of this paper?  Other estimates?  This paper
   was accepted at a good conference (a good panel of cryptographers --
   not a proof :-) and there was no comment (contradiction) after the
   talk.  It seems there is some controversy about these results.  Where?
   I think it is very good for any product to be submitted to
   public scrutiny, especially if we are speaking about our (your) privacy.

2. Why do we trust some systems and implementations?
   How can we improve our methodology of tests?
   And how can we convince somebody that we indeed used a good
   generation method for our keys?
   In the case of encryption there is really a general problem:
   you want me to use YOUR (maybe insecure) public key to encrypt MY
   private message, so please convince me FIRST that you generated the
   key correctly.  Any convincing solution?
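
One partial answer to question 2, for this particular class of weakness at
least, is that the key-generation code itself can reject candidate primes p
for which p-1 or p+1 is smooth (built entirely from small prime factors),
since exactly those primes fall to the P-1 and P+1 factoring methods.  A
minimal sketch, purely illustrative (this is not PGP's code, and the
smoothness bound is an arbitrary choice):

  def is_smooth(n, bound=1 << 20):
      """True if every prime factor of n is <= bound (slow trial division)."""
      d = 2
      while d * d <= n and d <= bound:
          while n % d == 0:
              n //= d
          d += 1
      # n is now 1, a single prime, or a product of primes larger than bound
      return n <= bound

  def candidate_prime_ok(p):
      """Accept p only if neither p-1 nor p+1 is smooth."""
      return not is_smooth(p - 1) and not is_smooth(p + 1)

Of course, such a test addresses only this one family of weak keys; it says
nothing about, for example, the quality of the random-number source, which is
usually the bigger worry.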

Jean-Jacques Quisquater, http://www.dice.ucl.ac.be/crypto


Retraction on weak RSA-keys produced from PGP

jjq <jjq@dice.ucl.ac.be>
Sun, 14 Dec 1997 16:01:18 +0100
The main result of this paper is not correct at all!
See the retraction by the main author in sci.crypt.research.

A correct estimate is given by Bob Silverman in a report
accessible at RSA Labs (www.rsa.com/rsalabs):
"The requirement for strong primes in RSA encryption" (May 1997).
It is a very interesting paper (see the remarks about key generation for EMV).

In a few words, the table on p. 4 means:

- suppose one million users each choose a 512-bit RSA key
  (two factors of 256 bits, chosen randomly); then, if each user
  spends 50,000 hours of serial computation (it is not entirely clear
  what Bob means by hours) trying to break his own key using the (P-1)
  algorithm, ONLY one key is likely to be broken;

- suppose one billion users (China!) choose RSA keys by the
  same method; then, using 500 hours of serial computation per key,
  ONLY 7 keys are likely to be broken.

It is implicit from these results that using more than 2 factors for RSA
is acceptable only if moduli of more than 512 bits are used:

- suppose one million users choose a 512-bit RSA key with 3
  factors; then, using 50,000 hours per key, around 1000 keys
  are likely to be broken (using the Canfield, Erdos, Pomerance
  approximation for the number of smooth numbers);

- BUT suppose one million users choose a 1024-bit RSA key with
  4 factors; then, using 50,000 hours of computation per key, no
  key at all is likely to be broken.

Note that this computing power would be better spent factoring some chosen
modulus (here the broken keys are obtained at random); however, that requires
cooperation between the users.
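
For readers unfamiliar with it, the (P-1) method Silverman's estimates refer
to is Pollard's p-1 algorithm, which recovers a prime factor p of a modulus n
whenever p-1 is smooth.  A minimal textbook-style sketch in Python
(illustrative only; real attacks use far more efficient implementations and
carefully chosen bounds):

  from math import gcd

  def pollard_p_minus_1(n, bound=100000):
      """Return a non-trivial factor of n, or None if the bound is too small."""
      a = 2
      for k in range(2, bound):
          a = pow(a, k, n)          # after this step, a = 2^(k!) mod n
          d = gcd(a - 1, n)
          if 1 < d < n:
              return d              # success: the order of 2 mod p divides k!
      return None

  # p = 1009 has p-1 = 1008 = 2^4 * 3^2 * 7, which is very smooth,
  # so the factor falls out almost immediately.
  print(pollard_p_minus_1(1009 * 10007))   # -> 1009

Primes p for which p-1 has a large prime factor are immune to this particular
method, which is the point of the "strong primes" discussion in Silverman's
report.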

In fact, it is possible to imagine a new LOTTERY (no cooperation between
users needed: maybe a new kind of challenge for RSA, Inc. :-):
- publish many moduli to be factored, at a given level of security;
- each user randomly chooses, using some algorithm, the modulus he
  wants to factor;
- after a given time, the winners are those who produce the first
  factorizations.

Jean-Jacques Quisquater,
http://www.dice.ucl.ac.be/crypto


Computer crash impacts Washington DC Metro

Epstein Family <jepstein@mail1.mnsinc.com>
Sat, 13 Dec 1997 13:48:57 -0500
According to the 17 Dec 1997 *Washington Post*, a computer failure caused
20-minute delays on Metro's Red Line.  It goes on to say "The problem
occurred when workers in Metro's downtown central control room tried to add
an accessory to the main computer that monitors trains' positions.  The
computer crashed and came back on line only when the accessory was detached,
Metro officials said."  No indication what the "accessory" was or why it
caused a crash.

There was no indication that there were any safety-related issues associated
with the computer failure.


Risks of new Motorola system

Matthew D. Healy <Matthew.Healy@yale.edu>
Thu, 11 Dec 1997 14:17:02 -0500
Front page of {USA Today} for Wed, 10 December 1997:

Pager Lets You Locate Your Car, Unlock And Start It

According to the story, Motorola has a new system that can use paging
systems to:

   o unlock a car for somebody who locked their keys in the car
   o locate a car in a large parking lot
   o remotely disable a car so it cannot be started

According to the story, finance companies are especially keen on the idea --
they'd be able to disable the cars of customers who fail to make their loan
payments.

I don't think I need to spell out the potential problems for RISKS readers.
I've had enough fun over the years convincing this or that creditor that
their records were in error; having my car disabled while I was fighting
their collection department could really be lots of fun...

Matthew.Healy@yale.edu           http://ycmi.med.yale.edu/~healy/


Re: Potential software nightmare for ISS (Gross, RISKS-19.49)

<[identity withheld by request]>
Wed, 10 Dec 1997 14:14:23 -0800 (PST)
This is a note on Philip Gross' well-justified concern over the
International Space Station software. The *Aviation Week* article he cites
concludes with a telling quote: "'Most people would not have given us a
snowball's chance in Hell to finally pull this thing off, but now we are
actually headed toward the pad to launch it,' [Randy] Brinkley [NASA's ISS
program manager] said."

This strikes me as reminiscent of the narrow definition of "success" that
preceded the Challenger explosion. As Richard Feynman stated, NASA was
playing " ... a kind of Russian roulette .... [The shuttle] flies [with
O-ring erosion] and nothing happens. Then it is suggested, therefore, that
the risk is no longer so high for the next flights." Feynman was stunned by
NASA's ability to take a nonlinear event — the erosion of a material by a
hot gas — and translate it into binary terms, into a "linear rule of
thumb," as James Gleick put it in his text on Feynman, GENIUS.

The point here is that "going to the launch pad" is not equal to success,
because getting the asset on orbit is not the end.  Structural integrity
cannot overcome weakness in 3.5M LOC, especially when that much code means
that the astronauts themselves are there to do experiments, not to "fly" the
ISS. We are watching the establishment of an autonomous unit in space with
the humans serving as back-up to the back-up to the back-up.  Structural
integrity AND bad code is equal to disaster.

I would concur with Gross's concern that a few months of testing is not
enough, and that NASA is testing its materials far more than its code -- which
is surprising considering the rigor that NASA's contractor brings to its
shuttle software: each of the last three releases (420K LOC apiece) had one
(1) error, and the last eleven releases had a total of 17 errors.  I wonder
whether the 3.5M LOC for the ISS will have only a handful of errors in its
releases?


Mars Pathfinder priority inversion (Jones, RISKS-19.49)

Bob Rahe <bob@hobbes.dtcc.edu>
Wed, 10 Dec 1997 10:30:01 EST
In RISKS-19.49, Mike Jones <mbj@MICROSOFT.com> writes a fascinating account
of "What really happened on Mars Rover Pathfinder".  It's just that kind of
behind-the-scenes articles that make RISKS so great.  But, near the end he
talks about the 3 authors from CMU who had written a paper in 1990 that he
says had first identified the priority inversion problem and proposed a
solution.

Not to take anything away from those authors, but that exact type of problem
was addressed in at least one mainframe operating system way back in the
early 1970s.  The Burroughs MCP (Master Control Program) that ran (and still
runs, as the Unisys A-Series MCP) their mainframe systems had a locking
scheme that could produce exactly that kind of problem: a lock procured by a
low-priority task, which then gets pre-empted by a medium-priority task,
could lock out a high-priority task.  Their solution at the time (and still,
the last time I looked) was to bump the priority of any task procuring a
global system lock such as the one mentioned in the article.  Simply put, if
priorities ran from 0 (low) to 99 (high), procuring a lock under their
algorithm would add 100 to the priority of the locking program.  When the
lock was released, the priority would be dropped back by the same amount.
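
In modern terms this is essentially a priority-ceiling approach.  A toy
sketch of the rule as described (hypothetical Python, not MCP code, and
ignoring the queueing of waiting tasks):

  class Task:
      def __init__(self, name, priority):
          self.name = name
          self.priority = priority      # 0 (low) .. 99 (high)

  class BoostingLock:
      BOOST = 100                       # larger than any normal priority

      def __init__(self):
          self.holder = None

      def procure(self, task):
          assert self.holder is None, "sketch ignores waiters"
          self.holder = task
          task.priority += self.BOOST   # holder now outranks every normal task

      def release(self):
          self.holder.priority -= self.BOOST
          self.holder = None

  # A low-priority task holding the lock can no longer be pre-empted by a
  # medium-priority task, so a high-priority task waiting on the lock is not
  # indefinitely delayed.
  low = Task("data-collection", 5)
  lock = BoostingLock()
  lock.procure(low)
  print(low.priority)    # 105 -- runs ahead of any task with priority <= 99
  lock.release()
  print(low.priority)    # back to 5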

While not as elegant a solution, in my opinion, it was certainly adequate
for the problem, given that these were not 'real-time' systems, and it has
endured.

Another example of there apparently being nothing new in software as well as
under the sun?

Bob Rahe, Delaware Tech&Comm Coll., Computer Center, Dover, Delaware
Internet: bob@hobbes.dtcc.edu


Automated translation from AltaVista

Seth David Schoen <schoen@uclink4.Berkeley.EDU>
Thu, 11 Dec 1997 20:59:48 -0800
DEC's AltaVista has integrated a very impressive automated translation
system with their search engine.  You'll be given the option to translate
pages when you search with AltaVista.

I tried translating documents from English to Spanish and back, and several
similar experiments.  Amazingly, although the grammar comes out very poorly,
the sense of the documents is almost always intelligible.

The risk is presented in AltaVista's own press release: "As this technology
matures, it may become unnecessary to offer Web sites in multiple languages."
(http://www.altavista.digital.com/av/content/pr120997.htm).  So risks lie in:

(1) Trusting machine translation as reliable, and relying on it for more than
it's worth -- as an adequate means of reaching intended audiences.  One can
accidentally lose substantial and significant shades of meaning.  (Try "web
site" from English to Portuguese to English; you get "leather-strap place".)

(2) Marginalizing less-common languages.  Many web sites of international
organizations are presently offered in some uncommon languages; if those
organizations decide that this translation service or a successor is
"adequate", they may drop support for smaller languages that AltaVista won't
translate for them.

   Seth David Schoen L&S '01 (undeclared) / schoen@uclink4.berkeley.edu

[NOTE ADDED LATER:
Try "http://babelfish.altavista.digital.com/cgi-bin/translate?" directly
(rather than clicking the "Translate" link from a web page they've found).
It was configured slightly differently when I first tried it, so I wasn't
sure how the URL was going to turn out.]


Re: Beware of HTML Mail (Brazil, RISKS-19.49)

Martin Minow <minow@apple.com>
Tue, 9 Dec 1997 15:26:54 -0800
Another interesting "risk" in the applet spam described by Tom Brazil
is the way the applet was configured:
  <applet code=XXX codebase=YYY name=ZZZ width=2 height=2>
This tells the Java interpreter to run the applet in a 2x2 pixel screen
area, which will be easy for the viewer to miss. So the applet can do its
thing while the viewer reads the rest of the display (presumably in HTML).

It might prove interesting to use a Java decompiler to determine precisely
what this applet actually does, and whether it calls any classes that are
not protected by the Java security "sandbox."  Sun's 100% pure test program
can be downloaded from their website for a first-pass check.

Martin Minow    minow@apple.com


Software Fault Injection

Gary McGraw <gem@rstcorp.com>
Wed, 10 Dec 1997 08:21:12 -0500 (EST)
We are pleased to announce the publication of a new software engineering
book:

    Software Fault Injection: Inoculating Programs Against Errors
    by Jeff Voas and Gary McGraw (Reliable Software Technologies)
         ISBN: 0-471-18381-4, John Wiley & Sons, 1997

Fault injection is a useful tool in developing high quality, reliable
code.  Its ability to reveal how software systems behave under
experimentally controlled anomalous circumstances makes it an ideal
crystal ball for predicting how badly good software can behave.  This
complete, how-to guide to a revolutionary new approach to software
analysis gets developers, programmers, and managers up to speed on
cutting-edge fault injection techniques. Fault injection is useful in
multiple domains including:

o Testing     - predicting where faults are most likely to hide
o Safety      - simulating failures in real software environments and
                estimating worst-case scenarios
o Law         - predicting the level of liability incurred by a piece of code
o Security    - uncovering potential security vulnerabilities during the
                development cycle
o Reuse       - obtaining a more accurate read on crucial maintenance and
                reuse issues
o Engineering - seamlessly introducing fault-injection into your
                software process

For more information, see <http://www.rstcorp.com/fault-injection.html>.
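
For readers unfamiliar with the technique, here is a toy illustration (not
taken from the book) of the basic idea: perturb a value at a chosen internal
interface, under experimental control, and check whether the surrounding
software still satisfies its safety property.

  import random

  def read_sensor():
      return 20.0                       # nominal reading, degrees C

  def read_sensor_with_faults(fault_rate=0.1):
      value = read_sensor()
      if random.random() < fault_rate:  # experimentally controlled anomaly
          value = random.choice([float("nan"), -1e6, 1e6])
      return value

  def controller(reading):
      # Property under test: never command heating outside 0..100 percent,
      # no matter what the sensor layer hands back.
      if not (-50.0 <= reading <= 150.0):
          return 0.0                    # fail safe on implausible input
      return max(0.0, min(100.0, (25.0 - reading) * 10.0))

  random.seed(1)
  for _ in range(1000):
      output = controller(read_sensor_with_faults())
      assert 0.0 <= output <= 100.0, "unsafe command under injected fault"
  print("survived 1000 injected-fault trials")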


7th USENIX Security Symposium - Conference Program

Jackson Dodd <jackson@usenix.ORG>
Thu, 11 Dec 1997 18:01:53 GMT
7th USENIX Security Symposium
January 26-29, 1998
San Antonio, Texas

Sponsored by USENIX, the Advanced Computing Systems Association
In cooperation with the CERT Coordination Center

For more information about this event, visit the Security Symposium
Website:  http://www.usenix.org/events/sec98/

=======================================================
Early registration discount deadline:  January 5, 1998
=======================================================

Are you responsible for your company's security?  Are you looking
for real-world implementations for security issues?  If you are,
plan to attend the 7th USENIX Security Symposium January 26-29 in
San Antonio.

You can learn about the newest tools in tutorials, hear the latest solutions
offered by researchers, meet and talk with some of the leading lights in the
security community, and share problems and solutions with your peers.
Speakers include Bill Cheswick, Carl Ellison, Dan Geer, Arjen Lenstra,
Alfred Menezes, Clifford Neuman, JoAnn Perry, Marcus Ranum, Jon Rochlis, Avi
Rubin, Shabbir Safdar, Bruce Schneier.

Be sure to sign up for tutorials early--they often fill up fast.  Topics
include:

* Java, NT, and Web Security
* Cryptography
* Certification
* How to Handle Incidents
* What Every Hacker Already Knows

USENIX is the Advanced Computing Systems Association.  Its members are the
computer technologists responsible for many of the innovations in computing
we enjoy today.  To find out more about USENIX, visit its web site:
http://www.usenix.org.
