The RISKS Digest
Volume 17 Issue 65

Monday, 22nd January 1996

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Stolen computers from the U.N.
Brian Mulvaney
Hey, your mailing list is sending me viruses!
Jon Callas
Cryo-risks
Charles P. Schultz
Spugs, BellWeckers & Chin'95...
T H Pineapple
Cost to crack Netscape security falls from $10,000 to $584
David Golombek via Lance J. Hoffman
Japanese fighter plane shot down another plane
Chiaki Ishikawa
Re: Galileo fault protection software
Kevin Maguire
Re: X-31 crash follow up
Roy Wright
Call Signs are unambiguous [Delta 153]
Peter Ladkin
"Year 2000" conference
Mark Seecof
US export regulations
Wilhelm Mueller
RISKS of personalized Windows mail readers
A. Padgett Peterson
Time glitch clarification
Ivars Peterson
Reminder, ISOC 1996 Symp. Netw. & Distr. Sys. Security
Christopher Klaus
ABRIDGED info on RISKS (comp.risks)

Stolen computers from the U.N.

<Brian_Mulvaney@intersolv.com>
Wed, 17 Jan 96 08:37:56 EST

The *Wall Street Journal* of 17 Jan 1996 carried a short item that read:

"U.N. officials said four computers containing most of the data on human rights violations in Croatia were stolen in New York. Officials said the theft was a 'very heavy blow' to efforts to prosecute war crimes."

The risk: The usual hazards of not having a good data backup plan?


Hey, your mailing list is sending me viruses!

Jon Callas <jon@worldbenders.com>

[Editor's note: The names of those sending this to me were deleted, to protect everyone. This is a fascinating letter, for a number of reasons. What happened was that some mail message blew up the AOL mailer, causing the host machine to crash, and the poor schlep getting the mail message complained to the listmom, assuming that said person was a programmer and not simply someone who maintains a mailing list. On the one hand, we can laugh at it; on the other hand, the fact that the bug exists in both the Mac and Windows versions means that they are doing some really cool cross-platform development. On the third hand, I'm sure there are a lot of people who would love to know what plaintext mail message will crash an AOL account. For the mischief-minded, let me say in advance that I don't know. — jdcc]

Real e-mail from a real AOLer to a local mailing list manager.

Names removed to protect the ignorant. :-)

-----

>>Some of the mail I receive from the ** mailing list does not open properly
>>in my mail reader, AOL for Windows, 2.5. When I click on an e-mail from the
>>** mailing list, AOL freezes and I have to reboot the system. It doesn't
>>happen with every single piece of ** e-mail, only some of them. The same
>>thing also happens on my husband's AOL account. He has AOL for Macintosh,
>>2.6.
>
>Please refer this problem to AOL tech support. I don't believe
>the problem is with the mail message. It's much more likely to be with
>the mailer reader.

**, this is NOT an AOL tech support problem. My husband was able to solve the problem for me, and for himself, by going through each message downloaded, one at a time, until he hit the one that made AOL freeze. After deleting this message, all problems were gone. I would therefore suggest to you, that if it is at all possible, run some sort of virus checker or something on postings to the list before you send them out. I am not a programmer, so I am not sure how to go about this, or if it is even possible, so please don't flame me again. Just see what you can do to get the list working properly. Thanks.


Cryo-risks

<CharlesP_Schultz-ECS013@email.mot.com>
19 Jan 96 09:54:11 -0600

While skipping through the web one day, I came across a cryonics web page. It certainly seems important that everything work right while you or a loved one are being preserved, and the web page contains a number of statements to reassure the potential customer. To RISKS readers, however, the statements may come across not so much as reassuring but rather as conveying a false sense of confidence.

For example:
"CryoSpan...employs multiply redundant, fail-safe computer monitoring of liquid levels in patient dewars that are situated in reinforced-concrete underground vaults."

"These vaults offer unprecedented protection against earthquakes, fires, floods, and vandalism." I'd be interested in knowing if there have already been anecdotes recorded about failures in cryo-preservation environments.
Charles P. Schultz
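
[The page does not say how the monitoring is actually implemented. Purely as an illustration of what "multiply redundant, fail-safe" could mean in practice, here is a minimal Python sketch of 2-out-of-3 voting on liquid-level sensors; every name and threshold in it is hypothetical, and it is in no way CryoSpan's design:]

  # Hypothetical sketch of 2-out-of-3 voting on redundant liquid-level sensors.
  # Nothing here comes from CryoSpan; names, threshold, and behaviour are
  # invented to illustrate one common reading of "multiply redundant,
  # fail-safe": a dead or disagreeing sensor raises an alarm instead of being
  # silently ignored.

  LOW_LEVEL_THRESHOLD = 0.30    # fraction of dewar depth; hypothetical value

  def level_alarm(readings, threshold=LOW_LEVEL_THRESHOLD):
      # Fail-safe vote: alarm unless at least two live sensors agree level is OK.
      ok_votes = sum(1 for r in readings if r is not None and r >= threshold)
      return ok_votes < 2       # a dead sensor (None) never counts as an OK vote

  print(level_alarm([0.42, 0.41, None]))   # False: two live sensors agree -> no alarm
  print(level_alarm([0.42, None, None]))   # True: only one live reading -> alarm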

Spugs, BellWeckers & Chin'95...

T H Pineapple <thp@cix.compulink.co.uk>
Mon, 22 Jan 96 15:35 GMT

Looks as if MicroSoft ran their Win95 install system thru their Word smellchucker, too. `Upgrading' our CDR-machine from WfWg3.11 to W'95 gave the kick-stab-thrash-gouge-alt-delete instruction as:

`Press Ctlt-Ds Ctlt-Del a to reboour reboour mach'

http://steev@almathera.uk.tech.masochists
[PC: win'95 Companion] [Amiga: Photogenics, EuroScene 2 n a bunch of stuff]

IP: Single computer breaks 40-bit RC4 in under 8 days (fwd)

"Lance J. Hoffman" <hoffman@seas.gwu.edu>
Fri, 19 Jan 1996 12:38:29 -0500 (EST)

[Intermediate forwardings deleted. PGN]

>Date: Thu, 18 Jan 1996 20:45:33 -0500
>From: daveg@pakse.mit.edu (David Golombek)
>To: cypherpunks@toad.com
>Subject: Single computer breaks 40-bit RC4 in under 8 days
>
>MIT Student Uses ICE Graphics Computer
>To Break Netscape Security in Less Than 8 Days:
>Cost to crack Netscape security falls from $10,000 to $584
>
>CAMBRIDGE, Mass., January 10, 1996 — An MIT undergraduate and part-time
>programmer used a single $83,000 graphics computer from Integrated Computing
>Engines (ICE) to crack Netscape's export encryption code in less than eight
>days. The effort by student Andrew Twyman demonstrated that ICE's advances
>in hardware price/performance ratios make it relatively inexpensive — $584
>per session — to break the code.
>
>While being an active proponent of stronger export encryption, Netscape
>Communications (NSCP), developer of the SSL security protocol, has said that
>to decrypt an Internet session would cost at least $10,000 in computing time.
>
>Twyman used the same brute-force algorithm as Damien Doligez, the French
>researcher who was one of the first to crack the original SSL Challenge.
>The challenge presented the encrypted data of a Netscape session, using the
>default exportable mode, 40-bit RC4 encryption. Doligez broke the code in
>eight days using 112 workstations.
>
>"The U.S. government has drastically underestimated the pace of technology
>development," says Jonas Lee, ICE's general manager. "It doesn't take a
>hundred workstations more than a week to break the code — it takes one ICE
>graphics computer. This shuts the door on any argument against stronger
>export encryption."
>
>Breaking the code relies more on raw computing power than hacking expertise.
>Twyman modified Doligez's algorithm to run on ICE's Desktop RealTime Engine
>(DRE), a briefcase-size graphics computer that connects to a PC host to
>deliver performance of 6.3 Gflops (billions of floating point instructions
>per second).
>According to Twyman, the program tests each of the trillion 40-bit keys
>until it finds the correct one. Twyman's program averaged more than 830,000
>keys per second, so it would take 15 days to test every key. The average
>time to find a key, however, was 7.7 days. Using more than 100
>workstations, Doligez averaged 850,000 keys per second.
>
>ICE used the following formula to determine its $584 cost of computing
>power: the total cost of the computer divided by the number of days in a
>three-year lifespan (1,095), multiplied by the number of days (7.7) it takes
>to break the code.
>
>ICE's Desktop RealTime Engine combines the power of a supercomputer with the
>price of a workstation. Designed for high-end graphics, virtual reality,
>simulations and compression, it reduces the cost of computing from $160 per
>Mflop (millions of floating point instructions per second) to $13 per Mflop.
>ICE, founded in 1994, is the exclusive licensee of MeshSP technology from
>the Massachusetts Institute of Technology (MIT).
>
>###
>
>INTEGRATED COMPUTING ENGINES, INC.
>460 Totten Pond Road, 6th Floor
>Waltham, MA 02154
>Voice: 617-768-2300, Fax: 617-768-2301
>
>FOR FURTHER INFORMATION CONTACT:
>
>Bob Cramblitt, Cramblitt & Company
>(919) 481-4599; cramco@interpath.com
>
>Jonas Lee, Integrated Computing Engines
>(617) 768-2300, X1961; jonas@iced.com
>
>Note: Andrew Twyman can be reached at kurgan@mit.edu.
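
[The release's arithmetic is easy to reproduce. A minimal Python sketch using only the figures quoted above; this is the cost accounting only, not Twyman's key-search code:]

  # Sanity check of the figures quoted in the release above.
  keys_total      = 2 ** 40         # 40-bit key space: ~1.1 trillion keys
  keys_per_second = 830_000         # Twyman's reported average search rate
  seconds_per_day = 86_400

  exhaustive_days = keys_total / keys_per_second / seconds_per_day
  average_days    = exhaustive_days / 2   # on average the key turns up halfway

  machine_cost  = 83_000                  # ICE DRE price quoted above
  lifespan_days = 3 * 365                 # three-year amortization: 1,095 days
  cost = machine_cost / lifespan_days * 7.7   # the release's rounded 7.7 days

  print(f"exhaustive search: {exhaustive_days:.1f} days")   # -> 15.3 days
  print(f"average search:    {average_days:.1f} days")      # -> 7.7 days
  print(f"amortized cost:    ${cost:.0f}")                   # -> $584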


Japanese fighter plane shot down another plane

Chiaki Ishikawa <ishikawa@personal-media.co.jp>
Thu, 18 Jan 1996 22:01:20 +0900 (JST)

In October 1995, a Japanese air force fighter plane mistakenly shot down another fighter plane during combat training.

What happened was this.

Two F-15s took off for combat training. One of them was apparently carrying LIVE missiles. The main switch that activates the weapons on the airplane was supposed to be off, per instructions. However, somehow, when the pilot pressed the fire button during the interception training, a Sidewinder missile was fired and shot down the other fighter plane. The pilot of the downed plane escaped safely by parachute.

Of course, the main question is why on earth an airplane on a training flight needs to carry live munitions, Sidewinder missiles at that. Air force officials claim that the airplane in question is routinely used for scramble missions to intercept possible intruders, and that taking the missiles off the airplane and reloading them for occasional training is a time-consuming task to be avoided. A newspaper article pointed out that the airplane needs refueling before and after the mission anyway, which takes time, so the missiles might as well be taken off and reloaded for the training.

An article about the tentative investigation report appeared in the newspapers last week.

According to the article, the "lock cue" (or whatever it is called) is shown ONLY WHEN the master arm switch is turned ON. Among the other signals displayed on the screen, this particular symbol or message (I am not sure exactly what it looks like from what I read in the newspaper) indicates that the master arm system is activated and that the pilot can really fire the missile.

Now the air force officials are said to wonder whether the pilot in question could somehow have sensed that something was amiss with the onboard fire-control computer system and aborted the lock-on training prematurely. (Presumably deducing: the master arm switch is off, so the "lock cue" should not appear, so why am I seeing it on the screen!?)
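
[The following is purely a hypothetical Python sketch of the display logic as described in the newspaper account, not the actual F-15 fire-control software; it only illustrates why a pilot who believes the master arm switch is off would have to treat the appearance of the lock cue as evidence of a fault:]

  # Hypothetical sketch of the display logic as described in the newspaper
  # account; this is NOT the actual F-15 fire-control software, and the symbol
  # names are invented.

  def hud_symbols(master_arm_on, target_locked):
      symbols = []
      if target_locked:
          symbols.append("TARGET TRACK")
      if master_arm_on and target_locked:
          symbols.append("LOCK CUE")   # shown only when the weapon can really fire
      return symbols

  print(hud_symbols(master_arm_on=False, target_locked=True))  # pilot's expectation
  print(hud_symbols(master_arm_on=True,  target_locked=True))  # what was displayed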

Now the risk, of course, is obvious.

First of all, the plane should not have carried live missiles, since they were not an intended part of this routine training. (Air force officials were saying that they would have disarmed the airplane if they had felt they had enough time.)

Secondly, we should not depend on a stressed computer operator (in this case, the pilot) to detect anomalies in the computer's operation so readily. If the operator sits in a comfortable chair at a desk, looking at a computer display in a quiet environment, we may be justified in expecting the operator to notice a strange message on the screen. But a fighter pilot in a combat exercise is hardly in a position to notice that something is wrong with the on-board computer! ("What is that message on the screen? SYSERR 1043. Hmm, stack overflow in the numerical signal processor that computes the missile's trajectory?")

As a matter of fact, there was even speculation that the poor pilot had not turned off the master arm switch as instructed, and he was treated as the prime culprit until these initial charges turned out to be groundless. Poor guy.

I feel that the Japanese authorities tend to focus on the human-error side of these accidents when, in fact, there may be inherent system problems. I don't know whether this is true in other countries.

Since there seem to be a lot of RISKS readers with backgrounds in the military and/or military-related industries, the above may be of some interest.

PS: Regarding the Japanese breeder-reactor accident of 8 Dec 1995, which I reported last year: someone questioned the wisdom of using chemically active sodium for the secondary loop. Well, the breeder DOES use sodium (natrium, Na) for both the primary and secondary cooling loops. It was a decision made by the Japanese design team. The French breeder reactor used sodium only for the primary loop.

Japanese investigators who have been analyzing the fracture of the secondary pipe, from which hot liquid sodium escaped, found that the casing of a temperature sensor fractured, breaking off the inserted sensor as a whole, and that liquid sodium escaped through the resulting hole. A part of the sensor casing is missing; it is believed to be stuck somewhere in the secondary-loop piping. The fracture itself is believed to have been caused by metal fatigue: the casing must have vibrated about half a billion times during operation. The vibration was caused by the interaction of the sodium flow with the vortex behind the sensor.

Some heads rolled because there was an effort by the on-site managers (and possibly people in the headquarters of the operating corporation) to cover up the severity of the accident. It has been a big scandal for the last three weeks or so.

A video taken by the operating crew immediately after the accident was edited, and the juicy parts were NOT shown to the public until a video taken by an alarmed local town official was broadcast over and over on Japanese TV. Someone finally fessed up, and three (initially reported as two, but I think the count is now three) different VCR tapes recorded immediately after the accident were found in the desks and lockers of some of the employees. Old-timers might recall the movie "The China Syndrome", in which an operating company tried to hide the structural weakness of a reactor caused by sloppy construction and inspection.

A tragedy is that a man at the operating company who handled the internal interviews concerning the hiding of these key tapes committed suicide. It seems that he could not handle the pressure of interviewing his former colleagues and subordinates and nailing down the culprits in the concerted effort to hide the tapes. He also had to answer tough questions from journalists at a widely broadcast press conference. (Maybe another risk: don't let the suspected organization police itself if the charges are serious enough. It may not get to the bottom of the problem, and, in the Japanese case, someone might commit suicide!)

Well, in Japan truth has caught up with fiction, and may yet surpass it.

Chiaki Ishikawa, Personal Media Corp., Shinagawa, Tokyo, Japan 142 ishikawa@personal-media.co.jp

Re: Galileo fault protection software (RISKS-17.64)

Kevin Maguire <maguire@tina.jpl.nasa.gov>
22 Jan 1996 19:19:38 GMT

During the critical engineering sequence (the sequence of commands that controlled the spacecraft during the Probe Relay and Jupiter Orbit Insertion), most of the spacecraft commands were issued twice, in case of problems with the first issuance.

When the command to fire the main engine for the insertion burn was issued for the second time, a "command constraint violation" was issued, since that command is illegal in the state the software entered after the command was issued the first time. This was expected, and not a problem.

When a command constraint violation is issued, a bit is set in a status word, and remains set until cleared by a direct command. The particular bit in question was scheduled to be cleared after the turn during which we had the incident.

The standard package of commands used to perform the turn, however, contains a command that checks the fault status word for non-zero bits. When it found one, it cancelled the active sequence of commands, and called a subroutine that safes the instruments, configures the telecom system to maximize commandability, and widens the deadbands that must be exceeded for the spacecraft to perform autonomous attitude and spin rate maintenance.
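
[A much-simplified Python sketch of the interaction Maguire describes, with hypothetical names and bit assignments; the real AACS flight software is of course nothing this small:]

  # Much-simplified sketch of the interaction described above.  All names and
  # bit assignments are hypothetical; the real Galileo AACS flight software is
  # of course nothing this small.

  CMD_CONSTRAINT_VIOLATION = 0x01   # hypothetical bit in the fault status word
  fault_status = 0                  # bits stay set until cleared by a command

  def issue_burn_command(already_burning):
      # A second issuance is illegal after ignition and latches a status bit.
      global fault_status
      if already_burning:
          fault_status |= CMD_CONSTRAINT_VIOLATION
          return "rejected (expected; not a problem by itself)"
      return "accepted"

  def standard_turn_package():
      # The turn package checks the latched status word before proceeding.
      if fault_status != 0:
          # Cancel the active sequence, safe the instruments, configure telecom
          # for maximum commandability, widen attitude/spin-rate deadbands.
          return "sequence cancelled, spacecraft safed"
      return "turn executed"

  issue_burn_command(already_burning=False)   # first issuance: accepted
  issue_burn_command(already_burning=True)    # second issuance latches the bit
  print(standard_turn_package())              # bit not yet cleared -> safing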

Kevin Maguire, Galileo Attitude and Articulation Control Subsystem
Kevin.P.Maguire@jpl.nasa.gov

Re: X-31 crash follow up (Mellor, RISKS-17.60)

Roy Wright <roy-wright@ti.com>
Mon, 22 Jan 1996 15:26:43 -0600

BTW, the NTSB report showed the cause of the crash to be miscommunication: the original pitot tube, which had pitot heat, had been replaced with one that did not, and this information was not properly communicated to the test team. The pilot had recognized that the pitot was icing, turned on pitot heat, and requested that ground control remind him to turn it off. Ground control acknowledged his request. There was a backup flight mode that did not depend on airspeed, and it would have been engaged had it been known at the time that the pitot tube did not have a heater. Ground support then informed the controller that there was no pitot heat, twice. Too late.

Fault was found with the failure to correctly recognize and communicate the risk of substituting the pitot tube.

Source - Aviation Week. Note the above has some risk since it was recalled from volatile organic memory.

Roy Wright, Texas Instruments roy-wright@ti.com 214-575-6691

Call Signs are unambiguous [Delta 153] (Lucero, RISKS-17.64)

<ladkin@techfak.uni-bielefeld.de>
Wed, 17 Jan 1996 22:43:54 +0100

Because Delta 153 thought a take-off clearance for American 153 was instead for them, there was a near-miss at JFK on 5 Jan 96. Scott Lucero (RISKS-17.64) saw two RISKS:

  1. not designing systems to recognize these [..] situations, and
  2. a risk [with] a growing number of customers in a limited address space.
     As airports get busier, [..] incidents [..] could happen more often.

I quote from the FAA Airman's Information Manual, 4-34, Aircraft Call Signs:

a. Precautions in the Use of Call Signs--

  1. Improper use of call signs can result in pilots executing a clearance intended for another aircraft. Call signs [begin italics] should never be abbreviated on an initial contact or at any time when other aircraft call signs have similar numbers/sounds or identical letters/numbers [end italics] [...]
  2. Pilots, therefore, must be certain that aircraft identification is complete and clearly identified [sic] before taking action on an ATC clearance. ATC specialists will not abbreviate call signs of air carrier or other civil aircraft having authorized call signs.

The call sign of American 153 is `American one-five-three', and Delta's is similar. Either the controller breached regulations and abbreviated the call sign, or Delta 153 breached AIM 4-34.2. The procedures are there. Thus Lucero's Risk 1 does not pertain. His Risk 2 pertains only if one assumes that as traffic increases, so does the number of aircraft handled by a single controller. This is not so--there's an upper limit to the traffic handled by a single controller.
Peter Ladkin
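
[Ladkin argues that the existing procedures suffice. For readers wondering what Lucero's "systems to recognize these situations" might look like, here is a minimal, hypothetical Python sketch of one such check: flag call signs on the same frequency whose numeric portions match, the very situation AIM 4-34 warns about:]

  # Hypothetical sketch only: flag pairs of call signs whose numeric portions
  # match, the situation AIM 4-34 warns about ("similar numbers/sounds").
  import re
  from itertools import combinations

  def numeric_part(call_sign):
      match = re.search(r"\d+", call_sign)
      return match.group() if match else ""

  def similar_pairs(call_signs):
      return [(a, b) for a, b in combinations(call_signs, 2)
              if numeric_part(a) and numeric_part(a) == numeric_part(b)]

  print(similar_pairs(["American 153", "Delta 153", "United 201"]))
  # -> [('American 153', 'Delta 153')]  (use full call signs for both flights)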

"Year 2000" conference

Mark Seecof <Mark.Seecof@latimes.com>
Thu, 18 Jan 1996 14:24:04 -0800

I got a solicitation from "Software Productivity Group, Inc." 508-366-3344 to attend their commercial conference ($1K/2 days--NB: I have NO connection with these people and NO opinion on the value of their product) presenting and discussing plans and methods for coping with 1 Jan 2000 system-date problems. Advertising like this helps attract management attention to the RISKS, I think... and it's clear that a lot of vendors and consultants are promoting work on the issues, so maybe 1/1/00 won't be the day the world ends after all.
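
[For readers new to the issue, the core of the 1 Jan 2000 problem is arithmetic on two-digit years. A trivial, hypothetical Python illustration:]

  # Trivial illustration of the classic two-digit-year bug; hypothetical code,
  # not from any particular vendor's system.
  def years_overdue(issued_yy, today_yy):
      # Age of an invoice in years, as many legacy systems computed it.
      return today_yy - issued_yy

  print(years_overdue(issued_yy=95, today_yy=96))  # 1995 -> 1996: 1 year, fine
  print(years_overdue(issued_yy=99, today_yy=0))   # 1999 -> 2000: -99, not fine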


US export regulations

Wilhelm Mueller <muewi@informatik.uni-bremen.de>
18 Jan 1996 15:29:31 +0100

Today we received the following letter from a computer manufacturer concerning a (Un*x) operating-system upgrade. It contained the following lines (words in [...] are manufacturer or product names in the original text; the typos and the attempt at an English translation below are mine):

[...]

Zu unserem Bedauern muessten wir feststellen, dass sich ein Problem beim Zusammenstellen dieser Produkte auf diesem Datentraeger ergeben hat. Die Zusammenstellung erfolgte in einer Weise, die nicht in Uebereinstimmung mit den Exportbestimmungen der USA ist.

Wenn Sie Ihre Systeme noch nicht mit [Produkte] aktualisiert haben, zerstoeren Sie bitte den Datentraeger. [Firma] wird Ihnen eine aktualisierte Version dieser Produkte innerhalb der naechsten Monate zusenden. Falls Sie Ihre Systeme fuer diese Produkte bereits aktualisiert haben, setzen Sie sich bitte umgehend mit [Kundendienst] in Verbindung, um den korrigierten Datentraeger zugeschickt zu erhalten. [...]

Aktualisieren Sie bitte nach Empfang des korrigierten Datentraegers Ihre Systeme mit ihm und zerstoeren Sie den Datentraeger mit [Produkte].

[...]

The German is quite formal and cautious; I can't translate it in the same style. [And PGN tried to fix the garbled nonASCII chars with customary equivalents.] It says the following:

[...]

We regret that we had to discover that a problem occurred when putting together the products on the media. The composition was carried out in a manner which does not conform to the export regulations of the USA.

If you have not yet upgraded your systems with [products], please destroy the media. [Company] will send you an updated version of these products during the next months. In case you have already upgraded your systems for these products, please contact immediately [service] to obtain corrected media.

Please update your systems after receipt of the corrected media and destroy the media with [products].

[...]

Wilhelm Mueller, Am Wall 139, D-28195 Bremen muewi@informatik.uni-bremen.de Tel. (Buero/off.) +49-421-361-10629 Tel. (priv./home) +49-421-169 2525

RISKS of personalized Windows mail readers

A. Padgett Peterson <padgett@tccslr.dnet.mmc.com>
Fri, 19 Jan 96 10:24:18 -0500

Lately I have been seeing a broad spectrum of postings from people: some seem to go only part of the way across the terminal, and others not only use up the entire eighty characters on the screen but keep going far past the end of the line and wrap [or get lost] on most 80-character terminals.

I was helping my wife do some research on the net and came across one of her postings that wrapped. Seems she had been using Eudora with a tiny font setting that was not wrapping until 90 characters had been sent. Playing with the fonts, I found a large one that wrapped with only forty characters on the line. Both appeared to be using the whole screen.
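
[A minimal Python illustration of the effect, assuming, hypothetically, that the mailer hard-wraps outgoing text at however many characters fit across its window; it is not Eudora's actual code:]

  # Minimal illustration (not Eudora's actual code): a mailer that hard-wraps
  # outgoing text at the number of characters visible in its window produces
  # very different results depending on the sender's font size.
  import textwrap

  message = ("This sentence was typed by someone who has no idea how wide "
             "their mail window really is in characters. ") * 3

  for visible_columns in (40, 90):      # huge font vs. tiny font (hypothetical)
      sent = textwrap.wrap(message, width=visible_columns)
      shown = sum(-(-len(line) // 80) for line in sent)   # lines on an 80-col view
      print(f"wrapped at {visible_columns}: sent {len(sent)} lines, "
            f"displayed as {shown} lines on an 80-column screen")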

Might mention that Eudora/Windows seems to be the mail reader of choice for the "free software" most ISPs are providing for new netters.

Suspect that the different wraps in postings are really telling us a lot more about the posters' visual acuity than they may want the world to know.

Padgett

Time glitch clarification

Ivars Peterson <ip@scisvc.org>
Fri, 19 Jan 96 09:02:10

My apologies to the National Institute of Standards and Technology for inadvertently implicating NIST in the New Year's Day time glitch. The problems at AP broadcast services involved time signals that originated at the U.S. Naval Observatory.

Ivars Peterson, Math/Physics Editor, Science News, 1719 N Street, NW Washington, DC 20036-2888 ip@scisvc.org Tel: 202-785-2255 Fax: 202-659-0365

Reminder, ISOC 1996 Symp. Netw. & Distr. Sys. Security (RISKS-17.52)

Christopher Klaus <cklaus@iss.net>
Thu, 18 Jan 1996 12:54:51 +1494730 (EST)

THE INTERNET SOCIETY 1996 SYMPOSIUM ON NETWORK AND DISTRIBUTED SYSTEM SECURITY (NDSS '96) 22-23 FEBRUARY 1996
SAN DIEGO PRINCESS RESORT, SAN DIEGO, CALIFORNIA

FOR MORE INFORMATION on registration contact Donna Leggett by phone at 703-648-9888 or via e-mail to Ndss96reg@isoc.org.
FAX to NDSS'96 Registration (703) 648-9887.
NDSS96, 12020 Sunrise Valley Drive, Suite 210, Reston, VA, 22091, USA

WEB PAGE - Additional information about the symposium and San Diego, as well as an on-line registration form, are available via the Web at: http://www.isoc.org/conferences/ndss96

Christopher William Klaus, Internet Security Systems, Inc., Suite 115, 5871 Glenridge Dr, Atlanta, GA 30328 http://iss.net/ (404)252-7270
