The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 11 Issue 16

Monday 25 February 1991


o RISKS in radiation treatment of cancer
Peter Kendell
o Computer Tax Glitch in Los Angeles
Steve Milunovic
o Computer problems with MD-11 jumbo jet
Fernando Pereira
o Re: Warranties
Jim Horning
o Accuracy in movies and newspapers
Tom Neff
o Re: Biology of Unix; Message attributions
Michael Travers
o Re: Worse-is-better for the 1990s (Unix)
Joseph M. Newcomer
o Peace Shield and "software development problems"
Henry Spencer
o Monopoly Security Policies for Thumb Prints
Bob Baldwin
o Re: More on very reliable systems
Rod Simmons
Robert I. Eachus
o Info on RISKS (comp.risks)

RISKS in radiation treatment of cancer

Peter Kendell <>
23 Feb 91 10:07:42 GMT (Sat)
From the Guardian (London edition), Saturday February 23rd

Patients die after radiation mix-up

A Spanish health official said yesterday that he feared the worst for 24 cancer
patients who received high doses of radiation when a Zaragoza hospital's linear
accelerator went haywire for 10 days.  Three patients have already died.

A Zaragoza judge is investigating the causes of the disaster, which the
director of the Insalud chain of state-run hospitals called "the worst
accident in the world" of its type.

"We fear the worst for some of the patients," an Insalud spokesman, Fernando
Gomez, said.  "We are seeking information to find someone who has experience
in this kind of situation."

General Electric officials were unable to comment on the machine's operation.
Hospital officials said that the machine was now functioning normally.

Peter

Computer Tax Glitch in Los Angeles

"Steve Milunovic" <>
25 Feb 91 10:31:19 U
  [abridged by PGN]

A story by Denis Wolcott (with contributions from Walter Hamilton) in today's
Los Angeles Daily News from the NY Times service described how thousands of LA
County homeowners were billed up to $15,000 for three years' property taxes,
because of a 1988 glitch in an $18M computer system (`Optimum'), which did not
work and had to be rewritten from scratch.  As a result, the county was unable
to collect $10M in taxes, and has been unable to disburse that money to school
districts and other agencies.  Mercifully, the county will not charge back
interest on the unbilled back taxes!

Computer problems with MD-11 jumbo jet

Fernando Pereira <>
Sun, 24 Feb 91 14:55:19 EST
According to the AP (2/20/91), American Airlines might refuse delivery of a
second MD-11 jet from McDonnell Douglas because of computer and fuel problems
with the first MD-11 it received. American's chairman said that the airline is
``very, very, very unhappy'' with the plane.  American has suspended flight
certification tests for the plane because of the computer problems.


Re: Warranties

Jim Horning <>
Thu, 21 Feb 91 12:31:18 PST
I, too, consider the standard shrinkwrap warranties scandalous, and definitely
a RISK to the public.  But it's not easy to see how to fix them.  Even though
the function of hardware is much better specified, computer hardware generally
comes with what I call the Kodak Film Warranty: "liability limited to cost of
unexposed film."

The current round of discussion has relied too much on strained analogies
with automobiles, and not enough on a rational discussion of the issues.
Greg Johnson and Gene Spafford separately made two important points in
RISKS 11.15.  Taken together, they provide a basis for considering what
a sensible warranty should promise.

1) Software isn't a "thing" that can be purchased, it is a design.  As
Dijkstra has said, "The true subject matter of the programmer is the design
of computations."  If we pursue the automobile analogy, we need to consider,
not defective automobiles, but defective automobile designs (e.g., Pinto
gas tanks).

2) Software doesn't wear out, and hence doesn't need "maintenance" to cope
with physical wear.  Any defects were present at the outset.

I think publishing provides a more natural and enlightening analogy:

- We buy media containing software, just as we buy CD's containing music
or books containing words.

- There is a sharp distinction between the medium being defective (lost bits
or missing pages) and the contents being defective.

- Most publishers warrant only the medium, not the contents.  However,
if a CD is labeled as Beethoven's Fifth Symphony and the contents turn out
to be 2 Live Crew, you can probably get an exchange during the first week
after purchase, even if you've broken the shrink wrap.

- The medium is subject to physical wear and decay, not the contents.

- Some publishers of certain kinds of books will, for a fee, provide periodic
updates to the contents of their books.  Most book buyers don't pay for such
services, but they are invaluable for a few.  Other publishers will supply
errata on request.

- There are few legal restrictions on what purchasers may do with the medium,
many more about what they may do with the contents.

Not much software is sold with a specification precise enough to allow a
customer to prove it "doesn't perform as specified."

Maybe someone who is outraged at the security hole that started this
discussion would post the part of his vendor's specification stating that
no such security holes exist?  Failing that, maybe someone would post a
specification that the vendors "ought to have" based their warranties on?

After we see these specifications, we can discuss:

- What fraction of the total functionality of the operating system this
component of the specification represents.  We need some idea of the size
of the total specification that a warranty should be based on.

- What fraction of the outraged customers would have noticed if JUST THIS
PIECE of the specification had been omitted by their software vendor.  And
what fraction of those who noticed would then have refused to buy it.

Until we have ANSI standards (or their equivalents) for complete operating
systems and application packages, I'm afraid that the ordinary customer is at
the mercy of the competence and goodwill of the software vendors.
                                                                     Jim H.

Accuracy in movies and newspapers (Re: Hollombe, RISKS-11.15)

Tom Neff <>
21 Feb 91 00:00:14 GMT
The beauty of TV and newspapers is that _everything_ in them is wrong!  This is
really true.  No matter what the subject matter is, if you happen to be a
specialist in that area, you'll grit your teeth when you read or watch.
Neurosurgery — ballet — the law — names of streets in your own hometown —
archaeology — accounting — you name it, they get it wrong.  I'm not
surprised: the media have to talk about everything under the sun but couldn't
possibly afford to be experts in all of it.  The fun part is that even when
we're done rolling our eyes at, say, some egregious astronomy error, we sit
back and take at face value something about China or Churchill or Chernobyl or
child development!  We shouldn't.  Experts in those areas are busy gritting
their teeth even now — having swallowed the astronomy stuff without
complaint. :-)

    [Another instance that strikes home even more is being directly MISquoted
    after making a carefully worded direct statement and insisting that it
    be used verbatim if at all...  Perhaps there is nothing special about
    computers and related technologies that causes many media folks to be so
    far off the mark.  But there are lots of technological nonsophisticates
    writing on technology (and only a few really thoughtful and careful ones).
    Perhaps the worst problem is the tendency toward 10-second sound bites and
    25-words-or-less oversimplifications.  PGN]

Re: Biology of Unix; Message attributions

Thu, 21 Feb 91 15:39:14 EST
Sorry, but I'm not the author of that message, it was forwarded from
some company's internal mail system and the best guess at the author
ID is MADRE::MWM.  My own opinion of Unix is that it's more like
crabgrass, or rabbits in Australia — a rapidly-spreading and
obnoxious weed that invades computational ecologies and displaces the
native species.

Also, the Mike that signed the first few paragraphs is not me as you
inferred; it must be MWM.  I only wrote the first sentence.

    [finger informs me that "mt" is Michael Travers.  Apparently he was
    unaware that HIS OWN FROM: field did not give his own name!  In
    response to a poke from me, he has now upgraded.  PGN]

Re: Worse-is-better for the 1990s (Unix)

"Joseph M. Newcomer" <>
Thu, 21 Feb 1991 15:32:30 -0500 (EST)
Here we are in 1991.  The two primary operating systems (at least in volume)
are representative of O/S technology of the late 1960s to early 1970s (Unix and
MS-DOS), and the three important languages are C (vintage mid-1960s, i.e., it
is BCPL in disguise), LISP (vintage late 1950s) and Ada (vintage early 1970s).
With the exception of LISP, there have been many better operating systems and
languages lost along the way, and there seems to be no interest in updating our
technology.  I have my list of better languages and O/Ss (and it is probably
different than yours).  But on the whole, the barely-adequate-but-portable has
replaced the not-too-bad or even pretty-good but-not-portable.  There is a
lesson here.  There is a risk of accepting the barely adequate; you may have to
live with it for a long time.

Peace Shield and "software development problems"

Henry Spencer <>
Mon, 25 Feb 91 14:45:19 EST
A friend, who has asked not to be identified, says it looks to him like the
Boeing problem with Peace Shield was not software-development trouble but cost
estimation.  (He has worked for Boeing but is not currently a Boeing employee.)
His feeling is that Boeing bid low on a fixed-price contract, discovered an
impending cost overrun, found that the Saudis weren't interested in pumping in
more cash, decided that the best way to cut its losses was to encounter fatal
software-development problems, and more or less stopped work on the project
until the schedule slippage became too severe to ignore and the contract was
cancelled.

This explanation is certainly plausible.  Whether accurate or not, it points
out a significant issue: problems in developing large software systems are
sufficiently common that it's convenient to use them as an excuse when
something goes wrong elsewhere.  It's probably wise to be very cautious about
blaming the software people for recent military systems, in particular.  DoD
has recently been using fixed-price contracts a lot, and a good many military
contractors have discovered — the hard way — that they've forgotten how to do
realistic cost estimates.  Since DoD basically has no memory, it's better to
default on the contract and accept some transient bad feelings than to fulfill
it and lose money.  Software makes a wonderful excuse, since it's considered a
natural law that a certain fraction of big software projects inexplicably fail
with nobody to blame.
                           Henry Spencer at U of Toronto Zoology   utzoo!henry

Monopoly Security Policies for Thumb Prints

Bob Baldwin <>
25 Feb 91 11:53:00 -0800
The California department of motor vehicles requires a right thumb print to get
a driver's license or state ID card, so the DMV now has a large online database
of thumb prints.  The primary purpose of this database is to prevent people
from taking on new identities without the DMV knowing about it (and thus
getting clean driving records).

What if someone is supposed to have a new identity?  What about
witness relocation programs?  What about undercover police officers?
The DMV has to treat all its thumb print data as being as sensitive
as the most sensitive thumb print it contains.  One alternative is
for each government agency to submit a list of the thumb prints that
need special restrictions.  Needless to say, the DMV wouldn't want to
pay for the safeguards that would be required on such a list.

The underlying problem is the desire to enforce security (access) policies that
imply monopoly control of data.  For example, the FBI wants to know about all
queries matching the fingerprints of its employees.  The National Crime
Information Center can support this policy, but such a policy is hard to
enforce when different states are involved.  The FBI doesn't want to tell all
the states about all its employees.
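One compromise can be illustrated with a toy sketch.  This is hypothetical code, not any real NCIC or DMV interface; the names (`StateDMV`, `register_watch`, `lookup`) and the hashing scheme are invented for illustration.  The idea: the watching agency submits only one-way hashes of the prints it cares about, so the state can flag matching queries without holding the agency's roster in readable form.

```python
import hashlib

def tag(print_id: str) -> str:
    """One-way hash of a print identifier, so the watch list
    stored at the state never contains raw identities."""
    return hashlib.sha256(print_id.encode()).hexdigest()

class StateDMV:
    """Toy model of a state database honoring a federal watch list."""
    def __init__(self):
        self.watch = {}          # hash of print -> agency to notify
        self.notifications = []  # (agency, hashed print) pairs

    def register_watch(self, agency: str, print_id: str):
        # The agency hands over only the hash, not the identity itself.
        self.watch[tag(print_id)] = agency

    def lookup(self, print_id: str):
        # Ordinary query; if it matches a watched hash, notify the agency.
        t = tag(print_id)
        if t in self.watch:
            self.notifications.append((self.watch[t], t))
        return f"DMV record for {print_id}"

dmv = StateDMV()
dmv.register_watch("FBI", "print-123")
dmv.lookup("print-999")  # ordinary query: no notification
dmv.lookup("print-123")  # watched print: the FBI is notified
```

The limitation mirrors the point above: the state still sees every raw query, and fingerprint data is not secret enough for hashing to hide, so this only keeps the roster itself out of casual view; it does not remove the need to trust the state.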

What's the solution?  We could eliminate monopoly security policies and the
programs that depend on them.  We could have the federal government maintain
its monopoly, and provide services to the states.  We could trust the states to
make sure that only "good guys" access the data.  Perhaps the best solution is
to ignore the problem and go to lunch.

Re: More on very reliable systems (Leichter, RISKS-11.12)

Rod Simmons <rod@uceng.UC.EDU>
Sun, 24 Feb 91 11:56:15 -0500
>In fact, I've always found it interesting how much more we demand from digital
>systems than we demand from mechanical ones.       ^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perhaps we "do" because we "can," or at least we think that we "can," based
upon an inspection of failure probabilities for various types of components and
devices (such as those given in WASH 1400, and other similar compilations of
failure data).
                                        --Rod Simmons

Re: More on very reliable systems (Siegman, RISKS-11.14)

Robert I. Eachus <>
Mon, 25 Feb 91 17:11:12 EST
   Anthony E. Siegman (siegman@sierra.Stanford.EDU) said:

>     Anyone concerned with the subject of multiple correlated
>  failures in systems with very reliable individual components should
>  look back at the incident some years ago when a United Airlines jet
>  lost all three engines simultaneous in flight over the Caribbean
>  south of Miami....

>     2.  The problem was really a false alarm, i.e., the oil
>  pressures were OK, just the sensor indications were wrong.

   No, the engines were really overheating because all the oil leaked
out past the (missing) seals.

>  Oh, they did get one (or two?) of the engines restated, just in the
>  nick of time, however, and limped into Miami.

     When the middle engine started to overheat, the pilot shut it
down, declared an emergency, and immediately headed back toward Miami,
grabbing for altitude.  When the other two engines showed overheat, I
think he cut the RPMs back so that engine failure would not be
catastrophic, but in any case ran them to failure.  When these two
engines failed he was on a straight in glide to Miami International,
with calculations showing that at best glide, the plane would hit the
water a mile short of the runway.  Passengers were told to prepare for
the possibility of a water landing.

     At an altitude of 200 feet, as planned (and apparently as
explained to the passengers), the crew restarted the middle engine
(which they had shut down, remember) and got the hoped-for two minutes
of life out of it, long enough to land and get off the main runway.
All three engines were damaged beyond repair.

     Risks (or lessons learned) from this and the Gimli Glider
incident?  First, airline pilots should be required to have glider
experience.  In both these cases the pilots knew what to do; however,
there have been several cases where aircrew behavior during a
multiple engine shutdown has verged on panic, most recently on a Chinese
747.  (There have been lots of such incidents, usually involving
flying through clouds of volcanic origin.)

     Also, and much more important but often overlooked, is the value
of experience in knowing when some condition is genuinely "new" and
unknown, and the trust which should be given to the pilots by their
employers (and the FAA) in such cases.  What would have happened if the
pilot in this case had taken his time about turning back to see if he had
a "real" emergency?  In this case he acted immediately to turn around--he
could always turn around again if the problem wasn't serious, and could
expect the airline to back his judgement.  With some airlines the
pressure is in the opposite direction (schedule ahead of safety), and
that was a contributing factor in the Gimli Glider incident.
