The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 10 Issue 30

Tuesday 4 September 1990

Contents

o Business Week on High-Tech Amusement Parks
Karl Lehenbauer
o Arabian heat causing problems with US weapons computers
Jon Jacky
o Re: Stonefish mine
Chaz Heritage via Richard Busch
o Flight simulator certification
Henry Spencer
o Glass cockpits
Martyn Thomas
o "Wild failure modes" in analog systems
Kent Paul Dolan
o Faultless Software
Robert L. Smith
o Comment on Software Reliability Synopsis
Martin Minow
o Database searches and counseling center confidentiality
Derek Beatty
o Info on RISKS (comp.risks)

Business Week on High-Tech Amusement Parks

Karl Lehenbauer <karl@sugar.hackercorp.com>
3 Sep 90 11:18:30 CDT (Mon)
In the September 10, 1990 issue of Business Week, a sidebar to an article
about problems at Universal Studios Florida ("MCA May Have Created A
Monster") is about high-tech amusement park rides.  The cavalier attitude
toward risks is startling.

The article reports that "bigger thrills are needed to lure a jaded
generation raised on dazzling movie special effects" and that, consequently,
Universal Studios spent over $200 million for rides such as "a giant
robotic shark that attacks a boatload of tourists, a three-story animated
King Kong, and a bone-jarring imitation earthquake."  The article also
notes that other theme-park operators, including of course Walt Disney Co.,
have "joined the race for ''animatronics,'' robots, and other computerized
contraptions."

    Yet as the bugs plaguing Universal show, technology has its
    price.  Says Joseph B. McHugh, vice-president of Ride & Show
    Engineering Inc., which made the Jaws and Earthquake rides:
    "The complexity of the systems means there are more components
    that can shut a ride down."

    The scariest prospect for a park operator is a Jaws that
    doesn't bite.  And all it takes is one software bug.  At
    Universal, the trick is to synchronize a moving tram and
    an animated shark or gorilla that runs on a fixed program.
    "If they're not coordinated exactly, they run into one
    another and parts get bent," says Q. David Schweninger,
    chief of Sequoia Creative Inc., Kong's creator.  Universal's
    rides are "way out in front of everyone else," he says.
    "The price is that you're going to have teething problems."

Later, the sidebar quotes Schweninger as saying "There are lessons
to be learned here," and adding that ride makers *may* insist on more
shakedown time before new rides open.

    But none of this is likely to halt the shift to high-tech
    rides, which promise more safety and eat less real estate.
    Changing demographics also favor gentler high-tech rides
    over old-style gut-wrenchers such as roller coasters.

The article concludes by identifying the 1987 Disney Star Tours ride as the
first of the new wave of participatory rides, and notes that ride makers are
working on dozens of variations of the simulator ride for theme parks,
casinos, and special theaters.

Although it's reasonable to assume gentler rides would be safer than
"old-style gut-wrenchers," the claim seems to be that high-tech makes the
rides safer, which the rest of the article seems to refute.  Oh well.
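The coordination failure Schweninger describes -- a tram on variable timing
meeting an animatronic on a fixed program -- can be sketched as a simple
position interlock.  This is purely illustrative; the speeds, tolerances, and
logic below are invented, not anything Universal or Sequoia actually uses:

```python
# Hypothetical sketch of the synchronization problem: the animatronic runs a
# fixed-time program, so it "assumes" the tram moves at a constant speed.  An
# interlock compares the tram's measured position with the assumed one and
# aborts the show cycle before "parts get bent".

TRAM_SPEED = 2.0   # metres/second the fixed program assumes (invented)
MAX_ERROR = 1.5    # metres of drift tolerated before aborting (invented)

def expected_position(t):
    """Where the fixed animation program assumes the tram is at time t."""
    return TRAM_SPEED * t

def interlock(t, measured_position):
    """Return True if the show may continue, False if it must abort."""
    return abs(measured_position - expected_position(t)) <= MAX_ERROR

# A tram running slightly slow (1.7 m/s) drifts out of tolerance over time
# and eventually trips the interlock.
status = [interlock(t, 1.7 * t) for t in range(8)]
```

The point the article makes survives even in this toy: every added sensor,
tolerance, and abort path is one more component "that can shut a ride down."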

uunet!sugar!karl  Usenet access: (713) 438-5018


Arabian heat causing problems with US weapons computers

Jon Jacky <JON@GAFFER.RAD.WASHINGTON.EDU>
Tue, 4 Sep 1990 10:35:49 PDT
Here are excerpts from a story in THE SEATTLE TIMES, Sept 3 1990, p. A8:

US TROOPS ALREADY UNDER ATTACK FROM SUN AND SAND by Molly Moore,
Washington Post

... Patriot missiles, which are to protect critical military sites from attack
by Iraqi Scud-B missiles and attack planes, are controlled by computer
equipment housed in air-conditioned vans.  But the heat is so intense on each
van's metal shell that it raises the temperature inside.

"Every now and then there is a glitch that makes the (radar) scope look blank,"
said one missile technician.  He said in an attack, the computers are supposed
to override any radar screen malfunction or other problem that might hinder
a human operator, and track the incoming missile on their own.  ...

... If the A-10 attack plane fights Iraqi tanks, it will depend on cylindrical
pods under its wings to jam the signals of enemy air defenses.  But weapons
loaders here said the heat renders the jammers useless after about one hour of
operation.  A typical sortie against hostile tanks would likely require far
more than an hour's protection ...

- Jon Jacky, University of Washington, Seattle jon@gaffer.rad.washington.edu


Forwarding: Stonefish mine

Richard Busch <Richard_Busch.sd@Xerox.com>
Tue, 4 Sep 1990 10:03:21 PDT
Chaz Heritage has requested that I forward the following as a candidate for
inclusion in the Risks Digest. He is apparently unable to get mail to you
directly.

richard
  - - - - - - - - - - - - - - -
From: chaz heritage:wgc1:RX
Sender: chaz heritage:wgc1:rx
Subject: Stonefish mine
Date: 3-September-90 (Monday) 4:34:36 PDT

In RISKS-FORUM Digest  Wednesday 29 August 1990   Volume 10 : Issue 26 Pete
Mellor asks a number of questions about the Stonefish mine which may be
possessed by Iraq:

>1. If the Iraqis have the software for a 'limited number' of mines, why
    haven't they got enough for an unlimited number?<

To the best of my knowledge each Stonefish is intended to have a unique
identity to allow selective arming or disarming. Therefore the software
installed in each mine will be slightly different from the others'. It is
likely that Stonefish's software is in ROM, and not loaded into RAM from
floppies as Mr. Mellor seems to suggest. Manufacturing more ROMs, even if
one knows how to assign new identities, would require a ROM-burner and a
supply of blanks. The sort of ROMs used are possibly 'strategic goods' and
not available directly to Iraq, nor, perhaps, to Cardoen.

>2. How does Stonefish 'hide' from a minesweeper?<

Gee, that sure is classified!

Minesweepers can detect mines by a number of methods, to each of which
countermeasures may be available. Sonar, for example, may be spoofed by
returning an amplified signal, too big for a mine (so it looks like a whale
or wreck, perhaps?). Countermeasures may exist against magnetic anomaly
detection, but these are not disclosed in the unclassified literature
(since MAD is used to detect SSBNs this is hardly surprising). There may
well be other countermeasures included in Stonefish's suite.

>3. How reliably can Stonefish identify ships by their engine noise signature?
    What happens if your cruiser's big ends are rattling?<

The noise signature is mainly a function of the screw design, which is why
there was so much fuss over the Japanese selling the USSR some CNC
machining equipment capable of manufacturing low-noise screws. A ship
trying to avoid acoustic detection will proceed slowly and as quietly as
possible, but it cannot conceal the characteristic noise caused by screw
cavitation at any speed above a knot or two.

>4. Does Stonefish rely on some sort of sonar transponder
    to distinguish friend from foe?<

I imagine not, since if the mine were to transmit sonar in order to trigger
transponders located on friendly ships then it would render the mine very
susceptible to detection and countermeasures.

> 5. What are the chances that Iraq already has the software?<

100% chance of possessing the basic software if they possess the originally
issued mines. Each mine would be supplied with it. Reverse-engineering it
is probably a matter of copying a number of unusual boards and devices, and
would depend upon possessing at least one working example and sufficient
parts. Circuit boards used in weapons systems are often of unusual shape
and construction and replication of them would probably not be particularly
easy.

The target-recognition software, on the other hand, would probably consist
of digitised acoustic samples for comparison with the signature of the
target. If the mines were to be used against US or UK warships then samples
of their signatures would be required. These would probably not be
forthcoming. Any type of ship an example of which had been sold to Iraq
might possibly be at risk (assuming the Iraqis to possess the sampling
apparatus and the ability and time to use it), as might any types possessed
by navies considered at any time to be hostile to the UK Royal Navy.
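The kind of comparison described -- stored digitised samples matched against a
measured target signature -- can be sketched as a normalised correlation over
a small library.  Nothing here reflects the actual (classified) Stonefish
design; the signatures, threshold, and ship names are all invented:

```python
# Hedged sketch of acoustic-signature classification: compare a measured
# sample vector against stored reference signatures and accept the best
# match only if its normalised correlation clears a threshold.
import math

def correlate(a, b):
    """Normalised (Pearson-style) correlation of two equal-length vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def classify(measured, library, threshold=0.9):
    """Return the best-matching stored signature name, or None if no match."""
    best = max(library, key=lambda name: correlate(measured, library[name]))
    return best if correlate(measured, library[best]) >= threshold else None

library = {                              # invented example signatures
    "tanker":  [1, 3, 1, 3, 1, 3, 1, 3],
    "frigate": [1, 1, 4, 1, 1, 4, 1, 1],
}
print(classify([1.1, 2.9, 1.0, 3.1, 0.9, 3.0, 1.2, 2.8], library))
```

This also illustrates Heritage's point: the matching code is the easy part;
without reference samples of US or UK hulls, the library is empty.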

>6. The sophistication of Stonefish's recognition system argues for some kind
    of artificial intelligence. If it's that smart, would it know who was
    winning and change sides accordingly?<

Personally I wouldn't consider Stonefish to be an AI. I don't think the
problem posed is much of a risk to Stonefish operators....

>7. Isn't it time that Jane's produced 'All the World's Software'?<

Yes, but it would be a far slimmer volume than their usual anti-bookshelf
masterpieces (I had to get rid of all my old ones before my upstairs bedsit
suddenly turned into a downstairs bedsit). I don't honestly think enough
information would be disclosed to make it worthwhile.

>The implication is not that Stonefish has been sold bundled to Iraq, but
enough technical information is in dubious hands for the Iraqis to have a
good go at building a look-alike<

Personally I wouldn't back them to do it. If I were a naval commander I
should consider the threat of anti-ship missiles fired from the air or from
the Kuwaiti islands against targets acquired by aircraft possibly still
within Kuwaiti airspace to be a greater threat than that of Iraq managing
to copy a sophisticated underwater weapons system to a deadline.

If, on the other hand, Carlos Cardoen is telling fibs (which would not
perhaps be entirely out of character) then it's possible that he's sold the
Iraqis a few Stonefish already. If so, it seems unlikely to me that they'll
work properly without reprogramming for the target signatures of US and UK
shipping.

Incidentally, the classification >'UK restricted: commercial in
confidence'< is among the lowest available. Almost all documents used by
the armed services, including such things as reminder cards carrying the
NATO phonetic alphabet, are classified 'restricted'; the term 'commercial
in confidence' is applied merely in the fond hope that it will be
respected, since it has only commercial meaning. The >'Technical
Description and Specification' of Stonefish< may well be no more than the
sort of thing small boys (like me) fill their carrier bags with every other
year at Farnborough - I have collected stuff in the past on classified
systems like JP233, also deployed in the Gulf. What these documents do not
disclose is the true performance of the system and its strengths and
weaknesses. That sort of information, operationally significant, is usually
classified at a higher level (within NATO probably 'Secret').

Chaz                                          [All disclaimers apply.]


Flight simulator certification

<henry@zoo.toronto.edu>
Tue, 4 Sep 90 13:16:21 EDT
>(Does anyone know the certification requirements for simulators?)
>... I suspect there is nothing to cover
>simulators, since they are not directly involved in controlling flight.

There *are* certification requirements for simulators when they are involved in
pilot training, and in particular when used as substitutes for certain types of
in-flight training.  However, I think the emphasis has been more on precision
than on accuracy, i.e. on the sophistication and smoothness of the simulation
more than on its exact correspondence to reality.  Much is made, for example,
of the quality of the visual imagery provided.  I'm not sure how much has been
done on verifying faithful (as opposed to merely plausible) simulation of
behavior in obscure corners of the system.
                                          Henry Spencer at U of Toronto Zoology
                                          henry@zoo.toronto.edu   utzoo!henry


Glass cockpits

Martyn Thomas <mct@praxis.co.uk>
Mon, 3 Sep 90 15:49:50 BST
The UK has a "confidential human factors incident reporting programme",
run by the RAF Institute of Aviation Medicine, to allow aircrew to report
incidents which may reflect badly on their competence, so that others may
learn from them without any risk of the crewmember who made the report
suffering any penalty. They have a magazine - FEEDBACK - and the last issue
publicised the failure of most of the automated systems on an approach to a
UK airport in a storm last February. (Reported in RISKS, I believe).

This issue carries a questionnaire asking flight crew for their opinions of
automation. 78 questions of the form (ring the appropriate number)

"if the automatics fail it 1 2 3 4 5   if the automatics fail it
is always apparent              is never apparent "

The results of the survey will be *very* interesting. The survey is being
carried out on behalf of the UK Civil Aviation Authority. It is not clear
whether the results will be made public.

Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
Tel:    +44-225-444700.   Email:   mct@praxis.co.uk


"wild failure modes" in analog systems

Kent Paul Dolan <xanthian@zorch.sf-bay.org>
Sun, 2 Sep 90 18:59:45 GMT
>From: Pete Mellor <pm@cs.city.ac.uk>

>Synopsis of: Forester, T., & Morrison, P. Computer Unreliability and
>Social Vulnerability, Futures, June 1990, pages 462-474.

>In contrast [to digital computers], although analogue devices
>have infinitely many states, most of their behaviour is
>*continuous*, so that there are few situations in which they
>will jump from working perfectly to failing totally.

Unless my understanding from readings in Chaos Theory is entirely flawed, the
second sentence is simply false; it is now well known that analogue devices can
also (through design infelicities or just the perverseness of the universe) do
inherently "wild" state switches.  The classic example is the simple dribble of
water from a faucet, which, in the absence of analogue catastrophes, would be a
steady stream, or an equally spaced series of droplets, but is instead a series
of droplets whose size and spacing is unpredictable except statistically.

More important, this is now understood to be the _usual_ state of affairs, not
the anomalous one, in dealing with realistic analogue systems.

So, if the original authors' intent in demeaning our increasing
reliance on (possibly "un-failure-proofable") digital systems is
to promote a return to the halcyon days of analogue controls,
this is probably misdirected by the time the controls approach
the order of complexity of operation of the current digital ones.

We may just have to continue to live with the fact, true throughout
recorded history, that our artifacts are sometimes flawed and cause
us to die in novel and unexpected ways, and that we can only do our
human best to minimize the problems.

Just an observation in passing.

Kent   <xanthian@Zorch.SF-Bay.ORG> <xanthian@well.sf.ca.us>


Faultless Software

Robert L. Smith <rlsmith@mcnc.org>
Tue, 4 Sep 90 13:25:43 -0400
    The recent discussion in RISKS of the need for "ultrareliable" --
i.e., faultless -- software and the impossibility of obtaining it has
been interesting, but lack of it is no compelling reason to prohibit
computers from life-critical service.
    Advocates of that conclusion forget the reliability advantage
software has over hardware and people system components, which is that
once a software bug is truly fixed, it stays fixed!  In contrast
consider the many times you repair hardware only to see it fail again
from the same cause, and coach people to do it right only to hear
they've forgotten later.
    Legal restrictions on software applicability would delay quality
improvements.  They would inhibit progress toward systems that truly
are safer than those whose logic elements reside solely in human
brains.  Software in life critical environments is like old age:  to
understand its desirability, one has only to consider the alternative.
    The question is, have more people died in life critical environments
since software was installed than before, per man-hour of use?
If the answer is no, then the argument is specious.  Even if it is
yes, which I doubt, that is reason only to intensify testing and
debug.  Software engineering has not yet built all the tools
conceivable to that end.
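Smith's proposed test is a rate comparison, which is worth writing out because
raw death counts mislead when exposure changes.  The numbers below are invented
purely to show the arithmetic, not real accident data:

```python
# The relevant quantity is deaths per man-hour, not deaths: if software-era
# systems log more hours, the raw count can rise while the rate falls.

def fatality_rate(deaths, man_hours):
    """Deaths per man-hour of use (hypothetical figures only)."""
    return deaths / man_hours

before = fatality_rate(deaths=12, man_hours=2_000_000)   # pre-software era
after = fatality_rate(deaths=9, man_hours=3_000_000)     # software installed

# By Smith's test, a lower rate after installation would make the
# prohibition argument specious even if absolute counts had grown.
print(after < before)
```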

Regards, rLs


Comment on Software Reliability Synopsis

"Martin Minow, ML3-5/U26 03-Sep-1990 2253" <minow@bolt.enet.dec.com>
Mon, 3 Sep 90 20:27:01 PDT
Thanks to Pete Mellor for posting
>Synopsis of: Forester, T., & Morrison, P. Computer Unreliability and
>Social Vulnerability, Futures, June 1990, pages 462-474.
> Conclusion (quoted):
>Accordingly, we recommend that computers should not be entrusted with
>life-critical applications now, and should be only if reliability improves
>dramatically in the future.

There is a risk/benefit tradeoff that needs to be examined.  If we remove
computers from, say, hospital intensive care units because they are
"unreliable", will we save patients who might be killed by an errant computer,
or kill others whose lives might have been saved by that same unreliable
system?

>Given the evidence presented here, several ethical issues also emerge for
>people in the computer industry. For example, when programmers and software
>engineers are asked to build systems which have life-critical applications,
>should they be more honest about the dangers and limitations?

Does the article really claim that engineers who know they are building
life-critical applications are not honest about the dangers and limitations?
My experience has been that people are quite aware of their responsibilities.
On the other hand, systems that were never designed to be life-critical are
often used in unexpected ways.  Consider a speech synthesizer designed as a
speech aid for a disabled person.  While not designed as a "life-critical"
system, it IS the voice that that person must use to call for help.  What
is an ethical professional to do?  Refuse to build a speech synthesizer as it
*might* be used in a life-critical situation and the technology doesn't
yet allow us to design a perfect synthesizer?

>Are computer
>professionals under an obligation if a system fails: for example, if a patient
>dies on an operating table because of faulty software, is the programmer guilty
>of manslaughter, or malpractice, or neither?

Who cares?  Will charging programmers with manslaughter really yield better
quality software?  This seems like exactly the wrong thing to worry about.

>How is it that the computer industry
>almost alone is able to sell products which cannot be guaranteed against
>failure?

Because people buy the stuff.  Some software, by the way, *is* warranted
against failure.  All the type of warranty does is affect the price and
time to market.  I rather doubt that it affects the actual quality.

Martin Minow          minow@bolt.enet.dec.com


Database searches and counseling center confidentiality

Derek Beatty <Derek.Beatty@COSMOS.VLSI.CS.CMU.EDU>
Tue, 04 Sep 90 12:23:42 EDT
   Here's a minor variation on an old theme:

   Carnegie Mellon's online library catalog includes a full-text
database of the faculty/staff directory, and can be used to look up
anyone at CMU given their telephone extension.  This brings to light
a problem with confidentiality and the university's counseling
center.  If the counseling center phones a student and must leave a
message, they leave only their telephone number and receptionist's
first name, to protect against any stigma that might be associated
with seeking their services.  Easy access to the reverse telephone
index function via database searching erodes this effort at
confidentiality even though the library database publicizes no new
information (it's all in the published hard copy directory).
   Here again a large quantitative change (in lookup time) introduces
qualitative differences (it becomes plausible that a roommate might
snoop out of idle curiosity).  Awareness of this might lead someone,
feeling lowly as an (internet?) worm, to forego psychological services.
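The quantitative-to-qualitative shift Beatty describes is easy to see in code:
the printed directory is indexed one way, but a searchable database is
effectively indexed in every direction at once.  All names and extensions
below are invented:

```python
# A minimal sketch of the reverse-lookup risk: once the directory is
# searchable full text, a phone number is as good a key as a name.

directory = [
    ("A. Counselor", "x8-2922"),     # hypothetical entries
    ("B. Professor", "x8-1234"),
    ("C. Librarian", "x8-5678"),
]

# The paper book supports only name -> extension; the database gives you
# the reverse index for free.
by_name = {name: ext for name, ext in directory}
by_extension = {ext: name for name, ext in directory}

# A roommate who finds "call x8-2922, ask for Jan" on a message slip can
# now identify the calling office in one query instead of scanning the
# entire printed phone book.
print(by_extension["x8-2922"])
```

No new information is disclosed, exactly as the post says; what changes is
that the lookup cost drops from hours to seconds, which changes who bothers.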

  -- Derek Beatty,      grad student, CMU Computer Science
