The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 3 Issue 82

Monday, 20 October 1986

Contents

o NASDAQ computer crashes
Jerry Leichter
Vint Cerf
o Sensors on aircraft
Art Evans
Henry Spencer
o Loss of the USS Thresher
John Allred
o Re: US Navy reactors
Henry Spencer
o Risks from Expert Articles
Andy Freeman
o Info on RISKS (comp.risks)

NASDAQ computer crashes

<LEICHTER-JERRY@YALE.ARPA>
20 OCT 1986 11:09:46 EST
         OTC stock market - Computer problems snag trading

Computer problems halted trading for a total of about three hours on Thursday
[16 October 1986] in over-the-counter stocks listed through the National
Association of Securities Dealers Automated Quotation system.  Craig Thompson,
manager of marketing information for the National Association of Securities
Dealers, said the system was shut down from about 11:05 a.m. to 2 p.m. EDT,
and then again five minutes before the 4 p.m. closing, because of a breakdown
of equipment at its computer operations center in Trumbull, Conn.  The exact
nature of the problem had not been determined, Thompson said.  "We don't think
it will affect tomorrow's business, as we hope it will be corrected by then,"
Thompson said.
                                            {AP News Wire, 16-Oct-86, 16:48}


NASDAQ computer crashes

<CERF@A.ISI.EDU>
20 Oct 1986 06:47-EDT
Since so much of Wall Street operation is heavily dependent on automation
and communication, it would be very interesting to know more about the
causes and nature of the failure and how dealers/users coped with the
outage.  Obviously, neither Wall Street nor the economy collapsed, but it
might be instructive to know whether the ability to accommodate the failure
was a function of the length of the outage.  (How close did we actually come
to disaster?  How much longer an outage could have been sustained without
permanent damage?)
                                        Vint Cerf


Sensors on aircraft

"Art Evans" <Evans@TL-20B.ARPA>
Mon 20 Oct 86 13:11:56-EDT
It's all well and good to propose a sensor that reports, "the left engine
isn't there," or, "the left ailerons are gone," or whatever.  But, how is
the sensor to work?  That is, just what do you propose to sense?  Sure, you
and I can look at the left wing and decide immediately, but what is the
sensor to do?  Moreover, how do you propose checking the reliability of a
sensor that, in the nature of things, almost never does anything?  I think
these are hard problems.

As for the JAL 747 disaster -- the flight crew knew precisely what the
problem was: With the loss of all three (or was it four?) hydraulic systems,
they had no control whatsoever over any control surfaces.  They may not have
known what caused the problem, but they were all too aware of the effects.

Aviation Week published the transcript of the cockpit voice recorder not too
long after the accident, and it is the most terrifying such transcript I've
ever read.  The flight crew were dead, and they knew it.  They were still
flying around, but they were in effect test pilots in a new kind of aircraft
no one had ever thought much about before.  Their problem was simple:
control pitch attitude (nose up or down) with power, and control direction
with differential power (more power on one side than the other).  Well,
maybe with plenty of time to experiment someone might learn to fly a 747
that way.  They tried, as long as they could, but they just weren't able to
hack it.  Most power adjustments produced oscillations in attitude that they
were unable to damp out.  Finally, it got away from them in a way they
couldn't recover from, and they went down.  A brave attempt at the probably
impossible.

Art Evans


Aircraft self-awareness (Sensors on aircraft)

Henry Spencer <decvax!utzoo!henry@ucbvax.Berkeley.EDU>
Mon, 20 Oct 86 22:00:32 edt
I believe some of the DC-10 engineers proposed during development that it
should have a set of video cameras viewing things like the wings and tail,
so that the flight crew could get a look at the situation if they really
needed to.  (This is not as good as having it automatically brought to
their attention, but many classes of problems would come to their attention
quickly anyway...)  The proposal was rejected, I believe on grounds of cost
and weight.

In fairness, the only DC-10 crash I remember offhand where this might have
helped was the Chicago engine-separation one, and it's not clear that the
crew had time to study the problem.  I don't know what the proposal had
in the way of monitors, but for sheer reasons of panel space I suspect it
would have been a switchable monitor rather than a bank of screens showing
all views continuously.  That crash happened fast; I doubt that information
not available at a glance would have helped.

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Loss of the USS Thresher

John Allred <jallred@labs-b.bbn.com>
Mon, 20 Oct 86 13:31:40 EDT
Thresher, according to the information I received while serving on submarines,
was lost due to a catastrophic failure of a main sea-water valve and/or pipe,
which flooded a major compartment.  The sinking was reported by the mother
ship, which was accompanying the boat on her sea trials.  Scorpion, on the
other hand, was lost with no observer present, and no reason for her loss has
been given to the public.

The loss of reactor power, in and of itself, should not have caused the loss of
the Thresher.  Boats are usually trimmed to be neutrally buoyant, so the loss
of motive power should not be fatal.

Does anyone out in netland have access to the report of the Thresher's loss?
It would be good to hear the true story.


Re: US Navy reactors

Henry Spencer <decvax!utzoo!henry@ucbvax.Berkeley.EDU>
Mon, 20 Oct 86 22:00:42 edt
Brint Cooper suggests that the USN's excellent reactor safety record might
stem from their deep distrust of automatic equipment.  Personally, I think
the connection is indirect.  It's not at all obvious that manually-run
reactors are safer than partly-automated ones.  Humans are better at coping
with unforeseen situations, *if* they truly understand the equipment they
are controlling.  If they're just being used as organic servomechanisms,
then they are less reliable than automatic equipment, which does not get
tired or bored (when things are going well) or frightened or tense (when
they aren't).  I suspect the USN reactor technicians have a pretty good
understanding of their hardware, given the general atmosphere of great care
surrounding USN reactors.  However, servomechanisms are probably still
safer when the problems have, in fact, been foreseen accurately.  This is
likely to be the case for the majority of problems.

The indirect connection I see is the obvious one:  distrust breeds caution.
Whether or not manually-operated reactors are safer than semiautomated ones,
*any* equipment clearly is going to be safer when elaborate care is taken
in materials, assembly, testing, crew training, and maintenance.  A high-
quality reactor run by carefully-trained humans is clearly safer than a
slipshod one run by rusty machinery.

Eugene Miya notes that there is some doubt about the reactor being blameless
in the loss of the Thresher.  True; I should have noted that.

Steve Woods notes:

> There is another factor to consider here, redundancy [cross-training] ...
> ... these are WARSHIPS, they need to be able to function even
> after suffering SEVERE damage and heavy casualties...

While I tend to agree that cross-training is a good idea, it's actually
not clear that the USN has thought this one through, for submarines in
particular.  It's not obvious to me that there is any likelihood of severe
damage and heavy casualties in a nuclear sub without catastrophic hull
damage as well.  Nuclear subs generally do not have internal pressure
bulkheads, as I recall, because there isn't enough buoyancy reserve for
the sub to survive with a flooded section anyway.  This means that a
serious hull breach is quickly fatal.

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Risks from Expert Articles

Andy Freeman <ANDY@Sushi.Stanford.EDU>
Mon 20 Oct 86 11:45:32-PDT
Scott@rochester.arpa (Michael L. Scott) wrote the following in RISKS-3.81:

        Why not?  To begin with, we cannot anticipate every possible
   scenario in a Soviet attack.  Human commanders cope with unexpected
   situations by drawing on their experience, their common sense, and their
   knack for military tactics.  Computers have no such abilities.  They can
   only deal with situations they were programmed in advance to expect.

Dr. Scott obviously doesn't write very interesting programs. :-)

Operating systems, compilers, editors, mailers, etc., all receive input that
their designers/authors did not anticipate exactly.  Some people believe that
computer reasoning is inherently less powerful than human reasoning, but that
has not been proven yet.

Most op-ed pieces written by experts (on any subject, supporting any
position) simplify things so far that they're actually incorrect.  The
public may be ignorant, but they aren't stupid.  Don't lie to them.
(This is one of the risks of experts.)

It can be argued that SDI isn't understood well enough for humans to make the
correct decisions (assuming super-speed people), let alone for those decisions
to be programmed.  That's a different argument, and one on which Dr. Scott is
(presumably) unqualified to give an expert opinion.  His expertise does apply
to the question "can the SDI decision be programmed correctly?", on which he
spends just one paragraph.
                                                       -andy
