The RISKS Digest
Volume 3 Issue 87

Sunday, 26th October 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

System Overload
Mike McLaughlin
Information Overload
Mike McLaughlin
SDI assumptions
Herb Lin
Info on RISKS (comp.risks)

System Overload

Mike McLaughlin <mikemcl@nrl-csr>
Sun, 26 Oct 86 21:13:56 est
Back in Systems 001 I was taught that a system, be it a reactor control or
SDI, fails under overload in one of the following ways:
    1.  Sacrificed quality of work.
    2.  Sacrificed throughput rate.
    3.  Failed catastrophically (crashed).
    4.  Any combination of the above.  
Can a given system be designed to fail in a _chosen_ manner, so that it does
not crash - i.e., "graceful degradation"?  Of course.  I see no reason why
new systems cannot do the same - at least with regard to the overload portion
of the problem.

    - mikemcl@nrl-csr.arpa
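
To make that concrete, here is a minimal sketch in C of an overload policy
ordered along those lines.  Everything in it is hypothetical - the
thresholds, the task counts, and the choose_mode() routine are invented for
illustration, not taken from any real reactor or SDI design:

    /* Hypothetical sketch: pick a failure mode under overload.
     * Degrade quality first, then shed throughput, never crash.
     * All numbers are invented for the example. */
    #include <stdio.h>

    #define CAPACITY   100   /* tasks the system can fully process  */
    #define SHED_LIMIT 150   /* beyond this, drop low-priority work */

    enum mode { FULL_QUALITY, REDUCED_QUALITY, LOAD_SHEDDING };

    enum mode choose_mode(int queued_tasks)
    {
        if (queued_tasks <= CAPACITY)
            return FULL_QUALITY;     /* normal operation             */
        if (queued_tasks <= SHED_LIMIT)
            return REDUCED_QUALITY;  /* option 1: quality of work    */
        return LOAD_SHEDDING;        /* option 2: throughput rate    */
    }

    int main(void)
    {
        static const char *name[] =
            { "full quality", "reduced quality", "load shedding" };
        int load;

        for (load = 50; load <= 200; load += 50)
            printf("load %3d -> %s\n", load, name[choose_mode(load)]);
        return 0;                    /* option 3 (crash) never taken */
    }

The point is not the numbers but that the ordering of failure modes is an
explicit design decision rather than an accident of the implementation.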


Information Overload

Mike McLaughlin <mikemcl@nrl-csr>
Sun, 26 Oct 86 21:39:26 est
Undoubtedly we can load sensors on a system until it will no longer fly,
move, fight, or whatever due to the number of sensors.  Airplane cockpits
already provide more information than pilots can handle.  Combat sensor
systems provide more data than battle-managers can handle.  On the early
space flights we even instrumented the astronauts themselves - in a manner
that should not be discussed on a family forum.  There seems little point
in providing a cockpit display of the pilot's rectal temperature; but on the
ground someone cared.

One of the functions being performed by computers today is to filter the
information, so that the system operator sees relevant data.  One of the
tough parts is to decide what is relevant.  I submit that "operator
assistant" computers deserve special care in design and testing.  They seem
to be used where lives are at stake, and where data is available.  Relying
on the computer to decide what is "relevant" in a given situation is fraught
with risk.  Relying on a human to decide in advance of the situation is not
much better.
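
The shape of that risk is easy to sketch in C.  The scoring, the threshold,
and the alarm texts below are all invented for illustration; the point is
that the "operator assistant" suppresses every alarm scored below a
relevance threshold fixed at design time - in advance of the situation:

    /* Hypothetical sketch: alarms below a fixed "relevance" score
     * never reach the operator.  The threshold was chosen before
     * the fact; nothing guarantees it fits the situation at hand. */
    #include <stdio.h>

    #define REPORT_THRESHOLD 5   /* "relevant" decided in advance */

    struct alarm {
        const char *text;
        int relevance;           /* from some scoring heuristic   */
    };

    void filter_for_operator(const struct alarm *a, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (a[i].relevance >= REPORT_THRESHOLD)
                printf("OPERATOR: %s\n", a[i].text);
            /* else: silently dropped; the operator never sees it */
        }
    }

    int main(void)
    {
        struct alarm alarms[] = {
            { "coolant pump #2 vibration high", 4 },  /* dropped  */
            { "fire in number two magazine",    9 },  /* reported */
        };
        filter_for_operator(alarms, sizeof alarms / sizeof alarms[0]);
        return 0;
    }

Everything hangs on the scoring heuristic and the threshold, and both were
decided before the emergency they are supposed to handle.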

Another area of concern is the "transition" problem discussed in previous
issues.  I don't know whether Navy propulsion reactors are under-computerized
deliberately, accidentally, or at all.  Having been a watch officer in the
Navy and having lived through a number of unexpected emergencies I can
personally attest to the seriousness of the "transition" problem - even
without computers.  To be awakened from sleep with alarm bells ringing and
bullhorns blaring "FIRE, FIRE, FIRE IN NUMBER TWO MAGAZINE!" - and then be
standing dressed, over the magazine, and in charge of the situation in less
than 60 seconds is quite an experience.  That I am here to recognize the
problem is due to excellent training of the entire crew, not to any
specific actions on my part.  Frankly, I just "went automatic" and shook
after it was over, not during.  I suspect that any pilot, truck driver,
policeman, etc. could tell a dozen similar tales.

I'm not proposing any answers - except for extreme care.  

    - mikemcl@nrl-csr.arpa


SDI assumptions

<LIN@XX.LCS.MIT.EDU>
Sun, 26 Oct 1986 23:48 EST
    From: prairie!dan at rsch.wisc.edu (Daniel M. Frank)

    Much of the concern over "perfection" in SDI seems to revolve around
    this model (aside from the legitimate observation that there is no such
    thing as a leakproof defense).

I've said it before, but it bears repeating: no critic has ever said
that SDI software must be perfect.  The only people who say this are
the pro-SDI people criticizing the critics.

    The [SDI] dialogue would be better served by agreeing on a model, or set
    of models, and debating the feasibility of software systems for
    implementing them.

Having a "set of models" means that those models share certain
characteristics.  There is one major characteristic that all SDI
software will share: we will never be able to test SDI software --
whatever its precise nature -- under realistic conditions.  Then the
relevant question is "What can we infer about software that cannot be
tested under realistic conditions?"
