The RISKS Digest
Volume 3 Issue 86

Sunday, 26th October 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Addition to Census of Uncensored Sensors
PGN
Military vs. civilian automatic control systems
Will Martin
Re: System effectiveness is non-linear
Scott E. Preece
SDI assumptions
Daniel M. Frank
SDI impossibility
David Chase
Editorial on SDI
Henry Spencer plus quote from David Parnas
Info on RISKS (comp.risks)

Addition to Census of Uncensored Sensors

Peter G. Neumann <Neumann@CSL.SRI.COM>
Sun 26 Oct 86 15:37:02-PST
On 23 October 1986, a United Airlines Boeing 727 jet (UA 616, on a 20-minute
flight from San Francisco Airport to San Jose) had the nose-gear indicator
light stay on after takeoff, suggesting that the landing gear might not have
retracted.  The plane landed again at SFO at 7:48 AM (8 minutes after
takeoff).  The problem was later attributed to a malfunctioning nose gear
indicator.  [Source: San Francisco Chronicle, 24 October 1986, p. 30]

This is another example for the discussion on the risks of using sensors to
detect aircraft behavior.  Yes, if someone worries about this problem in
advance, it is always possible to have redundant sensors and redundant
indicators.  (This is done in SRI's SIFT system [Software Implemented Fault
Tolerance], a prototype flight-control system running at NASA's Langley
Research Center.)  The cost of such redundancy must be weighed against the
costs it would avoid.  The total cost
of even an 8-minute aborted flight (including fuel, landing fees, and delays
-- with requeuing for takeoff) is nontrivial.  There are of course all sorts
of hidden costs in delays, such as the costs to passengers, and snowball
effects if such a delay exhausts the pilot's flying time for the month and
requires the location of another pilot! (That actually happened to me
once...)
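
To make the redundancy point concrete, here is a minimal majority-voting
sketch in Python.  It is purely illustrative; the function and the sensor
labels are invented for this note and are not taken from SIFT or from any
avionics code.

    from collections import Counter

    def vote(readings):
        """Return the majority value reported by redundant sensors.

        readings: e.g. ["up", "up", "down"] for triplicated gear-position
        sensors.  Raises ValueError when there is no strict majority, in
        which case the crew falls back to manual procedures.
        """
        value, count = Counter(readings).most_common(1)[0]
        if count * 2 <= len(readings):
            raise ValueError("no majority among sensors: %r" % readings)
        return value

    # One faulty indicator is outvoted by the two healthy ones.
    print(vote(["up", "up", "down"]))   # -> up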


Military vs. civilian automatic control systems

Will Martin — AMXAL-RI <wmartin@ALMSA-1.ARPA>
Fri, 24 Oct 86 15:11:22 CDT
Many good points have been brought out about the rationale for the Navy not
having more automation in submarine control systems, including those on the
nuclear reactors. I think that a particular aspect of this needs emphasis.
There is a major difference in the basic concepts behind military systems
vs. civilian implementations — the mission may be more important than human
life in the military environment, but never so in civilian situations. (Also
the completion of the mission may be more important than the preservation of
property or things in the military.)

It may well be necessary to "tie down the safety valve" to prevent a reactor
scram on a submarine in order that the vessel complete the mission or action
in progress, even if the inevitable result is death by radiation poisoning
of the crew, or some fraction of them, or the destruction of the reactor or
the vessel itself after it has completed the action it is required to
perform. In a civilian situation, this is never true — the production of
electricity from reactor "X" can never be more important than the safety of
the population around or even the operators of that reactor. (We ignore here
the statistical probability that the shutdown of reactor "X" will trigger a
cascade of blackouts which will eventually result in some number of deaths
due to related factors — patients on the operating table, people trapped in
elevators, etc.) In fact, the value of the reactor itself is more important
than its continuing production of electricity — it will be shut down to
prevent faulty operation causing damage to itself. For a military device,
completion of the wartime mission or task is often more important than the
continued safety or preservation of the device itself.

In the light of this, it is reasonable to expect relatively elaborate,
"idiot-proof", overriding automatic control systems in civilian
installations, and the absence of such in military versions of similar
devices (or perhaps the military system will have some for use only in
peacetime or training situations, which can be switched off in wartime).  It
may be necessary to operate devices "outside their envelopes" or to violate
various guidelines regarding safety in wartime missions. Also, of course,
military systems should continue to be at least somewhat usable even after
they have suffered damage, and elaborate safety systems are merely one more
thing liable to damage in combat.  It is not acceptable to have
your power source turn off in the middle of a battle because a minor and
easily-controlled fire burned nothing vital but only some part of an
automatic safety system control circuit; it would be reasonable for a
civilian reactor to shut down given the exact same situation.
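
As a purely illustrative sketch (the mode names and the battle-short flag
below are invented for this note, not drawn from any real reactor protection
system), the distinction can be written as a single decision function:

    from enum import Enum

    class Mode(Enum):
        CIVILIAN = 1
        MILITARY_PEACETIME = 2
        MILITARY_WARTIME = 3

    def should_scram(sensor_trip: bool, mode: Mode, battle_short: bool) -> bool:
        """Decide whether an automatic trip signal actually shuts the plant down.

        sensor_trip:  True when any protection sensor demands a shutdown.
        battle_short: an explicit wartime order to ride through faults;
                      it is honored only in wartime mode.
        """
        if not sensor_trip:
            return False
        if mode is Mode.MILITARY_WARTIME and battle_short:
            return False   # mission over machine: keep running, accept the risk
        return True        # civilian (and peacetime) rule: always trip on a fault

    # The same fault shuts down the civilian plant but not the wartime one.
    assert should_scram(True, Mode.CIVILIAN, battle_short=True) is True
    assert should_scram(True, Mode.MILITARY_WARTIME, battle_short=True) is False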

Note please that I am not saying that wartime operation would routinely be
done with a complete disregard for safety or that every mission is more
important than the lives of the people carrying it out. But there will be
certain exceptional circumstances where the missions are that important,
where the sacrifice of some lives (and certainly some amount of property) is
necessary for the achievement of larger goals. The military systems have to
support both routine operation and these rare exceptions.

Will Martin


Re: System effectiveness is non-linear

"Scott E. Preece" <preece%mycroft@GSWD-VMS.ARPA>
Thu, 23 Oct 86 13:52:06 CDT
Dave Benson argues that it is more reasonable and conservative to assume
that an overloaded system will fail entirely than to assume it will either
perform at its design limit but no more or perform above its design limit.

That's unarguably the conservative assumption.  I would deny that ANY
assumption was reasonable, given only a performance ceiling and the
knowledge that performance demand will exceed that ceiling.  It is obvious
that the system could be designed to perform in any of the suggested ways
when unable to cope with load.  Suggesting one response or another is simply
expressing an opinion of the designers' competence rather than any realistic
assessment of the risks of SDI.  Given that neither the design nor the
designers are determined yet, this is a silly exercise.
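
A hypothetical sketch in Python (the capacity figure and the function names
are invented here) of three equally designable responses to the same
overload; nothing in the performance ceiling alone tells you which one you
will get:

    CAPACITY = 100   # hypothetical design limit: tracks processed per cycle

    def fail_hard(tracks):
        # Benson's conservative assumption: overload means total failure.
        if len(tracks) > CAPACITY:
            raise RuntimeError("overload: system fails entirely")
        return tracks

    def saturate(tracks):
        # Perform at the design limit but no more: shed the excess.
        return tracks[:CAPACITY]

    def degrade(tracks):
        # Spread a fixed processing budget thinner as the load grows.
        budget = CAPACITY / max(len(tracks), 1)
        return [(t, budget) for t in tracks]

    overload = list(range(250))      # 250 tracks offered to a 100-track system
    print(len(saturate(overload)))   # 100
    print(degrade(overload)[0])      # (0, 0.4)
    # fail_hard(overload) would raise RuntimeError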

scott preece, gould/csd - urbana, uucp: ihnp4!uiucdcs!ccvaxa!preece
arpa:   preece@gswd-vms


SDI assumptions

Daniel M. Frank <prairie!dan@rsch.wisc.edu>
25 Oct 86 20:35:15 GMT
   It seems to me that much of the discussion of SDI possibilities and
risks has gone on without stating the writers' assumptions about the
control systems to be used in any deployed strategic defense system.

   Is it presumed that SD will sit around waiting for trouble, detect
it, fight the war, and then send the survivors an electronic mail message 
giving kill statistics and performance data?  Much of the concern over
"perfection" in SDI seems to revolve around this model (aside from the
legitimate observation that there is no such thing as a leakproof
defense).  Arguments have raged over whether software can be adaptable
enough to deal with unforeseen attack strategies, and so forth.

   I think that if automatic systems of that sort were advisable or achievable,
we could phase out air traffic controllers, and leave the job to computers.
Wars, even technological ones, will still be fought by men, with computers
acting to coordinate communications, acquire and analyze target data, and
control the mechanics of the weapons systems themselves.  These tasks are formidable,
and I make no judgement on which are achievable, and within what limits.

   Both sides of the SDI debate have tended to use unrealistic models of
technological warfare, the proponents to sell their program, the opponents
to brand it as unachievable.  The dialogue would be better served by
agreeing on a model, or set of models, and debating the feasibility of
software systems for implementing them.

    Dan Frank,  uucp: ... uwvax!prairie!dan,  arpa: dan%caseus@spool.wisc.edu


SDI impossibility

David Chase <rbbb@rice.edu>
Sat, 25 Oct 86 13:54:36 CDT
I don't know terribly much about the physics involved, and I am not
convinced that it is impossible to build a system that will shoot down most
of the incoming missiles (or seem likely enough to do so that the enemy is
less likely to try an attack, which is effective), but people seem to forget
another thing: SDI should ONLY shoot down incoming missiles.  This system
has to tread the fine line between not missing missiles and not hitting
non-missiles.

I admit that we will have many more opportunities to evaluate its behavior
on passenger airplanes, the moon, large meteors and lightning bolts than on
incoming missiles, but we eventually have to let the thing go more or less
on its own and hope that there are no disasters.  How effective will it be
on missiles once it has been programmed not to attack non-targets?  To
avoid disasters, it seems that we will have to publish its criteria for
deciding between targets and non-targets (how much is an international
incident worth?  One vaporized weather satellite, maybe?  If I were the
other side, you can be sure that I would begin to try queer styles of
launching my peaceful stuff to see how we responded).
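
As a purely hypothetical illustration (the score, the thresholds, and the
labels below are invented and do not reflect any proposed SDI design), the
whole dilemma can be compressed into one discrimination threshold:

    def classify(track_score, threshold):
        """Label a sensed object from a hypothetical 0..1 'missile-likeness' score."""
        return "engage" if track_score >= threshold else "ignore"

    # Raising the threshold spares weather satellites (fewer false alarms)
    # but lets more real missiles through (more misses); lowering it does
    # the reverse.  The threshold itself is a policy decision.
    for threshold in (0.3, 0.6, 0.9):
        print(threshold, [classify(s, threshold) for s in (0.2, 0.5, 0.8, 0.95)])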

I think solving both problems is what makes the software hard; it's easy
to shoot everything if you have enough guns.  We could always put
truckloads of beach sand into low orbit.

David


Editorial on SDI

<decvax!utzoo!henry@ucbvax.Berkeley.EDU>
Fri, 24 Oct 86 00:31:56 edt
   > ... The signatures were drawn from over 110 campuses in 41 states, and 
   > include 15 Nobel Laureates in Physics and Chemistry, and 57% of the 
   > combined faculties of the top 20 Physics departments in the country...

Hmmm.  If a group of aerospace and laser engineers were to express an
opinion on, say, the mass of the neutrino, physicists would ridicule them.
But when Nobel Laureates in Physics and Chemistry express an opinion on a
problem of engineering, well, *that's* impressive.

NONSENSE.

Dave Parnas, on the other hand, actually *is* an expert on the subject he
has been expressing doubts about (the software problem).  Although I'm not
sure I agree with everything he says, I give his views a *lot* more credence
than the people mentioned above.

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry

   [I could have been a little more precise in my comment on Douglas 
    Humphrey's message in RISKS-3.84.  I said that Dave Parnas "does not
    PROVE that SDI is IMPOSSIBLE."  By my curious emphasis, I meant to imply
    that Dave never even tried to prove impossibility.  He said that the SDI
    software system would be untrustworthy.  "...we will never be able to
    believe with any confidence that we have succeeded.  We won't have any
    way of knowing whether or not SDI has succeeded."

    Because Dave's comments really add significantly to this discussion — and 
    because Henry set me up — let me quote an excerpt from a private note
    from Dave.  PGN]

      "SDIO's own report to congress quotes President Reagan about its
      goals.  It says it is going to make nuclear weapons impotent and
      obsolete.  They claim to be able to end the fear of nuclear weapons.
      They can do neither of these things unless they can make a trustworthy
      software system, one that we can rely upon.  Without that, neither
      side will give up their offensive weapons.

      "In short, the SDI software is not impossible, but ending the
      fear of nuclear weapons that way is."   [David Parnas]
