The RISKS Digest
Volume 3 Issue 81

Sunday, 19th October 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

System effectiveness is NOT a constant!
anonymous
Aircraft self-awareness
Scott Preece
Re: US Navy reactors
Brint Cooper
Eugene Miya
Stephen C Woods
Editorial on SDI
Michael L. Scott
Info on RISKS (comp.risks)

System effectiveness is NOT a constant!

<[anonymous]>
16 Oct 86 20:03:00 [...]
There seems to be a tendency in the current SDI debate to fall into an old 
engineering fallacy: that systems scale up linearly.  Everyone seems to avoid
this trap when talking about cost and effort--it seems to be well accepted 
that a 10-million line program is much harder than 10 1-million line programs--
but (most) people are *not* avoiding the trap when they speak of SDI's 
effectiveness.  A recurrent argument seems to be that "SDI will be 80% 
[to use a number currently being bandied about] effective against a Soviet
attack of N missiles; thus the Soviets would have to build and launch 5N 
missiles in order to have N missiles reach their targets, which would be
economically ruinous."  The implicit assumption is that if SDI is x% 
effective against N, it will continue to be x% effective against N'.  This
is fallacious unless x is very close to 0 or 100%.  Assuming 80% effectiveness
and 1000 missiles, SDI stops 800.  Using the reasoning above, against 2000 
missiles, SDI would stop 1600; but this cannot be so.  If 1000 missiles 
strains the system to the point that it can only stop 800, why would anyone 
think it could stop more when the number of missiles and decoys is doubled, 
straining the system's ability to identify, track, and destroy missiles at 
least twice as much?  Or to put it another way, if SDI could stop 1600 out
of 2000, shouldn't it be able to stop 1600 out of, say, 1800?  (1800 is surely
an easier problem than 2000!)  Or turn the argument around: if SDI can stop
800 out of 1000--80% effectiveness--does this mean it can stop only 80 out 
of a 100-missile attack?  Or 8 out of a 10-missile attack?
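
To make the point concrete, here is a small, purely illustrative sketch (in
Python, and not part of the original posting) of one toy assumption under
which the apparent paradox disappears: the defense can attempt at most a
fixed number of intercepts, each with some fixed kill probability.  The
capacity and probability figures below are invented for illustration only.

    # Toy saturation model -- illustrative only; the model and the numbers
    # are assumptions, not a description of any real or proposed system.

    CAPACITY = 1000    # assumed maximum number of intercept attempts
    KILL_PROB = 0.8    # assumed chance that any single attempt succeeds

    def expected_effectiveness(n_missiles):
        """Fraction of an n_missiles attack expected to be stopped."""
        attempts = min(n_missiles, CAPACITY)   # the defense saturates here
        return attempts * KILL_PROB / n_missiles

    for n in (100, 1000, 2000, 5000):
        print(f"{n:5d} missiles -> {expected_effectiveness(n):5.1%} stopped")

    # Prints:
    #   100 missiles -> 80.0% stopped
    #  1000 missiles -> 80.0% stopped
    #  2000 missiles -> 40.0% stopped
    #  5000 missiles -> 16.0% stopped

Under this, or any similar saturation assumption, the quoted percentage is a
property of a particular attack size, not of the system alone.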

When anyone says that SDI will have such-and-such effectiveness, they
must be made to state the assumptions used to calculate that effectiveness.  
Otherwise the numbers are meaningless.


Aircraft self-awareness

"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Tue, 14 Oct 86 10:15:09 cdt
A lot of recent RISKS messages have discussed one kind or another of
aircraft accident.  Many of the reports have included things like "The pilot
thought [X] but in fact [Y]" or "[X] occurred, though the indications were
that [Y] had occurred" or "[X], though there was no way for the flight crew
to know that".

So, what's going on in the area of improving flight crew/control system
awareness of the state of basic external structures?  Is anyone considering
whether the FAA should require external cameras or periscopes so that (for
instance) the pilot could find out that her entire vertical stabilizer had
fallen off or that her starboard outboard engine had exploded?

While there are many cases where the pilot would not have had time to check
anyway, there are also cases like the Japan Airlines crash where the plane
stayed up for some time but the pilot had no way to determine the gross
condition of the control surfaces.  Some reports have said that that plane
might have been saved if the pilot had known what he had to compensate for.

Given that we are depending more and more on automated controls, should we
be spending more effort on sensors that can determine more basic kinds of
information?  Should the control surfaces be instrumented so that the flight
controls can tell the captain, "Oh, the starboard outboard engine is no
longer on its pylon and the outer flaps on that wing seem to be missing"?
Current systems merely recognize the effects of such a loss and try to
compensate, with the risk that the operator will be unaware of the magnitude
of that compensation and will be forced to guess at the state of the aircraft
by watching what the control system is doing to cope with it ("Oh, I'm having
to turn the rudder vigorously to port to maintain my heading; can't say
why.").
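
As a purely hypothetical sketch of the kind of reporting being suggested
here (the surface names, data layout, and tolerance below are invented for
the example and do not describe any real avionics system), a monitor might
compare commanded and sensed surface positions and announce discrepancies in
plain language rather than compensating silently:

    # Hypothetical sketch only: report commanded-vs-sensed discrepancies to
    # the crew instead of silently compensating for them.  All names and the
    # 5-degree tolerance are invented for illustration.

    DISCREPANCY_LIMIT_DEG = 5.0   # assumed tolerance before alerting the crew

    def check_surfaces(commanded, sensed):
        """Return plain-language alerts for surfaces that are not responding."""
        alerts = []
        for surface, want in commanded.items():
            have = sensed.get(surface)
            if have is None:
                alerts.append(f"No position data from the {surface}.")
            elif abs(want - have) > DISCREPANCY_LIMIT_DEG:
                alerts.append(f"The {surface} was commanded to {want:.0f} deg "
                              f"but is reporting {have:.0f} deg.")
        return alerts

    # Example: rudder responding normally, starboard outer flap not moving.
    commanded = {"rudder": 12.0, "starboard outer flap": 25.0}
    sensed    = {"rudder": 11.5, "starboard outer flap": 0.0}

    for alert in check_surfaces(commanded, sensed):
        print("CREW ALERT:", alert)

The point is only that the crew would hear about the discrepancy itself
rather than having to infer it from what the control system is doing.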

scott preece, gould/csd - urbana
uucp:   ihnp4!uiucdcs!ccvaxa!preece


Re: US Navy reactors

Brint Cooper <abc@BRL.ARPA>
Thu, 16 Oct 86 8:33:57 EDT
Henry Spencer writes:
> A probable contributing factor here is that the US Navy's submarine people
> do not trust automation at all in crucial roles...  That's how deep the 
> distrust of complexity runs.  I'm not surprised that they have manually- 
> controlled reactors.
Then, he observes:
> The USN also has an outstanding reactor safety record — no big accidents,
> no serious radiation releases — with a stable of reactors comparable in
> numbers (although not in output) to the entire US nuclear-power industry.
> They are very fussy about materials, assembly, and operator training.

Perhaps we should suspect that the safety record follows directly from
the suspicion?
                                        Brint


RE: Reactors of the USN

Eugene Miya <eugene@AMES-NAS.ARPA>
Thu, 16 Oct 86 09:14:52 pdt
I generally concur with Henry Spencer's assessment.  The USN is very
conservative, sticking to proven technologies and emphasizing reliability
(notice also that all new Navy jets have two engines, excluding the older
A-4, A-7, and F-8s).  But, while the Navy's record is certainly
outstanding, I must point out there is a question about "no big accidents."

One of the major contending theories on the loss of the USS Thresher in 1963
was a sudden loss of reactor power.  We will never really know whether this
was the case, but it cannot be ignored.

Excellent reading about the safety record, the conservatism, and the
development of the nuclear navy is found in the 700+ page unauthorized
biography of Rickover.

--eugene


US Navy reactors [RISKS-3.80 DIGEST]

Stephen C Woods <scw@LOCUS.UCLA.EDU>
Fri, 17 Oct 86 11:43:17 PDT
There is another factor to consider here: redundancy.  Submariners are ALL
cross-trained EXTENSIVELY (the ideal is that everyone can do everything, and
usually they come fairly close to it).

    Why, you may ask, does the Navy go to such lengths?  The answer is
fairly simple: these are WARSHIPS; they need to be able to function even
after suffering SEVERE damage and heavy casualties.  Just for normal
day-to-day operations there are at least 2 people for every job (watch on
and watch off); usually there are 3, and often 4 or more.

The following from net.aviation may be of interest to you (especially the
quote); the whole discussion there is also worth reading.     [scw]

>From: wanttaja@ssc-vax.UUCP (Ronald J Wanttaja)
>Newsgroups: net.aviation
>Subject: Re: Problems with flying by the book (a pithy comment)
>Date: 14 Oct 86 15:58:15 GMT
>Organization: Boeing Aerospace Co., Seattle, WA

<> I understand and appreciate your comments in the mod.risks about nth party/
<> hearsay stuff.  But, from the examples you gave, in case you are really
<> looking for some aviation accidents partially due to obedience to the
<> "book", here are two - both commercial accidents at Toronto International
<> (Now Pearson International).  Both from MOT (then DOT) accident 
<> investigations:
> 
  [...]

>"Rule books are paper:  They will not cushion a sudden meeting of stone and
> metal."
>                     - Ernest K. Gann


Editorial on SDI

Michael L. Scott <scott@rochester.arpa>
Sat, 18 Oct 86 17:51:36 edt
The following is an op-ed piece that I wrote for the Rochester, NY,
DEMOCRAT AND CHRONICLE.  It appeared on page 4A on September 29, 1986.

        'STAR WARS' CAN'T SUCCEED AS SHIELD, HAS OFFENSIVE CAPABILITY

        Can the Strategic Defense Initiative succeed?  The answer depends
   critically on what you mean by success.  Unfortunately, the public
   perception of the purpose of SDI differs dramatically from the actual
   goals of the program.

        In his original "Star Wars" speech, President Reagan called upon the
   scientific community to make nuclear weapons "impotent and obsolete."  He
   has maintained ever since that this is the SDI goal: to develop an
   impenetrable defensive shield that would protect the American population
   from attack.  With such a shield in place, nuclear missiles would be
   useless, and both the United States and the Soviet Union could disarm.

        Can such a shield be built?  The most qualified minds in the country
   say "no."  In an unprecedented move, over 6,500 scientists and engineers
   at the nation's research universities have signed a statement indicating
   that "Anti-ballistic missile defense of sufficient reliability to defend
   the population of the United States against a Soviet attack is not
   technically feasible."  The signatures were drawn from over 110 campuses
   in 41 states, and include 15 Nobel Laureates in Physics and Chemistry, and
   57% of the combined faculties of the top 20 Physics departments in the
   country.  Given the usual political apathy of scientists and engineers,
   these numbers are absolutely staggering.

        The obstacles to population defense include a vast array of problems
   in physics, optics, astronautics, computer science, economics, and
   logistics.  Some of these problems can be solved with adequate funding for
   research; others cannot.  Consider the single subject of software for
   "Star Wars" computers.  As a researcher in parallel and distributed
   computing, I am in a position to speak on this subject with considerable
   confidence.  The computer programs for population defense would span
   thousands of computers all over the planet and in space.  They would
   constitute the single largest software system ever written.  There is
   absolutely no way we could ever be sure that the software would work
   correctly.

        Why not?  To begin with, we cannot anticipate every possible scenario
   in a Soviet attack.  Human commanders cope with unexpected situations by
   drawing on their experience, their common sense, and their knack for
   military tactics.  Computers have no such abilities.  They can only deal
   with situations they were programmed in advance to expect.  Before we can
   even start to write the programs for "Star Wars," we must predict every
   situation that might arise and every trick the Soviets might pull.  Would
   you bet the future of the United States that the Russians won't think of
   ANYTHING we haven't thought of first?

        Even if we could specify exactly what we want the computers to do,
   the task of translating that specification into flawless computer programs
   would be beyond our capabilities for many, many years, possibly forever.
   Current and projected techniques for testing and quality control may
   reduce the number of flaws in large computer systems, but actual use under
   real-life conditions will always uncover further "bugs."  (For details on
   the software problem, see Dr. David Parnas's article in the October 1985
   issue of AMERICAN SCIENTIST.)  The only way to gain real confidence in
   "Star Wars" software would be to try it out in full-scale nuclear combat.
   Such testing is clearly not an option.

        But if effective population defense is impossible, why are we
   spending billions of dollars on SDI, and why are the Russians so upset
   about it?  The answer is remarkably simple: because population defense is
   not the goal of SDI.  The kinetic and directed energy devices being
   developed for the "Star Wars" program will have a tremendous range of uses
   in offensive weapons and in increasing the survivability of U.S.
   land-based missiles.  The Soviets fear "Star Wars" for its first-strike
   capabilities.  To make nuclear weapons impotent and obsolete, SDI would
   have to be perfect.  To shoot down Soviet satellites, to thin out a
   pre-emptive strike on U.S. missile fields, or to develop exotic new
   weapons for the conventional battlefield, SDI will only need to succeed on
   a much more modest level.

        By focusing public attention on the unattainable goal of population
   defense, the Administration has managed to avoid discussion of the more
   practical, immediate consequences of SDI research.  The weapons developed
   for "Star Wars" will have a profound impact on both our war-fighting
   strategy and our treaty obligations.  That impact should be the subject of
   public and Congressional debate.  By pretending to develop a defensive
   shield, the President has fooled the American people into funding a
   program that is far less clear-cut and benign.  In effect, he has sold a
   system we cannot build in order to build a system he cannot sell.

   BYLINE:
       Michael L. Scott is an Assistant Professor of Computer Science at
       the University of Rochester.  His article was co-signed by 10 other
       faculty members  [almost the entire department]  and 36 doctoral
       students and researchers.  The views expressed should not be regarded
       as the official position of the University of Rochester or of its
       Computer Science Department.
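
The article's claim that computers "can only deal with situations they were
programmed in advance to expect" can be made concrete with a tiny,
purely schematic Python sketch (an editorial illustration, not the author's;
the scenario names and responses are invented and drawn from no real system):

    # Schematic illustration only: a program can respond to exactly the
    # scenarios its designers enumerated in advance; anything else falls
    # through to a default that nobody has really specified.

    ANTICIPATED_RESPONSES = {
        "massed ICBM launch":       "engage boost-phase interceptors",
        "decoy-heavy attack":       "shift resources to discrimination",
        "single accidental launch": "engage terminal defenses only",
    }

    def respond(scenario):
        # The unanticipated case has no principled answer, only whatever
        # default the programmers happened to leave behind.
        return ANTICIPATED_RESPONSES.get(scenario,
                                         "??? no pre-programmed response")

    print(respond("massed ICBM launch"))   # a case someone thought of
    print(respond("novel Soviet tactic"))  # a case no one thought of

The interesting cases, of course, are precisely the ones that never appear
in the table.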

           [We haven't had any RISKS mention of this topic in a long time. 
           Perhaps it is time to dust it off again in the light of Reykjavik.
           The nature of the offensive capability is not a new issue, but is
           clearly an enormous potential RISK — at least in the eyes of the 
           Soviets.  However, subsequent discussion on that issue probably
           belongs on ARMS-D.  Let's once again try to stick to issues 
           relevant to computers and related technologies.  PGN]
