The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 1 Issue 01

1 August 1985


o Welcome!
o ACM Council Resolution of 8 October 1984
o An Agenda for the Future
o Computer-Related Incidents Illustrating Risks to the Public
o Strategic Computing Initiative
o Strategic Defense Initiative; David Parnas and SDI
Ed Frankenberry
David Weiss
o Herb Lin: Software for Ballistic Missile Defense, June 1985
Herb Lin
o Weapons and Hope by Freeman Dyson (minireview by Peter Denning)
Peter Denning
o Christiane Floyd et al.: The Responsible Use of Computers
Jim Horning
o Human safety (software safety)
o Computers in critical environments, Rome, 23-25 October 1985
o Afterword


Welcome!

Peter G. Neumann <Neumann@SRI-CSLA.ARPA>

This is the first issue of a new on-line forum. Its intent is to address issues involving risks to the public in the use of computers. As such, it is necessarily concerned with whether/how critical requirements for human safety, reliability, fault tolerance, security, privacy, integrity, and guaranteed service (among others) can be met (in some cases all at the same time), and how the attempted fulfillment or ignorance of those requirements may imply risks to the public. We will presumably explore both deficiencies in existing systems and techniques for developing better computer systems -- as well as the implications of using computer systems in highly critical environments.

Introductory Comments

This forum is inspired by the letter from Adele Goldberg, President of the Association for Computing Machinery (ACM), in the Communications of the ACM, February 1985 (pp. 131-133). In that message (part of which is reproduced below), Adele outlined the ACM's intensified concern with our increasingly critical dependence on the use of computers, a concern which culminated in the ACM's support for a Forum on Risks to the Public in Computer Systems (RPCS), to be developed by the ACM Committee on Computers and Public Policy (of which I am currently the Chairman). My involvement in this BBOARD activity is thus motivated by my ACM roles, but also by strong feelings that this topic is one of the most important confronting us. In keeping with ACM policy, and with due respect to the use of the ARPANET, we hope to attain a representative balance among differing viewpoints, although this clearly cannot be achieved locally within each instance of the forum.

For discussions of requirements, design, and evaluation techniques for critical systems -- namely, how to do it right in the first place so that a system can satisfy its requirements and can continue to maintain its desired properties through ongoing maintenance and evolution -- you will find a little solace in the literature on computer science, computer systems, and software engineering. There is even some modest encouragement from the formal verification community, which readers of the ACM SIGSOFT Software Engineering Notes will find in the forthcoming August 1985 special issue on the verification workshop VERkshop III. However, it is not encouraging to find many developers of critical software ignoring what is known about how to do it better. In this RISKS forum, we hope to confront some of those problems, specifically those where risks to the public are present.

You should also be aware (if you are not already) of several related on-line services: HUMAN-NETS@RUTGERS for a variety of issues pertaining to people (but originally oriented to the establishment of WorldNet), SOFT-ENG@MIT-XX for software engineering, and perhaps SECURITY@RUTGERS for security -- it is still young and rather narrow (car-theft prevention is big at the moment, with a few messages on passwords and forged mail headers). (You can get more information from SRI-NIC.ARPA:<NETINFO>INTEREST-GROUPS.TXT.) I look at these regularly, so some cross-fertilization and overlap may be expected. However, the perspective of RISKS seems sufficiently distinct to justify the existence of still another interest group!

RISKS Forum Procedures

I hope all this introductory detail does not deter you, but it seems to be worthwhile to set things up cleanly from the beginning. To submit items for distribution, send mail to RISKS@SRI-CSL.ARPA. For all other messages (e.g., list additions or deletions, or administrative complaints), send to RISKS-Request@SRI-CSL.ARPA.

Submissions should relate directly to risks to the public involving computer systems, be reasonably coherent, and have a brief explicit descriptive subject line. Flames, ad hominem attacks, overtly political statements, and other inappropriate material will be rejected. Carefulness, reasonably clear writing, and technical accuracy will be greatly appreciated. Much unnecessary flailing can be avoided with just a little forethought.

Contributions will generally be collected and distributed in digest form rather than singly, as often as appropriate. Subject lines may be edited in order to group messages with similar content. Long messages may have portions of lesser interest deleted (and so marked), and/or may be split across several issues.

Initially we will provide distributions to individuals, but as soon as there are more than a few individuals at any given host, we will expect the establishment of a file <BBOARD>RISKS.TXT or equivalent on that host -- with local option whether or not to forward individual copies. Back issues may be FTPed from SRI-CSL.ARPA:<RISKS>RISKS-"vol"."no", where "vol" and "no" are volume and number -- i.e., RISKS-1.1 for this issue. But please try to rely on local repositories rather than swamping the gateways, nets, and SRI-CSL.

ACM Council Resolution of 8 October 1984

Peter G. Neumann <Neumann@SRI-CSLA.ARPA>
Following are excerpts from ACM President Adele Goldberg's letter in the
Communications of the ACM, February 1985 (pp. 131-133).

  On this day [8 October 1984], the ACM Council passed an important
  resolution.  It begins:

    Contrary to the myth that computer systems are infallible, in fact
    computer systems can and do fail.  Consequently, the reliability of
    computer-based systems cannot be taken for granted.  This reality
    applies to all computer-based systems, but it is especially critical
    for systems whose failure would result in extreme risk to the public.
    Increasingly, human lives depend upon the reliable operation of 
    systems such as air traffic and high-speed ground transportation 
    control systems, military weapons delivery and defense systems, and
    health care delivery and diagnostic systems.

  The second part of the resolution includes a list of technical questions
  that should be answered about each computer system.  This part states that:

    While it is not possible to eliminate computer-based systems failure
    entirely, we believe that it is possible to reduce risks to the public
    to reasonable levels.  To do so, system developers must better
    recognize and address the issues of reliability.  The public has the
    right to require that systems are installed only after proper steps
    have been taken to assure reasonable levels of reliability.

    The issues and questions concerning reliability that must be addressed
    include:

    1. What risks and questions concerning reliability are involved when
       the computer system fails?

    2. What is the reasonable and practical level of reliability to
       require of the system, and does the system meet this level?
    3. What techniques were used to estimate and verify the level of
       reliability?

    4. Were the estimators and verifiers independent of each other
       and of those with vested interests in the system?

Adele's letter goes on to motivate the ACM's authorization of a forum on
risks to the public in the use of computer systems, of which this is an
on-line manifestation.  As you can see, I believe that the charter must be
broader than just reliability, including as appropriate other critical
requirements such as fault tolerance, security, privacy, integrity,
guaranteed service, human safety, and even world survival (in sort of
increasing order of globality).  There is also an important but probably
sharply delimited role for design verification (such as has been carried out
in the multilevel-security community in demonstrating that formal
specifications are consistent with formal requirements) and even code
verification (proving consistency of code and specifications), although
formal verification technologies seem not suited to mammoth systems but only
to selected critical components -- assuming that those components can be
isolated (which is the operative assumption in the case of security kernels
and trusted computing bases).  For example, see the VERkshop III proceedings
noted above.  PGN


An Agenda for the Future

Peter G. Neumann <Neumann@SRI-CSLA.ARPA>
One of the activities of the ACM Committee on Computers and Public Policy
will be the review of a problem list presented by Dan McCracken and his
committee in the September 1974 issue of the Communications of the ACM, and
an update of it in the light of dramatic changes in the use of computers
since then.  Three items from that problem list are particularly relevant to
our RISKS forum.

   * Computers and money
   * Computers and privacy
   * Computers and elections

Indeed, in the latest issue of the ACM SIGSOFT Software Engineering Notes
(July 1985), I reported on a variety of recent money problems, security
problems, and a whole string of potential election-fraud problems -- in the
last case suggesting opportunities for Trojan Horses and local fraud.  On
the third subject, there was an article by David Burnham (NY Times) in
newspapers of 29 July 1985 (NY Times, SF Chron, etc.), on vulnerabilities in
various computerized voting systems.  About 60% of the votes are counted by
computer programs, with over a third of those votes being counted by one
program (or variants?) written by Computer Election Systems of Berkeley CA.
Burnham writes, "The allegations that vote tallies calculated with [that
program] may have been secretly altered have raised concern among election
officials and computer experts... In Indiana and West Virginia, the company
has been accused of helping to rig elections."  This topic is just warming up.

Items that also give us opportunities for discussions on risks to the
public include these:

   * Computers and defense
   * Computers and human safety 
   * Computer-user consumer protection
   * Computers and health
   * Informal and formal models of critical properties
     (e.g., going beyond just security or reliability, but not
      as high-level as Asimov's 3 Laws of Robotics)

Several items on computers and defense are included below.  There are also
some comments on software that is safe for humans.

I would think that some sort of Ralph-Nader-like consumer protection
organization might be appropriate for computing.  We have already had two
very serious automobile recalls due to program bugs -- the El Dorado brake
computer and the Mark VII computerized air suspension -- and at least two
heart pacemaker problems (one of which resulted in a death), as noted in the
disaster list below, to go along with this summer's watermelon recall
(pesticides) and Austrian wine recalls (with the antifreeze component
diethylene glycol being used as a sweetener).


Computer-Related Incidents Illustrating Risks to the Public

Peter G. Neumann <Neumann@SRI-CSLA.ARPA>
Readers of the ACM SIGSOFT Software Engineering Notes have been alerted in
many past issues to numerous disasters and computer curiosities implying
potential or actual risks to the public.  A summary of events is catalogued
below, and updates earlier versions that I circulated in a few selected
BBOARDS.  Further details can be found in the references cited.  Awareness
of these cases is vital to those involved in the design, implementation, and
operation of computer systems in critical environments, but is of course
not sufficient to prevent new disasters from occurring.  Significantly
better systems, and more aware operation and use, are also required.

              Compiled by Peter G. Neumann (21 July 1985)

The following list is drawn largely from back issues of ACM SIGSOFT Software
Engineering Notes [SEN], references to which are cited as (SEN vol no), where 
vol 10 = 1985.  Some incidents are well documented, others need further study.
Please send corrections/additions+refs to PGNeumann, SRI International, EL301, 
Menlo Park CA 94025, phone 415-859-2375, Neumann@SRI-CSL.ARPA.

Legend: ! = Loss of Life; * = Potentially Life-Critical; 
        $ = Loss of Money/Equipment; S = Security/Privacy/Integrity Flaw

-------------------------- SYSTEM + ENVIRONMENT ------------------------------
!S Arthritis-therapy microwaves set pacemaker to 214, killed patient (SEN 5 1)
*S Failed heart-shocking devices due to faulty battery packs (SEN 10 3)
*S Anti-theft device reset pacemaker; FDA investigating the problem (SEN 10 2)
*$ Three Mile Island PA, now recognized as very close to meltdown (SEN 4 2)
*$ Crystal River FL reactor (Feb 1980) (Science 207 3/28/80 1445-48, SEN 10 3)
** SAC/NORAD: 50 false alerts in 1979 (SEN 5 3), incl. a simulated attack whose
    outputs accidentally triggered a live scramble [9 Nov 1979] (SEN 5 3);
** BMEWS at Thule detected rising moon as incoming missiles [5 Oct 1960] 
    (SEN 8 3).  See E.C. Berkeley, The Computer Revolution, pp. 175-177, 1962.
** Returning space junk detected as missiles.  Daniel Ford, The Button, p. 85
** WWMCCS false alarms triggered scrams [3-6 Jun 1980] (SEN 5 3, Ford pp 78-84)
** DSP East satellite sensors overloaded by Siberian gas-field fire (Ford p 62)
** 747SP (China Air.) autopilot tried to hold at 41,000 ft after engine failed,
    other engines died in stall, plane lost 32,000 feet [19 Feb 85] (SEN 10 2)
** 767 (UA 310 to Denver) four minutes without engines [August 1983] (SEN 8 5)
*  F18 missile thrust while clamped, plane lost 20,000 feet (SEN 8 5)	
*  Mercury astronauts forced into manual reentry (SEN 8 3)
*  Cosmic rays halve shuttle Challenger comm for 14 hours [8 Oct 84] (SEN 10 1)
*  Frigate George Philip fired missile in opposite direction (SEN 8 5)
$S Debit card copying easy despite encryption (DC Metro, SF BART, etc.)
$S Microwave phone calls easily interceptable; portable phones spoofable

------------------------------- SOFTWARE ------------------------------------	
*$ Mariner 1: Atlas booster launch failure DO 100 I=1.10 (not 1,10) (SEN 8 5)
*$ Mariner 18: aborted due to missing NOT in program (SEN 5 2)
*$ F18: plane crashed due to missing exception condition, pilot OK (SEN 6 2)
*$ F14 off aircraft carrier into North Sea; due to software? (SEN 8 3) 
*$ F14 lost to uncontrollable spin, traced to tactical software (SEN 9 5)
*$ El Dorado brake computer bug caused recall of all El Dorados (SEN 4 4)
$$ Viking had a misaligned antenna due to a faulty code patch (SEN 9 5)
$$ First Space Shuttle backup launch-computer synch problem (SEN 6 5 [Garman])
*  Second Space Shuttle operational simulation: tight loop upon cancellation of
    an attempted abort; required manual override (SEN 7 1)
*  Second Shuttle simulation: bug found in jettisoning an SRB (SEN 8 3)
*  Gemini V 100mi landing err, prog ignored orbital motion around sun (SEN 9 1)
*  F16 simulation: plane flipped over whenever it crossed equator (SEN 5 2)
*  F16 simulation: upside-down F16 deadlock over left vs. right roll (SEN 9 5)
*  Nuclear reactor design: bug in Shock II model/program (SEN 4 2)
*  Reactor overheating, low-oil indicator; two-fault coincidence (SEN 8 5)
*  SF BART train doors sometimes open on long legs between stations (SEN 8 5)
*  IRS reprogramming cost USA interest on at least 1,150,000 refunds (SEN 10 3)
*S Numerous system intrusions and penetrations; implanted Trojan horses; 414s;
    intrusions to TRW Credit Information Service, British Telecom's Prestel,
    Santa Clara prison data system (inmate altered release date) (SEN 10 1).
    Computerized time-bomb inserted by programmer (for extortion?) (10 3)
*$ Colorado River flooding in 1983, due to faulty weather data and/or faulty
    model; too much water was kept dammed prior to spring thaws.
 S Chernenko at MOSKVAX: network mail hoax [1 April 1984] (SEN 9 4)
 S VMS tape backup SW trashed disc directories dumped in image mode (SEN 8 5)
$  1979 AT&T program bug downed phone service to Greece for months (SEN 10 3)
$  Demo NatComm thank-you mailing mistitled supporters [NY Times, 16 Dec 1984]
$  Program bug permitted auto-teller overdrafts in Washington State (SEN 10 3)
 - Quebec election prediction gave loser big win [1981] (SEN 10 2, p. 25-26)
 - Other election problems including mid-stream corrections (HW/SW) (SEN 10 3)
 - SW vendor rigs elections? (David Burnham, NY Times front page, 29 July 1985)
 - Alaskan DMV program bug jails driver [Computerworld 15 Apr 85] (SEN 10 3)
 - Vancouver Stock Index lost 574 points over 22 months -- roundoff (SEN 9 1)
     [see the truncation sketch following this list]
 - Gobbling of legitimate automatic teller cards (SEN 9 2)
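
   [Illustrative aside on the roundoff item above.  The sketch below is
   purely hypothetical: it is not the exchange's actual software, and the
   figures for updates per day, trading days, and per-trade adjustments are
   assumptions chosen only to show the mechanism.

       import random

       def truncate3(x):
           # Drop everything past the third decimal place (no rounding).
           return int(x * 1000) / 1000.0

       def round3(x):
           return round(x, 3)

       random.seed(0)
       index_truncated = 1000.000      # both indices start at the same value
       index_rounded = 1000.000
       updates = 2000 * 460            # assumed ~2000 updates/day, ~460 trading days

       for _ in range(updates):
           change = random.uniform(-0.005, 0.005)   # assumed per-trade adjustment
           index_truncated = truncate3(index_truncated + change)
           index_rounded = round3(index_rounded + change)

       print(index_rounded, index_truncated)

   Truncation discards on average half a thousandth per update; over roughly
   a million updates the truncated index drifts several hundred points below
   the rounded one, even though no single loss exceeds 0.001.]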

-------------------------- HARDWARE/SOFTWARE ---------------------------------
!  Michigan man killed by robotic die-casting machinery (SEN 10 2)
!  Japanese mechanic killed by malfunctioning Kawasaki robot (SEN 10 1, 10 3)
    [Electronic Engineering Times, 21 December 1981]
!  Chinese computer builder electrocuted by his smart computer after he built 
    a newer one. "Jealous Computer Zaps its Creator"!  (SEN 10 1)
*  FAA Air Traffic Control: many computer system outages (e.g., SEN 5 3)
*  ARPANET ground to a complete halt [27 Oct 1980] (SEN 6 1 [Rosen])
*$ Ford Mark VII wiring fires: flaw in computerized air suspension (SEN 10 3)
$S Harrah's $1.7 Million payoff scam -- Trojan horse chip (SEN 8 5) 
$  Great Northeast power blackout due to threshold set-too-low being exceeded
$  Power blackout of 10 Western states, propagated error [2 Oct 1984] (SEN 9 5)
 - SF Muni Metro: Ghost Train reappeared, forcing manual operation (SEN 8 3)
*$ Computer-controlled turntable for huge set ground "Grind" to halt (SEN 10 2)
*$ 8080 control system dropped bits and boulders from 80 ft conveyor (SEN 10 2)
 S 1984 Rose Bowl hoax, scoreboard takeover ("Cal Tech vs. MIT") (SEN 9 2)

!!$ Korean Airlines 007 shot down [1 Sept 1983], killing 269; autopilot left on
    HDG 246 rather than INERTIAL NAV? (NYReview 25 Apr 85, SEN 9 1, SEN 10 3)
!!$ Air New Zealand crashed into mountain [28 Nov 1979]; computer course data
    error had been detected and fixed, but pilots not informed (SEN 6 3 & 6 5)
!  Woman killed daughter, tried to kill son and self after computer error led 
    to a false report of their all having an incurable disease (SEN 10 3)
*  Unarmed Soviet missile crashed in Finland.  Wrong flight path? (SEN 10 2)
*$ South Pacific Airlines, 200 aboard, 500 mi off course near USSR [6 Oct 1984]
*S San Francisco Public Defender's database accessible to police (SEN 10 2)
*  Various cases of false arrest due to computer database use (SEN 10 3)
$  A: $500,000 transaction became $500,000,000; B: $200,000,000 lost (SEN 10 3)
*  FAA Air Traffic Control: many near-misses not reported (SEN 10 3)

---------------- ILLUSTRATIVE OF POTENTIAL FUTURE PROBLEMS -------------------
*S Many known/past security flaws in computer operating systems and application
    programs.  Discovery of new flaws running way ahead of their elimination.  
*  Expert systems in critical environments: unpredictability if (unknowingly)
    outside of range of competence, e.g., incompleteness of rule base
    (cf. Star Wars; see the sketch following this list)
$S Embezzlements, e.g., Muhammed Ali swindle [$23.2 Million], Security Pacific 
    [$10.2 Million], City National Beverly Hills CA [$1.1 Million, 23 Mar 1979]
    [These were only marginally computer-related, but suggestive.  Others
    are known, but not publicly acknowledged.]
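
   [Illustrative aside on the expert-system item above.  The toy rule base
   below is purely hypothetical -- it is not drawn from any real system --
   and the thresholds and categories are invented solely to show the failure
   mode: when no rule covers an observation, the program still returns an
   answer that looks as confident as any other.

       RULES = [
           # (condition, conclusion) pairs covering only the anticipated cases.
           (lambda obs: obs["altitude_km"] > 100 and obs["speed_km_s"] > 5.0,
            "ballistic object"),
           (lambda obs: obs["altitude_km"] < 15 and obs["speed_km_s"] < 0.3,
            "aircraft"),
       ]

       def classify(obs):
           for condition, conclusion in RULES:
               if condition(obs):
                   return conclusion
           # No rule matched: the observation lies outside the system's range
           # of competence, but nothing distinguishes "don't know" from a
           # confident answer.
           return "no threat"

       # An unanticipated object falls between the rules and is misjudged.
       print(classify({"altitude_km": 40, "speed_km_s": 2.0}))   # -> "no threat"

   An incomplete rule base fails silently in exactly this way; the risk is not
   a crash but a plausible-looking answer outside the range in which the rules
   were validated.]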

---------------------- REFUTATION OF EARLIER REPORT --------------------------
* "Exocet missile not on expected-missile list, detected as friend" (SEN 8 3)
   [see Sheffield sinking, reported in New Scientist 97, p. 353, 2/10/83]; 
   Officially denied by British Minister of Defence Peter Blaker
   [New Scientist, vol 97, page 502, 24 Feb 83].  Rather, sinking abetted by
   defensive equipment being turned off to reduce communication interference?

[See also anecdotes from ACM Symposium on Operating Systems Principles,
SOSP 7 (SEN 5 1) and follow-on (SEN 7 1).]



Strategic Computing Initiative

The Strategic Computing Initiative has received considerable discussion in the Communications of the ACM lately, including a letter by Severo Ornstein, Brian Smith, and Lucy Suchman (ACM Forum, February 1985), the response to them by Robert S. Cooper (ACM Forum, March 1985), and the three responses to Cooper in the August 1985 issue, as well as an article by Mark Stefik in the July 1985 Communications. A considerable variety of opinion is represented, and the exchange is well worth reading. PGN


Strategic Defense Initiative

The Strategic Defense Initiative (popularly known as Star Wars) is considering the feasibility of developing what is probably the most complex and most critical system ever contemplated. It is highly appropriate to consider the computer system aspects of that effort here. Some of the potential controversy is illustrated by the recent statements of David Parnas, who presents a strongly skeptical view. (See below.) I hope that we will be able to have a constructive dialogue here among representatives of the different viewpoints, and firmly believe that it is vital to the survival of our world that the computer-technical issues be thoroughly discussed. As in many other cases (e.g., space technology), there are many potential research advances that can spin off approaches to other problems. As indicated by my disaster list above, the problems of developing software for critical environments are very pervasive -- and not just limited to strategic defense. But what we learn in discussing the feasibility of the strategic defense initiative could have great impact on the uses that computers find in other critical environments. In general, we may find that the risks are far too high in many of the critical computing environments on which we depend. We may also be led to techniques for developing better systems that can adequately satisfy all of their critical requirements -- and continue to do so. But perhaps most important of all is the increased awareness that can come from intelligent discussion. Thus, an open forum on this subject is very important. PGN

News Item: David Parnas Resigns from SDI Panel

Ed Frankenberry <ezf@bbnccv>
12 Jul 85 13:56:29 EDT (Fri)
Plucked from SOFT-ENG@MIT-XX:

New York Times, 7/12/85, page A6:


By Charles Mohr
special to the New York Times

Washington, July 11 - A computer scientist has resigned from an advisory panel
on antimissile defense, asserting that it will never be possible to program a
vast complex of battle management computers reliably or to assume they will
work when confronted with a salvo of nuclear missiles.

  The scientist, David L. Parnas, a professor at the University of Victoria
in Victoria, British Columbia, who is a consultant to the Office of Naval
Research in Washington, was one of nine scientists asked by the Strategic
Defense Initiative Office to serve at $1,000 a day on the "panel on computing
in support of battle management".

  Professor Parnas, an American citizen with secret military clearances, said
in a letter of resignation and 17 accompanying memorandums that it would never
be possible to test realistically the large array of computers that would link
and control a system of sensors, antimissile weapons, guidance and aiming
devices, and battle management stations.

  Nor, he protested, would it be possible to follow orthodox computer
program-writing practices in which errors and "bugs" are detected and
eliminated in prolonged everyday use. ...

  "I believe," Professor Parnas said, "that it is our duty, as scientists and
engineers, to reply that we have no technological magic that will accomplish
that.  The President and the public should know that."  ...

  In his memorandums, the professor put forth detailed explanations of his
doubts.  He argued that large-scale programs like that envisioned for the
program only became reliable through modifications based on realistic use.

  He dismissed as unrealistic the idea that program-writing computers,
artificial intelligence or mathematical simulation could solve the problem.

  Some other scientists have recently expressed public doubts that large-scale
programs free of fatal flaws can be written.  Herbert Lin, a research fellow
at the Massachusetts Institute of Technology, said this month that the basic
lesson was that "no program works right the first time."

  Professor Parnas wrote that he was sure other experts would disagree with
him.  But he said many regard the program as a "pot of gold" for research
funds or an interesting challenge.

[The above article is not altogether accurate, but gives a flavor of the
Parnas position.  The arguments for and against feasibility of success need
detailed and patient discussion, and thus I do not try to expand upon either
pro or con here at this time.  However, it is hoped that a serious
discussion can unfold on this subject.  (SOFT-ENG@MIT-XX vol 1 no 29
provides some further material on-line.)  See the following message as well.  PGN]

Obtaining Parnas' SDI critique

David Weiss <>
Mon 22 Jul 1985 16:09:48 EST
Those who are interested in obtaining copies of David Parnas' technical
critique of SDI may do so by writing to the following address:

  Dr. David L. Parnas, Department of Computer Science, University of
  Victoria, P.O. Box 1700, Victoria B.C.  V8W 2Y2  CANADA

BMD paper

Herb Lin
Mon, 29 Jul 85 16:53:26 EDT
The final version of my BMD paper is available now.  

            "Software for Ballistic Missile Defense"

Herb Lin, Center for International Studies, 292 Main Street, E38-616,
MIT, Cambridge, MA 02142, phone 617-253-8076.  Cost including postage =

         Software for Ballistic Missile Defense, June 1985


  A battle management system for comprehensive ballistic missile defense
  must perform with near perfection and extraordinary reliability.  It
  will be complex to an unprecedented degree, untestable in a realistic
  environment, and will provide minimal time for human intervention.  The
  feasibility of designing and developing such a system (requiring upwards
  of ten million lines of code) is examined in light of the scale of the
  project, the difficulty of testing the system in order to remove errors,
  the management effort required, and the interaction of hardware and
  software difficulties.  The conclusion is that software considerations
  alone would make the feasibility of a "fully reliable" comprehensive
  defense against ballistic missiles questionable.

IMPORTANT NOTE: this version supersedes a widely circulated but earlier
draft entitled "Military Software and BMD: An Insoluble Problem?" dated
February 1985.

Minireview of Freeman Dyson's Weapons and Hope

Peter Denning <pjd@RIACS.ARPA>
Tuesday 30 Jul 85 17:17:56 pdt
I've just finished reading WEAPONS AND HOPE, by Freeman Dyson, published
recently.  It is a remarkable book analyzing the tools, people, and concepts
used for national defense.  The goal is to set forth an agenda for
discussing the nuclear weapons problem.  The most extraordinary aspect of
the book is that Dyson fairly and accurately represents the many points of
view with sympathy and empathy for each.  He thus transmits the impression
that it is possible for everyone to enter into and participate intelligently
in this important debate, and he tells us what fundamental questions we
must address.  This book significantly altered the way in which I
personally look at the problem.  I recommend that everyone read it and that
we all use it as a point of departure for our own discussions.

Although Dyson leaves no doubt about his personal views, he presents his very
careful arguments and lays out all the reasoning and assumptions where they
can be scrutinized by others.  With respect to the SDI, Dyson argues
(convincingly, I might add) that the greatest risk comes from the
interaction of the SDI system with the existing policies of the US and
Soviet Union -- it may well destabilize that interaction.  His argument is
based on policy considerations and is largely insensitive to the question
whether an SDI system could meet its technical goals.  For other reasons he
analyzes at length, he considers the idea of a space defense system to be a
technical folly.

Most of the arguments I've seen computer scientists make in criticism of the
"star wars" system are technically correct and support the technical-folly
view but may miss the point at a policy level.  (I am thinking of arguments
such as

  "Computer scientists ought to oppose this because technically it
  cannot meet its goals at reasonable cost.")  

The point is that to the extent that policy planners perceive the technical
arguments as being largely inessential at the policy level, they will not
take seriously arguments labelled

  "You must take this argument seriously because it is made by computer

Politicians often argue that it is their job to evaluate the spectrum of
options, make the moral judgments, and put risks in their proper place --
technologists ought to assess the risks, but when it comes to judging
whether those risks are acceptable (a moral or ethical judgment),
technologists have no more expertise than, say, politicians.  So in a
certain important sense, computer scientists have little special expertise
to bring to the debate.  For this reason, I think ACM has taken the right
approach by giving members [and nonmembers! <PGN>] a forum in which to
discuss these matters as individual human beings but without obligating ACM
to take official positions that are easily dismissed by policy planners as
outside ACM's official expertise.

Peter Denning

[In addition, you might want to look at Daniel Ford's "The Button: The
Pentagon's Strategic Command and Control System", Simon and Schuster, 1985,
which is also a
remarkable book.  It apparently began as an attempt to examine the
survivability of our existing communications and rapidly broadened into a
consideration of the computer systems as well.  I cite several examples in
the catalog above.  Some of you probably saw excerpts in the New Yorker.  PGN]

Responsible Use of Computers

Jim Horning <horning@decwrl.ARPA>
30 Jul 1985 1149-PDT (Tuesday)
  You might want to mention the evening session on "Our Responsibility
  as Computer Professionals" held in conjunction with TAPSOFT in Berlin,
  March 27, 1985.  This was attended by about 400 people.  Organized by R.
  Burstall, C. Floyd, C.B. Jones, H.-J. Kreowski, B. Mahr, J. Thatcher.
  Christiane Floyd wrote a nice position paper for the TAPSOFT session,
  well worth abstracting and providing a reference to.  

Jim Horning

[This position paper is apparently similar to an article by Christiane
Floyd, "The Responsible Use of Computers -- Where Do We Draw the Line?",
that appears in two parts in the Spring and Summer 1985 issues of the CPSR
Newsletter (Computer Professionals for Social Responsibility, P.O. Box 717,
Palo Alto CA 94301).  Perhaps someone can send us the TAPSOFT abstract.  PGN]


Human Safety (Software Safety)

Peter G. Neumann <Neumann@SRI-CSL>
An important area of concern to the RISKS forum is what is often called
Software Safety, or more properly the necessity for software that is safe
for humans.  ("Software Safety" could easily be misconstrued to imply making
the software safe FROM humans, an ability that is called "integrity" in the
security community.)  Nancy Leveson has been doing some excellent work on
that subject, and I hope she will be a contributor to RISKS.  There are also
short letters in the ACM Forum from Peter Fenwick (CACM December 1984) and
David Nelson (CACM March 1985) on this topic, although they touch only the
tip of the iceberg.  I would expect human safety to be a vital topic for
this RISKS forum, and hope that we can also help to stimulate research on
that topic.


Computers in Critical Environments, Rome, 23-25 October 1985

In sticking to my convictions that we must address a variety of critical
computing requirements in a highly systematic, unified, and rigorous way, I
am involved in the following effort:

  23-25 October 1985, Rome, Italy (organized by Sipe Optimation and T&TSUD,
  sponsored by Banca Nazionale del Lavoro).  Italian and English.  Organizers
  Roberto Liscia (Sipe Optimation, Roma, via Silvio d'Amico 40, ITALIA, phone
  039-6-5476), Eugenio Corti (T&TSUD, 80127 Napoli, via Tasso 428, ITALIA),
  Peter G. Neumann (SRI International, Menlo Park CA 94025).  Speakers include
  Neumann, Bill Riddle, Severo Ornstein (CPSR), Alan Borning (U. Washington),
  Andres Zellweger (FAA), Sandro Bologna (ENEA), Eric Guldentops (SWIFT).  The
  program addresses a broad range of topics (including technical, management,
  social, and economic issues) on the use of computer systems in critical
  environments, where the computer systems must be (e.g.) very reliable,
  fault-tolerant, highly available, secure, and safe for humans.  This 
  symposium represents an effort to provide a unified basis for the 
  development of critical systems.  Software engineering and the role of
  the man-machine interface are addressed in detail.  There will also be
  case studies of air-traffic control systems, defense systems, funds 
  transfer, and nuclear power.  Contact Roberto Liscia (or Mrs. De Vito) at 
  SIPE, or Peter Neumann at SRI for further information.


Afterword

Congratulations to you if you made it through this rather lengthy inaugural issue. I hope you find this and subsequent issues provocative, challenging, enlightening, interesting, and entertaining. But that depends in part upon your contributions having those attributes. Now it is in YOUR hands: your contributions and suggestions will be welcomed. PGNeumann <Neumann@SRI-CSL>
