The RISKS Digest
Volume 3 Issue 07

Friday, 13th June 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Eastport Study Group report ("Science" article)
Pete Kaiser
An additional SDI problem: sensor technology
Jon Jacky
Shuttle software and CACM
James Tomayko [and Herb Lin]
Privacy laws
Bruce O'Neel
A mini-editorial on running the RISKS Forum
PGN
Info on RISKS (comp.risks)

Eastport Study Group report ("Science" article)

Systems Consultant <kaiser%renko.DEC@decwrl.DEC.COM>
Thursday, 12 Jun 1986 04:54:22-PDT
"Science", in the issue of 9 May 1986, contains an article on "Resolving the
Star Wars Software Dilemma".  The subhead reads:

  A panel of computer scientists has concluded that computers will be able
  to manage a strategic defense system — but only if battle management is
  designed in from the beginning.

More, from within the article:

  ...The report is in fact a scathing critique of the way the Pentagon
  handles high-technology weapons design in general and software
  development in particular.  It deals with important questions about the
  limits of computing, the nature of reliability, the organization of
  large, complex systems, and the nature of strategic defense itself.

  And in a striking paradox, it validates what the program's many critics
  have been saying about the infeasibility of Star Wars software.  ...

  First, they say, battle management is tractable only if SDIO and its
  defense-industry contractors give up their tacit assumption that
  software is an "applique," something that can be sprinkled on
  preexisting weapons and sensors like pixie dust to turn them into a
  working defense system.  This assumption was quite evident in SDIO's
  so-called "Phase I" architecture studies, which were completed in 1985
  and which seemed to concentrate almost exclusively on hardware.

The "paradox", as I read the Study Group's findings in the article, is that
although it might be possible to design systems that did effective battle
management (in some interpretation of "effective") by integrating software
and hardware from the earliest stages of design, there is no sign whatever
that this could happen in the real world of military contractors and
politics.  Thus, in the report's view, it is effectively impossible to build
workable Star Wars systems.

Recommended, but not comforting, reading.  (The name of the full report from
the Eastport Study Group is "Summer Study 1985: A Report to the Director of
the Strategic Defense Initiative Organization", December 1985.)

---Pete        
Kaiser%furilo.dec@decwrl.dec.com         decwrl!furilo.dec.com!kaiser
DEC, 2 Iron Way (MRO3-3/G20), Marlboro MA 01752  617-467-4445


Re: An additional SDI problem: sensor technology

Jon Jacky <jon@uw-june.arpa>
Thu, 12 Jun 86 22:32:55 PDT
> (Eugene Miya writes:) ... there are various groups watchdogging
> computing, but the more hardware-oriented EE areas, such as radar, have
> fewer opposition elements.

Sensors and signal processing comprise a larger portion of 
the SDI effort than anything else, according to many reports.

The most informative comments I have heard were by Michael Gamble, a 
vice president (I think) at Boeing, and head of that company's 'Star Wars'
research programs. He administers about half a billion dollars worth of
contracts.  In a talk to the Seattle chapter of the IEEE on Nov. 14, 1985,
he noted that SDI budget requests for fiscal years 1985 through 1990 would
total about $30 billion, broken down as follows: sensors $13B, directed-energy
weapons $7B, kinetic-energy weapons $7B, battle management $1B, and
survivability $2B.  Sensors thus account for almost half the total.  (I do not
know whether these proportions are maintained in the somewhat reduced 
budgets that get approved.)

Gamble also explained why he thought missile defense was once again 
plausible, after being debunked in the early 70's.  "What has changed 
since then?" he asked rhetorically, and gave five answers, three of which 
involved sensors: first, long-wave infrared detectors and associated cooling
systems, which permit small warheads to be seen against the cold of space;
second, "fly's eye" mosaic sensor techniques (like the ones used on the
F-15-launched ASATs and in the 1984 "homing overlay experiment") — these
are said to "permit smaller apertures" (I didn't catch the significance of
that);  and third, low-medium power lasers for tracking, designation, and
homing.  The other two factors were long-life space systems and powerful
onboard computing capabilities.

There is a large computing component in the sensor field: digital signal
processing.  However, this area is not so well known to computer science
types.  Boeing's largest SDI contract - over $300M - is for the "Airborne
Optical Adjunct," an infrared telescope and a lot of computers mounted 
in a 767 airliner, apparently for experiments in sensing and battle
management for midcourse and terminal phase.  Two of the systems people
involved in this project gave a seminar at the UW computer science department
last January.  They mentioned that the signal processing was being handled
by the sensor people and they just regarded it as a black box.

I can think of two reasons why this area has received relatively little 
attention.  First, there were no galvanizingly absurd statements about sensors 
from relatively prominent SDI proponents - nothing like James Fletcher 
calling for "ten million lines of error-free code," or all that bizarre stuff
in the Fletcher report and elsewhere about launching pop-up X-ray lasers 
under computer control.  Second, there is a lot of secrecy in the sensor area--
unlike battle management, where the important issues do not turn on classified
material.  Gamble noted that "there is not that much that is classified about
SDI, except things like, 'How far can you see?  How far do you have to see?'"
Needless to say, talking in detail about sensors would reveal how much we know
about Soviet warhead characteristics, how good our early warning systems 
really are, and so forth.

-Jonathan Jacky
University of Washington


Shuttle software and CACM

<James.Tomayko@sei.cmu.edu>
Thursday, 12 June 1986 09:04:12 EDT
As referenced in the recent RISKS, the CACM case study is a somewhat decent
introduction to the Shuttle onboard software. However, I would like to warn
readers that the case study editors interviewed IBM FSD personnel *only*,
with no attempt to talk to the customer (NASA) or the users (the astronauts).

I was under contract with NASA for three years to do a study of its use of
computers in space flight, and my interviews with crews and trainers
provided a somewhat more critical view of the software. Also, it is useful
to remember that the primary avionics software system documented in the CACM
study runs on four computers. At last count, there were something over
200 processors on the orbiter (source: Jack Garman, Johnson Space Center).

So, please take the CACM articles with a grain of salt.

Jim Tomayko
Software Engineering Institute

P.S. To forestall some mail: The earliest NASA will release my Technical
Report is late 1987. 

  [In addition, Herb Lin responded to David Smith, included here for emphasis:
   "This issue of CACM *is* a pretty good review of shuttle software.  On
    the other hand, you must remember that the interview was with the
    people who were in primary charge of the project.  Thus, you would be
    rather unlikely to hear about problems and so on that remained
    unresolved.  That claim doesn't diminish the value of the article,
    but it should prompt caution in accepting the general impression it
    gives that all was (or is) just fine...  Herb"  ]


Privacy laws

Bruce O'Neel <ZWBEO%VPFVM.BITNET@WISCVM.WISC.EDU>
Thu, 12 Jun 86 10:55 EDT
In response to the House bill on computer communications privacy, I
believe that the following is correct.  Right now, communications are
protected only if they use telephone, mail, or other "traditional"
technologies; no one can "wiretap" you without a warrant.  The current
laws don't cover computer communications, car phones, or other "new"
communications technologies.  According to what I read in the Washington
Post, this bill would treat car-phone communications, computer
communications, and the like the same as mail and land-based phone calls.

      Bruce O'Neel  <zwbeo@vpfvm.bitnet>


A mini-editorial on running the RISKS Forum

Peter G. Neumann <NEUMANN@SRI-CSL.ARPA>
Fri 13 Jun 86 00:25:40-PDT
Life is usually a delicate balance among many tradeoffs.  Running
RISKS is no different:

  The subject of Risks to the Public in Computer Systems involves
  tradeoffs among technical, social, economic, and political factors.
  These are very hard to assess, because each person's basis for
  judgment is likely to be different.  (All of these factors are
  relevant in the broad sense, although we generally try to focus on
  the technical issues.)  Some risks are intrinsic in technology; the
  question is under what circumstances are they worthwhile — and that
  involves all of the factors (and more).

  If messages were too superficial or issues too infrequent, most of
  you would lose interest.  If issues and/or messages were very long or
  too frequent, you would most likely be overwhelmed.  (But I occasionally
  get requests for single-message mailings from BITNET subscribers [who
  have not yet discovered undigestifiers?], although that presents many
  difficulties.)

  If I put too much of my time into RISKS, my other responsibilities may 
  suffer.  If I put too little time in, you may suffer.   

  If I turn down the threshold and accept contributions that violate
  the masthead requirements (relevancy, soundness, objectivity,
  coherence, etc.), we all suffer.  If you contribute junk and I don't
  reject it, you and I suffer.  If I turn up the threshold and reject
  many contributions, I defeat one of the main purposes of RISKS,
  which is to be an open forum.  

  If RISKS were to take itself too seriously, or alternatively to become 
  too frivolous, that would be bad.  [I try to keep my pun level down,
  but occasionally I may slip a little.]

So, thanks for sticking with us in this experiment in communication on a
vital topic.  Please complain to RISKS-Request or to me when you are
really unhappy.  It can only help.  Peter
