The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 2 Issue 52

Wednesday, 14 May 1986


o Launch failures
Phil R. Karn
o Brittleness of large systems
Dave Benson
Scott Dorsey
Dave Sherman
o Word processing -- reroute [reroot?] the discussion
Chuq Von Rospach
o Info on RISKS (comp.risks)

Launch failures

Phil R. Karn <>
Mon, 12 May 86 03:46:15 edt
There are a couple of minor errors in your mod.risks article. Delta, not
Atlas-Centaur, had the streak of 43 successful launches since 1977,
and it was Delta-178, not the Titan 34D, whose main engine shut down 71
seconds into flight.
            [Blame AP for that one, not me.  Never trust what you read in
             the papers (or anywhere else, apparently)!  THANKS.  PGN]

I would heavily discount the possibility of a range safety signal
causing the failure of Delta-178. There are only two commands available
to the range safety officer, ARM and FIRE. The latter causes an engine
shutdown all right, but immediately follows it with the detonation of the
destruct explosives. The fact that the range safety system worked perfectly
20 seconds after the shutdown indicates that an unauthorized signal is
unlikely to have been the cause of the shutdown.  Besides, the media
has been reporting that the investigation has revealed strong evidence
from telemetry of a short circuit in the engine control circuit.

        [Ah, yes, but (a) a short circuit could easily trigger the
         shutdown command, and (b) strong evidence could also be wrong.

         Well, just for the record, I might as well mention here the misfire
         on 25 April (not reported until 9 May) of the Nike Orion, which had
         flown successfully 120 consecutive times -- and that was its first
         failure.  The burned-out Nike first stage failed to separate
         before the second stage Orion ignited.  Murphy strikes again,
         but in spades over recent months.  PGN]

Brittleness of large systems

Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
I profoundly, utterly and completely disagree that probability theory can be
used to characterize the brittleness of large systems.  Using probability
theory and mathematical statistics to assess the likelihood of failure
requires experience, enough experience to know the frequency of failure of
parts, the frequency of failure of the interactions of parts, etc.

The one big attempt to do this was the Rasmussen report, WASH-1400, which
attempted to use fault-tree analysis to predict the failure frequency of
large nuclear reactors such as the Three Mile Island set.  The actual
accident which occurred at TMI was not even considered in the Rasmussen
report, and was thus assigned probability zero.  By twisting the "causes" of
the accident at TMI, one might find a probability attached to this accident
in the Rasmussen report.  Those attempting this have come up with the TMI
accident as having an "incredible" probability, i.e., about one chance per
billion reactor-years.
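The arithmetic behind such "incredible" numbers is easy to sketch. In a fault tree, basic-event probabilities are combined through AND gates (all independent events must occur) and OR gates (any one suffices). The sketch below uses entirely hypothetical event names and probabilities; the point is only that multiplying three modest per-year frequencies yields one-in-a-billion, while any accident sequence the analysts never enumerated contributes exactly zero.

```python
def and_gate(*probs):
    """All independent events occur together: multiply probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Any one event occurs; rare-event approximation: sum probabilities."""
    return sum(probs)

# Hypothetical basic events, expressed as frequencies per reactor-year:
valve_sticks_open = 1e-3
operators_misread = 1e-3
backup_cooling_fails = 1e-3

# A sequence requiring all three comes out "incredible":
p_sequence = and_gate(valve_sticks_open, operators_misread,
                      backup_cooling_fails)
print(p_sequence)   # about 1e-9: one chance per billion reactor-years

# But a sequence that was never put into the tree at all has a
# modeled probability of exactly zero, however real it may be.
```

This is the nub of Benson's objection: the output is only as complete as the enumeration of sequences fed in.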

Nancy Leveson at UC-Irvine is preparing a long survey [mentioned earlier in
RISKS] of work on safety related issues in software.  She was so kind as to
send me a pre-publication version of the report.  I highly recommend the
finished report to the RISKS readership.  It is good.  But as Prof.
Leveson's survey makes clear, there are no new, good ideas for
characterizing brittleness.

She does survey the use of fault-tree analysis for producing reliable
software.  This technique will certainly help improve the current state of
the art in real-time software design.  But the Rasmussen-report/TMI-accident
experience demonstrates that the real world is not (and, I believe, cannot
be) completely characterized by such techniques.

Let me remind you that according to Fox, "Software and its Development", the
Enroute Air Traffic Control System (a large but not very large real-time
C**3-type system) has, to date, executed only about .001 to .003 of all
possible paths through the code.
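A figure like .001 to .003 path coverage is unsurprising once you count paths. A minimal sketch (with hypothetical branch counts, not taken from Fox): a routine with n sequential, independent two-way branches has 2**n distinct execution paths, so even millions of test runs cover a vanishing fraction.

```python
def path_count(branches: int) -> int:
    """Number of distinct execution paths through `branches`
    sequential, independent two-way decisions."""
    return 2 ** branches

# Even a modest routine with 30 sequential branches:
print(path_count(30))      # 1073741824 distinct paths

# A million executed paths would still cover under a tenth of a percent:
coverage = 1_000_000 / path_count(30)
print(f"{coverage:.6f}")   # 0.000931
```

Real control-flow graphs with loops do far worse, since each extra iteration multiplies the count again.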

So we do not have the data to use probability and statistics.  Therefore, the
brittleness of large real-time software (C**3*I military systems, SDI, major
transaction processing software, etc.) needs something else.  Here is a
thought about that "something else":
  The traditional means of studying the most important aspect of our world,
  people and their societies, has been the humanities.  Language, culture,
  history, writings, anthropology, classics, literature...  and do not forget
  theology, perhaps the subtlest of all.  Recently (that is, in the last
  hundred years) these have been supplemented by psychology and the social
  sciences.  This has become possible only AFTER a very long tradition in the
  humanities.

My suggestion is to study software, large software, with the intellectual
tools of the humanists.  I would very much like to hear and read what
theologians have to say about software.  Comments?

       [By the way, the AP story of 12 May on Washington State's Hanford
        nuclear reservation says that in the mid- to late 1940s, thousands
        of residents may have received doses of radioactive iodine-131 at
        levels hundreds of times greater than levels considered safe today.
        Reactors and plutonium factories "spewed the gas out at levels that
        today would qualify as a major nuclear accident, thousands of times
        greater than levels recorded at TMI."  The standards have since been
        changed, but at the time it was apparently considered routine.  PGN]

HBO (RISKS-2.49)

Scott Dorsey <kludge%gitpyr%gatech.csnet@CSNET-RELAY.ARPA>
Sat, 10 May 86 11:36:55 edt
   I am told by a friend that the HBO studio-transmitter link is a landline.
Although this cannot be easily overridden with a mobile transmitter, cases
exist (like that at the Virginia Tech campus radio station) where the
landline was cut along its path and replaced with an originating source (in
this case, perhaps a VTR; in the Va Tech case, a cassette player).

HBO (RISKS-2.49)

Mon, 12 May 86 17:20:53 PDT
  To: utzoo!ihnp4!ucbvax!SRI-CSL.ARPA!RISKS
  Subject: Re: RISKS-2.49

  >              Or do descramblers
  >let "normal" signals through O.K. .... I don't think so.

Someone else mentioned on RISKS that they do. I would think they'd have to.
Our cable company periodically runs "free Pay-TV weekends" in the hope that
viewers will like what they see on Pay-TV and sign up after the free period
is over.  And paying customers certainly don't have to disconnect their
descramblers at such times.

Dave Sherman, Toronto
{ ihnp4!utzoo  pesnta  utcs  hcr  decvax!utcsri  } !lsuc!dave

Word processing -- reroute [reroot?] the discussion

Chuq Von Rospach <chuq%plaid@SUN.COM>
Mon, 12 May 86 22:44:04 PDT
Word processing and bad English are well within the domain of the group
mod.mag -- you may want to toss a pointer there, and if there is
interest I might put a mailing list on my machine to tie it all up
for the Arpaland.

As someone who publishes a magazine electronically, gets most of the
submissions electronically, and is generally an electronic-network
junkie (gotta get my CompuServe fix...), I'd say they are right.  It isn't
the medium in itself, though, but its tendency to let you toss things off
without thinking first (such as this message).

chuq     [I should also put in a pointer to COMPUTERS&SOCIETY as a
         source of discussion on such topics, for example, a piece
         by "Bruce_A._Hamilton.OsbuSouth"@Xerox.COM entitled
         I continue to reject a slew of responses on this topic as too
         marginally related to RISKS.  Thanks anyway.  PGN]
