The RISKS Digest
Volume 2 Issue 56

Friday, 30th May 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

A joke that went wrong
Brian Randell
Computer Program for nuclear reactor accidents
Gary Chapman
On risks and knowledge
Alan Wexelblat
Technical vs. Political in SDI
Dave Benson
Are SDI Software predictions biased by old tactical software?
Bob Estell
Culling through RISKS headers
Jim Horning
Info on RISKS (comp.risks)

A joke that went wrong

Brian Randell <brian%kelpie.newcastle.ac.uk@Cs.Ucl.AC.UK>
Thu, 29 May 86 10:48:45 bst
From the Guardian (London) 29 May 1986:
ELECTRONIC GOODBYE SHOCKS JOKER
by John Ezard

  Mr Dean Talboys' attempt to leave a harmless electronic memento for his
former workmates earned him instead a high place in the almanac of computer
horror stories, a court was told yesterday.
  News of his little prank, and its dire results for the High Street
electronics giant Dixons, sent a frisson of sympathy through computer buffs.
These are often as tempted by practical jokes as he was. But they also know,
as his experience confirms, that one mistyped symbol in a long programme can
introduce a monstrous bug into the entire machine.
  Mr. Talboys, aged 26, a highly-educated computer consultant, admitted
criminal damage at Acton crown court in the first British prosecution for
electronic graffiti. His farewell slogan was the innocuous "Goodbye, folks".
Mr. Austen Issard-Davies, prosecuting, said he intended that this should
flash up on Dixon's head office computer screen whenever his leaving date
was entered by an operator.
  But Mr. Talboys inadvertently inserted a "stop" code in his programme,
causing the programme to disconnect midway through its run.
  Mr. Talboys was crafting his masterpiece while the computer was in test
mode. But the machine then transferred it into "production" or operational
mode - in which the "stop" symbol is illegal. The outcome of his error was
that every screen - in a headquarters which processes the work of 4,500
employees - hiccuped and went blank whenever any operator keyed in anyone's
leaving date.
  "Unlike most graffiti, which can be rubbed out or painted over, it cost
Dixons more than (Pounds)1,000 to investigate, discover what he had done and
put it right," Mr Issard-Davies told the court.
  The blame was immediately traced to Mr Talboys - "rather like a burglar who
has left his visiting card". He had agreed with police that he had acted
irresponsibly. Yesterday he was conditionally bound over and ordered to pay
the firm (Pounds)1,000 compensation.
  The computer language in which Mr Talboys accidentally wrote his bug is
called Mantis.
  Judge Kerry Quarren-Evans said: "Offices without a certain amount of humour
would be very dry and dusty places. But this is not the type of medium for
practical jokes."
  Mr Talboys said: "My advice to anyone else is don't bloody do it. It has
been 18 months of hell. It was simply a prank and I have learned my lesson.
My backside has been well and truly tanned."

    [I guess we'll be praying mantis will eat more bugs in the future.  PGN]


Computer Program for nuclear reactor accidents

Gary Chapman <chapman@su-russell.arpa>
Thu, 29 May 86 11:48:19 pdt
An article in the new Electronics Magazine (McGraw-Hill) May 26, page 14,
describes a prototype parallel computer system that would simulate and
analyze the chain of events in a complex nuclear accident faster than the
accident would actually occur. The system, which is being developed at the
University of Illinois Urbana-Champaign campus, would combine the power of a
parallel processor with an artificial intelligence/expert system that would
examine where a problem is headed and give advice on possible corrections to
avoid a disaster.  The program does both forward and backward chaining, and
is written in Portable Standard Lisp. The system would take inputs from over
1000 sensors on an operating reactor and perform a real-time simulation of
the reactor operation.  According to the calculations, this package will be
able to simulate a reactor accident 10 times faster than real time.  The
programmer stresses that the system is designed as a monitoring mechanism
and decision aid for human operators, not as an automatic control system.
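
The article gives no implementation detail beyond the language, so purely as
an illustrative aside, the sketch below (Python, with invented fact and rule
names; the Illinois system itself is written in Portable Standard Lisp and its
actual rules are not described) shows what the forward chaining mentioned
above looks like in miniature: rules fire whenever all their premises are
known, adding new conclusions until nothing further can be derived.

    # Toy forward-chaining loop -- an illustration of the inference style
    # mentioned above, NOT the Illinois reactor system.  All facts, rule
    # names, and the advice string are hypothetical.

    def forward_chain(facts, rules):
        """Apply rules until no new conclusions can be derived."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known

    # Hypothetical sensor-derived facts and hypothetical rules.
    facts = {"coolant_flow_low", "core_temp_rising"}
    rules = [
        ({"coolant_flow_low", "core_temp_rising"}, "loss_of_cooling_suspected"),
        ({"loss_of_cooling_suspected"}, "advise: start auxiliary feedwater"),
    ]

    print(forward_chain(facts, rules))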


On risks and knowledge

Alan Wexelblat <wex@mcc.arpa>
Fri, 30 May 86 21:02:10 CDT
One topic so far untouched by RISKS is the intimate connection between risks
and knowledge.  That is, how can we expect to assess risks when we lack
knowledge or, worse, when knowledge is deliberately withheld?  These thoughts
were prompted by the article below:

From "The Guardian", May 21, 1986 (NY, not UK) by Jonathan A. Bennet

The presidentially-appointed Rogers Commission dramatically denounced
solid rocket booster manager Lawrence Mulloy, while continuing to conceal
multiple cases of perjury by top NASA officials and NASA-White House
complicity in that perjury.

The Rogers Commission stopped far short of accusing Mulloy or anyone else
of perjury, despite clear contradictions between what its investigators
have learned and repeated statements under oath by NASA officials.
Instead, the commission merely accused Mulloy of having "almost covered up"
and of "glossing over" the truth.

...    [I have excerpted the first few paragraphs from a longish message
        which is sufficiently important to RISKS to be called to your
        attention, but which is sufficiently non-computer-specific that I did
        not want to include it in its entirety.  It is available for FTPing
        from SRI-CSL:<RISKS>RISKS-2.56WEX for those of you who can get to it.
        (Perhaps it can be found in ARMS-D.  See next message!)  PGN]


Technical vs. Political in SDI

Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Thu, 29 May 86 20:38:48 pdt
A while back a RISKS contribution plaintively stated something to the effect
that SDI issues were strictly for experts.  Not so.  There are two somewhat
separable matters: the technical (Can SDI be done at acceptable risk/cost?)
and the political (Do we want it anyway?  Does it improve security? etc.).

Now RISKS is a place to consider, well, computer risks. Thus it seems
appropriate here to explore SDI software issues.  The strictly political/
policy issues are on ARMS-D.  Since the two aspects of SDI are not entirely
separable, some overlap is going to occur.

The contributor of the above-mentioned note might like to read msg 787
on ARMS-D from crummer.


Are SDI Software predictions biased by old tactical software?

<estell@nwc-143b>
30 May 86 10:09:00 PST
I'd like to offer a minority opinion about SDI software; i.e., I infer that
most RISKS readers agree with the assessments that "... SDI will never be
made to work..."  At some personal risk, let me say at the outset that SDI,
as ballyhooed in the popular press, may never work - certainly not in this
decade.  But I believe that our projections of the future are inextricably
linked to our past.  So let me share some observations on Navy tactical
software as of 1979.

Much of the OLDER tactical software:
 Was written in assembly language, or CMS-2.  Powerful languages like
   FORTRAN and C were not used.
 Was implemented by people who may not have ever sailed or flown in combat.
 Was not well defined functionally by the end users, for lack of "rapid
   prototyping" tools.
 Was written before modern notions like "structured programming" were used.
 Was "shoehorned" into very old, small, slow, unsophisticated computers
   (no hardware floating point, no virtual memory, 4 microsecond cycle).
 "Froze" the modules, instead of the interfaces.

Carriers ran tactical software on machines built of early 1960's technology
(germanium diodes).  They were remarkable computers for that era, having
almost the power of an IBM 7090 in a refrigerator sized box.  They severely
restricted software development.  If replaced, tactical software could be
written in several languages, not only Ada (DoD's choice), but also FORTRAN,
BASIC, Pascal, C, etc.; the goal is to use standard languages appropriate
to the task; and to incorporate modules, and support libraries, already
developed and debugged elsewhere.
------

Turning now to the more common arguments, they seem to be:
(1) COMPLEXITY; i.e., there are too many logical paths through the code;
(2) HISTORY; i.e., no deployed CCCI program has ever worked the first time.

The complexity argument leads one to wonder HOW the human brain works.  It
has trillions of cells; each has a probability of failure.  Some failures
are obvious: we forget, we misunderstand, we misspeak, etc.  But, in spite
of these failures - or because of them - we SATISFICE.  Even when some go
bonkers, the rest of us try to maintain our sanity.  Similarly, one errant
SDI computer need not fail the entire network - any more than one failing
IMP need crash the entire ARPANET.

The historical argument leads to an analogy.  Suppose that after World War
II, President Truman had asked Congress for an R&D program in medicine, to
treat many of the physical wounds of the war.  Doctors would have pointed
out that lost limbs and organs were gone, period.  But the progress in the
last 25 years has changed that: microsurgery, new drugs, artificial joints,
and computer assists, including one system that bridged a damaged spinal cord,
reinterpreting nerve signals so that a paraplegic could walk again.

The "complexity" and "historical" arguments even interact.
Peter Denning observed years ago that the difficulty of understanding a
program is a function of size (among other things).  He speculated that
difficulty is proportional to the SQUARE of the number of "units of under-
standing" (about 100 lines of code).  Old tactical software, in assembly
language, tends to run into the hundreds of thousands of lines of code;
e.g., a 500,000 line program has 5000 units of understanding, with a diffi-
culty  index of 25 million.  That same program, written in FORTRAN, might
shrink to 100,000 lines, thus only 1000 units of understanding, thence a
difficulty index of one million.  That's worth doing!
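
To spell out that arithmetic, here is a tiny sketch (Python, illustrative
only; the quadratic relationship is Denning's speculation as recalled above,
and the 100-line "unit of understanding" is the figure quoted there):

    # Speculated "difficulty index" from the paragraph above: proportional
    # to the square of the number of ~100-line units of understanding.

    def difficulty_index(lines_of_code, lines_per_unit=100):
        units = lines_of_code / lines_per_unit
        return units ** 2

    print(difficulty_index(500000))   # assembly version: 25,000,000
    print(difficulty_index(100000))   # FORTRAN version:   1,000,000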

The medical analogy uncovers another tacit assumption in the SDI argument;
neither pro-SDI nor anti-SDI debaters have dealt with it well.  It is the
"perfection" argument.  A missile defense is worth having if it is good
enough to save only 5% of the USA population in an all-out nuclear attack.
That shield might save 75% of the population in a terrorist attack, launched
by an irresponsible source; this is far more likely than a saturation attack
by a well-armed power like the USSR.  As bleak as this prospect is, the
facts are that if an all-out attack were launched today, whether by malice,
madness, or mistake, by either side, and the other side retaliated in full
force, the human race would be doomed by fallout, and by nuclear winter.
-----

I am NOT saying that we have the answers within our reach, much less our
grasp.  I am NOT saying that SDI "as advertised" will ever be made to work,
certainly NOT in this decade; I am saying that if we don't try, we won't
progress.  We know at the outset that SDI will be flawed, though perhaps
someday acceptable.  That's the status of most of today's high technology;
e.g., air traffic control systems, hospitals, electronic banking,
telephone systems, mainframe operating systems, ARPANET, ad infinitum.

But my point is that we must not shun the challenge to TRY to improve the
software in the field, and the tools used to design and build and test it.
To shun it would be throwing out the baby with the bathwater!  Nor can we extrapolate
the successes of the 1990's from the common practices of the 1970's.
Rather than deplore the past, we must deploy the technology now developed
in Bell Labs, MIT, IBM, Livermore, and other leading computing centers.
When I worked in tactical software ('68 - '79), we were about a decade
behind the state of the art; e.g., we got high level programming languages,
symbolic debuggers, well stocked function libraries, and interactive tools
for writing and compiling, in the late '70's; we patterned them on systems
at MIT and Berkeley of the late '60's [MULTICS and GENIE].
I wonder just how much of the mid '80's technology is available to tactical
developers?  Are any tactical computers now offering the architecture and
performance of say a CONVEX C-1?  Is Prolog available to tactical program-
mers?  Has the "Ada environment" developed the full set of Programmer's
Workbench tools that UNIX [tm] offers?  And is it widely available?
-----

The disparity between what scientists know MIGHT be done, and what poli-
ticians are claiming is a dilemma; how can we pass through its horns?
Tell the SDI proponents in DoD and Congress that:
(1) A perfect shield is a vain wish; and
(2) much progress CAN be made, if RDT&E is done reasonably; and that
(3) the real threat is from terrorists, not Russians.

I think it very likely that we cannot deter SDI, at least not before '89;
and even then, Americans will insist on "adequate defense" - even as they
complain bitterly about the cost of it.  So I suggest that we not try to
block SDI, but rather that we refocus its energies and emphases.
With luck, we can build a system that will work marginally.  It will cost
billions; weigh several tons; and consume megawatts of power.  In other
words, it will be confined to land sites only - not ships, and certainly
not space.  Thus, it will be fit ONLY for defense.  It will be impossible
to attack with it.  It will become a sort of "Maginot Bubble."  Then we
could sell the plans to our NATO allies, and to members of the Security
Council, including the USSR and China.  They won't be able to attack us
with them.  Perhaps such a demonstration of goodwill would cool the arms
race.  The long-term economic benefits to the USA are attractive; we could
sell systems to nations that wanted them, but couldn't build their own.
Some of the revenue could be plowed back into R&D in many fields, not
just defense.  The software engineering progress made on behalf of SDI
probably would apply immediately to many other computerized systems.
Think about it.

Bob


Culling through RISKS headers [ACCIDENTALLY LOST IN RISKS-2.55]

Jim Horning <horning@src.DEC.COM>
Tue, 27 May 86 11:51:06 pdt
   [In the message to me that I edited down to nothing in RISKS-2.56 and
    then added the New York Times excerpts, Jim raised the question of the
    message headers on RISKS mailings looking rather uninformatively like
      53) 16-May RISKS FORUM     RISKS-2.53 (10331 chars)
      54) 25-May RISKS FORUM     RISKS-2.54 (10389 chars)
      55) 28-May RISKS FORUM     RISKS-2.55 (16307 chars)
    and wondering whether anything could be done about it.  I responded
    that I did not see how much useful information could be squirreled
    away in the message header, but did suggest that a summary of the
    topics and authors might be useful.  So, I think I will simply collect
    the "CONTENTS:" lines into one issue for each of Vols 1 and 2, and let
    you do context searches on them.  See RISKS-1.46 (NEW!) and RISKS-2.57,
    respectively, which will be distributed separately.  PGN]
