The RISKS Digest
Volume 4 Issue 64

Monday, 16th March 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Computer-lighting board nearly causes WWIII
Brent Laminack
Computerized telephone sales pitch meets emergency broadcast number
Brent Laminack
Furniture risks — Vanishing Diskettes
Lee Breisacher
Reprise on the UK Government's ACARD Report
Brian Randell
Last minute changes
Roy Smith
Risk in ``High'' Financing
Michael Wester
Risk at Crown Books
Scott R. Turner
Human errors in computer systems — another reference
Jack Goldberg
Requests for War Stories in Scientific Programming
Dennis Stevenson
TFR and F-111s
Eugene Miya
An Open University Text Book
Brian Randell
US NEWS article on 'Smart' Weapons - questions and concerns
Jon Jacky
Info on RISKS (comp.risks)

Computer-lighting board nearly causes WWIII

<itm!brent%gatech.UUCP%ncar.csnet@RELAY.CS.NET>
Mon, 16 Mar 87 16:07:51 est
    With the recent discussion of computer-controlled lighting
boards, I ask "How resistant to electromagnetic interference (EMI)
should these boards be?"  Not as academic a point as one would
think.  An audio engineer friend of mine related this incident:

    He was running sound for the Broadway production of "A Chorus
Line" a few years back.  Then-president Ford came to a performance.
Secret Service men everywhere.  One by the sound board, one by lighting,
etc.  All is quiet until about the mid-point of the play.  Then the
Secret Service man standing by the lighting board (an early Nicholson
model I believe) got a call on his walkie-talkie.  To reply, he depressed
the push-to-talk switch, sending out a couple of watts of RF, and
Presto! the entire theatre was plunged into inky darkness.  Chaos
ensued, guns were drawn, etc., etc.  The RF burst had completely wiped
out the CMOS memory of the lighting board.

    Questions: Should mil-spec EMI resistance be built into only military
equipment?  Who would have thought that a lowly theater lighting board would
be of critical national importance, if only for a few moments?  Could a
high-tech John Wilkes Booth use knowledge such as this for the next
assassination attempt?

    Brent Laminack (gatech!itm!brent)


Computerized telephone sales pitch meets emergency broadcast number

<itm!brent%gatech.UUCP%ncar.csnet@RELAY.CS.NET>
Mon, 16 Mar 87 16:08:10 est
    About 18 months ago here in Atlanta, a string of phone-related
accidents caused much confusion and consternation in the lives of
at least one family.

    To begin with: one of these "computerized" telephone sales pitches was
calling through a mid-town exchange offering "you have won a free Bahamas
vacation.  Just call xxx-xxxx!" As it was walking through the exchange, it
hit an unlisted number.  This phone was an emergency override number into
the metro Atlanta cable television system.  In case of extreme emergency,
the Mayor or the head of Civil Defense would call this number.  The incoming
phone line would override the audio portion of ALL cable channels currently
in use.  It was about 10:30 a.m., so there wasn't as big an audience as if
it had been prime-time, but yes, all of Atlanta's cable subscribers were
informed they had just won a free trip.  Chaos ensued.  Especially for the
poor family whose telephone number was one digit different from the
call-back number.  Through no fault of their own they got one call every 20
seconds all that day.

    The RISK of a repetition could be reduced at any of several steps:
legislation limiting "computerized" sales pitches (this hasn't been done), a
security code on the emergency phone number (this has been done); for the
poor lady getting the wrong-number calls, not much can be done.  If any
RISKS readers are unfamiliar with the design process behind the Touch-Tone
(TM) keypad, it makes interesting reading.  The chosen layout was a speed
vs. accuracy trade-off.  The lady could only wish that The Labs had put a
higher priority on accuracy.

This was sort of an information-age Orson Welles "War of the Worlds".
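
The family's plight follows from keypad geometry: a seven-digit number
typically has around twenty neighbours reachable by hitting a key adjacent
to the intended one.  A minimal sketch, assuming a crude adjacent-key error
model (the example number is invented; the actual Bell Labs studies were far
more refined):

    # Toy model of single-digit misdials on the 3x4 Touch-Tone keypad.

    ADJACENT = {                     # keys sharing an edge on the keypad
        "1": "24",  "2": "135",  "3": "26",
        "4": "157", "5": "2468", "6": "359",
        "7": "48",  "8": "5790", "9": "68",
        "0": "8",
    }

    def likely_misdials(number):
        """All numbers one adjacent-key substitution away."""
        return [number[:i] + sub + number[i + 1:]
                for i, d in enumerate(number)
                for sub in ADJACENT[d]]

    print(likely_misdials("5551234"))   # the unlucky family's number
                                        # would be one entry in this list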

    Brent Laminack (gatech!itm!brent)


Furniture risks

<Breisacher.OsbuSouth@Xerox.COM>
16 Mar 87 09:10:18 PST (Monday)
A friend of mine at a nearby company received an issue of their OA
Bulletin which contained this little item:

  Diskette Data Disappears in Desks

  Data stored on diskettes may be lost if the diskettes are kept in the new
  Haworth modular furniture now in use in some offices.  The drawer divider in
  the utility drawer of these units is held in place with a magnetic strip.
  These magnetic strips can erase the data stored on a diskette.  Also, the
  task light ballast can erase the data stored on a diskette placed flat on
  the shelf immediately above the ballast.

  To protect your data with these units, store your diskettes....


Reprise on the UK Government's ACARD Report

Brian Randell <brian%kelpie.newcastle.ac.uk@Cs.Ucl.AC.UK>
Mon, 16 Mar 87 15:52:01 gmt
A little while ago there was quite a debate in RISKS about the comments in
the ACARD report concerning certification, and the use of formal methods,
for safety-critical programs. Last week's Computer Guardian (an insert in
the daily paper, The Guardian) carried a splendid article on this report by
Keith Devlin, one of their regular contributors, who is in fact on the
Faculty of the Department of Mathematics at Lancaster University. I and my
colleagues here enjoyed its content and style so much that, even though it
is somewhat lengthy, and the major points it makes are not new, we thought
that it should be offered to RISKS, in its entirety, so that the rest of you
can see what the UK national press is occasionally capable of!

  DISCRETE CHARMS OF APPLICATION

  Keith Devlin, The Guardian, 12 March 1987

  Flicking through a report produced by a research advisory council to the
  Cabinet Office recently, my eye was caught by some rather amazing figures.
  The subject under consideration was the use of mathematical techniques to
  verify the accuracy of computer programs - surely a laudable aim if ever
  there was one.  According to the committee who assembled this report, the
  current best practice in the creation of commercial software produces on
  average just one error in every one thousand lines of code in the final
  product.  (How does one convey a raised eyebrow using the written word?)
  Given better verification techniques, the report went on, one could
  "realistically expect" an error rate of just one error per hundred thousand
  lines of code.

  Well, in these days of PR hype one does become used to extravagant claims,
  but this one must take the biscuit.  In a report that is presumably intended
  to shape future research directions, this is a ludicrous proposition to
  make.  Worse still, the report did not stop there.  The ultimate goal was,
  it appears, for the mathematician who certifies the said program as being
  "correct to within an error rate of 0.001%" to be henceforth held legally
  responsible for any future failures of the system (including the possibly
  lethal consequences thereof).

  Now, while I am in complete agreement with the idea of professionals having a
  responsibility for what they do, the fact of the matter is that the committee
  who prepared this particular report have not the faintest idea of just what
  mathematics is about, and their faith in the notion of a "rigorous
  mathematical proof" would be touching if in the present context it were not
  so potentially dangerous.

  To put it simply, a mathematical approach to the writing of computer programs
  is highly likely to result in better, more efficient, and more reliable
  programs, than would a less structured approach, but that is all.  There
  can be no, repeat no, question of such an approach giving rise to a
  guaranteed product of the kind suggested.  A mathematical proof of any
  reasonable length is just as likely to contain an error as is a computer
  program of the same length.  Mathematics helps.  It cannot cure.

  The writers of the aforementioned report would do well to read the article on
  program verification written by De Millo, Lipton and Perlis in the recently
  published book "New Directions in the Philosophy of Mathematics," edited by
  Thomas Tymoczko and published by Birkhauser Verlag of Basel in Switzerland.
  Indeed, in spite of its possibly daunting title, I can recommend this book to
  anyone interested in mathematics and, in particular, its relationship to
  computing.  Though - as with any compilation - there is some variation in the
  quality of the various articles, overall the book is worth getting hold of.

  One particular chapter deserves special mention, and that is the account
  of the "Ideal Mathematician" written by Philip Davis and Reuben Hersh.  As
  well as being hilariously funny, this succeeds in providing an uncannily
  accurate portrait of the typical, present day pure mathematician.  (Indeed,
  I suspect that its humour is a direct consequence of its accuracy.) Read it
  if you want to discover what characters like me do for the greater part of
  our working day.

  The Davis and Hersh piece is taken directly from their award-winning book
  "The Mathematical Experience," published by Harvester Press (Brighton) in
  1981.  If you have not yet come across it, make sure you do.  It is, quite
  simply, the best general book on mathematics that has ever been written. So
  good, in fact, that when I heard that the same two authors had written a
  second book, I wrote at once to the publisher asking for a copy to review in
  this column.

  When my copy of "Descartes' Dream" by Davis and Hersh (Harvester, 1986) duly
  arrived, what a disappointment! Gone is the life and vitality of the
  previous book.  The short, unstructured chapters I found dull and
  unrewarding; the theme suggested by the title and the introductory chapters
  barely discernible in the rest of the book; and some of the writing is just
  plain bad.  (I hope it is just the effect of trying to follow a huge
  success, and before long another gem will be on the way.)

  Somewhat similar to "Descartes' Dream" is another new book from Birkhauser:
  "Discrete Thoughts" by Mark Kac, Gian-Carlo Rota, and Jacob Schwartz.  But
  where Davis and Hersh fail to convey any feeling of the vitality of
  mathematics in what they write, the assorted articles in this compilation
  are full of life, and consequently enjoyable to read.  Anyone interested
  enough to read this column regularly should get a lot out of this book,
  written by three of the world's best mathematicians/computer scientists.  So
  too should all those professional mathematicians who take their art too
  seriously, and those whose expectations of mathematics are far in excess of
  reality.  But this is where I came in.


Last minute changes

Roy Smith <phrivax!allegra!phri!phrivax.phri!roy@ucbvax.Berkeley.EDU>
Sun, 15 Mar 87 08:40:57 EST
In RISKS-4.63 David Barto writes:
> I thus ignored the problem, went to USENIX, and while I was gone the
> problem was reported.  (See what you get for making changes on a friday
> before going on a trip? :-)

Dave says this in jest, but it's got a lot more truth to it than he lets on.
All the careful planning and testing you may normally do isn't worth a damn
if you are willing to make last minute changes just before you lose control
over the situation, whether that means making a change just before you go on
vacation or adding the latest feature the day before you ship your product
to the customer.  It doesn't make much difference if we're talking computer
software or toasters.  
                                      Roy Smith, System Administrator, 
Public Health Research Institute, 455 First Avenue, New York, NY 10016


Risk in ``High'' Financing

Michael Wester <wester@aleph0.unm>
Thu, 12 Mar 87 22:17:09 MST
Excerpt from ``Risky moments in the money markets'' in U.S. News & World Report
of March 2, 1987

   According to the New York Fed, Wall Street's average daily volume of bank
wire transactions totals at least $1.2 trillion---an amount equal to one
quarter of the U.S.'s total annual economic activity---and could be as much as
$500 billion a day higher, though no one really knows.  Even at the lower
figure, that's five times the daily flow since the start of the decade.  Each
year, transaction volume leaps by nearly 25%, or double the annual growth rate
in the 1970s.  [...]

   The obvious fear is a financial accident that could bring the system down.
Banks must settle accounts daily, and a failure to pay up by one could cause a
chain reaction of problems for others.  Close calls in settling are more common
than is generally known.  A Federal Reserve Board official acknowledges the
number of ``breathless moments'' averages 10 a year.

   One of those scary scenarios developed in 1985, when the government-
securities market was severely disrupted by a computer software ``bug'' at the
Bank of New York, preventing settlement for a day and a half.  Only a $23.6
BILLION [my emphasis] emergency loan by the Fed got the wheels unstuck.  [...]

   Each day, billions of transactions move from country to country over a pair
of wire systems: The Clearing House Interbank Payments System, called CHIPS,
operated by 140 banks specializing in international finance, and the Federal
Reserve System's Fedwire, which links 7000 domestic banks and does the
bookkeeping for Treasury securities transfers among banks.

   The electronic linkages make it possible for money to whiz from computer to
computer so quickly that the same dollars can be used to finance up to seven
deals a day, compared to two in times past when paper checks were the principal
method of payment.  [...]

   One danger signal: Last year's daily transaction value was 24 times greater
than the amount of reserves banks had on deposit with the Federal Reserve
System, up from a 9.4 multiple in 1980.

   It has become common practice for banks to go deeply into hock each day,
often exceeding total assets in anticipation of payments they will receive
before it is time to balance their books at closing.  Such ``daylight
overdrafts'' account for as much as $110 billion to $120 billion on the
Fedwire and CHIPS.  [...]

   What makes the climbing debt even more unsettling is that payments move over
CHIPS and Fedwire systems that [Gerald] Corrigan [, head of the New York
Federal Reserve Bank,] describes as a ``hodgepodge of facilities, equipment,
software and controls that have little in common with each other.''  Even if
the hodgepodge is capable of handling the flow now, Corrigan and others worry
about it remaining adequate if transaction volume continues to grow as
astronomically as it has in recent years.  ``The money spent on computer
systems has not kept pace with the tremendous explosion in electronic
payments,'' says a Fed official.
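
The chain-reaction worry is easy to make concrete.  A toy sketch follows
(bank names, reserves, and obligations are all invented; this models neither
CHIPS nor Fedwire):

    # Each bank can settle only out of its reserves plus payments it has
    # already received, so one shortfall can propagate down the chain.

    reserves    = {"A": 10, "B": 5, "C": 5, "D": 8}
    obligations = [("A", "B", 12), ("B", "C", 9), ("C", "D", 7)]

    received = {bank: 0 for bank in reserves}
    failed = []

    for payer, payee, amount in obligations:
        if reserves[payer] + received[payer] >= amount:
            received[payee] += amount
        else:
            failed.append((payer, payee, amount))

    print("failed settlements:", failed)
    # Bank A is short by only 2, yet B and C, counting on incoming
    # funds, fail in turn.  Give A reserves of 12 and all three clear.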

Michael Wester --- University of New Mexico (Albuquerque, NM)
  ~{anlams|convex|csu-cs|gatech|lanl|ogcvax|pur-ee|ucbvax}!unmvax!aleph0!wester


Risk at Crown Books

<srt@CS.UCLA.EDU>
Fri, 13 Mar 87 11:26:17 PST
Crown Books here in Los Angeles has taken to using an inventory control
system where magnetic tags inside books are scrambled by passing them over
a strong permanent magnet after the books are sold.

Then Crown Books started selling software.    

    Scott R. Turner    UUCP:  ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt
    DRAGNET:  ...!{channing,streisand,joe-friday}!srt@dragnet-relay.arpa

         [We've had various hi-tech systems that were trivial to beat 
         with lo-tech.  This one converts remanence into remnants.   PGN]


Human errors in computer systems — another reference

Jack Goldberg <JGOLDBERG@CSL.SRI.COM>
Fri 13 Mar 87 12:49:19-PST
Jens Rasmussen "The Human As A Systems Component", chapter 3 in Human
Interaction with Computers, edited by H.T. Smith and T.R.G. Green, Academic 
Press, 1980, paperback, London and New York editions.  The book is a nice and 
diverse collection.  Rasmussen discusses operator error, but not designer 
error.  The chapter by Green, "Programming as a Cognitive Activity", touches 
on errors in program design.  He criticizes a paper by Mills that implies that 
top-down design is the way to design good programs, and praises a 1976 paper by 
Denning that rejects the proposition that the process of creating a 
well-structured design is (in general) well-structured.


Requests for War Stories in Scientific Programming

Dennis Stevenson <steve@hubcap.UUCP>
16 Mar 87 18:15:11 GMT
I have to give a pitch about software environments for developing scientific
programs.  One of the points that I would like to bring out is the "cost" of
having a scientific model improperly coded and therefore spuriously rejected.
Can anyone provide me with anecdotes (all names will be withheld) concerning
this point?  Also, if anyone has cogent arguments on the use of development
environments/automatic programming in the scientific context, I'd appreciate
them.
         D. E. Stevenson  csnet: dsteven@clemson UUCP:  gatech!clemson!steve


TFR and F-111s

Eugene Miya <eugene@ames-nas.arpa>
Mon, 16 Mar 87 11:04:48 PST
In RISKS, you passed along some interesting scuttlebutt about mortar rounds
and TFR.  From what I know about the workings of radar, I would tend to be
skeptical of the incident, because radar systems have to take things like the
dielectric constant of materials into account [slightly more complex than
this, but I'm not here to talk about radar] ("earth" differs greatly from
chaff, moving earth (explosion debris) differs from static earth, and earth
particle size is also significant).  But what is interesting is what we don't
really know about radar.  I say, "I am skeptical," not "You are wrong."
What a radar/avionics person would tend to do is check this out empirically.
All aircraft are checked out in chambers to determine their base radar
signatures (empirically) because we don't have good models of radar return.
So I would think the people at Hughes (Hugh Aircrash ;-) would have tested
their radar under this circumstance as soon as it was proposed.  BTW, this is
what is now also done for EMP testing.

--eugene miya


An Open University Text Book

Brian Randell <brian%kelpie.newcastle.ac.uk@Cs.Ucl.AC.UK>
Fri, 13 Mar 87 17:46:16 gmt
A colleague of mine recently lent me the following:

Understanding Systems Failures, by V. Bignell and J. Fortune 
(Manchester University Press) 1984 (p/b).  To quote from the blurb on the back:

   "This book outlines a common approach to the understanding of many
  different kinds of failure: failure of machines, of individuals, of groups
  and businesses.

    "A dozen case histories are discussed by the authors. They range from the
  accident at the Three Mile Island power station to the collapse of
  Rolls-Royce and the sinking of a North Sea rig, each a result of a variety
  of faults and failures. Failures are then analyzed through an approach based
  on the identification of the systems that failed and a comparison of these
  with a variety of standard systems.

    "The stories of many of these failures have never been written from such a 
  perspective before, and this is the first time that a wide range of
  studies has been brought together to provide an understanding of failure
  in its widest possible sense.

    "Understanding Systems Failures is the set book for the Open University
  course T301, 'Complexity, Management and Change: Applying a Systems
  Approach'. It will be useful to students and teachers of management,
  business studies, administration and engineering."

The above seems a fair description to me. There is, as far as I can tell,
nothing explicitly related to computers in the entire book, but it is
nevertheless a book which might be of interest to the RISKS community - it
would, for example, provide a good (and cheap!) source of quite detailed
background factual material for students who were being required to analyze
what part computers might play in decreasing (or increasing!) the likelihood
and seriousness of various types of system failure.

Brian Randell - Computing Laboratory, University of Newcastle upon Tyne

  UUCP  : <UK>!ukc!cheviot!brian      JANET : brian@uk.ac.newcastle.cheviot


US NEWS article on 'Smart' Weapons - questions and concerns

Jon Jacky <jon@june.cs.washington.edu>
Fri, 13 Mar 87 09:55:16 PST
The cover story of the March 16, 1987 issue of US NEWS AND WORLD REPORT
is a long and colorfully-illustrated story on various high-technology
tactical weapons.  The story is somewhat informative but isn't really clear
about which weapons already exist and are deployed, which are in development
now, and which are just gleams in someone's eye.  In particular, the article
blurs the distinctions between what appear to be three rather distinct 
categories of weapons:

1. Precision guided weapons - The soldier selects the target and guides
the weapon all the way to the target.  These include the TOW wire-guided
rockets and the various laser-guided bombs (which work because someone
shines a laser spot on the target, which the bomb homes in on).  These are by now
deployed all over the place and often work well, although they are not 
panaceas.  A difficulty is that the soldier must often remain exposed during
the whole flight time of the weapon.

2. "Fire and forget" weapons - The soldier selects the target, but the weapon
guides itself to the target.  This is significantly harder.  The most 
effective examples seem to depend on the target making itself very 
conspicuous, for example the HARM anti-radiation missiles that home in on
radar beacons.  The article also describes AMRAAM,  an air-to-air missile of
which it is said "a pilot can fire as soon as he detects an enemy aircraft.
He can immediately steer clear while the missile tracks and kills the enemy
with no further help." The story says AMRAAM is "costly and controversial"
but is "now being tested."  Is this for real?  I vaguely recall hearing
about AMRAAM off and on for many years, and thought it was in a lot of
trouble, a bit like the Sgt. York.

3. Autonomous weapons - The weapon itself selects the target.  I have a lot 
of trouble with this one.  For one thing, it is obviously a lot more difficult
technically than even "fire and forget."  The article rather blurs this
distinction, saying:

"Smart bombs that require human control might not be good enough. ... A simple
stick-figure picture of a target, such as a railroad bridge, is put into one
"autonomous guided bomb" under development.  Launched at very low level with
a strap-on rocket, the bomb flies a preplanned route until it sees something
to attack that matches its computer's picture."

Does anyone recognize the project referred to here?  Is this thought feasible?
Based on my understanding of the state of the art in image understanding, I 
would have thought not.  Does this possibly represent some reporter's 
understanding of some rather speculative document like the 1983 DARPA
Strategic Computing Report?
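
For concreteness, here is roughly what "matching a stored picture" involves
at its simplest: sliding a template over an image and scoring the overlap.
This is purely an illustrative sketch of 1980s-style correlation matching,
not a claim about how the reported weapon works:

    import numpy as np

    def best_match(image, template):
        """Slide template over image; return (row, col, score) of the
        best normalized-correlation match (1.0 = perfect)."""
        th, tw = template.shape
        t = template - template.mean()
        best = (0, 0, -1.0)
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                w = image[r:r+th, c:c+tw]
                w = w - w.mean()
                denom = np.sqrt((w * w).sum() * (t * t).sum())
                score = (w * t).sum() / denom if denom > 0 else 0.0
                if score > best[2]:
                    best = (r, c, score)
        return best

    # Synthetic example: find a small diagonal feature in a blank scene.
    img = np.zeros((8, 8))
    img[3, 4] = img[4, 5] = 1.0
    tpl = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
    print(best_match(img, tpl))    # -> best at (3, 4) with score 1.0

Even this toy works only when the sensed image closely resembles the stored
template; changes of aspect, scale, clutter, or weather defeat it, which is
the nub of the feasibility question above.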

Another autonomous weapon which is evidently farther along is SADARM:

"The Army's Sense and Destroy Armor (SADARM) smart-weapon system uses
advanced radar, heat sensors, and a miniature onboard computer ...  Fired
from artillery, ... the submunitions, each a small, self-contained weapon,
would pop a small parachute and spin slowly down as it scans for telltale
signatures of self-propelled guns.  Once it sensed the presence of a target,
it would aim for the center and fire an explosively formed slug of
metal that slams into the lightly armored top of the vehicle, filling the
crew compartment with a hail of deadly shrapnel."

What are these "telltale signatures?"  Are they all that discriminatory?
Elsewhere, the article implies that distinguishing tanks from trucks and
jeeps is not much of a problem.  Is that true, _in the context of this
kind of weapon_?

The article strives for journalistic balance in the usual way:  Proponent
A says these are necessary and would be effective, critic B charges they
may be ineffective and we should not become too dependent.  What I find
missing is the notion that perhaps such judgments need not be based on 
personal opinion, that it ought to be possible to design tests that determine
these things.  That is, maybe A is right and B is wrong (or vice versa).
I assume the people who work on these understand that, but
the concept never really appears in the article.  Also, the article implies 
that the strategy and doctrine of relying rather heavily on this kind of
stuff is almost dogma by now, rather than still being provisional and 
much debated in strategy circles.  Is that true?

The article is especially good in explaining why such weapons are thought
necessary: 

Population trends tell the story ... West Germany has the world's lowest
birth rate. ... By 1994 the draftee pool will shrink nearly in half.  In 
America, political realities impose an equally inflexible obstacle.  "How 
far do you think a President would get who wanted to reinstate the draft,
expand the standing armies by three or four times, and deploy a major portion
of that force overseas?" asked Joseph Braddock (of the defense think-tank,
BDM).  "We don't have much choice," adds former Defense Secretary Harold 
Brown.  "We've got to choose quality over quantity."

-Jonathan Jacky, University of Washington
