The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 4 Issue 19

Wednesday, 26 November 1986

Contents

o Very Brief Comments on the Current Issues
Kim Collins
o The Audi discussion is relevant
Hal Murray
o Audi 5000
Roy Smith
o Laser-printer health risks; also, how to get ACARD report
Jonathan Bowen
o Data point on error rate in large systems
Hal Murray
o Re: Program Trading
Roger Mann
o Technical merits of SDI
from Richard Scribner
o Info on RISKS (comp.risks)

Very Brief Comments on the Current Issues

"Kim P. Collins" <kpc%duke.csnet@RELAY.CS.NET>
Wed, 26 Nov 86 13:32:05 EST
Verification
  It seems to me that there are a priori limits to the usefulness of
  verification for software engineering.  Proving that a program will
  work, even with the best system, is itself a non-trivial process, and
  hence subject to some of the same problems as the software design
  process.

Relevance of contributions
  I think that we need not have computers involved for contributions to
  be relevant.  I see the limits being only those things that definitely
  belong elsewhere.  Computer science is a cybernetic science and a 
  science of cybernetics.  Cybernetics covers a lot.

Audi 5000
  The car must be incredibly powerful, or its control system INCREDIBLY
  unstable, to have caused so much damage.  From an engineering perspective, 
  assuming that it is not human error that is causing these occurrences, it
  seems that the unwise thing was to use an active control system (hence a
  risky one) with such a powerful machine.

New subject
  Any comments on active vs. passive control structures?  For instance,
  having a skyscraper that has flexible material so that in high winds
  it bends and does not fail, VS having a skyscraper that has guy wires
  connected to winches that are controlled by a computer that tests wind
  velocities, etc.

  My opinion is that ceteris paribus (and even ceteris non paribus in many
  cases) passive control structures are to be trusted and used far more
  than active control structures.  With active control structures, there
  are far more layers of abstraction and far more theories, designs, and
  sometimes materials that can fail.  (Other reasons exist.)
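The active/passive distinction above can be made concrete with a toy sketch (my own illustration, in Python; the numbers and function names are invented, not from the discussion).  An active structure needs a working sensor, controller, and actuator at every step, while a passive one sheds load with nothing in the loop to fail:

```python
# Toy illustration of why an active control structure stacks extra
# failure points: sensor, controller, and actuator must all work.

def active_damping(wind, sensor_ok=True, gain=0.8):
    """Counteract wind load with a computed winch correction."""
    measured = wind if sensor_ok else 0.0   # a dead sensor reads zero
    correction = -gain * measured           # controller decides
    return wind + correction                # net load after actuation

def passive_damping(wind, flexibility=0.8):
    """A flexible structure sheds load with no sensing or computation."""
    return wind * (1 - flexibility)

# With everything working, both reduce a load of 100 to about 20.
# If the sensor fails, the active system applies no correction at all.
```

The point of the sketch is not the numbers but the dependency chain: the passive version has one theory (material flexibility) that can be wrong, the active one has three.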

Computerization of nuclear power plants
  Computers can reduce the risks of cognitive overload and other human 
  problems, but they also have some of the problems raised above.  One
  advertisement by Carolina Power and Light during the most heated part
  of the Shearon Harris plant controversy here in NC said that the plant
  here would fail by dint of gravity in a relatively safe manner.  The
  plant in Chernobyl, it said or implied (I don't remember which), would
  not.  This is the active/passive control structure dichotomy applied to 
  one particular part of the computerization of a nuclear plant.  I think
  that we need to look at the different parts during the design stage and
  make certain that we minimize the active.  (No opinion on nuclear power is
  intended here.)

CSNET: kpc@duke, UUCP:  {ihnp4!decvax}!duke!kpc


The Audi discussion is relevant

<Murray.pa@Xerox.COM>
Wed, 26 Nov 86 17:09:20 PST
"The Audi case is one in which computer relevance is not at all clear."

It seemed quite relevant to me on two grounds.

First, adding a computer to an automobile is an important social
experiment, even if Audi didn't know they were taking part in one. I
can't think of any other application where people who probably don't
know much about computers are now depending upon computers as part of a
large complicated system where errors can easily kill people. I don't
watch TV, so I'm pleased to see that sort of information in RISKS.

The second aspect is the normal computer engineering problem (in a high
risk situation). I would like to know what went wrong. Hardware?
Software? System integration? Specification oversight? .... It's
probably a small computer and thus not very exciting relative to big
systems with megabytes of memory and millions of lines of code. Since
the results of a problem have been demonstrated to be very important (to
at least a few people), I think we should investigate this case in hopes
of learning something. Maybe it will even be easier to analyze because
the computer part of the system is so small.


Audi 5000

Roy Smith <allegra!phri!roy@ucbvax.Berkeley.EDU>
Wed, 26 Nov 86 00:23:49 est
I also saw the 60 Minutes episode.  From the tone of the various messages in
RISKS 4.17, it sounds like everybody believes Audi is at fault.  All I saw
was a lot of anecdotal evidence and a lot of people who seem to think that
if they say something often enough and with enough emotion, it will become
true.  Lacking any real facts, I can't begin to make up my mind what the
answer is.  I'm certainly not going to decide based on the 60 Minutes
testimony of a woman who ran over her own son.  This is admittedly a
terrible thing to happen, but why should we give her claim that she had her
foot on the brake pedal any more or less credence than the claims of the
Audi engineers?  A comment:

  > Clive Dawson <AI.CLIVE@MCC.COM>
  > One of the more memorable quotes from Audi: "We're not saying we can't FIND
  > anything wrong with the car; we're saying there ISN'T anything wrong with
  > the car."

    Indeed, this is such a patently stupid thing to say that I'm now
almost *convinced* that there must be something wrong with the car.  Any
company that would hire somebody who would say something so absurd must
have problems.  Imagine somebody telling you "I'm not saying we can't FIND
any bugs in the SDI system, I'm telling you there AREN'T any." :-)

Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016


Laser-printer health risks; also, how to get ACARD report

Jonathan Bowen <bowen%sevax.prg.oxford.ac.uk@Cs.Ucl.AC.UK>
Wed, 26 Nov 86 15:22:08 GMT
  Front page headlines from Computer News, 20 November 1986:

  `Health risk fears spur CCTA to probe laser standards'

  `Fears over laser printers have spurred the government's computer purchasing
  agency into questioning health and safety standards.  The Treasury's Central
  Computer and Telecommunications Agency (CCTA) has said it will investigate
  claims that the printers can cause chest infections, blindness and other
  serious health problems.  Already one major UK user, British Rail (BR), has
  delayed a decision on buying printers because of a lack of published safety
  standards.  
  ....leading laser printer-makers Apple, Hewlett-Packard and Xerox
  denied their products could be harmful.
  ....A senior CCTA official said: "We have looked at lasers...they
  can cause temporary blindness to some people."
  ....white collar union Apex, said: "...Many of our members within
  the industry have reservations about the safety of laser printers."
  Already the use of laser printers has caused a three-day strike
  by Danish postal workers until they were given safety assurances
  by the government.
  A report from a leading Danish laboratory has said damage to the
  retina and lungs can be caused by laser printers.
  ...In 1981, IBM voluntarily withdrew one substance, trinitrofluorenone (TNF),
  which was a photoconductor constituent in its Model 1 3800 laser printer.
  An IBM spokesman said: "We established it had a potential to be harmful,
  although not in the way we were using it."'

Is this going to be the same sort of scare as that associated
with VDUs? Has anyone else heard of these problems? Are there
appropriate safety standards in the US or elsewhere?

By the way, for anyone interested in the ACARD report, here is
an HMSO address:

  Her Majesty's Stationery Office, PO Box 276, London SW8 5DT, England
  Tel +44-1-622-3316

The cost of the report is 6 pounds. The HMSO will invoice you if
you apply to the above address. (Be prepared to pay in pounds.)

Jonathan Bowen


Data point on error rate in large systems [Grapevine rot?]

<Murray.pa@Xerox.COM>
Wed, 26 Nov 86 18:55:35 PST
Grapevine is the mail system used by the Xerox R+D community. It has been
operational since 1981. Currently, there are 21 servers and roughly 4000
users. The servers have accumulated roughly 75 server-years of up time.

This spring, we discovered a fatal bug in the server code. It's been there
from the start. It was a simple recursive error in a very unlikely case.
Because the case was also uninteresting, nobody had bothered to "try it".

Fine print, if anybody cares:

The Grapevine database has two types of entries: groups and individuals.  An
individual is normally a person who reads/sends mail. A group is normally a
distribution list or an access control list. The members of a group can be
either individuals or other groups. Aside from the membership list, a group
also has a list of owners. The owners of a list are allowed to update it.
There are also pseudo groups. If you send a message to "Owners-xxx", the
system distributes the message to the owners (rather than members) of xxx.

Since a group can have members that are groups, there is the obvious
recursive problem. To check for this, the code that expands the
membership of a group runs up the call stack to see if another instance
of itself is already expanding this group. Unfortunately, the code that
processed Owners-xxx asked if anybody was already expanding xxx, while
they all thought they were working on Owners-xxx. Thus if Owners-xxx was
an owner of xxx, and anybody asked if Joe was an owner of xxx, poof.
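For illustration, the flaw can be sketched like this (a hypothetical Python rendering; all names and data structures here are invented, not Grapevine's actual code).  The cycle check asks about the base group while every caller records itself under the pseudo-group name, so the check never fires:

```python
# Hypothetical sketch of the Owners-xxx expansion bug; names invented.
# xxx's owner list includes the pseudo group Owners-xxx itself.
DB = {"xxx": {"members": ["joe"], "owners": ["owners-xxx", "alice"]}}

def expand_owners(name, in_progress, buggy=True):
    """Expand a pseudo group 'owners-<g>' into individual owners."""
    base = name[len("owners-"):]
    # The buggy version asks whether anyone is already expanding the
    # base group "xxx" -- but callers only ever record themselves as
    # "owners-xxx", so the recursion check never succeeds.
    key = base if buggy else name
    if key in in_progress:
        return set()
    in_progress = in_progress | {name}      # records "owners-xxx"
    result = set()
    for owner in DB[base]["owners"]:
        if owner.startswith("owners-"):
            result |= expand_owners(owner, in_progress, buggy)
        else:
            result.add(owner)               # an ordinary individual
    return result
```

With `buggy=False` the expansion terminates and yields the individual owners; with `buggy=True`, asking about the owners of xxx recurses without bound -- poof.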

PS: Mike Schroeder told me that they used to discover a new horrible
bug/oversight roughly every time the size of the system doubled.


Re: Program Trading

<RMann%pco@HI-MULTICS.ARPA>
Wed, 26 Nov 86 13:48 MST
I apologize to anyone for carrying this on further, but I am still not
convinced that computers are creating the wide stock price swings that
we see today in the market.  Assuming a model of some sort that detects
"inefficiencies", there must be a range of stock, option, or futures prices
for which the inefficiency holds.  Beyond those thresholds the no-lose
situation disappears, and the trade should be avoided.

Now, I am a dabbler in stocks and I know about limit orders.  A buy limit
order is filled only at or below a specified price; a sell limit order only
at or above one.  This is extremely useful when trying to establish a hedged
position.  Now, I can't imagine
these super-sophisticated arbitrageurs issuing MARKET orders -- it is too
absurd to imagine.  If the hedger issues limit orders, the trades do not
occur and the stock price stays relatively stable.
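As a minimal sketch (my own, with invented names; real order handling is far more involved), the fill rule for a limit order is just a price comparison, which is why limit orders cannot chase a runaway price:

```python
# Simplified limit-order fill rule: a buy limit fills only at or below
# its limit price, a sell limit only at or above it.

def fills(side, limit_price, market_price):
    """Return True if a limit order would execute at market_price."""
    if side == "buy":
        return market_price <= limit_price
    if side == "sell":
        return market_price >= limit_price
    raise ValueError(side)

# A hedger using limit orders simply does not trade outside the range
# where the arbitrage holds -- the order waits instead of moving price.
```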

Now, is there anyone out there who has direct knowledge of these things
and is willing to spill the beans and give us the straight scoop?  Are
computers the risk here or not?


Technical merits of SDI

Peter G. Neumann <Neumann@CSL.SRI.COM>
Wed 26 Nov 86 13:22:31-PST
The following note from Richard A. Scribner, Committee on Science, Arms 
Control and National Security at the AAAS may be of interest to RISKS readers.

  A detailed discussion of the technical merits of SDI, particularly software,
  will be held as part of the First Annual AAAS Colloquium on Science, Arms
  Control, and National Security, 4-5 December 1986 in Washington DC. Among
  the distinguished speakers will be Lt. Gen. James Abrahamson, director of
  SDIO; James R. Schlesinger, Center for Strategic and International Studies;
  William Graham, science advisor to the President; Adm. Noel Gayler; Albert
  Carnesale, Dean of the Kennedy School of Government at Harvard; Dante 
  Fascell, chairman of the House Committee on Foreign Affairs and chairman of
  the House subcommittee on arms control; Kosta Tsipis, director of the MIT
  Program on Science and Technology for International Security.  For 
  information and registration details, please call the American Association
  for the Advancement of Science at 202-326-6490.
