The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 12 Issue 60

Wednesday 6 November 1991


o Driver arrested in computer muddle: Data protection problem
o Computer Saboteur Pleads Guilty
Rodney Hoffman
o Blaming the computer (again)
Randal L. Schwartz
o YAHIR (Yet another human interface risk)
Friedrich Knauss
o Certified Voting Program
Brian A Wichmann
o Electronically controlled bus transmission
Mark Seecof
o V-22 Tiltrotor Roll Sensors and Triple Redundancy
Mike Allard
o Re: FDA-HIMA Conference on Regulation of Software
Frank Houston
o RISKS of propagating legendary RISKS
Paul Karger
o Software safety, formal methods and standards
Jonathan Bowen via Jim Horning
o Info on RISKS (comp.risks)

Driver arrested in computer muddle: Data protection problem.

paj <>
6 Nov 1991 15:26:44-GMT
According to Computer Weekly, Oct 31 1991, a youth was mistakenly arrested
after the DVLA (Driver and Vehicle Licensing Agency) computer in Swansea
allowed two cars to be given the same registration plate.  When the poor guy
asked the DVLA for information on previous owners of his car, in an attempt to
sort out the mess, the DVLA refused.  The Data Protection Registrar has now
backed the DVLA.

It seems a pity that legislation that is supposed to protect the innocent
citizen from this sort of thing has in fact made life more difficult.

   [By coincidence, I had just sent off my January 1992 Inside Risks column,
   called "What's In a Name," devoted to such problems...  Here's one more to add
   to our rather large list of name- and ID-related horror stories!  PGN]

Computer Saboteur Pleads Guilty

Rodney Hoffman <>
Wed, 6 Nov 1991 06:50:55 PST
In RISKS-11.95, PGN reported on "Programmer Accused of Plotting to Sabotage
Missile Project."  Here's the next installment:

Computer Saboteur Pleads Guilty: Michael John Lauffenburger, 31, a former
General Dynamics computer programmer who planted a destructive `logic bomb' in
one of the San Diego defense contractor's mainframe computers, pleaded guilty
to one count of attempted computer tampering.  He faces up to one year in
prison and a fine of $100,000.

Federal prosecutors said Lauffenburger had hoped to increase his salary by
creating a problem only he could solve:  a program that was designed to destroy
a database of Atlas Rocket components.  He set the program to activate, then
resigned, hoping, investigators say, that the company would rehire him as a
highly paid consultant once it discovered the damage.  But another General
Dynamics programmer inadvertently ran across the program and alerted security,
which disarmed the program.

[Source: Wire service report in the `Los Angeles Times', 5 Nov. '91, p. D2]

Blaming the computer (again)

Randal L. Schwartz <>
Wed, 6 Nov 91 14:46:53 PST
Background: Oregon recently passed a property tax limitation.  Much gnashing of
teeth was heard when property owners received their recent tax bills, in which
property values had soared between 20% and 200%(!)  in the last 18 months,
leaving many owners with *larger* bills than in the previous cycle.

In today's Oregonian (Portland, Oregon):

           _Oregon assessments go up, but this one is just ridiculous_

  Californians are used to hearing stories of Oregonians giving them a
hard time.  But the tax assessment of nearly $100 million on one
California couple's Josephine County farmland [southwestern Oregon] was
only a computer error.  Honest.  [....]
  [The county officials] are scrambling to send out new tax bills to
the county's other 40,260 property owners to make up for the $986,312
that was incorrectly billed to the Millers.
  County officials discovered the error when Carol Miller called the
county assessor Oct. 25 to complain about the taxes on a 38.8-acre
parcel near Williams that she and her husband own.  Because there was
only a barn on the land, which was assessed as farmland, they should
have received an $8,850 assessment, instead of the $97 million property
valuation. Their tax bill should have been for just $117 [...].
  [Bill for someone's $70K home will go from $710 to $760 to make up
for the deficit from the bad math.]
  "It has been absolute bedlam around here," said [the county deputy
treasurer].  She said she had just about given up blaming the error on the
computer.  "So we are just sitting here taking the blame."
  Rhodes [the county assessor] said that the erroneous tax bill can be blamed
in part on [the recent legislation].  The tax limitation measure requires
assessment notices to be sent with the tax bills. Had the assessment notices
been sent out in the spring, as in previous years, the error would have been
caught before tax rates were computed and bills sent out.  It was a change
"that created the crack through which the error fell through," Rhodes said.
  County officials still are trying to figure out what exactly went wrong.  As
near as anyone can tell, it occurred when the assessor's office was updating
farm assessments.  A glitch of some kind occurred as the computer was figuring
the Millers' property.  "And it just kept on going until it ran out of digits,"
Rhodes said.
  The error affected only the one property, so everything else appears to be
functioning normally.  Rhodes said he hopes to be able to update the
17-year-old [!] software so that the computer will scan tax rolls for these
kinds of anomalies.

I find it amazing that they are using 17-year-old software.  I also find it
amusing that they had no cross check for "are we in the right ballpark for
total county assessments," and that they apparently believe everything else is
now correct.

Just another homeowner in Oregon, Randal L. Schwartz, Stonehenge Consulting
Services (503)777-0095

YAHIR (Yet another human interface risk)

Friedrich Knauss <megatek!>
Wed, 6 Nov 1991 00:41:01 GMT
At our company (as with many) the computer center is separate from the
engineering department. Administrative requests are sent to the support
division to be processed. Recently, we needed to retrieve a file from a
moderately recent backup tape. We sent in the request, and the retrieval was
done as requested. As an undesired fringe benefit, that entire neighborhood in
the directory tree was restored as well, overwriting several days' worth of work
in the process. The cause for this: When support receives a request they print
it out and process the request from hardcopy. The laser printer used for this
does not wrap lines (a not uncommon feature). As a result, the path printed out
in the request was truncated at a point several directories short of the actual
path, and the restore was done on the truncated tree overwriting everything
below it. Although several different varieties of safeguards could have
prevented this, none are in use. Other potential risks of this are left as an
exercise to the reader.
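The failure mode is easy to demonstrate.  Below is a sketch with a hypothetical
path and an assumed 40-column printer limit; the point is that truncating at the
print width can silently turn a file path into one of its ancestor directories:

```python
# Sketch of the failure mode described above (hypothetical paths, and an
# assumed 40-column printer limit): a printer that truncates rather than
# wraps can turn a requested file path into an ancestor directory,
# silently widening the scope of the restore.

PRINT_WIDTH = 40  # columns the printer emits before dropping the rest

def printed_form(request_line: str) -> str:
    """Model a printer that truncates instead of wrapping."""
    return request_line[:PRINT_WIDTH]

requested = "/proj/eng/widgets/drivers/v2/src/io.c"
seen_by_operator = printed_form("restore " + requested)
print(seen_by_operator)

# The operator restores whatever path survived truncation -- here an
# ancestor directory, so everything beneath it is overwritten.
restored_path = seen_by_operator[len("restore "):]
assert requested.startswith(restored_path) and restored_path != requested
```

A trailing marker on every printed line (so truncation is visible), or a
confirmation step comparing the restore target against the on-line request,
would each have caught this.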

Certified Voting Program

Brian A Wichmann <>
Mon, 4 Nov 91 15:38:12 GMT
An example of Certified software for Elections


In July 1990, the Church of England changed the regulations concerning the
method of undertaking elections to allow computer programs to be used. The
Church of England uses the Single Transferable Vote method for most of its
elections, and therefore the counting process is non-trivial.

The new regulations required that any software used by the Church be
certified by the Electoral Reform Society (ERS) as adhering to the counting
method specified.

The author was asked to advise ERS as to the suitability of a particular
program for certification by them. The actual system consisted of several
programs, but the main logic was undertaken by a program consisting of
about 1,000 lines of standard Pascal. Since the Church wanted to use the
program in October 1990, only about 36 man-hours could be devoted to the
certification process (this was an unofficial activity of the author).


The process used to check the program was based upon the statement-coverage
metric, applied just to the main program. The reason for analysing just the main
program was that errors elsewhere are likely to be immediately apparent,
while the work needed to check the main logic is quite significant.

The main program was transferred from an IBM-PC to an Archimedes. This transfer
was done for practical reasons but acted as a cross-check on the code. The
program was then instrumented by hand to discover the statements executed.
Special test data was then constructed to execute all the statements. All
statements proved executable except two which would not be executable on either
the Archimedes or IBM-PC. In fact, a program was already available to generate
random test cases, so there was a potentially large source of data.

The specification of the computer program was to follow the same logic as the
hand counting rules. Hence, in principle, it was easy to check the output from
any specific test. However, for the larger tests, the amount of hand-checking
is significant, so it was important to minimise the checking required. This was
done by computing the minimum number of test cases which would ensure all the
(feasible) statements were executed. This left 13 test cases which were hand
checked by the ERS expert --- the ideal `oracle' in this case.
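The minimisation step described above is, in general, the NP-hard set-cover
problem, so in practice a greedy approximation is the usual approach.  Here is a
sketch with invented coverage data (the real exercise worked from hand
instrumentation of the Pascal program):

```python
# A sketch of test-set minimisation: pick a small set of test cases that
# together execute every feasible statement.  Exact minimisation is the
# NP-hard set-cover problem; the greedy heuristic below is the standard
# approximation.  Coverage data here is invented for illustration.

def minimise_tests(coverage):
    """Greedily choose tests until every statement covered by any test is hit.

    coverage maps a test name to the set of statement numbers it executes.
    """
    needed = set().union(*coverage.values())
    chosen = []
    while needed:
        # Pick the test covering the most still-uncovered statements.
        best = max(coverage, key=lambda t: len(coverage[t] & needed))
        chosen.append(best)
        needed -= coverage[best]
    return chosen

coverage = {
    "t1": {1, 2, 3, 4},
    "t2": {3, 4, 5},
    "t3": {5, 6},
    "t4": {1, 6},
}
print(minimise_tests(coverage))  # ['t1', 't3']
```

Cutting the candidate set down this way is what made hand-checking by the ERS
expert feasible within the 36 hours available.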

The result of this exercise was that one significant bug was found and about
six minor ones. No fault has been found subsequently in the program (although a
minor fault has been found in another program concerned with the data).

The author therefore concludes that this is a cost-effective method of
improving the quality of software (assuming that the original development did
not include this same process).


Two dioceses used the computer program to elect their representatives to the
General Synod in 1990. This Synod will decide on the final stages of
admitting women to the ministry.

The Church of England wishes the program to be extended to include provisions
for `constraints'. However, it is clear that the general problem of including
constraints is NP-complete.

Brian Wichmann (

Electronically controlled bus transmission

Mark Seecof <>
Tue, 5 Nov 91 11:36:07 -0800
At a hearing of the safety board investigating the crash of a chartered bus
carrying Girl Scouts (which killed several and injured the rest) the maker of
the bus confirmed that its automatic transmission was designed to shift up when
the engine was in danger of "over-revving" regardless of the gear range
selected by the driver using the electronic controls.

Also, the bus maker explained that the transmission would not obey a control
selection of a lower gear range if the engine were already running fast.  This
would defeat any attempt by a driver to obtain greater compression braking after
a partial or complete brake failure.
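The described behaviour amounts to a simple, and in this setting dangerous,
priority rule.  A sketch of that logic (gear numbers and the rpm threshold are
invented; the real controller is certainly more elaborate):

```python
# A sketch (invented numbers) of the override behaviour described above:
# an engine-protection rule that upshifts, and ignores the driver's
# selected range, whenever engine speed is high.

OVER_REV_RPM = 2300  # hypothetical engine-protection threshold

def effective_gear(selected, current, rpm):
    """Gear the controller actually engages (a sketch, not the real logic)."""
    if rpm >= OVER_REV_RPM:
        # Protect the engine: shift up and refuse any downshift request.
        return current + 1
    return selected

# Driver descending in 3rd at high rpm selects low range for braking:
print(effective_gear(selected=1, current=3, rpm=2500))  # 4
```

The driver asks for more compression braking and gets less, precisely in the
situation where braking matters most.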

The California Highway Patrol investigation had concluded that the front brakes
of the bus (which ran off a cliff while descending a steep mountain near Palm
Springs because the driver allowed it to go too fast) were out of adjustment,
that the rear brakes had overheated and failed, and that the automatic
transmission had been in too high a gear for the engine to provide adequate
compression braking for safety.  The gear position of the automatic
transmission was determined by examining the wreckage.  It is not clear that
the driver had selected the high gear the bus was in.  The CHP has suggested
that the crash need not have occurred had the bus been in the proper gear for
the downward drive.

The driving instructor employed by the bus company testified that he was
unaware that the transmission would shift up even if low range had been
selected, so he did not train the driver of the ill-fated bus to avoid this
potential occurrence.  The instructor was surprised to learn that the
transmission was designed to disobey its control setting.  The driver was
killed in the crash, so it is not possible to question him about his operation
of the bus.

I remember seeing something in RISKS about automatic transmissions on certain
recently-built passenger cars.  Certainly this situation reminds one of the
A-320 control limits.

It would be entirely proper to sacrifice a bus engine, even the whole drive
train, to save the lives of a busload of Girl Scouts.  I think mechanisms which
override the controls of a vehicle or other device to protect the machine from
harm at the expense of its users are wicked.

Also, it's poor man-machine interface design to have a control which doesn't
work.  Why
have a "low range" setting on an automatic transmission if the thing will shift
up regardless?  And why change a very standard design (the availability of low
range on automatic transmissions for safety purposes) without need or warning?
If there's a lawsuit, I hope the bus maker loses.

Mark Seecof <>

V-22 Tiltrotor Roll Sensors and Triple Redundancy

Mike Allard <acd4!IEDV5!mja@uunet.UU.NET>
Mon, 4 Nov 91 14:44:06 EST
The following is an excerpt from "V-22 Tiltrotor Test Flights Resume," from
the November 1991 issue of "AOPA Pilot" magazine (used without permission;
all spelling errors are mine):

"Aircraft number five crashed on June 11 during its first test flight at
New Castle County Airport in Wilmington, Delaware. [...] The pilots were
attempting to land when the V-22 became unstable in roll, and the left-hand
engine struck the ground.  The aircraft lifted, rolled left, and crashed on
the runway, ending up on its back.  The Navy halted further flights pending
an investigation.

"The Navy probe, concluded in September, attributed the crash to faulty
hardware connections in the V-22.  Two roll-rate sensors, which provide
roll-rate information to the flight control computer, were hooked up
backward, according to the Navy.  There are three such sensors, which
provide a triple-redundant system; if one sensor sends an erroneous signal,
it is 'voted out' by the other two.  Because two of the three sensors were
reverse-wired, the input from the sole sensor providing correct roll
information was canceled out.  The result: 'The aircraft went divergent in
the lateral axis and impacted the ground.'

"Further investigation revealed that one out of three roll-rate sensors was
reverse-wired in two other [V-22] Ospreys, but that snafu has been corrected."

An observation by a pilot and programmer here: "I guess fault tolerance only
works if you wire up your sensors right."
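The voting scheme the article describes can be sketched in a few lines, and the
sketch makes the failure obvious: a majority vote selects the cluster of
readings that agree with one another, so two sign-inverted sensors out-vote the
one correct sensor.  Numbers and tolerance here are invented:

```python
# A sketch (with invented numbers) of 2-of-3 voting on roll-rate
# sensors, and why reverse-wiring two of the three defeats it: the two
# inverted readings agree with each other, so the single correct
# reading is the one "voted out".

def vote(readings, tolerance=0.5):
    """Return the mean of the largest cluster of mutually agreeing readings."""
    best_cluster = []
    for r in readings:
        cluster = [x for x in readings if abs(x - r) <= tolerance]
        if len(cluster) > len(best_cluster):
            best_cluster = cluster
    return sum(best_cluster) / len(best_cluster)

true_roll_rate = 3.0  # degrees/second, hypothetical
sensors = [true_roll_rate, -true_roll_rate, -true_roll_rate]  # two reverse-wired

print(vote(sensors))  # -3.0: the sole correct sensor is out-voted
```

Redundancy protects against independent random faults; a common-mode
installation error like reverse wiring violates the independence assumption the
voter relies on.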

Mike Allard, Applied Computing Devices, Inc. <uunet!acd4!mja>

Re: FDA-HIMA Conference on Regulation of Software

Frank Houston <>
Tue, 5 Nov 91 15:21:37 EST
I read Mr. Horn's report on the HIMA/FDA Conference with interest.
There are some misconceptions that need clearing up.

First, FDA regulation of software is not new.  The HIMA conference
was the latest and strongest public statement by FDA acknowledging
that software is regulated when it is used in certain ways.

Some of the history needs to be cleared up.  Mr. Horn writes:

<>First, what does the FDA regulate?
<>   1) Under the 1936 Act, any medical device, drug, or practice.
<>   2) Under the 1990 Safe Medical Devices Act, authority to examine
<>    devices was expanded.

This is sort of true, but incomplete.  The Medical Devices Act of 1976 first
gave FDA explicit authority and responsibility to regulate commerce in medical
devices.  Before 1976 the FDA's authority was implicit.  FDA has the authority
to regulate "manufacturing" practices, not medical practice.  The 1990 act
amended this authority in several ways, including expanded authority to inspect
medical device manufacturing firms.

Mr. Horn goes on:

<>Software may be involved in any of four ways:
<>   1) It may be a device
<>   2) It may be used in the manufacture of a device or drug
<>   3) It may be used in record keeping
<>   4) It may be contracted or purchased from a third party for one
<>      of the above.

Regulated software must either be a medical device or "a component, part, or
accessory" of a medical device.  Of the remaining categories, 2 and 3 are
subject to audits based on the FDA Good Manufacturing Practice regulations to
the extent they are substantially involved in controlling manufacturing
processes and keeping records of design, manufacturing, and service.  For
software in the 4th category, FDA investigators look for evidence that the
purchased or contracted software is safe and "fit for use."  Medical device
software may require stronger evidence of safety and fitness than manufacturing
or recordkeeping software.  The kind of evidence that FDA typically seeks
consists of V&V plans, Safety Plans, and records that the firm has followed the
plans.

Next comes a big misunderstanding:

<>FDA approval involves two steps: approval to market and approval to
<>sell. Approval to market involves one of two things:
<>   1) A PMA for new medical technologies (see an expert now).
<>   2) A 510(k) for equivalent medical technologies (substitutes for
<>   some previously approved device).
<> . . .
<>Then comes approval to sell.  This is based upon a Good
<>Manufacturing Practices (GMP) inspection.  Again, the inspection
<>detail will be a function of the risk to the patient and others.
<>For a minor risk item, they might not inspect at all.  Most likely,
<>they just verify by spot checks that the claims made in the 510(k)
<>are being kept.  For a major risk item, they may inspect a lot.  If
<>someone actually gets hurt, expect an army of inspectors swarming
<>over everything.

It is more accurate to say that permission to market a medical device involves
two processes, a premarket process of review and a process of periodic
inspection and surveillance of the firm.  Permission initially may be granted
through an approval in the case of PMA, or it may be granted through a finding
of "substantial equivalence to a previously marketed device" in the case of
510(k).  PMA stands for Pre-Market Approval, and is the only device "approval"
that FDA acknowledges officially.  Once a firm has been granted permission to
market it is subject to inspection by FDA investigators.

In theory, the rigor of an inspection does not depend on the level of risk
associated with a device.  In practice, it might.

More misunderstanding:

<>For a 510(k) approval there are three categories of approval
<>difficulty based upon the hazard to patients and others:
<>   1) minor, little risk of injury either direct or indirect
<>   2) moderate,
<>   3) major, risk of death

These categories are used to decide how closely to review an application to
market a medical device.  They have nothing to do with inspections by FDA field
investigators.  These are hazard categories, and the list has been limited to
three.  They are not the legislated categories, which are: Class I,
products for which no special controls or standards are needed; Class II,
products which need special controls like standards; and Class III, products
which require premarket approval.  These classes do not represent a hierarchy
of risk, although Class III devices are usually riskier than Class II which are
usually riskier than Class I.

By all means:

<>For more details ask the FDA for a copy of the 510(k) reviewers
<>guidance.  This is the document used by the 510(k) reviewer and is
<>freely available to the public.

Call the Division of Small Manufacturers Assistance, (301)443-6597.

The report goes on:

<>. . . there is no assumption of validity for off the shelf [software]

Knowing the current state of affairs with off-the-shelf software, this is a
rational choice for risk avoidance.  However it places a burden on the company
that wants or needs to use shrink-wrap applications.  Firms that use
shrink-wrap software should be forewarned to at least put the package through
its paces, i.e.  verify and validate their particular application, before
turning it loose in quality control monitoring, record keeping, etc.  You might
still be written up, but you have a stronger rebuttal than if you did no
planned testing.  Mr. Horn repeated a horror story that one drug firm presented
to illustrate the practices that can be written up as violations according to
the letter of the GMP regulations.

<>For more details, the FDA provides copies of GMP practices
<>regulations to anyone who asks.

Call the Division of Small Manufacturers Assistance.

Next misunderstanding:

<>This attention to software is new at the FDA.  It went into effect
<>this summer and more regulations take effect this fall.

The attention to software is not new.  The Therac incidents raised
the consciousness of the agency.  What Mr. Horn perceives as new is
the fruition of several years of internal training and discussion.

<>The other area that is catching people by surprise is the extent of
<>the definition of device and manufacture.

The legal definition of a medical device is in part: "... an instrument,
apparatus, implement, machine, contrivance, implant, in vitro reagent, or other
similar or related article, including any component, part or accessory which is
. . . (2) intended for use in the diagnosis of disease or other conditions, or
in the cure, mitigation, treatment, or prevention of disease, in man or other
animals ... (3) intended to affect the structure or any function of the body of
man or other animals."  Not a limiting definition, is it?  Note that the
definition hinges on the way the product is used.  If the product materially
affects diagnosis, treatment, etc., when it is used as intended, then it is a
medical device.  As further examples demonstrate, claims made in labeling and
advertising are considered evidence of intended use.

Manufacture includes such activities as repackaging for commercial
distribution, that is buying a product and reselling under a different label.
In such cases the repackager is held responsible for assuring the quality of
the product, not the supplier.

<>Most recently, the makers of blood bank software were hit.  They had
<>not previously realized that the database software for tracking blood
<>donations was a medical device and probably a class 3 device.

The blood bank situation is unfortunate.  Everybody loses, FDA, the firms in
the business, and the public.  I am not, however, aware of any plan to classify
blood bank software in class 3.  As I explained earlier, medical device
classification is not tied directly to risk.  You must read the law to
appreciate the complexity of classification, but I do not think blood bank
software, HIS software, or LIS software meets the legal criteria for anything
higher than class 2.

Blood bank software is used for much more than tracking donations.  Often blood
bankers depend on the computer to maintain the integrity of test results for
serious or fatal diseases that could be spread by infected blood.  These
results are reviewed as part of the decision to make blood units available for
human use.  Undetected errors will be fatal for the recipient of the improperly
released blood, because there is seldom time for redundant testing before the
blood is used.  It is a tough call to decide which is worse, the risk that a
patient might die because blood is not available or the risk that the patient
will contract AIDS if he survives.

<>The FDA approach differs from that of MoD and others in that there
<>is no FDA approved methodology. ...  They claim that this allows
<>them to accept new methodologies as they are proven.  It also lets
<>them reject anything and not expose them to the risk of making a [mistake].

Regrettably true.  That is one reason to foster industry standards for
acceptable methodologies.  FDA can be influenced by the weight of evidence and
expert opinion.  If industry standards produce demonstrably better software,
FDA will be hard-pressed to ignore them.  Similarly, a substantial consensus of
expert opinion is hard for FDA to ignore; for example, FDA has embraced the
concept of V&V in its recommendations for software review.

<>If anything goes wrong, its your fault and you (not the FDA)
<>are liable.

This is the case whether or not there are FDA approved development
methodologies.  Compliance with FDA regulations will not protect any firm from
product liability suits.  FDA regulations are completely compatible with good
business; they are incompatible with practices where cutting corners may cause
undue harm.

Frank Houston, Software Safety Champion, Food and Drug Administration Center
for Devices and Radiological Health.  All the usual disclaimers apply in that I
am not contributing in an official FDA capacity.  I just want the readers to
know that this is a well-informed contribution.

RISKS of propagating legendary RISKS (Fulmer, RISKS-12.52)

Paul Karger, 1 617 621 8994 <>
Tue, 05 Nov 91 18:11:13 -0500
In RISKS-12.52, there was a discussion about the market pushing out poorly-
designed products as part of a discussion of licensing software engineers.
Christopher E Fulmer wrote:

>2.  The market does tend to push out poorly-designed products.  However, for
>some products, it may not be desirable to wait for the market to decide.  After
>all, Audi's sales dropped after the problems with "Instant Acceleration" were
>found by real people, not before.

While it is true that there can be a time-lag in pushing out poorly-designed
products, the Audi "Instant Acceleration" problem that cost Audi 50% of its
sales turned out to be a result of driver error, as Audi had claimed all along.

(It is true that one could criticize Audi on human factors related to the pedal
placement, but that is very different from a criticism that the transmissions
and/or engine computers were faulty.  Since then, Audi and other manufacturers
have placed interlocks onto automatic transmissions to prevent shifting into
gear without having your foot on the brake pedal.)

This problem and the resolution that it was indeed driver error had been
discussed in RISKS at great length several years ago.  It is hard both for the
readers and our moderator (who works very hard and does an admirable job in
editing RISKS) to remember that this accusation was in fact disproven.  It is
very easy to criticize a manufacturer for producing an unsafe product without
actually proving that the manufacturer was at fault.  It is much
harder to undo that damage if the product was in fact OK.  Audi unjustifiably
lost over 50% of its market share and has yet to fully recover in the US.

Software safety, formal methods and standards [via Jim Horning]

Jonathan Bowen <>
6 Nov 91 12:29:45 GMT
I am writing a review paper on software safety, formal methods and standards.
In particular, I am looking at the recommendations for the use of formal
methods in software safety standards (and draft standards). So far I have
considered the (some draft) standards listed below. If anyone has any
recommendations of others to consider, please send me details of how to obtain
them (or the document itself if possible!).

I am also interested in general references in this area. I have
quite a few already, but I may have missed some. If they are
obscure and you can send me the paper itself, so much the better.

If there is enough interest, I will summarize the responses.

Jonathan Bowen, Oxford University Computing Laboratory, Programming
Research Group, 11 Keble Road, Oxford OX1 3QD, England.
Tel:     +44-865-272574 (direct) or 273840 (secretary)
FAX:     +44-865-272582 (direct) or 273839 (general)

ESA software engineering standards,
European Space Agency, 8-10 rue Mario-Nikis, 75738 Paris Cedex, France,
ESA PSS-05-0 Issue 2, February 1991.

Programmable Electronic Systems in Safety Related Applications:
 1. An Introductory Guide,
Health and Safety Executive,
HMSO, Publications Centre, PO Box 276, London SW8 5DT, UK, 1987.

Programmable Electronic Systems in Safety Related Applications:
 2. General Technical Guidelines,
Health and Safety Executive,
HMSO, Publications Centre, PO Box 276, London SW8 5DT, UK, 1987.

Software for computers in the application of
 industrial safety related systems,
International Electrotechnical Commission,
Technical Committee no. 65, 1989. (BS89/33006DC)

Functional safety of programmable electronic systems: Generic aspects,
International Electrotechnical Commission,
Technical Committee no. 65, 1989. (BS89/33005DC)

Standard for software safety plans,
Preliminary - subject to revision,
P1228, Software Safety Plans Working Group,
Software Engineering Standards Subcommittee,
IEEE Computer Society, USA, July 1991.

The Procurement of Safety Critical Software in Defence Equipment
(Part 1: Requirements, Part 2: Guidance),
Interim Defence Standard 00-55, Issue 1,
Ministry of Defence, Directorate of Standardization,
Kentigern House, 65 Brown Street, Glasgow G2 8EX, UK, 5 April 1991.

Hazard Analysis and Safety Classification of the Computer and
 Programmable Electronic System Elements of Defence Equipment,
Interim Defence Standard 00-56, Issue 1,
Ministry of Defence, Directorate of Standardization,
Kentigern House, 65 Brown Street, Glasgow G2 8EX, UK, 5 April 1991.

Safety related software for railway signalling,
BRB/LU Ltd/RIA technical specification no. 23,
Consultative Document, Railway Industry Association,
6 Buckingham Gate, London SW1E 6JP, UK, 1991.

Jonathan Bowen, <>
Oxford University Computing Laboratory.

   [Message forwarded to RISKS by (Jim Horning),
   who thinks RISKS readers could help here...  PGN]
