The RISKS Digest
Volume 4 Issue 17

Monday, 24th November 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Computer Risks and the Audi 5000
Howard Israel with excerpts from Brint Cooper
Charlie Hurd
Clive Dawson
Risks of changing Air Traffic Control software?
Greg Earle
Re: the UK Software-Verification Proposal
Bard Bloom
Program Trading
Howard Israel
Eric Nickell
dmc
Decision Making
Clive Dawson
Info on RISKS (comp.risks)

Computer Risks and the Audi 5000

Howard Israel <HIsrael@DOCKMASTER.ARPA>
Sun, 23 Nov 86 23:49 EST
The 23 November broadcast of 60 Minutes tore apart Audi, Inc.  It seems
that the Audi 5000 model (automatic transmission) has a terrible habit
of accelerating when moved from PARK to either FORWARD or REVERSE.
Audi has denied the problem, blaming driver error ("They step on the
gas instead of the brake").  Of course, this appears to be a problem
only with drivers of the 5000 model; none of the other models has such
poor drivers.

The "alleged" defect is blamed for about 250 known accidents, and at
least one death.                                       [See more below.]

The alleged causes of the alleged problem (I should have been an alleged
lawyer) include 1) excessive pressure build-up in the transmission, 2) a
faulty "vacuum" (not sure of the exact words ??) unit (which Audi has
voluntarily notified its customers needs replacement, or will result in
"performance" problems), and 3) a faulty on-board computer.

Although Audi insists that they cannot find a problem with the car, an
"independent" expert hired by a group of people that have all experienced
problems with the car (there are enough victims out there that a self-help
group was formed) actually demonstrated the gas pedal *visibly* moving
downward when the car was put into gear causing the alleged surging.

Even some valet parking garages have posted signs saying that they will
not accept Audi 5000s with automatic transmission.

Audi is so convinced that the problem is driver error that they have
issued a recall notice to install a safety switch so that the driver
cannot change gear unless the brake pedal is depressed.

Footnote:  Three accidents similar to the stated alleged problem have
occurred in cars that had the brake safety switch installed.  Audi said
that 2 of the cars had the switch improperly installed, and the third
was unexplained.
      [Clive Dawson recalled "driver error" being cited for the third case.]

This whole incident is reminiscent of the Ford Pinto fiasco.

The Federal Transportation Safety Board (??)  is investigating.

Audi (The Art of Engineering) came out looking very bad.  (But what else
would you expect from 60 Minutes?)

Corporate responsibility appears very low.  I would not be surprised if
they came out with a corporate apology within a week (in time for the
next broadcast) to try to save face.
                                               Howard Israel

   [It is unusual for RISKS to get four different reviews of the same TV
    program! Excerpts from the others follow, with moderator's effort to
    minimize duplication and achieve accuracy.  PGN]

  Excerpts-From: Brint Cooper <abc@BRL.ARPA>

    Even while the driver (quite literally) stands on the brake pedal, the
    car roars ahead.  One young woman ran over and killed her own
    three-year-old son. 

    The "idle stabilizer" was said to be responsible for keeping a minimum
    flow of fuel to the engine during idle when the brakes are applied.  The
    idle stabilizer is either a part of a computer-controlled system or is
    controlled by an on-board computer; it wasn't clear which.  

    Audi denies that anything is wrong.  Two Audi representatives appeared
    on camera to assert that they could find nothing wrong with the car.  
    They even claimed that the motorists are stepping on the wrong pedal.

         Brint

  Excerpts-From: churd@labs-b.bbn.com <Charlie Hurd>

    The cars have accelerated with enough force to punch through walls.
    Many of the cars have been totalled.

    Audi has checked the cars in question and failed to find any defects.  They
    claim that the drivers became confused and pressed on the gas pedal instead
    of the brake.  The drivers (one of them a police officer trained to drive
    under extreme conditions) maintain that they were trying to put the brake
    pedal through the floor, without effect. 

    It seems to me that this is a good response to Peter Stokes's question
    (RISKS-4.5) about the risks of buying/driving a car with a computer-
    controlled engine.  The only question I have is why the brakes did not
    stop the car.  Some of the victims said that they had to turn off the
    car to stop it.  Do Audi 5000s have anti-skid braking?  Could this have
    allowed the cars to keep moving?  Is this an example of many small
    malfunctions resulting in a *major* problem?

         Charlie

  Excerpts-From: Clive Dawson <AI.CLIVE@MCC.COM>

    This has resulted in at least one death. A young (6-year-old?) boy
    was let out of the car to open the garage door, after which the mother
    stepped on the brake and shifted to forward.  The car hit the boy, pushed
    him completely through the garage door and pinned his already-crushed body
    against the rear wall of the garage.  A heavy black skid mark was left
    which showed how even then the wheels continued to spin at a high rate of
    speed.  The Audi people claim that all of these accidents are the result of
    driver error, in which the accelerator is mistaken for the brake.  One of
    the more memorable quotes from Audi: "We're not saying we can't FIND
    anything wrong with the car; we're saying there ISN'T anything wrong with
    the car."

    Attention is focusing on a microprocessor-controlled mechanism which
    regulates the idle speed.  Apparently Audi has sent letters to all owners
    of the vehicles involved stating that this part will be replaced by Audi
    for "performance reasons".  The report didn't make it clear whether the
    microprocessor was an integral part of this part or not, so I don't know
    if this replacement will involve a change in the processor or its software.

    I don't know what the final verdict on this will be.  But listening
    to that devastated mother tell how she witnessed the death of her
    son, and knowing that the cause might eventually be tracked down to
    some software bug, sent chills down my spine.

         Clive


Risks of changing Air Traffic Control software?

Greg Earle <elroy!smeagol!earle@csvax.caltech.edu>
Fri, 21 Nov 86 22:47:04 pst
I haven't seen it mentioned yet, but I believe that last week I saw
a news story that purported to blame a crash of a small light plane
in the Southern California area on a changeover of software in
either a radar system or a general flight controller computer system,
causing the plane either to be lost from the screens or to be directed
into a hillside.  Since my memory is vague, perhaps someone else can
provide a better recollection of this RISK of computer software.

Greg Earle, JPL

            [The delay in running this item was due to an unsuccessful
             attempt to get further information...  PGN]


Re: the UK Software-Verification Proposal

Bard Bloom <bard@THEORY.LCS.MIT.EDU>
Sun, 23 Nov 86 12:41:15 est
Disclaimer: I'm a grad student working in semantics of programming languages,
and therefore qualified to pretend to know the answers to these questions.  I
haven't been studying semantics all that long, though.  These are solely my
opinions and bear no necessary resemblance to those of my advisor, my
department, or my ceramic dragon.

> From: dplatt@teknowledge-vaxc.ARPA (Dave Platt) (RISKS 4.16)

> 1) Are existing programming languages constructed in a way that makes
>    valid proofs-of-correctness practical (or even possible)?  I can
>    imagine that a thoroughly-specified language such as Ada [trademark
>    (tm) Department of Defense] might be better suited for proofs than
>    machine language; there's probably a whole spectrum in between.

No, they are not.  Actually, there are a few existing programming languages
(Euclid, for one) which are, but most popular ones are not.  A
precisely-specified language is easier to prove things about than an
imprecisely-specified one, of course.  I haven't seen anything approaching a
precise mathematical semantics for Ada; if the research in semantics of
distributed systems goes very well, we might be able to give you one in ten or
fifteen years if we're lucky.  The best languages for proving things about are
functional languages (FP, Hope, Lucid, ISWIM).  I have yet to hear of a "real
program" written in any of these.  

> 2) Is the state of the art well enough advanced to permit proofs of
>    correctness of programs running in a highly asynchronous, real-time
>    environment?

No.  Not even remotely.  We can't cope with slightly-asynchronous,
non-real-time environments in any general way.

> 3) Will the compilers have to be proved mathematically correct also?  or
>    might something like the Ada compiler/toolkit validation be adequate?

The compiler will have to be proved too, if the idea of proving programs
correct is to make any sense.

> 4) The report seems to imply that once a system is proven correct/safe,
>    it can be assumed to remain so (for the [limited] lifetime of its
>    License to Operate) so long as maintenance is performed by a
>    certified software engineer.  Is this reasonable?  [...]

It is reasonable if you re-prove the patched system.  I can't imagine it being
reasonable otherwise.  Note: you can probably patch the proof also, if it is
arranged in a nicely modular form.  

> 5) Many of the program "failures" I've encountered in "stable" software
>    have been due to unexpected inputs or unplanned conditions, rather than
>    to any identifiable error in the program itself.  Can any proof-of-
>    correctness guard against this sort of situation?

Not really.  All the proof guarantees is that the software does what the
specification says.  That's a big help, since you don't usually have even that.
But you have to get the specification right.
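
A toy Python illustration of that gap (mine, entirely hypothetical): a
specification that demands only "the output is sorted" is satisfied,
provably, by a program that throws its input away.

    # Hypothetical example: code that provably meets a bad specification.
    # The spec below forgot to require that the result be a permutation
    # of the input, so "correct" does not mean "useful".

    def bogus_sort(xs):
        return []          # trivially sorted, so it satisfies the spec

    def meets_spec(xs, result):
        # The (flawed) specification, written as a runtime check.
        return all(result[i] <= result[i + 1]
                   for i in range(len(result) - 1))

    assert meets_spec([3, 1, 2], bogus_sort([3, 1, 2]))   # passes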

(I can't even pretend to answer questions about legal aspects.)

> 8) Military systems such as the SDI control software would appear to belong
>    to the "disaster-level" classification... will they be subject to this
>    level of verification and legal responsibility, or will they be exempted
>    under national-security laws?  [Of course, if an SDI system fails,
>    I don't suppose that filing lawsuits against the programmer(s) is going
>    to be at the top of anybody's priority list...]

That's a terrifying thought: don't verify Star Wars, it's too secret to have
the code so exposed!  

-- Bard Bloom


Program Trading

Howard Israel <HIsrael@DOCKMASTER.ARPA>
Sun, 23 Nov 86 23:49 EST [Other half of Howard's message]
Today's Washington Post, Sunday, November 23, 1986, pg K1 [Business
Section] contains an interesting article on program trading and the
Finance theory behind it all.  (The following is partly based on the
article and partly from my own knowledge.)

The basic idea is to view all of the different financial instruments as
interrelated, even though the instruments may be traded on different
markets across the country (or around the globe).  The people who make
it all work are called "Quants" (for quantitative analysts).  The
"Quants" create models based on the markets, their interrelationships,
and known financial theory.  When an "inefficiency" occurs (i.e., a
price differential for an underlying security across two or more
markets that is big enough to cover the cost of the transactions
involved), the computers that monitor the information issue
simultaneous buy and sell orders in the appropriate places.  The net
effect is the *total* elimination of risk once the initial set of
transactions is complete (the winding up).  (This is a simplification
of it all, but the previous assertion in today's RISKS entry concerning
the "last trader" losing money is not accurate.)

The profit is "locked in" when the first set of trades are completed,
but will not be actually known until the positions are closed out
(winding down) at the end of the finanical instruments life.  The
"published" profit margin is said to be in the 7% to 9% (annualized)
range.  (Anything above the current T-bill rate is considered good.)
However, only each trader really knows what he is making.  (A personal
friend on "the street" claims that the profits are really much, much
higher because the "invested money" stays in the market a very short
time.  I am not convinced of this based upon my knowledge of the trading
--and "margin"-- necessary.)

The "Quants" differ from "Qualitative" traders, in that, Qualitative
traders base their trades on the perceived quality of the companies
(traditional recommendations of buy company ABC and sell company XYZ).

A nice analogy is made in the article to the gambling world.  The
"Quants" are the bookies, while the "Qualitative" traders are the River
Boat Gamblers that bet on instinct.

Has anybody thought of the implications of an error, given that the
computers act automatically on their programmed models and incoming
data?  Not only is big money involved (it is estimated that one needs a
*minimum* of $50 million to play the game), but so are big reputations
(not just the brokerage houses, but insurance companies, too).

Is it "bad" for the market ?  I think not.  When the computer generated
trades are executed they force market correction.  The article makes a
point that new financial instruments are emerging that will "play the
game", much like a mutual fund does now for the small investor.  These
instruments will limit the "downside" loss, while maintaining unlimited
"upside" gain.  Then these new instruments can be used in conjuction
with the already existing ones to create even more instruments, the end
result, potentially being, that in time anyone can bet the market in any
way.
                                              ---H


Re: RISKS DIGEST 4.16, Computer-based stock trading

<Nickell.pasa@Xerox.COM>
Sun, 23 Nov 86 19:33:07 PST
In response to Roger Mann:

(I mentioned this about a year ago in our last discussion of computerized
stock markets.)  Instantaneous and non-instantaneous negative feedback do
not produce the same results.  In this case, the fact that thousands of
computers can respond to the possibility of profit before the effects of the
responses get back (through whatever feedback loop) to any of them opens
the door for disaster.
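
A minimal simulation of that point (gain and delay invented for
illustration): the same proportional correction that converges when
applied to current information oscillates and grows when every
correction is based on stale information.

    # Toy negative-feedback loop: each step corrects a price deviation
    # toward zero, but the correction may see a delayed observation.

    def simulate(delay, gain=0.6, steps=12, start=10.0):
        history = [start]            # deviation from "fair" value
        for _ in range(steps):
            observed = history[max(0, len(history) - 1 - delay)]
            history.append(history[-1] - gain * observed)
        return [round(x, 1) for x in history]

    print(simulate(delay=0))   # decays smoothly toward 0
    print(simulate(delay=3))   # overshoots, oscillates, and grows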

Eric Nickell


Computer-based stock trading

<dmc%videovax.tek.csnet@RELAY.CS.NET>
Mon, 24 Nov 86 10:37:48 PST
The problem of decreasing system stability as time constants change is not a
new one.  Take the steam engine: "Watt's use of the flyball governor can be
taken as the starting point for the development of automatic control as a
science.  The early Watt governors worked satisfactorily, no doubt largely
due to the considerable amounts of friction present in their mechanism, and
the device was therefore widely adopted. ... However, during the middle of
the 19th century, as engine designs changed and manufacturing techniques
improved, an increasing tendency for such systems to hunt became apparent;
that is, for the engine speed to vary cyclically with time. ... This problem
of the hunting of governed engines became a very serious one (75,000
engines, large numbers of them hunting!) and so attracted the attention of a
number of outstandingly able engineers and physicists.  It was solved by
classic investigations made by Maxwell, who founded the theory of automatic
control systems with his paper "On Governors," and by the Russian engineer
Vyschnegradsky, who published his results in terms of a design rule,
relating the engineering parameters of the system to its stability.
Vyschnegradsky's analysis showed that the engine design changes which had
been taking place since Watt's time - a decrease in friction due to improved
manufacturing techniques, a decreased moment of inertia arising from the use
of smaller flywheels, and an increased mass of flyball weights to cope with
larger steam valves - were all destabilizing..."
    "The Development of Frequency-Response Methods in
      Automatic Control",  Alistair G. J. MacFarlane,
      IEEE Trans. Automat. Contr., pp. 250 - 265, Apr. 1979
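
The same effect is easy to reproduce in a toy linearized
governor-engine loop (parameters invented; a sketch, not Maxwell's or
Vyschnegradsky's actual equations).  Routh-Hurwitz analysis of this
third-order loop says it is stable only while friction times spring
stiffness exceeds the loop gain, so reducing friction alone tips a
well-behaved engine into hunting:

    # Explicit-Euler simulation of a linearized governor/engine loop:
    #   phi'' = w - friction*phi' - stiffness*phi   (governor angle)
    #   w'    = -gain * phi                         (engine speed)

    def hunt(friction, stiffness=1.0, gain=1.5, dt=0.01, steps=4000):
        phi, dphi, w = 0.0, 0.0, 1.0   # unit speed disturbance
        peak = 0.0
        for step in range(steps):
            ddphi = w - friction * dphi - stiffness * phi
            dw = -gain * phi
            phi, dphi, w = (phi + dt * dphi,
                            dphi + dt * ddphi,
                            w + dt * dw)
            if step >= steps // 2:
                peak = max(peak, abs(w))   # worst swing, second half
        return round(peak, 2)

    print(hunt(friction=2.0))   # 2.0*1.0 > 1.5: disturbance dying out
    print(hunt(friction=1.0))   # 1.0*1.0 < 1.5: swings grow -- hunting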


Decision Making

Clive Dawson <AI.CLIVE@MCC.COM>
Mon 24 Nov 86 13:34:19-CST
Those interested in the recent item on the science of decision-making [see
Jim Horning, "Framing of Life-and-Death Situations", Risks 4.13] might find
this reference a bit more accessible:

   "Decisions, Decisions", by Kevin McKean.  DISCOVER Magazine,   June, 1985.

This article is a well-written account of the work done by a number
of researchers, notably Daniel Kahneman and Amos Tversky, and has several
very nice examples of how the framing of a question affects the
decision-making process.

Anybody who had trouble locating CMU's "1986 Accent on Research Magazine"
would have better luck with Discover Magazine.
