The RISKS Digest
Volume 10 Issue 46

Friday, 28th September 1990

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Sellers Use Computer Glitch to Buy Illegal Winning Lottery Tickets
Nathaniel Borenstein
Safer to Fly or Drive?
David Levine
Re: Expert system in the loop
Matt Jaffe (2)
Bookkeeping error begs for machine help — maybe
Jim Purtilo
Re: Hi-tech advertising
Brinton Cooper
Re: Reliability of the Space Shuttle
Chris Jones
Henry Spencer
Automated vehicle guidance systems
Will Martin
Computer 'error' in the British RPI
Chaz Heritage via Richard Busch
Info on RISKS (comp.risks)

Sellers Use Computer Glitch to Buy Illegal Winning Lottery Tickets

Nathaniel Borenstein <nsb@thumper.bellcore.com>
Fri, 28 Sep 1990 11:22:52 -0400 (EDT)
Because of a software screwup, the lottery database system let 6 winning
tickets be purchased after the winning numbers had been drawn in the Tri-State
Megabucks game for Vermont, New Hampshire and Maine.  There was one legitimate
ticket for $1.1 million.  The problem was caught before any payoffs could be
collected.  In addition, some belated daily lottery ticket winners did collect,
up to $5,000, although the state is apparently insured against the loss.  The
system is supposed to halt ticket sales 10 minutes before the drawings, but
did not.   [Source: an AP item by Frank Baker, 28 Sept 90, summarized by PGN]
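
A cutoff of this kind reduces to a single time comparison at the point of
sale.  Here is a minimal sketch in Python of the intended check; all names
are hypothetical, and the AP item does not say how the real system failed:

    from datetime import datetime, timedelta

    CUTOFF = timedelta(minutes=10)

    def may_sell_ticket(now: datetime, draw_time: datetime) -> bool:
        # Sales must stop CUTOFF before the drawing.  Note that a check
        # like this fails silently if draw_time is stale (e.g., still
        # set to a previous drawing), since every "now" then looks
        # early enough.
        return now < draw_time - CUTOFF

The interesting question is rarely the comparison itself but what feeds it.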


Safer to Fly or Drive?

David Levine <levine@crimee.ICS.UCI.EDU>
Thu, 27 Sep 90 11:02:13 -0700
The following is extracted [and with edits by dll] from Daniel J. Holt's
editorial in September 1990 _Aerospace Engineering_.  Holt summarizes the paper
"Is it Safer to Fly or Drive" by Leonard Evans, Michael C.  Frick, and Richard
C. Schwing of General Motors Research Labs, published in _Risk Analysis_, Vol.
10, No. 2, 1990.  The statistics, and more specifically some of the assumptions
behind them, may be of interest.

    [...] Apparently the researchers found fault with the cliche that
    the most dangerous part of an airline journey is the drive to the
    airport.  Over 98% of the intercity travel in the U.S. is via
    airline and automobile.  On a daily basis, 18,000 airliner takeoffs
    and landings and 370 million miles of car trips are completed
    safely.

    [claim commonly quoted fatality rates of] 0.6 deaths per
    billion miles of flying and 24 deaths per billion miles of driving.
    Specifically, they claim these rates are incorrect for three reasons:
    — The airline rate is passenger fatalities per passenger mile,
       whereas the road rate is all fatalities (all occupants,
       pedestrians, etc.).
    — Road travel that competes with air travel is on the rural
       interstate system, not average roads.
    — Driver and vehicle characteristics, and driver behavior,
       lead to car-driver risks that vary over a wide range.

    [... 40 year-old, seat-belted, alcohol-free drivers (do they
    assume alcohol-free *other* drivers?)] are slightly less likely
    to be killed in 600 miles of rural interstate driving than in
    regularly scheduled airline trips of the same length.
    For 300-mile trips, the driving risk is half that of airline
    trips of the same length.  Thus the researchers concluded that
    for this set of drivers, car travel provides a lower fatality
    risk than air travel for trips in the distance range for which
    car and air travel are likely to compete.

    As for the cliche that the drive to the airport is riskier than
    the flight, the researchers concluded that average drivers with
    the age distribution of airline passengers are less likely to be
    killed on a 50-mile, one-way trip to the airport than on the flight.
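
For reference, the naive arithmetic the researchers are arguing against can
be worked out directly from the commonly quoted rates above (this is my
illustration, not from the paper; the paper's adjusted, driver-specific
rates are not reproduced here):

    # Commonly quoted fatality rates, as cited in the editorial.
    FLY_RATE = 0.6e-9    # passenger deaths per passenger mile
    DRIVE_RATE = 24e-9   # all deaths per vehicle mile (note the mismatch)

    for miles in (50, 300, 600):
        print(f"{miles:4d} mi: fly {miles * FLY_RATE:.1e}, "
              f"drive {miles * DRIVE_RATE:.1e}, "
              f"ratio {DRIVE_RATE / FLY_RATE:.0f}x")

The naive numbers make driving look 40 times riskier at every distance; the
researchers' point is that once the rate definitions are matched, the
comparison restricted to rural interstates, and the driver population
controlled for, the ordering reverses for exactly the trip lengths where the
two modes compete.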

David L. Levine, Dept. of ICS, University of California, Irvine Irvine,
CA 92717    BITNET: levine@ucivmsa UUCP: {sdcsvax|ucbvax}!ucivax!levine


Re: Expert system in the loop (Henry Spencer, RISKS-10.42)

Matt Jaffe <jaffe@safety.ICS.UCI.EDU>
26 Sep 90 20:02:46 GMT
>The data was generated by computer, [...]

>>>>Such decisionmaking is de facto *governed* by computer: without
>>>>computer prompts, no retaliatory decision at all would be taken;
<> Again, incorrect.  Decisions to fire were made long before we
<> had computers — they are not required to make these decisions.

The term "prompt" is one key to some understanding of this type of situation.
Aegis (and similar USN combat systems) is designed to permit tactical decisions
to be implemented through Aegis mechanisms via three modes:

    (1) Automatic - the digital system itself makes the decision
    (2) Semi-automatic (a bad name, but retained for historical
        reasons) - the digital system specifically prompts the
        operator with a specific, recommended decision, e.g.,
        "recommend engagement of target X with weapon Y"
    (3) Manual - the system provides controls to allow an operator
        to implement a decision
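
A minimal sketch of the distinction (mine, not Aegis code; every name here
is hypothetical) is simply where the decision originates:

    from enum import Enum

    class Mode(Enum):
        AUTOMATIC = 1       # the system decides and acts
        SEMI_AUTOMATIC = 2  # the system recommends; the operator consents
        MANUAL = 3          # the operator decides; the system implements

    def engagement_proceeds(mode, system_recommends, operator_orders):
        if mode is Mode.AUTOMATIC:
            return system_recommends
        if mode is Mode.SEMI_AUTOMATIC:
            return system_recommends and operator_orders
        return operator_orders  # MANUAL: the human decision stands alone

Even in MANUAL mode, of course, the operator decides on the basis of
whatever the system chooses to display, which is the point of what follows.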

The Vincennes incident involved a manual decision.  One might reasonably ask to
what extent the "manual" decision was conditioned by the data presented by
Aegis.  Here the observation about the "windowless control room" is valid, but
there are still several gradations possible.  If, for example, Aegis were to
have routinely displayed "automatically" computed threat rankings in a color
coded format (Aegis did not do that then; I doubt that it does it now), a
decision to engage based upon viewing a target in blinking bright red (with all
other targets being a steady, soothing blue) would still officially be a
"manual" decision but certainly much more "computer governed" than a similar
decision based on viewing a screen with all targets a homogeneous blue.  (It is
also possible - although not a factor in the Vincennes incident - for a system
to provide controls for the implementation of decisions made manually based on
data not mediated at all by the action-implementing system.)  The Vincennes
incident involved misinterpretation of displayed data, not interpretation of
misleading or provocatively displayed data.  (Of course, the data could be
wrong; more on that in a minute.)  My personal opinion is that the Captain of
the Vincennes would probably have made the same decision had the same data been
printed out for him in narrative text form.  (That is the apparent significance
of the Navy's finding that the Captain made the "correct" decision based on the
data available to him.)  The real issues are threefold:

    (1) How "true" was the digital systems "view" of reality?
    (2) Given the state of the art in algorithmic reasoning, could
            the digital system have been designed to model (I will NOT say
            "understand") reality any better?
    (3) How good was the MMI at presenting the digital "truth"?

For question (1), the answer appears to be mixed.  The system
reported a Mode II SIF on a target that did not (ignoring paranoid
possibilities) in fact have a Mode II capability.  The altitude readouts
were apparently correct.

For question (2), it is my personal opinion that the system was designed
as well as we could do so at the time.  The reasons for the SIF
mis-assignment I and others have already discussed in other
correspondence.  I do not know of any design decision we could have made
differently that would have prevented this error.  The altitude reports
were as accurate as the underlying sensor technology permitted.
(Reasons for not trusting Mode C height, if it was available, were also
discussed previously and should be reasonably obvious - if one believes
a target might be hostile, one cannot place too much credence in data
the target itself provides.) Providing a better Z resolution would have
significantly increased the direct and indirect costs of a sensor
already criticized (at that time) as far too expensive.  Given the inherent
limits on instantaneous height measurements and the inherent
maneuverability of aircraft in the Z axis, Z' (rate of ascent or
descent) information will generally remain highly inaccurate.

It is question (3) then, that is the heart of the matter to me.  I don't
have the time here to fully recap all the issues, but I would like to
summarize my conclusions to date:

    (1) We gave a lot of thought and discussion to the Aegis MMI.
    We had good engineers, MMI experts, operationally experienced Naval
    personnel, lots of review, and open, no-holds-barred discussions.
    We may have been wrong (I am still unconvinced, but in the sense
    that I don't know, not that I am sure we were right), but we
    were as methodical and as careful as we could be.
    (2) We might have tried to find a way to display the age of the last
    correlated IFF hit (per mode) - that would have suggested that
    the target was not CURRENTLY squawking mode II (but was still
    squawking other modes); a sketch of such a display appears after
    this list.  But we were awfully tight on display space, and the
    Navy and our own MMI experts were already telling us that we were
    displaying too much data.
    (3) I don't think we could have done much about the Z' problem,
    although I have not read the Navy's report to see their specific
    criticisms or suggestions.  (I would be grateful to anyone willing to
    mail me a hard copy version - or a reference to a classified
    document identifier.)
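
The staleness display mentioned in (2) amounts to carrying a per-mode
timestamp with each track and showing its age.  A minimal sketch, entirely
hypothetical:

    from datetime import datetime

    def iff_staleness(last_hit_by_mode, now):
        # Age, in seconds, of the last correlated IFF return for each
        # mode; None means no correlated return for that mode at all.
        return {mode: (now - t).total_seconds() if t else None
                for mode, t in last_hit_by_mode.items()}

A large age on mode II alongside fresh returns on other modes would have
hinted that the mode II correlation was stale or misassigned; the cost, as
noted above, is scarce display space.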

I believe that there were two classic technical problems involved in the
Vincennes incident (as well as the political decision making ones well
discussed elsewhere):

    (1) How to make available and prioritize for tactical operators
    the display of all the information that might be of significance
    some of the time without constantly saturating them all the time
    with displays and controls that are not currently relevant.
    (2) How (or whether) to display probabilistic and uncertain data
    to humans for use in highly stressed decision environments.
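
Problem (1) at least admits of mechanical approximations: rank tracks by
some threat score and cap what is drawn.  A sketch of the cap (the whole
difficulty, which this deliberately ducks, is choosing the scoring function
so that the one track that matters is never among those suppressed):

    import heapq

    def tracks_to_display(tracks, threat_score, max_shown):
        # Draw only the max_shown highest-scoring tracks; the rest stay
        # in the system but undrawn.
        return heapq.nlargest(max_shown, tracks, key=threat_score)

Problem (2) has no comparably mechanical answer.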

There is also the issue of training and comprehension, which straddles the
boundary between technical and institutional: How to ensure that key decision
makers (shipboard as well as policy) understand the limits of the technology
that they are employing?

It is our limited ability to answer these questions that makes incidents
like the Vincennes shootdown inevitable.


Re: Expert system in the loop (Thomas, RISKS-10.37)

Matt Jaffe <jaffe@safety.ICS.UCI.EDU>
26 Sep 90 20:33:49 GMT
>Amos Shapir, National Semiconductor (Israel) P.O.B. 3007, Herzlia 46104, Israel
>[Quoted from the referenced article by jaffe@safety.ICS.UCI.EDU]
<>The point is that the issue of designing Aegis to handle commercial flight data
<>was addressed and rejected as not cost-effective.  Whether one agrees with this
<>specific decision or not, the general point is that no military system (or any
<>system) can be designed to deal with all contigencies that someone thinks of as
<>appropriate.

>The point is, I don't think Aegis had to be designed to keep track of
>all aerial traffic in the area; I'm pretty sure that *Air Force* systems
>in the area did have a positive ID on everything that was flying at
>the time.  The trouble is, I also suspect that there was no way the captain
>could just call somebody and ask "Hey, what's that on my screen?"

Good point, with a couple of caveats:
    (1) Aegis was designed to "keep track" of everything in the
    area.  Aegis was designed to integrate identification information
    from all digitally accessible systems.  Your comment is equivalent to
    asking, "Did the Captain avail himself of other possible sources of
    information?"
    (2) You are presumably referring to AWACS; I don't know if there
    was an AWACS bird covering that area at that time.
    (3) I'm not an AWACS expert; I doubt that they process
    commercial flight plans, though.  But they might not have had
    the mode II SIF confusion the Vincennes did.
    (4) The time pressure the Captain felt he was under would be a
    factor (assuming there was an AWACS bird available).  One would
    have to consider the time to make the call (depending on the
    tactical comm plan in effect, that could be simple or could
    be tough) and the perceived likelihood that the callee could
    tell one something that one did not already know.

So there probably was a way to call; the questions are instead, was there
anyone useful (in the Captain's mind) to call and did he feel he had time to do
it?  Asking "Hey, what's this on my screen?" is solved procedurally,
although to be fair, the actual use of such procedures (common coordinates and
pro-words) with the Air Force may or may not have been something the Vincennes
felt comfortable with.


Bookkeeping error begs for machine help — maybe

Jim Purtilo <purtilo@cs.UMD.EDU>
Wed, 26 Sep 90 21:17:31 -0400
Not yet a computer-related risk, but food for thought from today's Post:

>    Md. Admits Freeing Inmate Later Held in Three Slayings
>            by Richard Tapscott
>        Washington Post, September 26, 1990.
>
> Maryland corrections officials conceded today that an error led to the early
> release of a Harford County man who is charged in three killings committed
> during a time he should have been in prison.
>
> During a legislative hearing today, an attorney for the Division of
> Correction said the state could be sued by relatives of the victims if John
> F. Thanos's release 18 months early is found to have been caused by
> negligence.
>
> Until today, corrections officials had said only that they were
> investigating whether there had been proper application of a complex set of
> guidelines used in determining how much time to deduct from Thanos's
> sentence for good behavior.  "Good Time" is awarded as a method of
> maintaining discipline in prisons and to ease crowding.
>
> But Bishop L. Robinson, secretary of public safety and correctional
> services, told lawmakers this afternoon that Thanos was "erroneously
> credited" with 543 days of good conduct.

The article continues, noting that the chap served four years of a seven-year
sentence for robbery.  Since the release, he has been charged with two
murders, an attempted murder and two robberies.  The attempted murder charge
could be upgraded as the victim has since died.

Robinson points out in the article that corrections officials have a
"monumental problem" in calculating good time under incentive programs
linked to behavior, work, education and prison crowding.  "The difficulty is
compounded ...  when overlapping or concurrent sentences are involved."
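
At bottom this is date arithmetic, and the concurrency Robinson mentions is
where the traps live.  A purely illustrative sketch in Python (the actual
Maryland guidelines are far more involved, and these numbers are invented,
apart from the 543 days reported in the article):

    def release_after(sentence_days, good_time_days, concurrent):
        # Consecutive sentences add; concurrent sentences overlap, so
        # only the longest governs.  Crediting good time against the
        # wrong total silently moves the release date by years.
        total = max(sentence_days) if concurrent else sum(sentence_days)
        return max(total - good_time_days, 0)

    # Hypothetical seven-year and three-year terms, 543 days credited:
    release_after([2555, 1095], 543, concurrent=True)    # -> 2012 days
    release_after([2555, 1095], 543, concurrent=False)   # -> 3107 days

The two interpretations of the same record differ by three years; an
erroneous credit of 543 days, as in Thanos's case, is of the same order.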

Refreshingly, a `computer error' is not yet being pointed to as the problem.
But as I read this, I have visions of politicians looking for ways to automate
the process `to avoid this tragedy in the future.'  Should this occur, one
wonders how to design the bookkeeping system so it fails safe.  Regardless, we
have yet another example for my software engineering class, who occasionally
ask ``why do I care about fancy techniques to test a spreadsheet — it's
not like I'm writing a program that lives will depend upon.''

When the system dumps core, just dial 911, right?
                                                            Jim


Re: Hi-tech advertising

Brinton Cooper <abc@BRL.MIL>
Wed, 26 Sep 90 17:04:50 EDT
Dave Turner reports on advertising in which floppy disks touting some
product are "junk mailed" to our homes.  He correctly observes:

    "...the risk of viruses spreading will increase rapidly. A few
    thousand deviant floppies sent to several large corporations and
    schools will produce marvelous results."

I've been reading RISKS-Digest for several years and (erroneously?)
consider myself well-informed on topics of interest here.  Yet, as I
read Dave's item, the risk which he states so well DID NOT OCCUR TO ME
UNTIL I READ HIS LAST PARAGRAPH!  This leads to one of two conclusions:

    a. In spite of our good efforts to be vigilant, the plethora of
new methods of attack overwhelms our defenses.

    b. I turn off my thinking apparatus when reading e-mail.

In case it's "a," many of us need to be more wary.

_Brint


Re: Reliability of the Space Shuttle (RISKS-10.45)

Chris Jones <clj@ksr.com>
Wed, 26 Sep 90 16:44:52 EDT
>                   I would like to simply point out that the space
>shuttle has had many more successful launches than any other launch system
>employed to date.

Hardly.  The US space shuttle has had many more successful launches than any
other launch system for US manned spacecraft.  Almost every Soviet launch
system has had many more successful launches (including the SL-4, used to
launch every Voskhod and Soyuz manned spacecraft), and many US unmanned launch
systems have exceeded the shuttle's totals as well.

Chris Jones    clj@ksr.com    {world,uunet,harvard}!ksr!clj


Reliability of the Space Shuttle

<henry@zoo.toronto.edu>
Thu, 27 Sep 90 13:03:43 EDT
>... I would like to simply point out that the space
>shuttle has had many more successful launches than any other launch system
>employed to date...

Surely Peter jests.  The Soviet "A" booster, in several variants, has been
launched successfully over 1000 times.  This is more than all Western
launch systems, including the shuttle, put together.  By normal aviation
standards, this is the *only* space launcher that has been properly
tested.  (A new aircraft typically flies hundreds or thousands of times
before it is released to customers.)

Perhaps he meant only Western launchers?  Even there, this is grossly
wrong.  The user's manual for Scout, the smallest of the "traditional"
US boosters, lists 77 successful launches.  I'm fairly sure that Delta
beats that, and Atlas and Titan probably likewise.

Perhaps he meant only manned launchers?  Sorry, I think the "A" booster wins
again.  It has launched every Soviet manned mission, from Gagarin to the Mir
crews.
                           Henry Spencer at U of Toronto Zoology   utzoo!henry


Automated vehicle guidance systems

Will Martin <wmartin@STL-06SIMA.ARMY.MIL>
Mon, 10 Sep 90 13:36:06 CDT
There is a program called "European Journal" which is aired on PBS (and, I
suppose, some cable channels). In the program broadcast on Sunday, Sept. 9
on KETC in St. Louis, I saw a segment on automated vehicle guidance systems
that are being installed in Britain and Germany (I'd say "West Germany" but
it may not be that when you read this... :-). This was a system designed to
route traffic around snarls, tieups, and gridlocks, providing the driver the
"best" [see below] route to a punched-in destination from the current site.

The system depends on traffic-light-mounted signal sources, which seemed to
be infrared emitters. The dashboard contains an LCD-type display with
arrows indicating which turn the driver should take next.
Interestingly, the philosophy of operations varies between the two
installations. In Britain, this is a private-enterprise venture, and the
spokesperson stated that their aim was to provide the driver with the
best route from the driver's point of view. In Germany, this is a
government activity, and the aim is to provide the most efficient
traffic flow throughout the city. (I suppose individual drivers could be
shunted into dead ends or off the street into a canal or something if it
would make the traffic flow as a whole work better. :-) This is, of
course, the RISK — will the system's advertised philosophy be the one
that really controls its operation?
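
The two philosophies are two different objective functions over the same
road network, which is easy to state even though the deployed systems'
internals are not public.  A hypothetical sketch:

    def route_for_driver(routes, my_travel_time):
        # British philosophy as stated: minimize this driver's own time.
        return min(routes, key=my_travel_time)

    def route_for_network(routes, delay_added_to_traffic):
        # German philosophy as stated: minimize the delay this trip
        # imposes on traffic as a whole, which need not be the
        # driver's fastest route.
        return min(routes, key=delay_added_to_traffic)

A box on the dashboard gives the driver no way to tell which of these two
functions it is actually minimizing.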

The British system sounded expensive to me — $400 initial fee, plus a
$200 per year subscription. Interviews with potential customers didn't
sound too promising; no one seemed interested in spending that much for
what it would give them — they figured they could do it well enough for
themselves. Don't know what the German costs will be, or who will use it.

The question of what would happen if the driver "disobeyed" the system or
violated a traffic regulation (like running a red light) was raised. The
British spokesman said their system was in no way limiting; they were
only trying to help the driver, and that there was no way that violations
of traffic rules would be reported. No mention was made of the German
situation. (I suppose the next infrared sensor/transmitter unit to pick
up the offending car there melts it with a laser or something... :-)

The "fail" mode of the display, if the driver ignores the turn-indicating
arrow and goes straight, for example, was interesting.  It seemed to take
the rejection of the recommendation as a negation of the program, and
displayed a rosette of arrows pointing in every possible direction.  If the
thing had a voice unit, I would have expected it to say something like,
"WELL!  If YOU want to go your OWN way, don't pay any attention to ME!  Go
ANYwhere YOU like!" :-)

Regards, Will Martin
wmartin@st-louis-emh2.army.mil OR wmartin@stl-06sima.army.mil


Computer 'error' in the British RPI

&ichard_Busch.sd@xerox.com>
Fri, 28 Sep 1990 10:37:47 PDT
In his message of Tue, 25 Sep 1990 11:50:53 PDT, Peter G. Neumann quotes from
Computing (UK), 20 September 1990, submitted by Dorothy R. Graham:

>A 1% error in the British RPI cost the government 121M pounds ...a computer
error caused the RPI to be understated from February 1986 to October 1987.  The
programs had been tested, but the tests did not reveal the error.<

It is perhaps worth mentioning that the current UK Government has made control
of inflation, measured by the RPI, one of the main items on its agenda, and
that a General Election occurred during the stated period.

Taking this in combination with the use of the Falklands / Malvinas War for
electoral purposes, with the deliberate massaging of official statistics
(particularly on unemployment) since 1979, to such a degree as to cause
near-mutiny in the Government Statistical Service, and with the general
cynicism of the present UK Government, the uncharitable might suggest that the
'error' was no more than another episode of electoral 'management' on the UK
Government's part, and that 121 million pounds would be considered a small
price to pay for re-election by people as power-hungry as Margaret Thatcher.

The 'secondary RISK' of people becoming so accustomed to computer 'error' as to
be willing to accept it as fact rather than to suspect deliberate manipulation
of computer-resident data has already been discussed in RISKS.
                                                                    Chaz
