The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 5 Issue 75

Tuesday, 15 December 1987

Contents

o Advice to the Risklorn
Steven McBride
o Expert systems liability
George S. Cole via Martin Minow
George Bray
Dean Sutherland
Bjorn Freeman-Benson
William Swan
Wm Brown III
o Microprocessors vs relay logic
Wm Brown III
o Info on RISKS (comp.risks)

Advice to the Risklorn

Steven McBride <shamus@BOEING.COM>
Tue, 15 Dec 87 10:40:20 pst
Franklynn Peterson and Judi K-Turkel in their newspaper column "The
Business Computer" {(c) 1987 P. K. Associates, Inc.} discuss computer
problems with banks, phone companies, and supermarkets. The discussion
on phone billing which follows was new to me -

             Ripped off by computer mistake? Fight back

      . . .
      Before computerization, your phone company didn't charge you if
   nobody picked up at the other end. Now, if there's a busy signal or no
   answer to your ring, you may get charged for a one-minute call.  Unless
   you keep track of all your calls you may never even notice it.
      Has your bill ever shown two different minute-long phone calls made
   during the same minute of the same day? Ours has. It's a dead giveaway.
   It proves our servicer's computer can't tell a non-answer from a
   completed call.
      Many states have watchdog public service commissions. Ours specifies
   that we can't be charged for calls that reach nobody. But it also lets
   the phone company bill us incorrectly. The burden's on us to go through
each bill, circle all disputed charges, and write letters explaining why
   we're not paying them.
      These charges can mount up fast if you're phoning via computer. . .
      . . .
      Here's what's sad about all these computer-made annoyances: They're
   unnecessary. These very computers are capable of doing consumer-
   pleasing tasks at a penny a job. Why don't they? Because whoever bought
   them, programmed them, and manages them obviously doesn't want a system
   that can help customers. They want what's fastest or cheapest to design,
   easiest to manage, and most profitable for the firm.
      How can we change things? By joining in making the status quo grimmer
than the task of reprogramming to give better service.
      We changed supermarkets when our once-favorite store refused to give
   pre-checkout pricing clues.
      We deduct from our phone bills all one-minute calls, putting the
   burden on the long distance carrier to show that any were completed.
      And when we clear up a blunder our bank's computer makes, we bill
   them a service charge. You're welcome to join us!
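The column's "dead giveaway" test -- two one-minute calls billed within the
same minute -- is mechanical enough to automate.  A minimal sketch, assuming
billing records are (timestamp-to-the-minute, billed-minutes) pairs; both
field conventions are my assumption, not any carrier's actual format:

```python
from collections import Counter

def suspicious_one_minute_calls(records):
    """Return the billing minutes in which two or more one-minute
    calls were charged -- the pattern the columnists say proves the
    billing computer cannot tell a non-answer from a completed call."""
    one_minute = [ts for ts, minutes in records if minutes == 1]
    counts = Counter(one_minute)
    return sorted(ts for ts, n in counts.items() if n >= 2)

# Hypothetical bill: two one-minute "calls" in the same minute.
bill = [("1987-12-01 09:14", 1),
        ("1987-12-01 09:14", 1),
        ("1987-12-01 10:02", 12)]
print(suspicious_one_minute_calls(bill))  # -> ['1987-12-01 09:14']
```

Anything this flags is exactly the set of charges the columnists suggest
disputing.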


Martin Minow <minow%thundr.DEC@decwrl.dec.com>
      (THUNDR::MINOW ML3-5/U26 223-9922)
Date: 15 Dec 87 08:06
To: risks@csl.sri.com
Subject: Interesting note on expert systems liability from AI-Digest

From:   DECWRL::"AIList-REQUEST@SRI.COM"
    "AIList Moderator Kenneth Laws  14-Dec-87 2224 PST"
AIList Digest            Tuesday, 15 Dec 1987     Volume 5 : Issue 283
...
Date: Thu 10 Dec 87 10:27:39-PST
From: George S. Cole <GCOLE@Sushi.Stanford.EDU>
Subject: Expert System Liability

 I have researched this area and a paper is forthcoming -- as soon as the
USC Computer/Law Journal editorial staff are ready -- on "Tort Liability for
Artificial Intelligence and Expert Systems". The trite answer is yes, there can
be a suit and EVERYBODY INVOLVED will be named -- because the plaintiff's
lawyer will realize that the law does not clearly know who is liable (including
the plaintiff).
        A short answer is to cite the Restatement of Torts, 2nd, Section 552:
"Information Negligently Supplied for the Guidance of Others:
    one who, in the course of his business, profession, or employment, or in
any other transaction in which he has a pecuniary interest, supplies false
information for the guidance of others in their business transactions, is
subject to liability for pecuniary loss caused to them by their justifiable
reliance upon the information, if he fails to exercise reasonable care or
competence in obtaining or communicating the information".

This section was cited without success in Black, Jackson and Simmons Insurance
Brokerage, Inc. v. IBM, 440 N.E. 2d 282, 109 Ill. App. 132 (1982). The phrase
"in the course of his business" was strictly construed to prevent liability
under this cause of action (there were others, including warranty) as the
court noted that the defendant had sold both hardware and software to allow
the firm to process information.  But in Independent School District No. 454,
Fairmont, Minnesota v. Statistical Tabulating Corporation, 359 F. Supp. 1095
(N.D. Ill. 1973), the court permitted a negligence action to be brought against
the third-party statistical bureau whose miscalculations had led to the
under-insurance of a school which had then burned down. The court stated:
"[O]ne may be liable to another for providing inaccurate information which
was relied upon and caused economic loss, although there was no direct
contractual relationship between the parties...The duty to do work reasonably
and in a workmanlike manner has always been imposed by law..." Factors the
court suggested to consider included (1) the existence, if any, of a guarantee
of correctness; (2) the defendant's knowledge that the plaintiff would rely
upon the information; (3) the restriction of potential liability to a small
group; (4) the absence of proof of any correction once found being delivered
to the plaintiff; (5) the undesirability of requiring an innocent party to
carry the burden of another's professional mistakes; and (6) the promotion
of cautionary techniques among the potential defendants for the protection
of all potential plaintiffs.
        Did the ES indeed make a mistake? Suppose Joe has said he plans to
invest for 15 years -- too short for real estate, too long for bonds, and
in that light the "Black Monday" might be seen as a temporary aberration.
(I.e. Joe caused the harm by selling out at the bottom rather than holding
on for the 15 years as planned.)
        Can the experts hide behind the company? Those who are professionals
(which is a legal phrase for "holders of a semi-monopoly") probably cannot be
fully shielded; the rest may have to seek indemnity from their corporation.
It will depend in part on their employment contract, or lack thereof.
        Can the knowledge engineers be found liable if their mistake led to
this? What sort of mistake? A standard programming flaw is not the same as
a design flaw. What if the mistake lies at the boundary -- who is responsible
for realizing that the computer has to have rules for assessing "market
psychology" that will quantitatively assess the subtle dynamics of what
the current "feel" for the market is? Did the domain experts learn that the
computer was going to do more than crunch numbers?

        This is both a nascent and a complex legal area. My hope is that a
number of the AI and ES companies realize the potential exposure and that the
evolution of the law can be influenced by their behavior -- and begin to
plan defensively. It is a bit more expensive initially, affecting immediate
profits; but it can provide tremendous savings both for the firm and for the
industry over the longer run.
                                George S. Cole, Esq.
                                793 Nash Av.
                                Menlo Park, CA  94025
                        GCole@Sushi.stanford.edu (until it goes away)


Re: Can you sue an expert system (RISKS DIGEST 5.71)

George Bray <lcc.ghb@SEAS.UCLA.EDU>
Tue, 8 Dec 87 16:28:09 PST
The discussion of suing expert systems is similar to the issues raised by
[the case of the Therac-25].  I am talking about the several deaths and
serious injuries that have resulted from software failures in certain X-ray
machines made by Atomic Energy of Canada Limited.  (See the latest issue of IEEE
Spectrum for a short summary article; an earlier issue (within the last
year) of IEEE Spectrum had a longer article on the same topic.)

Basically, a modification to the data entry software used by operators
resulted in the machine delivering extremely large doses of radiation
while indicating that only small amounts were delivered.  As I remember it,
the machine can generate electron beams directly, or use the electron
beams to generate X-rays.  The machine is supposed to lower a target
into the path of the beam to generate the X-rays.  

Apparently, when the operator did some kind of data editing operation
(I think it was use an up-arrow key), the software would get confused
and raise the target while setting the beam intensity to the huge values
needed to generate X-rays.  This editing code was added in response to
users' complaints about the primitive data entry in earlier versions
of the software.  If I remember correctly, the failure was caused by
some rare conjunction of situations that could occur if the operator
used the "up-arrow" key to edit data at just the right time.  I think
the bug was some variable that was also used in an interrupt routine.
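The failure mode described -- a variable shared between the editing code and
an interrupt routine, updated without synchronization -- is a classic
check-then-act race.  The sketch below is illustrative only: it is not the
Therac-25's actual code (which was PDP-11 assembly), and the class, field,
and mode names are all invented.  It shows the discipline that prevents the
hazard class: the two pieces of state that must stay consistent (beam
intensity mode and target position) change together under one lock.

```python
import threading

class BeamState:
    """Illustrative sketch, NOT the real machine's software.  X-ray
    mode requires huge beam intensity AND the target lowered into the
    beam path; the lethal state is high intensity with no target."""

    def __init__(self):
        self._lock = threading.Lock()
        self.mode = "electron"       # low intensity, no target needed
        self.target_lowered = False

    def set_mode(self, mode):
        # Without the lock, an interrupt arriving between the two
        # assignments could observe "xray" intensity with the target
        # still raised -- the analogue of the reported failure.
        with self._lock:
            self.mode = mode
            self.target_lowered = (mode == "xray")

    def safe_to_fire(self):
        with self._lock:
            return not (self.mode == "xray" and not self.target_lowered)
```

A separate hardware interlock that measures the delivered dose
independently would catch the inconsistent state even if the software
check failed; relying on the software alone is the deeper design risk.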

Various family members of those injured or killed have sued everyone
responsible, including the software engineer who added the "user-friendly"
editing code.  This raises many issues.  On the one hand, there was no
doubt that the software bug killed and injured people, so it seems 
reasonable that the people who made the poor software are liable.
On the other hand, I believe that the bug was due to an unforeseen
interaction that would be very hard to eliminate.  These kinds of bugs
probably exist in much of the software in the world.

What do RISKS readers think of the issues raised by this case?  Are 
programmers liable for their software's actions?

George Bray, Locus Computing Corporation


Litigation over an expert system

Dean Sutherland <Sutherland@TL-20B.ARPA>
Tue, 8 Dec 1987 09:43 EST
In Risks digest 5.71, chapman@russell.stanford.edu (Gary Chapman) mentions a
"goofy" California law that provides for a defendant who is only 1% responsible
to pay 1% of the judgement.  Although this law may be goofy, it is a major
improvement over previous versions.  Before this law was passed, it was
possible to have the following situation:

    Defendant A:  99% guilty, has assets of $10,000
    Defendant B:   1% guilty, has assets of $1,000,000,000
    Judgement of $10 million against defendants.  A pays $10,000 (or maybe
    nothing by declaring bankruptcy).  B pays $9,990,000... EVEN THOUGH B
    WAS FOUND TO BE ONLY 1% GUILTY.

The new version of the law is a BIG improvement.
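The difference between the two rules is easy to put in numbers.  A minimal
sketch of the newer several-liability arithmetic, using Sutherland's figures
(the function and field names are mine, and this ignores the many real-world
wrinkles of collection and bankruptcy):

```python
def proportionate_shares(judgement, defendants):
    """Under several (proportionate) liability, each defendant owes
    judgement * (its fault fraction), capped at what it can pay; the
    shortfall goes uncollected instead of shifting to co-defendants."""
    return {name: min(judgement * fault, assets)
            for name, (fault, assets) in defendants.items()}

# A: 99% at fault, $10,000 in assets.  B: 1% at fault, $1 billion.
shares = proportionate_shares(10_000_000, {
    "A": (0.99, 10_000),
    "B": (0.01, 1_000_000_000),
})
# B pays $100,000 (1% of the judgement) -- not the $9,990,000 that
# joint-and-several liability would have shifted onto the deep pocket.
```

Under the old joint-and-several rule, B's share would instead have been
the full judgement minus whatever A could actually pay.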

Dean F. Sutherland, Tartan Labs


Expert Systems Liability

Bjorn Freeman-Benson <bnfb@june.cs.washington.edu>
Tue, 08 Dec 87 12:09:03 PST
Regarding the discussion of Expert System Liability in RISKS-5.69
through 5.71 --- one argument is "it's just like a book".  I disagree.
The fact is that a book is completely Passive, but an expert system
may be Active.  One must ask a book for advice, but the expert
system may offer advice on its own, or even act in your behalf.

Consider a car with expert system controlled drive-by-wire steering.
When it fails, the manufacturer is liable, and it may turn out that
the expert system made an erroneous decision.  In that case, who, in
addition to the automobile company, is liable?

Bjorn N. Freeman-Benson,University of Washington


Sue Who? (Re: RISKS-5.71)

William Swan <uw-beaver!tikal!sigma!bill@RUTGERS.EDU>
Tue, 8 Dec 87 11:07:24 pst
>From: "Jerry Leichter (LEICHTER-JERRY@CS.YALE.EDU)"
>Subject: re:  Can you sue an expert system
>[...] "If an expert system gives bad advice, who can I sue?"  I find it
>extremely disturbing that this is considered an interesting question by
>ANYONE, let alone by technically sophisticated people. It is a symptom of
>the pervasiveness of our misplaced trust in buzzwords and, more generally,
>in computers:  If the computer said it, it MUST be right.  [...]

I wonder.

You believe instruction manuals, don't you? And try to follow the procedures
therein? I can show you a service manual for my pickup that is very wrong at
one step.

A true story:

Last week I was dealing with a new copier, which had its "entire" manual
on-line. I needed to do two-sided copies and was overwhelmed by the numerous
buttons and cryptic icons on the thing. No problem, I pressed "help" and the
display walked me through the setup. Very nice! It even walked me through 
clearing a paper jam or two - even telling me where within the machine the 
paper jammed and how to get there ("lift lever A", "raise lid B", etc).

Suddenly I got a message, "remove paper from duplexor". What's a duplexor? I
pressed help, and got... "remove paper from duplexor". I checked all the
mechanics on top. No paper. I cleared the paper trays. Nothing out of place.
I opened the doors to the insides, the display says "close front door". I
close the door and look elsewhere. Nothing. I open the doors again, it says
"close front door". I close the doors and cycle power on the machine. I try a
copy, it says "remove paper from duplexor". I open the machine up (it says
"close front door") and hunt through the innards. No paper.

Finally, I find a secretary (it's after hours but she's still there) who
tells me what a duplexor is and where it is. I return to the machine, open
the doors (it says "close front door"), find the duplexor and remove the
sheet, close the doors (it says "ready") and continue copying!

What did I do wrong? Believe the computer? I don't think so (C'mon, I know
better than *that* :-). I had instead been led to put my trust in an 
incomplete set of procedures. More knowledge (i.e. what a "duplexor" was)
would have helped prevent me from making this error. 

The problem I ran into, and one that I'm sure we'll see more and more often,
is that this form of the information hid its incompleteness. If I had had
a printed manual with no reference to "duplexor" I would have known it was
incomplete. This mechanised manual hid that information from me and, worse,
led me astray by behaving as if it were not incomplete.

If this had been a situation with serious consequences I believe I would 
have very good cause for litigation.


re: Can you sue an expert system (RISKS-5.71)

Wm Brown III <Brown@GODZILLA.SCH.Symbolics.COM>
Wed, 9 Dec 87 17:55 PST
 From: "Jerry Leichter (LEICHTER-JERRY@CS.YALE.EDU)"
  <LEICHTER@VENUS.YCC.YALE.EDU>
 Subject: re:  Can you sue an expert system

 In RISKS-5.69, Barry Stevens becomes another in a long line of people to raise
 the question "If an expert system gives bad advice, who can I sue?"  I find it
 extremely disturbing that this is considered an interesting question by
 ANYONE, let alone by technically sophisticated people.  It is a symptom of the
 pervasiveness of our misplaced trust in buzzwords and, more generally, in
 computers:  If the computer said it, it MUST be right.

I find it extremely disturbing that ANYONE on this list would make such a
statement.  Judging from the length of the diatribe which followed this
preamble, I must conclude that Mr. Leichter really has strong feelings and
quite a lot to say on the subject, and therefore must find it somewhat more
interesting than he admits to.  Personally, I think that the realities of
turning software, especially expert systems, loose on the world are very close
indeed to the center of this forum's domain.

When software of this type is sold to professionals who know its limitations,
or who understand that its output is an 'opinion' rather than a statement of
hard facts, I agree that it is not reasonable for those users to turn around
and sue if they get burned by depending upon it.  This, however, is a very
small subset of the potential users for expert systems.

Lawsuits are society's way of enforcing product specifications and warranties.
Taking away this mechanism is equivalent to voiding these contracts.  One way
to look at things is that this is an important check which will discourage the
offering of immature or inferior products.  It is a very effective way of
making marketeers "Put their money where their mouth is."

There are definitely cases where expert systems are of value because they
offer potential savings in time; consider an 'advisor' for hospital emergency
rooms which gives every third-shift physician the knowledge equivalent of
twenty top specialists and a medical library, but much quicker.  Users of this
system simply won't have the time to second guess the 'expert' or cross check
every literature reference, but if this system gives incomplete advice it will
sooner or later kill someone.  Should the physician on duty be responsible for
deciding whether to follow the system's advice, even if he doesn't understand
the specialty involved?  Should the hospital let some patients die for lack of
quick decisions rather than buy this system?  Should state legislatures
indemnify some or all of the players from lawsuits, thus legitimizing the
concept that anything the computer says is gospel?  Clearly, the legal aspects
will be a large factor in the equation which decides when or if such systems
will be installed.

On a more mundane scale, consider the original example of a personal finance
advisor.  If it is offered as a novelty, with pages of disclaimers and
explanations of why no sane person would rely on its advice, who will buy it
except as a toy?  On the other hand, if it comes with the usual marketing
banners describing it as the answer to everyone's financial questions, the end
user certainly has some right to expect that it will give good advice.
Whether this right extends to the point of filing lawsuits is properly a
matter to be decided by a jury, based upon the facts in each individual case;
it would be just as wrong to give blanket protection to software vendors as it
would be to legislate all responsibility onto their shoulders.

To say that this topic is complex or ill-defined is an understatement.  To say
that it is not interesting is a personal opinion which I will hotly contest.
If expert systems are to progress from academic curiosities to everyday-life
applications, they will have to be usable by non-computer types who need fast
and reliable answers, not philosophical dissertations or legal disclaimers.

NOTE:  The opinions expressed are my own and do not necessarily represent
those of my employer or anyone else.


Microprocessors vs relay logic (RISKS-5.65)

Wm Brown III <Brown@GODZILLA.SCH.Symbolics.COM>
Wed, 9 Dec 87 17:55 PST
Two cents worth on the relative virtues of microprocessors vs. relay logic for
railway control systems.  (Continuing the discussion started in RISKS 5.65).

There is such a thing as choosing the right tool for a job.  About fifteen
years ago, I did some design work on a motion control system for amusement
rides.  The improvements since then in reliability and cost of mini and micro
computers have changed things, and I don't know whether I would draw the same
conclusions today, but at the time the best solution appeared to be relays
with a mini backing them up and monitoring for control system failures.
Here's why:

o Well-designed relays have a conservative life expectancy of 100 million 
operations (that's 1E8).  They don't particularly care how long each
operation lasts; things only wear when they move.  Given a typical rate
of 100 operations per hour, this works out to 1 million (1E6) hours
of ACTUAL USE before they start to die.  Idle hours don't count.

Computers typically have a fairly constant MTBF, regardless of what they are
doing for those hours.  Turning them on and off may substantially reduce this
time, so it is difficult to decide whether it is better to leave the system on
all day and use up 24 hours of life per day or power it on and off every day.
Either way, not many computers come close to the million-hour level.
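The arithmetic behind that comparison is worth making explicit.  A short
sketch using the figures above (the 100,000-hour computer MTBF is my
assumed round number for illustration, not a figure from the original):

```python
# Relay life is consumed per operation; idle hours cost nothing.
RELAY_LIFE_OPS = 100_000_000     # conservative: 1e8 operations
OPS_PER_HOUR = 100               # typical duty in this application

relay_life_hours = RELAY_LIFE_OPS / OPS_PER_HOUR   # hours of ACTUAL USE
print(relay_life_hours)          # -> 1000000.0

# A computer's MTBF clock runs around the clock regardless of workload:
# even an assumed 100,000-hour-MTBF machine left on 24 hours a day
# averages one failure in roughly 11.4 years.
mtbf_hours = 100_000
years_to_failure = mtbf_hours / (24 * 365)
```

The asymmetry is the point: the relays' million hours are consumed only
while they switch, while the computer's hours are consumed whether or not
it is doing useful work.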

o Somewhere in the system, your controller has to connect to a real world
full of spikes, transients, dirt, water, and unionized maintenance techs
with 250-watt soldering irons.  Relays are pretty immune to nearby lightning
hits, polyester shirts, etc.  Anything which permanently damages them tends
to make them fail completely.

Computers can be brought down by anything from dry days to alpha particles.  
Worse yet, they may drop a bit without noticing it and you will never be 
able to figure out exactly what happened.  Yes, it is feasible to program 
in a lot of redundancy, but it is still possible for a glitch in the 
housekeeping routines to clobber the functional code or working storage.

o Flexibility is great when you want to control a ride and balance your
checkbook on the same machine, however there are some cases where the
same flexibility is useless or even dangerous.  If you want the system
to perform the same way, day after day, it is tough to beat hardwired
control logic.  If something DOES go wrong, it is usually much easier
to deduce how and why when the logic paths are soldered down.

o Failures in relay logic tend to be more localized.  A stuck contact
in track zone 5 may someday permit one car to tail-end the next, but
it won't affect zone 8 at all.  If the same CPU is controlling the
entire system it can spread havoc everywhere instantly.

No, I don't sell relays or even use them any more.  This just seemed germane
to the original discussion.
