The RISKS Digest
Volume 4 Issue 39

Sunday, 11th January 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Re: As the year turns ...
Jerry Saltzer
911 computer failure
PGN
Engineering tradeoffs and ethics
Andy Freeman
Ken Laws
George Erhart
Re: computerized discrimination
Randall Davis
Info on RISKS (comp.risks)

Re: As the year turns ... (Jeffrey Mogul)

Jerome H. Saltzer <Saltzer@ATHENA.MIT.EDU>
Fri, 9 Jan 87 12:40:24 EST
I believe it was New Year's eve, 1962, when I first found myself
poking around inside a system--M.I.T.'s Compatible Time-Sharing
System for the IBM 709--that was nominally intended for continuous
operation, but that had to be *recompiled* to tell it about the new
year, because whoever designed the hardware calendar clock assumed
that someone (else) could program around the missing year field.

It took only a small amount of contemplation to conclude that any
source that claims to tell you the date has got to mention the year,
and with some browbeating of engineers we got a version of that
design included in the Multics hardware a few years later.

At the time, someone talked me out of writing a paper on the subject on the
basis that the right way to do it is so obvious that no one would ever be so
dumb as to design a date-supplying clock without the year again.  Possible
conclusion for RISKS readers?: nothing, no matter how obvious, is obvious.
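A minimal sketch of the problem (the clock interface below is hypothetical,
not the actual CTSS or Multics hardware): a date source with no year field
forces every consumer to carry its own idea of the year, typically a
constant that must be edited, and the program rebuilt, each January.

    CURRENT_YEAR = 1987            # must be edited (and the program rebuilt)
                                   # every January 1st

    def read_hardware_clock():
        # Hypothetical stand-in for a clock that reports only month, day,
        # and time of day -- no year field.
        return {"month": 1, "day": 11, "hour": 12, "minute": 0}

    def full_date():
        t = read_hardware_clock()
        # The year cannot come from the clock, so it comes from the program.
        return (CURRENT_YEAR, t["month"], t["day"], t["hour"], t["minute"])

    # If CURRENT_YEAR is stale, every date reported after December 31 is
    # wrong -- the "recompile to tell it about the new year" problem.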

                        Jerry


911 computer failure

Peter G. Neumann <Neumann@CSL.SRI.COM>
Sat 10 Jan 87 12:04:00-PST
From an article by Dave Farrell, San Francisco Chronicle, 9 Jan 1987:

The city's failure to send help to a choking 5-year-old boy was attributed to
equipment failure, not human error, according to Mayor Dianne Feinstein.  When
Gregory Lee began choking, his Cantonese-speaking grandmother dialed 911, but 
gave up when no one understood her.  The automatic call-tracing program somehow
retrieved the wrong address and displayed it on the police controller's
computer screen.  (The rescue crew was directed to the wrong address.)


Engineering tradeoffs and ethics

Andy Freeman <ANDY@Sushi.Stanford.EDU>
Fri 9 Jan 87 09:58:41-PST
Dan Ball <Ball@mitre.ARPA> wrote:  [He mentions that many engineering
    organizations are so large and projects take so long that individual
    responsibility is suspect, and he notes the uncertainty in predicting
    risks.]
    I'm afraid reducing the problem to dollars could tend to obscure the
    real issues.

What issue is obscured by ignoring information?

    Moreover, even if the [cost-benefit] analyses were performed
    correctly, the results could be socially unacceptable. [...]  In the
    case of automobile recalls, where the sample size is much larger, the
    manufacturers may already be trading off the cost of a recall against
    the expected cost of resulting lawsuits, although I hope not.

Between legal requirements and practical considerations (they can't
pay out more than they take in), manufacturers MUST trade off the cost
of a recall against the probability and cost of lawsuits and other
legal consequences.

The result of a cost-benefit/risks analysis is information, not a
decision.  This information can be used to make a decision.  I think it
is immoral for a decision maker to ignore, or worse yet, not determine
cost-benefit or other relevant information.  (There is a meta-problem:
how much should gathering the information cost?  People die while
drugs undergo final FDA testing.  Is this acceptable?)  In addition,
gathering that information often uncovers opportunities that the
decision maker was unaware of.

Since we'd like to have cars, there will always be some safety feature that
is unavailable because we can't afford a car that includes it.  (Because
autos and "accidents" are so common, auto risks can be predicted fairly
accurately.)  Unfortunately, the current legal system restricts our access
to information about the tradeoffs that have been made for us.  You might
buy a safer car than I would, but you don't have that information.  The
costs are spread over groups that are too diverse.  A legal system that
encourages that is socially unacceptable.
                                                  -andy


Engineering Ethics

Ken Laws <LAWS@SRI-IU.ARPA>
Fri 9 Jan 87 10:15:26-PST
  Date: Thu, 08 Jan 87 11:29:37 -0500   <RISKS-4.38>
  From: ball@mitre.ARPA <Dan Ball>

  ... I am not convinced that we know
  how to predict risks, particularly unlikely ones, with any degree of
  confidence.

True, but that can be treated by a fudge factor on the risk (due to
the risk of incorrectly estimating the risk).  There are difficulties,
of course: we may be off by several orders of magnitude, different
tradeoffs are required for large, unlikely disasters than for small,
likely ones, and certain disasters (e.g., nuclear winter, thalidomide)
may be so unthinkable that a policy of utmost dedication to removing
every conceivable risk makes more sense than one of mathematically
manipulating whatever risk currently exists.

  I would hate to see a $500K engineering change traded off against
  a loss of 400 lives @ $1M with a 10E-9 expected probability.  I'm afraid
  reducing the problem to dollars could tend to obscure the real issues.

How about a $500M tradeoff against a loss of 1 life with a 10E-30
probability?  If so, as the punch line says, "We've already
established what you are, we're just dickering over the price."
The values of a human life that are commonly accepted in different
industries seem to fall in the $1M to $8M range, with something
around $2M being near the "median".
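The bare arithmetic of the two hypothetical tradeoffs above, using only the
figures already quoted in this exchange, as a sketch rather than a real
analysis:

    def expected_loss(lives, dollars_per_life, probability):
        # Expected dollar loss = lives lost * value per life * probability.
        return lives * dollars_per_life * probability

    # Ball's example: a $500K engineering change vs. 400 lives @ $1M with
    # a 10E-9 expected probability.
    print(expected_loss(400, 1e6, 1e-9))    # $0.40, vs. a $500,000 fix

    # The counter-example: a $500M change vs. 1 life @ ~$2M with 10E-30
    # probability (using the "median" figure mentioned above).
    print(expected_loss(1, 2e6, 1e-30))     # $2e-24, vs. a $500,000,000 fix

    # A "fudge factor" for uncertainty in the risk estimate simply scales
    # the probability, e.g. by 100 or 1000 if it may be off by orders of
    # magnitude; it does not change the structure of the comparison.

In both cases the expected loss is far below the cost of the fix; the
disagreement above is over whether such a comparison should settle the
matter.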

  Moreover, even if the analyses were performed correctly, the results could
  be socially unacceptable.  I suspect that in the case of a spacecraft, or
  even a military aircraft, the monetary value of the crew's lives would 
  be insignificant in comparison with other program costs, even with a
  relatively high hazard probability.

The "value of a human life" is not a constant.  The life of a volunteer or
professional, expended in the line of duty, has always been considered less
costly than the life of innocents.  If we forget this, we end up with a few
$60M fighter aircraft that can be shot down by two or three less-secure $5M
aircraft.  (I predict that the next protracted U.S. war will be won by
expendable men in jeeps with bazookas, not by autonomous vehicles.)

  In the case of automobile recalls, where the sample size is much larger,
  the manufacturers may already be trading off the cost of a recall against
  the expected cost of resulting lawsuits, although I hope not.

Of course they are.  The cost of lawsuits is much more real than any
hypothetical cost of human life.  In fact, the cost of lawsuits *is*
the cost of human life under our current system.  The fact that awards
differ depending on manner of death, voluntarily assumed risk,
projected lifetime income, family responsibilities, etc., is the
reason that different industries use different dollar values.

I think we should set a formal value, or set of values, if only to
ease the burden on our courts.  It would give us a firm starting
point, something that could be adjusted according to individual
circumstance.  This is already done by the insurance industry
and their guidelines are also used by the courts in setting reasonable
damage awards ($x for mental suffering, $y for dismemberment, ...).
It would not be a big change to give legal status to such values.
Courts would still be free to award punitive damages sufficient to
exert genuine influence on rogue corporations.

As for the dangers of incorrectly estimating risks, I think that the
real danger is in not estimating risks.

                    -- Ken Laws


Engineering Ethics

George Erhart <gwe@cbosgd.mis.oh.att.com>
Fri, 9 Jan 87 16:05:50 est
Whether or not we like to admit it (or even are aware of it), we all
(not just engineers) place a monetary value on human life. For example,
consider the number of people who drive small cars; most of these are less
survivable in a collision than larger, more expensive autos. The purchasers
usually are aware of this, but accept the risks to save money.

How many of us have rushed out to have airbags installed in our cars?  How
often do we have our brakes checked?  Do we even wear our seatbelts?

The facts are that:
1) No system can be made 100% safe/infallible.
2) The cost of the system increases geometrically as the 100% mark is
   approached (a toy sketch of this follows the list).
3) A compromise *must* be reached between cost and safety.
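A toy illustration of point 2; the doubling factor below is an assumption
chosen only to show the shape of the curve, not an engineering figure:

    base_cost = 10000.0
    for nines in range(1, 7):
        safety = 1 - 10.0 ** -nines           # 90%, 99%, 99.9%, ...
        cost = base_cost * 2 ** (nines - 1)   # assume cost doubles per "nine"
        print("%9.5f%% safe: about $%.0f" % (safety * 100, cost))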

A good example of the last point is the design of ambulances.  We could
make them safer via heavier construction, but this would decrease top speed
(which also makes the vehicle safer). The increased response time, however,
would endanger the lives of the patients. Larger engines can be installed to
regain speed, increasing both the purchase cost and operating expense, which
will result in fewer ambulances being available, and increased response time.

We set the value of human life in countless ways. We must; it is an unavoidable
situation.  But that value is rarely set by an engineer; it is fixed by
consumers (read: you and me), who determine how much they are willing to pay
for their own safety.

Bill Thacker   -   AT&T Network Systems, Columbus, Ohio


Re: computerized discrimination

Randall Davis <DAVIS%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Sun 11 Jan 87 13:54-EST
> Date: Wed 7 Jan 87 15:54:13-PST
> From: Ken Laws <LAWS@SRI-IU.ARPA>
> Subject: Computerized Discrimination
>
> ... Randall Davis made the implicit assumption that the discrimination
>consisted of a rule subtracting some number of points for sex and race,
>and questioned whether the programmer shouldn't have blown the whistle.

Here's the relevant paragraph:

   One can only imagine the reaction of the program authors when they 
   discovered what one last small change to the program's scoring function 
   was necessary to make it match the panel's results.  It raises interesting 
   questions of whistle-blowing.

There's no assumption there at all about the form of the scoring function.

One "small change" that would be at the very least worth further investigation
is the need to introduce race as a term.  Whatever its coefficient, the need
to introduce the term in order to match the human result should at least give
one pause.  That's the whistle-blowing part: one ought at least to be wary and
probe deeper.  "Reading the polynomial" to determine the direction of the
effect may not be an easy task, but this is one situation where the
circumstances should quite plausibly inspire the effort.

The point remains that the polynomial, once created, can be examined and
tested objectively.  No such option exists for people's opinions and unstated
decision criteria.
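A sketch of what "reading the polynomial" can look like in practice; the
field names, weights, and the linear form below are hypothetical, not taken
from the program under discussion:

    # Hypothetical linear scoring function; fields and weights are invented.
    weights = {
        "exam_score":   1.00,
        "interview":    0.50,
        "references":   0.25,
        "is_female":   -3.00,   # the kind of term that should give one pause
        "is_minority": -3.00,   # likewise
    }

    def score(applicant):
        # "Polynomial" score: sum of weight * feature over all terms.
        return sum(weights[k] * applicant.get(k, 0) for k in weights)

    # Unlike a panel's unstated criteria, the fitted weights are inspectable:
    # any nonzero coefficient on race or sex is visible and can be tested.
    for name, w in weights.items():
        if name in ("is_female", "is_minority") and w != 0:
            print("warning: %s carries weight %g" % (name, w))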
