The RISKS Digest
Volume 5 Issue 78

Friday, 18th December 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Roger Boisjoly and Ethical Behavior
Henry Spencer
Ronni Rosenberg
Computer aids taxi dispatch
Jeff Lindorff
Re: product liability
Martyn Thomas
Re: Expert systems liability
Jonathan Krueger
Re: Australian telecom blackouts and 'hidden' crimes
Jon A. Tankersley
Wall Street Kills The Messenger
Scot E. Wilcoxon
Expert systems; Ejection notice?
Steve Philipson
Squirrels, mice, bugs, and Grace Hopper's moth
Mark Mandel
Info on RISKS (comp.risks)

Roger Boisjoly and Ethical Behavior

Henry Spencer <mnetor!utzoo!henry@uunet.UU.NET>
Fri, 18 Dec 87 04:41:38 EST
There has been a fair bit of back-and-forth over Roger Boisjoly et al. in
private mail [subsequent to RISKS-5.63,70,71], most of which is pretty
peripheral to Risks.  Herewith a straight chronology of verifiable events.
One or two personal notes are in brackets [].  Numbers in brackets are page
numbers in the Rogers report, "Report of the Presidential Commission on the
Space Shuttle Challenger Accident".  (Any library with any pretensions to
quality should have this; it is not an obscure technical report, but a
widely-distributed and not overly expensive book that is basic to real
understanding of the disaster.)  Quotes in single quotes are approximate,
double quotes are literal.

Dramatis Personae:

B = Roger Boisjoly, Morton-Thiokol engineer
L = Bob Lund, M-T VP engineering
H = George Hardy, NASA manager
M = Larry Mulloy, NASA manager
K = Joe Kilminster, M-T VP boosters
R = Stan Reinartz, NASA manager

The scene:  a teleconference between M-T Utah and two NASA centers, called
    to discuss the issue of cold vs. SRBs [107].

1. B: 'Don't launch.' [89]  L: 'Don't launch.' [90]

2. H: 'Argh.  But if contractor says don't launch, we won't.'  [Note NASA
    willingness to at least talk about not launching.] [90]

3. K: 'If the engineers say no, M-T says no.' [90]

4. M&H: 'Argh.  We think it's not that bad.  We're impatient to launch.' [91-2]

5. K: 'We want a recess to talk about it.'  Done. [92]

6. Much discussion.  L told to put on his management hat. [93]

7. Teleconference resumes, same participants [including B]. [108]

8. K: 'Go ahead and launch.' [93]  B comments later in testimony:  "I did
    not agree with some of the statements that were being made to
    support the decision." [93]  [Note:  not just 'decision wrong' but
    'supporting arguments are lies'.]

9. R asks whether anyone in the teleconference has a different position or
    further comments. [96,100]

10.  --->  SILENCE  <--- [96,100]  In particular, B is silent. [93]

11. Teleconference concludes.  B is unhappy but does nothing. [93]

12. Next morning:  manned space program in shambles, seven astronauts dead.

13. Later, in testimony, B:  "I felt I really did all I could to stop the
    launch." [93]


The reader will have to form his own opinions on whether Boisjoly was, in
these events, a heroic whistleblower risking his job for his principles,
or a dutiful company man who shut up when his management told him to shut up.
He clearly did become a whistleblower later... after the damage was done.

Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Roger Boisjoly and Ethical Behavior

Ronni Rosenberg <ronni@CCA.CCA.COM>
Fri, 18 Dec 87 15:23:48 EST
I am afraid I continue to disagree with Henry Spencer's interpretation.  It
appears that he condemns Boisjoly because Boisjoly did not speak out at the
teleconference after Morton Thiokol and NASA management decided to launch.
But Boisjoly had argued his point vigorously at that meeting.  NASA was part
of the teleconference and heard these arguments; much of this is left out of
the clipped, paraphrased excerpts from the Rogers report.  The fact that
Boisjoly did not repeat his argument after Thiokol management clearly chose
to override it hardly seems like a worthwhile basis for such harsh criticism.

Also, it is simply wrong to say that Boisjoly did not take risks before the
teleconference.  With the possible exception of tenured faculty, anyone who
works in an organization takes a risk when criticizing management.  Boisjoly
did just that, repeatedly, with regard to the O-ring issue.  This is not the
behavior of a "dutiful company man"!  Continued criticism incurs management
dissatisfaction, and Boisjoly increasingly was shunned and treated badly by his
managers and some colleagues; yet, he kept raising the issue.  The risks that
he took are clear when you hear him discuss these events at length, but not
when you read the "straight chronology of verifiable events" on which Spencer
appears to base his entire argument.  Eventually, critical behavior puts one's
job on the line.  Anyone who has worked at an organization should know the
very real risks of being a critic.

Before the teleconference, Boisjoly took all possible action within Morton
Thiokol.  He complained in writing to increasingly high levels of management,
up to the VP of Engineering.  As a direct result, Thiokol set up a group to
investigate potential problems with the O-rings.  Boisjoly says the company
did not assign enough resources for the group to collect adequate data.  Lack
of adequate resources is a common complaint.  However, the procedure of
setting up a group to investigate the problem was a reasonable one on the
company's part.  This procedure probably was a standard part of the company's
structure for resolving problems, and that corporate structure had resulted in
a successful shuttle program until then.  Boisjoly had no reason to think that
the structure would fail this time:  As an engineer, he was not in a position
to identify the organizational flaws that the Rogers commission later pointed
out.  Hence, it is not clear that he should have gone public at this time.

It was not until the teleconference -- when Boisjoly genuinely believed the
launch would be delayed -- that the lack of an adequate O-ring investigation
became a critical problem.  It is at this point that Spencer finds fault with
Boisjoly's lack of action.  Yet, Boisjoly did not merely "raise doubts."  He
argued vigorously against launch, presenting what data he had to support his
position, to both NASA and Morton Thiokol.  Others made the decision.  It was
clear that repeating his points would not have affected the decision.  He had
argued his points strongly, and it appeared that NASA and Thiokol management
wanted to override them.

In hindsight, it is easy to say he should have gone public at this point.  But
making ethical judgments in hindsight is unfair.  Spencer gives Boisjoly
little credit for going public later.  But without the testimony of critical,
in-house engineers, the Rogers commission had little chance of discovering the
truth.  It was Boisjoly's perception that the commission was not getting the
full story, so he risked his job and livelihood by testifying:  The evidence
shows that whistle-blowers are unwelcome in their own or other companies after
they go public with criticism of their company.

I believe Boisjoly did everything he was ethically obligated to do, by
speaking out at every available level and time to express his strong concerns
and, at the end, his strong feelings against launching.  By doing this, he
had nothing to gain, he endured censure within the company, and he took great
risks.  When he later went public, he risked his job and livelihood.  To say
he was wrong because he did not do this earlier is to say he should have been
a hero, not an average ethical human being.  I think an heroic standard of
behavior is unfair.  While it would have been ethically right to go public
earlier, I do not believe it was ethically wrong to postpone this heroic step.


Computer aids taxi dispatch

Jeff Lindorff <cae780!sequent!jeffl@sri-unix.ARPA>
Thu, 17 Dec 87 13:58:52 pst
Getting a cab in Portland, OR., may or may not be getting easier ...

(excerpted without permission from "The Oregonian")

IT'S NO LONGER CATCH AS CAB CAN -- COMPUTER SUPPLANTS RADIO DISPATCH --

Her name is Cathy, and better than anyone else she'll remember the week
Broadway Cab (in Portland, OR.) got its new computers.  She called the
company one recent night and asked to have a particular driver call her. A
message was sent over the company's brand-new computer system, but it
was inadvertently relayed not to him alone, but to the entire fleet of cabs
on the road that night.  She received 31 phone calls and a lesson in what
happens with new computers, said Ed Stemwedel, night supervisor.

Things are changing around Broadway Cab. All 125 cabs in the company's fleet
are being equipped with the small screens of a computer that will locate and
assign a fare to the closest driver. The system will cost the company $500,000,
and still includes radios. Drivers, after all, still need to check for special
instructions. But the radios probably won't be used much.
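
As a rough sketch of the idea -- none of this reflects the actual MDI
design; the cab positions, terminal addresses, and message format are all
invented -- the heart of such a dispatcher assigns each fare to the nearest
free cab and addresses the message to that one terminal.  Cathy's 31 phone
calls correspond to the commented-out last line: a fleet-wide broadcast
address used where a single terminal ID was meant.

    import math

    # Hypothetical data; positions are (x, y) map coordinates.
    cabs = {
        17: {"pos": (2.0, 3.5), "free": True},
        42: {"pos": (8.1, 1.2), "free": True},
        63: {"pos": (5.0, 5.0), "free": False},
    }

    BROADCAST = 0    # special address: every terminal in the fleet

    def nearest_free_cab(fare_pos):
        """Return the ID of the closest free cab, or None."""
        best_id, best_dist = None, float("inf")
        for cab_id, cab in cabs.items():
            if not cab["free"]:
                continue
            d = math.dist(cab["pos"], fare_pos)
            if d < best_dist:
                best_id, best_dist = cab_id, d
        return best_id

    def send(address, text):
        # In the real system this would go out over the data radio.
        print(f"-> terminal {address}: {text}")

    cab = nearest_free_cab((3.0, 3.0))
    if cab is not None:
        send(cab, "Pick up fare at 3rd and Main")   # one cab: cab 17
        # send(BROADCAST, "Call Cathy")             # oops: entire fleet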

Drivers generally seem to support the new system. It's more fair and efficient
and provides good protection against "theft" of fares by other cabs, several
drivers said a few days after getting the new computers. But things will be
different.

There have been some problems. Two, three, even as many as six cabs have been
showing up for one fare. There have been some long delays, sometimes an hour
or more. And some calls are being missed altogether. But those problems are the
result of human error more than anything else, said Warren Krupa, a dispatcher.

"The folks here don't know what we're doing yet" with the computers, he said.
"But every day, every shift, its better. It'll take a few days."

The computer system was built by Mobile Data International of Vancouver, 
British Columbia. The Broadway Cab system is the only one like it on the West
Coast of the United States. Vancouver, Houston, Dallas, Miami, and some New
York cab companies recently have started using a similar setup, according to
Denny Reed, Broadway Cab's marketing director.


Re: product liability

Martyn Thomas <mcvax!praxis!mct@uunet.UU.NET>
Thu, 17 Dec 87 17:48:56 BST
In Risks 5/73 John Gilmore writes:
> For imported goods, the original importer into the EEC is liable.
> ..I am curious how long ... mcvax will last ... it may be the largest 
>single channel for import of software.

I'm no lawyer, but I believe the liability will fall on the first company 
in the import chain which supplies the software as a business transaction.
I believe mcvax will escape liability, though anyone importing software by
this route and selling it on could be liable (under the UK Act).

> Does Lloyds of London sell bug insurance?

Yes - they are one of the largest underwriters of product liability and
professional liability risks.  

> It might be fun for someone to sue Praxis .....

Is this one of the RISKS of posting to this group?  Anyway, we're insured,
so sue away! :-)

Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
Tel:    +44-225-444700.   Email:   ...!uunet!mcvax!ukc!praxis!mct 


Re: Expert systems liability (RISKS-5.75)

Jonathan Krueger <dgis!jkrueger@uunet.UU.NET>
Fri, 18 Dec 87 02:32:42 est
It's unreasonable to sell a product for general use when you know in advance
it will mislead anyone but a specialist.

>Lawsuits are society's way of enforcing product specifications and warranties.

Aren't there some other ways? How about product testing and publication
of findings, product and manufacturer reputations and their relation to
repeat business or lack thereof, contractual remedies, buyer complaints
and buyer/seller negotiation, and buyer preference of distributors and
middlemen who expedite resolution of buyer complaints?

Of course, these depend on the seller's interests in cooperation; there
are sellers who won't cooperate.  But lawsuits have their limits too.
The buyer may sustain damages difficult to prove.  Statutes of
limitations expire.  The seller may go bankrupt or skip town.  He may
stall.  He may tie up the courts with legal maneuvering.

So neither lawsuits nor the alternatives can enforce product
specifications and warranties.  Not in the sense that helps an
individual buyer.  But, over the long run, don't suits force bad
companies to shape up?  In some cases.  Certainly the seller will act
to limit his liability; one way is to improve product quality. But
there are cases where all steps have been taken long ago, when product
quality is already as high as anyone knows how to make it.  At that
point the seller doesn't work to make his product meet the specs; he
pays his last settlement and simply chooses not to sell any more.

For instance, there used to be several polio vaccine manufacturers in
the U.S.  Today there is only one.  Probably within five years we'll
import the stuff.  There is no question that increased product
liability caused this.  The court cases are known, the history is well
documented, you can add up the costs yourself.  As I understand it,
with the highest quality vaccine, about one in ten million kids will
get a polio injection and it will kill him.  A few more will experience
severe side effects.  Millions of kids are injected every year.
Parents sue more these days.  As the settlements added up,
manufacturers decided one by one to leave the market.
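
A back-of-the-envelope calculation makes the exposure concrete.  The
one-in-ten-million death rate is the figure above; the annual injection
count is an assumed round number standing in for "millions of kids are
injected every year".

    # The death rate is from the post; the volume is assumed.
    p_death = 1.0 / 10_000_000         # one in ten million
    injections_per_year = 10_000_000   # assumption for illustration

    expected_deaths_per_year = p_death * injections_per_year
    print(expected_deaths_per_year)    # 1.0 -- roughly one fatal case,
                                       # and one potential suit, per year,
                                       # at the best achievable quality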

How is the remaining U.S. vaccine manufacturer managing?  Well, it
raised the price to cover its losses.  In no sense has it been forced
to put its money where its mouth is; who do you think pays for public
vaccination?  The company plans to continue production as long as costs
from lawsuits are predictable enough to meet with price increases.

Has the buyer been protected?  Did we drive the shoddy manufacturers
out of the market?  No.  Product quality displayed a slight negative
correlation with court and settlement costs.  Essentially, quality was
as high as anyone could make it.  It gradually improved over the years,
including the years when manufacturers' liability costs shot up.

Will kids enjoy a lower risk?  No.  Public health authorities are now
arranging to buy vaccine from foreign sources when domestic is no
longer available.  Foreign labs can equal quality of domestic
manufacturers.  They can't do any better.  Sometimes they do worse.
But the alternative is no vaccine at all.

The simple RISKS:
    No matter what, every injection RISKS death.
    Not to inject RISKS polio for that individual.
The less simple RISKS:
    Drug manufacturers RISK getting sued.
    We all RISK a public health disaster.
The interaction RISK:
    Companies take fewer RISKS to develop and sell useful drugs.

How does this apply to expert systems?  By analogy.  There's the RISK,
well known to RISKS readers, of not using the computer.  This could
result from companies taking longer to develop and test the system,
during which time the expertise it provides is less available.  Or it
could result from fewer companies choosing to invest in expert system
development.  Perhaps the entire field might advance and find
acceptance and application more slowly.  Is anyone willing to stand up
and say that we'd be worse off if commercial and widespread use of
expert systems were delayed by ten years?  How about twenty?  Perhaps
we'd all be better off if the software were twenty years better
understood when it became an off-the-shelf product.

More generally, what are the trade-offs involved in driving the
creation of software by fear of lawsuit?  Does it motivate software
companies to take longer but deliver better product when they do
deliver it?  How many companies will instead choose to develop less
flexible, more constrained, less generally useful software?  Any degree
of experience with the computer field should convince one that there are
satisfying amounts of money to be made offering incrementally better
software at last year's prices.  How often will the released product
behave no more safely or correctly than it would have if we got it five
years earlier, but now the product manager can tell the jury "We did
five years of in-house testing!"?

How many companies will make code that does little beyond covering the
seller's, uh, liabilities?  There are features of non-computer products
out there aimed specifically at limiting liability, features which
exist solely to be pointed out to juries as Safety Measures We Took.
How useful will software be that follows this model?

How many companies will do what they can to develop a good product but
put their faith in passing along their court and settlement costs to
their customers?  And how many will simply choose not to enter the
market, because in our zeal to protect the buyer we left the seller a
little too exposed?  How many of them would otherwise create wonderful
and useful systems, which like vaccines are used with risk but are
better than the alternative?  It's hard to evaluate this RISK.  But
it's there.  In my opinion it argues against punitive litigation as the
buyer's sole remedy and seller's chief motivation for quality.


Re: Australian telecom blackouts and 'hidden' crimes (RISKS-5.73)

Jon A. Tankersley <uunet!apctrc!cr1a!zjat02@mimsy.umd.edu>
Tue, 15 Dec 87 14:36:58 CST
In Tulsa, about 3 years ago over Thanksgiving weekend, criminals took
chainsaws to the wiring boxes of the phone company to cover up other crimes.
The service to about 1/3-1/2 of Tulsa was knocked out.  The criminals were
later caught.  Unfortunately I can't seem to remember any particulars.  I've
been asleep a few times since then.
                                                   -tank-


Wall Street Kills The Messenger

Scot E. Wilcoxon <umn-cs!datapg.MN.ORG!sewilco@cs-gw.D.UMN.EDU>
Thu, 17 Dec 87 16:00:23 CST
"Wall Street Kills The Messenger" is an article in 'Computer &
Communications DECISIONS', Dec 1987, pg 72-74,104.

The problem on Wall Street was not too much automation but rather too little.
The investment strategy which most obviously failed was portfolio insurance.
Portfolio insurance assumes quick trades.  The weak point was not the NYSE
computers but rather the interface between them and the human traders.
Futures-contract trading speed was limited by the speed of printers and human
traders on foot on the NYSE floor.  Liquidity seemed to vanish.
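
Portfolio insurance is, roughly, dynamic hedging: when the market falls
past a trigger, sell futures to keep the portfolio above a floor.  A toy
calculation -- every number here is invented -- shows why execution delay
is fatal to the strategy: the sale is priced at decision time but filled
several "ticks" later, by which point the floor is already gone.

    # Toy model: a falling market and a delayed sell order.
    prices = [100, 98, 95, 91, 86, 80, 73, 65]   # invented price path
    TRIGGER = 95    # strategy sells when price first reaches this level
    DELAY = 3       # ticks between the decision and the actual fill

    decision_tick = next(t for t, p in enumerate(prices) if p <= TRIGGER)
    intended = prices[decision_tick]
    realized = prices[min(decision_tick + DELAY, len(prices) - 1)]

    print(f"meant to sell at {intended}, actually sold at {realized}")
    # meant to sell at 95, actually sold at 80 -- the "insured" floor
    # assumed the trade would happen at 95.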


Expert systems; Ejection notice? (RISKS-5.76)

Steve Philipson <steve@ames-aurora.arpa>
Thu, 17 Dec 87 13:32:34 PST
Here are some responses to a few of today's postings.

From: nsc!taux01!taux01.UUCP!amos@Sun.COM (Amos Shapir)
It seems the main problem is blindly relying on expert systems, because of lack
of time and expertise. A well designed expert system should therefore give not
only the answers, but also the decision path by which it arrived at them. A country
doctor may not have all the knowledge that a hospital system provides, but may
well be qualified to judge whether a decision like 'the patient has blue eyes
therefore it's pneumonia' is valid in a particular case.

    This is an excellent idea, and the method should be followed
    whenever possible.  It can't be done in all cases, though, as
    many expert system applications are real time, and the operator
    can't examine the entire decision path in the time available.
    For example, a pilot flying an aircraft through a fly-by-wire
    system can't examine all the control logic while flying the
    airplane.  We can (and should) strive to give as much pertinent
    information about the decisions as possible.

    The NASA Aviation Safety Reporting System (ASRS) contains many
    reports on automated systems problems.  One of particular 
    interest concerns ground proximity warning systems.  A commercial
    crew reported landing after a GPWS alert on approach, as they thought 
    that the alert was erroneous.  (The alert was your standard "pull-up"
    voice message).  It turns out that the flaps were only partially
    deployed and not at their correct landing setting.  The GPWS could
    have been programmed to alert them to the specific logic rule
    that caused it to activate (e.g. "pull up!  flaps not in landing
    configuration").  This might be difficult to do in practice, as 
    the GPWS considers many factors, and would have to draw a
    conclusion about the intended maneuver.
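
    A minimal sketch of the rule-tagging idea (the thresholds, rule
    names, and flap logic here are all invented for illustration, not
    taken from any real GPWS): each alert rule carries a readable tag,
    so the warning can say which condition fired instead of a bare
    "pull up".

        # Hypothetical rule-based alerter; every number is made up.
        def gpws_alerts(altitude_ft, sink_rate_fpm, flaps_deg, gear_down):
            rules = [
                ("SINK RATE",       altitude_ft < 2500 and sink_rate_fpm > 3000),
                ("TOO LOW - GEAR",  altitude_ft < 500 and not gear_down),
                ("TOO LOW - FLAPS", altitude_ft < 500 and flaps_deg < 25),
            ]
            # Report every rule that fired, so the crew sees *why*.
            return [name for name, fired in rules if fired]

        print(gpws_alerts(altitude_ft=400, sink_rate_fpm=800,
                          flaps_deg=15, gear_down=True))
        # ['TOO LOW - FLAPS'] -- the crew in the ASRS report would have
        # seen at once that the alert concerned the flap setting.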

    It's interesting to note that in this case the crew did not 
    blindly follow the recommendation of their expert system --
    as far as they determined, the expert system was at fault.
    Who would have been judged liable if there had been an accident
    as a result of this situation?

From: mnetor!utzoo!henry@uunet.UU.NET

>Mind you, there is a negative side to having a relatively thin canopy.  There
>was a recent accident in Britain, not yet explained in detail, which *might*
>have been due to the parachute-deployment system of an ejection seat firing
>*through* the canopy by accident (i.e. not as part of an ejection) and pulling
>the pilot out of the plane after it.  The plane (a Harrier) unfortunately kept
>on flying and eventually ran out of fuel over deep ocean.  Recovering it will 
>be difficult, but may be tried because more information is badly needed.

   Aviation Week printed a preliminary article on this accident.  It seems
that the Harrier had an experimental back-up bail-out system.  There had been
some problems with low-altitude ejections, so this new system was devised in
case the normal ejection seat did not function.  It was supposed to work by
firing a rocket to deploy the pilot's chute, and to release him from the
ejection seat.  Every ejection system has a safe deployment envelope -- it
appears that the Harrier was flying faster than the top of this system's
limit.  The accident investigators have found no way that the system could
have been fired accidentally, and are trying to determine if the pilot
intentionally activated the system.

   It seems that even emergency backup systems have their risks.  These
systems must also be integrated into the overall system.  Perhaps for
computer driven systems there should be manual (physical) interlocks
hardwired in to prevent dangerous excursions from normal operation.

Steve Philipson, NASA/Ames Research Center, Moffett Field, CA


Squirrels, mice, bugs, and Grace Hopper's moth

Mark Mandel <Mandel@BCO-MULTICS.ARPA>
Fri, 18 Dec 87 12:06 EST
The word "bug", in the sense we use in the computer world, did NOT
originate with "Amazing" Grace Hopper's moth.  It is attested in
non-computer environments, but with the same meaning ("a mistake in
design that causes errors in operation"), from before the time of the
computer; certainly before the date of the log entry-with-moth.  William
Safire discussed this around a year ago.  I've found a use in a
little-known non-Tarzan novel of Edgar Rice Burroughs (_Beyond the
Farthest Star_), in a context of aerodynamics or rocket design.  Though
the date of the book is a couple of years after Hopper's moth, the usage
-- without explanation, as a colloquialism that the author assumes the
reader will understand -- is evidence that the term was already well
known among engineers outside the then-nascent field of computing.
