The RISKS Digest
Volume 9 Issue 15

Tuesday, 22nd August 1989

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Toronto Stock Exchange down for 3 hours, disk failures
Peter Roosen-Runge
Automated highways ...
Jerry Leichter
Bill Gorman
Peter Jones
Emily H. Lonsford
Bill Murray
Constructive criticism? Technology doesn't have to be bad
Don Norman
Computer Ethics
Perry Morrison
Info on RISKS (comp.risks)

Toronto Stock Exchange shut down for 3 hours by disk failures

Peter Roosen-Runge <peter@nexus.yorku.ca>
Tue, 22 Aug 89 11:20:16 EDT
The following excerpts from Toronto newspapers indicate the reactions expressed
following the failure of a Tandem `non-stop' system at the Toronto Stock
Exchange on August 16th. Note that the Tandem system did not in fact stop, but
the three disc-drive failures prevented access to critical market information,
without which trading could not continue.

    "TSE `crash' drives trades to Montreal, Brokers curse computer system"
       (Mark Hallman, Financial Post, Thursday August 17, 1989, p. 1.)

A computer crash all but shut down trading on the Toronto Stock Exchange for
almost three hours yesterday, forcing tens of millions of dollars' worth of
trades to Montreal. ... [the crash] — a multiple failure within a disc-drive
subsystem — forced a halt at 9:41 AM. ... `Two pieces of hardware break down
and Bay Street breaks down,' said a sour Charles Mitchell, trader. ... `Who's
accountable for that?' ...

A TSE spokeswoman said the failure of both primary and backup systems had
never occurred since computers were installed 26 years ago. ...

`Today, the market floors are so automated that when you have this kind of
computer failure, there is nothing that can be done', exchange
President Pearce Bunting said. `The greater your dependence on computers, the
greater the risks.'

The exchange's computer assisted trading system (CATS) was not affected and
smaller stocks continued to trade.  But only about 25% of the stocks listed on
the TSE are traded using CATS. ...

`Everybody's very much annoyed,' McLean McCarthy's Mitchell said. `It's
costing us a lot of money.  I think the people upstairs in the exchange should
be held accountable for it.'

[Note: Exchange floor traders failed to haul out the traditional chalkboards
to continue trading manually "in the interests of a fair market".]

                 "Fail-safe Tandem system fails"
                Geoffrey Rowan and Lawrence Surtees
                Globe and Mail, Aug. 17 1989, p. B-1

The stock market crash of 1989 — yesterday's three-hour trading halt at the
Toronto Stock Exchange — was caused by multiple failures in a computer [Tandem
VLX] designed to work even when some of its pieces are broken.

... Peter Richards, president of Tandem Computers Canada of Markham, Ont.,
said the disk drives suffered three separate failures that were a `once in a
lifetime event'. ... Mr. Richards said yesterday's failure was not a computer
crash. `The whole system, minus the data on the one drive, was up and running
and available to the exchange during the entire morning.  That also meant that
recovery was done with the computers on-line, making it a quicker process than
with standard computers.'

             "TSE seeking to prevent further computer chaos",
        Geoffrey Rowan, Globe & Mail, August 21, 1989, p. B-1, B-4.

... `Fault-tolerance guarantees that you will not have a problem from any
single failure', said John Kane, vice-president for marketing strategy for
Tandem. `I've never heard of three failures at one time.  I don't know what
the gods were doing to us.'
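
  [A numerical aside, not from the articles: both the `any single failure'
  guarantee and the `once in a lifetime' estimate rest on an assumption that
  drive failures are independent.  A rough sketch, with made-up failure rates
  purely for illustration, of how far apart the independent and common-mode
  estimates can be:

      # Hypothetical numbers, for illustration only: probability that three
      # drives fail in the same window, assuming independence, versus a
      # shared common-mode cause (power, firmware, environment).
      p_single = 1e-4            # assumed chance one drive fails in a given hour
      p_independent = p_single ** 3    # all three fail independently
      p_common_mode = 1e-6             # assumed chance a shared cause takes
                                       # out all three at once
      print(p_independent)       # about 1e-12: `once in a lifetime' territory
      print(p_common_mode)       # 1e-06: a million times more likely

  Multiplying small probabilities is valid only if the failures really are
  independent; a shared cause makes a triple failure far less surprising.]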

[P. H. Roosen-Runge, Dept. of Computer Science, York University, Toronto, Canada]


Automated highways and the risks of over-sensitivity

<"Jerry Leichter - LEICHTER-JERRY@CS.YALE.EDU">
Mon, 21 Aug 89 16:55 EDT
I've been watching the stream of messages deriding as hazardous, impossible,
stupid, etc., etc., the idea of automatic steering of cars on the highway with
some bemusement - and concern.  The purveyors of the latest bit of technical
wizardry are always ready to call it "absolutely safe", "foolproof", and so
on.  In response, we seem to be developing a "nay-saying" attitude:  Any use
of computer controls is inherently unreliable, risky, even downright
dangerous.  This "see no good" response is no better than the "see no evil"
attitude of the purveyors!

Let's try to think about the problem of controlling automobiles RATIONALLY
for a moment.  This is a problem which, WITH SUITABLE DESIGN, is MUCH better
solved by machines than by human beings!  Driving on a highway consists of
long periods of boredom punctuated by random, unpredictable moments in which
quick responses are required.  People are very bad at dealing with this kind
of situation.  They get bored, their attention wanders; few are adequately
trained to respond quickly, without hesitation or thought, to the
near-instantaneous (on the human time scale) dangers which can arise.
Computers, on the
other hand, cannot get bored, cannot start thinking about their date that
evening, and can respond very quickly, without hesitation or panic, if
something goes wrong.

It is quite true that a system which mixed human with machine drivers would be
very hazardous:  Neither is good at dealing with the responses or abilities of
the other.  In any case, I'll bet that MOST of the hazards of highway driving
are the result of problems with the human drivers, not with the road, the
mechanical parts, or other external factors.  In some 20 years of driving,
I've had my share of scary highway encounters.  Without a single exception,
they were all clearly the result of poor human decision-making:  Unexpected
lane changes without checking whether the lane was clear; driving way too fast
for the icy conditions of the road and going into a spin at 60 mph; wandering
to the edge of the road, touching the curb, then panicking, losing control
over the car, and wandering over several lanes of traffic before regaining
control.  And then of course there are the drunks and the druggies, who
account for a large percentage of accidents.

It is also quite true that the highway system as it is designed today was
intended mainly for human use.  Lane markers that are so clear to the human
visual system are notoriously difficult for computers to see, for example.
But if the goal is a computerized highway, there is no reason in the world to
limit yourself to systems that make sense to humans.  A lane marker is hard
to track, but a buried wire can be an absolute triviality.

Do you worry about the computers controlling elevators?  Elevators have been
controlled by digital computers - albeit relay-based - for 30 years or more.
They, too, are mechanical devices subject to all sorts of failure.

A computerized driving system, designed for such a purpose, operating in a
constrained environment, strikes me as quite practical.  It would have to be
designed carefully, with "fail-safe" modes - which should be doable since,
while tasks like finding the best route to a point 100 miles away require
global information and coordination, failing safe requires local information
(where are the cars around me, what do their computers claim they intend to
do in the next 100 ms., what do my sensors show they actually ARE doing (an
essential, and fairly simple, backup), and so on).
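
A minimal sketch of such a purely local check (the field names and thresholds
here are hypothetical, not drawn from any real or proposed system):

    # Sketch: a fail-safe decision using only local information.  Each
    # neighboring car broadcasts what it claims it will do over the next
    # 100 ms; our own sensors report what it actually appears to be doing.
    SAFE_GAP_M = 5.0        # assumed minimum separation, purely illustrative
    SPEED_TOLERANCE = 2.0   # allowed mismatch (m/s) between claim and observation

    def safe_to_proceed(neighbors):
        """neighbors: list of dicts describing nearby cars."""
        for car in neighbors:
            # Distrust a neighbor whose observed motion contradicts its claim.
            if abs(car["claimed_speed"] - car["observed_speed"]) > SPEED_TOLERANCE:
                return False
            # Keep a minimum gap regardless of what anyone claims.
            if car["observed_gap"] < SAFE_GAP_M:
                return False
        return True

If the check fails, the fallback is equally local: slow down and widen the
gap, which requires no global information at all.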

Any system can be implemented badly.  The structure of some systems makes bad
implementations almost inevitable.  Computer control of cars on today's road
system, in combination with today's drivers, would be such a system - and it
seems to be the system most people are commenting on.  But it is NOT the
system we must necessarily build!
                            — Jerry


Re: Drive-by-wire

"W. K. (Bill) Gorman" <34AEJ7D@CMUVM.BITNET>
Mon, 21 Aug 89 16:51:53 EDT
     If one were to compare the potential level of hazard, based, say, on
fatalities/100,000 miles, which might arise from failure of on-board computers
as opposed to the current levels of fatality caused by intoxicated and/or
drugged-out drivers, heart attacks, strokes, vehicular suicides, driver
error, police chases, etc., I wonder which methodology (hardware vs. wetware)
would actually be safer, once all the emotional rubbish about "men vs.
machines" was cleared away? Just a thought...


California studies "drive-by-wire" (RISKS-9.14)

Peter Jones <MAINT@UQAM.bitnet> (514) 987-3542
Mon, 21 Aug 89 16:27:10 EDT
Bring back the trains, buses and streetcars. These are well-proven, reliable,
non-polluting ways of moving large numbers of people from place to place.
My intuition is that the problem of controlling thousands of automobiles from
a central point would rival Star Wars in complexity.


Re: Automatic vehicle systems (RISKS-9.14)

Emily H. Lonsford <m19940@mwvm.mitre.org>
Monday, 21 Aug 1989 18:04:55 EST
Don't worry too much about these.  If we can't get people to ride public
transportation (like buses and subways) then they won't sit still for the
equivalent in their own cars.  And anyway, how do you get off the "remoteway"
to stop at the cleaners?
                           Emily H. Lonsford, MITRE - Houston W123  (713) 333-0922


Automated Highways

<WHMurray.Catwalk@DOCKMASTER.NCSC.MIL>
Mon, 21 Aug 89 19:37 EDT
>Call me a technophobe if you like, but *NOBODY* can guarantee 100% ...

Well, it must be nice to live in the UK.  Here in the US we have a very
reliable manual system.  It can be so relied upon to kill 40,000 people a year
that we simply take it for granted.  A large percentage of these deaths involve
malicious manipulation of the system, i.e., the ingestion of alcohol and other
drugs.  You may be able to tolerate these deaths in preference to the risk of
computer systems that are less than perfect.  Quite candidly, I am sick to
death of a system that tolerates and defends this carnage, while crying
crocodile tears about computer-related risks.  It is high time that we got a
system for comparing risks that is prepared to put these lives on the same
scale that we use to condemn research into alternatives.

The New York Times concluded in a front page article that the American people
are far more likely to tolerate the huge death toll from their discretionary
and recreational use of highways, tobacco and alcohol than risks which are
comparatively minuscule from automated systems.  I may be stuck with that, but
it is insane and elitist.  It is high time somebody said so.

William Hugh Murray, Fellow, Information System Security, Ernst & Young
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840


Constructive criticism? Technology doesn't have to be bad

Donald A Norman-UCSD Cog Sci Dept <norman%cogsci@ucsd.edu>
Tue, 22 Aug 89 09:09:38 PDT
This has been brewing for a long time.  I enjoy the RISKS forum and tout
RISKS as the best digest available on e-mail/netnews.  But sometimes I
wonder.  Submitters seem to take great glee in presenting YAHS (Yet another
horrible story), but I seldom see any constructive comments — or much
discussion at all — mostly it is simply story time.  And most of the stories
are about why we should not trust technology.  But the unaided human is even
less reliable than the aided one!  Surely we could use the stories as design
examples and attempt to use them to inform our design, so that we could
invent technologies that were real improvements.  Instead of using the
examples for a cheap laugh, how about using them for instruction?

Let me provide some examples, using RISKS DIGEST 9.14 as my example, both
because it is handy and also because it had an excess of scare stories and
cheap conclusions.
    - - -
1. Misdelivered mail because the barcode had the wrong zipcode, and the
machines read it and ignored the written address.  Moral: I would say that
you should not use code that was only machine readable.  Either do like the
banks do — invent a machine-readable text that humans can also read — or do
as supermarkets do - print the "English" version of the bar code beneath the
bar code.  This would make it a lot easier to catch errors of this sort.
(The best solution is to make postal machines that can read the handwriting
on the envelopes, but that is another decade away, at least).  (Because
barcode is already standardized as the post office's system, why not require
English text to be printed beneath the barcode?  No change in equipment is
required at the Post Office end.)
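
A sketch of the cross-check this would make possible (the parsing is
deliberately naive and the function names are invented for illustration):

    import re

    def zip_from_address(address_text):
        """Pull the last 5-digit group out of the printed address (naive)."""
        matches = re.findall(r"\b\d{5}\b", address_text)
        return matches[-1] if matches else None

    def check_envelope(barcode_zip, address_text, printed_zip_under_barcode):
        """True if all available sources agree; False means route to a human."""
        address_zip = zip_from_address(address_text)
        candidates = {barcode_zip, address_zip, printed_zip_under_barcode}
        candidates.discard(None)
        return len(candidates) == 1

The value of the human-readable line under the bars is precisely that the
third source exists at all: a person, or a second program, can compare it
against the written address without having to decode the bars.
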
    - - -
2. Automatic vehicle navigation systems. RISKS readers are really scared of
this one, to judge by the number of times this is raised in RISKS. But what
are the alternatives?  Today we have roughly 50,000 fatalities a year in the
auto.  If driving were all done automatically, there would be occasional
massive screwups.  But would the death toll hit 50,000?  I bet it would
improve things.  (Yes, roughly 1/2 of those deaths have drinking implicated,
but I don't see how this changes my argument.)  These systems in aviation are
reasonably efficient.

Moreover, I see no reason why we couldn't make these things fail reasonably
softly, with lots of warning.  And yes, there would still be foul-ups, but
the baseline is not zero accidents — it is 50,000 deaths.  (Admission of
guilt: I am consulting for a company that is designing one of these automated
systems.)
    - - -
3. One submitter's statement that "I am amazed that Boeing has taken all the
blame... " for pilots' shutting down the wrong engine.  The submitter then
cites other examples of pilots shutting down wrong engines, implying that
this is clearly pilot error (since it happens often and in various aircraft).
So Boeing, or any particular manufacturer, is clearly not at fault.  I
disagree.

This error has long been noted, first well documented in the 1940's.  But it
is the design of the cockpit controls that leads to the error.  So I would
still blame Boeing (or cockpit designers in general).  I have long thought
that many control panels are designed as if to increase the chance of error
(my experience is mostly in nuclear power and aviation), and that there are
redesigns — perhaps using better technology — that would minimize these
errors.  Shutting off the wrong engine is so common (relatively speaking),
that more design effort should have gone into finding better systems.
Instead, folks just blame the pilot.  That is the wrong attitude.

Look, people make errors.  Fact of life.  Therefore, good design will
anticipate those errors and make them either impossible or less likely, or
easier to detect and correct.
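
As one illustration of "easier to detect and correct" (a hypothetical
interlock, not a description of any actual cockpit system):

    # Sketch: compare the engine the crew selects for shutdown against the
    # engine the monitoring system has flagged as faulty; on a mismatch,
    # ask for explicit confirmation rather than silently obeying.
    def shutdown_request(selected_engine, flagged_engine, confirmed=False):
        if selected_engine == flagged_engine:
            return "shutting down engine %d" % selected_engine
        if confirmed:
            return "shutting down engine %d (crew override)" % selected_engine
        return ("WARNING: engine %d is the one flagged as faulty; "
                "confirm shutdown of engine %d"
                % (flagged_engine, selected_engine))

The point is not that the software should overrule the crew, only that a
mismatch between what the sensors indicate and what the crew has selected is
cheap to detect and well worth surfacing.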

(Admission: I am off to Boeing next week to discuss cockpit design.  This
note will not win me friends at Boeing.)
    - - -
4. The report of airliner crew fatigue.  Cockpit crew fall asleep and so
cabin crew are now supposed to wake them up, or there are systems that blast
them with loud sounds.

My solution is quite different: let them sleep!

It is well known that the circadian rhythm has two minima — about 2PM and
2AM.  And flying long hours across time zones is hard on the system.  Suppose
you let crew take naps.  As long as at least one member stayed awake, and if
the nap were announced and known about, there would likely be no problem.
Modern aircraft can be flown with one crew member: more are needed only in
emergencies and in periods of high workloads (e.g., takeoff and landing or in
crowded airspace at low altitudes).  In fact, some airlines are experimenting
with allowing short naps.  The point is, human biology should be lived with,
not fought against.  Naps are natural, and overall efficiency will be
improved if you allow very short naps.  And I suspect we can develop schemes
whereby this improves safety rather than hindering it.  (Citation: Somewhere
buried in my office is a reference to a paper on this topic that I just heard
at a meeting
of the (British) Royal Society on Human Factors in High Risk Technologies.)
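
A sketch of the rule as stated, announced naps with at least one other crew
member awake (the data structure is invented for illustration):

    # crew: dict mapping crew member name -> True if currently awake.
    def may_start_nap(crew, requester, announced):
        """Allow a nap only if it was announced and someone else stays awake."""
        if not announced:
            return False
        others_awake = [name for name, awake in crew.items()
                        if name != requester and awake]
        return len(others_awake) >= 1

    crew = {"captain": True, "first_officer": True}
    print(may_start_nap(crew, "captain", announced=True))    # True
    print(may_start_nap(crew, "captain", announced=False))   # False
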
    - - -
FINAL STATEMENT:

My argument is that human physiology and psychology are reasonably fixed: we
have to design systems to take account of our properties.  But too many of
our technologies are designed with only the technology in mind, and the human
is forced to cope.  Then when the problems occur, we blame either the
technology itself or the person.  But neither is to blame — it is the
design and use of the technology that is wrong.  We could design technologies
that improved our lives and considered our strengths and limitations.  That
is what I am after!
     (I was told just yesterday by a visiting NASA researcher that modern
     Air Traffic Control Centers are clustering operators based on
     considerations of cable lengths and cabling needs, even though this
     is non-optimal in terms of operator performance — people who must
     communicate with one another are thereby put far apart.  It is this
     attitude I am fighting — it isn't the technology that is wrong, it is
     the lack of consideration of the human side of the operation that is
     faulty.)
People are especially good at some things, bad at others.  Creativity,
adaptability, flexibility, and extreme reliability are our strengths.
Continual alertness and precision in action or memory are our weaknesses.
Why not use technology in ways that take advantage of the former and allow
for the latter?  And in reviewing the cases presented in RISKS, why not use
them as guides to better designs and better technology, not as excuses to
lash out against the use of technology in general?

Don Norman                                 INTERNET:  dnorman@ucsd.edu
Department of Cognitive Science D-015          BITNET:    dnorman@ucsd
University of California, San Diego        AppleLink: D.NORMAN
La Jolla, California 92093 USA

  [By the way, even in RISKS Volume 1 there was a complaint that RISKS dealt
  only with problems and not with solutions.  I have on several occasions noted
  that RISKS is limited by what is submitted, and that very few people have
  ever submitted success stories.  On the other hand, I frequently quote Henry
  Petroski's statement that we learn NOTHING from our successes and at least
  have a chance to learn from our failures.  But the real — and not even
  subliminal — purpose of RISKS is to make us all aware of what the risks are
  so that we can intelligently try to avoid them!  No disagreement there!  PGN]


Computer Ethics

Perry Morrison MATH <pmorriso@gara.une.oz.au>
Tue, 22 Aug 89 11:08:24 EST
                                 Computer Ethics:
                 Cautionary Tales and Ethical Dilemmas in Computing
                by Tom Forester and Perry Morrison

            Published by MIT Press and Basil Blackwell
                       To Appear (Hopefully) January 1990

                                  CONTENTS

Preface and Acknowledgments

1. Introduction: Our Computerized Society
Some Problems Created for Society by Computers - Ethical Dilemmas for Computer
Professionals and Users

2. Computer Crime
The Rise of the High-Tech Heist - Is Reported Crime the Tip of an Iceberg? -
Targets of the Computer Criminal - Who Are the Computer Criminals? - Improving
Computer Security - Suggestions for Further Discussion

3. Software Theft
The Growth of Software Piracy - Revenge of the Nerds? - Intellectual Property
Rights and the Law - Software Piracy v. Industry Progress - Busting the Pirates
- Suggestions for Further Discussion

4. Hacking and Viruses
What is Hacking? - Why Do Hackers `Hack'? - Hackers: Criminals or Modern-Day
Robin Hoods? - Some "Great" Hacks - Worms, Trojan Horses and Time-Bombs - The
Virus Invasion - Ethical Issues - Suggestions for Further Discussion

5. Unreliable Computers
Most Information Systems are Failures - Some Great Software Disasters -
Warranties and Disclaimers - Why are Complex Systems So Unreliable? - What are
Computer Scientists Doing About It? - Suggestions for Further Discussion

6. The Invasion of Privacy
Database Disasters - Privacy Legislation - Big Brother is Watching You - The
Surveillance Society - Just When You Thought No One was Listening - Computers
and Elections - Suggestions for Further Discussion

7. AI and Expert Systems
What is AI? - What is Intelligence? - Expert Systems - Legal Problems - Newer
Developments - Ethical Issues: is AI a Proper Goal? - Conclusion: the Limits of
Hype - Suggestions for Further Discussion

8. Computerizing the Workplace
Computers and Employment - Computers and the Quality of Worklife: `De-skilling'
 - Productivity and People: Stress, Monitoring, Depersonalization, Fatigue and
Boredom -  Health and Safety Issues: VDTs and the RSI Debate - Suggestions for
Further Discussion

APPENDIX A    Autonomous Systems: the Case of "Star Wars"

                     Preface and Acknowledgements

     The aim of this book is two-fold: (1) to describe some of the problems
created for society by computers and (2) to show how these problems present
ethical dilemmas for computer professionals and computer users.

     The problems created by computers arise, in turn, from two main sources:
from hardware and software malfunctions and from misuse by human beings. We
argue that computer systems by their very nature are insecure, unreliable and
unpredictable -  and that society has yet to come to terms with the
consequences. We also seek to show how society has become newly vulnerable to
human misuse of computers in the form of computer crime, software theft,
hacking, the creation of viruses, invasions of privacy, and so on.

    Computer Ethics has evolved from our previous writings and in particular
our experiences teaching two courses on the human and social context of
computing to computer science students at Griffith University. One lesson we
quickly learned was that computer science students cannot be assumed to possess
a social conscience or indeed have much awareness of social trends and global
issues. Accordingly, these courses have been reshaped in order to relate more
closely to students' career goals, by focussing on the ethical dilemmas they
will face in their everyday lives as computer professionals.

     Many college and university computer science courses are now including -
or would like to include - an ethics component, but this noble objective has
been hampered by a lack of suitable teaching materials. Computer Ethics has
therefore been designed with teaching purposes in mind in an effort to help
rectify the shortage of texts.  That is why we have included numerous
up-to-date references, as well as scenarios, role-playing exercises and
`hypotheticals' in the `Suggestions for Further Discussion' at the end of
each chapter. The creative teacher should be able to build on these.

     Readers will notice that we have not adopted an explicit theoretical
framework and have avoided philosophical discussion of ethical theory. The
reason is that this book is but a first step, with the simple aim of
sensitizing undergraduate computer science students to ethical issues.
Neither will readers find a detailed account of the legislative position
around the world on the various topics discussed. This is because in each
country the legal situation is often complex, confused and changing fast -
and again this is not the purpose of the book.

     Finally, a note on sources. First, we have to acknowledge an enormous debt
to Peter G. Neumann, whose "Risks to the Public in Computer Systems" sections
in Software Engineering Notes, the journal of the Association for Computing
Machinery's Special Interest Group on Software Engineering (ACM-SIGSOFT), have
provided
inspiration, amusement and a vast amount of valuable information. Long may he
continue. Second, we have to caution that many of these and other sources are
newspaper and media reports, which, like computers, are not 100 per cent
reliable.

Tom Forester,                                  Perry Morrison
School of Computing & Information Technology   Maths, Stats and Computing
Griffith University,                           University of New England
Queensland, Australia                          Armidale, NSW, Australia
