The RISKS Digest
Volume 2 Issue 17

Friday, 28th February 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Replacing humans with computers?
Nancy Leveson
Eastern Airlines stock
Steve Strassmann
Computerized stock trading and feedback systems
Kremen
Computer Voting Booths
Larry Polnicky
Reliance on security
Jong
AI risks
Nicholas Spies
Data Encryption Standard
Dave Platt
Info on RISKS (comp.risks)

Replacing humans with computers?

Nancy Leveson <nancy@ICSD.UCI.EDU>
25 Feb 86 22:06:36 PST (Tue)
I have recently seen several RISKS contributions which assumed that humans
are the cause of most system accidents and that if the human were somehow
replaced by a computer and not allowed to override it (i.e., to mess things
up), everything would be fine.  The issue is too complicated to cover
adequately here, but before rushing off to replace human controllers with
computers, at least consider the following:

  ** Most accidents involve multiple failures of different components
     of the system.  It is rarely possible to pinpoint one particular
     failure as the sole cause.  (e.g. Three Mile Island involved at
     least four or five different types of mechanical failures.  Who
     got the blame?)
  ** There are often powerful and compelling reasons for wanting the
     blame placed on the human.  For example, Babcock and Wilcox can
     be sued for billions if there is something wrong with the design
     of their nuclear power plants — how much can you collect from
     some poor operator?
  ** The human is often called in to save the day after chaos has
     already begun and is then expected to come up with a miracle.  If
     the operator does not save the day, the blame is often placed on
     him or her instead of on the initiating mechanical failures.
  ** Most accidents result from unanticipated events and conditions.
     Thus it is doubtful that computers will be able to cope with emergencies
     as well as humans do.  Expert systems do not help in coping with
     unanticipated events or conditions.
  ** There are many examples of accidents which were averted by a human
     overruling an errant computer.  If the operator had not intervened at
     the Crystal River Nuclear Power Plant, for example, a catastrophe might
     have occurred because of the computer error.  The hype about "expert
     systems" and "artificial intelligence" may be very dangerous.
     There are reports that commercial pilots are becoming so complacent
     about automatic flight control systems that they are reluctant to
     intervene when failures do occur and are not reacting fast enough
     (because of the assumption that the computer must be right).

The problem is just not so simple that the answer "replace the human
with a computer" will solve it.   Nancy


Eastern Airlines stock

Steve Strassmann <straz@MEDIA-LAB.MIT.EDU>
Thu, 27 Feb 86 02:38:17 EST
As an owner of Eastern Airlines stock (fell from $11 to $5 right after
I bought it), I'm particularly upset by this.  I don't know the
details; I hope someone with more knowledge can fill them in.

According to my stock broker (Disclaimer: I don't have any hard
documentation, and I'm not a Wall St. expert), one of the major blows
to the already troubled company was a bogus earnings report issued on
a Dow Jones computer (something like 20 cents instead of $1.50). The
mistake was corrected within the hour, but in that hour, portfolio
managers had dumped Eastern stock, and the price fell $3, and never
recovered. I think this happened around early September.


Computerized stock trading and feedback systems

<kremen@aero>
26 Feb 86 07:57:40 PST (Wed)
There seems to be some misunderstanding about computerized stock trading.

First, "programmed buys" and "programmed sells" really have nothing to do
with computers. All "programmed" transactions could be done by hand but
typically they are extremely complex, so a computer is needed.  Program
trading occurs only when special intermarket conditions are present: it
consists of arbitrageurs exploiting the spread between the value of stocks
on the New York Stock Exchange (NYSE) and the Chicago Board Options
Exchange (CBOE).  Occasionally other markets are used.
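
As a concrete illustration of what such a program does, here is a minimal
sketch, with invented tickers, prices, weights, and cost figures: it computes
a cash index from a small basket of NYSE prices, compares it with a quote on
a Chicago-traded index contract, and trades the basket only when the spread
exceeds transaction costs.  It is a sketch of the idea, not a trading system.

# Intermarket arbitrage in miniature: compare a cash index computed from
# NYSE stock prices with the quote on the corresponding Chicago-traded
# contract, and trade the whole basket when the spread exceeds transaction
# costs.  All figures below are hypothetical.

# Hypothetical index basket: ticker -> (NYSE price, index weight)
basket = {
    "XYZ": (42.50, 0.40),
    "ABC": (17.25, 0.35),
    "QRS": (63.00, 0.25),
}

chicago_quote = 41.10       # hypothetical quote for the Chicago index contract
transaction_cost = 0.15     # hypothetical round-trip cost per index unit

# Cash value of the index implied by the NYSE prices.
cash_index = sum(price * weight for price, weight in basket.values())
spread = chicago_quote - cash_index

if spread > transaction_cost:
    # The Chicago contract is rich relative to the stocks.
    action = "buy the stocks on the NYSE, sell the contract in Chicago"
elif spread < -transaction_cost:
    # The Chicago contract is cheap relative to the stocks.
    action = "sell the stocks on the NYSE, buy the contract in Chicago"
else:
    action = "no trade: spread is within transaction costs"

print(f"cash index {cash_index:.2f}, Chicago quote {chicago_quote:.2f}, "
      f"spread {spread:+.2f} -> {action}")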

Intermarket arbitrage adds to market volatility, but not in a negative sense.
The infamous "Triple Witching Hour", a period of extremely volatile trading
that occurs four times a year, is a direct result of this intermarket
arbitrage.

In his note, Eric Nickell compared the market to a feedback system that
oscillates, something like a forcing function at resonance.  That is not
really true.  The market cannot really get swamped, because something will
"break down" first.  In the case of the NYSE, the market makers will declare
an "order imbalance", which halts further trading.
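
The point can be pictured with a toy simulation, using entirely made-up
numbers: a driven feedback loop whose swings grow over time but are cut off
as soon as the imbalance crosses a halt threshold.  It is only a sketch of
the idea, not a model of the NYSE.

# Toy driven feedback loop with a trading halt.  All numbers are hypothetical.
import math

price = 100.0
imbalance_limit = 8.0       # hypothetical threshold for an "order imbalance"

for t in range(1, 51):
    # Forcing term: outside buying/selling pressure near the loop's resonance.
    pressure = 3.0 * math.sin(0.6 * t)
    # Positive feedback: traders chase the move, amplifying it over time.
    imbalance = pressure * (1.0 + 0.05 * t)

    if abs(imbalance) > imbalance_limit:
        # The market makers declare an order imbalance; trading halts before
        # the oscillation can grow any further.
        print(f"t={t}: imbalance {imbalance:+.1f} exceeds limit; trading halted")
        break

    price += imbalance
else:
    print("no halt occurred in this run")

print(f"final price {price:.2f}")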


Computer Voting Booths

Larry Polnicky <Polnicky@HIS-PHOENIX-MULTICS.ARPA>
Wed, 26 Feb 86 10:43 MST
In RISKS Vol 2, Issue 16, Kurt Hyde writes:

> There are many documented cases of accidental miscalculation in computerized
> vote tallying equipment.  The reasons why such errors were discovered is
> because reconstruction and recount was possible.  Investigators
> reconstructed by gathering the machine-readable ballots.  They were then
> able to recount by machine or by hand.  Such reconstruction is impossible
> with the current state of the art in computerized voting booths because no
> physical ballots are created.  Recounts in such cases are wholly dependent
> upon the software to have stored each vote in its proper storage location at
> the time of voting.

While the risks would not be entirely removed, and regardless of whether any
fraud or error is suspected, there could be a standard practice initiated whereby
a sample from each election is validated by follow-up phone call or
physical notification.  Privacy could be somewhat maintained by automating
this process, e.g., immediately after the polls close, the computer randomly
selects some small sample and sends a letter saying, "Citizen Jones,
according to our computer voting system, you voted thusly:...."  The
citizen then returns the card validating or invalidating his voting record.
A box could be provided for the citizen to indicate that he would rather not
acknowledge via mail, or not respond at all; the percentage of such respondents
would probably be low.  Also, since some people may goof or maliciously
be inconsistent, the final validation would not have to be unanimous;
some standard percentage of validation would pass as I believe it does
today in a recount.  If delegating the follow-up procedure to a computer
is the start of a new computer risk, then it could be done manually,
but I believe this kind of check-back mechanism would significantly
reduce the risks involved in computer voting to the point that it
could gain approval.
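
A minimal sketch of this check-back step, with made-up voter records, sample
size, and agreement threshold, might look like the following; it is meant
only to show the shape of the sampling and validation, not a real election
system.

# Random check-back after the polls close.  All names, sizes, response rates,
# and thresholds are hypothetical.
import random

# Hypothetical stored records: voter id -> recorded choice.
stored_votes = {f"voter{i:04d}": random.choice(["Smith", "Jones"])
                for i in range(1000)}

SAMPLE_SIZE = 50            # size of the random validation sample
AGREEMENT_THRESHOLD = 0.90  # fraction of responses that must confirm the record

# Immediately after the polls close, draw the sample to be mailed cards.
sample = random.sample(list(stored_votes), SAMPLE_SIZE)

responses = 0
confirmations = 0
for voter in sample:
    # In reality the voter mails back a card; here a small fraction decline
    # to answer and a small fraction answer inconsistently (goofs, malice).
    if random.random() < 0.05:          # declined to acknowledge
        continue
    responses += 1
    if random.random() > 0.03:          # card agrees with the stored record
        confirmations += 1

agreement = confirmations / responses if responses else 0.0
print(f"{responses} responses, {confirmations} confirm the stored record "
      f"({agreement:.0%} agreement)")
print("tally accepted" if agreement >= AGREEMENT_THRESHOLD
      else "tally flagged for investigation")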

Larry Polnicky, Honeywell Information Systems, McLean, Virginia.


Reliance on security

<Jong@HIS-BILLERICA-MULTICS.ARPA>
Wed, 26 Feb 86 12:19 EST
Kurt Hyde's reference to the Philippine elections and the security of
computerized vote-counting systems reminds me that the issue of computer
security is artificially narrow.  If I am a criminal, and you confront me
with an unbreakable computer security system, I will simply direct my
attention elsewhere.  Attacking strong points went out with World War I (or,
to maintain the underworld analogy, with Machine Gun Kelly).

The most elaborately password-protected system is easily cracked if the
passwords are transmitted over telephone lines, or if people leave their
passwords lying about on scraps of paper.  That may fall outside the venue
of computer science, but not outside the venue of reality.  In the case of
the Philippine elections, it didn't matter how well the vote-counting
computers were programmed; there were soldiers at the polling places
threatening to shoot voters.  Ballot boxes were opened to reveal twenty
thousand ballots marked in the same handwriting for Mr.  Marcos.  The
computer operators were being told what numbers to enter.

I guess there's not much you can do about risks outside your direct control.
My point is that we should not get too narrowly focussed in our concerns.

    [As noted many times in RISKS, any single weak link may represent a
     vulnerability.  In systems designed not to have single weak links,
     there are weak combinations.  Thus we must be concerned with ALL of
     the weak links.  PGN]


AI risks

<Nicholas.Spies@GANDALF.CS.CMU.EDU>
26 Feb 1986 23:19-EST
Today I attended an IEEE videoconference on "Applications of Artificial
Intelligence" with Drs. Tom Mitchell (CMU/Rutgers), Alex Pentland (SRI),
Peter Szolovits (MIT) and Harry Tennant (Texas Instruments). Aside from some
overdriven graphics that interfered with the audio, it was an excellent
intro to AI (for those concerned with the medium AND the message).

I asked the question, asked here and elsewhere by others, about the
potential legal responsibility of authors of AI software, the most obvious
example being medical diagnosis.  The answer from the panel was that most AI
work so far has been done under very controlled conditions, that responsibility
has never been tested in a court case, and that (possibly) the law applying to
publishers of reference books might apply also to AI systems (that is,
willful deceit would be punishable but typos and other innocent mistakes
would not make a publisher accountable). But according to one of the panel
members some AI researchers ARE in fact taking out insurance against
possible suits but (paraphrase) "the insurance companies look upon this
problem as something of a lark and the insurance premiums are low now"
although the same panel member said that (paraphrase) "this may become a
very important problem in the future".

I originally phrased the question to ask whether the implicit threat of
possible suits against artificial intelligence applications might have a
chilling effect on research and development of interesting applications
(that is, those involving human life and property), but since the question
was not asked in that form, it was not answered.

My own (legally uninformed) feeling is that AI by its very nature spreads
around the concept of "volition" such that the present legal system might
have a difficult task in assigning responsibility in a damage suit (and
these doubtless will come down the pike someday).


Data Encryption Standard

Dave Platt <Dave-Platt%LADC@CISL-SERVICE-MULTICS.ARPA>
Thu, 27 Feb 86 17:31 PST
There's an interesting article in the 2/24 issue of InformationWEEK
concerning the DES.  Apparently, DES was up for voting to become an
international encryption standard sanctified by ISO.  The NSA (National
Security Agency) was lobbying very strongly within ANSI (the United States'
representative within ISO) to have DES disapproved...  the apparent reason
being that wide standardization of DES, and its routine use, would make it
substantially more difficult for NSA to monitor overseas voice and data
communications.  IBM pushed very strongly within ANSI for a "yes" vote
within ISO (DES is already an ANSI standard, and its details have been
readily available to anyone for the past five years or more).  In the end,
IBM won and NSA lost; ANSI abstained from voting, which had the same net
effect as a "yes" vote.
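
For readers who have not seen DES in code, here is a minimal sketch of its
routine use: an 8-byte key (56 effective bits plus parity) applied to padded
8-byte blocks.  It assumes the third-party PyCryptodome library, and the key
and message are invented, so treat it purely as an illustration.

# A minimal sketch of routine DES use, assuming the third-party PyCryptodome
# library (pip install pycryptodome).  DES takes an 8-byte key (56 effective
# bits, 8 parity bits) and encrypts 8-byte blocks, so the plaintext is padded
# to a multiple of the block size.  The key and message below are made up.

from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"                       # hypothetical 8-byte key
plaintext = b"Overseas data traffic"    # hypothetical message

cipher = DES.new(key, DES.MODE_ECB)     # simplest mode; for illustration only
ciphertext = cipher.encrypt(pad(plaintext, DES.block_size))

# Anyone holding the same 8-byte key can recover the message.
recovered = unpad(DES.new(key, DES.MODE_ECB).decrypt(ciphertext), DES.block_size)
assert recovered == plaintext
print(ciphertext.hex())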

Have any studies been done concerning the risks of having, or not having,
a secure data-encryption scheme to guard the integrity of one's data?
