The RISKS Digest
Volume 3 Issue 43

Tuesday, 26th August 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Comment on PGN's comment on human error
Nancy Leveson
Keystroke Analysis for Authentication
Scott E. Preece
Eugene Miya
Risks of Mechanical Engineering [More on O-Rings]
Martin Harriman
Re: Words, words, words...
Mike McLaughlin
Comments on paper desired
Herb Lin
Info on RISKS (comp.risks)

Comment on PGN's comment on human error

Nancy Leveson <nancy@ICSD.UCI.EDU>
26 Aug 86 11:52:32 PDT (Tue)
Both "human error" and "computer error" are meaningless words.  At least in
scientific discussions, we should attempt to use words that can be defined.
There are not "deeper" human errors (or "shallower" ones?).  There are
design flaws or inadequacies, operational errors, hardware "random"
(wear-out?) failures, management errors, etc.  In hardware, there are also
production errors.  These may not be good categories, and I welcome
suggestions for better ones.  But if we can categorize, then it may help us
to understand the issues involved in risks (by locating general themes) and
to devise fixes for them.

But in doing this we must be very careful.  Accident causes are almost
always multifactorial.  TMI, for example, involved all of the above
categories of errors including several hardware failures, operator errors,
management errors, and design flaws.  Challenger also appears to follow the
same trend. [I mention these two because they are both accidents which
involved extensive investigation into the causes].  According to my friends
in System Safety Engineering, this is true for ALL major accidents.  As I
have mentioned earlier in this forum, liability plays a major role in
attempts to ascribe accidents to single causes (usually involving operator
errors because they cannot be sued for billions).  Also, the nature of the
mass media, such as newspapers, is to simplify.  This is one of the dangers
of just quoting newspaper articles about computer-related incidents.  When
one reads accident investigation reports by government agencies, the picture
is always much more complicated. Trying to simplify and ascribe accidents to
one cause will ALWAYS be misleading.  Worse, it leads us to think that by
eliminating one cause, we have then done everything necessary to eliminate
accidents (e.g. train the operators better, replace the operators by
computers, etc.).

But even though it is difficult to ascribe a "cause" to a single factor, it
is possible to describe the involvement of the computer in the incident, and
this is what we should be doing.  We also need to understand more fully the
"system" nature of accidents and apply "system" approaches to preventing
them.  If accidents are caused by the interaction of several types of errors
and failures in different parts of the system, then it seems reasonable that
attempts to prevent accidents will require investigation into and
understanding of these interactions along with attempts to eliminate
individual problems.  Elsewhere I have given examples of serious
computer-related accidents that have occurred in situations where the
software worked "correctly" (by all current definitions of software
correctness) but where the computer software was one of the major factors
("causes") in the accident.

Since I specialize in software safety, I interact with a large number of 
companies and industries (aerospace, defense, medicine, nuclear power, etc.)
concerned with this problem.  The most successful efforts I have seen have 
involved companies where the software group and engineers have worked together. 
Unfortunately, this is rare.  The majority of the people who come to my talks 
and classes and with whom I work are engineers.  The software personnel 
usually argue that: 

   (1) safety is a system problem (not a software problem) and thus is 
       the province of the system engineer; the software people are too 
       busy doing their own work developing software to participate in 
       system safety meetings and design reviews.

   (2) they already use good software engineering practices and therefore
       are doing everything necessary to make the software safe.

  I.e., "leave me alone and let me get back to my job of producing
  code, and don't waste my time by making me attend meetings with
  the system engineers.  They can do their job, and I'll do mine."

Unfortunately, almost all of the techniques that appear to be useful
in producing safer computer-controlled systems require the involvement
of the software designers and implementers.

      Nancy Leveson
      Information and Computer Science Dept.
      University of California, Irvine  


Keystroke Analysis for Authentication (RISKS-3.42)

"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Tue, 26 Aug 86 08:59:00 cdt
I would think this would only be safe if you had physical security
for the terminal — otherwise the determined break-in artist could
record the appropriate sequences and play them back as desired.
Of course, if you allow that kind of intrusion any kind of password
scheme is also hopeless.
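
To make the replay risk concrete, here is a minimal hypothetical sketch
(in Python), assuming the attacker has already captured (character,
delay) pairs from the terminal line -- the capture format and the send()
target are invented for illustration, not taken from any real system:

    # Replay sketch: feed captured keystrokes back to the host with
    # their original inter-key delays, so that a timing-based
    # authenticator sees the legitimate user's rhythm.
    import time

    recorded = [('n', 0.00), ('a', 0.11), ('n', 0.13), ('c', 0.09),
                ('y', 0.12)]  # (char, seconds since previous keystroke)

    def replay(keystrokes, send):
        """Resend captured keystrokes, preserving inter-key timing."""
        for char, delay in keystrokes:
            time.sleep(delay)
            send(char)

    replay(recorded, send=lambda c: print(c, end='', flush=True))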

scott preece, gould/csd - urbana     uucp:  ihnp4!uiucdcs!ccvaxa!preece


Re: Keystroke Analysis for Authentication (Re: RISKS-3.31)

Eugene Miya <eugene@AMES-NAS.ARPA>
Tue, 26 Aug 86 15:02:33 pdt
We just had a demonstration of the keystroke authentication system
by Dr. John David Garcia.  To clarify a couple of things: the shortest
realistic name should be 5 characters (Ed Ng); 10 characters is better.
The system uses a statistical distance function and is based on the old
idea of telegraph key signatures.  It is not just a matter of starting
to type.  A user must do between 70 and 80 trials to train the system to
recognize a signature; a lower figure suffices for touch typists.
Non-typists can be recognized after a sort of relaxation phenomenon as
they adapt to the system: users (believe it or not) go into "an alpha
state" [not my quote] and have to relax in order to log in consistently.
Other benefits and problems follow: any significant quantity of alcohol
or other drug affects timing; three drinks and you can't log in [good
and bad].  The mechanism for determining timing is not for
general-purpose typing, only for particular strings.  This also means
that you don't always log in on the first try.  Garcia is speaking to
various Government agencies and computer manufacturers about this
system, but it would not be appropriate to say whom.  Signatures tend to
be keyboard-specific, so new trials are required for different
keyboards.  Despite these drawbacks, the system appears quite nice.  It
does not require "microsecond timing"; 60 Hz wall-clock timing is
adequate.  There will probably be a demonstration of the system at the
next Compcon in San Francisco.  The demo we saw was running on a Compaq,
written in BASIC with a couple of assembly-language kernels.
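
Here is a minimal sketch (in Python) of the statistical-distance idea
described above, assuming the "signature" is just the per-position mean
and standard deviation of inter-key intervals for a fixed string; the
features, threshold, and training size are illustrative assumptions, not
Garcia's actual algorithm:

    # Keystroke-timing authentication sketch.  A user's signature is
    # the mean/stddev of each inter-key interval for a fixed string;
    # a login attempt is scored by its summed squared z-scores.
    import statistics

    def train_signature(trials):
        """trials: list of training trials, each a list of inter-key
        intervals (seconds) typed for the same fixed string."""
        return [(statistics.mean(col), statistics.stdev(col))
                for col in zip(*trials)]

    def distance(signature, attempt):
        """Summed squared z-scores of an attempt against the signature."""
        return sum(((t - mu) / sigma) ** 2
                   for (mu, sigma), t in zip(signature, attempt))

    def authenticate(signature, attempt, threshold=2.0):
        """Accept if the mean squared z-score is under a (made-up)
        threshold; a real system would tune this per user/keyboard."""
        return distance(signature, attempt) / len(signature) < threshold

    # Five training trials keep the example short; the text above
    # suggests 70-80 trials for reliable recognition.
    trials = [[0.11, 0.13, 0.09, 0.12],
              [0.10, 0.14, 0.08, 0.12],
              [0.12, 0.12, 0.10, 0.11],
              [0.11, 0.13, 0.09, 0.13],
              [0.10, 0.13, 0.09, 0.12]]
    sig = train_signature(trials)
    print(authenticate(sig, [0.11, 0.13, 0.09, 0.12]))  # True: right rhythm
    print(authenticate(sig, [0.30, 0.05, 0.25, 0.40]))  # False: wrong rhythm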

--eugene miya;   NASA Ames Research Center;   eugene@ames-aurora.ARPA
  {hplabs,hao,dual,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene


Risks of Mechanical Engineering [More on O-Rings]

Martin Harriman <MARTIN%SRUCAD%sc.intel.com@CSNET-RELAY.ARPA>
Fri, 22 Aug 86 10:40 PDT
O-rings are used in many applications where a reliable gas or liquid seal
is desired; they are generally the most reliable method for sealing a
joint that must be disassembled periodically.  There are lots of interesting
failure mechanisms (interesting if you are a mechanical engineer), but
I doubt any of them involve computers, except in the most peripheral fashion.

O-ring failures in automobiles are usually the result of hardening, either
due to chemical attack (usually methanol in gasohol), or heat.

The recent failures (NASA, Chernobyl, TVA) don't have a lot to do with
computers, per se--I claim each of these cases was due to poor management.
In NASA's case, we have the spectacle of NASA management ignoring engineering
concerns because of the pressure to launch.  So NASA will listen to the WCTU
(who convinced NASA to abandon their plans to include wine in Skylab's
rations), but won't listen to Morton Thiokol's engineers.

The Chernobyl accident was evidently the result of the local operators (and
management?) ignoring the procedures in the operating manual; the Soviets
claim that the local folks weren't supposed to have that much autonomy.  The
operators will take the rap--but the Soviet central management is
responsible for not doing a better job of supervising (and motivating?) the
local site people.

Right now, most of TVA's nuclear capacity is shut down; it seems that their
plants don't match their documentation, due to unrecorded (and perhaps
unauthorized) modifications during construction.  Since this problem
(at least at this magnitude) is unique to TVA, it seems that the fault
was management's attitude towards the importance of this documentation.
At least the NRC seems to think so, since a management reshuffle was one of
their conditions for relicensing the TVA reactors.

No one's mentioned the earlier famous O-ring/management failure (so I have
to, of course--):  the triple engine failure on an L-1011.  In this case,
the ever-so-reliable O-rings failed because they were omitted from a
maintenance kit--so they didn't get installed, so they didn't seal the
engine chip detectors, so all the oil ran out of the engine, so all
three engines failed en route (over the ocean).  One restarted (it
still had a quart or so left), and the aircraft made it back to Miami.
The NTSB decided the problem was inadequate training and supervision;
the procedures for changing the O-rings had been changed, but no one
told the mechanic, or checked his work (as they were required to do).

Hope this kills all further interest in O-rings--
  --Martin Harriman          Intel Santa Cruz <martin@srucad.sc.intel.com>


Re: Words, words, words...

Mike McLaughlin <mikemcl@nrl-csr>
Tue, 26 Aug 86 15:25:49 edt
      [Herb's message got appended after the end of RISKS-3.42, and
       was not included in the Contents of that issue.  Sorry.  PGN]

"Deliberate dumbness" is delightful.  Reminds me of a friend's term,
"malicious obedience," referring to carrying out dumb orders in their
infinite complexity, regardless of the consequences, and without applying a
grain of common sense.

Used car salesmen and realtors sometimes exhibit deliberate dumbness when 
they discourage the owner from telling them about defects in a property or
automobile.  

I do not know that "NO ONE in the scientific community believes that it is
possible to frustrate a deliberate Soviet attack on the U.S. population..."
If there is a PhD in a science who believes that, is that person de facto 
excluded from the scientific community?

I do not know what "frustrat(ing) a deliberate... attack" means.  If it means
deterring the attack by reducing the cost/benefit ratio to an unacceptable
level, I believe that is possible (but I am not in the scientific community
and never have been).

If it means saving a significant number of civilian lives from an inevitable
attack, I believe that is possible (but... ).

If it means saving EVERY civilian life, I do not believe that, any more than
I believe the statement that "NO ONE... etc."

SDI involves more than science, it affects billions of people, millions of
military and defense industry people, and thousands of decision makers on
both sides of the Curtain.  As such, it is not susceptible to the simple 
and elegant solutions of science - neither "It won't work" nor "It will work"
is adequate.  

I have five children.  I hope we, and the Russians, get it right, whatever
we decide to do.  

"Things are the way they are because if they were to be any different they
 wouldn't have come out like this." - Tevye (Sholom Aleichem)

    - Mike  <mikemcl@nrl-csr.arpa>


Comments on paper desired

<LIN@XX.LCS.MIT.EDU>
Tue, 26 Aug 1986 19:42 EDT
I am currently writing a paper entitled COMPUTER SOFTWARE AND STRATEGIC
DEFENSE, which should be available in preliminary draft form on Friday,
August 29.  Comments are solicited by September 15.  It is too big to
mail, so FTP is the solution.  If you want to see a copy (in exchange
for a promise to make comments on it), please drop me a note.  A brief
abstract follows:

  Computer software will be an integral part of any strategic defense
  system (defined here to include BMD, ASAT, and air defense).  Several
  issues are addressed: the reliability of SDI software, the problem of
  system architecture, the problems that very short defensive time lines
  may introduce, the risk of accidental nuclear war, and mechanisms for
  escalation control.

Thanks.  Herb
