The RISKS Digest
Volume 3 Issue 49

Thursday, 4th September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Human Error
Dave Parnas
Bill Anderson
Machine errors - another point of view
Bob Estell
Flight simulators
Eugene Miya
F-16 software
Henry Spencer
Terminal (!) lockup
Ken Steiglitz
Info on RISKS (comp.risks)

Human Error

Wed, 27 Aug 86 09:16:44 EDT
   When people use the phrases "human error" and "computer error"
they are simply trying to distinguish between situations in which
"the cause" of the accident was a human action that happened at about
the time of the accident and situations in which "the cause" was a
human action much earlier.  Obviously, we cannot
make a hard black/white distinction based on this continuum of
possibilities.  Only humans cause accidents because only humans
provide the problem statements that allow one to talk about
an accident or a failure.

Dave


Re: RISKS-3.46: Human Error

<WAnderson.wbst@Xerox.COM>
3 Sep 86 09:57 EDT
Herb Lin writes:

  "no user should approach a computer system as though its behavior is
  predictable and/or sensible under all circumstances ...."

Stanislaw Lem has written some very amusing and thought-provoking
stories about the relations between people and technology (including
automata) in the Tales of Pirx the Pilot (2 volumes, paperback).  Pirx
is just an ordinary space pilot who learns to approach the computer
systems he must use with a good deal of common sense.

Bill Anderson


Machine errors - another point of view

<"SEFB::ESTELL" <estell%sefb.decnet@nwc-143b.ARPA<>
3 Sep 86 12:19:00 PST
    I'm not satisfied with the notion that computers don't make errors; 
that they ONLY suffer mechanical failures that can be fixed. 

 "Deep in a computer's hardware are circuits called arbiters whose function is 
 to select exactly one out of a set of binary signals.  If one of the signals 
 can change from '0' to '1' while the selection is being made, the subsequent 
 behavior of the computer may be unpredictable.  It appears fundamentally 
 impossible to construct an arbiter that can reliably make its selection 
 within a bounded time interval."
    Peter J. Denning, in  AMERICAN SCIENTIST 73, no. 6 (Nov-Dec 1985)
    [also reprinted in RIACS TR  85.12]
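
Denning's point about unbounded settling time can be illustrated with a
toy model: treat the arbiter as a bistable element whose decision time
grows logarithmically as the two request edges arrive closer together,
so no finite deadline is always met.  The C sketch below is a Monte
Carlo illustration only, not a circuit simulation; the time constants
and the deadline are invented numbers, not taken from any real device.

/* Toy model of arbiter metastability.  When two request edges arrive
 * within an aperture window, the settling time of the bistable element
 * is modeled as tau * ln(window / |skew|), which grows without bound
 * as the skew approaches zero.  All constants are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define TAU_NS      1.0    /* assumed regeneration time constant (ns) */
#define WINDOW_NS   0.1    /* assumed aperture window (ns)            */
#define DEADLINE_NS 10.0   /* time budget the arbiter should meet     */

int main(void)
{
    long trials = 1000000, missed = 0;
    srand(12345);
    for (long i = 0; i < trials; i++) {
        /* skew between the two request edges, uniform in +/- WINDOW_NS */
        double skew = ((double)rand() / RAND_MAX - 0.5) * 2.0 * WINDOW_NS;
        if (skew == 0.0) skew = 1e-12;              /* avoid log(0)     */
        double settle = TAU_NS * log(WINDOW_NS / fabs(skew));
        if (settle > DEADLINE_NS) missed++;         /* still undecided  */
    }
    printf("missed the %.0f ns deadline in %ld of %ld trials\n",
           DEADLINE_NS, missed, trials);
    return 0;
}

However generous the deadline, some fraction of trials still misses it;
making DEADLINE_NS larger shrinks that fraction but never zeroes it.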

    I'm not a hardware guru, nor a scholar in theoretical computer science; 
 but my practical experience says that Peter is right.  I've gotten very close 
 to the internals of only two computers; both were IBM second generation 
 machines, the 7074, and the 1401.  I programmed both in assembly and machine 
 code; even wrote diagnostics for the 7074.  I can guarantee that those 
 machines, much simpler in design than today's multi-processors,  and also 
 much slower, were somewhat unpredictable.   We found some nasty situations 
 that required special code loops to mask/unmask interrupts, so that the 
 machine could run.
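
The protective loops Bob mentions survive today as the standard
mask/unmask pattern around a critical section.  A minimal sketch, with
disable_interrupts() and restore_interrupts() as hypothetical stand-ins
for the machine-specific instructions (here stubbed with a flag so the
example is self-contained and runnable):

/* Classic mask/unmask pattern: keep an interrupt handler from running
 * in the middle of a short read-modify-write sequence.  On real
 * hardware the two helpers would be privileged instructions; here they
 * just flip a flag so the sketch compiles and runs on its own. */
#include <stdio.h>

typedef int irq_state_t;
static volatile int irq_enabled = 1;

static irq_state_t disable_interrupts(void)
{
    irq_state_t prior = irq_enabled;   /* remember the previous state */
    irq_enabled = 0;
    return prior;
}

static void restore_interrupts(irq_state_t prior)
{
    irq_enabled = prior;               /* restore, don't blindly enable */
}

static volatile int io_buffer_count;   /* shared with a handler */

void enqueue_io_request(void)
{
    irq_state_t s = disable_interrupts();
    io_buffer_count++;                 /* critical section */
    restore_interrupts(s);
}

int main(void)
{
    enqueue_io_request();
    printf("io_buffer_count = %d\n", io_buffer_count);
    return 0;
}

Saving and restoring the prior state, rather than unconditionally
re-enabling, is what lets such sections nest safely.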

    A "machine" as seen by the applications programmer, is already several 
 layers [raw hardware, microcode, operating system kernel, run-time libraries,  
 compiler]; and each layer is perhaps nearly a million pieces [IC's, lines of  
 (micro)code] that may interact with nearly a million other pieces in other 
 layers.

    What I suspect here is a "problem of scale" akin to the well-known idea 
 that there are real limits to what one can build with a given material; e.g., 
 bones can't support animals much over 100 feet tall, because the internal 
 tensile and shear stresses will at some point destroy the molecular integrity 
 of the materials.  We can analyze the few hundred lines of code in the 
 kernel of an I/O driver, running on naked second generation hardware; I did 
 that.  But can we examine the millions of lines of code that comprise the 
 micro-instructions, the operating system, and the engineering applications 
 on a multi-processor system, and hope to understand ALL the possible 
 side-effects?  Color me skeptical.  Thus, because we put machines "in 
 control" of significant events in our lives [ATM's, FAA stuff, weapons 
 simulators, etc.]; and because EVEN AFTER we've made our best personal and 
 professional attempt to eliminate the errors, and even after the system has 
 run "a thousand test cases," it still has errors - not necessarily "hard 
 failures" that the C.E. can fix, but "transients" that are sensitive to 
 timing; for all these reasons, I'll argue that "machines make errors" in 
 much the same sense that people mispronounce words or make mistakes driving. 
 It's not that we don't know better; it's not that we've suffered some damage; 
 it's that we aren't perfect; neither are our computers.  And sometimes 
 there's "nothing wrong" that can be fixed.

    If we continue this discussion long enough, we'll approach the 
 metaphysical notion of "free will and determinism."  I don't think that's 
 necessary; I think our current systems have already exceeded our ability 
 to predict them 100.0%, even in theory.

Bob


Flight simulators

Eugene Miya <eugene@AMES-NAS.ARPA>
3 Sep 1986 1714-PDT (Wednesday)
    [I thought that by now these simulators were designed so that they could
     be driven by the same software that is used in the live aircraft <PGN>...

Don't forget that very few aircraft use "software."  Software is a radically
new concept to aircraft designers: F-16, F-18, X-29A, and so forth.

     A change in one place would be reflected by the same change in the
     other, allowing the application code to be changed without having to
     modify the simulator itself.  Maybe not...  PGN]

The problem comes when it's asked "What do you simulate?"  The view?  The
feeling?  Handling characteristics?->based on aerodynamics->computational
fluid dynamics->???  True, those games you can buy for an Apple II are
simulators, and we have a $100 million test facility (6 stories high) which
is a simulator.  But there are limits to simulation:
we don't know how to simulate the flight characteristics of
a helicopter, and we don't know how to automate a helicopter (anyone know
of a microprocessor which can withstand 800-1600 Gs?).  Anyway, Peter, you
are invited to talk to our simulator people if you want to answer this one,
as I don't have the time.  Danny Cohen has been here.

Another thought: as simulators become more "real" [as in some of ours]
they require increasing amounts of certification BEFORE you can use a
simulator [does this sound like a paradox in some ways? hope so].
I saw an experienced pilot told that he could not use a simulator
in some mode (motion base).

--eugene miya,   NASA Ames Research Center
  {hplabs,hao,dual,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene


F-16 software

<allegra!utzoo!henry@ucbvax.Berkeley.EDU>
Wed, 3 Sep 86 23:59:56 PDT
Phil Ngai writes:

> It sounds very funny that the software would let you drop a bomb on the wing
> while in inverted flight but is it really important to prevent this? ...

This issue actually is even more complex than it sounds, because it may be
*desirable* to permit this in certain circumstances.  The question is not
whether the plane is upside down at the time of bomb release, but which way
the bomb's net acceleration vector is pointing.  If the plane is in level
flight upside-down, the vector points into the wing, which is a no-no.  But
the same thing can happen with the plane upright but pulling hard into a
dive.  Not common, but possible.  On the other side of the coin, some
"toss-bombing" techniques *demand* bomb release in unusual attitudes,
because aircraft maneuvering is being used to throw the bomb into an
unorthodox trajectory.  Toss-bombing is common when it is desired to bomb
from a distance (e.g. well-defended targets) or when the aircraft should
be as far away from the explosion as possible (e.g. nuclear weapons).
Low-altitude flight in rough terrain at high speeds can also involve quite
violent maneuvering, possibly demanding bomb release in other than straight-
and-level conditions.
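
Henry's criterion translates directly into a release interlock: test
the store's net acceleration in the aircraft's body frame rather than
the roll angle.  A minimal sketch; the function names, the accelerometer
convention, and the 0.3 g margin are all invented for illustration, not
taken from any actual stores-management system:

/* Release interlock based on separation, not attitude.  The load
 * factor along the body vertical axis is +1.0 g in straight-and-level
 * upright flight and -1.0 g in sustained inverted level flight, but it
 * can be negative while upright (pushing over into a dive) and
 * positive while inverted (pulling through a toss maneuver).  A
 * released store falls clear only when the load factor is positive. */
#include <stdio.h>

#define MIN_SEPARATION_G 0.3   /* assumed safety margin, not a real spec */

static int release_permitted(double normal_load_factor_g)
{
    return normal_load_factor_g > MIN_SEPARATION_G;
}

int main(void)
{
    printf("upright, level   (+1.0 g): %d\n", release_permitted(+1.0));
    printf("inverted, level  (-1.0 g): %d\n", release_permitted(-1.0));
    printf("upright pushover (-0.5 g): %d\n", release_permitted(-0.5));
    printf("inverted pull-up (+2.0 g): %d\n", release_permitted(+2.0));
    return 0;
}

Note that the upside-down case Phil asked about is inhibited, while the
inverted toss-bombing case Henry describes is allowed; the roll angle
never enters the test.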

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Terminal (!) lockup

<princeton!ken@seismo.CSS.GOV>
Wed, 3 Sep 86 01:34:17 EDT
From the User's manual for the Concept AVT terminal, p. 3-52 (Human Designed
Systems, Inc., 3440 Market Street, Philadelphia, PA 19104):

  "Note: since the Latent Expression is invoked automatically, it should not
  contain any command that resets the terminal, either implicitly or
  explicitly. If any such command is included in the Latent Expression, the
  terminal will go into an endless loop the next time it is reset (implicitly
  or explicitly) or powered up. The only way to break out of this loop is to
  disassemble the terminal and physically reset Non-Volatile Memory. ... "

Having a sequence of keystrokes that will physically disable a terminal
seems to me a bad thing. For one thing it makes me awfully nervous when I'm
changing the Latent Expression. For another, it opens up the possibility of
having my terminal physically disabled by people or events outside my
control. (I don't know whether this effect can also be caused by a sequence
of bits sent to the terminal.)

I wonder: How common is this property of terminals (or other equipment)?
Does the phenomenon have a (polite) name?  Is it so hard to avoid that we
should be satisfied to live with it? Is it clear how to test for the
possibility? Does anybody have any experience with this phenomenon?
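
As to testing: one defense is a pre-save guard that refuses to commit a
Latent Expression containing any reset command to non-volatile memory.
A minimal sketch; the escape sequences below are placeholders, since the
Concept AVT's actual command set would have to come from its manual:

/* Pre-save guard for a terminal startup macro.  Because the macro is
 * re-run on every reset, a macro that itself triggers a reset loops
 * forever; scanning for reset commands before saving prevents that.
 * The sequences listed are placeholders, NOT the Concept AVT's. */
#include <stdio.h>
#include <string.h>

static const char *reset_sequences[] = {
    "\033c",      /* placeholder: hard reset */
    "\033[!p",    /* placeholder: soft reset */
};

/* Returns 1 if the macro is safe to save, 0 if it would loop. */
static int latent_expression_safe(const char *macro)
{
    size_t n = sizeof reset_sequences / sizeof reset_sequences[0];
    for (size_t i = 0; i < n; i++)
        if (strstr(macro, reset_sequences[i]) != NULL)
            return 0;
    return 1;
}

int main(void)
{
    const char *ok  = "\033[2J\033[H";   /* clear screen, home cursor */
    const char *bad = "\033[2J\033c";    /* ends by resetting         */
    printf("clear-screen macro safe? %d\n", latent_expression_safe(ok));
    printf("resetting macro safe?    %d\n", latent_expression_safe(bad));
    return 0;
}

A substring scan is of course only as good as its list of dangerous
sequences, and it cannot catch a reset reached indirectly (say, via a
command that executes another stored string); that is what makes Ken's
question hard in general.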

           [You have found the tip of the iceberg of Trojan horses that 
            can take over your terminal, processes, files, etc.  PGN]

Ken Steiglitz, Dept. of Computer Science, Princeton Univ., Princeton, NJ 08544
