The RISKS Digest
Volume 3 Issue 52

Tuesday, 9th September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Re: F-16 software
Nancy Leveson
Upside-down F-16's and "Human error"
Jon Jacky
F-16 software
Scott E. Preece
Do More Faults Mean More Faults?
Ken Dymond
Why components DON'T interact more often
Bob Estell
Computer almost created swing vote
Scott E. Preece
Computer Sabotage [MISSING LAST LINE FROM RISKS-3.51]
Rosanna Lee
Computer Sabotage of Encyclopedia Britannica
Scott E. Preece
Captain Midnight & military satellites
Werner Uhrig
Re: always mount a scratch monkey
Alexander Dupuy
Erroneous computer printout used in public debates
Chris Koenigsberg
Info on RISKS (comp.risks)

Re: F-16 software

Nancy Leveson <nancy@ICSD.UCI.EDU>
08 Sep 86 09:53:29 PDT (Mon)
Wayne Throop writes:  

   >it may well be that one doesn't *want* to prevent all possible 
   >modes of weapons discharge that may damage the aircraft ... some of
   >them may be useful techniques for use in extreme situations.

This raises some extremely important points that should be remembered
by those attempting to deal with risk.

   1) nothing can be made 100% safe under all circumstances.  In papers I
      have written I have pointed out that safety razors and safety matches
      are not completely safe; they are only *safer* than their alternatives.
      Drinking water is usually considered safe, but drinking too much water
      can cause kidney failure.  

   2) the techniques used to make things safer usually involve
      limiting functionality or design freedom and thus involve tradeoffs
      with other desirable characteristics of the product.

All we can do is attempt to provide "acceptable risk."  What is "acceptable" 
will depend upon moral, political, and practical issues such as how much
we are willing to "pay" for a particular level of safety.

I define "software safety" as involving procedures to ensure that the
software will execute within a system context without resulting in
unacceptable risk.  This implies that when building safety-critical systems, 
one of the first and most important design problems may be in identifying
the risks and determining what will be considered acceptable risk for that
system.  And just as important, our models and techniques are going to have
to consider the tradeoffs implicit in any attempt to enhance safety and
to allow estimation of the risk implicit in any design decisions.  
If we have such models, then we can use them for decision making, including 
the decision about whether acceptable risk can be achieved (and thus
whether the system can and should be built).  If it is determined that
acceptable risk can be achieved, then the models and techniques should 
provide help in making the necessary design decisions and tradeoffs.
The important point is that these decisions should be carefully considered
and not subject to the whim of one programmer who decides in an ad hoc
fashion whether or not to put in the necessary checks and interlocks.
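
As an aside on what "using a model for decision making" might look like in
practice, here is a deliberately tiny sketch in Python.  Nothing in it comes
from this posting: the hazards, probabilities, severities, and the threshold
are all invented, and a real analysis would be far richer.

    # Toy acceptable-risk comparison; all numbers are hypothetical.
    ACCEPTABLE_RISK = 0.05          # threshold set by policy, not by one programmer

    designs = {
        "with_interlocks": {
            # (hazard name, probability per mission, severity in arbitrary cost units)
            "hazards": [("inadvertent weapons release", 1e-6, 1000.0)],
            "lost_capability": 0.2,  # functionality given up to get the interlocks
        },
        "without_interlocks": {
            "hazards": [("inadvertent weapons release", 1e-3, 1000.0)],
            "lost_capability": 0.0,
        },
    }

    def expected_risk(design):
        """Sum of probability * severity over the identified hazards."""
        return sum(p * severity for _name, p, severity in design["hazards"])

    for name, design in designs.items():
        risk = expected_risk(design)
        verdict = "acceptable" if risk <= ACCEPTABLE_RISK else "unacceptable"
        print(f"{name}: expected risk {risk:.4f} ({verdict}); "
              f"capability lost {design['lost_capability']}")

Even a model this crude makes the tradeoff (risk reduced versus functionality
lost) explicit and reviewable, rather than leaving it to one programmer's whim.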

      Nancy Leveson
      Information & Computer Science
      University of California, Irvine


Upside-down F-16's and "Human error"

Jon Jacky <jon@uw-june.arpa>
Mon, 8 Sep 86 16:55:19 PDT
> (... earlier postings mentioned "fly-by-wire" F-16 computer would 
> attempt to raise landing gear while aircraft was sitting on runway,
> would attempt to drop bombs while flying inverted, and other such 
> maneuvers — in response to pilot's commands ...)

These are regarded as errors?  Maybe I'm missing something, but it sounds 
like the right solution is to remind the pilots not to attempt obviously
destructive maneuvers.  I detect a notion floating about that software 
should prevent any unreasonable behavior.  This way lies madness.  Do we have 
to include code to prevent the speed from exceeding 55 mph while taxiing down
an interstate highway?

My point is, if you take the approach that the computer is supposed to check
for and prevent any incorrect behavior, then you have saddled yourself with
the task of enumerating every possible thing the system should NOT do.  Such a 
list of prohibited behaviors is likely to be so long it will make the 
programming task quite intractable, not to mention that you will never get all
of them.

I suggest that the correct solution is the time-honored one: the operator must
be assumed to possess some level of competence; no attempt is made to 
protect against every conceivable error that might be committed by a flagrantly
incompetent or malicious operator.

Note that all non-computerized equipment is designed this way.  If I steer my
car into a freeway abutment, I am likely to get killed.  Is this a "design
flaw" or an "implementation bug?"  Obviously, it is neither.  People who are
drunk or suicidal are advised not to drive.

This relates to the ongoing discussion about "human error."  This much-abused
term used to refer to violations of commonly accepted standards of operator
performance — disobeying clear instructions, attempting to work when drunk, 
things like that.  Apparently it has come to refer to almost any behavior which,
in retrospect, turns out to have unfortunate consequences.  It is sometimes 
applied to situations for which the operator was never trained, and which the 
people who installed the system had not even anticipated.  

When abused in this way, the term "human error" can be a transparent attempt
to deflect blame from designers and management to those with the least control
over events.  Other times, however, it is evidence of genuine confusion over
who is responsible for what.  Right at the beginning, designers must draw a
clear line between what the automated system is supposed to do and what the
operators must do.  This may require facing the painful truth that there 
may be situations where, if the operator makes a mistake, a real disaster
may occur.  The choice is then one of ensuring the trustworthiness of the
operators, or finding an alternative approach to the problem that is more
robust.  

I suggest that if additional computer-based checking against operator errors
keeps getting added on after the system has been installed, it is evidence that
the role of the operator was not very clearly defined to begin with.

-Jonathan Jacky
University of Washington


F-16 software

"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Mon, 8 Sep 86 09:36:42 cdt
> From: amdcad!phil@decwrl.DEC.COM (Phil Ngai)

> It sounds very funny that the software would let you drop a bomb on the
> wing while in inverted flight but is it really important to prevent
> this?

Others have already pointed out that sometimes you may WANT to
release the bomb when inverted.  I would ask the more obvious
question: Would a mechanical bomb release keep you from releasing
the bomb when inverted?  I tend to doubt it.  While it's nice
to think that a software controlled plane should be smarter than
a mechanical plane, I don't think it's fair to cite as an error
in the control software that it isn't smarter than a mechanical
plane...

If, in fact, the mechanical release HAD protected against inverted
release, I would have expected that to be part of the specs for
the plane; I would also expect that the acceptance tests for the
software controlled plane would test all of the specs and that
the fault would have been caught in that case.
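
As a sketch of that last point: if "no release while inverted" were in the
specs, it would translate directly into an acceptance test.  The interface
below is invented for illustration and is not taken from any actual F-16
documentation.

    import unittest

    class StoresController:
        """Stand-in for the release logic under test (hypothetical)."""
        def __init__(self):
            self.roll_deg = 0.0             # 0 = wings level, 180 = inverted

        def release_permitted(self):
            # Assumed spec clause: inhibit release when near-inverted.
            return abs(self.roll_deg) < 90.0

    class InvertedReleaseSpec(unittest.TestCase):
        def test_release_inhibited_when_inverted(self):
            ctl = StoresController()
            ctl.roll_deg = 180.0
            self.assertFalse(ctl.release_permitted())

        def test_release_permitted_when_level(self):
            ctl = StoresController()
            self.assertTrue(ctl.release_permitted())

    if __name__ == "__main__":
        unittest.main()

If the clause never made it into the specs, no amount of acceptance testing
will flag its absence, which is the point above.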

scott preece
gould/csd - urbana
uucp:   ihnp4!uiucdcs!ccvaxa!preece
arpa:   preece@gswd-vms


Do More Faults Mean More Faults?

"DYMOND, KEN" <dymond@nbs-vms.ARPA>
8 Sep 86 09:18:00 EDT
In RISKS 3.50 Dave Benson comments in "Flight Simulator
Simulators Have Faults" that

    >We need to understand that the more faults found at
    >any stage of engineering software the less confidence one has in the
    >final product.  The more faults found, the higher the likelihood that
    >faults remain.  

This statement makes intuitive sense, but does anyone know of any data
to support this?  Is this true of any models of software failures?
Is this true of the products in any of the hard engineering fields — civil,
mechanical, naval, etc. — and do those fields have the confirming data?
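
For what it is worth, there is at least one family of models in which "faults
found" feeds an estimate of "faults remaining": capture-recapture estimation
(a relative of the older fault-seeding idea), sometimes applied to software
inspections.  A minimal sketch with invented numbers:

    def lincoln_petersen(found_by_a, found_by_b, found_by_both):
        """Estimate total faults from two independent reviewers' findings."""
        if found_by_both == 0:
            raise ValueError("no overlap between reviewers; estimate is unbounded")
        return (found_by_a * found_by_b) / found_by_both

    n_a, n_b, n_both = 25, 30, 10                # hypothetical inspection results
    total = lincoln_petersen(n_a, n_b, n_both)   # about 75 faults estimated in all
    found = n_a + n_b - n_both                   # 45 distinct faults actually found
    print(f"estimated remaining faults: {total - found:.0f}")

Under such a model, finding many faults (relative to the overlap between the
two reviewers) does push the estimate of remaining faults up, which at least
matches Benson's intuition.  Whether field data confirm it is exactly the
question asked above.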

Ken Dymond, NBS


Why components DON'T interact more often

"SEFB::ESTELL" <estell%sefb.decnet@nwc-143b.ARPA>
8 Sep 86 08:12:00 PST
I guess I neglected to emphasize a key word: "MAY."
My original posting said "...may interact..."
I am well aware that components SHOULD *NOT* interact.
I am also well aware that hardware designers labor to make sure that 
the actual interactions are 
 (1) very infrequent; and 
 (2) not terribly damaging when they inevitably do occur.  
Similarly, software designers [good ones!] labor to restrict the 
inevitable interactions; and limit the damage done when they occur.
Since each layer of a well-designed, carefully implemented system 
"filters" faithfully, the result is a system that will run for months 
[years?] without random failures.
But random failures do occur.  My ancient stories were replaced by more
current ones in later RISKS postings.
Until the theory and the practice of computing systems design EACH admit
that random error [including, but not limited to, interactions] is real,
we'll continue to build systems and applications less reliable than they
could be - or should be.

Bob


Computer almost created swing vote

"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Mon, 8 Sep 86 10:02:18 cdt
> From: bnfb@uw-june.arpa (Bjorn Freeman-Benson)
>  In my mind, this brings up an interesting question: should errors like
> this be reported (1) to the general public and (2) to the software
> engineering community?
----------
I don't think errors like this should be HIDDEN, but I also don't
think this demands issuing a press release.  The reason you do
a test is to determine whether your procedures are working --
it shouldn't be thought newsworthy that you find mistakes in
testing.  If, on dry run day, a manual election counting system
had mistakenly recorded the Democratic votes on the master tally
sheet on the Republican line and vice versa, the counter would
have been apprised of the error and instructed in proper
procedure, but I don't think they'd have issued a press release.

The problem in this particular case wasn't that the system
didn't work, but that the operators didn't understand the
operating procedures.  That's no big deal, but the election
judges should be warned what to look for on election night to
see that the control information is correctly set up (regression
testing).
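
The "warn them what to look for" step can itself be partly mechanized: run a
test deck whose correct totals are known in advance and compare them with what
the system reports.  A minimal sketch; the party labels and counts are
invented and nothing here describes the actual election system.

    # Deliberately unequal counts, so a swapped column is immediately visible.
    test_deck = ["DEM"] * 7 + ["REP"] * 3
    expected = {"DEM": 7, "REP": 3}

    def tally(ballots):
        counts = {}
        for choice in ballots:
            counts[choice] = counts.get(choice, 0) + 1
        return counts

    reported = tally(test_deck)    # in practice, this would come from the election system
    for party, count in expected.items():
        assert reported.get(party) == count, (
            f"control setup error: {party} reported as {reported.get(party)}, expected {count}")
    print("test deck tallied correctly; control information appears to be set up right")

Had the control information swapped the two parties, the unequal test deck
would fail this check on the spot.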

-- 
scott preece
gould/csd - urbana
uucp:   ihnp4!uiucdcs!ccvaxa!preece
arpa:   preece@gswd-vms


Computer Sabotage [LAST LINE MISSING FROM RISKS-3.51]

Rosanna Lee <rosanna@CSL.SRI.COM>
Sat 6 Sep 86 18:09:51-PDT
   [LAST LINE INADVERTENTLY TRUNCATED... COMPLETE LAST PARAGRAPH FOLLOWS.]
The publication first was alerted to a problem, sources said, when a worker
scanned the computer data base and discovered the clearly odd insertion of
the names of a company executive and a private consulting firm apparently
viewed by the former employee as partly responsible for the layoff decision.


Computer Sabotage of Encyclopedia Britannica

"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Mon, 8 Sep 86 10:12:54 cdt
>  [There are several problems in believing that this audit-trail approach
>   is fool-proof.  First of all, it relies on a password.  Masquerading is
>   therefore a concern.  The second is probably more important — any
>   self-respecting system programmer or cracker is probably able to alter
>   the audit trail.  It is dangerous to assume that the only disgruntled
>   employees are those who are NOT computer sophisticates... PGN]
----------
Clearly the audit trail is not enough to protect against insider
damage by systems programmers, but the article says nothing about
whether there are other tools designed to deal with such users --
just that audit trail methods were sufficient in this case.  Let's
not jump to conclusions.
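
One mitigation for the point in PGN's note, that a systems programmer can
rewrite the audit trail, is to chain the log entries together with a hash so
that altering any entry invalidates everything after it; the chain head (or a
copy of the log) then has to live somewhere the insider cannot also rewrite.
The article does not say anything of the sort was used here; the sketch below
is purely illustrative, using a modern hash function.

    import hashlib

    def append_entry(log, user, action):
        """Append an entry whose hash covers the previous entry's hash."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = f"{prev_hash}|{user}|{action}"
        log.append({"user": user, "action": action,
                    "hash": hashlib.sha256(record.encode()).hexdigest()})

    def verify(log):
        """Recompute the chain; any tampering breaks every later link."""
        prev_hash = "0" * 64
        for entry in log:
            record = f"{prev_hash}|{entry['user']}|{entry['action']}"
            if hashlib.sha256(record.encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, "editor1", "edit article: Byzantium")
    append_entry(log, "editor2", "edit article: Ottoman Empire")
    log[0]["action"] = "edit article: (tampered)"   # simulate after-the-fact doctoring
    print(verify(log))                              # False: the chain no longer checks out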

Curiously, embezzlement, fraud, doctored documents, and disgruntled
employee sabotage all pre-dated computers.  It appears to be the
case (I haven't heard the details yet) that in this case the fact
that the system was computerized allowed them to identify the damage
quickly and repair it.  If the files were on paper and the
saboteur had simply altered and replaced random pages of random
articles in the files, the damage would have been worse and much
harder to trace and fix.  The system doesn't have to be foolproof to
be an improvement over manual systems.

-- 
scott preece
gould/csd - urbana
uucp:   ihnp4!uiucdcs!ccvaxa!preece
arpa:   preece@gswd-vms


Captain Midnight & military satellites (Mother Jones, October 86)

Werner Uhrig <CMP.WERNER@R20.UTEXAS.EDU>
Mon 8 Sep 86 00:01:30-CDT
[ pointer to article in print:  Mother Jones, Oct '86 Cover Story on Satellite
  Communications Security (or lack thereof) ]

(p.26)  CAPTAIN MIDNIGHT, HBO, AND WORLD WAR III - by Donald Goldberg
    John "Captain Mignight" MacDougall has been caught but the flaws he
exposed in the U.S. military and commercial ssatellite communications system
are still with us and could lead to far scarier things than a $12.95 monthly
cable charge.

(p.49)  HOME JAMMING: A DO-IT-YOURSELF GUIDE - by Donald Goldberg
    What cable companies and the Pentagon don't want you to know.

PS: Donald Goldberg is described as "senior reporter in Washington, D.C., for
the syndicated Jack Anderson column."

[ this is not an endorsement of the article, just a pointer.
  you be the judge of the contents. ]


Re: always mount a scratch monkey

Alexander Dupuy <dupuy%amsterdam@columbia.edu>
Mon, 8 Sep 86 03:00:30 EDT
Here's another version of this story, from the ever reliable usenet net.rumor.
The existence of the alternate versions puts both pretty much in the realm of
apocrypha.  It's still a good story though...

From: moroney@jon.DEC (Mike Moroney)
Newsgroups: net.rumor
Subject: Re: Computer war stories
Date: 19 Mar 86 18:19:22 GMT
Organization: Digital Equipment Corporation


Yet another old classic war story.
--

It seems that there was a certain university that was doing experiments in
behavior modification in response to brain stimulation in primates.  They had
this monkey with a number of electrodes embedded in its brain that were hooked
up to a PDP-11.  They had several programs that would stimulate different parts
of the monkey's brain, and they had spent over a year training the monkey to
respond to certain stimuli.  Well, eventually the PDP developed problems, and
field service was called in.  Due to some miscommunication, the field service
representative was not informed of the delicacy of this particular setup, and
the people running the experiment were not informed that field service was
coming to fix the machine.  The FS representative then booted up a diagnostic
system I/O exerciser.  After several minutes of gyrations, the monkey expired,
its brain fried.

The moral, of course, is "Always mount a scratch monkey."


Erroneous computer printout used in public debates

Chris Koenigsberg <ckk@andrew.cmu.edu>
Mon, 8 Sep 86 10:23:56 edt
[Brief background on this story: In Pennsylvania, all sales of wine and hard
liquor are made at State Stores, run by the Liquor Control Board. The
Governor has been trying to abolish the board and let private industry take
over the liquor business, but LCB employee unions have been fighting against
him. The LCB held a 20% discount sale on the Saturday before Labor Day, and
the unions were outraged because the LCB's mission is actually to control
alcohol, not promote it, and the sale seemed to encourage consumption on the
holiday weekend. The debate over how much was sold, how much profit or loss
was made, and the effects on holiday weekend drunk driving were hot news all
week. Now this report of a computer error comes after public debate already
occurred, in which people relied on the incorrect sales figures.]
          ++++++++++++++++++++++++++++++++++++++++++++++++++
Article from the Pittsburgh Post-Gazette, Saturday, September 6, 1986,
written by Gary Rotstein (copyright 1986 PG Publishing Co.)

"20% discount sale brought LCB $18.9 million, not $8.5 million"

Admitting a $10 million flub in a computer printout, the Pennsylvania Liquor
Control Board reported yesterday that it sold a one-day record high of $18.9
million of alcohol last Saturday. LCB Chairman Daniel Pennick told reporters
Wednesday that the second 20 percent discount sale in the agency's history
had grossed only about $8.5 million. That would have been $5 million more
than was sold on the comparable date a year ago, but less than the $11
million one-day high recorded during a similar sale in June.

LCB spokesman Robert Ford said the agency's comptroller's office reviewed the
figures yesterday morning and realized an important digit - the numeral 1
indicating $10 million - had been unable to fit on the initial computer
printout tallying the sales figure. Once a correction was made and final
purchases from Saturday were tacked on, the LCB learned it had sold $18.9
million in goods.

Ford noted that the comptroller's office personnel responsible for the
mistake are employees of the governor's budget office rather than the LCB.

"The fact that someone made an error doesn't bother us," Ford said. "We're
just happy about the sales figures."

Whether the higher sales figure is good or bad news for the LCB, however, is in
dispute between the agency and its longtime critic, Governor Thornburgh.
Thornburgh's budget office has estimated, based on an analysis of the LCB's
receipts and costs last year, that when the price of a bottle is reduced by
20 percent the agency loses an average of $1.13 on each item sold. That
scenario means it's worse for the LCB's financial picture to have $18.9
million in discount sales than $8.5 million, administration spokesman Michael
Moyle pointed out.

Ford maintained that the sale only cut into the size of the LCB's profits and
did not actually amount to a net loss.

"We didn't lose a penny on any bottle sold," he said.
          ++++++++++++++++++++++++++++++++++++++++++++++++++
[notice that Ford emphasizes how the mistake was made by the governor's
budget office (the same office responsible for the disputed estimate that a
20% sale would lose $1.13 on each item), and not by his LCB employees - his
agency is under fire and is close to being dissolved by the legislature. But
it's made very clear that a human error led to the missing digit, rather than
the error being "blamed on the computer"]
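
One plausible mechanism for losing exactly the leading "1" is a fixed-width
report field that silently keeps only the low-order digits.  The article does
not describe the LCB's software, so the following is a generic illustration,
not a diagnosis of what actually happened.

    FIELD_WIDTH = 9                        # room for "8,500,000" but not "18,900,000"

    def to_report_field(amount_dollars):
        text = f"{amount_dollars:,}"
        # A careless formatter might keep only the rightmost FIELD_WIDTH characters.
        return text[-FIELD_WIDTH:] if len(text) > FIELD_WIDTH else text

    print(to_report_field(8_500_000))      # "8,500,000"  -- fits
    print(to_report_field(18_900_000))     # "8,900,000"  -- the leading digit vanishes

The risk is that nothing in the output signals the truncation; the number just
looks smaller than it should.
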
Christopher Koenigsberg
Center for Design of Educational Computing
Carnegie-Mellon University
(412)268-8526
ckk@andrew.cmu.edu
