The RISKS Digest
Volume 5 Issue 19

Wednesday, 29th July 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Automating Air Travel
Dan Graifer
Responsibilities of the pilots and the traffic controllers
Nathan Meyers
Flippin' statistics
Joe Morris
Nuclear power safety and intelligent control
Rich Kulawiec
Single-pipe failures
Kenneth Ng
Hacking and Criminal Offenses
SEG
Passwords and telephone numbers
Jonathan Thornburg
Separation of duties and "2-man control"
Patrick D. Farrell
Info on RISKS (comp.risks)

Automating Air Travel

Dan Graifer <sdcsvax!net1.UCSD.EDU!graifer@ucbvax.Berkeley.EDU>
Mon, 27 Jul 87 16:49:18 PDT
In The Economist, July 25, 1987, pp. 73-74, there is an interesting look at
the coming automation of commercial airports.  Deregulation has led to
explosive growth in demand, with consequences that all of us who fly can
discuss at length.  The chart in the article shows the current ~10^12 revenue
passenger miles/year almost doubling by the year 2000 (~6% growth per year).
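
As a quick sanity check (a sketch in Python; reading the horizon as the 13
years from 1987 to 2000 is my assumption), simple compounding does match the
chart's claim:

    # Compound ~6%/year growth from ~10^12 revenue passenger miles in 1987.
    rpm_1987 = 1e12
    growth = 0.06
    rpm_2000 = rpm_1987 * (1 + growth) ** 13
    print(rpm_2000 / rpm_1987)   # ~2.13, consistent with "almost doubling"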

The article discusses the introduction of automation to cope with increasing
airport and aircraft size.  After a brief look at automated ground traffic
control, including arrival gate assignment,  it concentrates on evolving
mechanisms for baggage sorting and handling.  There is also a brief mention
of the problems small airlines face in sharing ticket counter facilities
when each uses a different, incompatible reservation system.  Considerable
emphasis is placed on a new International Civil Aviation Organisation rule
effective next year that will require carriers to match every piece of 
baggage loaded with a passenger verified to be on the aircraft.
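
The matching requirement itself is simple to state in code.  A minimal
sketch of positive passenger/bag matching follows; the tag and passenger
identifiers are made up for illustration, not drawn from any real system:

    # Offload any bag whose passenger is not verified to be on board.
    boarded = {"PAX123", "PAX456"}                    # verified on board
    loaded = {"BAG1": "PAX123", "BAG2": "PAX789"}     # bag tag -> passenger
    offload = [bag for bag, pax in loaded.items() if pax not in boarded]
    print(offload)   # ['BAG2'] must come off before departure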

In addition to a brief survey of the technologies involved, there is a lot 
of discussion of the potential problems:  simple system breakdown,
incompatible tag systems at different airports, accidental mischief (child with
toy magnet scrambling magnetically encoded tickets), and malicious mischief
(as simple as disgruntled employees entering false gate data, thereby running
aircraft around in circles).

My favorite "fun" prospect mentioned is baggage tags where the human and
machine readable destinations differ...  This one reminds me of the old check
kiting scheme based on checks with mismatched bank name/numbers.

The article is very witty, and I just hope the people designing these systems
are experienced.  I think we can find examples of just about every kind of
risk ever mentioned in this forum lurking here.

Oh yeah, the solution to the incompatible reservation system problem is 
CUTE...Common Use Terminal Equipment.
                                                 Dan Graifer


Responsibilities of the pilots and the traffic controllers

Nathan Meyers <hpda!hp-pcd!hpcvra!hpcvrp!nathanm@ucbvax.Berkeley.EDU>
Mon, 27 Jul 87 17:29:47 pdt
In RISKS 5:15, Andy Freeman asked, in regard to commercial airline operations:

  > What are the pilot's responsibilities and liabilities?  What about the
  > controller's?

As one of many private pilots reading RISKS, I'll try to answer.  In so doing, 
I will attempt to simplify the system enough to a) be understood by non-pilots,
b) not insult too many pilots, and c) not reveal too much of my own ignorance.

The statement of the pilot's responsibilities is, like the U.S. constitution, 
very succinct and, also like the constitution, full of implications:

The pilot-in-command is responsible for all aspects of the flight.

Among other things, this means:

  1) The pilot can refuse Air Traffic Control (ATC) instructions.

  2) Whatever the co-pilot, navigator, flight attendants, airline
     executives, controllers, and everyone else have to say, it is the
     pilot who has the final word on whether a flight is go or no-go.

  3) No matter who's being paid to de-ice the wings, fill the gas tank,
     inflate the tires, clean the windshield, etc., it is the pilot's
     responsibility to determine that the aircraft is ready to fly.

What are ATC's responsibilities?  To put it very succinctly (probably too
succinctly):

  Advisory, sequencing, and separation.

This, of course, means a lot of different jobs.  These jobs are handled
by four different parts of the ATC system which usually involve
different personnel and, somehow, intermesh, communicate, and
coordinate.  These four pieces (not found at all airports) are:

   1) Ground control (take care of traffic on the ground up to the runways),

   2) Tower (control use of the runways, specifically through takeoff and
      landing clearances),

   3) Approach and departure control (control air traffic in the
      vicinity of the airport), and

   4) Enroute traffic control (control air traffic everywhere else).

Ground control and tower operations take place in a tower with a view,
approach/departure control takes place in a darkened room full of radar
scopes, and enroute traffic control takes place in a darkened room at a
regional center (for example, a center in Seattle watches over most of
the Pacific Northwest).

During the course of a typical flight under Instrument Flight Rules
(IFR), ATC tells the pilot when to taxi, when to take off, when to
ascend and descend (and to what altitudes), when to turn, and when to
land — all based on a flight plan filed by the pilot.  The pilot's job
is to worry about which way the aircraft is pointed and what's going on
in its vicinity — ATC's job is to allow lots of aircraft to use the
same airspace (clouds and all), airports, runways, and terminals.  That,
in a nutshell, is my attempt to answer Andy's question.
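
To make the handoff sequence concrete, here is a minimal sketch; the chain
below is a simplification of the four-part breakdown above, not an official
FAA description:

    # A typical IFR flight is handed from facility to facility in order.
    handoffs = [
        ("ground control", "taxi clearance to the runway"),
        ("tower",          "takeoff clearance"),
        ("departure",      "climb-out and initial routing"),
        ("center",         "enroute altitudes and turns"),
        ("approach",       "descent and sequencing near the airport"),
        ("tower",          "landing clearance"),
        ("ground control", "taxi to the gate"),
    ]
    for facility, duty in handoffs:
        print(f"{facility}: {duty}")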

So what's gone wrong with the system?

There's been no shortage of finger-pointing.  Some blame general
aviation (little guys like myself), some blame the PATCO strike, some
blame drugs, some blame deregulation, some blame pilot training.

The answer, of course, isn't all that simple — it involves economics,
politics, technology, the Reagan military buildup, personnel, unions, etc.
So far, nobody in any position of power has come up with any solution more
creative than to increase regulation of the already heavily-regulated airspace.

My own pessimistic prognosis is that the air traffic system will get a lot
worse before it gets better.

Nathan Meyers   (hplabs!hp-pcd!nathanm)

    [We have noted Henry Petroski's evidence that we tend to learn little from
    our engineering successes, but that the real advances come from trying to
    understand our failures.  There is MUCH to be learned from the ATC
    situation, especially when confronted with the realization that many of
    the existing problems will continue to exist even when the long awaited
    new computer systems arrive.  PGN]


Flippin' statistics

Joe Morris <jcmorris@mitre.arpa>
Tue, 28 Jul 87 11:16:13 EDT
In RISKS 5:18, Mark Day writes:

> The fact that nuclear power plants have been run in a generally safe way in
> the past tells me very little about the future danger from them.  Predicting
> the future like that is similar to the statistical fallacy that if a fair
> coin has come up "heads" 500 times in a row, it is somehow "more likely" to
> come up "heads" the next time that I flip it.

Sorry, but that's true only if you assume that the coin is balanced to have
equal probability of falling heads or tails.  A balanced coin would have a
probability of 2^-500 of 500 consecutive heads; while not impossible, a run
of that length would lead a reasonable person to conclude that the coin was
biased in favor of heads.  I suggest that the nuclear power industry is
still demonstrating a good track record and will probably continue to do so.
This doesn't say that there won't be a disaster, but if you want to dismiss
the operating history, you should explain why it represents a statistical
anomaly (coverup, progressive deterioration, etc.).  After 500 consecutive
heads, it isn't unreasonable to expect a 501st head as well.
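
To make the inference concrete, here is a minimal sketch of the implied
Bayesian update; the 0.9 heads-bias and the even prior odds are illustrative
assumptions of mine, not figures from either posting:

    # Posterior probability that the coin is fair after n straight heads.
    from fractions import Fraction

    def posterior_fair(n, p_biased=Fraction(9, 10), prior_fair=Fraction(1, 2)):
        like_fair = Fraction(1, 2) ** n     # P(n heads | fair)
        like_biased = p_biased ** n         # P(n heads | biased)
        num = like_fair * prior_fair
        return num / (num + like_biased * (1 - prior_fair))

    print(float(posterior_fair(10)))   # ~0.0028: ten heads already favor bias
    print(float(posterior_fair(500)))  # ~2e-128: 500 heads all but settle it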

On the other hand, it is perfectly reasonable to consider whether the
potential damage from an accident outweighs the small probability of its
occurrence, or whether it is outweighed by the reduction in consumption
of non-renewable fuels.  Unfortunately, the analysis may be technical, but
the decision will be political.
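
A one-line sketch of that comparison, with numbers that are purely
hypothetical placeholders rather than estimates for any real plant:

    # Expected annual accident cost vs. annual benefit (made-up figures).
    p_accident, damage, annual_benefit = 1e-5, 1e11, 1e8
    print(p_accident * damage < annual_benefit)   # True for these assumptions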


Nuclear power safety and intelligent control

Whitewater Wombat <rsk@j.cc.purdue.edu>
Tue, 28 Jul 87 21:48:05 EST
In RISKS-5.14, Alex Bangs raises two concerns relating to nuclear power plant
safety (construction corruption, intelligent control systems) and requests
references on nuclear power plant control.

Let us assume, for the sake of argument, that a given plant may be constructed 
without "corruption"; I interpret that to mean that the plant is built exactly 
to specification.  Such a procedure does not guarantee that the specification 
itself is correct; nor does it guarantee that a properly constructed plant
built to a correct specification will continue to adhere to that specification 
after it is put into operation.  I do not mean to de-emphasize the problems
that result from shoddy construction practices, but merely to point out that
these are not the only problems; perhaps they are not even the major ones.

With regard to the use of intelligent control systems, a similar plethora of
problems exist.  Although [hopefully!] intelligent control systems known as
"human beings" are now used to operate plants, the multitude of sensor
inputs (some of which may be erroneous) and the large number of possible
failure modes pose problems that are exceedingly difficult to solve even
when copious time is available--which it sometimes isn't.

Consider also what repercussions have already occurred for the nuclear
industry as the result of [possible] human error during crisis situations;
now imagine the outcry if the public at large discovered that a nuclear
accident occurred in part because of a fault in an expert system---a
"computer error".

Two excellent references on the subject are:

      IEEE Spectrum, Vol. 16, No. 11, Nov. 1979, and Vol. 21, No. 4, Apr. 1984

Both issues cover TMI extensively, especially the first.  They are eminently
readable to anyone with a technical background; I believe the first issue won
a number of awards for its coverage.

Finally, a brief anecdote.  On March 28, 1979, I was in St. Louis for an
interview with Union Electric Co. for a possible position at their
then-under-construction nuclear facility at Callaway, Missouri.  The
interview went fine, but when I came out of their office in the afternoon and
turned on the car radio, I began to hear news bulletins about a place in
Pennsylvania called "Three Mile Island".   I figured it was a sign. :-)

Rich Kulawiec, rsk@j.cc.purdue.edu, j.cc.purdue.edu!rsk


Single-pipe failures

Kenneth Ng <KEN%ORION.BITNET@wiscvm.wisc.edu>
Wed, 29 Jul 87 06:26:11 EST
>  [Concerning SINGLE-FAULT-TOLERANT SYSTEMS, I noted recently that most
>  of the nuclear power plants are designed to remain safe as long as only
>  a single pipe ruptures.  Two pipes are too many.  Earthquakes could make
>  things quite difficult.  PGN]

First, I'd like to point out that I am not an expert in nuclear energy or
nuclear power.  I am not entirely familiar with the workings of the insides
of nuclear power plants.  Even if I were, the fact that there is no
standardization among these plants makes quantitative statements difficult.

Judging from "The President's Commission on The Accident at Three Mile
Island", there are three methods of cooling the reactor:

1: Main feedwater pumps.
2: Auxiliary feedwater pumps.
3: High-pressure injection pumps.

Unfortunately among this foot-thick stack of reports I cannot find a blasted
diagram of the reactor cooling system.  I presume these systems operate on
separate lines.  But for all I know they could feed into a common line
(besides the reactor vessel itself).  Breaks in the upper half of the
cooling system are covered, however.  Parts of the contingency plans indicate
that it is possible to pump water that comes out of a break back into the
reactor via sump pumps.  Therefore, from what I can gather, two
double-guillotine breaks that both occur in the lower leg of the reactor
cooling system may be fatal, but two double-guillotine breaks where only one
occurs in the lower leg might not be.  Note: given the definition of a
double-guillotine break, isn't this four breaks in two pipes?

Ref: double-guillotine break: a pipe break where a section of pipe is
completely removed from the line (needing 2 breaks) and the outgoing water
does not impede the flow of water coming out the other pipe.  I read this
somewhere in WASH-1400.
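
The independence question is the crux.  Here is a minimal sketch of why it
matters, with per-demand failure probabilities that are hypothetical
placeholders, not data for any actual plant:

    # If the three cooling paths are truly separate, all must fail together.
    p_fail = {"main feedwater": 1e-2, "aux feedwater": 1e-2,
              "HP injection": 1e-3}
    p_all = 1.0
    for p in p_fail.values():
        p_all *= p
    print(p_all)   # 1e-7 under independence

    # If they share a common line, one guillotine break can defeat all three
    # at once, and the figure collapses toward the single-pipe rupture
    # probability -- which is why the missing diagram matters.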


Hacking and Criminal Offenses (Re: RISKS 5.18) (David Sherman)

<ptsfa!pbhya!seg@Sun.COM>
Tue, 28 Jul 87 14:10:46 PDT
In reference to the above subject of computer crime, I have a book at home
that told of a case involving two competing companies and a stolen program
for estimating bids on their type of work.  The only reason the larger
company was convicted for the theft of the program from its smaller
competitor was that they printed out a hard copy of the program.  U.S. law
at that time said that what was contained in electronic memory wasn't
"real?" or something to that effect.  There certainly is a need for new or
more comprehensive laws to cope with new technology.  I also understand
banks are required to make hard copies of their accounting programs'
determinations at appropriate points for auditing purposes.  Again, what is
in memory is not "real".
                                SEG, Pac Bell, Rohnert Park, Calif.


Passwords and telephone numbers

<Jonathan_Thornburg%UBC.MAILNET@MIT-Multics.ARPA>
Tue, 28 Jul 87 20:51:42 PDT
This is an old pet peeve/idea/complaint of mine that some recent postings
on passwords being broken have finally prompted me to set down on iron oxide:

Claim:  Any frequent computer user, including the most non-technical,
        can/should be able to remember a 10 to 15 character password
        consisting of a "random" sequence of digits.

To demonstrate this, consider the following 2 questions:
(A)     What's your office phone number?
(B)     What's your home phone number?
I suspect almost everyone can answer both questions correctly.  The
two together give 14 fairly patternless digits, or 8 if you don't
count exchanges.
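
For scale, here is a back-of-the-envelope comparison (a sketch; note that
real phone numbers are not uniformly random, so the true search space is
somewhat smaller than this suggests):

    # Search-space sizes: 14 random digits vs. 8 random lowercase letters.
    import math
    print(math.log2(10 ** 14))   # ~46.5 bits
    print(math.log2(26 ** 8))    # ~37.6 bits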

Now compare the frequency with which you dial/speak/write your *own*
phone number with the frequency with which you type your password.
(This is why I only make my claim for any "frequent" user).  At least
around here, the number labels on office phones are often missing, so
the lack of visual feedback for passwords shouldn't be a problem.

      - Jonathan Thornburg           userbkis@ubcmtsg.bitnet
        thornburg%ubc@um.cc.umich.edu


Separation of duties and "2-man control"

"Patrick D. Farrell" <Farrell@DOCKMASTER.ARPA>
Tue, 28 Jul 87 15:32 EDT
Although Ted Lee's interpretation of "2-man control" does describe a form of
implicit separation of responsibility, I suspect that Dr. Ware was referring
to what has become rather common in NATO defense compusec procurements, the
explicit "2-man (or more) rule".  This a mechanism whereby two (or sometimes
more) mutually cooperating, authorized administrators are required to
perform some action that affects the state (particularly the security state)
of the system.

The cooperating, authorized administrators may also be required to belong to
separate operational groups, etc., depending upon how the system has been
screwed together.  All in all, it's not a bad idea and I'm still surprised
that the US compusec community has not yet picked up on it.
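
For the record, the rule is easy to state mechanically.  A minimal sketch
follows, in which the administrator names, groups, and authorization table
are all hypothetical rather than any real system's interface:

    # Allow a security-state change only with two distinct, authorized
    # approvers, optionally required to come from separate groups.
    AUTHORIZED = {"alice": "ops", "bob": "security", "carol": "ops"}

    def two_man_approve(approvers, distinct_groups=True):
        people = set(approvers)
        if len(people) < 2 or not people <= AUTHORIZED.keys():
            return False
        if distinct_groups and len({AUTHORIZED[p] for p in people}) < 2:
            return False
        return True

    print(two_man_approve(["alice", "alice"]))   # False: one person twice
    print(two_man_approve(["alice", "carol"]))   # False: both from "ops"
    print(two_man_approve(["alice", "bob"]))     # True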

Pat Farrell (Control Data Corporation)
