The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 10 Issue 04

Monday 4 June 1990


o Swiss Supreme Court sets limit to duration of data storage by police
Werner Uhrig
o A U.K. View of Early C3 Systems
C.F. Reynolds
o Glass cockpits (A320, etc.)
Henry Spencer
o Article on A320 in Aeronautique, April 1990
Jon Livesey
o Boeing 747-400 Autothrottle problems
Martyn Thomas
o Equipment failure or human failure?
Julian Gomez
o Re: Steve Jackson Games
Jim Harkins
o Routing tables for private switches
Simson L. Garfinkel
o Risks of Caller Identification
David Lesher
o More sendmail woes
o Info on RISKS (comp.risks)

Swiss Supreme Court sets limit to duration of data storage by police

Werner Uhrig <>
Sun, 3 Jun 1990 7:05:39 CDT
  [ the following is extracted/translated from the Swiss press agency ELSA ]

   Lausanne, June 2 (sda)  Police may not keep unimportant data on any person,
captured on a fiche or in a file, for longer than five years.  Disregarding
this rule is to be considered contrary to the constitutional right of personal
freedom and to a person's right to respect for privacy.  This basic ruling
was published on Saturday by the Swiss Supreme Court.

[ Re: unimportant;  don't ask me.  No further explanation was given.]
[ Re: fiche or file; the German words used were "Fiche oder Dossier".
      I think that they could better have used an expression like "any form
      of information storage" - and I assume that using more than one word
      indicates the intended meaning of "any", rather than limiting the
      applicability of the law - but naming just these two forms of data storage
      may, no doubt, lead some lawyer to bicker over this "detail".]

A U.K. View of Early C3 Systems

C.F. Reynolds
3-JUN-1990 00:07:04
I have read Les Earnest's contributions on U.S. experience with interest.

In 1970 I found myself employed on Linesman, a massive UK military command and
control system. A year later I moved to a nearby university and ended up
assessing Master's degree dissertations carried out by "students" working on
the project.

At the time the project was in deep trouble - in that questions were being
asked in Parliament because it was clear that things were going wrong. The
basic problem was that there was no comprehension of sunk costs. The cheapest,
common-sense option of scrapping the tons of equipment that had become obsolete
before it was ever used was politically unacceptable to the Ministry of
Defence. To do this they would have to admit their incompetent management of
the project. Perhaps they could escape by throwing even more money at it to try
to buy their way out of the trouble they had got themselves into.

The civilian contractors were paid on a cost-plus basis - and the more money
the MOD threw at the project the better. In fact they could make even more by
cutting back on secretarial staff and getting expensive professional staff to
collate the tons of documents the project "required".

Another problem was excessive secrecy - and the "need to know" attitude. For
instance, at one time the application design team wanted to know the typical
number of Russian Bear bombers flying off-shore at any one time. That's too
secret to tell you, came the reply - until the figure was given on a BBC TV
program, having been cleared for public release by a different arm of the Royal
Air Force! In fact asking questions was taboo. As a mere minion you did what
you were told and no more. I was considered a maverick because I pointed out
that the throughput of a particular device would be an order of magnitude less
than the design document required. (The design had been based on the maximum
hardware timings theoretically possible - and implicitly assumed, for example,
that a human could respond to a twin-light signal by pressing a heavy-duty key
in less time than it would take for the filament in the bulb to cool
sufficiently to be visually detectable.) On another occasion I pointed out that
programming and systems staff were repeating earlier errors because the results
of assessment trials were considered too sensitive to tell them where they had
gone wrong. (For this impudence I was denied an annual increment!)

The effect of the quality of staff was interesting. The salaries paid were
above the odds - after all, the MOD were desperate and the civilian contractors
got a percentage.  People being interviewed were not given a fair picture for
fear of putting them off. (No one ever thought to tell me that I was being
interviewed to work on a military project!)

Good staff (and those with a professional conscience) quickly realised that they
had made a mistake and moved on. This left a residue of unimaginative plodders
who couldn't possibly get a better-paid job elsewhere, and mercenaries who would
do anything for money. This was particularly obvious in their project work at
the nearby university, where it was clear that working on the project was
teaching them bad, obsolete techniques and a belief that staff didn't need to
know anything outside their immediate work environment.

But this isn't the end of the affair. When one military project ends the team
of out-of-date and inward-looking programmers and systems analysts is not
disbanded. Other military projects are out to tender, and the civilian
contractor has a ready-made team of staff who have security clearance and know
how to work to military specifications ....

Of course my experience is now nearly twenty years out of date - but I still
meet people who say that there has been little change.

Glass cockpits (A320, etc.)

Henry Spencer
Sun, 3 Jun 90 23:58:50 EDT
The April 30 issue of Aviation Week has a couple of interesting small
items about computerized airliners and "glass cockpits".

The first is a news item:  Airbus Industrie is considering alterations
to the A320's flight software to help guard against "overconfidence
syndrome", which they consider a significant factor in the Habsheim
and Bangalore crashes.  One possible change is upgrading the automatic
throttle management of the "alpha floor" protection mode to guard
against descents with inadequate thrust.  "Alpha floor" already runs
the throttles up automatically in emergencies like encounters with
serious windshear or maneuvers to avoid collisions.  Says Bernard
Ziegler (Airbus VP Engineering):  "The alpha floor was never designed
to save a crew that had been improperly managing a normal approach,
but we now are thinking of modifying it to serve as one more safeguard.
Such a modification will not make it a 100% safeguard, but it could
offer an additional safety margin."

The second is a background piece on the poor state of research in glass-cockpit
human factors (for example, NASA Ames, a major center of work on such things,
has no simulator representative of modern cockpits).  Hart A. Langer (United
Airlines VP flight operations) says that flight-management-system CRTs act as
"cockpit vacuum cleaners -- they suck eyeballs and fingertips right into them.
I have given check rides on these aircraft and seen four eyeballs and ten
fingertips caught in two [displays] at the same time.  This is bad enough at
cruise altitude, but it can be lethal in the low-altitude terminal area..."

               Henry Spencer at U of Toronto Zoology uunet!attcan!utzoo!henry

Article on A320 in Aeronautique, April 1990

Jon Livesey <livesey@Eng.Sun.COM>
Sat, 2 Jun 90 17:11:56 PDT
In Risks-10.02 Pete Mellor inadvertently gives us a good example of the risks
of muddy thinking.

Writing of a translated article, he recommends it to us on several grounds,
one of which is

> b) the fact that it presents a French (and therefore not negatively biased?)
>    view,

The two problems with this are, first, Airbus is not exclusively a French
aeroplane.  It is a joint venture between several European countries.

Secondly, there has been quite a lot of negative comment about Airbus from
French sources, mainly from pilots' unions.

The risk here is that of giving one source extra credence on specious grounds.

Boeing 747-400 Autothrottle problems

Martyn Thomas <mct@praxis.UUCP>
Wed, 11 Apr 90 17:26:41 BST
This week's Flight International reports:

"British Airways (BA) Boeing 747-400s have experienced uncommanded inflight
closure of all four throttles on six separate flights between 6 October 1989
and 19 February 1990, 'several times' on one of those flights alone, according
to formal reports. Several other airlines have suffered the same incident,
Northwest reporting it first. ...

In most of the events the power levers retarded rapidly to idle, but
sometimes the reduction was partial, followed by automatic reset. ...

All incidents have occurred in the climb or cruise, and an IAS of more than
280 knots is believed to be fundamental to the event. ...

Evidence indicates that the event is caused by a spurious signal to the
full authority digital engine control from the stall-management module. The
'single-word' spurious command says that the undercarriage [gear] is down or
the flaps are at setting 1, so if the IAS exceeds the maximum speed for these
configurations, the autothrottles close to reduce IAS to limiting speed, then
reset to maintain it.

The modification [to correct the problem - issued on February 22nd] assumes
that the fault was in the processing logic of the appropriate universal logic
card (a printed-circuit software unit [sic]) and adopts a standard technique
for reducing digital oversensitivity: there is now a delay (a few microseconds)
built into the software by requiring it to receive an 'eight-word' command
before acting. Power spikes or other spurious commands should not produce a
[response]. ...

So far the latest modification has proved effective. Early corrections, though,
had assumed the reaction was associated only with main-gear selection, so
although software changes had reduced the incident rate, spurious flap signals
continued to set engines to idle. BA has not reported any further events since
[the latest modification].

[end of quote]
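
The "eight-word" requirement described in the quote is, in effect, a
confirmation (debounce) filter on a discrete input: act only after several
consecutive consistent samples. A rough Python sketch of that general
technique follows; the names, the class, and the sample handling are
illustrative assumptions only, and bear no relation to the actual 747-400
stall-management or engine-control software.

    # Sketch of an N-sample confirmation filter, analogous to the
    # 'eight-word' requirement described above (illustrative only).

    REQUIRED_CONSECUTIVE = 8

    class ConfirmationFilter:
        """Act on a discrete flag only after N consecutive identical samples."""

        def __init__(self, required=REQUIRED_CONSECUTIVE):
            self.required = required
            self.count = 0
            self.last = None

        def update(self, flag):
            """Feed one sample (e.g. a 'gear down' word); True once confirmed."""
            if flag == self.last:
                self.count += 1
            else:
                self.last = flag
                self.count = 1
            return bool(flag) and self.count >= self.required

    # A single spurious 'gear down' word never reaches the threshold, so the
    # retard logic would not be triggered by it:
    f = ConfirmationFilter()
    for word in [False, True, False] + [True] * 8:
        if f.update(word):
            print("confirmed - reduce IAS to limiting speed")

The price of such a filter is a short detection delay for genuine
configuration changes, which is presumably why only a few samples are
required.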

This looks like a useful warning of our inability to get complex systems
right - luckily this fault only occurred at high IAS and was sub-critical for
flight safety.  I
hope that appropriate lessons are learnt by both developers and certification
authorities, and that they start to question their ability to assure the safety
of such systems.

Notice that the partial fix, reported in the last paragraph, implies that the
Flight Data Recorder either was not used to diagnose the fault, or contained
insufficient information to point the finger at both the gear and flap signals.
This seems ominous for future accident cause analysis. The apparent action of
fixing symptoms until no further errors are reported, rather than analysing the
cause and then looking for all possible classes of the same error, seems
ill-judged, too.

I wonder what re-certification was undertaken following the modification.

Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
Tel:    +44-225-444700.   Email:   ...!uunet!mcvax!ukc!praxis!mct

Equipment failure or human failure? (RISKS-10.01)

Sun, 3 Jun 90 00:12:55 EDT
Henry Spencer wonders if the failure of the pilot who had problems
with the `glass cockpit' to come forth means that something more
was going on than meets the eye.

It's possible, of course, but if the UK Civil Aviation Authority
is anything like the FAA, I'm not surprised the pilot is keeping
quiet.  It's rather amazing how nasty the FAA can be if it
decides to go after someone.

Equipment failure or human failure? (RISKS-10.01)

Julian Gomez
Sun, 03 Jun 90 22:57:01 -0700
>Some little while ago, Risks published the report of a flight crew landing an
>airliner in Britain after a very difficult time with wind readings of 100+
>[...]
>problem are hamstrung, doubts are cast on the accuracy of the report, and if it
>*is* factual, that aircraft is still in service and potentially a lethal hazard
>to crews and passengers.

They aren't the only ones. Here is an excerpt from "Tales from the TRACON: a
controller's view of emergencies" in June 1990 "IFR" 6(6).  Belvoir
Publications, 75 Holly Hill Lane, Greenwich, CT 06836 (all typos mine):

    It's frequently the same for controllers. We're sometimes
    reluctant to take any action that invites scrutiny of our
    routine work. An experience I had last year shows what can
    happen. During one shift on a typical hectic IFR day in the
    TRACON, my frequency died three times.  Technicians reviewed
    the problem, only to find the "bad" frequency had started
    working again. After three such outages (with no repairs) I
    filed an unsatisfactory condition report (UCR) which goes
    straight to Washington and requires an answer to the complaint
    in writing.  Not surprisingly, facility managers aren't too
    fond of UCRs.

    When my complaint was checked out, my tapes were "dumped"
    (supposedly to discover whether the outage really happened)
    on *every* position and frequency I had worked that day, not
    just the "bad" frequency. The tapes showed that I had really
    had a problem.  But in the process, management had my
    supervisor "counsel" me for bad phraseology and improper land
    line usage. The experience hardly encourages one to draw
    attention to malfunctions or emergencies.

A few days ago I posted to rec.aviation a white paper by an ARTCC controller
detailing how certain kinds of targets could be eliminated from the
controller's scope by the computer, roughly because it didn't like the targets.
Washington's response to his UCR was that they didn't see a problem.

Dr. Julian Gomez  RIACS - Research Institute for Advanced Computer Science

Re: Steve Jackson Games (Webb, RISKS-10.01)

Jim Harkins <jharkins@sagpd1.UUCP>
2 Jun 90 22:01:48 GMT
>The chance of GURPS Cyberpunk being used as a manual for computer crime is very
>slight indeed.

I don't see where this is relevant.  It's perfectly legal to buy books on how
to make illegal stuff like explosives; check out the warnings section(s) in
college chemistry books, not to mention stuff like The Poor Man's James Bond.
There are some very good cookbooks on committing murder (see the mystery section
of Waldenbooks).  Should we have thrown Agatha Christie in the slammer?  So what
would make it illegal to give even a step-by-step list of instructions for
breaking into a computer?

It seems to me that the act of committing a crime is illegal, but the knowledge
of how to commit that crime isn't.  I think we can all figure out on our own
how to stick a gun into a cashier's face to get money, but we haven't done
anything wrong until we actually do it.  Nor have I done anything wrong by
offering a suggestion on improving your monthly income :-) Of course, if I
suspect that you did use my suggestion then by not finking on you I am breaking
the law.

Routing tables for private switches

Simson L. Garfinkel <>
Sat, 2 Jun 90 11:47:00 EDT
It is a general problem that NYNEX has not been automatically distributing
routing updates to people who own their own switches.  This is one of the
things that NYNEX does to discourage companies from owning their own switches
and encourage them to use Centrex.

If you think that this is bad public policy, call the public utility
commission's complaint number.

Risks of Caller Identification

David Lesher <>
Sun, 3 Jun 90 18:16:14 EDT
The law enforcement community in FL is upset over Bell South's plan to offer
Caller ID without subscriber blocking. While the telco has offered blocking to
law enforcement officers, and a few select others, they are still worried:

    1. The utility will need to have a list of all those
    eligible, including undercover officers.

    2. The fact that CID is blocked is a sure pointer that
    the caller is a cop - sure death to an informant.

I soon thought of a third. The blocking is done in the terminating CO.  What
happens if the expected block fails, for whatever reason? There is no feedback
to the caller that such has happened.

Given the level of violence within the general population around here, the CID
block seems to have made a classic RISKS mistake. A system designed for less
critical use has been thrust beyond its design parameters into a life-dependent
role. This strikes me as no different than using unproven software for
designing bridges or buildings.
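
The failure mode David raises is the classic fail-open trap: if the blocking
check is lost or fails silently, the default behaviour exposes the number, and
the caller gets no indication. A tiny, purely hypothetical Python sketch of
the distinction between fail-open and fail-safe handling follows; the lookup
function and its failure modes are my own assumptions and do not describe
Bell South's actual switch software.

    # Hypothetical fail-open vs fail-safe handling of a Caller ID block.
    # lookup_blocking() stands in for whatever database or line-class check
    # the switch would perform; assume it may fail (raise) for any reason.

    def lookup_blocking(number, blocked):
        """Return True if the calling number should be withheld."""
        return number in blocked

    def cid_fail_open(number, blocked):
        try:
            if lookup_blocking(number, blocked):
                return None            # number withheld
        except Exception:
            pass                       # lookup failed: the block is silently lost
        return number                  # delivered - exactly the risk noted above

    def cid_fail_safe(number, blocked):
        try:
            if lookup_blocking(number, blocked):
                return None
        except Exception:
            return None                # when in doubt, withhold the number
        return number

A life-critical use argues for the fail-safe variant, plus some feedback to
the caller when the block cannot be confirmed.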

More sendmail woes (duplicates of RISKS-10.02 for a few of you)

Peter G. Neumann <>
Mon, 4 Jun 1990 9:32:45 PDT
HERCULES crashed at 6AM this morning, but the completion of the mailing of
RISKS-10.02 had been hung in the queueueueueueue manager since Saturday for
just one of the six RISKS sublists --so a few of you got a second copy of
RISKS-10.02 when the system automatically rebooted.  This is of course Standard
Sendmail Problem Number 1.  (Unfortunately the log tapes do not explain the
crash!)  This problem reinforces the need for a more robust algorithm that
periodically deletes non-NACKed sendings from the queue during the first
pass over the long list, rather than waiting (hopefully) for the end of the
list.  Private hacks of sendmail exist to do that, but each privately hacked
version of sendmail seems to introduce its own set of new problems or else does
not provide the services of other hacked versions.  At any rate, mailing to
very large lists remains a tricky business.
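
The "delete as you go" idea can be sketched very simply: checkpoint the
recipient list after each successful delivery, so that a crash and restart
re-attempts only the recipients still pending instead of duplicating mail to
everyone already served. The following toy Python sketch is not sendmail
code; the queue-file layout and the deliver() stub are assumptions made
purely for illustration, and a real mailer must also worry about locking,
partial writes, and SMTP itself.

    # Toy sketch of incremental queue pruning for a large mailing list.

    import os

    def deliver(recipient, message):
        """Stand-in for an SMTP delivery attempt; returns True on success."""
        return True

    def run_queue(queue_file, message):
        with open(queue_file) as f:
            pending = [line.strip() for line in f if line.strip()]

        for rcpt in list(pending):
            if deliver(rcpt, message):
                pending.remove(rcpt)
                tmp = queue_file + ".tmp"
                with open(tmp, "w") as f:
                    f.write("\n".join(pending) + "\n")
                os.replace(tmp, queue_file)   # atomic checkpoint of the queue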

Ironically, I had just prepared a bunch of slides for the talk on my COMPASS
paper at the end of the month, The Risk of the Year: Distributed Control, which
(among other things) relates the 15 Jan 90 AT&T problem to the 1980 ARPANET
collapse, and throws in a section on sendmail woes for good measure.
