The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 9 Issue 74

Monday 12 March 1990

Contents

o Airbus Crash: Reports from the Indian Press
N. Balaji
o Indian Airlines A320 in the German press
Udo Voges
o The C3 legacy, Part 4: A gaggle of L-systems
Les Earnest
o The risks of keeping old versions -- Daigle book
Graeme Hirst/David Sherman
o PSU Hackers thwarted
Angela Marie Thomas
o Anonymous Word Processing: `Z'
Jon von Zelowitz
o Re: Now Prodigy Can Read You
Eric Roskos
o Re: Traffic System Failure
Peter Ahrens
o Tracking criminals and the DRUG police-action
J. Eric Townsend
o Human-Centered Automation
Robert Dorsett
o Drive-by-wire cars
Craig Leres
o Info on RISKS (comp.risks)

Airbus Crash: Reports from the Indian Press

N. Balaji <balaji@redwood.USC.EDU>
Fri, 9 Mar 90 20:02:42 PST
Below are excerpts from three reports related to the Feb. 14th Airbus crash
which appeared in the weekly edition of The Statesman, an English language
newspaper published from Calcutta and New Delhi.

   Madras [India], Feb 17.  -- An Indian Airlines A-320 aircraft, on a
   scheduled flight from here to Bangalore this morning, developed a
   snag in mid-air and was brought back safely.

   When the aircraft was airborne for 20 minutes and was halfway to
   Bangalore, there was a drop in cabin pressure and passengers
   complained of suffocation.  It was flown back and grounded and the
   passengers were transferred to a Boeing-737 and flown to Bangalore.

   The day after the A-320 aircraft crashed in Bangalore [on Feb.  14],
   killing 90 passengers, one of the engines of another A-320 Airbus
   failed as the aircraft was getting ready for take-off from Begumpet
   airport in Hyderabad for Madras.  The flight was aborted.

   Alarmed by the frequent snags, Indian Airlines pilots are wary of
   flying the A-320 Airbus, the most sophisticated civilian aircraft
   anywhere in the world.

   The fly-by-wire system of computer-driven controls used in the
   A-320 is common to Mirage-2000 fighter planes also.  The Indian Air
   Force has built air-conditioned hangars in Gwalior for its Mirage
   fleet but Indian Airlines has not even provided ordinary hangars
   for its A-320 aircraft.  Two of its grounded A-320s were parked in
   the open for two months, one in Bombay and the other in Delhi,
   exposed to heat, dust and moisture.         ...

[On Feb.  18, the Civil Aviation Ministry grounded all 14 Airbus aircraft in
the Indian Airlines fleet pending an official inquiry.  Delivery of 12 more
A-320 aircraft on order has also been suspended.  -nb]

   New Delhi, Feb.  15.  -- Preliminary investigations into the Airbus
   crash at Bangalore yesterday are reported to be focusing on the
   sudden drop in height of the aircraft as it was on its final
   approach for landing at the runway of the airport, reports PTI
   [Press Trust of India].

   According to Civil Aviation Ministry sources here today, there was
   no distress signal from the pilot to the control tower and the
   aircraft appeared set for a smooth landing before the sudden drop
   under clear weather conditions.

   The sources said it was possible that the pilot either misjudged
   the height at which he was flying or he was misled by the
   instruments on board...

    Bangalore, Feb.  14.  --       ...

   [The Civil Aviation Minister Arif Mohammad Khan] said the pilot
   [Captain S.  S.  Gopujkar, who was in command of the flight] was
   one of the most experienced in Indian Airlines and even
   manufacturers of the Airbus had placed him in the "excellent pilot"
   category after he underwent training.  "He was flying the aircraft
   and it is a mystery how the accident happened", Mr Khan said.

[See, in contrast, the report translated from Badische Neueste Nachrichten 22
Feb 90 by Udo Voges in Risks 9.70. - nb]

The following is from India Abroad, March 2, 1990, published from New York.

   New Delhi -- Even as the Airbus Industrie launched a campaign to
   undermine [sic] the expertise of Indian Airlines' Airbus pilots in
   the French media, the Indian government continued to persist with
   its apprehensions about the aircraft.

   A technical committee armed with comprehensive terms of reference
   began a probe into the whole Airbus affair last week.  ...

   The five-member expert committee announced by Civil Aviation
   Minister Arif Mohammed Khan would go into Indian Airlines' state
   of preparedness to safely operate the Airbus.  ...

   The issue of the Airbus' safety has become a national one with
   important newspapers making editorial comments.  The Indian press
   has urged the Indian government to thoroughly examine the safety
   aspects of the plane before allowing [it] to fly once again.   ...


Re: A320 (Risks 9.70)

Udo Voges <voges@idtuva.uucp>
Fri, 9 Mar 90 08:37:11 +0100
Pilot to blame for Airbus-Crash
(translated from Badische Neueste Nachrichten, 9 March 1990)

Paris (dpa). The crash of an Airbus A320 in India in mid-February, in which 90
people were killed, was due to carelessness on the part of the pilot. This was
the finding after analysis of the tape from the cockpit. It was announced in
Paris that the functioning of the airplane was not in question.  According to
the announcement, the analysis of the tape showed that the pilot of the crashed
Indian Airlines airplane was training the copilot during the landing, and in
the course of this did not pay sufficient attention to keeping the airplane at
its required speed. This was announced by well-informed French sources.  The
Indian Government has received a preliminary report recommending that the
Indian A320s be returned to service.


The C3 legacy, Part 4: A gaggle of L-systems

Les Earnest <LES@SAIL.Stanford.EDU>
05 Mar 90 2025 PST
Martin Minow contributes some SAGE anecdotes in RISKS 9.68, including the
following.
> My friend also mentioned that the graphics system could be used to display
> pictures of young women that were somewhat unrelated to national defense
> -- unless one takes a very long view -- with the light pen being used
> to select articles of clothing that were considered inappropriate in the
> mind of the viewer.  (Predating the "look and feel" of MacPlaymate by
> almost 30 years.)  Perhaps Les could expand on this; paying special
> consideration to the risks involved in this type of programming.

While light pens did exist in that period, SAGE actually used light
_guns_, complete with pistol grip and trigger, in keeping with military
traditions.  Interceptors were assigned to bomber targets on the large
displays by "shooting" them in a manner similar to photoelectric arcade
games of that era.

Regrettably, I never witnessed the precursor to MacPlaymate, which
probably appeared after my involvement.  While I never saw anything bare
on the SAGE displays, a colleague (Ed Fredkin) did stir up some trouble by
displaying a large Bear (a Soviet bomber of that era) as a vector drawing
that flew across the screen.  Unfortunately, he neglected to deal with X, Y
register overflow properly, so it eventually overflew its address space.
The resulting collision with the edge of the world produced some bizarre
imagery, as distorted pieces of the plane came drifting back across the
screen.
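The failure mode is easy to reproduce: a coordinate held in a fixed-width signed register silently wraps to the opposite extreme once an increment pushes it past the maximum value. A minimal sketch (the 16-bit width and step sizes are illustrative; the actual SAGE register widths are not given above):

```python
# Simulate a fixed-width signed register, as a vector-display X or Y
# coordinate might have been stored.
def wrap16(x):
    """Reduce x to 16-bit two's-complement range, as overflow would."""
    x &= 0xFFFF
    return x - 0x10000 if x >= 0x8000 else x

x = 32000
for _ in range(5):
    x = wrap16(x + 500)   # keep flying right, past the register limit
    print(x)
# Once the sum passes 32767 the coordinate jumps to a large negative
# value: the endpoint reappears at the far edge of the address space.
```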

(Continuing from RISKS 9.67)

A horde of command-control development projects was initiated by the Air
Force in the early 1960s.  Most were given names and each was assigned a
unique three digit code followed by "L."  Naturally, they came to be
called "L-systems."  A Program Manager (usually a Colonel) was put
in charge of each one to ensure that financial expenditure goals were met.
Those who consistently spent exactly the amounts that had been planned
were rewarded with larger sums in succeeding budgets.  Monthly management
reviews almost never touched on technical issues and never discussed
operational performance -- it was made clear that the objective was to
spend all available funds by the end of the fiscal year and that nobody
cared much about technical or functional accomplishments.

In 1960, after earlier switching from MIT Lincoln Lab to Mitre Corp., my
group was assigned to provide technical advice to a Colonel M., who was in
charge of System 438L.  This system was intended to automate the
collection and dissemination of military intelligence information.  Unlike
most command-control systems of that era, it did not have a descriptive
name that anyone used -- the intelligence folks preferred cryptic
designations, so the various subsystems being developed under this program
were generally called just "438L."

I had recently done a Masters thesis at MIT in the field of artificial
intelligence and hoped to find applications in this new endeavor.  I soon
learned that the three kinds of intelligence have very little in common
(i.e. human, artificial, and military).

IBM was the system contractor for 438L and was already at work on an
intelligence database system for the Strategic Air Command Headquarters
near Omaha.  They were using an IBM 7090 computer with about 30 tape
drives to store a massive database.  It turned out to be a dismal failure
because of a foreseeable variant of the GIGO problem, as discussed below.

The IBM 438L group had also developed specifications for a smaller system
that was to be developed for other sites.  Colonel M. asked us to review
the computer Request for Proposals that they had prepared.  He said that
he planned to buy the computer sole-source rather than putting it out for
bids on the grounds that there was "only one suitable computer available."
When I read it, there was no need to guess which computer he had in mind
-- the RFP was essentially a description of the IBM 1410, a byte-serial,
variable word length machine of that era.

When Colonel M. sought my concurrence on the sole-source procurement, I
demurred, saying that there were at least a half-dozen computers that
could do that job.  I offered to prepare a report on the principal
alternatives, including an approximate ranking of their relative
performance on the database task.  He appeared vexed, but accepted my
offer.

My group subsequently reviewed alternative computers and concluded that
the best choice, taking into account performance and price, was the Bendix
G-20.  I reported this informally to Colonel M. and said that we would
write it up, but he said not to bother.  He indicated that he was very
disappointed in this development, saying that it was not reasonable to
expect his contractor (IBM) to work with a machine made by another
company.  I argued that a system contractor should be prepared to work
with whatever is the best equipment for the job, but Col. M seemed
unconvinced.

This led to a stalemate; Colonel M. said that he was "studying" the
question of how to proceed, but nothing further happened for about a year.
Finally, just before I moved to another project, I mentioned that the IBM
1410 appeared to be capable of doing the specified task, even though it
was not the best choice.  Col. M. apparently concluded that I would not
make trouble if he proceeded with his plan.  I later learned that he
initiated a sole-source procurement from IBM just two hours after that
conversation.

In the meantime, the development project at SAC Headquarters was falling
progressively further behind schedule.  We talked over this problem in my
group and one fellow who had done some IBM 709 programming remarked that
he thought he could put together some machine language macros rather
quickly that would do the job.  True to his word, this hacker got a query
system going in one day!  I foolishly bragged about this to the manager of
the IBM group a short time later.  Two weeks after that I discovered that
he had recruited my hotshot programmer and immediately shipped him to
Omaha.  I learned to be more circumspect in my remarks thereafter.

The IBM 438L group did eventually deliver an operable database system to
SAC, but it turned out to be useless because of GIGO phenomena (garbage in,
garbage out).  Actually, it was slightly more complicated than that.
Let's call it GIGOLO -- Garbage In, Gobbledygook Obliterated, Late Output.

The basic problem was that in order to build a structured database, the
input data had to be checked and errors corrected.  In this batch
environment, the tasks of data entry, error checking, correction, and file
updating took several days, which meant that the operational database was
always several days out of date.
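The arithmetic of that failure is worth making explicit. A back-of-the-envelope sketch (the individual step durations are hypothetical, chosen only to match the "several days" figure above):

```python
# If total pipeline latency exceeds the useful life of the data, every
# answer the database gives is already obsolete.
batch_steps_days = {
    "data entry": 1.0,
    "error checking": 1.0,
    "correction": 1.5,
    "file update": 0.5,
}
total_latency = sum(batch_steps_days.values())   # 4.0 days end to end
useful_life_days = 2.0   # how long the intelligence stays relevant

print(f"pipeline latency: {total_latency} days")
print("timely" if total_latency < useful_life_days else "stale on arrival")
```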

The manual system that this was supposed to replace was based on people
reading reports and collecting data summaries on paper and grease pencil
displays.  That system was generally up-to-date and provided swift answers
to questions because the Sergeant on duty usually had the answers to the
most likely questions already in his head or at his finger-tips.  So much
for the speed advantage of computers!

After several months of operation with the new computer system, the
embarrassing discovery was made that no questions were being asked of it.
The SAC senior staff solved this problem by ordering each duty officer to
ask at least two questions of the 438L system operators during each shift.
After several more months of operation we noted that the total number of
queries had been exactly two times the number of shifts in that period.

The fundamental problem with the SAC 438L system was that the latency
involved in creating a database from slightly buggy data exceeded the
useful life of the data.  The designers should have figured that out going
in, but instead they plodded away at creating this expensive and useless
system.  On the Air Force management side, the practice of hiring a
computer manufacturer to do system design, including the specification of
what kind of computer to buy, involved a clear conflict-of-interest,
though that didn't seem to worry anyone.

(Next segment: Subsystem I)

    -Les Earnest (Les@Sail.Stanford.edu)


The risks of keeping old versions

Graeme Hirst <gh@ai.toronto.edu>
Sat, 10 Mar 90 16:03:59 EST
From the Toronto /Globe and Mail/, 9 March 90:

  The much-hyped launch of Chantale Daigle's* book took a bizarre twist
  yesterday as the publisher ordered that all 40,000 copies be burned and
  a corrected version be issued.

  ``We made a serious technical error.  We printed the working copy of
  the (computer) disc instead of the final, edited version,'' said Monique
  Summerside, a spokesman [sic] for Les Editions 7 Jours Inc.
  She said that the changes to be made were strictly grammatical and
  typographical, and were not due to the threat of a lawsuit.  ``This is
  very embarrassing and even more expensive, but we have an obligation to
  the public to provide a good product.''

  . . . The book was scheduled to go on sale yesterday . . . but
  distribution was delayed until Monday because of the recall.

    [*Chantale Daigle was at the centre of a controversial
    Canadian court case concerning abortion last year.]

     [A Toronto Star article from the same day was submitted by David Sherman,
     dave@lsuc.on.ca, who wondered how likely would it have been for a printer
     to accidentally print from an earlier draft when using pre-computer
     printing methods?  Probably much more difficult, especially if type were
     involved -- the old version was simply not around anymore!  But it would
     have been just as easy in the old days to lose the marked-up proof
     pages...  PGN]


PSU Hackers thwarted

Angela Marie Thomas <thomas@shire.cs.psu.edu>
Sat, 10 Mar 90 00:22:22 GMT
The Daily Collegian  Wednesday, 21 Feb 1990

Unlawful computer use leads to arrests
ALEX H. LIEBER, Collegian Staff Writer

Two men face charges of unlawful computer use and theft of services in a
preliminary hearing scheduled for this morning at the Centre County Court of
Common Pleas in Bellefonte.  David Geyer, 234 S. Allen St., and Robert W.
Clark, 201 Twin Lake Drive, Gettysburg, were arrested Friday in connection with
illegal use of the University computer system, according to court records.
Geyer, 36, is charged with the theft of service, unlawful computer use
and criminal conspiracy.  Clark, 20, is charged with multiple counts of
unlawful computer use and theft of service.  [...]

Clark, who faces the more serious felony charges, allegedly used two computer
accounts without authorization from the Center of Academic Computing or the
Computer Science Department and, while creating two files, erased a file from
the system.  [...]  When interviewed by University Police Services, Clark
stated in the police report that the deleted file contained lists of various
groups under the name of "ETZGREEK."  Clark said the erasure was accidental,
resulting from an override in the file when he tried to copy it over onto a
blank file.  According to records, Clark is accused of running up more than
$1000 in his use of the computer account.  Geyer is accused of running up more
than $800 of computer time.

Police began to investigate allegations of illegal computer use in November
when Joe Lambert, head of the university's computer department, told police a
group of people was accessing University computer accounts and then using those
accounts to gain access to other computer systems.  Among the systems accessed
was Internet, a series of computers hooked to computer systems in industry,
education and the military, according to records.

The alleged illegal use of the accounts was originally investigated by a
Computer Emergency Response Team at Carnegie-Mellon University, which assists
other worldwide computer systems in investigating improper computer use.

Matt Crawford, technical contact in the University of Chicago computer
department, discovered that someone had been using a computer account from
Penn State to access the University of Chicago computer system.


Anonymous Word Processing: `Z'

Jon von Zelowitz <vonzelow@adobe.com>
Thu, 8 Mar 90 18:22:42 PST
In RISKS 9.71, R. Clayton quotes an article in the New Republic which gave
astounding "evidence" for the attribution of an anonymous article based on
claimed textual correspondence between it and a file on a computer. (I
assume that Clayton's submission was concerned with the lack of security of
the computer file.) The New Republic article said:

   The staff member called up the file on his own computer during our
   interview and read me lengthy passages, all of which were identical
   to passages in "To the Stalin Mausoleum" [the title of the Daedalus
   article].

What kind of evidence is this? Even if the reporter had personally seen the
file on the screen, it means nothing. File ownership is easily faked. And
since passages were read to the reporter, it was probably a telephone
interview. "Yes, I've got the article right here on my computer screen..."
Wow. I have a bridge to sell to that reporter.

    Jon von Zelowitz   ...sun!adobe!vonzelow   vonzelow@adobe.com


Re: Now Prodigy Can Read You (RISKS-9.69)

Eric Roskos <jer@ida.org>
Fri, 09 Mar 90 09:37:19 E
In Risks 9.69, Donald B. Weschler writes:

> The Prodigy system accesses remote subscribers' disks to check the
> Prodigy software version used, and when necessary, downloads the latest
> programs.  ...  I asked Prodigy how they protect against the possibility
> of altering subscribers' non-Prodigy programs, or reading their personal
> data.  ...  According to Prodigy, the feature cannot be disabled.

This issue was debated at length on the PRODIGY service several months
ago; an explanation was given by Harold Goldes, one of the PRODIGY
service's more technically knowledgeable user-support people.  The
"programs" updated by the PRODIGY software are not executable files
loadable by the PC's operating system; it is not even clear that they are
code executable by the PC's CPU.  Rather, routines to draw the
individual graphics displays used by the PRODIGY software are cached on
the user's disk in a single file, STAGE.DAT, and this cache is updated
via normal cache-updating algorithms.  The PRODIGY software is unable to
update the DOS-executable object programs automatically, and has to send
out new disks when this is necessary.  The explanation given in the
_PRODIGY_Star_ newsletter was an overly-abbreviated version, limited in
technical detail by the PRODIGY service's orientation to nontechnical
people, and, no doubt, by space limitations.

Nevertheless, due to the PC's lack of security mechanisms, the possibility
of altering subscribers' programs or reading personal data does exist on
any such system.  PRODIGY representatives have repeatedly stated that
the PRODIGY software will not do this, and my examination of the
operation of the software has not shown any evidence that any file other
than STAGE.DAT is updated.
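The behavior described above, a version check followed by a rewrite of one designated cache file, can be sketched roughly as follows (the metadata file, version format, and function names are invented for illustration; this is not PRODIGY's actual protocol):

```python
# Sketch of a client that refreshes exactly one cache file when its
# version stamp is behind the server's.  Nothing else on disk is touched.
import json
import os

CACHE_FILE = "STAGE.DAT"    # the single file the client may rewrite
META_FILE = "stage.meta"    # hypothetical local version record

def cache_is_current(server_version: int) -> bool:
    if not os.path.exists(META_FILE):
        return False
    with open(META_FILE) as f:
        return json.load(f).get("version", -1) >= server_version

def refresh_cache(server_version: int, payload: bytes) -> None:
    with open(CACHE_FILE, "wb") as f:
        f.write(payload)            # only the display cache is written
    with open(META_FILE, "w") as f:
        json.dump({"version": server_version}, f)
```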

A topic that has not been so clearly answered is what some users feel to
be the PRODIGY service's overuse of its built-in censorship facilities
and its employment of "over 100" censors; they feel the PRODIGY service
uses this facility to control the expression of opinions on the
service's bulletin boards which may adversely affect marketing goals.

   [PRODIGY is a trademark of Prodigy Services Company, a partnership
   of IBM and Sears.]


Re: Traffic System Failure (Rich Neitzel, RISKS-9.73)

<prahrens@pttesac.UUCP>
Thu, 8 Mar 90 10:47:12 -0800
>   ... you could immobilize a major urban area ...

Perhaps it is not too trivial to point out that Norman Spinrad
published a story in Analog about 25 years ago that used this
exact scenario, wherein a foreign power immobilises New York City.

-Peter Ahrens,  San Francisco


Tracking criminals and the DRUG police-action

J. Eric Townsend <jet@karazm.math.uh.edu>
Sun, 11 Mar 90 15:24:29 CDT
From the Communications of the ACM, vol. 33, no. 3, March 1990:

"News Track:

DRUG WARS... A new FBI computer system created to monitor the activities of
suspected drug traffickers may eventually be able to predict their next move.
Drawing information from several existing FBI databases, the "Drug Information
System" lists suspects' names and stores data on their cars, travel, phone
calls, meetings, assets, and family connections.  Its monitor can display
fingerprints, mug shots and surveillance photos.  By year-end, AI capabilities
will be added to help agents detect suspicious actions, suggest leads and
forecast crimes.  The system will be installed in four cities by the end of the
month, and 18 cities by the end of the year."

  "forecast crimes" -- could they have predicted the hit and run driver
  who totaled my car and didn't even stop to check if my passenger and
  I were injured?  Maybe they should try predicting crimes by politicians
  and federal employees first, just to get the bugs out of the system....

  No :-).

J. Eric Townsend, University of Houston Dept. of Mathematics (713) 749-2120


Human-Centered Automation

Robert Dorsett <rdd@rascal.ics.utexas.edu>
Mon, 5 Mar 90 16:46:31 CST
From: AirLine Pilot, February 1990:

NASA BUILDING TECHNOLOGY BASE FOR HUMAN-CENTERED AUTOMATION

NASA has launched a research program aimed at improving aviation safety by
developing and applying what it calls "human-centered automation" for flight
crews and air traffic controllers.  The agency hopes to develop, by 1994, a
design for both a cockpit and a controller station that are both intelligent
and human centered.

According to NASA documents developed for a recent NASA conference on aviation
safety and automation, the agency's goal is to "provide the technology base
leading to improved safety of the national airspace system through development
and integration of human-centered automation technologies for aircraft crews
and air traffic controllers."

Automation, says NASA, "can improve the efficiency, capacity, and
dependability of the national aviation system."  But, the agency acknowledges,
*humans* will manage, operate, and assure the safety of the next-generation
system.  Therefore, "human-centered automation is the key to effectiveness."

The specific objectives of the NASA program are to:

* develop philosophies and guidelines for applying human-centered
automation to the flight deck and to ATC controller stations;

* provide for flight crews human-centered automation concepts that "ensure
full situational awareness"; and

* provide for air traffic controllers human-centered automation concepts and
methods that "allow integration and management of information and air-ground
communications."

The program has three main elements:

The element dealing with "human/automation interaction" will treat such
subjects as a methodology for analyzing human error, ways of measuring
workload, and "functional validation of intelligent systems."

A program element on intelligent error-tolerant systems will evaluate
collision avoidance systems, "smart" checklists, weather displays, a cockpit
procedures monitor, and more.

A third program element, concerned with ATC/cockpit integration, will look
at pilot/controller communications management, enroute flow management and
scheduling, final approach spacing, and similar issues.


Drive-by-wire cars

Craig Leres <leres@helios.ee.lbl.gov>
Sat, 03 Mar 90 18:26:11 PST
There was an interesting article in the January issue of Car and Driver
magazine titled the "Ten Best Things To Come."  One of the ten things is
drive-by-wire.  They discuss some of the issues applicable to both fly-by-wire
and drive-by-wire:

    "Obviously, a primary concern with drive-by-wire systems is
    reliability and fail-safe operation."

There's a picture of an electronically controlled throttle; the PC board says
BOSCH on it and the hardware appears to be the upper assembly of a carburetor,
i.e., there's a servo which controls a throttle plate. This may be the unit
used in the BMW 750iL, which is claimed to be the only example of a
drive-by-wire automobile on the American market.

The systems they expect us to see in the future will control not only the
throttle but steering and suspension. Examples include four wheel steering and
dynamic camber adjustment.

Hopefully, auto manufacturers will be as conservative with drive-by-wire
systems as they have been with the computer-controlled engines they are
currently building. For example, the engine in my '89 GM car has a computer
that controls functions such as fuel delivery and ignition.  But nearly all
the computer
controlled systems have backups that implement the "limp home mode." There's an
oil pressure switch which activates the electric fuel pump should the computer
fail to. The ignition system has a backup circuit that takes over if the
computer stops supplying ignition timing data.
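The "limp home" arrangement amounts to a simple OR of independent activation paths. A schematic sketch (the signal names are invented for illustration):

```python
# The pump runs if EITHER the engine computer commands it OR the
# mechanical oil-pressure switch closes, so a dead computer still
# leaves a way to limp home.
def fuel_pump_on(computer_command: bool, oil_pressure_ok: bool) -> bool:
    return computer_command or oil_pressure_ok

assert fuel_pump_on(True, False)       # normal operation
assert fuel_pump_on(False, True)       # computer dead; backup takes over
assert not fuel_pump_on(False, False)  # engine stopped: pump stays off
```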

        Craig
