The RISKS Digest
Volume 3 Issue 90

Thursday, 30th October 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Anti Skid Brakes
Paul Schauble
The Mother's Day Myth, and "Old Reliable"
Jerome H. Saltzer
Collision avoidance systems
John Larson
Crime and punishment
Peter Ladkin
Air Canada
Matthew Kruk
(Voting) Machine Politics
Mike McLaughlin
Computer RISKS in "Ticker-Tape Parades"
PGN
SDI vs. Social Security
Scott Guthery
SDI Impossibility?
Scott Dorsey
Feeping Creaturism
Charley Wingate
Info on RISKS (comp.risks)

Anti Skid Brakes

Paul Schauble <Schauble@MIT-MULTICS.ARPA>
Thu, 30 Oct 86 04:44 EST
    In view of the recent discussion on Anti-Skid Brakes and their
overrides, I thought I would post this item.  It is from John Dinkel's
column in the October 1986 issue of Road & Track and describes his and
other race drivers' experience with the Anti-skid Braking System (ABS) on
a Corvette.

          - - - - - - - - - - - - - - - - - - - -

During a recent test session at Sears Point International Raceway, the
Bakeracing Corvette drivers were treated to a couple of graphic
demonstrations of the differences between ABS and non-ABS braking.
Coming down the Carousel, a long sweeping, downhill left-hander, team
leader Kim Baker found himself running a bit fast for the wet track
conditions.  Rather than drive off the track, Kim locked the brakes and
put the car into a harmless spin.  Surprise.  This time it wasn't
totally harmless.  Once the car stopped sliding sideways, the ABS caused
the Vette to steer in the direction in which the front wheels were
aimed.  In this instance the ABS allowed the car to take a wider than
expected arc, and Kim and the Corvette found themselves rolling gently
into the tire wall on the outside of the turn.  No harm except for
embarrassment on Kim's part, but this incident certainly pointed out one
of the differences between spinning a car with and without ABS.

That wasn't the only difference.  I listened intently as two of our
drivers complained of lack of braking and a soft pedal as they applied
the brakes at the top of the Carousel.  Having just finished driving
several laps following a discussion with John Powell, owner of one of
the other Corvette teams and an experienced driver training instructor,
about ABS versus non-ABS race track driving, I knew what the problem
was.  Coming up to the braking point at the entrance to the Carousel, a
car gets light as it crests a hill.  If you apply the ABS brakes at that
instant, the ABS senses loss of traction or a low-coefficient [of
friction] surface and releases pressure to one or more wheels that it
thinks are trying to lock.  The ABS brain has been fooled by the car
losing downforce over that crest, and it can take up to half a second for
the system to recover and allow full braking force after the wheel loads
return to normal.  What does the driver sense during that half second
besides panic?  A soft pedal and longer than expected braking distances.
The solution?  Simple.  Initiate your braking right before the car gets
light or wait until the wheels are fully loaded again after the crest.
Exercise either of these two options and you'll never know that the car
is equipped with ABS except for the added security it affords when you
hot foot it into a corner and discover that you can still steer into the
turn despite having the brakes "locked".  And, as we discovered at
Portland, a Corvette with ABS can drive rings around the competition on
a wet track.

           - - - - - - - - - - - - - - - - - - - -

It's noteworthy that some racing teams are experimenting with
computer-controlled cars, in which the suspension, braking, steering, and
engine parameters are under the direct control of an on-board computer
programmed for the specific race and track being driven.  So far, such a
car has not run in competition.  However, as RISKS readers know,
computer-controlled engines and transmissions are almost commonplace.  I
expect to see a car with a computer-controlled suspension in competition
in 1987.
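
For readers curious about how a wheel-slip check can be fooled in the way
Dinkel describes, here is a much-simplified sketch in C.  It is purely
illustrative: the thresholds, loop rate, pressure fractions, and function
names are assumptions made up for the example, not details of the Corvette
system or of any production ABS controller.

    /* Much-simplified illustration of the failure mode described above.
     * All names and constants are assumptions for the example only; a
     * real ABS controller is considerably more sophisticated.           */
    #include <stdio.h>

    #define SLIP_THRESHOLD  0.25  /* assumed slip ratio treated as "locking" */
    #define RECOVERY_TICKS  50    /* assumed ~0.5 s at a 100 Hz control loop */

    /* Fraction by which a wheel turns slower than the car is moving. */
    static double slip_ratio(double car_speed, double wheel_speed)
    {
        if (car_speed <= 0.0)
            return 0.0;
        return (car_speed - wheel_speed) / car_speed;
    }

    /* One control-loop step for one wheel.  Returns the fraction of the
     * driver's requested brake pressure that is actually applied.       */
    static double abs_step(double car_speed, double wheel_speed, int *hold)
    {
        if (slip_ratio(car_speed, wheel_speed) > SLIP_THRESHOLD)
            *hold = RECOVERY_TICKS;   /* looks like a lock-up: back off      */

        if (*hold > 0) {
            (*hold)--;
            return 0.3;               /* reduced pressure while "recovering" */
        }
        return 1.0;                   /* full requested pressure             */
    }

    int main(void)
    {
        int hold = 0;

        /* Loaded wheel under hard braking: modest slip, full pressure.   */
        double loaded = abs_step(100.0, 95.0, &hold);

        /* Wheel unloaded over the crest: it slows sharply even though the
         * car has not, so the slip check mistakes it for a lock-up and
         * pressure stays reduced for RECOVERY_TICKS iterations.          */
        double unloaded = abs_step(100.0, 60.0, &hold);

        printf("loaded wheel: %.0f%% pressure, unloaded wheel: %.0f%% pressure\n",
               loaded * 100.0, unloaded * 100.0);
        return 0;
    }

The half-second recovery Dinkel mentions corresponds to the RECOVERY_TICKS
countdown in this sketch: until it expires, the driver feels the soft pedal
and the longer-than-expected braking distance he describes.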


The Mother's Day Myth, and "Old Reliable"

Jerome H. Saltzer <Saltzer@ATHENA.MIT.EDU>
Tue, 28 Oct 86 23:11:16 EST
From Robert Stroud's piece on SEAQ. . .  (RISKS-3.89)

> Curiosity is an interesting example of human behaviour causing a
> computer system to fail. I believe the telephone companies have a
> similar problem on Mother's Day when the pattern of usage is abnormal.

Workers in the New England Toll Switching Center here in Cambridge
tell visitors on guided tours (that is the best I can do for a
reference; sorry) that their busiest day for long distance calls is
the Monday after Thanksgiving.  The explanation they give is that the
Friday after Thanksgiving is the first real Christmas shopping day,
because so many people have or take that day off.  All the retailers
in New England study the pattern of sales on Friday and Saturday,
ponder it on Sunday, and spend Monday morning on the telephone to
their suppliers trying frantically to get their hands on more of
whatever seems to be selling well this year.

That one falls in the category of hard-to-imagine-in-advance-but-
easy-to-explain-in-retrospect system problems.

The Michael Clark article quoted by Stroud contains a comment that
is eyebrow-raising from the point of view of RISKS:

> . . . said that although it had enjoyed a high level of reliability,
> it was six years old and considered fairly antiquated by today's
> standards.

I wonder who it is that considers that system antiquated?  Another
perspective says that a complex system that has been running for six
years is just beginning to be seasoned enough that its users can have
some confidence in it.  People who have work to do (as compared with
computer scientists, whom users perceive as mostly interfering with
people trying to get work done) know that in many cases the most
effective system is one that has just become obsolete.  The tinkerers
move on to the shiny new system and leave the old one alone; it
becomes extraordinarily stable and its customers usually love it.

                    Jerry Saltzer


Collision avoidance systems

<jlarson.pa@Xerox.COM>
Wed, 29 Oct 86 11:29:49 PST
There was a rather distressing article about collision avoidance systems
in the San Jose Mercury News recently (Sun, 26 Oct).  According to the
article, the FAA nixed a workable collision avoidance system designed by
Honeywell 11 years ago because it competed with an in-house collision
avoidance system the FAA itself was developing.  This was done in spite of
several studies showing that the Honeywell system would be better than
the FAA system.  The Honeywell system would have cost $14,000 per
commercial airliner and was projected to be cost-reduced to about $1,000,
making it affordable for most aircraft.

They also quoted a former FAA official to the effect that the FAA was
partly responsible for the loss of over 700 lives due to collisions
because of its failure to go ahead with the Honeywell system.

The FAA is finally almost ready with its own version of a collision
avoidance system (it apparently needs another year of testing), but it will
cost a lot more than the original Honeywell system ($40-70K) and has
problems with clouds and bad weather.  It also apparently can't be made
as cheaply as the original Honeywell system ($5,000 or so), so it will
probably not be used much except in commercial aircraft.

Does anyone know more about this issue?  I'm particularly interested in
technical details about the Honeywell and the FAA systems.

John


Crime and punishment

Peter Ladkin <ladkin@kestrel.ARPA>
Tue, 28 Oct 86 18:34:59 pst
Alan Wexelblatt asks:

  [...] the FAA is going to adopt strict rules for small aircraft in busy
  airspaces and establish a system to find and punish pilots who violate
  these rules.  The question this brought to mind is: is this the right
  approach for the FAA's problem?

These rules are already in existence, and so are the punitive
practices. Neither can stop mistakes, as in the Cerritos
airspace violation by the Archer. They are even less effective
against deliberate violators, who turn off their transponders.

  How about for computer systems? [..] Is training the answer [..] ?

Maybe to avoid mistakes, as in rm *, but not for deliberate violators.
The late-70s Berkeley Unix cracker was known, and wouldn't stop.
I believe that the Computer Center tried to hire him to turn his
talents to useful purposes - which didn't work.
Eventually the police went around to arrest him, which seemed
to work (he was a young middle-class teenager).
So training wasn't the answer, but sufficiently severe punishment
was, in this case. Not that I advocate this approach.

Peter Ladkin


Air Canada

<Matthew_Kruk%UBC.MAILNET@umix.cc.umich.edu>
Wed, 29 Oct 86 08:37:58 PST
Apparently this was not the main computer system but a (reservations) backup
system. The "stupidity" of this situation is that, according to news
reports, major building damage (currently estimated at greater than $10
million) might have been avoided had there been a sprinkler system.  I would
be interested in knowing how it came to be decided to have a "backup system"
located in such a building, and whether additional data security measures
were taken by Air Canada (initial newspaper reports seem to imply
that there were none). Perhaps Risks readers in eastern Canada might be able
to shed more light on this.


(Voting) Machine Politics

Mike McLaughlin <mikemcl@nrl-csr>
Wed, 29 Oct 86 16:11:13 est
See DATAMATION, 1 Nov 86, Vol 32 No 21, "Machine Politics" beginning on
page 54.  Good article by John W. Verity.  Quotes Deloris J. Davisson of
Emerald Software & Consulting, Inc., Terre Haute, Ind., and of Ancilla
Domini College.  If anyone knows Ms. Davisson, request she be invited to
contribute to Risks.


Computer RISKS in "Ticker-Tape Parades"

Peter G. Neumann <Neumann@CSL.SRI.COM>
Thu 30 Oct 86 03:01:32-PST
Mets fans were treated to an interesting new form of computer risk on
Tuesday.  An estimated 2.2 million people turned out for the parade to honor
the Mets, so clearly more paper had to be found to dump on the people in
keeping with New York's tradition of a ticker-tape parade.  The solution was
to use computer printout as well as ticker-tape, including huge volumes of
billing reports, shipping orders, and stock records.  Thus, we ask our New
York RISKers whether they picked up any interesting print-out that might
have been a violation of privacy.  Scavenging dumpsters is an old art, but
having possibly sensitive printouts raining down on you is a new twist.


<"guthery%ascvx5.asc@slb-test.CSNET">
Tue, 28 Oct 86 07:37 EDT
              <"4596::GUTHERY%slb-test.csnet"@CSNET-RELAY.ARPA>
To:       risks@CSL.SRI.COM
Subject:  SDI vs. Social Security

When I think about the risks of computerization, I'm much more afraid
of the Social Security System than I am of SDI.  We know computers
hitched to things-that-go-boom are dangerous so we watch them carefully 
as we build them and as we use them.  But computers hitched to paper?
Who really cares?  If it issues a check that's too small or a report
that's fallacious, it's the recipient's problem to make it right. Right?

In other words, if the builders and maintainers of the system have vested
interests in the correctness of the system it is more likely to be correct
than if they don't.  Said another way, it is always the "users" who are
ultimately in charge of ... not responsible for, mind you ... debugging 
the program. Things get fun when the only means a "user" has to debug 
the system is a bureaucratic hole to yell into.

But beyond these mild inconveniences to that lowest of all computer life,
there is a more ominous shadow on the horizon.  We are bringing into being
very large systems whose behavior we don't understand, yet which are woven
into the fabric of our daily life.  I don't mean we don't understand the
line that says multiply hours worked by hourly pay.  I mean we aren't in
control of it or its destiny.  We can't describe its global behavior.  We 
change it but we don't know where its evolutionary path is leading.

("Well, son, it started out as a computer program but we just kinda lost
track of it.  Now it's kinda like the law of gravity.  We take it as
given and just try to work with it or work around it.")

What do we know about scaling up and evolving software?  Are there any 
empirical studies of the evolution of large code bodies (5+ million lines, 
10+ years)? Do we know how to engineer global behavior from local function?  
How do we recover functional descriptions and domain-specific knowledge from
large, mature software systems?  

Software productivity always seems to mean bringing more code into being 
quickly.  Yet the problem I fear is that there is too much code of unknown 
quality and function scattered everywhere and then forgotten.

I suggest that we already have many of the problems that the SDI critics
call out ... only in a more innocuous form.  Cancer kills just as surely 
as a bullet, but it's a hell of a lot harder death.  We all seem to be sitting
around smoking cigarettes and worrying about being shot.


SDI Impossibility?

Scott Dorsey <kludge%gitpyr%gatech.csnet@CSNET-RELAY.ARPA>
Mon, 27 Oct 86 18:36:49 est
>      "In short, the SDI software is not impossible, but ending the
>      fear of nuclear weapons that way is."   [David Parnas]  (RISKS-3.86)

    Is such reliable software impossible?  In 1967, a conference on
computer systems in space contained a paper asserting that the software
required for the Apollo missions was so complex and hard to certify that
it would never work.  Maybe at the time it was true.  And it was certainly
true that it did not work the first time.  The point that I am making is
that no one can really foresee how far software engineering technology will
advance in the next few years, and how far simulation technology will
advance.  Is it worth spending money for something that may not work?
In my (* opinion *) it is always worth spending money on pure research,
but my position is a bit biased.

Scott Dorsey
ICS Programming Lab (Where old terminals go to die),  Rich 110,
    Georgia Institute of Technology, Box 36681, Atlanta, Georgia 30332
    ...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!kludge


Feeping Creaturism

Charley Wingate <mangoe@mimsy.umd.edu>
Tue, 28 Oct 86 22:42:02 EST
(Follow-up to Roy Smith)

This gratuitous computerization also has the obvious risk of introducing a
useless level of unreliability in the system without much gain in
performance.  This is especially a problem for consumer products, where the
electronics are in a far from ideal environment, and which are modularized
to the point of guaranteeing a world tantalum shortage in the not-too-distant
future :-).

C. Wingate
