The RISKS Digest
Volume 3 Issue 29

Friday, 1st August 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Ozone hole undetected for years due to programming error
Bill McGarry
Aircraft simulators and risks
Art Evans
Military testing errors
Scott E. Preece
Risks: computers in the electoral process
Kurt Hyde via Pete Kaiser
Risks of CAD
Alan Wexelblat
Info on RISKS (comp.risks)

Ozone hole undetected for years due to programming error

Bill McGarry <sdcsvax!dcdwest!ittatc!bunker!wtm@ucbvax.Berkeley.EDU>
Fri, 1 Aug 86 0:48:48 EDT
(I read the following in a magazine but when I went to write this
 article, I could not remember which magazine and some of the exact
 details.  My apologies for any inaccuracies.)  

Recently, it was disclosed that a large hole in the ozone layer appears once
a year over the South Pole.  The researchers had first detected this hole
approximately 8 years ago by tests done at the South Pole itself.

Why did they wait 8 years to disclose this disturbing fact?  Because the
satellite that normally gives ozone levels had not reported any such hole
and the researchers could not believe that the satellite's figures could be
incorrect.  It took 8 years of testing before they felt confident enough to
dispute the satellite's figures.

And why did the satellite fail to report this hole?  Because it had been
programmed to reject values that fell outside the "normal" range!
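
To make the failure mode concrete, here is a minimal sketch in Python
(hypothetical names and thresholds, not the actual satellite software) of
how a validity filter can silently hide a real trend:

    NORMAL_RANGE = (180.0, 650.0)   # assumed plausible range, Dobson units

    def accept(reading):
        """Keep a reading only if it falls inside the expected range."""
        low, high = NORMAL_RANGE
        return low <= reading <= high

    # The last two readings reflect a genuine hole, but the filter
    # discards them without raising any alarm.
    readings = [320.0, 310.0, 150.0, 140.0]
    kept = [r for r in readings if accept(r)]
    print(kept)   # [320.0, 310.0] -- the anomaly never reaches the analysts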

I do not know which is more disturbing: that the researchers had so much
faith in the satellite that it took 8 years of testing before they would
dispute it, or that the satellite would observe this huge drop in the ozone
level year after year and just throw the results away.

            Bill McGarry
            Bunker Ramo, Trumbull, CT
            {decvax, philabs, ittatc, fortune}!bunker!wtm

          [A truly remarkable saga.  I read it too, and was going to report
           on it — but could not find the source.  HELP, PLEASE!  PGN]


Aircraft simulators and risks

"Art Evans" <Evans@TL-20B.ARPA>
Thu 31 Jul 86 13:26:48-EDT
In RISKS-3.27, Stephen Little comments on the risks in using an aircraft
simulator which inadequately represents the aircraft being simulated:

    I have been told of one major accident in which the pilot followed
    the drill for a specific failure, as practiced on the simulator,
    only to crash because a critical common-mode feature of the system
    was neither understood nor incorporated in the simulation.

The implication is that use of such a simulator is risky, which is
surely true.  However, as is so often the case, we must also examine the
risk of not using the simulator.  Pilots flying simulators frequently
practice maneuvers which are quite risky in a real aircraft.  A common
example is loss of power in one engine at a critical moment on takeoff.
This is just too risky to practice for real (since sometimes the "right"
answer is to crash straight ahead on the softest and least expensive
piece of real estate in sight), but practice in the simulator is quite
valuable.  All we can do is make the simulator as good as state of the
art permits, and improve it whenever we are subjected to one of the
expensive lessons Little refers to.

Little also comments on the shuttle simulator.  There, I would guess,
the critical issue is the cost of using the real thing as opposed to
the cost of the simulator.  Again, the simulator is as good as practical,
and is improved as more data are gathered.

Art Evans


Military testing errors

"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Fri, 1 Aug 86 16:53:13 cdt
The New York Times report indicated that some of the tests were printed with
a major section set in six-point type instead of ten-point, making it very
hard to read.  The section consisted of math word problems and the object
was to do as many as possible in a set time.  People with the small-type
tests did significantly worse than those with the large-type tests.
Although this MIGHT be a computer-related problem (if the error was, for
instance, lack of a font change in a machine-readable source file), I don't
think the article specifically said that.


Risks: computers in the electoral process

Systems Consultant <kaiser%furilo.DEC@decwrl.DEC.COM>
01-Aug-1986 1529
There will be a symposium on security and reliability of computers in
the electoral process at Boston University this August 14th & 15th.

Computers are relatively new in the electoral process, and most decision
makers have little, if any, experience with them.  One of the
speakers found evidence of a Trojan Horse in ballot-counting software.
He will be speaking about that in the symposium.

PLACE: Boston University   Engineering Building, Room B33
DATE:  August 14th & 15th
TIME:  9:00 AM thru 4:00 PM

I would like to thank the many RISKS readers who contributed last semester
to my students' request for ideas on how to make the computerized voting
booth safe from computer fraud.  I'll be presenting many of the findings of
our study.
                                Kurt Hyde

             [Recall Ron Newman's detailed summary in RISKS-2.42 of 
              Eva Waskell's talk on this subject.  Perhaps we will get
              an update on any new information presented at BU.  We
              look forward to Kurt's findings as well.  PGN]


Risks of CAD

Alan Wexelblat <wex@mcc.com>
Fri, 1 Aug 86 15:45:54 CDT
Henry Petroski's book, _To Engineer is Human_, contains a chapter called
"From Slide Rule to Computer," in which he talks about some risks of
computers and specifically of computer-aided design (CAD).  I will try to
summarize his main points below.

Petroski points out that the transition away from slide rules has, in
itself, some risks.  First of all, there is the problem of precision.
Everyone knows that computers can produce very precise results, but this
tends to blind us to the fact that the results are really no more precise
than the inputs that were combined to produce them.  A twelve-digit answer
is no good if one of your inputs is accurate to only three digits.
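
A trivial illustration of the point (my example, not Petroski's): a
measurement good to three significant figures, combined with an exact
constant, still yields a result good to only three significant figures,
however many digits the machine prints.

    import math

    radius = 1.23                  # measured to three significant figures
    area = math.pi * radius ** 2   # computed to machine precision
    print(area)                    # about 4.7529155256... -- many digits...
    print(f"{area:.3g}")           # 4.75 -- ...but only these are trustworthy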

A side effect of this is that we have tended to lose a `feel' for the
proper magnitudes for our numbers.  When arithmetic was done on a slide
rule, students had to supply the decimal place and thus needed to know
approximately how big the answer should be.  This lack of feel seems to
have been (at least part of) the problem with the x-ray machine that burned
a patient by applying too large a dose.
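
A small sketch of the slide-rule point (mine, not Petroski's): the rule
yields only the significant digits, so the user must place the decimal
point by a rough mental estimate, a habit a calculator never forces.

    from math import floor, log10

    def slide_rule_digits(x, y):
        """Return the digit string a slide rule would show for x * y,
        with no decimal point: the user must place it by estimation."""
        product = x * y
        normalized = product / 10 ** floor(log10(abs(product)))
        return f"{normalized:.3f}".replace(".", "")

    print(slide_rule_digits(3.2, 4.1))     # '1312'
    print(slide_rule_digits(320.0, 41.0))  # '1312' -- same digits; only a
                                           # rough estimate (300 * 40 is
                                           # about 12,000) fixes the scale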

In "the old days" calculating stresses and the like was expensive and so
engineers didn't have time to do too much of it.  So they tended to design
things that were close to their experience and where they knew approximately
what the stresses, etc.  should be.  With optimization (and other CAD)
packages, engineers can do much more calculating and can therefore design
structures that are more novel and that they are less familiar with.  This
increases the risk that the engineer will not be able to spot errors in the
CAD programs' output.  Again, he has no `feel' for what the output should be.

Petroski also fears that inadequate computer simulation is replacing crucial
real testing.  Engineers who are not programmers may not realize that
certain stress calculations have not been done by the program; thus they may
be inclined to forgo simple things (like physically stretching or bending a
pipe to see where it breaks).  An example of this oversimplification is the
collapse of the roof of the Hartford Civic Center (under a weight of ice
and snow).  Post mortem analysis revealed that the interconnection of the
rods and girders in the ceiling had been modeled too simplistically in the
computer programs that were used during the design.

In general, Petroski fears that the CAD programs' optimization is leading
to structures that are "least-safe."  That is, there is no room for error
in the optimized structure.
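
As a toy illustration of "least-safe" (my sketch, with assumed loads and
material values, not from the book): a member sized to carry exactly the
design load fails under any unmodeled overload, while a classical safety
factor absorbs it.

    DESIGN_LOAD = 100_000.0   # newtons (assumed)
    YIELD_STRESS = 250e6      # pascals, assumed mild steel

    optimized_area = DESIGN_LOAD / YIELD_STRESS    # sized with zero margin
    conservative_area = 1.5 * optimized_area       # classical safety factor

    actual_load = 1.1 * DESIGN_LOAD   # a 10% unanticipated overload
    for name, area in [("optimized", optimized_area),
                       ("conservative", conservative_area)]:
        stress = actual_load / area
        verdict = "FAILS" if stress > YIELD_STRESS else "holds"
        print(f"{name:12s} stress = {stress / 1e6:5.0f} MPa -> {verdict}")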

There is also a risk that with a software crutch a less-than-qualified
engineer can put together a design that looks better than it is.  Even an
engineer who is qualified in one area may be encouraged by the ease of CAD
to venture outside his area of expertise.

One other item will interest RISKS readers.  In the chapter called "The
Limits of Design," Petroski quotes from the "Proceedings of the First
International Conference on Computing in Civil Engineering."  Apparently,
there was a session on `Computer Disasters' at
that conference, but NO PAPERS WERE PUBLISHED.  Supposedly, this encouraged
candor.  The conference was held in New York, in 1981.  Were any RISKS
readers there?  Do you know someone who was?  It would be interesting to
see if we can construct a list of our own.

In any event, Petroski's book (ISBN 0-312-80680-9) is a good read and can
be bought at a discount by members of LCIS.  I recommend it highly.

Alan Wexelblat
UUCP: {ihnp4, seismo, harvard, gatech, pyramid}!ut-sally!im4u!milano!wex
