The RISKS Digest
Volume 6 Issue 37

Sunday, 6th March 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Finagling Prescription Labels
Robert Kennedy
Opus bulletin boards fail worldwide on 1 March 1988
Thomas Fruin
Dave Platt
Social Security Administrator hides computer problems
Ivan M. Milman
A320 Airbus Fly by Wire System
Geoff Lane
Black Monday not caused by program trading, MIT's Thurow asserts.
LT Scott A. Norton
Re: Ada-caused bugs?
Henry Spencer
Magnetic card sensitivity test (a sort of)
Matti Aarnio
Perrow's "Normal Accidents"
Brian Randell
Info on RISKS (comp.risks)

Finagling Prescription Labels

Robert Kennedy <jrk%computer-lab.cambridge.ac.uk@NSS.Cs.Ucl.AC.UK>
Thu, 3 Mar 88 10:57:52 GMT
A recent RISKS posting about adverts appended to TELEX messages reminded me
of a recent experience I had with the label on a bottle of prescription
medicine.

The instructions for use, the name, the Doctor's name, and all the important
stuff appeared intact, but down at the bottom of the label, in compressed
print (the rest of the label had been printed in a "normal" dot-matrix style)
was the question "WILL THIS COMPUTER WORK?"

At first, I just thought it was funny — someone having a good time with some
spare space on the label. But then I realized that maybe prescription labels
aren't the best thing to be monkeying around with...


Opus bulletin boards fail worldwide on 1 March 1988

Thomas Fruin <FRUIN%HLERUL5.BITNET@CUNYVM.CUNY.EDU>
Sat, 5 Mar 88 01:51 N
Here's another February 29th/leap year story for this year:

On March 1st, 1988, every PC-based bulletin board running the latest version
of the Opus bulletin board program (version 1.03a) suddenly decided that every
caller would get only 0 minutes logon time.  When this happened to the BBS I
run, I didn't immediately suspect it was one of those leap-year bugs, but when
I tried to logon to a friend's board, and got the TIME LIMIT message, I was
pretty sure.  And a day or so later, it became clear that this was happening to
the hundreds of Opus boards all over the world.
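
For readers wondering how a leap-year slip can zero out everyone's logon time,
here is a small hypothetical sketch in C.  It is NOT the actual Opus 1.03a
code (which I have not seen); it merely assumes a daily allowance computed
from two date conversions, one of which forgets that 1988 is a leap year, so
that on 1 March the "minutes used today" figure contains 1440 phantom minutes
and the remaining allowance clamps to zero.

  /* Hypothetical sketch only -- not the actual Opus 1.03a code.  It shows
   * one way a leap-year oversight in date arithmetic can make a daily time
   * allowance collapse to zero for every caller on 1 March of a leap year:
   * two routines that disagree about whether 29 February existed.
   */
  #include <stdio.h>

  /* Cumulative days before each month in a non-leap year. */
  static const int before[12] = {0,31,59,90,120,151,181,212,243,273,304,334};

  /* Minutes since 1 January, counting 29 February only when 'leap' is set. */
  long minutes_since_newyear(int month, int day, int hour, int min, int leap)
  {
      long days = before[month - 1] + (day - 1);
      if (leap && month > 2)
          days += 1;                    /* count the leap day */
      return days * 24L * 60L + hour * 60L + min;
  }

  int main(void)
  {
      int daily_limit = 60;             /* minutes each caller may use per day */

      /* A caller connects at 08:30 on 1 March 1988.  "Now" is converted with
       * the leap day counted, but "midnight of today" is converted by a
       * routine that forgot 1988 is a leap year.                             */
      long now        = minutes_since_newyear(3, 1, 8, 30, 1);  /* leap-aware */
      long midnight   = minutes_since_newyear(3, 1, 0,  0, 0);  /* leap-naive */
      long used_today = now - midnight; /* 1440 phantom minutes + 510 real    */
      long remaining  = daily_limit - used_today;

      if (remaining < 0)
          remaining = 0;                /* clamp, as a BBS time limit would   */

      printf("Minutes remaining: %ld\n", remaining);            /* prints 0   */
      return 0;
  }

The real mechanism inside Opus may well have been different; the point is only
that a single forgotten leap day in date arithmetic is enough to lock every
caller out until a patch arrives.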

Fortunately these bulletin boards are mostly for hobbyists, and don't pose such
a great RISK when they fail, but it is stupid.  Anyway, since these Opus boards
are all linked via the FidoNet, a utility to patch the Opus object code has
been sent out all over the world very fast.  That's the advantage of computers
I suppose ...
                                        Thomas Fruin

     [... and a disadvantage too — if Trojan horses can breed that fast.  PGN]


Bug in leap-year code dogs Fidonet systems

Dave Platt <coherent!dplatt@ames.arc.nasa.gov>
5 Mar 88 03:56:42 GMT
I logged onto my favorite local bulletin-board system (Mailcom, in Palo Alto)
this afternoon, after not having been able to contact it for several days.  A
message in the sign-on banner reported that Fidonet bulletin boards
country-wide (and, I presume, world-wide) were seriously disrupted by a bug in
the date logic; it appears that the code didn't properly cope with Leap Year
Day (last Monday).  Mailcom was apparently off the air for three days, until a
patch arrived.  [...]  I imagine that the offending code was less than 4 years
old.
                                   Dave Platt
  UUCP: ...!{ames,sun,uunet}!coherent!dplatt     DOMAIN: dplatt@coherent.com


Social Security Administrator hides computer problems

Ivan M. Milman <ivan@sally.utexas.edu>
Sun, 6 Mar 88 18:14:27 CST
[Excerpted without permission from  Saving Social Security, March 1988]

"Rumors abound that Social Security Commissioner Dorcas Hardy may be on her way
out..."  "The latest example of Hardy's style came January 7 when she arranged
for top General Accounting Office (GAO) officials to tour her "showcase"
computerized service-center in Washington, D.C.  But an hour before the tour,
none of the computers would work - which is what GAO has already concluded
about the entire system.  Rather than allow the GAO officials to witness this
embarrassment, however, Hardy ordered all Social Security Service Centers in
Pennsylvania, Maryland, Virginia and West Virginia to shut down computer
printing operations to free the D.C. center to operate without problems.
Seniors throughout those states had to wait for service so Hardy could create
the illusion the system was trouble-free.  Hardy has insisted that the flawed
computer system justifies a 21 percent reduction in Social Security staffing..."

Ivan M. Milman


A320 Airbus Fly by Wire System

"Geoff. Lane. Phone UK-061 275 6051" <ZZASSGL@CMS.UMRCC.AC.UK>
Fri, 04 Mar 88 10:46:05 GMT
In the Dec 12th, 1987 issue of Flight International there is a report by
Harry Hopkin on his experiences of flying an A320 in various failure modes.
He reports that even with a simulated total electrical failure the aircraft is
still flyable by means of the rudder and pitch trim alone.

Geoff Lane, UMRCC


Black Monday not caused by program trading, MIT's Thurow asserts.

"LT Scott A. Norton, USN" <4526P%NAVPGS.BITNET@CUNYVM.CUNY.EDU>
Fri, 04 Mar 88 01:30:45 PST
In a one-page article in the February-March issue of Technology Review,
MIT's Lester C. Thurow, Dean of the Sloan School of Management, states
that neither stock-index arbitrage nor portfolio insurance caused the
stock market to fall in October.  He compares October's panic with
some classic panics, such as the Amsterdam tulip-bulb craze of 1637
and the London South Sea Bubble of 1720, as well as the crash of 1929.

For the cause of panic on October 19, Thurow points immediately to "herd
panic", and ultimately to the difference in price/earnings ratio between the
stock market and bonds.  The final push that caused stock investors to lose
heart was a rise in interest rates to defend a weak dollar.  This
caused bonds to look even more attractive to stock owners.

Although Thurow explains how programmed trading does not differ essentially
from the trades a human arbitrageur would make, he does not discuss the effect
that the greater speed of programmed trading had on the market's volatility.

LT Scott A. Norton, USN, Naval Postgraduate School, Monterey, CA 93943-5018
4526P@NavPGS.BITNET   4526P@NPS.ARPA

   [Have you herd panic?  [And have you heard panic?]  PGN]


Re: Ada-caused bugs? [RISKS-6.36]

<mnetor!utzoo!henry@uunet.UU.NET>
Sun, 6 Mar 88 00:11:03 EST
> [Ada's] complexity makes it ripe for misuse.  It is nominally mandated for 
> all military embedded systems, except that various limitations have resulted 
> in its being eschewed in some security-community applications...       [PGN]

Considering Ada's application domain (and my personal dislike for Ada), I
laughed long and hard when I noticed the following quote in the first issue of
the new journal "Computing Systems" (Marc H. Donner and David H.  Jameson,
"Language and Operating System Features for Real-time Programming", Computing
Systems vol 1 number 1, winter 1988, pp 33-62):

     Ill-chosen abstraction is particularly evident in the design of
     the Ada runtime system.  The interface to the Ada runtime system 
     is so opaque that it is impossible to model or predict its 
     performance, making it effectively useless for real-time systems.

(Donner and Jameson are with the IBM Thomas J. Watson Research Center;
the paper is very interesting.  Computing Systems is being published by
U of California Press for the Usenix Association.)

Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Magnetic card sensitivity test (a sort of)

Matti Aarnio <FYS-MA%FINTUVM.BITNET@CUNYVM.CUNY.EDU>
Tue, 23 Feb 88 15:39:43 EET
    My laboratory got some questions from the local newspaper concerning the
sensitivity of magnetic cards to the magnetic clasps used on purses.  We got
the suspected purse and measured its magnetic field.  Because of the magnet
construction and the gauge structure, I have my doubts about the exact value,
but it seems to be AT LEAST 35 mT at about 5 mm from the magnet poles (that
particular magnet had a structure similar to a loudspeaker magnet).  This is
just a single measurement from a single sample.  (BTW: Earth field is about 5 mT)

    Then I made a simple experiment: a blank formatted PC diskette (360 KB)
was briefly touched with the magnet (at a single point).  Then the diskette
was read through to see how many sectors were still readable.  (The diskette
was reformatted and verified between each individual test.  Reading was done
with MS-DOS Debug.)
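
    The original test was done by hand with MS-DOS Debug; as an aside, here
is a rough sketch (assuming Borland Turbo C and its biosdisk() BIOS call, an
assumption on my part, not something the original test used) of how the same
sector-by-sector readability check could be automated.  A 360 KB diskette has
40 tracks, 2 sides, and 9 sectors of 512 bytes per track, 720 sectors in all.

  /* Rough sketch, assuming Borland Turbo C and its biosdisk() interface.
   * Try to read every sector of a 360 KB diskette in drive A: and count
   * the ones that cannot be read back.
   */
  #include <stdio.h>
  #include <bios.h>

  #define TRACKS  40    /* 360 KB format: 40 tracks ...            */
  #define HEADS    2    /* ... 2 sides ...                         */
  #define SECTORS  9    /* ... 9 sectors of 512 bytes per track    */

  int main(void)
  {
      static char buf[512];
      int track, head, sector, bad = 0;

      for (track = 0; track < TRACKS; track++)
          for (head = 0; head < HEADS; head++)
              for (sector = 1; sector <= SECTORS; sector++)   /* 1-based   */
                  if (biosdisk(2, 0, head, track, sector, 1, buf) != 0) {
                      printf("unreadable: track %d, head %d, sector %d\n",
                             track, head, sector);
                      bad++;
                  }

      printf("%d unreadable sectors out of %d\n", bad, TRACKS * HEADS * SECTORS);
      return 0;
  }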

    Every time the diskette was touched directly to the magnet, it did lose
some sectors, i.e., the field was strong enough to do damage.  But when the
diskette was merely put inside the purse (even right next to the magnet),
there was never any data loss.  The affected area was small, only a few
millimeters in diameter, so data loss didn't happen on every track.  This also
means that, to 'destroy' a magnetic stripe, one must hit the stripe itself,
not just come within an inch or so of it.

    While discussing further how this journalist handled her card, we came to
the conclusion that with this kind of clasp magnet it is quite possible to
accidentally handle the card right on top of the magnet.  She opened her
purse, took her card out, put it on top of the purse (and the magnet), kept it
there for a moment while she took some papers from the purse, and then handed
them to the shop clerk.  (Small shops don't have electronic card readers even
today, but those shops are becoming rare.)

    As you understand, this test isn't scientifically solid (it was done
within 30 minutes), but it does give some idea of how sensitive these things
are.  I also made the assumption that the diskette and the magnetic card
contain similarly sensitive material.  What this does prove is that, with a
specific (and quite common) type of magnetic clasp, it is possible to damage
data on a diskette.

Matti Aarnio, University of Turku; Wihuri Physical Laboratory, 
SF-20500 TURKU; FINLAND   (Phone:+358-21-645917)  BITNET: FYS-MA at FINTUVM


Perrow's "Normal Accidents"

Brian Randell <Brian_Randell%newcastle.ac.uk@NSS.Cs.Ucl.AC.UK>
Thu, 3 Mar 88 19:08:46 GMT
I've recently been reading "Normal Accidents", by Charles Perrow, (Basic Books,
New York, 1984), which I received through inter-library loans after such a long
delay that I can't remember whether it was through RISKS that I first learnt
about it, though I certainly have seen it referenced there since. However, I'm
not aware of it ever having been extensively discussed in RISKS, so although it
contains few explicit references to computers, and is written from the
viewpoint of a social rather than a computer scientist, I thought the following
quotes from it might be of interest:

 "Complex systems are characterized by:
 * proximity of parts or units that are not in a production sequence;
 * many common mode connections between components (parts, units or subsystems)
   not in a production sequence;
 * unfamiliar or unintended feed-back loops;
 * many control parameters with potential interactions;
 * indirect or inferential information sources; and
 * limited understanding of some processes.

 "Complex systems are not necessarily high risk systems with catastrophic 
 potential; universities, research and development firms, and some government 
 bureaucracies are complex systems . . ."

 "In complex systems, not only are unanticipated interdependencies more likely
 to emerge because of a failure of a part or a unit, but those operating the
 system (or managing it) are less likely, because of specialized roles and
 knowledge, to predict, note, or be able to diagnose the interdependency before
 the incident escalates into an accident."

 "On the whole, we have complex systems because we don't know how to produce
 the output through linear systems. If these complex systems have catastrophic
 potential, then we had better consider alternative ways of getting the
 product, or abandoning the product entirely."

 "Tight coupling is a mechanical term meaning that there is no slack or buffer
 or give between two items. What happens in one directly affects what happens
 in the other....Elaborating the concept as used by organizational theorists
 will allow us to examine the responsiveness of systems to failures, or to
 shocks.  Loosely coupled systems, whether for good or ill, can incorporate
 shocks and failures and pressure for change without destabilization. Tightly
 coupled systems will respond more quickly to these perturbations, but the
 response may be disastrous. Both types of systems have their virtues and
 vices."

 "Since failures occur in all systems, means to recovery are critical. One
 should be able to prevent an accident, a failure of a part or a unit, from
 spreading.  All systems design-in safety devices to this end. But in tightly
 coupled systems, the recovery aids are largely limited to deliberate,
 designed-in aids, such as engineered-in safety devices..."

The above quotations are from the main analytical chapter in the book.
Subsequent chapter titles are: 'Petrochemical Plants', 'Aircraft and Airways',
'Marine Accidents', 'Earthbound Systems: Dams, Quakes, Mines and Lakes', and
'Exotics:  Space, Weapons and DNA'.

The final chapter is entitled 'Living with High Risk Systems', from which the
following quotes come:

 "I propose using our analysis to partition the high-risk systems into three
 categories. The first would be systems that are hopeless and should be
 abandoned because the inevitable risks outweigh any reasonable benefits
 (nuclear weapons and nuclear power); the second, systems that we are unlikely
 to be able to do without but which could be made less risky by considerable
 effort (some marine transport), or where the expected benefits are so
 substantial that some risks should be run, but not as many as we are now
 running (DNA research and production). Finally, the third group includes those
 systems which, while hardly self-correcting in all respects, are
 self-correcting to some degree and could be further improved with quite modest
 efforts (chemical plants, airlines and air traffic control, and a number of
 systems which we have not examined carefully but should mention here, such as
 mining, fossil fuel power plants, highway and automobile safety). The basis
 for these recommendations rests not only with the system accident potential
 for catastrophic accidents, but also the potential for component failure
 accidents. I think the recommendations are consistent with public opinions and
 public values."

 "My recommendations must be judged wrong if the science of risk assessment as 
 currently practiced is correct. Current risk assessment theory suggests that
 what I worry about most (nuclear power and weapons) has done almost no harm to
 people, while what I would leave to minor corrections (such as fossil fuel
 plants, auto safety, and mining) has done a great deal of harm." 

This leads on to a very interesting critique of risk assessment, from which I
have extracted:

 "While not as dangerous as the systems it analyzes, risk assessment carries
 its own risks ..."

 "When societies confront a new or explosively growing evil, the number of risk
 assessors probably grows - whether they are shamans or scientists. I do not
 think it an exaggeration to say that their function is not only to inform and
 advise the masters of these systems about the risks and benefits, but also,
 should the risk be taken, to legitimate it and to reassure the subjects."

 "This is a very sophisticated field. Mathematical models predominate;
 extensive research is conducted ... yet it is a narrow field, cramped by the
 monetarization of social good."

 "The risk assessors, then, have a narrow focus that all too frequently (but
 not always) conveniently supports the activities elites in the public and
 private sector think we should engage in. For most, the focus is on dollars
 and bodies, ignoring social and cultural criteria. The assessors do not
 distinguish risks taken for private profits from those taken for private
 pleasures or needs, though the one is imposed, the other to some degree
 chosen; they ignore the question of addiction, and the distinction between
 active risks, where one has some control, and passive risks; they argue for
 the importance of risk but limit their endorsement of approved risks to the
 corporate and military ones, ignoring risks in social and political matters."

Finally, I asked Jim Reason (Professor of Psychology at Manchester, whose work
on human errors I have commented on in RISKS earlier) for his opinion of
Perrow's book, and got the following reply:

 "I was very impressed by the Perrow book.  It provided an extremely
 interesting systems view on accidents (i.e. from a sociological perspective),
 and certainly influenced my thinking quite markedly.  There is much in it that
 I disagree with — I'm not entirely happy with the Luddite solution proposed
 at the end, for example — nor do I entirely agree with his dismissal of the
 human error contribution.  But it's an excellent read.  You don't have to wade
 through the case studies.  The meat is easily discernible in about two 
 chapters."

          [A quick grep shows Perrow mentioned in RISKS-1.37, 1.45, 2.44, 3.27,
          5.14, and 5.62.  Quite popular!  There is much that can be learned,
          even if his book is not DIRECTLY computer relevant.  PGN]
