The RISKS Digest
Volume 4 Issue 94

Tuesday, 2nd June 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



o Australian Computer Crime
Donn Parker
o PCs and Computer Fraud
PC Week via PGN
o Technological vs. (?) human failure
Nancy Leveson
o Risk of Inappropriate Technology to Prevent Password Overwrite
Henry Spencer
o A twist on modems calling people
Steve Valentine
o Risks of Compulsive Computer Use
Steve Thompson
o Perhaps the Bill of Rights you sought?
Bruce Wisentaner
o Error(s) in "Phalanx Schmalanx"
Mike Trout
o Info on RISKS (comp.risks)

Australian Computer Crime

Tue 2 Jun 87 11:16:51-PDT
A sophisticated computer crime occurred in Australia recently and is being
investigated by Kevin Fitzgerald and Stuart Gill in Melbourne for the victim
company.  Sketchy details, more later.  A disgruntled employee modified PC
circuit boards.  One, called "Icepick", attacked ACF-2 on an IBM mainframe; the
other, called "Big Red", was used in a virus attack.
                                   Donn Parker

PCs and Computer Fraud

Peter G. Neumann <Neumann@CSL.SRI.COM>
Tue 2 Jun 87 17:36:48-PDT
The proliferation of PCs and other computers throughout U.S. businesses has
led to larger losses to fraud, according to a recent study.  Computer crime
is on the rise, says a 54-page report by the Cleveland accounting firm Ernst
& Whinney, in part because there are more computers in the United States
from which to steal.  The increasing use of computers in business has raised
the sophistication of users and, at the same time, fed the expanding pool
of potential computer criminals, the study notes.  

The FBI estimates the average loss from computer theft at $600,000, or about
25 times the average loss from "conventional" crime, the report says.  Of
the 240 companies surveyed, more than half said they have been a victim of
computer fraud, which the report estimates costs U.S. businesses from $3
billion to $5 billion a year.

[From PC Week, vol 4 no 21, 26 May 1987.]

Technological vs. (?) human failure

Tue, 02 Jun 87 16:14:19 -0700
In Risks 4.93 PGN writes:

     I noted with interest the articles in this morning's paper, which 
     imply that there were no technological failures, only human failures...

It seems that man/machine interface issues are largely ignored in computer 
science and software engineering.  A current court case has the company that 
wrote the software for a device that killed two people arguing that the fault 
was the operator's.  In this instance, the software and documentation provided 
the operator with a command that was dangerous to use without any warning 
about how and when to use it safely (in fact, it probably should not have
been provided at all).  Was the operator at fault for acting in a
natural, human way or was the designer of the equipment at fault for
designing a technological device in a way that could easily lead the human
to make a mistake?  Can the design of a technological device be judged 
"correct" without considering the environment in which it will be operated?

We design programming languages so that they are less error-prone and
discourage the use of some languages because they are harder to use.  
Should we not also be responsible for doing this for the other types
of software we create?  We know that humans can make mistakes and, from 
psychologists, understand a great deal about their "failure modes."  If 
we can consider hardware failure modes in our designs, why not consider 
human failure modes?  Many of the large firms do employ human factors
experts, but not enough is done and people seem much too willing
to blame the human instead of the technology that failed to consider 
the human in its design.  I am starting to be leery when I read that 
the technology did not fail, the operator did.  

Nancy Leveson, University of California, Irvine

Risk of Inappropriate Technology to Prevent Password Overwrite

Henry Spencer <decvax!utzoo!henry@ucbvax.Berkeley.EDU>
Sun, 31 May 87 00:00:23 edt
>   The particular error cited by Bill Young could not have happened if the
> implementation had been in a language such as PL/I or Ada, where
> over-running the bounds of an array is a required run-time check...

I'm not an Ada aficionado, but my recollection is that every PL/I compiler
I've ever seen has a turn-checks-off option, and usually it's the default.
The reason is clear:  such checks are expensive, particularly with a naive
compiler that can't eliminate many of them at compile time, and the overrun
condition is rare.

> Such checks are clearly not new technology, since Multics (written in PL/I)
> has been doing such for over 20 years.  Nor is the technology new to
> hardware, since the Burroughs B5500-series and MCP (written in Algol) has
> also been checking for a similar period.

The distinguishing feature here is that both Multics and MCP are running on
special hardware.  The reason that these are relatively unpopular systems,
while Unix and C are everywhere, is that the latter will run efficiently on
almost anything.  As we all know, many people will trade off safety for
performance any day.  As is less widely appreciated, this is not necessarily
a foolish thing to do — it depends on the application.  One negative aspect
of having hardware and languages that enforce checking is that you have no
control over the tradeoffs.

My personal conjecture, not yet verified by experiment, is that with a
cooperative language and a reasonable amount of intelligence in the
compiler — more than most current compilers have, mind you — something
like 90% of the run-time checks could be eliminated at compile time, and
about 90% of the remainder could be eliminated by reprogramming that would
make the program clearer to the compiler (and incidentally to humans) while
leaving efficiency unaffected.  The remaining 1% might require a way to tell
the compiler "believe me, it's right", but otherwise the need for a run-time
check could be made a fatal compile-time error.  Result:  safety with no
efficiency penalty.  Trouble is, verifying this conjecture would require
building such a smart compiler, a sizable project.  Maybe next year...

Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry

A twist on modems calling people

Tue, 2 Jun 87 13:10:25 EDT
The folks at our main facility just installed a new telephone switch, and made
two changes which are not user-transparent.  The two changes involve the method
used to reach our remote switch, and the method used to dial an international
call.  If you haven't guessed yet, the old international prefix corresponds to
the new method of ringing my extension from the main facility.  This would be
amusing if it weren't for all the auto-dial facsimile machines trying to phone
home to Japan with the old dialing codes.  They're not much fun to talk to, and
they don't seem to report the fact that the calls aren't getting through.

The moral of this story: Get your Fax straight before you make changes.

Steve Valentine, NEC Information Systems 289 Great Rd., Acton, MA 01720

Risks of Compulsive Computer Use

Steve Thompson <>
Mon, 01 Jun 87 15:33:25 EDT
I saw a piece on the Cable News Network (U.S.  cable television station)
last night concerning compulsive gamblers.  During the roughly 5 minute
story, a theory was presented which held that certain types of people may be
more likely to become compulsive gamblers than others.  I was doing
paperwork as I watched, and so was distracted, but I thought they said
something to the effect that those people are especially susceptible to the
(physiological?  psychological?) disease of gambling in much the same way as
alcoholics appear to react (physiologically?) to alcohol abnormally.
   I was surprised by the comparison, since alcoholics are reacting to a
drug (the alcohol), while gamblers are reacting to a behavior.  Is it fair
to call the gambling a disease, or "simply" a noxious habit?
   Anyhow, my train of thought led to computer use.  Certainly there are
individuals whose attraction to computers is greater than average, and
possibly greater than is healthy.  I regretfully admit a period as an
undergraduate when I would play computer games as a response to a slow
social life, which led to a vicious cycle — my computer use took time that
I could have been using to work on my social life.  I broke the pattern, but
am left with concerns: Need we worry about compulsive computer users?  Need
there be a Hackers Anonymous?  Should compulsive computer use be considered
a disease, or a worthwhile funneling of energy?  If a disease, the vast
number of computers being introduced into the schools and the workplace
without our fully understanding the problem seems to present a potentially
large RISK to susceptible individuals.
   I'd love feedback on these thoughts,  as well as knowledgeable responses.
It would probably be wise if flames and responses correcting my knowledge of
alcoholism, etc., were directed to me; I'll summarize to RISKS.

Stephen W. Thompson, User Services Specialist, User Services
Brown U., Box P, Providence, RI  02912  USA                (401) 863-3619

Perhaps the Bill of Rights you sought?

Bruce Wisentaner <wisen@CCA.CCA.COM>
Tue, 2 Jun 87 12:04:46 EDT
Regarding Eugene's message to RISKS about info-age Bill of Rights:  See if you 
have this book in your friendly neighborhood tech library:  "FREEDOM'S EDGE: 
The Computer Threat To Society" by Milton Wessel (Addison Wesley, 1974).  It
seems to have the Bill of Rights that you seek in its appendix.  I have neither
the legal right nor the time to type it in.
                                                    Bruce Wisentaner

Error(s) in "Phalanx Schmalanx"

Mike Trout <rpics!brspyr1.BRS.Com!miket@seismo.CSS.GOV>
1 Jun 87 12:19:16 GMT
My earlier posting concerning problems in the Phalanx missile defense system
contained some errors.  I'm not sure how much, if any, of the article is
usable, but I'm a fanatic about accuracy so I'll point out my mistakes.

In one paragraph, I mentioned a U.S. Army vehicle-based Gatling gun called the
"Chaparral."  That is incorrect.  The Chaparral consisted of a number of
Sidewinder heat-seeking missiles mounted on a vehicle, and did not include a
Gatling.  The Chaparral did indeed die a quiet death, probably due to the short
range of the missiles.  There have been, and still are, various types of ground
weapons consisting of Gatlings mounted on vehicles.  They don't seem to be
meeting with much enthusiasm.  My statement about ground-based Gatlings
having problems with short range and voracious ammunition consumption is
generally accurate.

Actually, since that whole paragraph I wrote about ground-based Gatlings has
marginal relevance to the Phalanx, I suppose it could be deleted.

Sorry about that.  Thanks for your attention.  Michael Trout
BRS Information Technologies, 1200 Rt. 7, Latham, N.Y. 12110  (518) 783-1161
