The RISKS Digest
Volume 7 Issue 80

Friday, 18th November 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Computer glitch causes Fresno `flood'
Ira Greenberg via PGN
Election Computing
PGN
Re: Vote Count Error
Brint Cooper
Casiers numeriques! (Digital lockers!)
Marc Vilain
Re: Toll Road information collection
David Phillip Oster
Risks of non-technologists' reactions to technological failures
Fred McCall on Al Fasoldt
Info on RISKS (comp.risks)

Computer glitch causes Fresno `flood'

Peter Neumann <neumann@csl.sri.com>
Fri, 18 Nov 1988 14:27:22 PST
  FRESNO — The computer that controls the city's water service malfunctioned,
or ``crashed'', three separate times Monday within an hour and a half, causing
at least 12 mains to rupture and damaging nearly 50 residential plumbing
systems.
  The $2.3 million computerized telemetering system, which has been in
operation for only six months, controls 106 water pumps and wells and 877 miles
of piping.
  ... the malfunction — which centered in a burglar alarm at one of the pumps
-- sent confusing signals to the computer that temporarily shut down the pumps.
  An automatic restart device that shifts the system over to manual controls
sent water pressure levels of up to 75 pounds per square inch surging through
the pipes.  Usually the level ranges from 40 to 45 [psi].
  With the computer inoperable, the manual system took over with each pump
operating on its own fixed settings.  ... the settings apparently weren't
properly set and the resulting heavier flow of water proved too much for some
of the city's older mains to handle.
  It also triggered 24 automatic fire alarms ...

[From the San Jose Mercury, 16 November 1988, thanks to Ira Greenberg]


Election Computing

Peter Neumann <neumann@csl.sri.com>
Fri, 18 Nov 1988 16:03:04 PST
A law suit has just been filed in Texas on behalf of the voters of the state
challenging the entire election and requesting not a recount but an entirely
new election.  The grounds are that the State did not follow its own procedures
for certifying the election equipment.  Perhaps one of our Texas readers can
keep us informed of the details.


Re: Vote Count Error

Brint Cooper <abc@BRL.MIL>
Thu, 17 Nov 88 11:53:34 EST
Re: Kenneth Jongsma's contribution on vote count error.

    Back in 1958 (!), the Black & Decker Co. was converting their
inventory records to an automated (they didn't do "computers" then) system.
Among my duties as a summer student trainee was to copy data from those
dull, yellow inventory cards to forms from which keypunch would be done.

    The chap in charge of the project told us that they would run the
manual and the automated systems in parallel for one full year before
abandoning the manual system.  These folks had a very healthy respect for
"the unknown" and sought to minimize their risks.

        Have we forgotten what we've learned?  In something so important as
an election, why are the votes not counted "manually" as well as by the "new
system" until all the bugs are worked out of things such as Lotus scripts?
It's such a simple idea that we assume it must have occurred to our
political leaders and the Boards of Elections when, in fact, it probably has
not.
                                           _Brint


Casiers numeriques! (Digital lockers!)

Marc Vilain <MVILAIN@G.BBN.COM>
Thu 17 Nov 88 14:41:49-EST
While in Paris last week, I stopped at the luggage check of the Gare du Nord
train station to drop off a suitcase.  To my great surprise, the familiar
clunky keyed lockers had been replaced by their gleaming high-tech equivalent.
The French, who so enthusiastically brought us Minitel, now have computerized
luggage lockers.

The basic unit is a block of six lockers which are shut by some kind of servo
latch.  The six lockers share a little keyboard and LED display.  It works like
this: You put your baggage into a free locker, close the door, and drop FF 15
(= $US 3) into a coin slot.  The machine latches your locker door and prints
out a little ticket indicating the identification number of your locker and a
5-digit password, apparently generated at random.  When you want to retrieve
your bags, you key in the password and, voila, the locker door opens up.

The locker system guards fairly well against the most obvious security flaw: a
nefarious individual reading the code on the ticket as it is printed out.  The
ticket is actually printed on a double strip of paper.  The writing only
appears on the inner strip, and you have to peel away the outer one to read the
password.

Throughout my stay in Paris, I wondered how the lockers guarded against a brute
force attack on their password.  I found out as I was retrieving my bags.  Near
me a group of clearly puzzled passengers were trying to collect their own
belongings, and were typing away on the keyboard of their locker.  Suddenly, a
siren sounded from the bowels of the locker, alerting the attendant in charge
of the luggage check — the befuddled passengers must have typed one password
too many.
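
To make the mechanism concrete, here is a minimal sketch of how such a locker
block might work: a random 5-digit code is assigned at deposit time, and the
shared keypad tolerates only a few bad codes before alerting the attendant.
This is written in present-day Python with invented names and an assumed
attempt limit; the actual Gare du Nord implementation is, of course, unknown.

  import secrets

  CODE_LENGTH = 5
  MAX_ATTEMPTS = 3   # assumed limit; the real number of tries allowed is not stated

  class LockerBlock:
      """Hypothetical model of one six-locker block sharing a keypad."""

      def __init__(self, n_lockers=6):
          self.codes = {}                          # locker id -> assigned 5-digit code
          self.free = set(range(1, n_lockers + 1))
          self.failed = 0                          # consecutive bad codes on this keypad

      def deposit(self):
          """Latch a free locker; return (locker_id, code) as printed on the ticket."""
          locker = self.free.pop()                 # raises KeyError if no locker is free
          code = f"{secrets.randbelow(10 ** CODE_LENGTH):0{CODE_LENGTH}d}"
          self.codes[locker] = code
          return locker, code

      def retrieve(self, code):
          """Open the matching locker, or raise the alarm after repeated bad codes."""
          match = next((l for l, c in self.codes.items() if c == code), None)
          if match is not None:
              del self.codes[match]
              self.free.add(match)
              self.failed = 0
              return f"locker {match} open"
          self.failed += 1
          if self.failed >= MAX_ATTEMPTS:
              return "ALARM: attendant alerted"
          return "unknown code"

With 100,000 possible codes and only a handful of tries before the siren
sounds, a guessing attack succeeds with odds on the order of a few in a
hundred thousand, which is presumably why an attendant alert was judged
sufficient.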

Befuddlement, unfortunately, seemed the general response of newcomers to these
clever machines.  I used these lockers several times during my stay, and I
never failed to see perplexed faces staring at the instructions.  Given that
France seems to have pushed computer literacy in a big way recently, one may
view the success of the enterprise with some degree of pessimism.  But perhaps
I should be more charitable — I too was confused at first. 


Re: Toll Road information collection

David Phillip Oster <oster@dewey.soe.Berkeley.EDU>
17 Nov 88 13:04:08 GMT
Many toll roads in the U.S. give you a ticket at the spot you enter the toll
road, and collect the ticket when you leave.  The tickets are stamped with
their origin, so the distance driven can be computed. So far so good.

Is it fair to also stamp the tickets with the time of issue, so if the
distance traveled divided by the time elapsed is greater than the average
speed limit the toll taker can hand you a speeding ticket at the same time?
An appropriate computer would help the toll taker in this task.

Massachusetts imposes drastically higher fines the faster you go. The above
system, however, can only conclude that your average speed was above the
legal limit.

If there is a monitoring system measuring when your car crosses each sensor,
every ten miles say, then the system can draw conclusions about your speed
on the inter-sensor segments of your trip. Segments at 80 mph can be fined
at a much greater rate than those at 60.
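
As a rough illustration of the arithmetic, the check reduces to dividing the
distance between consecutive timestamped checkpoints by the elapsed time and
comparing the result with the posted limit.  The sketch below is hypothetical
(invented mileposts, timestamps, and an assumed 55-mph limit), not a
description of any real toll authority's system.

  from datetime import datetime

  SPEED_LIMIT_MPH = 55   # assumed posted limit

  # Hypothetical checkpoint log for one car: (milepost, timestamp)
  log = [
      (0,  datetime(1988, 11, 18, 12, 0)),
      (10, datetime(1988, 11, 18, 12, 7)),    # 10 mi in 7 min  -> about 85.7 mph
      (20, datetime(1988, 11, 18, 12, 18)),   # 10 mi in 11 min -> about 54.5 mph
  ]

  for (m0, t0), (m1, t1) in zip(log, log[1:]):
      hours = (t1 - t0).total_seconds() / 3600.0
      mph = (m1 - m0) / hours
      flag = "over the limit" if mph > SPEED_LIMIT_MPH else "ok"
      print(f"segment {m0}-{m1} mi: {mph:.1f} mph ({flag})")

A ticket stamped only at entry and exit is the degenerate two-entry case: it
can show that the average over the whole trip exceeded the limit, but not
where or by how much, which is why graduated fines only become possible with
per-segment sensors.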

Do people have a right to violate the speed laws? If not, should the state
be making investments in speeder catching gear so long as the "take" is more
than the capital cost?

A related question: Where can I buy a radar gun, and how much do they
typically cost? I want to aim one at speeders to make their radar detectors
sound off.

--- David Phillip Oster            --When you asked me to live in sin with you
Arpa: oster@dewey.soe.berkeley.edu --I didn't know you meant sloth.
Uucp: {uwvax,decvax}!ucbvax!oster%dewey.soe.berkeley.edu


Risks of non-technologists' reactions to technological failures

<mccall@skvax2.csc.ti.com>
Fri, 18 Nov 88 17:46:03 CST
There seems to be a genuine risk in public perceptions of complex and
little-understood technologies: when the inevitable failures occur, there is
an unthinking overreaction, based, I suppose, upon disappointed expectations
of perfection in technology.

In the wake of the inevitable failures involving a technology, those who
don't understand the issues are prone to call for sweeping changes to 
'correct the problems'.  This is similar to outcries against 'electric
jets' in the wake of the Airbus crash in France and against NASA after
the Challenger incident (although in my opinion, NASA was more than ripe
for it).

Those who call for the most drastic measures with regard to issues they
know nothing about are often the most adamant in adhering to their 
belief that the 'elite' are really conspiring to cover things up.  

For instance, with regard to the article that follows, when I attempted 
to correct some of the factual errors I found myself subjected to public
abuse.  Pointing out errors in the usage of the words 'virus' and 'hacker'
earned comments about refusing to write to pander to "the incestuous coterie
of computer insiders".  My comments that the perpetrator of this act is really
the one to blame, and that laws about this sort of thing need to be enforced
if we're ever going to stop such acts rather than simply regarding them as
'pranks', evoked phrases about "the neo-fascists of the computing world" and
about how enforcing laws isn't the solution.

When a reputable journalist reacts in this way, what solutions are there to
the risks of people misunderstanding the technology and the events associated
with it?

I wonder how many articles like the following are appearing in various
places around the country in the wake of the Arpanet worm?  The fact that
it's by someone who describes himself as a "technology writer" and 
"computerist" and who is involved in reputable journalism only makes 
the point more strongly.

[Article and the author's online profile follow.]


==============================================================================
| Fred McCall  (mccall@skvax1.ti.com) | My boss doesn't agree with anything  |
| Military Computer Systems           | I say, so I don't think the company  |
| Defense Systems & Electronics Group | does, either.  That must mean I'm    |
| Texas Instruments, Inc.             | stuck with any opinions stated here. |
==============================================================================

================================== ARTICLE ===================================

AL FASOLDT

Technology writer (syndicated newspaper columnist) and audio writer (Fanfare
Magazine), newspaper editor in Syracuse, NY (the daily Herald-Journal),
poet, bicyclist, computerist who loves simple programming; a fan of the Atari
ST and no fan at all of MS-DOS computers; 2 grown children.


1 (of 7) AL FASOLDT Nov. 14, 1988 at 20:48 Eastern (4846 characters)

Let's start things off with some thoughts on who is really responsible here.

This is an article I wrote for distribution this coming week.

This can be reproduced in electronic form as long as the text is not altered
and this note remains on top. Distributed by the Technofile BBS.

Publication date: Nov. 20, 1988

By Al Fasoldt

Copyright (C) 1988, The Herald Company, Syracuse, New York


There's an untold story in the furor over the electronic virus that infected
6,000 mainframe computers across the country earlier this month.

Left out of the many accounts of the prank pulled by a Cornell graduate
student is something that could be the single most important issue of computer
networking in the next decade.

It is put most simply in the form of a question: Who is in charge of our
mainframe computer networks?

In more complete terms, it can be stated this way:  Are we placing too much
trust in the systems managers who run our nation's medium- and large-size
computer systems?

I am posing this question for a practical reason, not a theoretical one. Lost
in the furor over the mass electronic break-in is the fact that it could have
been prevented - if the people in charge of the computers had been doing their
job.

The hacker, Robert Morris, exploited a weakness in the operating system of
these computer systems. The weakness was known to the operating system's
designers, and the company that supplies the operating system had long ago sent
notices to all its customers explaining how to patch the operating system to
fix the weakness.

All these thousands of systems managers had to do was read their mail.

Most of them didn't. Most of them ignored the plea from the operating system's
designers to make the fix before someone broke into these computers through
this weak area, called the "back door."

There is no other word for this than incompetence. Those who think it's
unlikely that most mainframe computer systems managers are incompetent - at
least in this one area, if in no other - have their heads in the sand.

Think of it in terms of human viruses. If doctors throughout the country were
warned of a potentially dangerous weakness in a major drug and most of them did
nothing about it, how forgiving would we be? We would demand that the medical
profession act immediately to remove those doctors who don't have enough sense
to protect the public.

Are we going to do the same thing in regard to our systems managers?

I'm a realist. I know what the answer is. They'll go on protecting their jobs
by making up excuses. They'll tell the people who hired them that the entire
subject is too technical to explain, but they have the situation well in hand.

Bull. Every systems manager who ignored the warnings on the flaws in Unix, the
operating system that Robert Morris sailed right through, should be fired.

It's as simple as that. It's time that we treated networked computer systems
seriously. It's time that we stopped accepting the technobabble from these
incompetents as something that no one else can comprehend. The rest of us can
comprehend it just fine, thank you.

If you agree, mail a copy of this column to your boss. Send a copy to the
person who hires and fires the systems manager in your company or university.

Send 'em a message before another Robert Morris sends them something else.


*    *    *

How can computers catch a virus?

It's easy.

Keep in mind that a computer works quite a bit like a human being. Both need a
central processor to run properly - a CPU chip in one case and a brain and
central nervous system in the other. And both need the correct programs to work
right - an operating system in the computer and an autonomous set of
instructions to the organs of the body in the human.

Each one can get sick when a virus works its way into the system and throws it
off stride. In both the computer and the human, the virus hides itself and
alters the day-to-day operations of its host.

In its mildest form, the virus merely slows everything down. The computer
responds sluggishly, and the human feels weak and rundown. At its worst, the
virus can make either type of host so sick that it may not recover without
intensive care.

So far, what we have been describing also characterizes a simpler form of
intruder, called a worm. The difference between a worm and a virus is that
worms don't create new copies of themselves, but viruses do; in fact, the
strongest viruses in computers and humans can create new clones of themselves
many times a minute.

The major conceptual difference is that human viruses are actual creatures,
and they can sometimes be seen under a microscope. But computer viruses are
formless groups of numbers written as a program. This may make them seem less
harmful than human viruses, but it would be a serious mistake for us to treat
them that way.
