The RISKS Digest
Volume 5 Issue 22

Monday, 3rd August 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Home of IBM computers succumbs to telephone computer up-down-upgrade
PGN
Re: IRS Sanity Checks
Jerome H. Saltzer
Re: Monkey business (clarification)
PGN
Computer (claustro)phobia
Kent Paul Dolan
Security-induced RISK
Alan Wexelblat
Another ATM story
Jeffrey Mogul
SDI is feasible
Walt Thode
Publicized Risks
Henry Spencer
Info on RISKS (comp.risks)

Home of IBM computers succumbs to telephone computer up-down-upgrade

Peter G. Neumann <Neumann@csl.sri.com>
Mon 3 Aug 87 11:11:10-PDT
New York Telephone's Poughkeepsie-area central offices experienced a
backfired attempt to upgrade the software for the (non-IBM) computers on 18
July 1987 in order to improve service for 50,000 customers in the area.  The
result was that for 21 hours only about one call in three got through for 8
exchanges, according to a New York Telephone spokesman.  Between 12,000 and
14,000
customers were reportedly affected.  The problems were eventually solved,
but the spokesman said the actual cause was still not known.  [Source:
Poughkeepsie Journal, 19 July 1987]

The article was contributed via US Mail by Ronald S. Rosen of Poughkeepsie, 
who noted that despite the public explanation of only one in three calls
making it through, there were some customers (Ron among them) who could not
make ANY CALLS AT ALL.  (Statistics are fine unless it is YOU to whom they
refer.)  PGN


Re: IRS Sanity Checks

Jerome H. Saltzer <Saltzer@ATHENA.MIT.EDU>
Sun, 2 Aug 87 23:40:58 EDT
> Subject: Re: IRS Sanity Checks (RISKS-5.20)     [From RISKS-5.21]
> From: willis@rand-unix.ARPA
> Among other things, it could be a legal misstep to guess what the
> taxpayer intended to write, as opposed to what was actually written ...

It seems to me that second-guessing isn't the real RISKS issue here, but
rather what should the program do if the reasonableness check returns the
answer "unreasonable!"?

For the IRS case, with some hundred million returns per year to process, the
simple answer of "kick it out for human review" could easily generate work
for several million human reviewers.  Where can you find that many reviewers
all of whom are motivated to get to the bottom of the problem?  One way to
find an interested reviewer is to send the problem to the person who filed
the return originally.  Thus the IRS strategy of generating a completely
automated kickout letter to the original filer is probably both more cost-
and procedurally-effective than any alternative one could think of.

Admittedly, it is a little unnerving to receive an automated response from
the IRS asking you to send them an extra $10,000,000.17 because the computer
didn't realize that was a typo on line 11, but once you get over the initial
shock, all you have to do (in principle) is follow the instructions for
filing an amended return and the problem goes away.  (Horror stories about
IRS agents following up with inappropriate actions don't alter the
appropriateness of this strategy; they instead illustrate a misfeature in a
different part of the system.)
                    Jerry
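    [The strategy Jerry describes can be caricatured in a few lines.  This
    is an invented illustration, not IRS procedure; the field name and the
    plausibility bounds are made up.  The point is that the program never
    guesses what the taxpayer meant -- an implausible value triggers an
    automated kickout letter back to the filer, the one reviewer with a
    motive to get it right.

```python
# Hypothetical sketch of a "reasonableness check" on one line of a return.
# Bounds and names are invented for illustration only.

def is_reasonable(value, low=0, high=1_000_000):
    """True if the value is plausible, False if it should be kicked out."""
    return low <= value <= high

def process_return(filer_id, reported_tax):
    if is_reasonable(reported_tax):
        return ("accept", filer_id)
    # Don't second-guess the typo -- generate an automated kickout
    # letter asking the original filer to amend.
    return ("kickout-letter", filer_id)

print(process_return("A123", 10_000_000.17))  # → ('kickout-letter', 'A123')
```
    PGN]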

    [A missing reasonableness check bit me today.  One of my multiple
    archive files for RISKS volume 5 had vanished: each issue had simply
    disappeared into a black hole.  The file was totally invisible, but I
    noticed an unaccountable directory overflow.  When I created a new
    version, TOPS-20 assigned it a NEW version number with protection "P1"
    instead of "P775252".  Mark Lottor suggests that in recovering from a recent
    system crash no one had run the disk reasonableness check...  PGN]


Re: Monkey business (clarification) (RISKS 5.21)

Peter G. Neumann <Neumann@csl.sri.com>
Mon 3 Aug 87 10:35:34-PDT
The item in RISKS-5.21 on the macaque-eyed 747 takeover was unfortunately
less than precise, possibly leaving the impression that the monkey
commandeered the plane in flight.  The monkey was at large in the cabin
toward the end of the flight.  After landing the pilot and copilot remained
in the cockpit until the animal control officer thought he had the monkey
cornered at the rear of the plane.  After the pilot and copilot left, the
monkey then entered the cockpit and was captured while sitting on the
instrument panel between the pilot and copilot seats.  [RISKS item: did the
monkey alter any of the control settings?  Presumably the next take-off
checkout would have spotted it...]

Sorry if my attempt to be brief came off half-(ma)caqued.  PGN.


Computer (claustro)phobia

Kent Paul Dolan <kent@xanth.cs.odu.edu>
Sun, 2 Aug 87 13:51:29 EDT
Once upon a time, we had computers carefully confined in their own
circumscribed environments, hidden away in air conditioned rooms,
caged like the "extinct in the wild" species at zoos, and the earth
was safe for humankind.

Now, I look around me: a Commodore Pet, an Apple II+, an Amiga 1000,
and a Dimension 68000 occupy various horizontal surfaces.  Floor to
ceiling on two sides are shelves of 5.25" disks, 3.5" disks, back
issues of Byte, collections of ACM and IEEE computer journals, manuals
for the various systems, computer science textbooks, the collected
works of ANSI X3H3 for 4.5 years, stray 1/2" mag tapes, old listings.

Have Forum readers considered the risk that, like the prairie by the
pavement, we will simply be crowded out, displaced, inundated, overwhelmed,
buried, by our high-tech toys?
                                           Kent Paul Dolan


Security-induced RISK

Alan Wexelblat <wex@MCC.COM>
Mon, 3 Aug 87 11:49:25 CDT
At our site, we have several computers.  For security reasons, we are
asked to have different passwords on each machine.  In addition, these
machines (may - I'm not sure) keep logs of incorrect userid/password
combinations that are entered.

Now, being a fallible human, I occasionally type the id/password for
machine A while trying to log on to machine B.

It would not occur to me (in advance) that the log of incorrect
combinations should be safeguarded, but imagine if that log fell into
malicious hands.  The attacker would have a list of excellent
possibilities to try out on other machines at the site!

And, to make matters worse, while he was randomly trying combinations
from the log, he would be duplicating a "normal" pattern of errors,
thus being less likely to raise an alarm.

--Alan Wexelblat
UUCP: {seismo, harvard, gatech, pyramid, &c.}!sally!im4u!milano!wex
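
   [A toy illustration of the hazard, with invented names and log format.
   A login log that records failed userid/password pairs verbatim is a
   ready-made password list for anyone who reads it; a safer variant
   records only the userid and a salted hash (or nothing at all) of the
   failed attempt, which is useless for replay against other machines.

```python
# Sketch only: the log formats and function names are hypothetical.
import hashlib
import os

failed_log_unsafe = []   # stores the raw attempted password -- the risk
failed_log_safer = []    # stores only a salted hash, useless for replay

def log_failure_unsafe(userid, attempted_password):
    failed_log_unsafe.append((userid, attempted_password))

def log_failure_safer(userid, attempted_password):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + attempted_password.encode()).hexdigest()
    failed_log_safer.append((userid, digest))

# A user mistakenly types machine A's credentials at machine B:
log_failure_unsafe("wex", "machineA-secret")
log_failure_safer("wex", "machineA-secret")

print(failed_log_unsafe[0])        # machine A's password, in the clear
print(failed_log_safer[0][0])      # userid only; the hash reveals nothing
```
   ]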

   [This is a very old problem, and has been noted here in other connections
   before.  Audit trails are littered (often quite accidentally) with
   sensitive information.  Once again, the tip-of-the-iceberg phenomenon
   is seen.  The deeper you dig in security problems, the more you realize
   that there are always some very serious vulnerabilities...  PGN]


Another ATM story [Sufficiently different to warrant inclusion]

Jeffrey Mogul <mogul@decwrl.DEC.COM>
3 Aug 1987 1832-PDT (Monday)
A friend of mine tried to withdraw money from a Versateller (Bank of
America; she's a BofA customer).  After asking her to re-enter her PIN
several times, it told her to give up.  She knew the number was the right
one (she uses it several times each week) so the next day she went to see a
human employee of the bank.  This person tried to tell her that she had
probably forgotten or misentered her PIN, but had to back down when several
people behind her in line said that they had the same problem.

My first thought was that the problem was with the cards, not the PINs.
Although some automatic tellers (such as the one I normally use) imply that
the card is readable (by welcoming you by name before asking for your PIN),
apparently Versatellers do not.  Still, I would expect them to complain
about an unreadable card before asking for the PIN.

I doubt the problem was with a specific ATM; my friend was using the
one in Palo Alto, but she lives in San Francisco and presumably most of
the other affected customers did not use the Palo Alto machine.

I'm also assuming that the PIN is verified with the central system, not
locally by the ATM, since there was some delay before the ATM complained
about her PIN, during which time it let her specify the transaction she wanted.

Sounds like the BofA system had spontaneously forgotten (or garbled?) a bunch 
of PINs.  This leads to some interesting speculations: does their system
lose other information (balances, for example)?  Has it been compromised?
Is there a "disgruntled employee" at work?  Do banks often forget PINs?
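
    [Mogul's inference can be sketched in a few lines.  Everything here is
    hypothetical -- the card number, PIN, retry limit, and store layout are
    invented -- but it shows why a garbled central PIN record defeats even
    a customer who enters the correct PIN every time, at every ATM.

```python
# Invented sketch of central (host-side) PIN verification.
MAX_TRIES = 3
central_pin_store = {"card-1234": "7791"}   # hypothetical record

def central_verify(card, pin):
    return central_pin_store.get(card) == pin

def atm_session(card, pin, tries=MAX_TRIES):
    for _ in range(tries):            # customer re-enters the same PIN
        if central_verify(card, pin):
            return "ok"
    return "give up"                  # what the machine told the customer

print(atm_session("card-1234", "7791"))   # → ok

# Now suppose the central record is spontaneously garbled:
central_pin_store["card-1234"] = "??91"
print(atm_session("card-1234", "7791"))   # → give up, at every ATM
```
    ]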


SDI is feasible

Walt Thode <thode@nprdc.arpa>
3 August 1987 1103-PDT (Monday)
(From the July 31 _Government Computer News_ (without permission))

         SDI Software is Feasible, AFCEA Report Concludes

In a 200-page report to be released in mid-August, the Armed Forces
Communications and Electronics Association will conclude that
development of the needed hardware and software for the Strategic
Defense Initiative is difficult but attainable.

The study, begun in April 1986 for the Defense Department's Strategic 
Defense Initiative Organization, was carried out by a committee of 
civilian scientists from industry and research institutes.  Five panels 
were set up, to examine processors, software, networks, communications, 
and man/machine interfaces, all under the heading of battle management 
systems and command, control, communications and intelligence systems.

Although the report has yet to be released, its conclusions have been
aired in public by study participants.  Describing the hardware
requirements as "more firmly in hand than the software," the report says
that building the system's architecture around hardware will mitigate
software problems.

Stuart J. Yuill of RJO Enterprises Inc., Lanham, MD, chairman of the
networks panel, said he was impressed by the high quality of the study
teams.  He also noted that conclusions of the study reflected the
"nearly unanimous" view of the experts.

The task of developing the software needed for the SDI has been
described as impossible by some observers, who say that a perfect
defense shield is infeasible, partly because it is untestable.  The
AFCEA report instead suggests that developing effective software will be
possible, even though the requirements are complex.  Thus the system
software needs the most attention, it says.


Publicized Risks

<mnetor!utzoo!henry@uunet.UU.NET>
Sun, 2 Aug 87 22:08:43 EDT
  [This is not particularly computer relevant (and you may omit it if relevance
  is in with you today) — except that the conclusion is worth noting.  But,
  please don't respond to the items that are less than relevant.  Yes, it's my
  fault — I could have omitted Mark Day's precursor as well — except that
  it had a useful comment on a STILL EARLIER message...  Iterated Mumble.  PGN]

> Car wrecks and cigarette smoking kill more people than nuclear plants, sure,
> but the way that they kill people is very different.  Car accidents
> generally don't affect a zone of several miles' diameter, forcing evacuation
> and abandonment of homes...   [Mark Day, RISKS-5.18]

There is also the question of voluntary vs involuntary risks.  However, the
comparisons here are basically apples-vs-oranges.  A much fairer comparison
is to other risks that are involuntary, affect a zone of several miles'
diameter, force evacuation and abandonment, etc.  There are such, and they
get far less attention than nuclear risks.  One is driven to conclude that
the perceived seriousness of risks has much more to do with the amount of
publicity than with the magnitude of the problem.

Some examples:

- There is apparently at least one place in the US where a dam failure would
    probably kill a quarter of a million people.  The probability of dam
    failure is known to be nonzero, and they are much less carefully
    guarded against terrorist attack than nuclear plants.  Do you know
    whether there is one upstream of you?

- The Bhopal disaster probably (I don't have numbers handy) killed more
    people than all nuclear accidents to date, Chernobyl included.  There
    was noise about it at the time, but it's largely forgotten now.  Do
    you know whether there is a plant handling such chemicals within,
    say, ten kilometers of you?  Do you care?

- The largest peacetime evacuation in history had nothing to do with nuclear
    reactors.  Hundreds of thousands of people were evacuated from the
    center of Mississauga (which is essentially a suburb of Toronto) when
    tank cars loaded with chlorine derailed a few years ago.  How many
    rail lines are there within ten kilometers of you?  Do the railroads
    using them observe any restrictions on what cargos they carry on
    those lines?  How frequent are derailments on those lines?  (Usually
    the answers are "several", "no", and "much more common than you
    think".)

People who raise the issue of nuclear wastes should look into the arsenic
content of stack-scrubber sludge from coal-burning plants.  That stuff is
produced in far greater quantities than nuclear wastes, for comparable
power outputs, and arsenic has *no* halflife — it is dangerous *forever*.
Here we have another comparable, arguably rather worse, risk that is largely
ignored in all the uproar about nuclear power.  Why?  Less publicity.

Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry
