The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 14 Issue 12

Monday 30 November 1992


o Laser Printer Sucks up Cat
Douglas M. Kavner
o British Telecom find themselves being a phone pest
David Shepherd
o "The risk is not obvious"
Don Norman
Rex Black
o Re: Name confusions?
Don Norman
Jerry Leichter
o Electronic Banking Risks
Ross Anderson
o Re: How Is Technology Used for Crime ...
o SNL accidentally informs people about risks of caller ID
Sean Eric Fagan
o Re: Nuclear plant risks
Victor Yodaiken
o Info on RISKS (comp.risks)

Laser Printer Sucks up Cat

Douglas M. Kavner <>
Mon, 30 Nov 92 12:51:19 PST
Danger from personal computers?  Most people think of electromagnetic fields
or getting zapped while monkeying around inside the box.  Nothing immediately
threatening could happen while just printing a spreadsheet.  Right?

That's what I thought until last week when my wife was severely bitten by our
kitten as it was hanging in mid-air by the tip of its tail.  It all started so
innocently.  Our 8-month-old kitten likes to lie on top of our Apple Personal
LaserWriter LS.  We have tried to get him off, but he keeps getting back up on
it.  He must like the hum.  My wife was printing a few pages in the
background.  While she was talking on the phone, there suddenly was a shriek
from the kitten.  The printer was only about 2 feet away from her, luckily
turned the opposite direction.  The kitten was sprawled stiff on top of the
printer, like he had been stuffed.  We just had him declawed, but his teeth
were grabbing at anything in sight, including my wife's arm as she tried to
turn off the printer.  The party on the other end of the phone thought that
both the kitten and my wife were being murdered.

After a few deep bites, the printer was off, but the tail was still stuck in
the top roller that ejects the paper from the printer.  Apparently, the hair
on the tip of his tail had gotten inside the roller and was sucked in as the
paper was being fed out.  While my wife was getting a towel to prevent further
injury, the kitten jumped off the side of the printer.  The top of the desk is
slightly waxed and the printer nearly slid off.  It would have landed on top
of him.  Can you imagine how hard it is to figure out how to open a printer
under these conditions?  Before she got the towel around him, the kitten took
a few more deep bites out of my wife's leg through her bluejeans!  After what
must have seemed like an eternity, my wife got the printer open and freed the
kitten.

A $27 trip to the vet informed us that we had a really lucky kitten.  If he had
been a little older and heavier, the tail would have separated and required
amputation.  If he still had claws, my wife would have had to have been
stitched back together.  What if it had been a child's long hair?

So Apple, how about a Kitty Guard?  Unfortunately, cats don't read the generic
warnings that came with the printer.  I really like the quality and value of
the printer.  How much extra would I pay for more safety?  At least $27.  I
knew I was cutting corners when I bought the printer since it did not include
PostScript, but I really didn't expect this.

Several other companies also use the same type of printer case.  They all have
a max speed of 4 pages/minute and a cut-out in the top for the paper to
reverse stack or innocent kitties to take a nap.  Some have different paper
feed mechanisms, so the eject roller may also vary.

In case you were wondering, the kitten has been avoiding the printer the
last few days, but was seen standing on it once while it was off.

Doug Kavner, Hughes Aircraft Company, P.O. Box 3310, Fullerton, CA 92634  (714) 732-3682

   [Also posted to comp.sys.mac.hardware .  This version edited by PGN,
   who notes that there is a risk for children's fingers as well.
   Perhaps the sportier models should have scroll-bars.]

British Telecom find themselves being a phone pest

David Shepherd <>
Mon, 30 Nov 1992 11:10:37 +0000 (GMT)
A short news article in a weekend newspaper told how a woman was woken by a
mysterious phone call at 4:30am every day.  She reported it to British Telecom
who monitored the line for several months to track down the phone pest ... and
eventually discovered that the calls were due to a programming error in one of
their own test computers!

david shepherd: or    tel: 0454-616616 x 625
                inmos ltd, 1000 aztec west, almondsbury, bristol, bs12 4sq

Editorial: "The risk is not obvious"

Don Norman <>
Mon, 30 Nov 1992 10:12:24 -0800
This is an editorial comment against the tendency among Risks contributors
to take a rather simplistic view of things.

I assume that Risks readers shudder when they read a description of some new
technological advance that ends with the phrase "the benefits are obvious."
You should.  But you should shudder equally at the complementary phrase "the
risks are obvious."  (Sometimes this seems to be said only in order to justify
submission of an article and get it past the eagle eyes of our esteemed
moderator. It is as if the person says, "I came across the following cute
story I wish to share. Why Risks?  Oh, well, umm, the risks are obvious."  Not
to me.)

Not only are the risks not always obvious to me, but I worry about the risk
of believing that complex things can be judged along simple,
uni-dimensional values. The risk of the alternatives, including that of not
doing anything, must also be weighed (here I simply repeat Brint Cooper's
plea that people who submit such claims should do a more thorough analysis)

Life is complex.  Life presents us with a series of tradeoffs: each benefit
comes along with some risks.  And many risks come along with some benefits.
We are doing society no favors if we simply emphasize the risks instead of
a more careful -- and more difficult -- analysis of the tradeoffs, both
good and bad.  You might still conclude that the item in question is dangerous
and immoral: the careful analysis will then strengthen the claim.

There was a similar over-simplistic view of things in the recent discussion of
the inability of people to perceive the correct odds for low-probability
risks.  Readers rushed in with well-intended but simplistic solutions.  In
fact, people -- most people, including you, very sophisticated Risks readers --
cannot properly judge these low-probability events.  This is well known in the
professional field that deals with these events -- even those professionals
have problems.  Human psychology simply cannot match probabilities in the
10^-3 to 10^-6 range with everyday experiences. Not only that, but it is well
known in cognitive science that human memory and decision making are biased
toward particular events and does not do a good job of matching real
probabilities. In fact, it is probably a counter-productive evolutionary
strategy to do so.

Describing one unlikely event as being of similar, greater, or lesser risk than
some other low-probability event (whose risks we similarly cannot understand)
helps, but does not solve the difficulty.

Everyone is an expert in folk psychology, for everyone has observed themselves
-- and others -- for their entire lifetime. Alas, folk psychology often has
little to do with real psychology.  Risks at times seems to degenerate into a
field day for amateur psychologists, where professionals in one discipline --
who understand that it took years for them to reach that level of
sophistication in their chosen field -- ignore that a similar level of
commitment and level of expertise is required to make professional judgments
in the human sciences.

Sorry for the editorial.  I like Risks, but at times I wish it had a higher
level of professional standards. [I make the above statements wearing several
hats: professional debunker of technology; professional praiser of technology;
Professor of Psychology and former chair of a psych department; former member of
a team doing nuclear power plant probabilistic risk assessments (I was the
expert on human reliability -- the result of this exercise was to completely
distrust all such exercises, while admitting that I couldn't think of any
better method).]

Don Norman
 Cognitive Science; University of California, San Diego; La Jolla, CA 92093
  Internet:     Bitnet: dnorman@ucsd     AppleLink: dnorman
After January 1, 1993: Apple Computer, Cupertino, CA

Re: Obvious Risks? (Cooper in RISKS-14.10 on Mestad in RISKS-14.06)

Rex Black <>
Mon, 30 Nov 92 11:41:34 CST
>    "The RISKS seem obvious enough to me..."
> I grant the risk.  Again, however, I make a plea for identifying the risk of
> NOT doing something like this...

I do not believe we should talk or think in terms of all-or-nothing
propositions on this issue.  First, a computerized driver-advisory system is a
very different kettle of fish than a computer-driven car, since the advisory
system is passive and cannot through its actions kill anyone.  Second,
computer-driven cars would pose a number of risks to the public socially in
terms of law enforcement and other Big Brother problems.  I do not believe
that a computer-driven car solves many problems that a computer advisory
system could not, and it creates a whole slew of new problems that a passive
system can't.

How can we deal with name confusions?

Don Norman <>
Mon, 30 Nov 1992 10:12:47 -0800
This is a request for solutions.  A recent Risks article pointed out the
confusion in databases when two people have exactly the same name and birthdate.
In this case, the contributor stated:

"The scary part is the quote attributed to Lt. Gerard Blouin of the Montreal
 police: 'it's up to him to change his name somehow. If he can modify his name,
 just by adding a middle initial or something, it would help him.' "

Well, in this case, I sympathize with the police.  Let me ask all of you folks
-- how should society deal with this? I believe myself to be one of the most
fervent publicists of the notion that technology must adapt to people, but
there are some real problems to be faced that technology cannot solve. Names
are not just for the benefit of the individual: they are also to benefit
society. Asking people to select unique names doesn't seem all that
unreasonable.

Names, you realize, are a technological invention to make it easy to identify
people uniquely. At first, people only had single names. This didn't work
beyond the village level: hence longer descriptions that eventually became
standardized as last names (well, family names, which in some cultures are
written first). Middle initials were part of the attempt to fully describe the
lineage, and they also helped discriminate among otherwise similar names.

Today, with world-wide commerce, full names are not enough. So we now use
longer descriptions to identify people: sometimes full name plus birthdate; or
full name plus parents' full names plus birthdate and birthplace and
.........? The problem with all these schemes is that they are arbitrary.
Without standards, they simply lead to chaos. And without standards there are
bound to be more confusions. But standards are easy to abuse, to turn into
national identification numbers for evil purposes. Some countries use unique
identification numbers, assigned at birth (in the US, the social security
number is more and more serving this purpose, regardless of the legality of
such usage). Many rebel against some universal identification number -- and
for good reason -- but there are cases where they would really come in handy,
cases where they are really essential. What are the alternatives?

In the case before us, asking each person to get a middle name is a temporary
fix. It won't always work -- I happen to know of at least one other person
with the same name as I have -- including middle name (although not the same
birthdate).  With billions of people in the world, amazing coincidences will
happen, even ones that have a probability of "one chance in a billion."
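The arithmetic behind that remark can be sketched in a few lines of Python.
The population figure here is a rough assumption for illustration, not a
number taken from any of the articles above:

```python
# A back-of-the-envelope check: with billions of independent trials,
# "one chance in a billion" events are expected to occur several times.

population = 5_000_000_000   # rough world population, an assumed figure
p_event = 1e-9               # "one chance in a billion" per person

expected = population * p_event
print(expected)              # -> 5.0 expected occurrences
```

So an event that is vanishingly improbable for any given person is close to
a certainty somewhere in the world.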

Would a world-wide registry for names work? Suppose that before I can assign a
legal name to a newborn child (or change the name I already have), I must
check it out with the registry. The risks of this are obvious (isn't that a
wonderful phrase?), but so are some of the benefits. It can even work
smoothly: in California, I can assign myself any license plate I want as long
as it is less than 7 characters, not confusable with others, and not on the
socially-prohibited list. I think of a possible name, the name gets typed into
a computer, and I am told immediately whether or not I may have it. Several
tries later, I have my choice. Suppose the database only contained names, no
other personal information. Perhaps a successful database entry would give me
a certificate or other authorization that I had permission to use that unique
name, but suppose that was it -- the database wouldn't even know who had made
the request -- just the fact that the request had been made (it would
obviously know the location of the terminal as well). Would such a scheme work
for names?
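The check-and-claim registry sketched above can be made concrete in a few
lines of Python. This is a minimal sketch under the assumptions stated in the
text (the registry stores only the names themselves, nothing about who asked);
the class and method names are illustrative inventions, not any real system:

```python
# A minimal sketch of the proposed name registry: it stores only the names
# already in use, answers "is this name free?", and grants a claim without
# recording the identity of the requester.

class NameRegistry:
    def __init__(self):
        self._claimed = set()   # the only state kept: names already claimed

    def is_available(self, name):
        """Immediate yes/no answer, like the California plate lookup."""
        return name.lower() not in self._claimed

    def claim(self, name):
        """Claim a name if free; True on success (the 'certificate')."""
        if not self.is_available(name):
            return False
        self._claimed.add(name.lower())
        return True

registry = NameRegistry()
assert registry.claim("Steven Reid")        # the first claimant succeeds
assert not registry.claim("Steven Reid")    # the second must try again ...
assert registry.claim("Steven A. Reid")     # ... and a middle initial resolves it
```

Like the license-plate system, a rejected request costs the applicant only
another try; the registry itself learns nothing but the fact that a request
was made.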

This is a serious request. How can we invent unique identifiers for people that:
 1. Make it easy to select a name
 2. Work for an entire country and potentially scale to the entire world
 3. Do not violate civil liberties
 4. Do not make it possible for others to misuse the system

In other words, how do we get the benefits and avoid the risks?

Don Norman
 Cognitive Science; University of California, San Diego; La Jolla, CA 92093
  Internet:     Bitnet: dnorman@ucsd     AppleLink: dnorman
After January 1, 1993: Apple Computer, Cupertino, CA

Name confusion is problem confusion

Jerry Leichter <>
Mon, 30 Nov 92 17:20:58 EDT
In a recent RISKS, Stanley Chow reports on yet another case of "name
confusion": One Steven Reid of Montreal has been repeatedly confused with
another person of the same name, with the same birthday, living in the same
city.  He describes as "scary" a quote attributed to a police officer
suggesting that it was up to Mr. Reid to "change his name somehow", say by
adding a middle initial.

The problem of name confusion is neither new, nor related to computers, and I
for one find nothing scary or in any way troubling in the police officer's
suggestion.  Any society has to have a way to identify individuals.  At one
time, when the scale of society was small, a single name, plus perhaps a city
of origin, or a parent's name, or a job name, was enough identification.
Even when it wasn't, most transactions were of a personal nature, and if I
personally knew both "John of Dullville"s, it really made no difference
to me that they had the same name.

Later, as the scale of society grew larger, the ambiguities became more of a
problem.  We began to use two names, initially just as a conventional form of
the older identifications by parent (in names like "Johnson") or by social
role (names like "Baker" and "Smith"); later simply as a link to a family within
which one could probably assume that first names would not be re-used too
often, at least within a locale and generation.

Today, the scale of society is global and "first and last name" has long
become useless as a unique identification.  There are not all THAT many
additional natural identifying features we can use.  Adding the full birth
date is actually quite good; sometimes mother's maiden name is useful.  But
where can we go beyond that?  Use the address, hearkening back to the old
"John of Dullville"?  In today's mobile society, that's not a very useful
identification marker.  Should Mr. Reid perhaps identify himself as "Steven
Reid the taxi driver" (or whatever he is)?  People change jobs much more than
they once did, too.

Computer technology has helped us create and maintain a very large-scale
society, in which we long ago stopped relying on personal contact as a
reliable means of identification - we can't possibly personally know all
the people we will have to interact with.  The ambiguity of names is a result
of that social change, not of computers; it would appear in exactly the same
form in manual databases.  Just let them be large enough so that their users
have no personal knowledge of the people described, a threshold that was
probably crossed a hundred or more years ago.

There's an easy solution to the name ambiguity problem: Just assign everyone a
unique id number.  This is a trivial thing to do, with or without computers.
We could even make the id "number" be a pronounceable sequence of letters, or
of words.  Hell, we could even use first and last names, just requiring a check
of the database to make sure they are unique.

Many organizations have used this solution for years, with great success,
ranging from the military of any country you want to name to, say, any large
health care system, private or government-run.  In the US and Canada, we have
for various cogent reasons chosen not to use this solution, at least not for
a wide variety of interactions we have with government:  When identifying
ourselves to the police, we expect to be able to use our names, not our
"national ID numbers".

Any choice has costs.  The fact that one does not like the alternative of a
national ID number does not make the costs of using our traditional naming
system go away.  At one time, there was a significant monetary cost in keeping
and looking up records under alphabetic names.  The power of today's computer
systems makes that irrelevant - but the underlying ambiguity hasn't gone away,
and WON'T go away.  As long as it is there, as long as "Steven Reid, born on
xx/yy/zzzz" is unrealistically expected to uniquely identify an individual,
these problems will continue to arise.  No solution can possibly exist without
Mr. Reid's cooperation: If he stands by his insistence that "Steven Reid, born
on xx/yy/zzzz" is all the identifying information he will give, he cannot
expect to be distinguished from the other Mr. Reid who just as adamantly
insists on his right to identify himself in the same way.

Living in a society has both benefits and costs.  Since Mr. Reid, as a member
of society, has chosen the benefits of a police record system that does not
require us to provide our national ID number, or perhaps even a set of finger-
prints, on demand, he will necessarily bear the cost of somehow distinguishing
himself from his doppelganger.  No one else can possibly do it for him.


Electronic Banking Risks

Ross Anderson <>
Mon, 30 Nov 92 14:46:31 GMT
The Sunday Times (London) yesterday printed a piece about how easy it was to
get hold of people's bank and credit card statements.

They paid 200 pounds a time to private detectives for personal dossiers on
cabinet ministers, including their addresses and private telephone numbers,
and their last few months' transaction details.

This is highly security sensitive information: if you are an IRA sympathiser
and want to blow up the UK minister of defence, it is quite useful to know
what his favourite restaurant is.

With a bit of luck, the government will now take its own legislation
seriously. I understand it is an offence for the various banks to make client
information available to unauthorised persons in this way, and indeed the
negligence of the UK banks about computer security is well known.

Essentially, by making account enquiry facilities available to tens of
thousands of low-level staff, the banks make it virtually certain that there
will be at least one bent staff member who has access and who is prepared to
sell the information on to private detective agencies.

In France, on the other hand, at least one bank I know of has a system which
rings a silent alarm whenever a staff member makes an enquiry about an
account held at another branch. Anyone who started selling the database would
be caught quickly.
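The rule that bank applies is simple enough to sketch in a few lines of
Python. The record layout and field names below are assumptions for
illustration, not details from the article:

```python
# A sketch of the silent-alarm rule described above: flag every staff
# enquiry made against an account held at a different branch.

def silent_alarms(enquiry_log):
    """Return the enquiries that should trigger the silent alarm."""
    return [e for e in enquiry_log
            if e["staff_branch"] != e["account_branch"]]

log = [
    {"staff": "clerk-1", "staff_branch": "Lyon", "account_branch": "Lyon"},
    {"staff": "clerk-2", "staff_branch": "Lyon", "account_branch": "Paris"},
]
assert silent_alarms(log) == [log[1]]   # only the cross-branch enquiry alarms
```

A clerk trawling the whole customer database would hit accounts at other
branches almost immediately, which is why such a simple rule catches the
abuse quickly.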

In Britain, meanwhile, they even have the cheek to charge you if you
draw funds at another branch.

Not everybody in government was unaware of this abuse: it was mentioned in
the newspaper article that the head of MI6, Sir Colin McColl, and the head of
MI5, Stella Rimington, had taken measures to ensure that their own bank
accounts could not be read (maybe they bank in France).

The abuse was also well known here at Cambridge: a speaker described it at
our fraud seminar about a month ago, before it was even a story in the press.

What odds will you give me on any of the bank directors going to jail? I'm
sure that if I, rather than the banks, had leaked this sort of information,
I'd have got ten years under the Prevention of Terrorism Act, and another
stretch under the Data Protection Act. Still, there's one law for the rich,
as they say, and another for us poor dogs.

May I suggest that next time you write to your bank manager, you demand that
he gives you a list of all persons (by name) who have access to your account
details? If enough people ask for this, it might make a difference.

Ross Anderson

Re: How Is Technology Used for Crime ... (Sherizen, RISKS-14.11)

Mon, 30 Nov 92 14:39:44 WET
>"Call girls" and obscene callers ...

>Any ideas, reactions to these comments, and suggestions of any social
>historical studies about these issues are welcomed...

These two paragraphs conjoined have suggested a likely area of crime.
Once comms bandwidths improve enough to start sending videophone around
the world, it'll be pretty easy to ship any sort of information.

Now obviously a lot of governments don't want certain sorts of
information to get around. There are the more obvious political sorts of
information which totalitarian governments don't want spread around, but
the same thing applies to more mundane sorts of info like pornography.

There have already been a few press scares about pornography and obscene
GIFs on Usenet. In the UK it's illegal to import pornographic videos or
magazines, where the government's idea of pornography more or less amounts
to having two naked people in the same picture.

Naturally, since this is illegal in the UK, and legal in continental Europe,
folks import these for the high prices you can obtain on illegal items.

At the moment though, importation amounts to actually carrying
magazines, or video masters through customs with the consequent risk of
being searched, caught, and arrested.

However, with higher bandwidth comms in the future, there's no real
reason why they couldn't just mail the digitised files over and produce
the videos (or magazines?) from the files. Since these files could also
be encoded, it'd be damned difficult, if not impossible, to detect.

A mundane, though profitable, sort of crime, and not dissimilar to your
"call girls" example above.

On the plus side, it's gonna be a lot harder for governments to control
the flow of information.


SNL accidentally informs people about risks of caller ID

Sean Eric Fagan <>
Mon, 30 Nov 92 1:46:27 PST
For reasons I still don't fathom, I was watching Saturday Night Live this
week.  It was a repeat from a year or so ago.

One of the skits was a commercial spoof.  A man in an obviously cheap hotel
room dials a number from a phone book, and says, 'Mrs. so and so?  At
such-and-such address?'  The other person, a woman, said yes, and the man
replied, 'You have won a free trip to the Bahamas.  We just need a credit
card number to verify who you are.'  Classic scam, right?

The woman then asks for his phone number, and the guy hesitates.  She then
says, not to worry, I already have it, and presses a button on a little
thing next to her phone, and then reads the guy's phone number back to him.

He hangs up, and then says, "Hm.  She knows my phone number.  I guess I'll
have to kill her now."

Fade to black, and a logo that says "U S FON -- Maybe we're the right
choice," with a voice-over of something like, "We don't have it, so maybe we
are the right choice."

All in all, I think it did a passable job, accidentally, of pointing out a
risk of caller-id.
                      Sean Eric Fagan  sef@kithrup.COM

Re: Nuclear plant risks (Dolan, RISKS-14.11)

Victor Yodaiken <>
Mon, 30 Nov 92 19:10:39 -0500
Brad Dolan <> repeats some commonplaces from nuclear
industry advertising to the effect that the costs of nuclear power have been
vastly inflated by the costs of citizen intervention in licensing.  RISKS is
not the place for a debate on nuclear power economics, but I don't want to let
this dubious claim pass unchallenged. The book "Safety Second" by the Union of
Concerned Scientists presents a strong case for the contrary opinion.

Mr.  Dolan concludes as follows:

>I would like to see a comparison of safety benefits (in terms of expected
>lives saved, property saved, or whatever) resulting from intervenors'
>interventions with the safety detriments which have resulted from increased
>electrical bills.

This is a rather naive plea, in my humble opinion, and technological innocence
is a significant source of risk. Risk assessment is at best a very inexact
science, and there are zero grounds for believing that such a comparison would
illuminate anything more than the presuppositions of the assessor. For a
survey of the complexities involved in risk assessment for nuclear power, see
the Brookhaven/EPRI workshop on "Health and Environmental Risk Assessment"
(Pergamon Press, 1985).
                           Victor Yodaiken
