The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 12 Issue 21

Saturday 31 August 1991

Contents

o `Risk perception'
Phil Agre
o Flaming makes the mainstream media (again)
Charles Forsythe via Gene Spafford
o Phone Fraud Story a Fraud?
Michael Barnett
o Re: Phone Fraud -- Langley VA
anonymous
o +&*#$
Bob Frankston
o Banks, Credit Cards, and Short Names
Bill Biesty
o YASSNS (Yet Another Social Security Number Story)
S. Peter Loshin
o Re: Programs Pester Public Policy People
Jeffrey Sorensen
o Police tickets & computers in the Netherlands
Ralph Moonen
o Re: Cracker charged in Australia
Gene Spafford
o Senseless Actions Invite Trouble
Charlie Lear
o A Danger Associated with Intelligent Terminals
Douglas Thomson
o Re: Unwarranted equivalence assumptions
Andrew Koenig
o Old School Reports of the Famous
Kernel Mustered via Spaf and Keith Bostic
o Info on RISKS (comp.risks)

`Risk perception'

Phil Agre <pagre@weber.ucsd.edu>
Wed, 28 Aug 91 18:25:46 pdt
I reread the LA Times article that Rodney Hoffman helpfully summarized the
other day, and suddenly I understood something about the peculiar logic behind
the rhetoric of `risk'.  This article described a series of `findings' to
the effect that people are basically irrational about technological risks and
other bureaucratic phenomena (in this case, the medical profession).  I tend
to be suspicious about any theory that treats ordinary people as irrational,
and indeed a close reading of this article reveals both better explanations
of the data and internal incoherencies in the framework within which these
data were reported.  The `findings' summarized by RH, or most of them, are
easily explained if we hypothesize that most people disbelieve the claims
that are made to them about the risks and benefits of new technologies etc
(perhaps because they believe the organizations making such claims to be
driven by profit and prestige and getting promoted rather than by genuine
concerns for public health and safety), and furthermore that people only
believe in risks and benefits they've had the opportunity to evaluate for
themselves.  This particular article was unusual in that this explanation was
given a few lines, though it was quickly dropped and the analysis continued
as before.

The point is important because it helps diagnose some of the hidden agenda
inside the notion of `risk'.  To talk about `levels of risk' and the like
erases the distinction between the experts' assessments of risk and the
assessments that ordinary people are in a position to make.  If ordinary
people make different assessments from the experts, then that calls for some
quasi-biological investigation of `risk perception'.  These investigations
will discover all manner of irrationality and ignorance, which will then
motivate calls for greater efforts to convince people to leave things in the
hands of the experts.  The irrationality ascribed to ordinary people helps to
draw attention from the open contradictions in the research: the conclusion
that ordinary people are unwilling to accept any risk at all is juxtaposed
comfortably with the observation that the same people regularly assume large
risks out on the highway.

The thing is, though, that the experts have a pretty crude understanding of
risk.  The LA Times article and many others of its genre are obsessed with
death statistics.  Levels of risk are routinely equated with the number of
people who die each year from a given cause.  Thus the obsessive interest in
popular assessments of the relative magnitudes of these numbers.  It may well
be that people falsely believe that many more people die in fires than from
drowning, for example, but the question is only interesting once one accepts
several premises.  Thus as well the obsessive interest in people's skills with
word problems from probability theory, which are only germane if you believe
(which most people apparently do not) that it's a responsible procedure to
assume (as risk theorists so often do, if only because it makes the math
simpler) that probabilities are independent unless evidence to the contrary
cannot be ignored.  Somehow the whole framework associated with the concept
of `risk' derails any attempt to critically investigate these premises.

In my opinion this is not an accident.  The whole rhetoric of `risk' started
out as corporate PR.  You probably remember the old oil-company ads (from
Mobil, right?) decrying those people who supposedly called for a `risk-free
society'.  These ads were the laughing-stock of the country, and rightly so.
How times have changed.  Oil companies no longer have to buy quarter page ads
on the NY Times op-ed page to get such stuff into print.  The same ideology,
made into a profession, now shows up as `research' in articles in the LA
Times.  Now we have sophisticated, scientific-sounding ways to ignore the
reasonable insistences of normal people -- on being told the truth, on being
able to find the world intelligible and sane, on being consulted about things
that change their lives, on not being subjected to hazards without their
consent, and generally on being able to participate in collective decisions
about issues of technology and social change -- and remaking them as an
irrational aversion to `risk'.

This is why it's so ironic that many of the same people who use the discourse
of `risk' also complain about expressions such as ``risks to the public in
computers and related systems'' which, we are told, encourage a one-sided
focus on risks without the balancing context of benefits.  Here, surely,
is another instance of the irrational aversion to risk.  Such complaints
are both right and wrong.  Technology has certainly been associated with both
good and bad in the world, and often at the same time.  But it's important
not to take `technology' (or `computers' or `credit databases') as package
deals.  Computer technology is malleable; it can be reshaped endlessly as
important social goals are added to its requirements.  The problem with the
vocabulary of `risks and benefits', as with the vocabulary of `risk', is
that it presupposes the unilateral nature of technology, handed down from
on high, take it or leave it.  But it doesn't have to be that way.  Socially
responsible technology is technology that is developed *with* people, not
just `for' them.  Can the current social organization of technology even
conceive of such a process?

Phil Agre, UCSD


Flaming makes the mainstream media (again, I guess)

Gene Spafford <spaf@cs.purdue.edu>
Thu, 29 Aug 91 07:50:40 EST
Any RISKers read this book?  It sounds like worthwhile reading....

------- Forwarded Message

Date:    Tue, 27 Aug 91 21:11:00 -0600
From:    forsythe@track29.lonestar.org (Charles Forsythe)
Subject: Flaming makes the mainstream media (again, I guess)

FLAME THROWERS: Why the heated bursts on your computer network?  by Doug
Stewart (copied without permission from Omni magazine Sept 1991 issue)

"You are a thin-skinned reactionary jerk," begins the computer message sent
from one highly educated professional to another.  "I will tell you this,
buster, If you were close enough and you called me that, you'd be picking up
your teeth in a heartbeat."  There follows an obscene three-word suggestion in
screaming capital letters.

The writer of the above message, sent over the Byte Information Exchange, was
apparently enraged after a sarcasm he'd sent earlier was misinterpreted as
racist.  In the argot of computers, his response was a "flame" -- a rabid,
abusive, or otherwise overexuberant outburst sent via computer.  In
networking's early days, its advocates promised a wonderful world of pure
mind-to-mind, speed-of-light, electronic conversation.  What networks today
often find instead are brusque putdowns, off-color puns and screenfuls of
anonymous gripes.  The computer seems to be acting as a collective Rorschach
test.  In the privacy of their cubicles, office workers are firing off
spontaneous salvos of overheated prose.

Sara Kiesler, a social psychologist at Carnegie Mellon University, and Lee
Sproull, a Boston University sociologist, have observed that networking can
make otherwise reasonable people act brash.  In studies originally designed to
judge the efficiency of computerized decision-making, they gave small groups of
students a deadline to solve a problem.  Groups either talked together in a
room or communicated via isolated computer terminals.  The face-to-face groups
reported no undue friction.  The computerized sessions frequently broke down
into bickering and name-calling.  In one case, invective escalated into
physical threats.  "We had to stop the experiment and escort the students out
of the building separately," Kiesler recalls.  Kiesler and Sproull documented a
tendency toward flaming on corporate electronic-mail systems as well.  At one
large company, employees cited an average of 33 flames a month over the email
system; comparable outbursts in face-to-face meetings occurred about four times
a month.

Kiesler and Sproull attribute the phenomenon largely to the absence of cues
normally guiding a conversation -- a listener's nod or raised eyebrows.  "With
a computer," Kiesler says, "there's nothing to remind you there are real humans
on the other end of the wire."  Messages become overemphatic -- all caps to
signify a shout; "(smile)" or ":-)", a sideways happy-face, to mean "I'm
kidding."  Anonymity makes flaming worse, she says, by creating the equivalent
of "a tribe of masked and robed individuals."

In real life, what we say is tempered by when and where we say it.  A remark
where lights are low and colleagues tipsy might not be phrased the same under
fluorescent lights on Monday morning.  But computerized messages may be read
days later by hundreds or thousands of readers.  Flaming's ornery side is only
half the picture, says Sproull, who co-authored _Connections: New Ways of
Working in the Networked Organization_ with Kiesler.  "People on networks feel
freer to express more enthusiasm and positive excitement as well as socially
undesirable behavior," she says.  Sproull finds it ironic that computers are
viewed as symbols of cool, impersonal efficiency.  "What is fascinating is the
extent to which they elicit deeply emotional behaviors.  We're not talking
about zeroes and ones.  People reveal their innermost souls or type obscenities
about the boss."  What, she asks, could be more human?


Phone Fraud Story a Fraud? (Re: Phone Fraud, RISKS-12.19)

Michael Barnett <mbarnett@cs.utexas.edu>
Sat, 31 Aug 1991 09:46:29 -0500
Missing from the quotes about the problems WRL has experienced is the
following:

    Even more surprising to experts, they [the thieves] had managed to
    log 129,315 minutes of talking time over one line -- a seemingly
    impossible feat, because it equaled an average of roughly three
    calls going out simultaneously every minute of the day

Later in the article a spokesman for Bell Atlantic is quoted as saying, "There
simply cannot be a single outgoing line that routes multiple calls at once".
Perhaps the problems were not caused by malicious persons at all, but problems
in the billing system. How much easier to blame "low-income immigrants" and
"drug dealers"! (Anonymous "authorities" claim these are the culprits.) What
ever happened to the reports that hackers were responsible for the breakdowns
of the AT&T switches? That made headlines until the true causes were
discovered.
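The "three simultaneous calls" figure is easy to check, assuming (as the
article seems to) that the 129,315 minutes accrued over a single billing
month of roughly 30 days -- the article itself does not state the interval:

```python
# Sanity check on the article's figures. The 30-day billing period is an
# assumption; the article does not say over what span the minutes were logged.
TALK_MINUTES = 129_315
BILLING_DAYS = 30  # assumed billing period

minutes_in_period = BILLING_DAYS * 24 * 60           # 43,200 minutes in a month
avg_simultaneous = TALK_MINUTES / minutes_in_period  # calls in progress, on average

print(f"Average simultaneous calls on one line: {avg_simultaneous:.2f}")  # ~2.99
```

So the quoted average of "roughly three calls going out simultaneously every
minute of the day" is internally consistent with the minutes figure -- which
makes the Bell Atlantic claim that one line cannot route multiple calls all
the more puzzling.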

The real story, I think, which was buried in the article:

        In the past, long-distance carriers bore most of the cost [of phone
    theft], since the thefts were attributed to weaknesses in their
    networks. But now, the phone companies are arguing that the customers
    should be liable for the cost of the calls, because they failed to take
    proper security precautions on their equipment.

Michael Barnett (mbarnett@cs.utexas.edu)


Re: Phone Fraud -- Langley VA (RISKS-12.19)

<[anonymous]>
Thu, 29 Aug 91 12:01:35 XXT
>  The New York City Human Resources Administration lost $529,000 in 1987. And
>  the Secret Service, which investigates such telephone crime, says it is now
>  receiving three to four formal complaints every week, and is adding more
>  telephone specialists.

Ironically enough, one of the PBX's that was breached was located in Langley,
Virginia. This went unnoticed for more than a year (!!). Yes, your very own CIA
wuz cracked. I have no information about the amount of fraudulent calls that
were made, but I am led to believe that it was a substantial amount.


+&*#$

<frankston!Bob_Frankston@world.std.com>
31 Aug 1991 10:17 -0400
No, I'm not cursing.  Just showing a possible New Hampshire license plate.
The problem is even worse since other non-ASCII graphics such as a bell have
been spotted.  I'm curious about how various computer systems deal with NH plates.
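One plausible failure mode: many systems validate plate strings against a
pattern that admits only letters and digits, so a symbol plate is rejected
(or silently mangled) before it ever reaches the database.  The pattern below
is an illustration of the idea, not any DMV's actual rule:

```python
import re

# Hypothetical plate validator of the kind many systems use: uppercase
# letters and digits only. A symbol plate fails outright, and a plate
# containing a BEL control character can't even be typed into most forms.
PLATE_RE = re.compile(r"^[A-Z0-9]{1,7}$")

for plate in ["RISKS1", "+&*#$", "BELL\a"]:
    ok = bool(PLATE_RE.match(plate))
    print(repr(plate), "accepted" if ok else "rejected")
```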


Banks, Credit Cards, and Short Names (Re: RISKS-12.19)

<biesty@ide.com>
Fri, 30 Aug 91 09:19:25 PDT
Benefits of having a short name?

A friend of mine who works at one of the larger Credit Card issuing Banks, once
told me that people whose last names were shorter than *three* letters would
never get pre-approved credit card letters from them. It seems the program that
went through the purchased list of names, addresses and consumer info
considered names of less than three letters to be corrupt data.
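A minimal sketch of the kind of sanity check that could produce this behavior
(the threshold is from the story; the function and the sample names are
assumptions, since the bank's actual program was not published):

```python
# Hypothetical reconstruction of the mailing-list filter described above:
# any surname shorter than three letters is treated as corrupt input and
# the record is silently dropped.
def looks_corrupt(surname: str) -> bool:
    """Flag records whose surname is 'too short' to be real."""
    return len(surname.strip()) < 3

prospects = ["Biesty", "Ng", "Smith"]
mailable = [name for name in prospects if not looks_corrupt(name)]
print(mailable)  # Mr. Ng never gets the pre-approved offer
```

The risk pattern is a familiar one: a validity heuristic quietly conflates
"unusual" with "invalid", and the people it misclassifies never find out.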

Before you think that this is a good thing since you'll be getting less junk
mail, pre-approved credit mailings often offer you a better interest rate or a
deal on the annual fee that you would not get if you applied on a generic
application.
                             Bill Biesty  <biesty@ide.com>


YASSNS (Yet Another Social Security Number Story)

"S. Peter Loshin" <peter@draper.com>
Fri, 30 Aug 1991 17:37:00 EDT
Having recently purchased a used Plymouth, I decided to take over the remainder
of the 7 year/70k mile power train warranty.  One of the items of information
requested was my Social Security number.  When asked why that was necessary,
the credit manager said "Because it's on the form.  If you don't give it to us,
we can't transfer the warranty."

I did not give them my SSN.  The business manager said that while he had NEVER
processed a form without SSN, he didn't know if it _really_ was required.  He
did say a form was once rejected because it was an out-of-state applicant who
did not provide the full 9-digit ZIP code!

He also said he'd call if there were any problems...

Peter Loshin     peter@draper.com    (617) 258-2480


Re: Programs Pester Public Policy People

Jeffrey Sorensen <sorensen@spl.ecse.rpi.edu>
Wed, 28 Aug 91 16:22:13 EDT
The Strings and Springs puzzle brought to you in hi-res laughics:
 ___________________________________________
  |   Z                          |  Z
  |   Z     Spring k=1           |  Z k=1
  |   Z__                        |  Z
  |   |  \                       |  |
  |   |   |                      Z  | L=1
   \__|   |                      Z  |
      Z   |                      Z  |
      Z   | Spring k=1          -----
      Z  /                     |     | w=1/2
    -----                       -----
   |     | Weight w=1/2
    -----

   Before                       After

Two springs with k=1 are tied together with a piece of string L=3/8, and the
bottom spring is hooked to a weight of mass 1/2.  Now, two long strings with L=1
are tied from the roof to the bottom spring and from the bottom of the top
spring to the weight.  These are hanging loose and look like "safety" strings.
When the little string in the middle is cut, the weight actually goes up!!!!!!!

My calculations:

  First case: F = 1/2 = kx = (1)*(1/2), so the weight hangs
    1/2 + 1/2 + 3/8 = 1 3/8 below the ceiling.

  Second case: F = 1/2 = (kx + kx) = 2kx = 2(1)x, so x = 1/4 and the weight
    hangs 1 + 1/4 below the ceiling.
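The arithmetic can be checked directly, ignoring (as the calculation above
does) the natural lengths of the springs:

```python
k = 1.0        # spring constant of each spring
w = 0.5        # weight
string = 3/8   # connecting string between the two springs
safety = 1.0   # length of each "safety" string

# Before the cut: springs in series, each carries the full weight,
# so each stretches by w/k; the connecting string adds its length.
before = w/k + w/k + string      # 1.375 below the ceiling

# After the cut: the springs act in parallel, each carrying half the
# weight; the load path is one safety string plus one stretched spring.
after = safety + (w/2)/k         # 1.25 below the ceiling

print(before, after, "weight rises!" if after < before else "")
```

Cutting the string shortens the hang by 1/8, which is the mechanical analogue
of Braess's paradox: removing a link can improve the whole network.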

A circuit can be designed using resistors for springs and Zener diodes for
strings according to _Science News_.

This diagram was adapted from the diagram _Science News_ adapted from _Nature_

Jeff Sorensen  sorensen@ecse.rpi.edu


Police tickets & computers in the Netherlands

<hvlpa!rmoonen@att.att.com>
Thu, 29 Aug 91 12:42 MDT
>From various newspaper articles in the past couple of weeks in the Netherlands:

The Dutch police are facing serious problems with the paperwork involved with
the enormous number of tickets for traffic violations.  Currently, 4 million
tickets per year are being issued. However, the police can only handle 2.5
million per year. Last year, simplifications were made to the paperwork, and
computers were installed to help the police officers, but it only led to an
increase of 500,000 tickets per year, still leaving a gap of an astonishing 1.5
million tickets.  Chances are, if you get a speeding ticket or parking ticket,
you'll never hear anything of it.

To enable the police to catch up, three methods are being proposed: one, in
which ALL unprocessed tickets will be deleted, giving the police the chance of
starting with a clean slate; another: install more computers to do the
work; and the third and best: forbid police officers from issuing more than a
certain number of tickets per day.

"I am sorry sir, I can not give you a ticket for this violation. I have
reached my quota for today."

And all this, because the installed computers didn't work easily enough
to increase the number of processed tickets by more than 500,000....
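For what it's worth, the figures in the newspaper reports are at least
internally consistent:

```python
# The quoted figures, checked for consistency.
issued = 4_000_000        # tickets written per year
capacity = 2_500_000      # tickets the police can now process per year
computer_gain = 500_000   # extra capacity the new computers added

backlog_per_year = issued - capacity
print(f"Unprocessed tickets per year: {backlog_per_year:,}")              # 1,500,000
print(f"Implied capacity before computers: {capacity - computer_gain:,}")  # 2,000,000
```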

--Ralph Moonen


Re: Cracker charged in Australia (RISKS-12.18)

Gene Spafford <spaf@cs.purdue.edu>
29 Aug 91 15:19:01 GMT
In RISKS-12.18, Richard O'Keefe comments on the article in RISKS-12.13 about
the Australian indictment of Nashon Evan-Chaim.

I believe Nashon is one of 3 people who had equipment and computer media
searched last year under warrant (April or May, as I remember).  Nashon
allegedly was an active associate of "Phoenix" (or was himself Phoenix) -- the
Australian who broke into Cliff Stoll's machine and mine, who then called John
Markoff at the NY Times to brag about breaking into our systems (February or
early March).  In both cases, damage was done to our systems; Cliff claimed his
system was thoroughly trashed during the incident, as I remember.

The same gang of crackers are alleged to have broken into systems at a major
telecommunications firm (I won't use their name) and caused damage, and I know
they raised havoc with machines at LLNL and UT Austin, as well.  Breakins
occurred many other places, too. System files were altered to insert backdoors
for later access, and log files were altered and destroyed to hide the
evidence.  We saw it happen, as did people at those other sites -- it was not
just simple exploration.  The breakins were purposeful, and continued over
several months despite warnings and attempts to stop them.  I'm aware of some
of the evidence collected in the case by the Australian Federal Police --
including transcript of hacking sessions -- and it shows that more than
"innocent exploration" was involved.

I'm not claiming that Nashon was the principal in this, or was involved in all
the activity; the Australian courts will decide the legal aspects of that
question.  However, if he *was* involved with these activities, he was
certainly doing more than harmless exploration (if, indeed, any unauthorized
exploration is "harmless").  Some of the damage may have been accidental or
incidental, but there was damage nonetheless, and it caused considerable work
for our staff here to clean up afterwards, as it did at the other sites
involved.

People (in general -- I'm not singling out Mr. O'Keefe) should realize that
individuals committing computer crimes don't all look the part....assuming
there is any typical "look" to them.  Pick almost any kind of "white-collar"
crime you wish to name.  Then interview victims, friends, and teachers of the
accused, and many of them will say "I never would have expected it of him (or
her)!  He was such a bright, friendly person from a good family....."  (It
doesn't even have to be white-collar crime: Ted Bundy comes to mind as an
example).

To bring this back around more squarely into RISKS: Bright students are just as
capable of stepping outside the bounds of propriety as are dumb students --
maybe even more so, as they often know how to get around the barriers that have
been placed to prevent accidental access.  Just as we shouldn't always believe
what the computer tells us, we should likewise not always believe what our
intuition tells us.

Furthermore, those of us who are teachers and role models need to be
sure we are teaching all our students (especially the bright ones)
where the boundaries of proper usage lie; teaching how computers work
is not a substitute for raising questions about how they are to be used.
I wonder if this was a topic Mr. O'Keefe and his colleagues ever
raised in Nashon's classes?

Gene Spafford, NSF/Purdue/U of Florida Softw. Eng. Res. Center, Dept. of
Computer Sciences, Purdue University, W. Lafayette IN 47907-1398 (317) 494-7825


Senseless Actions Invite Trouble

<clear@cavebbs.gen.nz>
30 Aug 91 01:42:00 NZT (Fri)
The National Library of New Zealand runs an online database service
known as Kiwinet. BRS-Search is used to look up a number of databases
concerning legal, political and regulatory matters. There are several
hundred users and a number of dialup modems.

Users dial into Kiwinet and are charged an average of around $200/hr
for database access. Obviously security is of major concern to users.
Imagine my surprise when the following appeared in with this month's
Kiwinet newsletter:

-----begin text-----

ALERT! For smooth transition to Kiwinet's new system on 2 September,
please ensure that all Kiwinet users in your organisation read this!!!

The first time you log on to Kiwinet after 2 September, you can NOT use
your usual Kiwinet password. Log on with your Kiwinet UserID as usual,
then, when prompted to enter your password, type in the default password
for all users, which is:
                           SPRING

It is vital that you change this default password as soon as you have
logged on, in order to help prevent unauthorised use of your UserID.
Any Kiwinet usage made on your UserID will be charged to you.

-----end text-----

Can you say, "Hackers Paradise"? Reading the above makes me wonder just
how some so-called professional system administrators actually get jobs.
I know damn well that if Kiwinet tried to bill me for any logins I
hadn't made, I would be investigating taking civil action against them
for negligence and for unauthorised tampering with my user account.

Charlie Lear, clear@cavebbs.gen.nz

    [Charlie's message was also sent to me by Tim Larson, tim@gistdev.gist.com
    Global Information Systems Technology Inc.  PGN]


A Danger Associated with Intelligent Terminals

Douglas Thomson <doug@giaea.oz.au>
Fri, 30 Aug 91 13:39:25 EST
Our UNIX system has a console that is left permanently logged in as root. This
is convenient for the operators, since they can do things like removing print
jobs from queues without having to keep logging in and out. I have more than
once queried the advisability of leaving root logged in, but the response has
always been that as the console is in the same locked room as the computer
itself, it poses no (increased) security risk.

I was never quite satisfied with this answer, and just recently I decided to
explore the question a little further. I found there was indeed a major
security hole, and one that did not involve any physical access to the computer
room.

As I have said, the console is always logged in as root. In addition, the
console is writable by everyone, so that anyone can send a message to the
operator. So far so good. This works well, and is convenient for everyone.
However, the console is an "intelligent" terminal (and perhaps, under the
circumstances, I had better not specify which type!). There are several of
these terminals around here, and the terminal's user manual may be borrowed
from the computer centre. I borrowed one, and checked up on what I could do.

Firstly, it turned out that I could remotely program certain function keys, so
that the next time someone pressed the key it would execute my command as root.
However, the operator would at least see this happening, so this security hole
would be fixed pretty quickly.

However, there was better to come. Naturally it was possible to send cursor
addressing escape sequences, and hence to display anything I wanted on any
region of the screen. What really caught my attention was that it was possible
to instruct the terminal to transmit the contents of a field on the screen back
to the computer! So I could define a field at some part of the screen, program
the field terminator to be a carriage return, write whatever command I wanted
executed to the region of the screen containing the field, and then ask the
terminal to transmit the field - thereby executing as superuser any command I
chose!

I also looked through the manual to see if there was any way to disable such
"intelligent" behavior, but I could not find one.

The moral is obvious: don't allow write access to an intelligent terminal! Any
user who can write to such a terminal can do anything they could do by typing
at the keyboard!
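The attack can be sketched abstractly.  The escape sequences below are
hypothetical placeholders (the real codes vary by terminal model, and are
deliberately not reproduced here); what matters is the structure: position the
cursor, write a command into a screen field, then ask the terminal to transmit
that field back as if the operator had typed it:

```python
# Abstract sketch of the field-transmit attack. The non-ANSI escape
# sequences are HYPOTHETICAL placeholders, not any real terminal's codes.
ESC = "\x1b"
MOVE_CURSOR  = ESC + "[24;1H"       # position cursor (standard ANSI sequence)
DEFINE_FIELD = ESC + "<def-field>"  # placeholder: mark a transmittable field
SEND_FIELD   = ESC + "<send>"       # placeholder: terminal echoes field back

def build_payload(command: str) -> str:
    """Build the byte stream an attacker would write to the root console."""
    return MOVE_CURSOR + DEFINE_FIELD + command + "\r" + SEND_FIELD

payload = build_payload("chmod 666 /etc/passwd")
# Writing `payload` to the world-writable console (e.g. with the `write`
# command) makes the terminal "type" the command back, and the root shell
# attached to that console executes it.
```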
                   Doug.  (doug@giaea.oz.au)  ...!munnari!goanna!giaea!doug


Re: Unwarranted equivalence assumptions (Cooper, RISKS-12.19)

<ark@research.att.com>
Thu, 29 Aug 91 09:31:51 EDT
Brinton Cooper asks [relating to Andrew's four cases in RISKS-12.18]:

    Suppose the denial had been by computer:

         "Incorrect password:  Login aborted."

    Would he argue that this might have been the *genuine* user who
    had forgot her password and that the "system" should have known
    better because the login was from site known to her office?

No, of course I wouldn't argue that way.  Although present-day assumptions have
many kinds of bad side effects that result from making incorrect decisions,
that doesn't mean they should be replaced willy-nilly with other assumptions
that have other, equally bad side effects!

However, the purpose of an authentication system is not simply to keep
unauthorized people out -- that could be guaranteed by simply keeping everyone
out!  For an authentication system to be of any use it must simultaneously let
the good guys in and keep the bad guys out.

What I'm trying to point out is that people tend to treat such systems as
infallible, which sometimes causes anomalies.  The fact that such anomalies are
sometimes considered evidence that the system is working as designed doesn't
make them any less anomalous.
                --Andrew Koenig   ark@europa.att.com


Old School Reports of the Famous

Keith Bostic <bostic@okeeffe.CS.Berkeley.EDU>
Sat, 24 Aug 91 16:27:36 -0700
From: mathew@mantis.co.uk (Kernel Mustered)

         Old School Reports of the Famous, #1: Richard Stallman.

   "He is an excellent pupil.  Our only complaint is that he encourages
    all the other pupils to copy his work."
