The RISKS Digest
Volume 16 Issue 08

Wednesday, 18th May 1994

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Phone system crash at San Jose airport
Tarun Soni
Logging on as root is easy!
Eddy
Computer Crime on Wall Street
Sanford Sherizen
Going by the Book [air accidents]
Richard Johnson
Editor functionality mutates code
Kevin Lentin
UK ATM Spoof
Mich Kabay
The RISKS of complex procedures
Ry Jones
Tactical research
Phil Agre
FIPS to be tied... [hashing hashed, Capstone]
Peter Wayner
Re: INS recordkeeping
Jonathan Corbet
Re: killers sue over phone taps
Robert Morrell Jr.
Re: Secret... not so secret
Tony Harminc
Novel risk of medical records
David Honig
Crossover of Diagnosis and Procedure Code ...
Carl Ellison
Info on RISKS (comp.risks)

Phone system crash at San Jose airport

Tarun Soni <tarun@windchime.arc.nasa.gov>
Tue, 17 May 1994 15:01:08 -0700
    I was travelling from San Jose to San Diego two weeks ago (Friday
evening, 29 April) and showed up at the airport an hour early (Friday
evening rush-hour expectations).  This is when the fun started: I found
that the phone systems in the airport were not working.  The supposed
reason was a Pacific Gas and Electric problem (that's the local utility
company).

The immediate effects included:
   a> air traffic control computers were no longer able to talk to each
      other?  (Southwest Airlines assured people that they would go back
      to the technology of the 60's and there was no real danger.)

   b> flights were delayed.

   c> reservation computers were down and out..
      - the fact that I had a reservation for a certain flight was known
        only to me, and the airline personnel took me on trust.
      - the fare amount was not known to the airline personnel in
        general, and they had to confer with their manager for some of
        the fares (in my case they accepted the figure I told them).
      - the computers could not talk to the central database, and this
        led to my being given a one-way ticket when I really wanted a
        round trip.  (This led to my waiting in line for another hour
        in San Diego on my return trip.)

   d> credit card approvals were no longer possible.. they took me on trust.

   e> oh, the most comic of them all: flights were delayed, and a number
        of people (I have to admit, I fell for it too!) were seen trying to
        call up their receiving parties in other parts of the country (and
        slowly realizing that the phones were not working!)

I assume there were other effects which I was not able to witness.

And, to use a phrase I see in the digest fairly often:
        The risks are obvious.

Tarun Soni


Logging on as root is easy!

eddy <eddy@gen.cam.ac.uk>
Wed, 18 May 94 15:29:19 BST
This _really_ alarmed me; but this is Un*x, so I guess it could just be a
well-known `feature' ...

We mount a couple of disks from a neighbouring department's machine, mbfs.bio.
They had some emergency maintenance to do (well I assume it was an emergency --
taking the machine down with < a minute's warning is pretty d*mn unsound
otherwise), so they did a shutdown.  When I got the notification for this, I set
about unmounting the disks asap:

eddy@eddy> su
Password:
NFS server mbfs.bio not responding still trying

but I never got as far as entering a password because of the downtime.  So my
machine hung and there was nothing I could do about it.  Perfectly normal
hassle, irritating but no surprises.  So I go shopping and come back to find
mbfs.bio is back up and running ... however, my `su' has _succeeded_ despite
the fact that I never typed the password:

NFS server mbfs.bio ok
eddy.gen.cam.ac.uk# whoami
root

Nice secure operating system this.
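
I haven't seen the su source, so this is pure speculation, but one classic
way for this to happen is an authentication routine that fails *open*: if
the password read or the hash computation errors out (say, after being
blocked on a dead NFS server), the error path grants access instead of
denying it.  A minimal sketch of that bug class in C (hypothetical code,
not the real su):

    /* Hypothetical fail-open authentication bug; NOT the actual su
     * source.  The moral: every error path in an authenticator must
     * deny access, never grant it. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>   /* getpass(), crypt() on older systems */
    #include <pwd.h>

    int root_password_ok(void)
    {
        struct passwd *pw = getpwnam("root");
        char *typed = getpass("Password:");  /* may return NULL if the
                                                read fails or is interrupted */
        if (pw == NULL || typed == NULL)
            return 1;   /* BUG: fails open -- an I/O error (e.g., a hung
                           NFS lookup) lets the caller straight through */

        char *hashed = crypt(typed, pw->pw_passwd);
        return hashed != NULL && strcmp(hashed, pw->pw_passwd) == 0;
    }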

    Eddy


Computer Crime on Wall Street

Sanford Sherizen <0003965782@mcimail.com>
Sat, 14 May 94 11:40 EST
I'm surprised that no one has commented on the case of Joseph Jett, a managing
director/chief government bond trader at Kidder Peabody, who allegedly created
an estimated $350 million in phantom profits, resulting in his 1993
performance-based bonus of $9 million. Experts quoted in recent articles
indicate that he must have made something like $35 BILLION in false trades
without anyone asking questions or the controls raising alarms. At this time,
lots of blaming is going on within Kidder Peabody as well as GE, the corporate
parent of Kidder.

Is this a computer crime?  It seems to me that this manipulation could only
have been accomplished through extensive computer manipulation by Jett and
possibly by others.  This may turn out to be one of the largest computer crime
losses to date.  It illustrates several points.

1. The growing problem of high-level executives who are not being adequately
or in many cases even partially supervised.  They are in a position to commit
crime by instructing others to enter a transaction and then destroying
evidence of their instructions or the transaction. This is a growth area for
computer crime.  Not a hacker in sight for this case.

2. Audit and accounting controls are often insufficient for large financial
systems, and inadequate review requirements result in many crimes being
overlooked, buried, or disregarded.  Wait until companies sign onto the
Information Superhighway!

3. Computer crimes and financial misdeeds get some (but inadequate) coverage
in the business press but very little in the material read by other relevant
people, such as computer professionals.  Whether this is a $9 million crime
(his bonus), a $350 million crime (the company's false profits), or an even
larger loss (the company's damaged reputation and possible financial
penalties from legal proceedings), how large must a loss be before it is
treated as a crisis?  Even the Volkswagen case of several years ago, where
an employee working in foreign currency transactions used his access to
computers to cause a loss equivalent to US$256 million, didn't raise many
eyebrows in the business or computing communities.  If a loss of around a
quarter of a billion dollars doesn't indicate that computer crime is
serious, what figure is enough to decide that the controls and the laws are
inadequate to meet the technological challenges?

Finally, anyone want to comment on the following statement of GE Chairman John
F. Welch, Jr.?  "It's a pity that this ever happened.  (Jett) could have made
$2 or $3 million honestly."


Sanford Sherizen, Data Security Systems, Natick, MA


Going by the Book

Richard Johnson <rdj@plaza.ds.adp.com>
Fri, 13 May 1994 08:11:26 -0700 (PDT)
The 9 May 1994 "Aviation Week" (pp. 46-51) has an article on Controlled Flight
into Terrain (CFIT) — when controllers or procedures fly you into the ground
short of the runway, usually in mountainous and/or poor weather conditions.
The article includes the following data (but given as a bar chart, and
oriented the other way).

                    Worldwide Airline Fatalities
            Classified by Type of Accident 1988 - 1993
    Source: Boeing (for aircraft > 60,000-pound takeoff weight)
               (Excludes sabotage & military action)

    Category                          # accidents  # fatalities
    ----------------------------------------------------------
    Controlled Flight into Terrain         28       1883
    Loss of Control (airplane caused)      10        460
    Loss of Control (crew caused)          14        357
    Airframe                                4        278
    Mid-air Collision                       1        157
    Ice/snow                                4        134
    Fuel exhaustion                         5        107
    Loss of Control                         2         79
    Runway Incursion                        3         43
    Other                                   5         15

76 total accidents, 3513 fatalities.  (How do you have *1* mid-air?
They must be counting accidents, not aircraft involved.)

I'm dismayed by the size of the category called "Loss of Control (airplane
caused)", especially since they differentiated between that and airframe.
This implies (to me at least) software errors.  I can't help but remember all
the discussions we've had in RISKS about poorly-designed cockpit management
and fly-by-wire systems.

Rather than pointing out the obvious, as they do in the article,
I chose to rework the table a little, and make the bins wider.

    Category                                          Totals
    ----------------------------------------------------------
    Procedural  CFIT-28/1883, Mid-air-1/157, runway-3/43  32/2083
    Aircraft    LoC (plane)-10/460, airframe-4/278        14/738
    Crew        LoC (crew)-14/357,  fuel-5/107            19/464
    Weather     Ice/Snow-4/134, LoC (WX)-2/79              6/213
    Other       ???-5/15                                   5/15

On the one hand, if the flight crews blindly follow procedures, they risk
CFIT.  On the other hand, if they don't follow procedures, or forget things,
they run out of fuel or into adverse weather or onto an active runway.  We
should also remember that, unlike private pilots, these transport pilots are
expected to fly into and through moderate to severe weather all the time.

The risks?  We can all learn from this, no matter what we're working on.  We
need to examine all of our procedures with an eye to long-term consequences
when put into actual practice.  The human tends to learn, then memorize, and
eventually internalize (habituate) routine behavior.  Sometimes this routine
can be fatal.  I'm not sure how we make procedures good enough to be complete
and quick, yet flexible enough to not grow boring.

It's not just procedures in use on the aircraft, either.  Consider our own
motivation in whatever job we have.  How many of those airframe failures could
have been prevented on the shop floor?  How many of those software failures
could have been prevented before the first compile?  We who manipulate symbols all
day don't often get the opportunity to "feel" the true meaning or impact of
our work.  Finding a way to do so might produce both better products and more
satisfied programmers.

Richard Johnson  (rdj@plaza.ds.adp.com)   (richard@agora.rdrop.com)


Editor functionality mutates code

Kevin Lentin <kevinl@bruce.cs.monash.edu.au>
Wed, 18 May 94 11:57:10 +1000
The following story was related to me by a colleague this morning.

She was editing some C source using a 'vi' compatible editor called
'vim'.  vim is a vi clone that adds some very useful features to the
editor. Many of us use it in place of vi. On this occasion she was
logged in to a Sparcstation via a terminal annex and modem from home.

She observed that numbers in her code kept changing: specifically,
decreasing by one for no apparent reason.  A screen redraw might change:
    x = 1;
into
    x = 0;
and then
    x = -1;

It turns out that vim has a nifty feature whereby CTRL-A and CTRL-S add 1
and subtract 1 from a number respectively when in command mode. At the same
time, her modem was set up for XON/XOFF flow control and it seems that
somehow the CTRL-S's were getting through not only the modem but also
through her stty settings which might otherwise have interpreted the CTRL-S
as a STOP character.

The upshot of all this was that in an attempt to regulate data flow through
the modems, the XON/XOFF protocol was actually mutating her source if the
cursor was on a number when a CTRL-S came through.

The RISKS of this are painfully obvious. Critical code can end up being
mutated and having serious effects later on if the changes are undetected.
I am reminded of stories about a missing '-' causing a certain NASA mission
to fail in decades past.

Whether this situation can occur easily or whether it was an unfortunate
combination of settings, especially 'stty stop' which I suspect was changed
from the normal ^s, I do not know, but I will certainly be verifying all my
modem and terminal settings when I get home tonight.
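
For anyone who wants to run the same check, here is a minimal sketch (my
own, assuming a POSIX termios system) that reports whether the tty layer
will intercept ^S/^Q as flow control and which characters are currently
assigned to stop and start:

    /* Report the terminal's XON/XOFF flow-control configuration.
     * A sketch for illustration, assuming POSIX termios. */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios t;
        if (tcgetattr(STDIN_FILENO, &t) != 0) {
            perror("tcgetattr");
            return 1;
        }
        printf("IXON is %s\n", (t.c_iflag & IXON)
               ? "on: the tty swallows the stop/start chars" : "off");
        printf("stop char: 0x%02x, start char: 0x%02x\n",
               t.c_cc[VSTOP], t.c_cc[VSTART]);
        return 0;
    }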

Kevin Lentin   kevinl@bruce.cs.monash.edu.au


UK ATM Spoof

"Mich Kabay [NCSA Sysop]" <75300.3232@CompuServe.COM>
17 May 94 16:00:20 EDT
According to the Reuter newswire (via Executive News Service on CompuServe),

  LONDON, May 16 (Reuter) - Crooks used a stolen cash dispensing machine set
  up in a fake finance shop to steal 250,000 pounds ($374,400) in a high-tech
  swindle, a British newspaper said on Monday.
     The cash dispenser, ripped out of its legitimate location using a
  mechanical shovel and a fork-lift truck, was installed in a shop in London's
  Bethnal Green district painted to look like an advice centre for home loans,
  the Sun tabloid reported.

Apparently the stolen ATM simply recorded the card info and PIN of everyone
who tried to use it.

Total thefts from other cash dispensers amounted to over 240,000 pounds in 6
weeks.  The Sun reports that "A man has been charged with conspiracy to
defraud...."

[MK comment: now consider whether this scam would have been effective if users
were issued smart cards for their bank and credit functions.  The user
approaches the ATM and inserts the smart card.  The ATM challenges the smart
card (or prints a challenge string on a screen) and the smart card responds
with a cryptographic function of its serial number and a seed such as time of
day and date (or the user enters the challenge on a membrane keyboard and
copies the one-time response from the card's LCD into the keypad).  The ATM or
banking network validates the response and continues the transaction.
Results of setting up a spoof on a stolen ATM: zero.]
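
Here is the idea in miniature -- a toy sketch only, with every number
invented; a real card would use a proper cryptographic MAC, not this toy
mixing function:

    /* Toy challenge-response sketch.  The secret is shared between
     * card and bank and never transmitted; a spoofed ATM that records
     * one exchange learns nothing reusable, because the next
     * challenge and seed will differ. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* Stand-in for a real keyed MAC (e.g., DES-based in 1994). */
    static uint32_t toy_mac(uint32_t key, uint32_t challenge, uint32_t seed)
    {
        uint32_t x = key ^ challenge;
        for (int i = 0; i < 16; i++)
            x = (x * 2654435761u) ^ (seed + (uint32_t)i);
        return x;
    }

    int main(void)
    {
        uint32_t card_secret = 0xC0FFEE42u;           /* invented */
        uint32_t challenge   = 0x12345678u;           /* issued fresh by the ATM */
        uint32_t seed        = (uint32_t)time(NULL);  /* time-of-day component */

        printf("response: %08lx\n",
               (unsigned long)toy_mac(card_secret, challenge, seed));
        /* The bank recomputes the MAC with its copy of the secret
         * and compares before allowing the transaction. */
        return 0;
    }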

M. E. Kabay, Ph.D., Dir Educ, Natl Computer Security Assn, Carlisle PA USA


The RISKS of complex procedures

Ry Jones <rjones@poseidon.usin.com>
Tue, 17 May 94 16:15:55 PDT
I bought a lot of money orders at my local branch bank (to pay bills; they're
free, and they clear faster) and watched the teller make them.  Here's what
goes on:

1) I give her the check.
2) She verifies the funds.
3) She makes the money orders.
4) She hands them to me.
5) She hits the "Hold Funds" button and stamps the check "HELD" in red ink.
6) Transaction ends.

I asked why she waited until she gave me the money orders to hold the funds.
She said they do so because the procedure to un-hold funds is very difficult
and time consuming, requiring many signatures.

Fraud opportunity:
1) Steps 1 and 2 as above.
2) While she is in steps 3 & 4, I use my cell phone to signal my accomplice.
3) My accomplice withdraws the cash from an ATM.
4) She does step 4. I run out (or hand off the money orders).  Whatever.
5) She cannot HOLD the funds, but I have the money orders. My accomplice
   has the cash.

RISKS to this fraud?
1) They know me by name at my local branch.
2) My account is real. It would need to be enough cash to escape the country,
   yet be under my maximum daily withdrawal limit ($1000). The maximum value
   of the funds would be $2000, unless multiple people coordinated multiple
   simultaneous withdrawals (like the bank personnel wouldn't notice)
3) many others... the cash reward for the risk is too small, but the window of
   opportunity exists.
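
The general fix for this sort of time-of-check-to-time-of-use race is to
take the hold *before* handing anything over, and to make releasing a hold
cheap.  A toy sketch of the safe ordering (my own illustration, not any
real bank's system):

    /* Hold funds BEFORE issuing the money orders; release the hold on
     * failure.  This closes the race window: a concurrent ATM
     * withdrawal now sees balance minus held funds. */
    #include <stdio.h>
    #include <stdbool.h>

    static double balance = 2000.0, held = 0.0;

    static bool hold_funds(double amt)
    {
        if (balance - held < amt)
            return false;
        held += amt;
        return true;
    }

    static void settle(double amt)       { held -= amt; balance -= amt; }
    static void release_hold(double amt) { held -= amt; }

    int main(void)
    {
        double amt = 500.0;
        if (!hold_funds(amt)) {           /* step 1: hold first */
            puts("insufficient funds");
            return 1;
        }
        /* step 2: print and hand over the money orders; on any
         * error here, call release_hold(amt) instead of settling. */
        settle(amt);                      /* step 3: settle the hold */
        printf("balance %.2f, held %.2f\n", balance, held);
        return 0;
    }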

Ry  rjones@halcyon.com


tactical research

Phil Agre <pagre@weber.ucsd.edu>
Tue, 17 May 1994 14:04:49 -0700
The 17 May 1994 Wall Street Journal includes an article, excerpted from a new
book (which I have not seen) entitled "Tainted Truth" by Cynthia Crosson,
about the practice of "tactical research", the creation of customized policy
research as part of public debates.  She details the example of computerized
life-cycle analysis for cloth versus disposable diapers.  As with most
computer models, you can get a wide variety of answers depending on the
estimates you give for a large number of hard-to-measure quantitative
variables.  (How many times does a cloth diaper get changed in its lifetime?
How often do people use two diapers rather than one?)  I have heard many, many
anecdotes of computer models being manipulated to give the desired answers;
you probably have too.  It's certainly not a new phenomenon, having risen to
prominence during the zenith of the systems modeling fad in the 1960's.

Crosson's article tries to explain how this manipulation comes about.  Are
the modelers consciously trying to fool people?  Much as I resist that answer,
out of an ideological preference for more complex and systemic kinds of
answers, that's basically what she says.  Once you start making your living
making models whose answers are convenient to certain sorts of people, she
says, a sort of treadmill gets going and it's hard to get off.  The phenomenon
is particularly important in the context of the ongoing US health care debate,
in which a blizzard of made-to-order numbers circulates through advertisements
and talk show hosts, all of them resting on more assumptions than you could
shake a stick at.

What's the answer?  How about a public education campaign about the concept
of sensitivity analysis?  The more reputable polling agencies might frequently
use loaded questions, but at least they feel obligated to explain that the
numbers have a statistical margin of error of +/- 3% or whatever.  Likewise,
people presenting models should expect to be asked, "what are your input
variables, and how sensitive is your answer to the range of plausible values
for each?"  That's a simple enough question that a fairly large percentage of
the population can understand and ask it.  It's not adequate, of course, since
assumptions can be built into computer models in a wide variety of ways.  But
I expect that it's sufficient to get rid of the first 90% of the bogus uses of
modeling.
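
To make the question concrete, here is a toy sensitivity sweep (every
number in it is invented for illustration; this is not Crosson's model, nor
anyone's real diaper study): vary one uncertain input across its plausible
range and watch how far the headline answer moves.

    /* Toy sensitivity analysis: sweep one hard-to-measure input of a
     * made-up life-cycle model and report the output range.  All
     * numbers are invented. */
    #include <stdio.h>

    /* Invented model: cost per diaper change, given lifetime wash count. */
    static double cloth_cost_per_change(double washes_per_lifetime)
    {
        const double manufacture_cost = 50.0;  /* invented */
        const double cost_per_wash    = 1.5;   /* invented */
        return (manufacture_cost + cost_per_wash * washes_per_lifetime)
               / washes_per_lifetime;
    }

    int main(void)
    {
        printf("%-20s %s\n", "washes per lifetime", "cost per change");
        for (double n = 50.0; n <= 200.0; n += 50.0)
            printf("%-20.0f %.2f\n", n, cloth_cost_per_change(n));
        /* If the answer swings widely over plausible inputs, the
         * model's headline number deserves little confidence. */
        return 0;
    }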

Phil Agre, UCSD

PS Here are some relevant references:

Cynthia Crosson, How "tactical research" muddied diaper debate, The Wall
Street Journal, 17 May 1994, page B1 (marketing section).

Cynthia Crosson, Tainted Truth: The Manipulation of Fact in America, New York:
Simon and Schuster, 1994.

Kenneth L. Kraemer, Siegfried Dickhoven, Susan Fallows Tierney, and John
Leslie King, Datawars: The Politics of Modeling in Federal Policymaking, New
York: Columbia University Press, 1987.

Ida R. Hoos, Systems Analysis in Public Policy: A Critique, revised edition,
Berkeley: University of California Press, 1983.


FIPS to be tied... [hashing, Capstone]

Peter Wayner <pcw@access.digex.net>
Wed, 18 May 1994 10:39:28 -0400
There are two interesting lessons hidden in the fact that the NSA discovered
some flaw in the Secure Hash Algorithm (RISKS-16.07) and announced that
they were working on a fix:

*) They've become slightly more open -- if only about authentication technology.

*) This proves the RISK of using a hardware-based standard. Apparently
there is a whole batch of now-obsolete Capstone chips sitting in a warehouse.
Capstone was supposed to be Clipper + authentication, but now it will have
to wait for a new version.

Imagine if Capstone had been widely distributed when the error was found.
Would you replace your phone/modem/PCMCIA card if you had bought a version
that was made obsolete by progress? How long would it take the country to
recover from such a problem? What if the standard were blown wide open?


Re: INS recordkeeping

Jonathan Corbet <corbet@stout.atd.ucar.edu>
Tue, 17 May 1994 11:01:16 -0600
John Oram finishes an interesting submission about the installation of
"palm readers" at the border with:

> Also, I find it interesting that the image is kept on the card only.  Will
> the INS keep a record of your travels?  (Or do they already?)

Anybody who wonders about that need only have gone to Nicaragua in 1985, as
I did.  My travels take me out of the country a few times a year, so it
became immediately apparent that I had made some sort of list when, on
scanning my passport, the immigration officer would invariably write the
magic code on my entry form that means something like "give this one a hard
time."  For about 4-5 years (after which I evidently aged off the list for
lack of further "infractions") I had to allow a solid three hours of
connection time when entering the country so that they could search my
bags, count my money, grill me about my life, and so on.  I had never been
searched before this period; I have never been searched since.  But I was
searched *every* time in between.

Many people who travelled in Central America during that period have much
worse stories to tell.

As a result of all this, (1) I am not worried about palm readers, since
they give the INS no capability that they do not already have, and (2) I
have no illusions about the benign nature of government power, even in a
relatively free (so far) country such as this.

Jonathan Corbet, National Center for Atmospheric Research, Atmospheric
Techn. Div.  corbet@stout.atd.ucar.edu  http://www.atd.ucar.edu/rdp/jmc.html


Re: killers sue over phone taps

"Robert Morrell Jr." <bmorrell@isnet.is.wfu.edu>
Tue, 17 May 1994 13:12:14 -0400 (EDT)
Adam Shostack, in bemoaning the invasion of convicted killers' privacy,
said probably the most outrageous thing I have ever seen in the RISKS forum:

  "I've also yet to hear of criminals who want to deny others rights they
  claim for themselves."

Really?  I believe that the right to =life=, =liberty=, and the pursuit of
happiness is somewhat well known, and a convicted killer, kidnapper, or
robber has certainly shown a willingness to deny others those rights....

In our ongoing and totally justifiable drive to ensure the integrity of our
informational systems, I think we should, at least once a day, do a reality
check.  The RISK of not doing so is to have safety, security, and privacy
=only= in virtual realities, and not in physical ones.

Bob Morrell


Re: Secret... not so secret (Wexelblat, RISKS-16.07)

<tony.harminc@amail.amdahl.com>
Tuesday, 17 May 1994 13:11 ET
Alan Wexelblat <wex@media.mit.edu> writes about NYNEX/New York Telephone
mailing out replicas of customers' calling cards, complete with PINs.

There is an underlying historical problem with telephone calling cards in
North America that leads to a lot of this sort of trouble: those last four
digits were not intended to be a PIN in the first place.  The concept of PIN
as a 'something you know' identifier to be combined with the 'something you
have' card was kludged on to the original calling card scheme only after
massive fraud in the 1970s.  A simple (and widely publicized) check digit
scheme was previously used by operators to manually verify card numbers.

Current calling cards are unlike any other card one might carry, in that
they usually have the 'PIN' embossed on the card and encoded on the
magstripe.  Some newer cards (typically those that do not have a phone
number as the first ten digits of the number) do not have this problem.

Tony Harminc   Antares Alliance Group   Toronto Software Development Centre


novel risk of medical records (comment on "Future of US health care?")

David Honig <honig@ruffles.ICS.UCI.EDU>
Sun, 15 May 1994 15:44:09 -0700
In RISKS-16.06, Amy_McNulty@vos.stratus.com writes about how a little girl's
doctor's trick of using a serious 'diagnosis' to let the HMO's bureaucracy
authorize a specialist "could cause her to be unjustly rejected sometime next
century when she applies for life insurance, medical insurance, a physically
demanding job, college, or who knows what else."

It's *worse* than that.  The job she may be denied need not be "physically
demanding".  There is some big military-industrial firm (whose name I forget)
which publicly claims it will no longer hire *smokers* because they increase
the cost of group insurance.  (I suppose this has not caused major activism as
smokers "choose" to smoke.)  One wonders what the future holds...

"I'm sorry, Ms Jones, but during your probationary period you overvisited the
candy machine and that places you in ... "


Crossover of Diagnosis and Procedure Code ... (Stoufflet, RISKS-16.07)

Carl Ellison <cme@sw.stratus.com>
Tue, 17 May 1994 13:57:45 -0400
It is inexcusable to use code numbers in the first place, much less to re-use
them.  This is left over from human procedures on paper forms.  The computer
is capable of freeing us from this kind of stupidity — with arbitrary-length
on-line form fields, either intentionally readable by a human or intentionally
encrypted for privacy.  Even in the latter case, a decent code for a field
will be both unique and redundant (checked) so that a 1 or 2 digit typo won't
hit another defined code word.
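
As an illustration of the redundancy idea (my sketch, not a proposal for
any particular medical coding system): even the simple Luhn check-digit
scheme used on credit cards catches every single-digit typo and most
adjacent transpositions.

    /* Luhn check digit: append it to a code so that any single-digit
     * typo (and most adjacent transpositions) yields an invalid code.
     * A medical coding scheme would want even more distance between
     * valid code words. */
    #include <stdio.h>
    #include <string.h>

    static int luhn_check_digit(const char *digits)
    {
        int sum = 0, dbl = 1;  /* double the rightmost payload digit */
        for (int i = (int)strlen(digits) - 1; i >= 0; i--) {
            int d = digits[i] - '0';
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            dbl = !dbl;
            sum += d;
        }
        return (10 - sum % 10) % 10;
    }

    int main(void)
    {
        const char *code = "425614";  /* made-up code number */
        printf("%s -> checked code %s%d\n",
               code, code, luhn_check_digit(code));
        return 0;
    }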

P.S.  I'm a customer of HCHP and I'm not sure I want to continue to be, given
this!
