The RISKS Digest
Volume 15 Issue 9

Friday, 8th October 1993

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Risks of disrupting air traffic control ("Mile High Club")
Richard Marshall via Tom Blinn
Sound of the Fury, Part II
Peter Wayner
Risks of "security" on Apple Newton
Doug Siebert
Re: Control faults cause train crash
Dik T. Winter
Marc Horowitz
Re: Conditioning and human interfaces
Nick Rothwell
Re: Separating parts in privileged applications
Paul Karger
Steen Hansen
A. Padgett Peterson
Re: "Change" and October 1993 CACM
Selden E. Ball, Jr.
James K. Huggins
Re: Libraries and Imagined Communities
Mark Gonzales
Re: Cancer Treatment Blunder
Bob Frankston
Rogier Wolff
Jon Jacky
Re: RISKS of unverified driving records
Jim Horning
Jim Hudson
Info on RISKS (comp.risks)

Risks of disrupting air traffic control ("Mile High Club")

"Dr. Tom Blinn (DTN 381-0646, ZKO3 3X05)" <tpb@zk3.dec.com>
Fri, 08 Oct 93 14:39:08 +28716
I'm sure there are multiple risks here — not the least of which is that
the reported incident disrupted ATC communications for about 50 minutes.
[Dr. Thomas P. Blinn, UNIX Software Group, Digital Equipment Corporation
Mailstop ZKO3-3/W20, 110 Spit Brook Road, Nashua, New Hampshire 03062]

     ------- Forwarded Message

Subject: :-) BRITISH COUPLE BROADCAST THEIR FROLIC IN THE SKIES

From:   MOVIES::RMARSHALL "Richard Marshall 824-3383 EDO-13  08-Oct-1993 1611"
Subj:   This looks true as air traffic in and out was disrupted that night...

(Our Technical Director was returning to Edinburgh from London that night
and was delayed...)

RTw  10/06 2320  BRITISH COUPLE BROADCAST THEIR FROLIC IN THE SKIES

LONDON, Oct 7 (Reuter) - A British couple who made love in a light aircraft
forgot to turn off their transmitter and broadcast their moments of passion to
air traffic controllers and radio enthusiasts on Wednesday.

The couple, flying in a private Cessna 150 plane near the Scottish city of
Edinburgh, began by debating whether they should have sex 5,000 feet (1,500
metres) above ground and join the "Mile High Club."  Their conversation grew
more and more passionate and then ceased.

"We've been trying to raise you for the past 50 minutes," an angry controller
was quoted by the domestic Press Association (PA) as telling the errant couple
when they came in to land.  "We've been listening to your conversation. Very
interesting. Please come and see me when you land."

Fifteen aircraft, including shuttles, holiday jets and cargo planes, had to
use an emergency channel while the two cavorted.

PA said the pilot reported to the authorities at Edinburgh Airport, where he
was carpeted for blocking radio communication.  "Apart from one aspect of his
airmanship — his failure to check in on a regular basis — there were no
breaches of aviation rules," PA quoted the airport's air traffic control
manager Paul Louden as saying.

   [No breeches, either.  Gives a new meaning to "Beam me up, Scotty!"  PGN]


Sound of the Fury, Part II

Peter Wayner <pcw@access.digex.net>
Mon, 4 Oct 1993 15:15:35 -0400
Several months ago, I noted that AT&T was planning to use its
submarine-finding acoustical expertise to track the ebb and flow of traffic
on the highways.  This Sunday's NYT (Oct 5, pg 30) mentioned that another
company, Alliant Techsystems of Edina, Minn., is trying to get the Federal
Government to buy a system from them and install it in Washington, DC.  The
target?  Target shooters.

They aim to place a network of sensors on top of telephone and utility poles
and link them into an array that would allow the police to track random
gunfire and respond much faster. (Up to 85% faster according to the article.)
There would be no need to wait for a good citizen to call and report the
reports.

The author (Warren Leary) spent some of the ink wondering whether such a
project was actually feasible.  Several acoustical experts were "skeptical
about whether sensors could be designed to isolate gunshots from other city
noise."  But the RISK is not just that the project will turn into a fool's
errand.  Some of the city noises that might not be filtered out are
conversations...


Risks of "security" on Apple Newton

Doug Siebert <dsiebert@icaen.uiowa.edu>
Wed, 6 Oct 1993 19:47:26 GMT
The Apple Newton has a "security" feature that lets the owner/user set a
password on the machine, presumably to protect private data stored in it.
It's possible, of course, to get it to dump its data to a Macintosh for
backup purposes, where the data can be easily sifted through.  Readers of
this group would probably expect that; maybe even some users would.  But I
was surprised to find that the data dump also contains the user's
*plaintext* password!  Given the number of people who use the same password
for just about everything requiring one, it is easy to see what the risks
are here...

Doug Siebert     dsiebert@isca.uiowa.edu


Re: Control faults cause train crash (Cohen, RISKS-15.08)

Dik T. Winter <Dik.Winter@cwi.nl>
Fri, 8 Oct 1993 02:26:45 GMT
It appears computer-controlled trains are not quite there yet.  Although my
experience involved no casualties, it points to some problems.  The London
Docklands Light Railway is completely computer controlled.  There is a
conductor/driver on board for emergencies and for closing the doors.
Moreover, he has to take control at the one station (Canary Wharf) that the
software is still unable to handle after all those years of operation.

What we experienced was that at the terminal station the train stopped as
intended, but a few centimeters short of the exact required place.  The net
result was that the doors refused to open.  The conductor/driver had to come
forward to the driving box and manually move the train forward those few
centimeters, but only after a prolonged conversation with Central Control.

So apparently (but this is just speculation) there are sensors that tell
whether the doors can be opened, and different sensors that tell whether the
train is where it should be.  And they do not always agree.
--
dik t. winter, cwi, kruislaan 413, 1098 sj  amsterdam, nederland
home: bovenover 215, 1025 jn  amsterdam, nederland; e-mail: dik@cwi.nl


Re: Control faults cause train crash (Cohen, RISKS-15.08)

Marc Horowitz <marc@MIT.EDU>
Thu, 07 Oct 93 23:29:13 EDT
> 60,000 passengers each day ...

Let's see.  That's 262.8 million passengers.  178 injuries and 48
hospitalizations means that 1 in 1.5 million passengers is injured, and 1 in
5.5 million is hospitalized.  This isn't the 1 in 10^9 figure we hear often on
this list, but it's a fairly admirable record, nonetheless.
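
A quick back-of-the-envelope check (assuming, as the 262.8 million total
implies, twelve years of service at 60,000 passengers a day):

    #include <stdio.h>

    int main(void)
    {
        double days  = 12.0 * 365.0;        /* twelve years of service */
        double trips = 60000.0 * days;      /* = 262.8 million trips   */

        printf("trips: %.1f million\n", trips / 1e6);
        printf("1 injury per %.1f million trips\n", trips / 178.0 / 1e6);
        printf("1 hospitalization per %.1f million trips\n",
               trips / 48.0 / 1e6);
        return 0;
    }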

Failing controls are certainly a RISK, but one must look at the entire record,
not just a single bad incident.

    Marc


Re: Conditioning and human interfaces (Dorsett, RISKS-15.06)

Nick Rothwell <cassiel@cassiel.demon.co.uk>
Fri, 8 Oct 1993 10:41:35 +0100
>I'm sure there's a RISK in there, somewhere...:-)  It was COMPLETELY
>instinctive for me to hit "No"...

There's a quite obvious risk.  At one time I was using versions of EMACS on
UNIX and on the Mac.  The two command sets were identical, except for the
behaviour upon quitting with unsaved work, which looks roughly as follows:

   (GNU Emacs)     Save changes to buffer FOO before exiting? (Y/N)
   (microEmacs)    One or more unsaved buffers exist, quit anyway? (Y/N)

After I'd been caught by that a couple of times, microEmacs hit my bit
bucket with a resounding clang.
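
One defensive design (a minimal sketch of my own, not taken from either
editor) is to make the destructive answer expensive to type, so it cannot
collide with a safe one-keystroke answer in a sibling program:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical: require an explicit word before discarding work. */
    int confirm_discard(void)
    {
        char buf[16];

        printf("Unsaved buffers exist.  Type `discard' to quit anyway: ");
        fflush(stdout);
        if (fgets(buf, sizeof buf, stdin) == NULL)
            return 0;                        /* EOF: play it safe */
        buf[strcspn(buf, "\n")] = '\0';
        return strcmp(buf, "discard") == 0;  /* anything else cancels */
    }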

                        Nick Rothwell   |   cassiel@cassiel.demon.co.uk
     CASSIEL Contemporary Music/Dance   |   cassiel@cix.compulink.co.uk


Rings (was: Separating parts in privileged applications)

<pkarger@gte.com>
Fri, 08 Oct 93 10:00:48 -0400
Monsieur Royer mentions NOS/VE as using protection rings, and Peter Neumann
points out that Multics is of course the classic example of the first
operating system to use rings.  However, many other systems since then have
also used rings including:

    VME/B for the ICL 2900
    AOS/VS for the Data General MV8000
    VMS for the DEC VAX
    the Hitachi 5020 time sharing system (first with hardware rings)

and probably many others.

    - Paul


Separating parts in privileged applications (Royer, RISKS-15.08)

Steen Hansen <steen@kiwi.swhs.ohio-state.edu>
Fri, 8 Oct 93 08:05:48 -0400
> [Never heard of Multics, eh?  Well, that was almost 30 years ago. ... PGN]

The Primos operating system uses this ring protection scheme. It was developed
by a number of the same people who made Multics.

Steen Hansen                e-mail: hansen+@osu.edu
Computer Specialist         (614) 292-7211 (Stores/Food: tue/thu/fri)
Ohio State University       (614) 292-9317 (Dentistry: mon/wed)


Separating Parts in Privileged Applications (Royer, RISKS-15.08)

A. Padgett Peterson <padgett@tccslr.dnet.mmc.com>
Fri, 8 Oct 93 08:50:08 -0400
While such a ring mechanism *can* be quite effective, it must be remembered
that all such schemes (including the "protected" mode of the 80286+) rely
at some point on software to effect the state change.  While this can be
effective protection at the OS level, it is vulnerable in every instance
I have seen to a tunnelling or covert channel attack.

Conventional CPUs (and the Intel iAPX architecture in particular) are
single-state machines, and a properly presented instruction will be executed
by the hardware.  If only software deciders are used to determine whether to
change rings, and the higher rings are themselves implemented in software,
they can be bypassed.
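
A minimal sketch of the point (hypothetical code, not any particular OS):
the ring check is an ordinary comparison executed by ordinary instructions,
so anything that can corrupt the checker or its tables gets past it:

    /* Hypothetical ring check; lower ring number = more privilege. */
    struct segment {
        int ring;                  /* least-privileged ring allowed */
    };

    int may_access(int caller_ring, const struct segment *seg)
    {
        /* Pure software decision: on a single-state CPU there is no
           hardware backstop if this code or seg->ring is subverted. */
        return caller_ring <= seg->ring;
    }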

Years ago in a galaxy far, far, away I had a problem with an OS that operated
in such a "protected" state and would periodically update its real time
clock with an "unmaskable" interrupt. We needed a precise 660 usec period
without any interrupts to the executing code. By placing an array at the
head of the program and storing a value into a reverse dimension - clk(-2078)
as I recall - it was possible to turn off the clock while our code executed.
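
In C the same trick is a one-liner, since nothing checks the subscript.
This is a hypothetical reconstruction (the original was presumably Fortran),
shown only to illustrate how an out-of-range index becomes an arbitrary
address:

    #include <stdio.h>

    int clk[16];            /* array placed at the head of the program */

    int main(void)
    {
        /* The compiler happily computes an address 2078 words below the
           array; a store through it, clk[-2078] = 0, lands wherever that
           arithmetic points.  (Undefined behavior, of course.) */
        printf("clk        at %p\n", (void *)clk);
        printf("clk[-2078] at %p\n", (void *)(clk - 2078));
        return 0;
    }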

    Padgett


Re: "Change" and October 1993 CACM

Huggins, RISKS-15.08 <"Selden E. Ball, Jr." <SEB@LNS62.LNS.CORNELL.EDU<>
Fri, 8 Oct 1993 10:29 EDT
In addition to the RISKS posed by the computing systems themselves, there are
risks when one tries to analyze computing systems from limited information.

In RISKS-15.08, Jim Huggins <huggins@eecs.umich.edu> made some comments about
a system being implemented in Cornell's Theory Center. Unfortunately, as best
I can tell, his comments were based solely on a single paragraph in
_Communications of the ACM_, October, 1993, v36, n10, p11: "NEWSTRACK — POWER
HUNGRY".

As a result, Jim seemed to be trying to apply standards which are usually
appropriate when evaluating production computing facilities.  Although it has
been an extremely useful tool for many of the people using it, the highly
parallel IBM computer system that was mentioned is still a (rather expensive)
research project.  By definition, research always entails "risk".

More information about the research facilities at Cornell's Theory Center, one
of the NSF funded national supercomputer centers, is available from their
gopher server at gopher.tc.cornell.edu, port 70.

Selden E. Ball, Jr., Cornell University, Laboratory of Nuclear Studies
230A Wilson Synchrotron Lab, Ithaca, NY, USA 14853-8001  +1-607-255-0688


Re: "Change" and October 1993 CACM

"James K. Huggins" <huggins@eecs.umich.edu>
Fri, 8 Oct 93 10:39:44 EDT
Selden is right, of course: the new research project at Cornell is far
different from the large projects chronicled in "Inside RISKS", and I don't
mean to disparage this particular research project.  My point may have been
obscured by my attempt to be a little too cute.

My critique is more of Cuomo's voiced attitude that "change brings strength".
I wonder how many of the large projects whose failures are discussed here got
started because some government official or corporate bigwig said "If we do
this with a computer, it will be better," without thinking through *why* it
would be better if done with a computer.  Such an attitude needs to be
challenged (though more carefully than I did).


Re: Libraries and Imagined Communities (Agre, RISKS-15.08)

Mark Gonzales <markg@ichips.intel.com>
Fri, 8 Oct 1993 17:12:51 GMT
>...aware that everyone else who is reading the paper sees the same articles.

Unfortunately, computer-based publishing of paper newspapers has already
broken the "imagined community".  In Portland, Oregon, where I live, the
local newspaper, the Oregonian, publishes separate sections of local news
for downtown and for each of the suburbs.  Subscribers living in suburb A
get only the local news for their suburb, remaining ignorant of goings-on
in suburbs B, C, D, and E.  So two randomly chosen readers are likely not
to have received the same articles.

This is already having effects on local politics.  There was a letter to
the editor this week from a political activist[*] on a statewide issue,
complaining that the efforts of his co-campaigners in suburb A are reported
only in suburb A's local news section, so voters in the other suburbs are
deprived of news on how the campaign is being fought.

Mark Gonzales

[*] He is one of the opponents of the Oregon Citizens Alliance's second
statewide anti-gay-rights campaign.


Re: Cancer Treatment Blunder

<Bob_Frankston@frankston.com>
Fri, 8 Oct 1993 10:59 -0400
At the Risk of being overly brief:

Regulation isn't an answer. I presume there is already regulation against
building devices that kill more patients than necessary. How does one inspect
new technologies? I've been told, for example, that there are regulations on
digital X-rays that prevent storing high resolution images. This is based on
some notions of standardization.

RTFM isn't the answer.  The quality of documentation is inversely
proportional to the cost of a device and negatively correlated with the need
for a manual.  A printed manual is a great example of an open-loop device:
it just sits there in its own reality.  For customizable equipment, the odds
of it all coming together with the corresponding versions of everything are
very low.  Of course, by the same reasoning, you shouldn't comment your
code, since comments and execution paths don't necessarily cross or stay in
sync.


Re: Cancer Treatment Blunder (Randell, RISKS-15.05)

Rogier Wolff <wolff@liberator.et.tudelft.nl>
Fri, 8 Oct 93 16:31:01 +0100
I think the REAL risk in this case would be that the doctors at the
"defective" machine would write a paper saying that they get much better
results when they use higher doses than customary. That would lead to
OVERDOSES being applied at different sites to different patients.

Roger


Re: Cancer Treatment Blunder (Bakin, RISKS-15.08)

Jon Jacky <jon@violin1.radonc.washington.edu>
Fri, 8 Oct 93 12:56:08 -0700
I work in a radiation therapy clinic, so I had to respond to this
recent RISKS posting:

> I would hope that testing this device would include a test to make
> sure it was calibrated.  That if the machine is supposed to operate at
> so many roentgens for so many seconds, that it actually does so!

Well, of course they're calibrated!  Every modern therapy machine has
two independent dose monitoring channels (both independent of the
control the operator uses to select the dose) that measure the dose
emerging from the machine.  At most clinics these are calibrated
*every morning* against a completely independent reference which is
not part of the machine at all.
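
The interlock logic amounts to something like the following sketch
(hypothetical names and tolerances, not any real machine's firmware):

    #include <math.h>

    #define TOLERANCE 0.02       /* hypothetical 2% agreement window */

    /* Stop the beam if the two independent channels disagree, or if
       either one reports the intended machine output has been reached. */
    int beam_may_continue(double ch1, double ch2, double intended)
    {
        double avg = (ch1 + ch2) / 2.0;

        if (fabs(ch1 - ch2) > TOLERANCE * avg)
            return 0;            /* channels disagree: trip interlock */
        if (ch1 >= intended || ch2 >= intended)
            return 0;            /* dose delivered: stop */
        return 1;
    }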

In fact, all of these procedures were probably being followed at the
clinic in question, as I understand the incidents described in this
thread.  As I heard it, the blunders involved a different issue
entirely, which postings here have seemed unaware of.

The hard part of the problem is determining what machine output will
deliver the prescribed radiation dose *at the tumor in the patient's
body*, accounting for absorption of some of the beam in the overlying
tissue, irregular patient geometry, etc.  This involves a whole
additional set of measurements and calculations, some of which must be
done differently for each patient, and which are largely independent
of the machine control system itself.  My understanding is that the
errors involved this part of the process.
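
To make the hard part concrete, a first-order sketch (my simplification,
not the clinic's software; real planning also handles scatter, field size,
and tissue inhomogeneities) is that the beam attenuates roughly
exponentially with depth, so the machine output must be scaled up
accordingly:

    #include <math.h>

    /* dose_at_depth = output * exp(-mu * depth), so solve for output.
       mu is a hypothetical effective attenuation coefficient (per cm). */
    double required_machine_output(double tumor_dose, double mu,
                                   double depth_cm)
    {
        return tumor_dose * exp(mu * depth_cm);
    }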

> Who built this, the same dolts who tested the Hubble mirror? ...

This remark illustrates a tendency that we sometimes see in RISKS and
elsewhere.  Someone learns of a mishap through a very brief and
incomplete news account, makes a lot of assumptions about what must
have happened, proposes an obvious remedy, and is smugly sure that
*they* would not have been so careless.  But in fact the news reports
are incomplete, the reader's understanding is oversimplified and
naive, and the proposed safeguards (and some the reader didn't think
of) are already in place --- but didn't work in this particular
situation.  We have reached a stage where our technological systems
are very complex, most of the obvious things have already been seen
to, and there really aren't so many "dolts" out there in positions of
responsibility.

> Dare I suggest some official body regulate such devices, or would that
> be an example of government over regulation of private industry?

There is already some regulation of people, clinics and devices.  I'm
sure some of it helps, but just as there is a limit to what you can
allow to go unregulated, there is also a limit to the degree of
oversight it is reasonable to expect from regulators.  To put this all
in perspective, radiation therapy mishaps are very rare, especially in
view of the large number of patients treated, and the potential hazards.

Jonathan Jacky, Radiation Oncology RC-08, University of Washington
Seattle, Washington  98195    (206)-548-4117   jon@radonc.washington.edu


Re: RISKS of unverified driving records

<horning@src.dec.com>
Thu, 07 Oct 93 12:44:22 -0700
This is a typical instance of problems caused when one organization supplies
(possibly erroneous) information about individuals to others.

There is a relatively simple remedy which would go a long way to solving
the generic problem.  It has the advantage that it is simple to state and
easy to understand:

    Each time an organization supplies data on an individual to another
    organization, it must also promptly send TO THE INDIVIDUAL a notice
    specifying what information was supplied to whom.

Of course, some details need to be added, like requiring that coded
information be translated into plain language, and that the criteria used
to select the individual for the data transfer be given explicitly (e.g.,
"We sold a list of all our subscribers with ZIP codes in neighborhoods with
median family incomes above $100,000/year."), but I don't think this would
be hard to spell out in a way that would inform the individual without
requiring information that the provider doesn't already have.
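
A sketch of what such a notice might carry (field names hypothetical; every
item is information the provider already has on hand):

    /* Hypothetical record for the proposed disclosure notice. */
    struct disclosure_notice {
        char subject[64];        /* the individual the data describes     */
        char recipient[64];      /* organization the data was supplied to */
        char date[11];           /* when the transfer took place          */
        char data_summary[256];  /* plain-language list of what was sent  */
        char criteria[256];      /* how the individual was selected       */
    };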

Since a typical sale results in something being mailed, the cost of mailing
the notices ought not to cause a major economic impact.

Jim H.


Re: RISKS of unverified driving records (Kabay, RISKS-15.06)

<jhudson@legent.com>
Wed, 6 Oct 93 13:39:32 EDT
Mich Kabay writes:
>Such information is supposedly restricted to "authorized requesters" ...

Veteran RISKS readers probably already know how much damage can be done with
the magic three pieces of information (your name, DOB and Social Security
number).  For newcomers, let me relate a (personally painful) anecdote.

In Massachusetts, the default driver's license number is your Social Security
number.  In the USA, the default identifying number on your Credit History
file is also your Social Security number.  Two years ago, my wallet (with
driver's license inside) was stolen.  Using the magic three pieces of
information which were on the license, some person called TRW Information
Services and obtained a copy of my credit history.  It cost them $8.  On the
credit history was every current credit card number.  Armed with the magic
three pieces of information plus a credit-card number, the person convinced
the credit-card company to change my mailing address.  A few days later, the
person called and reported the card had been destroyed, and got a new one.
Within a week, the person had run up $8000 in Automated Teller Machine cash
withdrawals using the card.

The credit-card company readily admits that their customer-service agent
should NEVER have changed the mailing address of the card based on only the
magic three pieces of information.  However, their security system clearly
failed in this case.  Having dealt with 6 different credit-card companies
during the history of this little affair, I can attest to the fact that the
magic three pieces of information are ALL that is needed to pass most
companies' security.

Mich goes on to suggest that perhaps we are moving to a "universal
identifier".  I shudder to think that it could get any MORE universal than
NAME+DOB+SS#.  What we really need is an authorization scheme that will work
for all the copies of our identifiers that are NOT electronic copies.

Jim Hudson <JHudson@legent.com>
