The RISKS Digest
Volume 5 Issue 15

Thursday, 23rd July 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Access by 'hackers' to computer not criminal
Robert Stroud
On expecting the unexpected in nuclear power plants
David Chase
Risks of Nuclear Power
Mark S. Day
Chernobyl predecessors?
Henry Spencer
Who's responsible - ATC or pilots
Andy Freeman
"Intelligent" control
Alex Bangs
Taxes and who pays them
William L. Rupp
Computer Know Thine Enemy; Reactor control-room design
Eugene Miya
Medical computer risks?
Prentiss Riddle
Electronic cash registers
Michael Scott
Re: Credit card risks
Michael Wagner
Re: "The Other Perspective?"
Baldwin
Info on RISKS (comp.risks)

Access by 'hackers' to computer not criminal

Robert Stroud <robert%kelpie.newcastle.ac.uk@Cs.Ucl.AC.UK>
Thu, 23 Jul 87 13:02:45 BST
Regular RISKS readers will remember the case of Stephen Gold and Robert
Schifreen, who broke into the British Telecom Prestel network and were
successfully prosecuted last year under the UK Forgery and Counterfeiting
Act 1981.  I believe they were responsible for breaking into Prince
Philip's [demonstration] mailbox on the system.  [See RISKS-2.44.]  Anyway,
this was a test case and it went to the Court of Appeal.  Last week the
judges decided to allow the appeal.  As a result it would seem that under
English Law

    The dishonest obtaining of access to a computer data bank
    by electronic means is not a criminal offence.

I am not a lawyer so I am not sure I can explain the legal arguments behind
this decision accurately! I am not even sure from the legalese whether they
got off on a technicality or not! However, it would appear that existing law
cannot be applied to computer hackers.

What follows is based on the account of the decision given in the Law Report
section of The Independent dated Wednesday 22nd July 1987 by Simon Cassell,
Barrister. (c) Copyright Newspaper Publishing PLC 1987

The legal reference is Regina v Gold and Schifreen, Court of Appeal
(Criminal Division), 17th July 1987.

Under the terms of the Act, Gold and Schifreen were charged with "making
a false instrument on, or in, which information was recorded or stored by
electronic means, with the intention of using it to induce the Prestel computer
to accept it as genuine, and by reason of so accepting it to do an act to
the prejudice of British Telecom plc".

At the root of the case was the question of what this false instrument was.
In the original ruling, the judge stated that "... the defendant here made
a series of electrical impulses which arrive at, affect and operate on what
is called a user segment". The two candidates for the false instrument
were therefore the electrical impulses and the user segment.

The defence argued that the electrical impulses could not be a device
since they simply carried information, being no more than a translation
of the customer identification number (CIN) and password. The court agreed.

The prosecution argued that the user segment was modified by the CIN and
password which it recorded momentarily in order to check they were genuine.
This was the false instrument manufactured by the appellants and the fact
that it had only existed for a brief moment of time was immaterial.

The court disagreed with this on three grounds.

(1) A forgery is a document containing messages of two kinds: a message
about the document itself (e.g. that it is a cheque), and a message in
the words of the document that it was to be accepted and acted upon (e.g.
that a banker was to pay a certain sum of money). The user segment did
not contain both these types of message at the moment when it was supposed
to be a forgery.

(2) The Act did not seek to deal with information that was held for a moment
while automatic checking took place and was then expunged.

(3) The prosecution had to prove that the appellants had intended someone to
accept the false instrument they had made as genuine. But the machine which
it was intended should be induced to accept the false instrument (the user
segment) seemed to be the very thing which was said to *be* the false
instrument.  If this was a correct analysis, it reduced the prosecution case
to an absurdity: the user segment would have been induced to accept itself
as genuine.

The judges concluded that the language of the Act was not intended to apply
to the situation which was shown to exist in this case, and "the procrustean
attempt" to force the facts to fit the Act "produced grave difficulties for
judge and jury which the court would not wish to see repeated." What the two
individuals had done amounted to a trick. If it was thought desirable to make
this a criminal offence, "it was a matter for the legislature rather than
the courts."

<End of Legal summary>

It is not clear to me as a non-lawyer whether a more carefully drafted
charge, specifying the false instrument as the microcomputer and modem which
Gold and Schifreen presumably used to gain access, would have succeeded.
However, it will be interesting to see whether, in the light of this
decision, specific legislation is drafted to make hacking a criminal
offence.  The whole area would seem to be a legal minefield at present!

Robert J Stroud, Computing Laboratory, University of Newcastle upon Tyne.
UUCP ...!ukc!cheviot!robert


On expecting the unexpected in nuclear power plants

David Chase <rbbb@rice.edu>
Wed, 22 Jul 87 23:54:42 CDT
On nuclear power plants, and making them work--unforeseen problems arise.
For example, at the South Texas Nuclear Project, one unexpected problem
was ... clams growing in the cooling water pipes.  You'd like to filter
them out, but their larvae are very very small.  You'd like to poison
them, but the water is kept in great big ponds that could percolate slowly
into the clay.  What to do?

As if that weren't enough to test your foresight, I believe that these clams
are exotics (i.e., illegal alien species).  In spite of the bad press that
technology has received over the years, purely natural problems can be (I
think) just as bad.  Consider, for example, fire ants, killer bees,
hydrilla, water hyacinths, and citrus canker.  On the other hand, manatees
(which eat hydrilla, though not as quickly as it grows) like the cooling
water outlets from the Crystal River Power Plant.  Florida Power has this
great bumper sticker, "I Brake for Manatees".  It was a bit of a coup for the
power company.  [There was more on alligators in the Indian Point (River?) 
plant cooling water canals...]
                                David
                                        [Let's hear it for Hugh Manatee.  P.] 


Risks of Nuclear Power

Mark S. Day <MDAY@XX.LCS.MIT.EDU>
Thu 23 Jul 87 11:17:03-EDT
It is worth noting, too, that nuclear power is not only risky when
things "go wrong" (e.g. bungled construction, operator/control errors)
but also when things "go right".  A normally-operating fission reactor
produces large amounts of wastes that are dangerous for longer than
the lifetime of a typical plant, and smaller quantities of wastes that
will be dangerous for millennia.

We would all do well to remember that our solutions to current
problems should not create worse problems in the future.


Chernobyl predecessors?

<mnetor!utzoo!henry@uunet.UU.NET>
Thu, 23 Jul 87 15:32:23 EDT
In a recent issue of Science there was a long article by an American
scientist who had visited Chernobyl and talked to all the major people
involved in the response to the accident.  It makes interesting reading.  He
measured radiation in the area; nothing very serious any more, in the
sections he visited.  Several aspects of the cleanup were being delayed by
cold weather, e.g. replacement of contaminated tar on building roofs.  There
is an implicit comment that attitudes towards radiation have changed:
outside the immediate vicinity, the fallout from Chernobyl was similar in
magnitude to that of a very large atmospheric nuclear test — an event that
was not uncommon in the 50s, albeit usually in more isolated areas.  In
general he was impressed by the speed and competence of the Soviet response
to the mess.  The medical handling of the situation, in particular, was so
rapid and skilled that it strongly suggests they've had practice at this.
He also notes that the other reactors at the plant were back in operation
within a year — a considerable contrast to Three Mile Island #1, an
undamaged and perfectly functional reactor that sat idle for the better part
of a decade.

His overall assessment of the cause of the disaster agrees with the
consensus:  a somewhat hazardous reactor design combined with truly major
ignorance and carelessness by the operators.  He notes that just making
more rules for the rank-and-file operators would not have solved the problem,
because it was one of the senior engineers who was responsible for most of
the violations of normal operating rules.  As an immediate precaution, the
operators are now told that the plant management is *not* authorized to
override the basic safety rules.  Much more attention is now being paid to
operations issues in general, and some modest modifications to the reactors
have also been made.  The real solution is to use a safer reactor design,
but the workers' paradise is no more willing to scrap a dozen power plants
overnight than the dirty capitalists would be.

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Who's responsible - ATC or pilots

Andy Freeman <ANDY@Sushi.Stanford.EDU>
Wed 22 Jul 87 23:49:22-PDT
In the July 22 issue of the SF Chronicle, page 20, Layne Ridley wrote
an article about flying fears.  (This is the same issue that reported
that two Delta incidents are being blamed on air traffic control, not the
pilots.)
Part of the response to "The pilot is drunk or incompetent." is:

   The captain of an airliner is held legally responsible for every
   aspect of the flight.  No matter if he or she was acting in good faith
   on instructions from the airline, an air-traffic controller or anybody
else, if a plane under a captain's control is involved in any
   infraction or incident, the captain will be suspended, guilty until
   proven innocent.

What are the pilot's responsibilities and liabilities?  What about the
controller's?
                                   -andy


"Intelligent" control

Alex Bangs <bangs%husc8@harvard.harvard.edu>
Thu, 23 Jul 87 12:04:59 edt
  >From Nancy Leveson <nancy%murphy.UCI.EDU@ROME.UCI.EDU>
  >The reasons for the nuclear power industry problems go way beyond
  >construction difficulties...

Oh yes, I realize that. I just hate it when people are dishonest when it
comes to such critical safety issues; it reads too much like a horror movie,
but it's for real. As for economics and engineering, I realize that those
are problems. Sorry for presenting such a simplified view without explanation.

  >I have not seen Robocop, but from the descriptions in Risks of the robot
  >following orders and killing someone inappropriately, I would hate to think
  >that this is the way we would want to build a nuclear power plant...

Again, please allow me to clarify. I am talking about systems that would
only have the intelligence to help watch for operator mistakes (like noting
that a valve has been open for longer than it normally should), and helping
to consolidate information in case of a major alert--summarizing the critical
information and making suggestions for courses of action. Basically, I envision
a system that is similar to the direction being taken in cockpit avionics.
With such a system, however, there is a risk that is the bane of all pilots--
boredom. For a wonderful set of information on aviation risks, see the
November 1986 issue of IEEE Spectrum. I don't suggest reading it on a plane :-)
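
To make the valve example above concrete, here is a minimal sketch of the
kind of advisory rule I have in mind.  This is purely illustrative -- the
valve names, time limits, and function names are invented, and a real plant
system would of course be engineered far more carefully:

    # Sketch (Python) of an advisory valve-open watchdog.  All names and
    # limits here are hypothetical, for illustration only.
    import time

    NORMAL_OPEN_LIMIT = {"PORV-2": 120.0}    # seconds a valve normally stays open

    open_since = {}                          # valve id -> time it was opened

    def valve_event(valve, state, now=None):
        """Record a valve opening or closing."""
        now = time.time() if now is None else now
        if state == "open":
            open_since[valve] = now
        else:
            open_since.pop(valve, None)

    def check_valves(now=None):
        """Return advisory messages; the operator, not the computer, decides."""
        now = time.time() if now is None else now
        advisories = []
        for valve, opened in open_since.items():
            limit = NORMAL_OPEN_LIMIT.get(valve)
            if limit is not None and now - opened > limit:
                advisories.append("%s open %.0f s (normally <= %.0f s) -- please check"
                                  % (valve, now - opened, limit))
        return advisories

    # Usage: call valve_event("PORV-2", "open") as events arrive, then poll
    # check_valves() periodically and display whatever it returns.

The point is that the system only flags anomalies and summarizes; it takes
no action itself.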

I have no intention of ever letting a computer run a nuclear power plant, or
the US ICBM system (see WarGames). We are not at that stage, and I am not
sure that we ever want to be.  Computers can be valuable for presenting
information in a useful fashion.  Making decisions is another matter altogether.

Alex Bangs, Harvard Robotics Lab, bangs@metatron.harvard.edu

  Just a quick addition. John Page, from our lab, suggested that the computer
  ought to call up the NRC on operator overrides and request that they come
  take a look, i.e. the computer plays the rat :-)  Alex


Taxes and who pays them

William L. Rupp <nosc!rupp%cod.nosc.mil@sdcsvax.ucsd.edu>
23 Jul 87 15:29:41 GMT
In a comment on the proposed FCC data transmission fee, Alex Bangs states
that in this case the "users, not the company" will pay the tax.  I would
just like to comment that in *all* cases it is the users, or customers,
who pay the tax.  I.e., the company gets all its money from what it
charges the buyers, and must therefore pass along increases in taxes.

My comments reflect neither agreement with nor opposition to the FCC
proposal.  I just wanted to clarify that one point. 

    My Usenet comments are strictly my own opinions.


Computer Know Thine Enemy (Reference); Reactor control-room design

<eugene@ames-nas.arpa>
23 Jul 87 10:22:49 PDT (Thu)
>... "misconception" that Strategic Computing is promoting "killer robots" ...
>                                   - Jon Jacky

An interesting reference just appeared (I don't normally read this publication):

%A Andrew Borden
%Z TRW
%T Computer Know Thine Enemy
%J AI Expert
%D July 1987
%P 48-54
%K Bayesian analysis, aerial combat, battle management
%X No comment.

Re: Reactor control-room design

This is regarded in several sectors as an increasingly sensitive subject,
and I am of the growing opinion that it is not appropriate to discuss it
in an open forum.  Readers should also note that the FORMER readers of
this forum have included several `experts' in this field.

--eugene miya, NASA Ames Research Center


Medical computer risks?

Prentiss Riddle <ut-sally!im4u!woton!riddle@seismo.CSS.GOV>
22 Jul 87 17:02:18 GMT
This is a request for information: can any of the readers of this list
provide examples of problems caused by computer errors in the medical field? 
         [Presumably the requestor has been following the Therac-25...  PGN]
I would especially be interested in any relating to loss or damage of
electronically stored medical records and medical orders.  My institution is
in the process of purchasing a Patient Data Management System (PDMS) which is
supposed to eliminate the use of paper recordkeeping in an intensive care
unit.  It will hopefully provide a significant improvement in the speed,
detail and reliability of patient charting, but there is also clearly an
element of risk involved.  Some horror stories might help us open our eyes a
bit and avoid repeating anyone else's errors. 

--- Prentiss Riddle
--- Opinions expressed are not necessarily those of Shriners Burns Institute.
--- riddle@woton.UUCP  {ihnp4,harvard,seismo}!ut-sally!im4u!woton!riddle


Electronic cash registers

Michael Scott
Thu, 23 Jul 87 10:42:44 EDT
In response to your note in RISKS:

I recently stopped at a nearby supermarket to purchase milk.  I chose the
cheapest brand (not an easy task, since the labels on the shelves were very
poorly maintained and the packages were not marked at all).

When the clerk scanned the cartons the register rang up a price
considerably higher than what was on the shelf (though not enough
higher to be noticeable in the middle of a large bill).  Since I
was only buying one item, I *did* notice the difference and objected.
The clerk went off to check with somebody, then gave me the lower price.

What was truly amazing was the reaction at the customer service counter
when I stopped to complain.  I was informed that, yes, the prices in
the store's computer do not necessarily match the prices marked on the
shelves, and the customer who isn't careful may get cheated.  I received
the impression that mine was a relatively common experience and that
the store did not consider it a problem.

I sent a very formal, very strongly-worded letter to the store manager,
but received no reply.


Re: Credit card risks (Amos Shapir, RISKS 5.14)

Michael Wagner <WAGNER%DBNGMD21.BITNET@wiscvm.wisc.edu> (+49 228 303 245)
Thu, 23 Jul 87 15:58 CET

There must be something very different about the way Bell Canada
calling cards work, or else I missed the point somewhere.

> >AT&T phone credit cards use a credit card number that consists
> >(in most cases) of your phone number followed by four
> >(presumably somewhat random) digits.

I suspect this is only true in some cases.  It's certainly not true for me.
The first 10 digits of my credit card number are some random telephone
number in some exchange I've never heard of.  It's certainly not my
telephone number.  I don't even have a Canadian telephone number, currently.
                [Actually this is true of several of the non-AT&T carriers.]

> When I realized that, and that the only purpose of the card was
> to remember the number, I memorized the last 4 digits and
> destroyed the card.

But that's not the only use of the card.  My card, at least, has 2 numbers
(and a mag stripe that we'll come to later).  The second number is an
international calling card number, good (in theory) outside North America
(in practice, no one in Western Europe seems to honour it yet, but I keep
hoping).

> The possibility that someone who knows my name will look over
> my shoulder while I was using it was just too great.

I can't imagine what difference knowing your name makes.  They
never ask me what my name is.  In any case, 'they' can look at
your fingers pushing the keys on the telephone just as easily as
at the card.  Actually, the mag stripe is safer in this respect;
you never have to hold it still in anyone's line of vision.
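
For concreteness, here is the arithmetic behind the original worry, as a toy
sketch in Python.  It assumes the quoted format (a 10-digit telephone number
followed by a 4-digit suffix), which, as noted above, does not match my own
card, and which describes no real carrier's actual scheme:

    # Toy sketch: how much secrecy is left in a card number of the quoted
    # form (10-digit phone number + 4-digit suffix)?
    PHONE_DIGITS = 10
    SUFFIX_DIGITS = 4

    # A complete stranger faces the full space:
    stranger_guesses = 10 ** (PHONE_DIGITS + SUFFIX_DIGITS)     # 10^14

    # Someone who already knows (or can look up) the holder's phone
    # number needs only the suffix:
    onlooker_guesses = 10 ** SUFFIX_DIGITS                      # 10,000

    print("stranger:", stranger_guesses)
    print("onlooker who knows the number:", onlooker_guesses)

(The original concern, presumably: a name can be turned into a telephone
number via the directory, leaving only four digits to shoulder-surf.)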

> Even now when the cards are magnetic there's not much point in
> keeping them, as there are as many systems that accept regular
> credit cards wherever the AT&T machines are.

This doesn't seem to match my experience.
(a) I've been to many places where the credit card machines only
    take telephone cards.
(b) some credit cards cost more for the same phone call.  At the
    least, there is the service charge for the use of the card.
(c) the audit trail isn't as good.  When I have used VISA to make
    a phone call, I get back, on my VISA account, an undecodable
    string of digits.  When I use my calling card, I get a line
    item on the standard telephone bill in a standard,
    comprehensible format.

All in all, I'd say there is a lot of point in keeping the card
and using the mag stripe.


Re: "The Other Perspective?"

<baldwin@cs.rochester.edu>
Thu, 23 Jul 87 09:15:04 EDT
>From: Heikki Pesonen <LK-HPE%FINOU.BITNET@wiscvm.wisc.edu>

>In my opinion, the risk that computers blindly follow
>instructions is not as high as the risk that people
>blindly follow orders.  They have always done so, but the risk
>is greater nowadays, when we have so-called advanced technology
>in our hands. ....

    True -- people blindly following orders are probably worse
than computers doing it. The intent behind my original posting though
(which seems to have been widely missed) is that an even bigger
potential risk is that some large fraction of non-technical society
(a) doesn't realize that there ARE risks in the use of computer
technology (no surprise there really), and (b) when shown risks,
these people do not say "hey, that's a problem", they say "hey, that's
a good joke". At least in places (if any :-) where democratic government
works according to theory, those same people help make decisions about
some rather critical, risky uses of computers ("Shall we spend
X billion dollars on computerized anti-missile defenses?", "It must
be OK to build the new nuclear power plant, the computer simulations
say it's safe", etc.) Depending on how widely people really mistake
serious risks for funny glitches, we have a situation in which the
people deploying new technology (at least in a broad policy sense)
may neither know what they are doing, nor realize that there are
important things that they don't know. This, to my mind, is a much
more serious risk (or meta-risk) than the failure of particular languages
to check subscript limits or whatever.

    I guess what I hoped to do was to get some discussion/feedback going
on this kind of meta-risk (other instances? does it really exist, or am I
just a pessimist? what can/should be done about it? etc.).  Apologies
if earlier postings didn't make this clear.
