The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 5 Issue 39

Saturday, 26 September 1987

Contents

o Another Australian ATM Card Snatch
Dave Horsfall
o AT&T Computers Penetrated
Joe Morris
o On-line Robotic Repair of Software
Maj. Doug Hardie
o Re: An Aporkriffle Tail
Michael Wagner
o Risks in the Misuse of Databases?
Brint Cooper
o SDI Simulation
Steve Schlesinger
o Ethical dilemmas and all that...
Herb Lin
o Info on RISKS (comp.risks)

Another Australian ATM Card Snatch

Dave Horsfall <munnari!astra.necisa.oz!dave@uunet.UU.NET>
16 Sep 87 13:55:24 +1000 (Wed)
From the Sydney edition of "The Australian", Tue Sep 15:

``Human error blamed in great ATM card snatch

A fault in the Commonwealth Bank automatic teller machine on
Saturday - which impounded more than 900 Mastercards - has been
attributed to human error.  But technicians are continuing to
investigate the system's software.

Mastercard users were confronted with "No account, card retained" messages
on Saturday morning when they attempted to use their cards in Commonwealth
Bank or Westpac ATM's, and 921 cards were impounded.

The Commonwealth Bank's acting general manager for retail banking, Mr John
Koch, said that although the Mastercard reference file had been loaded into
the system on Friday, "it did not take".  "Every night the Mastercard file
is updated and loaded ... but as the computer did not take the Mastercard
file on Friday night it had no record of the accounts.  The next step was
for the ATM's to impound the cards, which is a very desirable security
feature.  The principle of the system is fine, it's just the practical
application that went wrong." ''

Went wrong indeed! It left hundreds of customers without cash for the
weekend.  The article went on to say that it was "almost certain the fault
lay with incorrect loading commands by a human operator".
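The fail-closed behaviour described above can be sketched in a few lines. This is purely illustrative: the bank's actual software is unknown, and all names and card numbers here are invented.

```python
# Hypothetical sketch of the fail-closed lookup described in the article:
# if the nightly reference-file load "does not take", the lookup table is
# empty, every card misses, and the ATM retains the card.

def load_reference_file(records):
    """Nightly load of the card reference file; returns a lookup table."""
    return {r["card"]: r["account"] for r in records}

def atm_lookup(card, reference):
    """Return the action the ATM takes for a presented card."""
    if card in reference:
        return "dispense"
    # Fail closed: an unknown card is treated as invalid and impounded,
    # even when the real cause is an empty or stale reference file.
    return "no account, card retained"

# A failed load leaves the table empty, so every card is impounded.
reference = load_reference_file([])        # Friday night's load "did not take"
print(atm_lookup("5123-4567", reference))  # -> no account, card retained
```

The security principle (impound unknown cards) is sound in isolation; the risk is that a single upstream load failure converts every legitimate card into an "unknown" one.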

Readers will recall a previous snafu with Westpac ATM's, when they went live
with a new version of software.  The repercussions are still rippling...

Dave Horsfall  (VK2KFU)        ACSnet/CSNET: dave@astra.necisa.OZ
NEC Information Systems Aust.  ARPA: dave%astra.necisa.oz@uunet.UU.NET
3rd Floor, 99 Nicholson St     JANET: astra.necisa.OZ!dave@ukc
St. Leonards 2064  AUSTRALIA   UUCP: {enea,hplabs,mcvax,uunet,ukc}!\
TEL: +61 2 438-3544  FAX: 439-7036    munnari!astra.necisa.OZ!dave


AT&T Computers Penetrated [More]

jcmorris@mitre.arpa <Joe Morris>
Fri, 18 Sep 87 12:24:43 EDT
Washington Post, 18 September 1987, p. F1, ff., (C) [filtered by Joe]

YOUTH ACCESSES AT&T'S COMPUTERS

A 17-year-old Chicago high school student using a personal computer in his
bedroom broke into AT&T's computer systems around the country, stole $1 million
worth of sophisticated software and was "on the verge" of being able to 
disrupt the company's telephone network, according to federal prosecutors.

The youth also appears to have gained access to AT&T computers at two
military bases: the NATO Maintenance and Supply Headquarters in Burlington,
N.C., and Robins Air Force Base in Georgia, an Air Force logistics command
center, according to prosecutors.  The computers did not store classified or
sensitive material, they said.

[Comments on other attempted penetrations, including the Washington Post's
own computer system.]

[...AT&T says: ] "We view this as a kind of Yuppie vandalism [...]."

[The investigation involved] agents from the Secret Service, the FBI, and
the Defense Criminal Investigative Service [...].

[The youth's lawyer] said yesterday that the youth "categorically denies
doing anything that he should not have been doing.  I can assure you that
my client had absolutely no sinister motives in terms of stealing property.
Right now we're very much in the dark as to what this is all about."

[...] several persons familiar with the case said the incident was highly
embarrassed [sic] AT&T and showed potentially serious lapses in its security.
[...] AT&T officials believe the incident does not demonstrate flaws in the
company's security system but in the failure of its employees to follow
proper security procedures. [...] "We're saying the locks are pretty damn
good, but we need to remind people to close the door."


On-line Robotic Repair of Software

"Maj. Doug Hardie" <Hardie@DOCKMASTER.ARPA>
Tue, 15 Sep 87 12:27 EDT
The following is taken from Business Week, Sep 7, 87 page 113.

      THIS SOFTWARE ROBOT FIXES SYSTEMS - WHILE THEY'RE RUNNING

   With computer systems growing more pervasive and complex, maintenance
is a mounting headache - especially as networks rope together more
unrelated brands of equipment.  Finding and fixing problems too often
turns into a finger-pointing contest among vendors.

   But suppose there were a robot technician residing in software that
could worm its way freely through the system, periodically "exercising"
the diagnostic programs provided for each piece of equipment.  Imagine
that this software robot is "smart" enough, when it discovers problems,
to run the applicable repair routines.  Command Technologies Inc., a
Boston startup, claims its SoftRobot software does this and more.  For
example, if a hard disk crashes, the SoftRobot might move data to
another system, restart the first drive, and restore the data.  The
program, says Command President Franco Vitaliano, "fools the system into
thinking that a person is doing the work." As a last resort, the system
summons a human.  SoftRobots, which cost about $2,500 for
workstation-level systems, are being evaluated by a dozen computer
companies.
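The monitor/diagnose/repair/escalate cycle the article attributes to SoftRobot can be sketched as follows. All names and structures here are hypothetical; the article does not describe the product's actual interfaces.

```python
# Minimal sketch of the loop described above: exercise each device's own
# diagnostic routine, run repair routines on failures, and summon a human
# only as a last resort.  Everything here is illustrative.

def run_diagnostics(devices):
    """Exercise each device's diagnostic routine; collect the failures."""
    return [d for d in devices if not d["diagnostic"]()]

def attempt_repairs(failed):
    """Run each failed device's repair routine; return what still fails."""
    return [d for d in failed if not d["repair"]()]

def watchdog_pass(devices, summon_human):
    """One pass of the software robot over the managed equipment."""
    failed = run_diagnostics(devices)
    still_broken = attempt_repairs(failed)
    for d in still_broken:
        summon_human(d["name"])     # last resort: escalate to a person

# Example: the disk fails its diagnostic but its repair succeeds, so no
# human is summoned on this pass.
disk = {"name": "disk0", "diagnostic": lambda: False, "repair": lambda: True}
net  = {"name": "net0",  "diagnostic": lambda: True,  "repair": lambda: True}
alerts = []
watchdog_pass([disk, net], alerts.append)
print(alerts)   # -> []
```

The RISKS-relevant point is the middle step: automated repair that "fools the system into thinking a person is doing the work" acts with a person's authority but without a person's judgment.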


Re: An Aporkriffle Tail (RISKS-5.38)

Michael Wagner +49 228 303 245 <WAGNER%DBNGMD21.BITNET@wiscvm.wisc.edu>
Fri, 25 Sep 87 11:29 CET
  > IPFRCVM - Iowa Pig Farm Research Center ...

What was not mentioned in RISKS is that this message strongly [and pungently]
suggests that the site IPFRCVM is fictitious:  it appears in none of the
current tables for BITNET to which I have access.

             [Note: APOCRYPHAL = of doubtful authenticity.  Add this one to
             Chernenko@MOSKVAX.  (I have edited Michael's note slightly.)  PGN]


Risks in the Misuse of Databases? [RISKS-5.38]

Brint Cooper <abc@BRL.ARPA>
Fri, 25 Sep 87 0:45:15 EDT
> A (simple ?) diff operation between 2 databases and you have it: ...
> Cliff Jones, Manchester

Correct me if I'm wrong but isn't this info used merely for the enforcement
authorities to decide where to search for unlicensed TV receivers?  They
won't arrest you solely because you're not in the database, will they?

What's the alternative?  When we uncover risks or abuses in the use of
computer systems, we are obliged to compare these with the risks or abuses
in accomplishing the same job without computer systems.  The only effect of
the automated databases is to help find unlicensed TV sets more quickly than
by searching manually.  In either case, some number of such sets will be
found.  Only the numbers differ.
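The "diff" Cliff alludes to is just a set difference, which is worth seeing because it makes the inference problem concrete. The data below is invented for illustration.

```python
# Sketch of the two-database "diff": compare a register of households
# against the licence database.  The result is the enforcement lead list,
# and also where the inference risk lives: "no licence on file" is read as
# "unlicensed TV", though it equally describes a home with no TV at all.

households = {"1 High St", "2 High St", "3 High St", "4 High St"}
licences   = {"1 High St", "3 High St"}

leads = sorted(households - licences)   # set difference
print(leads)   # -> ['2 High St', '4 High St']
```

The computation is trivial; the risk is entirely in what the output is taken to mean.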

    [Cliff's notes that he is indeed paranoid and obviously STRANGE for not
    owning a TV, plus the statement that they have uncovered EVERY HOME
    WITHOUT A LICENCE -- irrespective of the presence of a TV -- are indeed
    illustrative of the more general problem of drawing inferences from
    information gathered out of context from networks of networks of databases.
    This is by no means a new problem, and has been discussed here in the past.
    It gets more difficult to control HUMAN misuse as the scope of the search 
    widens.  But the key point is that the linking together of databases for
    purposes other than their original intent does raise social questions.
         (Cliff and friends -- help!  The network is suddenly
         no longer accepting your CS.UCL.AC.UK relay.)   PGN]


SDI Simulation

Steve Schlesinger <steves%ncr-sd.SanDiego.NCR.COM@sdcsvax.ucsd.edu>
21 Sep 87 16:34:20 GMT
The following recently appeared on modsim distribution, a mailing list on
issues in simulation.  I thought RISKS would appreciate the point that the
poster understood the complexity of the simulation task, but not the fact
that the simulation is easy compared with the real problem of SDI control.

I've deleted the name of the poster, but not his organization.
Steve Schlesinger, NCR Corporation <Usual Disclaimer - I speak only for myself>

  To: modsim-dist@sunset.prc.unisys.com
  From XXXX@mitre-bedford.ARPA Fri Sep 18 10:56:49 1987

    This has been touched on before, but I'd like to see if
  some more discussion can be generated.  I am involved in developing
  a distributed simulation of a large SDI communication network.
  The two main approaches to distributed simulation are roll-back
  (Time Warp) and Chandy-Misra message passing.  I am leaning very
  heavily towards the Chandy-Misra model, mainly because of my
  misgivings about the Time Warp model.  These misgivings are:

   1) The amount of memory devoted to maintaining a history one
  can roll back to. The networks we will be modelling will consist
  of 500 to 1500 satellites, each with at least 10 queues and/or
  buffers. Checkpointing could be expensive.

   2) The roll-back support would have to be developed from scratch.
  We would be building a complicated software support environment
  along with the simulation itself.

   3) Debugging in such an environment would be a BEAR.  We would
  be catching bugs in the roll-back support, bugs in the simulation
  itself, and bugs in the interaction between the support and the
  simulation.  I can see us tracing a message, not being sure whether
  it's valid, and then seeing it "taken back" and not being sure
  whether the "take back" is valid - oooh, it gives me chills.

  I'd welcome some discussion on these points (or others)

     [All articles for inclusion in modsim should be sent to
     ..!isdcrdcf!sdcjove!modsim  OR  "modsim@cam.unisys.com"]
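The poster's first misgiving, the memory cost of rollback history, is easy to illustrate. The toy below is not Time Warp proper (a real implementation also re-executes rolled-back events and sends anti-messages); it shows only the checkpoint-and-rollback mechanism whose state history grows with every event, which is exactly what gets expensive across 500 to 1500 satellites with ten-plus queues each.

```python
# Toy optimistic (Time Warp-style) process: it checkpoints state after
# every event so that a straggler -- a message arriving with an earlier
# timestamp -- can trigger a rollback.  Illustrative only.

class OptimisticProcess:
    def __init__(self):
        self.clock, self.state = 0, 0
        self.history = [(0, 0)]          # (clock, state) checkpoints

    def handle(self, timestamp, value):
        if timestamp < self.clock:       # straggler: roll back
            # Discard checkpoints at or after the straggler's timestamp...
            while self.history[-1][0] >= timestamp:
                self.history.pop()
            # ...and restore the last state taken before it.
            self.clock, self.state = self.history[-1]
        self.clock = timestamp
        self.state += value
        self.history.append((self.clock, self.state))

p = OptimisticProcess()
p.handle(10, 1)
p.handle(20, 1)                          # processed optimistically
p.handle(15, 1)                          # arrives late: forces a rollback
print(p.clock, p.state)                  # -> 15 2
```

A conservative Chandy-Misra process avoids this history entirely by never processing a message until it is safe, at the cost of blocking (and needing null messages to avoid deadlock), which is the trade-off the poster is weighing.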


Ethical dilemmas and all that...

<LIN@XX.LCS.MIT.EDU>
Fri, 18 Sep 1987 15:37 EDT
All of us have been exposed to the question of whether or not an engineer
should consider the implications of his or her work.  The question is
usually posed in the following way: Should an engineer work on certain
projects if the social implications of that project are negative?  Questions
like "Can (or will) the project be used to do bad things?" become relevant.
This is one kind of ethical dilemma.

Another type of ethical dilemma arises when a similar question is asked of
policy makers.  Do policy makers have an ethical responsibility to prevent
engineers from having to confront ethical dilemmas?

At first, I thought the answer was clear -- Don't ask someone else to do
something you wouldn't do yourself.  But that's insufficient.  I am willing
to pay someone to provide food for me (I'm not vegetarian), and that
involves things I am unwilling to do myself.

The problem for a policy maker comes from the fact that he must plan for an
uncertain future.  That means hedging against the possibility that his
judgments might be wrong.  As a defense policy maker, for example, it may be
that he cannot imagine any circumstances under which a certain weapon might
be used.  Should he then not think about how to use that weapon at all?  Or
should he spend a little bit of time, money, and energy thinking about
possible ways it could be used in the event that his judgment is wrong?  If
the latter, that could mean that he would have to ask someone to think about
doing things that he (the policy maker) would never want done and moreover
could not imagine wanting done.

It is clear to me that the policy maker should not spend a lot of money etc and
thus ask many people to confront ethical dilemmas.  But should he ask a few?
I invite comments.   

   [This could lead to an interesting, albeit rather free-form, discussion.
   Responses to Herb (cc: RISKS), please.  PGN]
