The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 6 Issue 86

Wednesday 18 May 1988

Contents

o $70 million computer fraud attempt
Werner Uhrig
o DeutschApple Virus Alerts
Otto Stolz via Vin McLellan
o Market stability
Martin Ewing
o Matching Dormant Accounts
STEYP-MT
o Risky academic software development
Woody
o AIRBUS
Steve Philipson
Henry Spencer
Mark Mandel
o Re: Navigation and KAL 007
Joe Morris
o Info on RISKS (comp.risks)

$70 million computer fraud attempt

Werner Uhrig <werner@rascal.ics.utexas.edu>
Tue, 17 May 1988 19:35:14 CDT
I think it was Dan Rather who in tonight's prime-time news reported on a $70
million embezzlement attempt at First National Bank of Chicago.

An employee used "International Network Computer Links" for a "wire transfer
to a bank in NY".  The system used was "CHIPS" and the matter seems to have
been noticed yesterday when "Merrill Lynch discovered a discrepancy".

    [Apparently there was collusion involving at least four people.
    The amount evidently exceeded a threshold, but they were able to
    control the telephone response that requested overage authorization.    
    They were caught apparently only because the amount blew ML's account!
    Watch your favorite news sources.  PGN]
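
The control that was reportedly defeated can be sketched roughly: transfers over a threshold trigger an out-of-band (telephone) authorization step, and the fraud worked because insiders controlled that step. The threshold, amounts, and callback below are illustrative assumptions, not details from the case.

```python
# Sketch (not the actual CHIPS mechanism): transfers over a threshold
# require an out-of-band callback to approve. If insiders control the
# callback, the threshold check alone accomplishes nothing.

THRESHOLD = 1_000_000.0  # hypothetical overage threshold

def authorize_transfer(amount: float, callback_approves) -> bool:
    """Small transfers pass; large ones require the callback to approve."""
    if amount <= THRESHOLD:
        return True
    return callback_approves(amount)

honest = lambda amount: False      # a diligent officer refuses the overage
colluding = lambda amount: True    # a colluding insider approves anything

print(authorize_transfer(70_000_000.0, honest))     # False
print(authorize_transfer(70_000_000.0, colluding))  # True
```

The point of the sketch: an automated threshold is only as strong as the out-of-band channel backing it up.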


DeutschApple Virus Alerts

"Vin McLellan" <SIDNEY.G.VIN%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Wed 18 May 88 01:50:24-EDT
German Virus Alert: RELAYED FROM VIRUS-L @ Lehigh U.

A Special Warning of Three Infected MAC programs

From: Otto Stolz +49 7531 88 2645 
<RZOTTO%DKNKURZ1.BITNET@MITVMA.MIT.EDU>

Hello,

A couple of minutes ago, I ran into a letter dated 21 March 1988 that was
circulated by a software distribution house in southern Germany to its
customers.  I will not post their name or address to this list; if
somebody really needs it, please drop me a note privately.

As I don't have access to a Macintosh, I can't assess the importance the
message might have for Macintosh users, so I deemed it best to post it to
this list for anybody who might be concerned.  As none of the programs
below is mentioned in the DIRTY DOZEN, somebody (Ken Van Wyk?) should
forward this note to Eric Newhouse, whose BITNET address is unknown to me.

Following is the main part  of this letter (translated into English):

> Subject: Macintosh Virus!!!
>
> Regrettably, the Macintosh has now also been befallen by a virus.
> Please do *not* use any of the following programs:
>      Pre-Release PageMaker 3.0
>      Pre-Release HyperCard German
>      Pre-Release Multifinder German
>
> *Beware:* Virus spreads through networks (e.g. AppleTalk)!!!
>
> Symptoms: Difficulties when using the Hard Disk, even to the point
>           of completely losing the Hard Disk.

Best regards
              Otto Stolz


Market stability

Martin Ewing <msesys@DEImos.Caltech.Edu>
Tue, 17 May 88 13:54:05 PDT
In connection with program trading and stock market volatility, Vint Cerf
(CERF@A.ISI.EDU) asks "Would some form of damping (limits on maximum stock
value excursions as a percentage of stock value, for instance) serve as an
adequate damper?"  Herewith, an inquiring but non-authoritative submission.

I note that a number of proposals for controlling the market involve setting
limits.  For example, the Brady Commission's "circuit breaker" would stop
various kinds of trading once the market dropped by N points.  In control
terms, these set constraints on the system, but they are not "damping".
Damping requires a lossy device of some kind, a dashpot. 

One sort of damping would be a transaction tax that is a function of market
rate of change, "tick" figure, or some such.  I.e., if you want to trade in a
crashing market, it will cost you more than on a quiet day.  Another tactic
would be to make delayed trades cheaper than prompt trades; this would
particularly discourage program trading. 
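
The constraint-versus-damping distinction above can be made concrete with a toy sketch. The halt threshold, fees, and rate figures below are illustrative assumptions, not a market model.

```python
# Sketch: contrast a hard trading halt (a constraint) with a
# rate-dependent transaction cost (the lossy "dashpot" element).
# All numbers are illustrative.

def circuit_breaker(drop_points: float, halt_threshold: float = 250.0) -> bool:
    """Hard constraint: trading simply stops once the drop exceeds N points."""
    return drop_points >= halt_threshold

def damped_transaction_cost(base_fee: float, rate_of_change: float,
                            k: float = 0.5) -> float:
    """Lossy element: the fee grows with the market's rate of change, so
    trading into a fast-moving market costs more than on a quiet day."""
    return base_fee * (1.0 + k * abs(rate_of_change))

quiet = damped_transaction_cost(base_fee=10.0, rate_of_change=0.1)
crash = damped_transaction_cost(base_fee=10.0, rate_of_change=8.0)
print(quiet, crash)  # the crash-day fee is several times the quiet-day fee
```

The circuit breaker is all-or-nothing at the threshold; the damped cost removes energy continuously, which is the distinction the paragraph draws.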

Savings and loan institutions provide another example.  Since the Feds insure
all deposits, a failing S&L will attract many investors with offers of
above-market interest rates - while other banks have trouble obtaining
deposits at prudent rates.  To avoid "financial runaway", we need another
damper.  For example, if I were only insured for the first 80% of my balance,
I'd probably make a better choice of S&L. 

I don't recall ever seeing an "engineering" discussion of the stability of
financial markets; can anyone point to one?  (I have, however, talked with
economists who could derive Hamiltonian Equations for the economy.  Mind
your p's and q's.)

Martin Ewing  mse@caltech.edu


Matching Dormant Accounts

STEYP-MT Materiel Test Dir <steypmt@yuma.arpa>
Tue, 17 May 88 15:35:59 MDT
Extracted without comment from:
Federal Register, 53:90 (10 May 1988); p. 16577-8

Defense Logistics Agency

Privacy Act of 1974; Notice of a Proposed New Ongoing Computer Matching Program
Between the Department of Defense and Financial Institutions to Preclude
Escheatment of Individual Accounts Under State Laws

... "Send any comments or inquiries to: Mr. Robert J. Brandewie, Deputy
Director, Defense Manpower Data Center, 550 Camino El Estero, Suite 200,
Monterey, CA 93940-3231.  Commercial phone number: (408) 646-2951; Autovon:
878-2951.

"For further information contact: Mr. Aurelio Nepa, Jr., Staff Director,
Defense Privacy Office, 400 Army Navy Drive, Room 205, Arlington, VA
22202-2803.  Telephone: (202) 694-3027; Autovon: 224-3027.

"The Defense Manpower Data Center... is willing under written agreement to
assist individual financial institutions to be a matching agency for the
purpose of providing up-to-date home or work addresses of persons of record of
abandoned money or other personal property subject to escheatment laws.  The
computer matching will be performed at the Defense Manpower Data Center in
Monterey, CA using records supplied on computer tape by the financial
institutions and the DoD employment records of both military and civilian
personnel, active and retired.  The match will be accomplished using the social
security number.

Matching records will be returned to the financial institution, the activity
responsible for reviewing the matching data and for assuring that the account
owner receives proper notification and due process before any adverse action is
taken on the abandoned property...."
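
The match described in the notice — dormant-account records joined against personnel records on the social security number — amounts to a simple keyed join. The field names and data below are made up for illustration; only the SSN-keyed matching itself comes from the notice.

```python
# Minimal sketch of an SSN-keyed match between a financial institution's
# dormant-account records and a personnel address file. All records here
# are fabricated illustrations.

dormant_accounts = [
    {"ssn": "123-45-6789", "account": "A-100", "balance": 512.40},
    {"ssn": "987-65-4321", "account": "B-200", "balance": 75.00},
]

personnel_addresses = {
    "123-45-6789": {"name": "J. Doe", "address": "123 Example St."},
}

def match_on_ssn(accounts, personnel):
    """Return dormant-account records augmented with a current address,
    for those accounts whose owner appears in the personnel file."""
    hits = []
    for rec in accounts:
        person = personnel.get(rec["ssn"])
        if person is not None:
            hits.append({**rec, **person})
    return hits

print(match_on_ssn(dormant_accounts, personnel_addresses))
```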


Risky academic software development

Woody <WWEAVER%DREW.BITNET@CUNYVM.CUNY.EDU>
Tue, 17 May 88 15:22 EDT
I think this verges on the RISKy.  In the MARCH/APRIL 1988 issue of _ACADEMIC_
_COMPUTING_ subtitled "Covering Computer Use In Higher Education" there is
an article on page 30 by Dennis J. Moberg of Santa Clara University.  The
article is titled "The Last Hurdle: Some Lessons For Software Developers Who
Plan To Market Their Product With Academic Publishers".  Column 2, paragraph
3 and 4 are

    We decided it was really time for us to put a proposal together, so that's
  what we did.  We got stuck, though, on two scores.  First, we were reluctant
  to send out the prototype of our product for fear that some reviewer
  somewhere would rip us off.  Perhaps we were cynical about the level of
  ethics in the academic community about software copying, but we were worried
  that someone somewhere would copy our disks and immediately start using them
  with their students without permission.  Obviously, every publisher needs
  to have prototypes reviewed, so we found ourselves vulnerable.  Ultimately,
  we decided to plant a worm in the software that allowed reviewers only
  four boots.  That trick gave us the peace of mind to go on.
    Lesson 2.  [in red]  If you are worried about protecting your software
  from being stolen by unscrupulous reviewers, plant a worm in it.  [in bold]
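
The "four boots" mechanism quoted above is essentially a persistent launch counter. A hedged sketch follows; the counter-file name and location are assumptions, and this benign variant only refuses to run once the limit is spent — it does not (and should not) touch any user data.

```python
# Sketch of a launch-count limiter (the benign core of the "four boots"
# idea). The counter file is a hypothetical plain-text location.

import os

COUNTER_FILE = "boot_count.txt"  # hypothetical location
MAX_BOOTS = 4

def check_and_increment(counter_file: str = COUNTER_FILE,
                        limit: int = MAX_BOOTS) -> bool:
    """Return True if this launch is allowed, False once the limit is spent."""
    count = 0
    if os.path.exists(counter_file):
        with open(counter_file) as f:
            count = int(f.read().strip() or "0")
    if count >= limit:
        return False
    with open(counter_file, "w") as f:
        f.write(str(count + 1))
    return True
```

Note that any reviewer who finds the counter file can trivially reset it, which is one of several reasons such schemes give weak protection.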

The risks I see here are philosophical ones to the academic community.  My
first reaction was one of outrage: that an academic, writing to the
non-technical community, would suggest that developers "plant a worm" in their
software.  In boldface type, yet.  It reeks of the developer who put a worm
in a business spreadsheet (was it an old version of Lotus?) that destroyed
all the data files it could find if it detected that its copy protection had
been broken.  I don't want anyone intentionally writing trojan programs,
especially in
an academic environment.

The second one is the risk we have let turn into a real problem: by condoning
software piracy at the academic level, we have created an atmosphere where
developers do not feel safe about their product.  This means that packages are
not being written because developers don't feel there will be a profit in it.

What can be done about this?  I certainly want to write a letter to Dennis
J. Moberg of Santa Clara University and discuss alternatives to worms.  But
what can the academic community as a whole and the computing community in
particular do to abate this problem?
                                                         woody


AIRBUS [RISKS-6.85]

Steve Philipson <steve@ames-aurora.arpa>
Tue, 17 May 88 11:48:42 PDT
RE: Robert Dorsett (mentat@huey.cc.utexas.edu) writes:  [on the Airbus A-320]
   reliability.  They have multi-level redundancy on many systems, and enforced
   strong separation of design teams for the redundant equipment.  They used
   different manufacturers for each level of redundancy, and made sure that
   there were no common members of the software development teams.  ...

How interesting.  This approach sounds rather like using random algorithms
to generate a sequence of random numbers.  One would think that the best
approach to redundancy would be to design the redundant systems with
detailed knowledge of how the primary system works.  One would endeavor to
make the backup systems as different as possible from the known primary
systems, not use any common assumptions (as much as that's possible), and
not use common software/hardware.  If the approach reported is actually how
the systems were developed, there is no guarantee that the systems do not
share common assumptions, algorithms, etc., and thus common failure modes.
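
The redundancy scheme under discussion is N-version programming with voting, which can be sketched in a few lines. The three toy "versions" below are stand-ins, not avionics code; the point is that voting detects disagreement, not shared wrong assumptions.

```python
# Sketch of N-version majority voting. If all versions share the same
# wrong design assumption, they agree on the same wrong answer and the
# vote passes it -- the failure mode the text worries about.

from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return sum(x for _ in range(x)) if x >= 0 else x * x

def majority_vote(value, versions):
    """Run each independently developed version and return the majority
    result; raise if no majority exists."""
    results = [v(value) for v in versions]
    answer, count = Counter(results).most_common(1)[0]
    if count >= (len(versions) // 2) + 1:
        return answer
    raise RuntimeError("no majority: versions disagree")

print(majority_vote(5, [version_a, version_b, version_c]))  # 25
```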

I think I'll try to schedule my flights to avoid the A-320 for awhile.

RE: mcvax!geocub!anthes@uunet.UU.NET writes: ["Sciences & Vie Micro"]

   When taking the plane [the A320], what is the probability that it will crash
   due to a software error? One chance in a million? Wrong! One chance in a
   billion and that for each hour of flight.

One of our defense ministers had a similar comment about a major military
system (can't remember if it was the B1 or SDI).  A reader pointed out that
even an ANVIL doesn't have that high a level of reliability.  Happy flying!
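
Some rough arithmetic puts the quoted 10^-9-per-flight-hour figure in perspective: over a whole fleet's lifetime of flight hours the cumulative probability is no longer negligible-looking. The fleet size and utilization below are illustrative assumptions.

```python
# Back-of-the-envelope: probability of at least one failure given an
# assumed per-flight-hour failure probability of 1e-9, accumulated over
# illustrative fleet-lifetime hours.

p_per_hour = 1e-9

def prob_at_least_one_failure(hours: float, p: float = p_per_hour) -> float:
    """P(at least one failure in `hours` independent flight hours)."""
    return 1.0 - (1.0 - p) ** hours

# e.g. 500 aircraft * 3000 hours/year * 20 years = 3e7 fleet hours
fleet_hours = 500 * 3000 * 20
print(prob_at_least_one_failure(fleet_hours))  # roughly fleet_hours * p, ~3%
```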

And again, Robert Dorsett (mentat@huey.cc.utexas.edu) makes an excellent
contribution on the various flight navigation systems.  I have a few
"nit picks" that I hope will not detract from this text-book quality summary.

   As one might guess from the rest of this article, we are moving away from
   pilotage and back towards dead reckoning as a primary means of flight--with
   the exception that it is all automated and the pilot is largely out of the 
   loop.

Well, not quite.  Modern systems incorporate inertial navigation equipment
which is far more accurate than simple dead reckoning.  INS systems do
internalize navigation functions to the aircraft, but systems accept external
input for recalibration on a frequent basis. (Robert does mention this later.)


A320 update

<mnetor!utzoo!henry@uunet.UU.NET>
Wed, 18 May 88 14:47:11 EDT
> ... British Airways accepted its first A320 a couple of weeks ago...
> Information that I have suggests they don't really like the airplane,
> but can't get out of their commitments.

This seems unlikely, since Airbus Industrie's A320 order backlog is the
biggest in jet-airliner history, and they would jump at the chance to
take back some early delivery slots from an unhappy customer, in hopes
of using them to hook some happy customers.  Flight International reports
that Airbus told BA so, in so many words, and BA has been much more
positive about the A320 since.  For the piece of glitch-plagued junk that
some people claim the A320 is, it is selling awesomely well.

Henry Spencer @ U of Toronto Zoology   {ihnp4,decvax,uunet!mnetor}!utzoo!henry


Airbus 320: risks of translation

Mark Mandel <Mandel@BCO-MULTICS.ARPA>
Wed, 18 May 88 10:10 EDT
RISKS DIGEST 6.85 includes a brief excerpt translated from the
French-language "Sciences & Vie Micro" referring to chances of a crash
due to a software error:
  "One chance in a million?  Wrong!  One chance in a billion and that
   for each hour of flight!"

I haven't seen the original, but note well that the French "billion" =
the USA "trillion" (10**12).  The British, with whom we think we share a
language, also call it a "billion".  Our USA "billion" (10**9) is a
French "milliard".  I suppose this makes an argument for using
scientific notation, rather than words, for all large numbers.
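
The scale confusion described above can be written out explicitly (using the long-scale French/British usage Mark describes; the mapping is the only content here):

```python
# The billion/milliard trap, spelled out. Scientific notation sidesteps
# the naming entirely.
scales = {
    "US billion":      1e9,
    "French milliard": 1e9,
    "French billion":  1e12,
    "US trillion":     1e12,
}
assert scales["French billion"] == scales["US trillion"]
assert scales["US billion"] == scales["French milliard"]
```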

((My employer, Honeywell Bull, Inc., is not responsible for anything I
think, say, do, or eat.))

 [Early RISKS were inundated with milliard canards.  This problem is
 destined to haunt us forever unless we can say thousand million, 
 million million, or use the international standard!   PGN]


Re: Navigation and KAL 007

Joe Morris <jcmorris@mitre.arpa>
Tue, 17 May 88 12:07:18 EDT
In RISKS 6:85, Robert Dorsett discounts the possibility of the KAL 007 flight
having been off-course due to an error in the data entry of its initial
airport co-ordinates:

>                                                        However, I do not
> remember any major gripes reported about the KAL flight by ATC or other
> authorities, which should have come up if this had happened, since the
> airplane would have been off course practically from the minute it started
> its enroute climb.

Not really.  I don't have any experience flying in Alaskan territory or the
adjacent international waters, but I would expect that westbound flights
would be routed over the normal radionavigation fixes until there are
no more within usable range, and only then would the flight be cleared for
do-it-yourself navigation.  The INS would not be used until the last station
was passed, so the path which could be seen by ground stations would match
the clearance.  Given the long baseline for the overwater leg where the
INS is used, the error in the course set (incorrectly) by the autopilot
could have been invisible to the long-range radar screens.

If the crew instructed the autopilot to fly to the (far-away and incorrect)
waypoint directly instead of telling it to fly the proper route, it would be
possible for the crew to fail to recognize the problem.  They might have
dismissed the right-of-anticipated-bearing situation as representing a strong
unforecast wind from the north.  If the autopilot had been set to follow
the planned route then the pilot's instruments would have indicated an
extreme left-of-course situation; a fly-to-defined-point command probably
showed an on-course condition since the airplane would be doing exactly
what it was told, and had no reason to display an error warning.
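
Joe's point about a small heading discrepancy going unnoticed can be quantified roughly: the cross-track offset grows steadily with distance flown. The flat-earth geometry and the leg length below are illustrative assumptions, not a reconstruction of KAL 007's actual route.

```python
# Back-of-the-envelope: offset from the planned track after flying a
# long leg on a heading that is off by a few degrees. Flat-earth
# approximation; distances in nautical miles are illustrative.

import math

def cross_track_offset(leg_nm: float, heading_error_deg: float) -> float:
    """Offset (nm) from the planned track after flying `leg_nm` on a
    heading that is off by `heading_error_deg`."""
    return leg_nm * math.sin(math.radians(heading_error_deg))

# A few degrees of heading error over a long overwater leg:
for err in (1.0, 3.0, 5.0):
    print(err, round(cross_track_offset(2000.0, err), 1))
```

Even a one-degree error over a 2000-nm leg puts the aircraft tens of miles off track while every cockpit indication of a direct-to command still reads "on course".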

One potential RISK here is in the analysis: the flight followed its planned
path while under radar surveillance, but since we don't know when the
pilot began using the INS and its suspect data we can't say for sure what
part the INS played in the tragedy.
