The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 5 Issue 09

Thursday, 9 July 1987

Contents

o BIG RED, ICEPICK, etc.
David Purdue
o Air Traffic (out-of?) Control
PGN
o Cause of the Mysterious Bay Area Rapid Transit Power Outage Identified
PGN
o Sprint access code penetration
Geof Cooper
o Eraser's edge
Martin Harriman
o Hardware/software interaction RISK
Alan Wexelblat
o How to (or how not to) speed up your computer!
Willie Smith
o Re: Aviation Safety Reporting System
Jim Olsen
Henry Spencer
o Re: RISKS in "Balance of Power"
Eugene Miya
Hugh Pritchard
o Info on RISKS (comp.risks)

BIG RED, ICEPICK, etc.

David Purdue <munnari!csadfa.oz!davidp@uunet.UU.NET>
Tue, 23 Jun 87 12:15:35 est [JUST ARRIVED, 8 JULY 87!]
         [David found some background on Donn Parker's item in RISKS-4.94.
         ``Big Red'' seems to be a Trojan horse, not a virus, but then
         the article contains a few other technical curiosities as well.  PGN]

>   Date: Tue 2 Jun 87 11:16:51-PDT
>   From: DParker@Stripe.SRI.Com <Donn Parker>
>   Subject: Australian Computer Crime                   [From RISKS-4.94]
> 
>   A sophisticated computer crime occurred in Australia recently...  A 
>   disgruntled employee modified PC circuit boards.  One called "Icepick" 
>   attacked ACF-2 on an IBM mainframe.  The other called "Big Red" was 
>   used in a virus attack.

From "The Australian", Tues, 14th April 1987, page 41.

LOCAL CRIME TEAM CRACKS THE RIDDLE OF BIG RED
=============================================

The lethal computer virus known as Big Red has been tracked down and beaten
by a team of Australian computer crime investigators, solving a riddle that
has eluded foreign experts.  The so-called virus has been held responsible
for a string of computer disasters in several countries since it was first
detected in a NASA system in the United States in 1985.  It is generated by
an advanced technology device planted in computer systems.  At least 50 Big
Red devices have been found in the US, England and Australia in banking and
information-handling systems, but have never been cracked before.

Mr Stuart Gill, head of the Australian team that cracked the virus,
described Big Red as a "solid state data diddling device capable of breaking
encryption devices and corrupting data".  Mr Gill said his team had found
Big Red in the computer system of a "medium-sized Australian company that
handles very sensitive information".  He said the device was being used to
capture and store encrypted word-processing data in the contracts division
of the NSW-based company.

It was embedded in the thick resin that had frustrated previous attempts to
examine the inner workings of the system.  "It is the first time anywhere in
the world that somebody has been able to find one in an operating
environment and find out how it works," Mr Gill said.

The investigation exposed the actual workings of the system, including
its circuitry, how it is connected with computer systems, the interface
with security control systems and the device's power source.

It confirmed that Big Red, frequently regarded as a part of industry
mythology, operates as a parasitic operating system within a host
environment.  "A lot of rumours and a lot of folklore have built up around
Big Red," Mr Gill said.  "The device is not as sophisticated and
comprehensive as it has been rumoured to be."  

There is now no doubt that Big Red has the power to transfer encrypted data
from legitimate files into "invisible" files, which can be accessed by users
without disturbing the host's access control and encryption systems.

Mr Gill also demonstrated another suspect device, designed to function as a
stand-alone system and probably used to operate an illegal gambling machine
disguised as a computer game.

Both devices were found by an investigation team led by Mr Gill, one of
Australia's leading experts on the unauthorised use of computer systems.
His five-member investigation team took 19 hours to uncover, analyse,
anaesthetise and remove Big Red from a host computer system where it had
been planted inside a terminal.  The team, part of the Melbourne-based
Comreco Computer Security, was called in by the company involved after
management, using an audit trail, had discovered that the host system was
being used after hours.  

Half pages of encrypted wordprocessing data were "just disappearing off the
face of the Earth".  The disappearance of the pages resulted from a
malfunction in the Big Red device, which normally "echoes" the appropriated
data back into its legitimate file, leaving no sign of unauthorised
activity.  "As far as I can tell, part of the circuit was burnt out.  It was
there for about three months," Mr Gill said.

A circuit test procedure revealed a trace of circuit activity that was not
part of the normal system's operation and the team "found the activity of
Big Red as it detected that the normal system was being shut down".  "I
thought to myself, now that I have you, the question is where the hell are
you?  The system had 50 terminals," Mr Gill said.

Big Red was found in an unused terminal, where Mr Gill suspects it was
planted by someone using "the old `I'm here to service the PCs' lurk".  "My
advice to anybody is that if you have a workstation that is going to be
unused for any given length of time, pull it out," he said.

The device was identified using several criteria, including the small red
LED, which gave the system its name.  It lights up when the system is
operational.  "Its location in the communications port of the terminal, the
resin casing and the red circuit board were all pointers to Big Red.

"We found it after twelve hours and it took another 7 hours to actually
determine what it was doing," Mr Gill said.  "I wasn't sure whether to
remove Big Red from the system or bring the computer here (to Comreco's
office) and surgically remove it.  "Sparks flew when we pulled it out but
that was all."

[The article goes on to say that this shows that Australian companies are
capable of coping with high tech security, and that if American companies
are thinking of setting up shop here, they will meet stiff competition.]

I can't find a more technical description of Big Red, or any information
on whether they caught the person who installed the device.
                                        DavidP

Mr. David Purdue       Phone ISD: +61 62 68 8165
Dept. Computer Science         Telex: ADFADM AA62030
University College  ACSNET/CSNET: davidp@csadfa.oz
Aust. Defence Force Academy UUCP: ...!seismo!munnari!csadfa.oz!davidp 
Canberra. ACT. 2600.        ARPA: davidp%csadfa.oz@SEISMO.CSS.GOV
AUSTRALIA              JANET: davidp@oz.csadfa


Air Traffic (out-of?) Control

Peter G. Neumann <Neumann@csl.sri.com>
Thu 9 Jul 87 06:51:31-PDT
The federal panel report blamed the air traffic control system itself for
the Aeromexico crash, although it also blamed the entry of the smaller plane
into the restricted airspace.  It criticized ``limitations'' of the control
system's policy of relying on pilots scanning the skies for other aircraft
instead of advising planes under their control about other planes that are
flying under visual flight rules.  Board members also noted an apparent
weakness in the radar signal, which may have made the small plane less
noticeable.  The safety board tried not to blame either the controller or
the pilots, but said that the controller should have seen the Piper -- it
was on his radar screen, although the controller has maintained it was not.
(AP item in SF Chron 8 July 1987)

Last night's evening news noted that Delta Airlines has had three
incidents recently -- the pilot who accidentally turned off the engines and
nearly crash-landed in the ocean, a pilot who landed at the wrong airport in
Kentucky, and a very near miss ("They could read each other's markings.").


Cause of the Mysterious Bay Area Rapid Transit Power Outage Identified

Peter G. Neumann <Neumann@csl.sri.com>
Wed 8 Jul 87 14:49:34-PDT
You may recall that BART had an unexplained power failure on 17 May 1987,
unprecedented in its 15-year history.  17 switches opened for five hours,
and then mysteriously closed again.  On 7 July 1987 BART announced that the
cause had been ascertained -- a short circuit resulted from the use of a
battery charger in combination with a faulty switch.  (SF Chronicle, 8 July
1987).


Sprint access code penetration

Geof Cooper <imagen!geof@decwrl.dec.com>
Tue, 7 Jul 87 11:45:27 pdt
The following is excerpted from the Peninsula Times-Tribune, Sunday,
July 5, 1987.  The byline is by Jack Brenan [sp?], Tribune Media Services
(reproduced without permission).

  FORT LAUDERDALE, Fla.  -- When Jim Wyatt received his monthly telephone bill
  last Tuesday morning, he got the surprise of his life -- a $14,720.96
  surprise.  The bill was 2 inches thick, 210 pages long, detailing 4,931
  long-distance calls made on his U.S.  Sprint account.  Total number of
  minutes logged: 72,849.  That's 50.6 days of non-stop talking in 30 days. ...

  John Frusciante, who prosecutes computer related crime for the Broward State
  Attorney's Office said Wyatt appeared to be the victim of one of the larger
  long-distance fraud cases in recent memory.  ...  U.S. Sprint spokesman
  Steve Dykes said officials had not started investigating the bill yet, but
  that Wyatt probably was victimized by a hacker who used a computer to obtain
  his secret access code and pass it around.  ...  "This is something we take
  very seriously.  We have been beefing up our security department," Dykes
  said.  "It's about a half-billion-dollar -- that's with a B -- problem a
  year for the industry."  ...  "Most of the time, a computer is utilized to
  attempt to locate valid access codes using an automatic dialer.  In essence,
  you put it on automatic pilot and let it do its thing," Frusciante said.

  Wyatt ...  said he made his only concern clear on the telephone [to 
  U.S. Sprint] before he hung up. "I'm not going to pay the bill," he said.

The article is amusing, but it raises an interesting concern.  The
response of the U.S.  Sprint people is to "beef up security." If their
system was penetrated as they believe, the problem is probably better
solved by improving the lock rather than looking for the thief.

The activity profile of an auto-dialer that is trying to find legal
access codes is VERY different from that of a legitimate customer.  It
should be easy to have the computer foil such attempts (by holding the
circuit open for 30 seconds and disconnecting after N attempts), to
confuse such attempts (by always returning a FAILED code after the Nth
attempt) or even to help apprehend the perpetrator (by flagging the
line for a trace, or even tracing it automatically if this is legal). 
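Geof's three countermeasures are easy to sketch in code.  The following is an
illustrative toy only -- the threshold, the hold time, and all the names are
invented assumptions, not anything U.S. Sprint's switches actually ran:

```python
# Sketch of the suggested countermeasures: after N failed access-code
# attempts from one line, hold the circuit open, return FAILED even for
# valid codes (to confuse the auto-dialer), and flag the line for a trace.
# All thresholds and names here are hypothetical.

MAX_FAILURES = 3        # the "N" in the text
HOLD_SECONDS = 30       # hold the circuit open to slow the auto-dialer

failures = {}           # calling line -> consecutive failed attempts

def check_access_code(line, code, valid_codes, flag_for_trace):
    """Validate an access code, throttling suspicious calling lines."""
    if code in valid_codes and failures.get(line, 0) < MAX_FAILURES:
        failures[line] = 0
        return "ACCEPTED"
    failures[line] = failures.get(line, 0) + 1
    if failures[line] >= MAX_FAILURES:
        flag_for_trace(line)   # help apprehend the perpetrator
        return f"FAILED (hold circuit {HOLD_SECONDS}s)"
    return "FAILED"
```

The key point is the first condition: once a line has exceeded the threshold,
even a *valid* code draws a FAILED response, so the auto-dialer learns nothing
from continuing.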

Even if the penetration in this case was not so trivial, the article seems
to indicate that the people at U.S. Sprint believe such a penetration is
possible.  Perhaps there should be legal recourse for customers who are
inconvenienced by such inadequate security.
                                                  - Geof Cooper


Eraser's edge

Martin Harriman <MARTIN%SMDVX1%sc.intel.com@RELAY.CS.NET>
Wed, 8 Jul 87 10:10 PDT
I am more familiar with the internals of the GM system, but the GM and Ford
engine control computers have enough in common that this should be true for
both:

>   1) If disconnecting the negative post will erase the computer's memory,
>      what happens when your battery dies?  Or _you_ yourself take the 
>      battery off for some reason?  Does that also erase it, or does taking
>      the positive post off within 45 seconds somehow keep you from losing 
>      the memory?

  If you lose battery power, the computer's volatile memory is erased.  It
doesn't matter whether the battery dies, explodes, is disconnected, or the
computer falls off the firewall.  Disconnecting the ground (negative) post
has been the preferred way to remove power on cars since long before the
computer era; I believe this is simply because the negative post is less
cluttered.

>   2) From that, what exactly do you lose when the memory is erased?

  The GM computer keeps a "crib sheet" of mixture data (indexed according
to operating conditions) so that it can get close to the correct mixture
without waiting for the oxygen sensor feedback, a memory of fault
conditions, and some state information to help it decide whether the
engine is likely to require warm-start or cold-start techniques (for
instance).  All this is lost when the computer loses power.  If you have
disconnected power, the computer will have to rebuild its crib sheet, and
GM suggests that you drive for a few miles at varying speeds and throttle
openings to give it the data it needs (the engine has to be warm, too, since
the computer needs the feedback from the oxygen sensor).

  This crib sheet is basically a set of corrections to the standard data for
your engine and car; the engine computer will work adequately without it,
but performance and driveability improve as the crib sheet data gets better.
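The "crib sheet" idea can be sketched as a small correction table indexed by
operating condition.  All names and numbers below are invented for
illustration and do not reflect the actual GM ECM data layout:

```python
# Toy model of the "crib sheet": learned corrections to the standard PROM
# mixture data, indexed by operating condition and lost on power-down.
# Values and names are hypothetical.

STANDARD_MIXTURE = 14.7      # stoichiometric air/fuel ratio (PROM, non-volatile)

crib_sheet = {}              # (rpm_band, load_band) -> learned correction

def commanded_mixture(rpm_band, load_band):
    """Standard PROM value plus any learned correction for this cell."""
    return STANDARD_MIXTURE + crib_sheet.get((rpm_band, load_band), 0.0)

def learn_from_o2_sensor(rpm_band, load_band, measured_error):
    """Nudge the stored correction toward what the oxygen sensor reports."""
    old = crib_sheet.get((rpm_band, load_band), 0.0)
    crib_sheet[(rpm_band, load_band)] = old + 0.25 * measured_error

def power_loss():
    """Disconnecting the battery wipes volatile memory: the learned
    corrections vanish, but STANDARD_MIXTURE survives in PROM."""
    crib_sheet.clear()
```

This is why the post-reconnect drive at varying speeds and throttle openings
matters: each (speed, load) cell of the table has to be revisited before its
correction is relearned.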

  Incidentally, on cars with electronic dash displays, the odometer is an
electronic display of the value on a mechanical counter buried in the
dash somewhere (even on cars like the GM C-bodies which have electronic
speedometer/odometer systems, where the mechanical counter is driven by
an electronic circuit).  The GM C-bodies are "Chrysler resistant" (so to
speak), since the same signal feeds the engine control computer and the
odometer circuit; disconnecting the odometer (unless you get fairly
fancy) also disconnects the engine computer, and this causes things not to
work very well (the engine computer really likes to know how fast the
car is moving, and gets upset if you're sitting at wide-open throttle
and zero mph for hours at a time).

>   3) Is this mentioned by Ford manuals or any shop manuals (i.e. Chilton's)?
>      I know dealers don't mention this - my family just bought a Sable, and
>      I would presume that the computers are basically the same, and our 
>      salesman never mentioned it.

  The GM shop manuals explain this quite clearly.  It is possible to read
most of this information out (GM cars have a little connector under the
dash which includes a serial line to the computer), and some of it is
quite valuable for service purposes (for instance, the crib sheet will show
if the engine is running unusually lean or rich, which may indicate a leak
in the intake system).

  The GM computer is quite paranoid about its sensor data, too, and a large
chunk of the ECM code is just checking the sensor values to see if they
make sense; if the ECM decides something's strange, it records a fault
code and lights the "Service Engine Soon" indicator on the dash.  The
fault codes can be read out through the connector, and the service manuals
have flow charts for tracking down the problems indicated by the codes.
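The sanity-checking logic described above amounts to range tests that latch a
fault code.  Here is a minimal sketch; the sensor names, plausible ranges, and
code numbers are all invented, not real GM diagnostics:

```python
# Hypothetical sketch of ECM sensor sanity-checking: each reading is
# compared against a plausible range; an out-of-range value stores a
# fault code and lights the dash indicator.  Ranges and codes are invented.

SENSOR_RANGES = {
    "coolant_temp_c":   (-40, 150),
    "throttle_pos_pct": (0, 100),
    "o2_voltage":       (0.0, 1.1),
}

FAULT_CODES = {"coolant_temp_c": 14, "throttle_pos_pct": 21, "o2_voltage": 13}

stored_faults = set()    # readable later through the diagnostic connector

def check_sensor(name, value):
    """Record a fault code if a sensor reading doesn't make sense."""
    lo, hi = SENSOR_RANGES[name]
    if not lo <= value <= hi:
        stored_faults.add(FAULT_CODES[name])
        return "Service Engine Soon"
    return "OK"
```

The stored codes persist (until power loss) so a mechanic can read them out
through the under-dash connector and follow the manual's flow charts.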

  I'd be surprised if the average car salesman knew much about the engine
control computer; even if he or she did, I doubt the average car buyer would
be interested.  I'm not surprised the salesman never mentioned it.

>   4) What does this do with emissions control?  Some counties in Maryland
>      have regular testing - would you still pass with the fuel being mixed
>      differently?

  The volatile memory (the stuff you lose when you disconnect power) is (at
most) a set of minor corrections to the standard settings for your combination
of engine, transmission, and options.  Once the engine is warm, and you've
driven a few miles and given the computer a chance to rebuild its tables,
you will be (more or less) back to the state you were in before the power
failure.  The standard settings are all in non-volatile memory (PROMs in
most cases) which won't be affected by power loss.

  --Martin Harriman  <martin@smdvx1.sc.intel.com>


Hardware/software interaction RISK

Alan Wexelblat <wex@MCC.COM>
Wed, 8 Jul 87 18:00:25 CDT
Friends of mine were recently stung by an unanticipated problem in the
interaction of the hardware and software.  They were using a random
number function which was not truly random.  One way to more closely
approximate true randomness was to get a new seed from the system
clock for each call to the generator.

When the software was being debugged, this worked fine.  However, when
the debugging was turned off, the random numbers suddenly started
repeating (in a predictable and non-random way).

It turned out that when the software ran at full speed (without the
debugging code to slow it down) it sampled the clock too quickly.
Thus, the seed number was not changed and the random-number generator
would produce the same number.

This bug was quite difficult to catch because it depended on the speed
of a series of calls to the clock-reading function.  This speed was not
only slightly different from execution to execution, it depended on
the system load.  And, of course, it changed as soon as you started to
monitor it.
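The failure mode is easy to reproduce: reseeding a generator from a coarse
clock on every call yields identical "random" numbers whenever consecutive
calls land in the same clock tick.  This is a self-contained illustration,
not the code Alan's friends were using:

```python
# Demonstration of the reseed-every-call anti-pattern described above.
import random
import time

def bad_random():
    """Reseed from the system clock on every call.  With a coarse
    (1-second) clock, calls within the same tick get the same seed
    and therefore produce the same 'random' number."""
    random.seed(int(time.time()))
    return random.random()

def good_random(rng=random.Random()):
    """Seed once, then let the generator advance its own state."""
    return rng.random()

# At full speed, consecutive bad_random() calls repeat; with debugging
# code slowing the loop past a clock tick, the bug would hide itself.
fast_bad  = [bad_random() for _ in range(5)]
fast_good = [good_random() for _ in range(5)]
```

Note the Heisenbug property Alan mentions falls out naturally: anything that
slows the loop past one clock tick (a debugger, instrumentation, system load)
makes the seeds differ and the symptom disappear.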
                                --Alan Wexelblat
UUCP: {seismo, harvard, gatech, pyramid, &c.}!sally!im4u!milano!wex


How to (or how not to) speed up your computer!

Willie Smith, LTN Components Eng. <w_smith%wookie.DEC@decwrl.dec.com>
08-Jul-1987 1841
From Digital Review April 6, 1987 p.75

           "Victims Sought in Wall Street Fraud"

     "The [S.E.C.], the U.S. Secret Service, and the Arizona attorney
general's office are all investigating a reported $28.8 million computer
fraud (and a bizarre cover-up) alleged [!] to have taken place at a New York
City brokerage house on June 20, 1986."

     To paraphrase a bit, there is a problem during the triple witching day
(hour?) when the volume of transactions skyrockets. This firm's solution to
the resulting computer overload was to "speed up the operation of its IBM
computer system by turning off the applications-level software that recorded
information for the audit trail for each transaction."  A clever clerk set
up 22 bank accounts under fictitious names for himself and selflessly
volunteered for overtime that evening to help clear up the backlog.  He then
sold a bunch of customers' holdings and credited the money to his phony
accounts, transferred the money to a numbered Swiss account, and left for
'vacation'....
     The really scary part about this is that no one knows for sure what
securities he sold, who they belonged to, or how much money he ended up
with! They also don't know where he is, but that's hardly surprising. :+)
They wouldn't have even known that it happened except that another clerk was
working on the same data and saw something change that wasn't supposed to.
The firm is calling it a computer error and "hopes that customer complaints
would allow it to identify and reimburse the victims."

Willie Smith  w_smith@wookie.dec.com  w_smith%wookie.dec.com@decwrl.dec.com
{USENET backbone}decwrl!wookie.dec.com!w_smith


Re: Aviation Safety Reporting System (RISKS-5.8)

Jim Olsen <olsen@XN.LL.MIT.EDU>
Wed, 08 Jul 87 22:26:52 EDT
I got my information on ASRS from the "Pilot's Audio Update", March 1987,
Vol. 9, No. 3 (Educational Reviews, Inc., Birmingham, AL)...


NASA Safety Reporting System

<decvax!utzoo!henry@ucbvax.Berkeley.EDU>
Wed, 8 Jul 87 23:44:17 edt
It would be worth examining the [reporting systems] that do work, and
they're not all in Japan:  both the US and the UK have confidential
aviation-safety reporting systems which get serious use.  One factor which
may be significant is that these systems are *not* run by the people
involved in managing aviation safety! They funnel through third parties (in
the US it's NASA, in the UK it's the aviation-medicine people I think) who
are far enough removed from the ongoing policy wars that they can
convincingly claim to be honest go-betweens.  Pious proclamations of good
intentions often are not enough to convince would-be reporters that their
names won't get back to management.
                            Henry Spencer @ U of Toronto Zoology
                     {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Re: RISKS in "Balance of Power"

Eugene Miya <eugene@ames-nas.arpa>
Wed, 8 Jul 87 11:49:06 PDT
>From: Heikki Pesonen <LK-HPE%FINOU.BITNET@wiscvm.wisc.edu>
>Subject:      RISKS in "Balance of Power"
>There is a computer game about "Geopolitics in the nuclear age" called
>BALANCE OF POWER.  It is sold at least for Commodore Amiga.  I bought one and
>now master the beginners level -- being able to beat Soviet Union. ...

"Beat the SU?"  Chris Crawford, the author of the game, gave a talk about
its development to the Palo Alto CPSR chapter before the game was marketed.
His stated goal (which might have changed) was to have the world
survive 20 years while avoiding thermonuclear war.  If you have beaten the
SU, I suggest sending a nice little letter to Chris telling him how.  The
fact that you are Finnish will be even more of a kick.  Chris thinks, "It's
only a game."  He has been approached by several university Poli Sci
departments about using it as a teaching tool.

The RISK of simulations is quite real.  There exist several "games" which I
hear about in the "shadows."  They are played by non-computer "hard-ball"
people [I was informed (at LLNL) that George Schultz was one].  I can guess
it runs on an Apple II because of portability requirements.  No flashy
graphics.  I doubt if these models are validated in any way.  Chris's game
was given serious scrutiny [the graphics were seen as a major advance, as
well as the use of mice], but it apparently lacks some statistics.  Note: I
have not given you the name of the group which showed me this.  There are
other groups at the Livermore Lab (where I am allowed to visit) which do
things like tactical and strategic nuclear war simulations.  (I have played
the most visible tactical simulation, named JANUS [objective: train
middle-level officers in the use and abuse of nuclear weapons]: Blue team
(me) was overrun by Red team.)  There are also other National Labs, and at
least 24 companies, each employing close to a thousand people, which
simulate matters of policy [RAND perhaps being the best known].

The US and the SU both have a propensity for technological toys.  Their
attraction as adult games (lacking video) is very powerful.  As long
as the older politicians breed a healthy skepticism in the younger
politicians, we are probably okay.

--eugene miya


BALANCE OF POWER

Hugh Pritchard/Systems Programming <PRITCHAR%CUA.BITNET@wiscvm.wisc.edu>
Wed, 8 Jul 87 10:49 EDT
> There may be some risk in
> designing games simulating international affairs, if they are seemingly
> realistic.  Some childish people may take them as the truth.

About 10 years ago, here in the suburbs of Washington, DC, a video game was
introduced which simulated cars chasing pedestrians.  The game was shortly
removed, because people felt that the game was teaching kids that pedestrians
were fair game for motorists.

On the other hand, world conflict simulations may be useful:  I recall a game
used in some psychology experiments.  The game concerned trucks and limited
roadways.  The experimenters noted that the players found that cooperation
between players (sharing the roadways) gained more for each player than
hostile actions.

Note that 45 years ago, the Japanese war-gamed their intended attack on Midway
Island.  In their simulation, they lost.  They proceeded with the attack,
anyway.  They lost the real battle, too.

Hugh Pritchard, Systems Programming, The Catholic University of America,
Computer Center, Washington, DC 20064, (202) 635-5373
