The RISKS Digest
Volume 6 Issue 77

Wednesday, 4th May 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

$15.2 million Pennsylvania lottery scam
PGN
Risks of marketing computer products
Mark Eckenwiler
ERIC and VULT identified
WHMurray
Virus Distribution Idea
Fred McKay
ATM card / Mail Verification
Bruce Howells
Paying Cash to Avoid Records?
Russ Nelson
More on engine overspeed and autothrottle
Leonard N. Foner
More SS# RISKS
Les Earnest
Info on RISKS (comp.risks)

$15.2 million Pennsylvania lottery scam

Peter G. Neumann <Neumann@KL.SRI.COM>
Wed 4 May 88 14:09:30-PDT
  HARRISBURG, Pa. (AP) — Authorities have accused a computer operator from a
company that helps run the state lottery of forging a winning $15.2 million
ticket, and have charged another man with trading it in for the jackpot.
  Mark S. Herbst, 33, of Harrisburg, was arraigned Tuesday, less than a week
after he traded in the ticket for the first $469,989 installment of the prize
from a Super 7 drawing last July 15.  He was jailed in lieu of $50,000 bail.
  Jailed in lieu of bail Monday night was Henry Arthur Rich, also 33, of
Harrisburg.
  Officials alleged Rich used a computer at his firm, Control Data Corp.,
to identify unclaimed jackpots and to print a copy of the unclaimed winning
ticket, which he gave to Herbst to cash in.
  Officials became suspicious, in part because the bogus ticket was printed on
a blank from a Scranton lottery-ticket outlet, while a computer check showed
the actual winner was sold in Bucks County.
                                   [Source: San Jose Mercury News, 4 May 1988]


Risks of marketing computer products

<apollo!eck@csl.sri.com>
Tue, 3 May 88 18:01:00 EDT
I just received some marketing information from Radian Corporation (of Austin,
TX) about their product CHARM ( = Complex Hazardous Air Release Model).

Basically, CHARM provides software simulation of airborne toxic substances
release (isopleths based on cloud density, wind, temperature, etc.).
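For readers unfamiliar with dispersion modeling, the kind of calculation a tool
like CHARM performs can be sketched with the standard Gaussian plume equation.
This is a generic textbook model, not CHARM's actual (and proprietary)
implementation; the dispersion coefficients below use Briggs-style rural
neutral-stability curves, which is purely an assumption for illustration:

```python
import math

def gaussian_plume(q, u, x, y, z, h):
    """Concentration (g/m^3) downwind of a continuous point source.

    q: emission rate (g/s), u: wind speed (m/s), h: effective release
    height (m), (x, y, z): downwind, crosswind, and vertical receptor
    coordinates (m).  The sigma curves below are illustrative Briggs-style
    rural class-D approximations, NOT the parameterization CHARM uses.
    """
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)   # crosswind spread
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)   # vertical spread
    # Gaussian plume with a ground-reflection term (the z + h exponential)
    return (q / (2 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * (math.exp(-(z - h)**2 / (2 * sigma_z**2))
               + math.exp(-(z + h)**2 / (2 * sigma_z**2))))

# Centerline ground-level concentration 1 km downwind of a 10 g/s release
c = gaussian_plume(q=10.0, u=5.0, x=1000.0, y=0.0, z=0.0, h=20.0)
```

An emergency-response package would iterate such a calculation over a grid to
draw the isopleths the brochure mentions.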

Radian states that "[m]ore than 85 users in industry and local, state, and
federal agencies are using CHARM to develop emergency response plans, to train
personnel in emergency response procedures, and to rapidly assess real-world
situations should they arise [sic]."

For some reason, after reading the above I am morbidly amused by the fact
that Radian includes in the License Agreement the usual disclaimers:

    "The program is provided 'as is'..."

    "The entire risk...is with the customer."

    "Radian does not warrant...that the operation of the program
     will be uninterrupted or error-free."

Mark Eckenwiler      eck@apollo.uucp    ...!mit-eddie!apollo!eck
Disclaimer: My comments are provided "as is."  By reading them you
            implicitly indemnify me against claims for loss or damage.

     [Before anyone responds, recall the flurry of RISKS contributions
     begun by Jim Horning's "Risks of Warranties" in RISKS-4.76.  PGN]


ERIC and VULT identified

<WHMurray@DOCKMASTER.ARPA>
Tue, 3 May 88 18:22 EDT
"ERIC" and "VULT" Identified

ERIC and VULT, the specific targets of the SCORES Apple Macintosh virus,
were internal projects at EDS in Dallas, according to EDS spokesman Bill
Wright.  These labels identify proprietary trade-secret programs that were
once used at EDS but are no longer.

While SCORES was specifically designed to destroy these applications, it
would infect anything.

All the above was gleaned from "Macintosh Today," May 2, 1988, which also
contained a highly speculative article entitled "Viruses:  Nothing to
sneeze at."  If you believe this article, computers have seen their day.  In
the future, viruses will make them unusable.

William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
2000 National City Center Cleveland, Ohio 44114                          
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840                


Virus Distribution Idea

<FMCKAY%HAMPVMS.BITNET@MITVMA.MIT.EDU>
Wed, 20 Apr 88 15:09 EST
> 6 Apr 88 15:20:39 CST
> From: Will Martin — AMXAL-RI <wmartin@ALMSA-1.ARPA>
> Subject:  Virus distribution idea [...]
> Now, what immediately occurred to me was, "What a beautiful way to
> disseminate a virus!"

I also recently received an unsolicited request to run an enclosed disk for
the purpose of evaluation.  This disk was from IntelliQuest in Austin.  This
disk was a "User Interface Prototype" reportedly under development by Ashton-
Tate.  Since no AT logos were in place anywhere and I had read all the recent
reports of viruses in RISKS and elsewhere, I was suspicious.  I use an old
Bernoulli Box for my hard disks, so I unmounted them and fully intended to
power down after using the disk.  Upon booting the disk, I was shocked to
see "DRIVE C: NOT READY".  I then placed every write protect possible on the
[[[blanks in received mail]]].  I assume one of the first functions done by the
interface is to check the C: directory.  The program booted, but was unable
to impress me.  I was contacted last week by IntelliQuest and spent about 10
minutes talking to them about the product and my negative opinion of it.  I
am confident that modern day electronic vandals would not spend the time or
money to call me from Austin.  In short, trust the dealer but always cut
the cards.
                                        Fred McKay


ATM card / Mail Verification

"Bruce Howells" <engnbsc%bostonu.BITNET@BUACCA.BU.EDU>
Mon, 25 Apr 88 23:43:44 EDT
My bank recently mailed out new ATM cards to all of its cardholders, mostly
as advertising for a new network.  Familiar sounding RISK?

The way that this bank handled this risk merits mention:  they placed
telephone calls to each of the cardholders to whom they mailed new cards
(at least that's what the voice on the phone told me).

Perhaps such a telephone followup could serve to limit some of the risks
mentioned in previous entries; from personal experience trying to sell
newspapers via telephone in New Jersey, such a verification could be done
quite cleanly, especially since people will be much more willing to
determine if their new ATM card arrived than to subscribe to a newspaper!


Paying Cash to Avoid Records? (Re: RISKS-6.75)

Russ Nelson <nelson@sun.soe.clarkson.edu>
Wed, 4 May 88 12:01:59 EDT
  > ...  Paying cash is the only sure way to avoid this.  [David Chase]

The local videotape rental store has an XT clone w/ a hard disk on
which they keep a record of every tape that you've ever rented.  All
the clerks have access to this information.  Of course, because you're
renting, paying cash is insufficient to preserve your privacy.  Hmmm...
libraries must preserve confidentiality; why not video tape rental shops?


More on engine overspeed and autothrottle

"Leonard N. Foner" <FONER%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
27 Apr 1988 00:48 EDT (Wed)
Since I was the individual who told this story to Joseph, I suppose I
should verify it and add some authenticating details to it.

I was told this story by Professor Alan Epstein of the MIT Aero/Astro
department during a talk of his during MIT's Independent Activity Period of
January 1985.  The talk was titled, "Testing Jet Engines:  Why It Takes All the
Money in the World".  Anyone who really wants to nail this down precisely
should ask him.

The reason this story is so important is that it demonstrates the unfortunate
interaction of several design failures, each of which alone should not have led
to cabin depressurization (not to mention the passenger who went out a rather
small hole).  The aircraft involved was some Boeing flavor, 727 or 747 type.

The first failure was in the crew, which should not have been playing
games by doing this sort of experimentation.  They got very long,
unpaid beach vacations for their conduct.

The second failure was in the autothrottle mechanism itself, which did indeed
read its input from the panel display in the cockpit rather than directly from
the tach in the engine.  I can't imagine what possessed the engineer to read
input from something with a breaker in the path, but that's neither here nor
there.  Even worse than this, though, was in not detecting an obviously
open-loop (i.e., bogus) value of a sensor, and in thus generating wild control
signals that should never have been generated.  (After all, sensors DO fail.)
We saw this sort of failure in the PDP-11's controlling the blast furnace (in
some RISKS about two months ago).  The control circuit should instead have
insisted on some sort of manual intervention (though, as we'll see below, such
manual intervention could not have arrived in time to save the aircraft), or at
least "failed safe" by leaving the engine running at the same speed as before
(and bleating loudly that something's wrong).
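The range check Foner says was missing can be sketched in a few lines.
Everything here is illustrative (the function name, RPM limits, and alarm
string are invented for this example, not drawn from any real autothrottle
design); the point is only that an out-of-range sensor value should freeze the
last good setting and demand human attention rather than drive the control
loop:

```python
def autothrottle_command(rpm_reading, last_good_rpm,
                         min_valid=200.0, max_valid=12000.0):
    """Return (commanded_rpm, alarm) given one tach sample.

    A dead panel feed (e.g., a tripped breaker) reads as zero, which is
    outside any plausible operating range.  Rather than commanding full
    throttle to "correct" the bogus low reading, hold the last good
    setting and raise an alarm for manual intervention.  All limits are
    illustrative only.
    """
    if not (min_valid <= rpm_reading <= max_valid):
        return last_good_rpm, "SENSOR FAULT: holding setting, manual action required"
    return rpm_reading, None
```

With this sketch, a breaker trip feeding a zero reading yields a held setting
and an alarm instead of the open-loop runaway described above.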

The third failure was in the engine testing itself.  Here are the details.
When the breaker was flipped, the autothrottle circuit for that engine went
open-loop.  When the engine reached 109% of maximum rated power (about a second
later), it stalled the compressor blades.  This means the compressor wasn't
compressing efficiently any more, allowing a blast of essentially white-hot air
to come out the FRONT of the engine.

This blast of air started an oscillation in the front set of engine fanblades,
which rubbed on a cowling and started a fire.  The fire fed the oscillation,
because of timing and positive feedback between pressure regions and the flame
at the cowling.  Elapsed time is now about a second and a half from the breaker
being flipped.

After enough abuse (the fanblades were not designed for high-speed oscillation
in this axis), one of the blades of the frontmost fanblade assembly failed at
the root.  Now, jet engines are designed and tested to withstand a blade
failure.  The failed blade is supposed to get chewed up and go out the back of the
jet.  I've watched tests in which they have blown up explosives at the blade
root to simulate just such a failure, in a jet on a stationary test stand at
full power.  Even though the engine is not expected to run after this happens,
it's expected to shut down cleanly without tossing anything radially out the
wall of the engine.

This failure was different, because such tests are not made with the compressor
blades stalled (I suppose that no one ever realized that the jet would be run
stalled, since the normal control channels probably can't run the engine up to
that speed).

Since the compressor was stalled, air was blowing out the front of the jet,
rather than the back.  This forced the broken blade out forwards, at which
point it was no longer constrained by the body of the engine, and was free to
fly off radially---in this case, through the fuselage.  The blade went through
the fuselage less than TWO SECONDS after the breaker was tripped.

The three failures---human, electronic, and mechanical---are an example of how
tightly coupled such failures can be.  They are also an example of just how
fast such failures can occur:  the higher the power level being controlled, the
faster such failures can take place, because there's more energy available to
cause things to fail.  The explosion of the Shuttle was a similar lesson in
power densities.  (For comparison purposes, one 747 on takeoff roll is
generating 400 MW total [100 MW/engine].  An aircraft carrier generates about
120 MW all told; a large nuclear reactor, 1200 MW or 1.2 GW; the Shuttle on
liftoff, about 7 GW.)

Incidentally, while the engine did indeed fail and toss a blade radially, I'm
inclined to believe that the human and control failures were the real failures
here.  Almost any engine can be made to fail if it's purposely driven beyond
its performance envelope (witness the short life of racing car engines, which
run at the ragged edge).  The real problem here was in allowing any AUTOMATIC
control circuit to force the engine outside its envelope.  (I can see why a
human might be given the benefit of the doubt---if the engine is being
overstressed to avoid a head-on collision, for example, I'd rather let the
human do whatever he likes if it might save the aircraft, even at the risk of
blowing something up, rather than keeping the engine nice and safe and letting
it be destroyed [along with the passengers!] in the resulting collision.  If
the engine fails in such a case, well, it wasn't supposed to work under those
circumstances anyway, but if it DOESN'T fail, then allowing deliberate,
considered operation outside its rated envelope might save the aircraft.  But
an AUTOMATIC system should never be given the benefit of such doubt!---because
now you're designing with two sets of inconsistent constraints.)
                                                    <LNF>


More SS# RISKS

Les Earnest <LES@SAIL.Stanford.EDU>
02 May 88 1958 PDT
In RISKS 6.76, Stanley Quayle described another intrusive Social Security
Number practice.  Here is an account of some of the RISKs of _not_ giving
out your SS# freely.  Overall, I find these risks more acceptable than
those on the other side, but there have been times . . .

For the last decade, I have declined to give my social security number to
anyone other than those that are entitled by law to have it.  I have been
refused credit on a number of occasions because of this, but have
encountered no serious problems in getting credit that I needed.  For
example, I have a full complement of credit cards that have no annual fees.

Some of the larger credit data banks, such as the one operated by TRW,
apparently require the SS# in order to access _anything_.  While some
organizations refuse to deal with me, others with more sensible policies
simply check my banking and mortgage references, which show a perfect
credit history, and give me credit.  (I have a sneaking suspicion that one
or more of my credit references may have given away my SS# without
authorization, but I know of no way to determine this.)

When I returned to Stanford University in 1985 and signed up for medical
and dental insurance, I was told that the identifier that would be used
for these services was my SS#.  "Over my dead body," I said.  I pointed
out that doing so would tie my medical records to my government and
financial records and that I preferred to keep these things separate.

The Benefits people explained that "Stanford has contracts with the
insurance companies that require that we give them your Social Security
Number."  I pointed out that they had a contract with me to provide
medical insurance, that I consider my SS# to be confidential, and that it
was up to them to solve this problem.  I also pointed out that it would be
relatively easy to add one field to the personnel data records for an
"Employee ID" that could be used instead of SS#.

Incidentally, I believe that the insurance companies prefer to use SS#
instead of employee number because it makes it easier for them to cross-
connect medical records from different periods, which is occasionally useful
in fraud investigations.  Of course, this same feature also makes it easier
to find medical reports for the purpose of political or other harassment.

The Benefits people dithered over the problem I posed for a couple of
months while I harassed them.  They finally decided that instead of
augmenting the Personnel database, which they apparently regarded as
next-to-impossible, they would give me a phoney SS#, which would be
changed to the correct one just before they sent W-2 forms to the
government at the end of the year.  I was suspicious that this wouldn't
work and said so, but agreed that it would theoretically meet my needs.

The Benefits office asked one thing of me:  that I not tell anyone else
that they were doing this.  They were apparently afraid that there would
be a mass of troublemakers who would exceed their capacity to cope.  They
subsequently demonstrated that they were not even able to cope with me.

I did manage to get my dental checkups paid for the first year, but I had
a hunch that I was not home free.  At the end of the year, I called
Accounting to make sure that my earnings would be reported to the
government under my true SS#.  "Oops," was the reply, "We'll send them a
correction on that."

A few months later I received a copy of a letter to Stanford from TIAA-CREF,
which manages my retirement account, asking where the bizarre SS# came from.
Fortunately, they had somehow been able to figure out who I really was.

Things went fairly smoothly after that until Benefits decided to give me
another phoney SS# in 1986.  That one caused the dental charges to bounce,
so they gave me another phoney number, which also didn't work.  They then
announced that the only way to get those bills paid was for me to use my
true SS#, which they acknowledged they had given to Delta Dental.  I sent
them a rather nasty and threatening note and they subsequently managed to
get the bills paid and to make the new phoney SS# work.

I understand that the Personnel Department is now in the process of
converting to Stanford employee numbers instead of SS# as the basic
identifier, which they should have done long ago.  I would like to think
that I helped stimulate this conversion, but there is no direct evidence.

It is clear that I brought most of the problems described above on myself.
I would (and probably will) do it again.  If you wish to straighten out
the world, you have to do it one piece at a time.
                                                        Les Earnest
