The RISKS Digest
Volume 6 Issue 62

Friday, 15th April 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Neural Hype
Brian Randell
Bay Meadows Sued Over Computer Betting Glitch
PGN
Carl's Jr. alleged insider trading caught "by computer"
Dave Suess
DoD simulations
Gary Chapman
The Israeli virus bet
Y. Radai
Types A and B: doesn't anyone read CACM?
Eric Roskos
Accountability
George
Info on RISKS (comp.risks)

Neural Hype

Brian Randell <Brian_Randell%newcastle.ac.uk@NSS.Cs.Ucl.AC.UK>
Fri, 15 Apr 88 17:32:31 WET DST
The following article (reprinted without permission) appears - I am
embarrassed to say - on the front page of the April 14 issue of The Times, no
less. I hope that it is largely based on the reporter's imagination and his
misunderstanding of what he was told by the Imperial College researchers - so
that it is the reporter rather than the researchers who constitutes the
"computer-related risk to the public"!

Brian Randell, Computing Laboratory, University of Newcastle upon Tyne
UUCP  = ...!ukc!newcastle.ac.uk!Brian_Randell    PHONE = +44 91 232 9233


COMPUTER IN A TANTRUM HOLDS UP "BABY" PROJECT

By Robert Matthews, Technology Correspondent

A computer built at Imperial College, London as a crude simulation of the human
mind has startled its creators by going on strike and refusing to cooperate
with their work.

Mr Michael Gera, a scientist in the Neural Computing Group at the college, said
yesterday that the computer, known as a neural net, had simply refused to carry
on with its lessons when it was given a task it considered was beneath its
capabilities: "You might say it had an attack of boredom".

Mr Gera and his colleagues had designed the machine to test a theory about the
way in which human babies learn to communicate. They attempted to simulate the
workings of the baby's mind by instructing the computer to turn itself into a
"neural net", a collection of dozens of electronic devices which mimic the
operation of neurons, or brain cells.

Some theories in psychology claim that babies learn to talk to their parents by
babbling randomly, and looking for responses. For example, babbling that sounds
like "mama" wins a response, with mother pointing to herself. Then baby
remembers that "mama" corresponds to the object doing the pointing.

In the first set of experiments with the machine at Imperial, Mr Gera switched
on the neural network and let it babble away. When the machine hit upon a
sequence of babbling that Mr Gera had decided was the electronic equivalent of
a sensible word, the machine was given a suitable response. Sure enough, the
machine soon picked up a crude "vocabulary".

Mr Gera has gone a step further in a second set of experiments, still under
way.  The machine is told that a specific object it is being shown corresponds
to the electronic equivalent of, say, a black cat. Later, another type of cat
is shown to the machine, which is then expected to recognise quickly that this
new object is also a cat, and say the word accordingly.

However Mr Gera has made the unnerving discovery that unless the objects shown
to the machine are sufficiently different and exciting, it goes into a huff. He
said: "It just sits there and goes on strike".

The Imperial team, led by Professor Igor Aleksander, has seen the machine throw
its weight about on a number of occasions.

The long-term aim of the research is to develop neural nets capable of tasks
still beyond today's most powerful computers. Those "supercomputers" are
excellent at tasks such as solving equations, but virtually useless at tasks
requiring intelligence.

However, events suggest that the next generation of computers will have to be 
taught good behaviour before they can be given responsibility.

Mr Adrian Rogers, another member of the team, said: "Neural nets are a little
unruly sometimes. We don't know enough about them to put them in charge of,
say, a nuclear reactor."
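
The learning procedure the article sketches -- babble at random, reward any
babble the "parent" recognizes, and let rewarded babbles become vocabulary --
amounts to a very simple loop.  The toy sketch below is purely illustrative:
it is not based on the Imperial College system, and the vocabulary and
syllables are invented for the example.

# Toy sketch of the babble-and-reward loop described above; NOT the
# Imperial College neural net.  The "parent" responds to any babble that
# happens to match a word it knows, and responded-to babbles are counted
# as acquired vocabulary.
import random

PARENT_WORDS = {"mama", "dada"}            # hypothetical target vocabulary
SOUNDS = ["ma", "da", "ba", "go"]          # hypothetical babble syllables

learned = {}                               # babble -> number of responses

for trial in range(10000):
    babble = "".join(random.choice(SOUNDS) for _ in range(2))
    if babble in PARENT_WORDS:             # parent responds, e.g. by pointing
        learned[babble] = learned.get(babble, 0) + 1

print("vocabulary acquired:", sorted(learned))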


Bay Meadows Sued Over Computer Betting Glitch

Peter G. Neumann <NEUMANN@csl.sri.com>
Fri 15 Apr 88 11:03:50-PDT
Peter Frankel, a San Mateo CA real estate investor, on 29 June 1987 placed
$9600 in cash at the parimutuel window at Bay Meadows racetrack on a
Pick-Nine, 20 minutes before post time.  Despite several tries, the clerk was
unable to coax the computer system into issuing a ticket for the bet.
However, the window manager held on to his money and computerized betting
card.  HE PICKED ALL NINE CORRECTLY, but was told he could not collect
because he did not have a ticket.  The track lawyers (said his lawyer,
Monzione) "got cute on us and said that for them to give Mr. Frankel his
money would mean they were involved in illegal gaming."  He did get his
$9600 back, but is now suing for the expected $265,000 -- plus damages for a
real estate deal that fell through because he was unable to collect.

San Francisco Chronicle article by Bill Workman, 15 April 1988

   [Apparently the software had rejected the bet as a single transaction.
   Could it be that no one had previously tried a Pick Nine? or that the
   product of the number of horses in each race was greater than some
   programmed limit?  or was there a Trojan horse race?  or did they 
   guess that Frankel was psychic?]
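
The "programmed limit" speculation is easy to make concrete.  The sketch
below is illustrative only and is not the Bay Meadows software: the field
sizes and the 16-bit limit are hypothetical.  The point is that the
combination space of a Pick Nine is the product of the nine field sizes,
which dwarfs any small fixed-size counter or hard-coded cap.

# Illustrative sketch of the speculation above, NOT the track's software.
field_sizes = [10, 12, 9, 11, 8, 12, 10, 9, 11]   # hypothetical entries per race

combinations = 1
for entries in field_sizes:
    combinations *= entries                       # product over the nine races

LIMIT = 2**16 - 1                                 # hypothetical fixed-size counter
print(f"{combinations:,} combinations; exceeds limit: {combinations > LIMIT}")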


Carl's Jr. alleged insider trading caught "by computer"

Dave Suess (CSL) <zeus@aerospace.aero.org>
Thu, 14 Apr 88 19:13:03 -0700
I just heard a tidbit on the local news about charges handed down today
by the SEC accusing Carl Karcher Enterprises insiders (Carl and family,
mostly) of selling significant holdings just before the announcement of a
large dip in quarterly earnings.

According to a spokesman (for the SEC?), "our computer detected a [local
flurry of trading just before a significant financial news release]".  The
trading activity was noted back in '85, I think, since the news release 
involved a dip in profits from the previous year during the Olympics in L.A.
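
The kind of screen the spokesman alludes to can be caricatured as a
volume-anomaly check: flag issues whose trading volume in the days just
before a news release is far above their historical baseline.  The sketch
below is a guess at the general idea, not the SEC's actual system; the
data, window, and threshold are all hypothetical.

# Minimal sketch of a pre-release trading-volume screen (NOT the SEC's system).
from statistics import mean, stdev

def unusual_pre_release_volume(daily_volumes, release_day, window=3, z_threshold=3.0):
    """daily_volumes: share volume per trading day; release_day: index of the
    day the news became public.  Returns True if any of the `window` days
    before the release is a >z_threshold outlier against the earlier baseline."""
    baseline = daily_volumes[:release_day - window]
    pre_release = daily_volumes[release_day - window:release_day]
    mu, sigma = mean(baseline), stdev(baseline)
    return any((v - mu) / sigma > z_threshold for v in pre_release)

# Hypothetical series: quiet trading, then a flurry right before the release.
volumes = [100, 110, 95, 105, 98, 102, 97, 104, 99, 100, 101, 103, 480, 520, 610]
print(unusual_pre_release_volume(volumes, release_day=len(volumes)))   # -> True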


DoD simulations

Gary Chapman <chapman@csli.stanford.edu>
Thu, 14 Apr 88 17:28:56 PDT
I received a copy of the GAO report, "DoD Simulations:  Improved Assessment
Procedures Would Increase the Credibility of Results," (GAO/PEMD-88-3, December
1987).  This is a 154-page report on three DoD simulations: two that were done
for the DIVAD air defense gun (the one that had so many problems it was
cancelled) and one for the Stinger missile.  The two DIVAD simulations were
called ADAGE (Air Defense Air to Ground Engagement) and Carmonette; the Stinger
simulation was called COMO III (COmputer MOdel).

I won't go through the entire list of conclusions from this report, but
the following points are worth passing on:

  "One consistent weakness in all three simulations that potentially
  poses a major threat to credibility is the limited evidence of efforts
  to validate simulation results by comparing them with operational
  tests, historical data, or other models. . . .

  "Validation can be difficult, but it must be dealt with if simulation
  results are to be credible. . . .

  "Some of the results of the simulation analysts to show that the models
  we examined closely represent reality were very limited.  Some
  validation was not even attempted.  In general, the efforts to validate
  simulation results by direct comparison to data on weapon effectiveness
  derived by other means were weak, and it would require substantial work
  to increase their credibility.  Credibility would also have been helped
  by better documentation of the verification of the computer program and
  by establishing that the simulation results were statistically 
  representative. . . .

  "In commenting on a draft of this report, DoD generally found the report to 
  be technically correct and concurred with GAO's two recommendations. . . ."

Another interesting section of the report is a fairly long technical
description of how "ground battle" is simulated in DoD simulations. 
This description includes some fairly sustained criticism of the models
studied, but it also offers quite a bit of information on what model
builders are supposed to take into consideration.

Here's an interesting example of what went wrong with one of the models:

  ". . . The ADAGE does not model direct attacks by aircraft on the DIVAD
  itself, since it does not model duels.  Instead, the attrition of the weapon
  was played in the Campaign [a subset of the simulation], which uses
  expected-value equations to calculate the probability of damage to ground
  targets by class from air attacks and assumes a random selection of targets
  within one target class.  Similar procedures were used to assess damage to
  DIVAD weapons in the ground war.

  "This approach led to a problem in which the DIVAD was labelled 'the immortal
  DIVAD.'  ADAGE results implied that it took 10 times the number of
  air-to-ground missiles indicated by the Carmonette model to kill one DIVAD.
  Analysis by the study advisory group indicated that classifying the DIVAD in
  a target class by itself caused the ADAGE model to shoot all the helicopter
  missiles at the one DIVAD. . . ."
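
The mechanism behind the "immortal DIVAD" can be illustrated with a toy
expected-value calculation.  This is not the ADAGE model; the kill
probability, missile count, and class sizes below are hypothetical.  With
random target selection within a class, a class containing only the DIVAD
receives every shot aimed at that class, and each additional shot at the
same target buys only diminishing expected damage.

# Toy illustration (NOT ADAGE) of expected-value attrition with random
# target selection within a target class.
def expected_kills(class_size, missiles, pk=0.3):
    shots_per_target = missiles / class_size       # shots spread evenly in expectation
    survive_prob = (1 - pk) ** shots_per_target    # expected survival of each target
    return class_size * (1 - survive_prob)

for size in (10, 1):                               # mixed class vs. a DIVAD-only class
    kills = expected_kills(class_size=size, missiles=20)
    print(f"class of {size:2d}: {kills:.2f} expected kills from 20 missiles")

With these made-up numbers, twenty missiles against the class of ten yield
about five expected kills, while the same twenty against the lone DIVAD can
never yield more than one -- several times as many missiles per kill, which
is the flavour of the discrepancy the GAO describes.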

Gary Chapman, Executive Director, 
Computer Professionals for Social Responsibility


The Israeli virus bet

Y. Radai <RADAI1%HBUNOS.BITNET@CUNYVM.CUNY.EDU>
Fri, 15 Apr 88 17:51:53 +0300
   In RISKS 6.58 Fred Cohen remarked in connection with the virus bet which
was made on Israeli television (described in RISKS 6.55) that he suspects
that "the Israeli defense is useless against most of the viruses we have
done experiments on - I wish I was on the attacker's side of that bet!!!".
I'm sure that there are many others who would also be willing to be on that
side of the bet.  However, before jumping to conclusions it would be wise to
know how the detection program works and what the bet was over.
   First of all, it should be clear that the "defender" does not claim that his
program fixes infected files or prevents infection, or even that given a file,
it can correctly decide whether it contains a virus.  He claims only that if
his program has been used between the time that a file has been created on a PC
disk and the time that such a file becomes infected by a virus, that infection
will be reported by the program.  And the bet was whether the "attacker" (who
was given a copy of the detection program on April 10) can, within two weeks,
create a virus which will not be detected by this program in the sense just
described.  (Actually, the precise terms of the bet have not yet been fixed,
and much depends on how it is worded; more on that below.)
   The program, written by Yuval Rakavy and Omri Mann, works according to a
principle that is not at all new.  (In addition to theoretical work on the
subject, I know of two other already marketed programs for PCs which work
similarly.)  For every file (or for any specified set of files) it computes a
"fingerprint" or "checksum", i.e. a certain function of the bits in the file,
which is sufficiently intricate that even with knowledge of the algorithm, it
would be impossible to alter a program to achieve a specific purpose without
changing the checksum.  Of course, the idea is that if there's a change in the
size, date, time or checksum of a file which wasn't supposed to have been
altered, the file has presumably been infected by a virus.  (In addition to
files, the program also automatically checksums the boot block.)
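   The principle is simple enough to sketch.  The fragment below is a minimal
illustration of the fingerprint-and-compare idea described above, not the
Rakavy-Mann program: a modern cryptographic hash stands in for their
unspecified checksum function, the file names are hypothetical, and the
boot-block check is omitted.

# Minimal sketch of checksum-based change detection (NOT the program
# discussed above).  Record a fingerprint of each file once; any later
# change in size, modification time, or checksum is reported.
import hashlib, json, os

def fingerprint(path):
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"size": stat.st_size, "mtime": stat.st_mtime, "checksum": digest}

def snapshot(paths, db_file="fingerprints.json"):
    with open(db_file, "w") as f:
        json.dump({p: fingerprint(p) for p in paths}, f)

def check(db_file="fingerprints.json"):
    with open(db_file) as f:
        recorded = json.load(f)
    for path, old in recorded.items():
        if fingerprint(path) != old:
            print(f"WARNING: {path} changed since it was fingerprinted")

# Usage (file names hypothetical): snapshot(["COMMAND.COM", "EDITOR.EXE"]),
# then run check() periodically thereafter.
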
   It seems to me that whether a program such as this can really "detect any
virus" depends on how one defines "detect" and "virus".  In trying to conceive
of a virus which could avoid detection, I considered the possibility of
creating a situation in which a checksum alteration would be ambiguous.  For
example, suppose software were created which added destructive code to each
executable file which a compiler creates.  Of course the checksum of such a
file would change with each new compilation, but that is to be expected; there
would be no reason to conclude that it contains destructive code.  Would we say
that the program has failed to detect a virus?  True, if such a file were
copied to other disks, it could do damage to them on some later target date.
But the destructive code would be unable to infect other files since that
would cause a check-sum mismatch.  If it is agreed that by definition, a virus
necessarily propagates by altering healthy files in some manner before
performing its most lethal damage, then this is not a virus but a Trojan horse,
and the checksum program would not have failed to detect a virus.
   Of course, Fred Cohen or someone else may think of an idea which neither the
defender, the attacker nor I have thought of.  But given the above information,
would Fred still claim that this defense is useless against most of the
viruses, and would he still be willing to be on the attacker's side of the bet?

   Y. Radai, Hebrew Univ. of Jerusalem, RADAI1@HBUNOS.BITNET


Types A and B: doesn't anyone read CACM? (Re: RISKS-6.54, 59)

Eric Roskos <uunet!daitc!csed-1!csed-47!roskos@rutgers.edu>
Fri, 15 Apr 88 10:02:04 EDT
  : ...  The researcher, Jan L. Guynes, used psychological tests to classify 86
  : volunteers as either Type A or Type B personalities...  She found that a 
  : slow unpredictable computer increased anxiety in both groups equally...

It's been interesting to see all this discussion based on a newspaper article
about "a researcher, Jan L. Guynes," with no one citing the fact that the
article was no doubt derived from a paper published in our field's own journal,
Communications of the ACM, in the March 1988 issue, on page 342!


Incidentally, something I have not seen mentioned in your digest is that the
_New_York_Times_ is currently exploiting computer viruses to sell
newspapers.  An advertisement that runs almost every day on WBMW, a radio
station in Manassas, VA, features a man impressing a colleague with his
up-to-the-minute knowledge of the news by saying,

    "Who would imagine that cross-country skiing would be so popular?"

    (His colleague, who obviously doesn't read the _Times_, comments
    that he didn't know that.)

    "Yes, and did you know that now computers have viruses, sneaky little
     programs that make them sick?  And they're even contagious!"  (He then
     goes on to tell about some other timely information; and ends up 
     saying how he learned it all from the _Times_...)

Eric Roskos, IDA (...daitc!csed-1!roskos, or csed-1!roskos@DAITC.ARPA)


Accountability

<munnari!ditmela.oz.au!george@uunet.UU.NET>
14 Apr 88 23:13:57 +1000 (Thu)
I think what Henry Spencer said is all too depressingly true, but I also
think it's more indicative of a social failure than a true RISK
(actually, so is the whole thread of my argument! whoops!) because it's
about the failure of a chain of command to control the situation.

-That cash is the only effective incentive for producing results is the 
ultimate disaster of our times, and when lives are at stake it really stinks.

However, I'm not trying to suggest only the threat of legal accountability
makes for correct solutions. I do think it's a vital link in the chain.

Actually, the ATM debate & the 'social consequences of DB' stuff
are (to my mind at least) also less RISK-y than the good old 

      "Japanese robot murders family of 3 on easter outing" 

stories I used to read in ACM RISKS! -The trouble is so few genuinely
amusing RISKS seem to crop up these days.

Ditto for viruses -they all show how, when people don't accept responsibility
for their actions (installing and running an ATM, indiscriminate data
capture in a DB, spreading dirty disks around campus), chaos ensues.

Even if the ATM network or a police DB is completely bug-free, it has social
issues which make me scared of its existence. I'm not scared of a VLSI,
only of the potential for it to be broken! -If AMEX or the LAPD try
to say "its a bug-free system" *THEN* we can stomp 'em!

I still think, however, that there is an unanswered problem for ENGINEERING which
RISKS addresses: when an 'active' component of a 'reductionist' or
mechanistic setup (which I suppose a very formalized chain of command
during a launch sequence could be said to resemble, although I'm trying
to say computer system or program or chip without using those words) 
fails in the system, somebody should bloody well stand up and say 
    "it was my decision to do xyz..." 
-and disclaimers should be banned in law.

Marxists used to (do they still?) talk about the "organic content" of capital,
the idea that even in a completely mechanized society the historical human
effort that built the machine (that builds the machines...)  is the endower
of "labour value" as opposed to "use value". I think this is extremely
important for computerized systems, where the human element may be merely
the selection of logic or algorithm. It is *soo* tempting to say:

     "hell... nobody was to blame, the machine did it all itself"

but there will *always* be some 'organic content' in this way. If we ever
get a Turing Testable robot, I'll let it carry responsibility for its actions
but until then I'm afraid the builders in all senses of the word should be
responsible for its behaviour.

More importantly, somebody commissions the system. In the case of Morton
Thiokol "blame" lies across many levels, but outsiders like me tend to lay more
emphasis on the swine who pressurized the engineers into disregarding the
weaknesses, not the engineers themselves.  O-ring failure was foreseen, and then
conveniently forgotten.  (That's why I'd argue it was a social or
human-organizational failure and not a RISK in this group's sense of the word).

Rolt was writing about foreseeable failure in structural mechanics:  a bridge
that fell down, an embankment poorly sloped, a signal methodology that had
deadlock or was not truly stable.  Blame isn't for having a whipping boy
(although all too often that's all it *is* used for); it identifies where in
the chain of command a bad decision was made *so that it can be prevented next
time round*.

I suppose all I'm saying is that if it was foreseeable or deducibly likely a
programmer is in some way culpable when the system breaks down.
                                                                     (yes/no ?)

   [Edited lightly -- but not for content -- except for the final (non)sentence,
   which I left alone.  By the way, I don't think we've come anywhere near
   "Japanese robot murders family of 3 on Easter outing".                 PGN]
