The RISKS Digest
Volume 7 Issue 46

Wednesday, 7th September 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Airbus vs U.K. MOD development standards
Lorenzo Strigini
Vincennes: Rules of engagement violated by AI heuristic?
Clifford Johnson
Re: Statistical reliability estimation and "certification"
Jon Jacky
A Computer Virus Case Goes to Trial
Joe Morris
Computers and guns
Gary Sanders
Automatic Call Tracing and 911 Emergency Numbers
Gary McClelland
Automatic Number ID: Bad Idea!
Andrew Klossner
Info on RISKS (comp.risks)

Airbus vs U.K. MOD development standards

Lorenzo Strigini <PROCIS@ICNUCEVM.BITNET>
Wed, 7 Sep 88 18:31 SET
From "Systems International", August 1988 issue, editorial page: "In his recent
lecture entitled "Should we trust computers?" given to the British Computer
Society, Martin Thomas, chairman of Praxis, said ... the computer systems
used on the ill-fated A320 at the Paris Air Show were developed using
techniques 'the UK Ministry of Defence would find unacceptable for safety
critical _military_ software in the future'".  Can anyone give a first-hand
account of that lecture, or a more complete citation, or somehow shed more
light on the issue?  I am curious about a) how far off that future is; b) which
of the new rules the A-320 development would violate; c) how many current systems
would be found to violate those rules, and how many to respect them.

Lorenzo Strigini


Vincennes: Rules of engagement violated by AI heuristic?

Clifford Johnson <GA.CJJ@Forsythe.Stanford.EDU>
Wed, 7 Sep 88 00:00:32 PDT
A recent contribution noted that the Airbus shot down by the Vincennes had been
within binocular range of the ship, and inferred that binoculars were superior
to the Aegis system.  This is invalid.  Reportedly, there was an obscuring
haze, and, besides, even had the plane been identified as an Iranian Airbus, it
would have been shot down, according to the Pentagon's latest report, which
states that the Captain was fully aware that the plane might well have been a
commercial flight, e.g.:  "On the Vincennes, an officer watches the plane
slowly rising.  He jumps to his feet and says 'possible comair,' for commercial
aircraft, to the ship's commanding officer, Capt. Will C. Rogers.  The
Captain acknowledges this."  (See NYT, Aug. 20, for this and the other info I
report.)

Another contribution citing the Vincennes noted the tendency for computer
output to be definitive, right or wrong.  This analogy is valid.  It was not
the Aegis giving bad data, but it was the Aegis giving a procedurally
*conclusive* categorization, together with the duty-imposed rules of
engagement, that caused what the military now boasts was a "prudent," albeit
automatic, killing of 290 civilians.  Thus:  (1) from the moment of take-off,
the plane was formally characterized as hostile merely because the airfield was
not wholly civilian, and this characterization would be definitively "correct"
until disproven by the flight's obeying the ship's radioed warnings; (2) the
rules of engagement next required that protection of armed-to-the-teeth U.S.
militia have top priority, above protection of defenseless civilians in
transit.  (Since the latter protection was the purported mission of the
Vincennes, this seems to me a code of cowardice rather than a rule of
engagement.) These rules required the shootdown.  The Aegis did its job and the
Captain his mandated duty, and they conclusively saved the Vincennes from the
risk posed by a lumbering Iranian Airbus that would not immediately respond to
radioed warnings.

JCS Chairman Crowe explained that all fault lay with Iran, because it was
"unconscionable" for the Iranians to permit a civilian airliner to take off
amid hostilities (which the air controllers are simply presumed to have known
about) and to ignore warnings.  According to the NYT, Crowe asserted that the
plane would have been shot down IN ANY CASE given lack of proof that it was not
hostile.  Such "shoot-on-suspicion" rules of engagement Crowe claimed to be
wise policy.  (To me it is chilling that the U.S.  calls the shootdown a
commendable "We'd-do-it-again" preprogrammed procedure, rather than a wildly
mistaken massacre; this kindles memory of Reagan's ire after the KAL007
shootdown:  "Shooting down a plane, even one with hundreds of innocent men,
women, children, and babies, is part of their normal procedure.")

The Pentagon's support of the shootdown as a prudent necessity fails to address
the official notice provided by the U.S. re its rules of engagement in the
Gulf, which stated:  "United States Navy ship captains realize that not all
commercial aircraft transmit their proper IFF code or remain in the proper
airways and will take this into account when they encounter such an aircraft."
So it seems a post-facto revision of the rules of engagement to assert that
failure to respond to warnings is per se sufficient cause for deadly force
*until proven otherwise*.  That is, Rule Of Engagement number (1) above was in
violation of the declared Rules Of Engagement.  The U.S. should have informed
the airlines that all planes taking off from Bandar Abbas were presumed hostile
until proven otherwise, instead of informing them that no such presumption
would apply even if such a plane strayed from its corridor and failed to
broadcast civilian codes, let alone if it was within its corridor and did emit
civilian codes.

One natural question naturally not commented on in the Pentagon's report is the
applicability of the word "panic," although it notes:  "At every opportunity
when the ship's internal communication link is silent, an officer known as the
tactical information co-ordinator calls the attention of the other officers to
his belief that the plane is accelerating and descending.  His computer
terminal, like others on the ship, actually shows the aircraft rising...
'Towards the end,' wrote Gen. George B. Crist, 'it is reported he was yelling
out loud.'" By not even reprimanding this officer, and by ultimately blaming
inadequate but correctable video displays, the Pentagon is materially
announcing that misreading computer consoles is an accepted, large risk that
highly trained crewmen cannot be expected to avoid, and which absolves Captain,
crew, and computer from all responsibility.

Interest has been expressed in the numerical/logical algorithms whereby
computerized sensors declare a detection as hostile.  The above illustrates
that declaration of hostility is not merely a simple sensor in-/de-duction, but
as much an "IF-THEN" heuristic/rule-of-thumb, e.g.:  "IF TAKE-OFF FROM NOT
NOT-MILITARY AIRFIELD AND ALERT-LEVEL ABOVE 2, UNTIL AFFIRMATIVE RADIO RESPONSE
THEN BLIP IS HOSTILE THEN SHOOT ON APPROACH." What is ordinarily construed as
objective inference, is in fact a mandated conditional *definition*.
(Likewise, it is linguistically predefined that the United States is "under
attack" — which triggers and authorizes retaliation — if a nuclear attack
warning level exceeds a certain threshold, euphemistically dubbed "the
President's Launch Under Attack threshold".)
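
To make the mandated-definition point concrete, here is a toy sketch in
Python (mine, with invented names and thresholds; emphatically not the
actual Aegis logic) of such a conditional definition of hostility:

    def classify_track(airfield_not_nonmilitary, alert_level,
                       affirmative_radio_response):
        """Return a categorization that is *defined*, not inferred."""
        if (airfield_not_nonmilitary and alert_level > 2
                and not affirmative_radio_response):
            # Hostile by definition until the radio-response condition
            # discharges it; no sensor evidence is consulted at all.
            return "HOSTILE"
        return "UNKNOWN"

    # A track from a partly military airfield, at high alert, that has not
    # yet answered warnings is "HOSTILE" no matter what it actually is:
    print(classify_track(True, 3, False))  # HOSTILE
    print(classify_track(True, 3, True))   # UNKNOWN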

Re purely statistical sensor detection, I recommend "Data Fusion" in Defense
Electronics' first annual C3I Handbook (1986).  It provides a comprehensive
table of techniques, which include Bayesian, frequentist, maximum likelihood,
evidential, pattern-matching, associative, syntactic, and heuristic
methodologies.  A basic division is into "hard" sensors, which declare an
attack in binary form (yes/no), and "soft" sensors, which provide a
probability estimate that a detection is hostile.
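
As a hypothetical illustration of that division (the numbers and the
odds-form Bayes fusion below are my own invention, not taken from the
handbook):

    def fuse_soft(prior, likelihood_ratios):
        """Fuse soft-sensor evidence via odds-form Bayes' rule."""
        odds = prior / (1.0 - prior)
        for lr in likelihood_ratios:  # P(reading|hostile) / P(reading|not)
            odds *= lr
        return odds / (1.0 + odds)

    hard_votes = [True, False, True]      # hard sensors: yes/no declarations
    p = fuse_soft(0.01, [5.0, 2.0, 8.0])  # soft sensors: graded evidence
    print("hard majority says hostile:", sum(hard_votes) > len(hard_votes) / 2)
    print("fused P(hostile): %.3f" % p)   # about 0.447 from a 1% prior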


Re: Statistical reliability estimation and "certification"

Jon Jacky <jon@june.cs.washington.edu>
Wed, 07 Sep 88 08:43:03 PDT
Postings by Brian Randell, Bev Littlewood and others responding to my COMPASS
trip report suggest that some clarification may be required.  I am confident 
that I quoted Cullyer, Leveson and others accurately; I took careful notes 
on the spot.  However ---

- I should emphasize that skeptical comments regarding statistical reliability
estimation were limited to the context of *a priori predictions* of the
reliability of *software* - that is, predictions of software reliability made
prior to experience in the field.  Regarding their opinions on statistical
reliability estimation and life in general, I cannot say.  I did note that
Cullyer and others remarked that a priori estimates could be useful for
*hardware* systems, where failure histories for the components were known.

- There seems to be a misunderstanding regarding the term "certification" - in
particular, Cullyer's remark that "you either certify (a product) or you don't
- one or zero."  Apparently some readers understood "certification" in this
context to refer to some formal validation technique, which Cullyer was
claiming was "perfect" in some sense.  I believe that was not the intended
meaning.  It is necessary to distinguish *validation* from *certification*.
Validation is the technical process of determining whether a product conforms
to its requirements. Nobody at COMPASS claimed that any validation technique
was perfect, although people did claim that some techniques were better than
others. Certification is the administrative act of releasing a
potentially hazardous product for sale or use. Certification IS one or zero.  
The necessity for basing a yes-no decision on less-than-totally-conclusive 
technical information is the certifier's dilemma.
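
A one-line rendering of that dilemma (the 0.99 threshold is invented purely
for illustration):

    def certify(validation_confidence, threshold=0.99):
        """Certification is one or zero; the evidence never is."""
        return validation_confidence >= threshold

    print(certify(0.985))  # False: there is no such thing as 98.5% certified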

- Jonathan Jacky, University of Washington


A Computer Virus Case Goes to Trial

Joe Morris <jcmorris@mitre.arpa>
Wed, 07 Sep 88 13:05:09 EDT
From the _Washington_Post_, 7 September 88, page C-1 (without permission):

  JURY SELECTION IN 1ST `VIRUS' TRIAL BEGINS (AP)

Fort Worth, Sept. 6 — Jury selection began today in the criminal trial 
of a 40-year-old programmer accused of using a computer "virus" to sabotage
thousands of records at his former work place.
The trial is expected to last about two weeks.

Donald G. Burleson faces up to 10 years in jail and a $5,000 fine if convicted
in the trial, a first for the computer industry.  Burleson was indicted on
charges of burglary and harmful access [sic] to a computer in connection with
computer damage at a securities firm, said Nell Garrison, clerk of the state
criminal district court in Fort Worth.  Through his lawyer, Jack Beech,
Burleson denies the charges but has declined further comment.

The firm has been awarded $12,000 in a civil lawsuit against Burleson.
Pretrial motions were scheduled to be heard today, followed by jury selection,
Garrison said.

Burleson is accused of planting a piece of computer software known as a
virus in the computer system at USPA&IRA Co. two days after he was fired.
A virus is a computer program, often hidden in apparently normal computer
software, that instructs the computer to change or destroy information at
a given time or after a certain sequence of commands.  [Trojan horse???]

USPA officials claim Burleson went into the company's offices one night and
planted a virus in its computer records that would wipe out sales commissions
records every month.  The virus was discovered two days later, after it had
eliminated 168,000 records.


Computers and guns

Gary Sanders <gws%n8emr%osu-cis@pyramid.com>
7 Sep 88 02:47:32 GMT
    A funny thing happened on the way to the data call..

I was sitting at home one cool evening, flipping through the channels on the TV;
not much on, even with cable...  Every once in a while I would hear my modem
dial out to one of the many news feed sites, and hear the many machines
and men calling in.  I was about ready to nod off (again) when someone
started knocking rather rudely on the door.

I jumped up and answered the door, briefly (no pun intended) forgetting that I
had only my boxers on.  Well, I cracked the door open and there was a nice man
in blue...  Yes, a police officer stopped by.  To say hi?  NO!  To collect for
the policeman's ball (...)?  NO!  Someone had called 911; in fact, they had
called 911 three times in a row.  I assured them that I didn't call, but they
wanted to look around and make sure I didn't have any dead bodies lying around,
so I ran in and put some pants on and unhooked the chain on the door.

They checked out the living room, then headed to the bedrooms.  One bedroom is
a bedroom and one is a computer center, radio room (ham) and electronic scrap
room (my play room).  After one pulled his gun out, I got a little worried.
Why did they have their guns out?  I had forgotten that I had 2 UZI water guns
hanging on the wall in my play room; that, along with the radios, flashing
lights, and other terror-looking electronic gizmos in the room, must have
spooked them a little.

Well, once they finally figured out that the guns were plastic and that
I didn't have any real bombs in the room, they put away their guns.

Now they wanted to know why I had called 911 three times.  I told them that
I had not, but they were not convinced.  Well, I asked them what number
the call came from; they said xxx.yyyy.  Hey, that's not my number, it's
zzz.aaaa... then it came to me: the other number was my data line...
I have no phone on that line, so it must have been the computer
calling someone.  Have you ever tried to convince a police officer that
your computer was calling 911 by itself?  It doesn't work...  They said
that the dispatcher had called back but I had hung up on them; actually,
my modem was very polite and answered the phone, and only became rude
when it heard a human, then it hung up on them.  Well, they left and told
me (and my computer) to be careful and not dial 911 unless it's a real
emergency...  I said OK, and closed the door.

I still wasn't sure why my system was calling 911; I didn't have 911 in the
Systems file... or did I?  I checked it out and found the problem.  I call a
site with a phone number of 891-11xx, and from the logfile I had called that
site 3 times a short time before the police arrived.  It looked like Ma Bell
had taken a little too long to give dialtone and the first digit was dropped.
So if you want to save yourself some trouble, check out your Systems files
and hide your water guns...
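
For the curious, here is a quick, speculative sketch (mine, not any real
uucico feature) of how you might scan a Systems/L.sys-style file for this
hazard, assuming whitespace-separated fields and phone numbers written with
digits and dashes:

    import re
    import sys

    def misdials_as_911(number):
        digits = re.sub(r"\D", "", number)   # keep digits only
        # Drop the first digit (as a slow dialtone would) and test,
        # e.g. 891-11xx becomes 911-1xx to the exchange.
        return digits[1:].startswith("911")

    if __name__ == "__main__":
        # Usage: python check911.py Systems
        for line in open(sys.argv[1]):
            for field in line.split():
                if re.fullmatch(r"[\d-]{7,}", field) and misdials_as_911(field):
                    print("warning: %s can misdial as 911" % field)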

    This did happen several months ago, GUNS and ALL..
Every once in a while, like tonight, I get a visit from the local PD...
I give them the story, they look around, say "code 4" to the dispatcher,
and leave.....  Oh well, life and data go on...

Gary W. Sanders             HAM/SWL BBS 614-457-4227
(uucp) gws@n8emr            (uucp) osu-cis!n8emr!gws
(packet) N8EMR @ W8CQK          (cis) 72277,1325

    [This one could become a classic like the Israeli bugspray-in-the-toilet
    story, which resurfaced after previously appearing in an old-yarn book.  
    We have had a variety of cases just like this in the past.  But it serves 
    as another reminder of how easily it can happen.  PGN]


Automatic Call Tracing and 911 Emergency Numbers

<MCCLELLAND_G%CUBLDR@VAXF.COLORADO.EDU>
Tue, 6 Sep 88 22:41 MDT
Our local county government just worked a deal whereby for a small fee added
to each customer's phone bill, the county's centralized 911 emergency
switchboard would be provided with a display of all incoming phone numbers
and addresses.  I'm rather glad that the next time I call 911 all that
information will be communicated automatically (but I hope it will still be
verified orally whenever possible).  However, I suppose that once we pay for
the installation of the necessary technology the local telco will be able to
sell it as a service to other businesses.  As previous notes have suggested,
there are many privacy issues to consider here, but there are benefits that
need to be weighed as well.
                                               Gary McClelland

   [911 ANI in LA noted by  paulb@ncc1701.tti.com paulb@ttidca.TTI.COM 
   (Paul Blumstein).] 


Automatic Number ID: Bad Idea!

Andrew Klossner <andrew%frip.gwd.tek.com@RELAY.CS.NET>
Tue, 6 Sep 88 11:00:22 PDT
[This discussion has gotten pretty far from RISKS.]

    "I consider an unsolicited phone call to be an invasion of my
    privacy. If you feel you have the right to call me and refuse
    to identify yourself, then I maintain I have the right to come
    to your front door and refuse to identify myself."

This is the wrong analogy.  Consider a world in which, when you wander
into a shop with an idle question, the shopkeeper can, without your
permission, divine your identity.  There's a world of difference
between "Good afternoon, what's your name? If you won't tell me, get
out" and "Good afternoon, I have recorded your name and there's nothing
you can do about it."
                          [Also remarked upon by Hugh Pritchard.  PGN]

    "Anonymous is also making the assumption that the people who
    a[c]quire your number via ANI will automatically abuse the
    information. This is mostly false."

This is a Pollyanna attitude.  I have worked for telephone/junk-mail
solicitors (in my starving student days) who would drool at the thought
of abusing this information.  As an example of privacy abuse, consider
Radio Shack's policy of demanding full identification, even of cash
customers, for purposes of composing a mailing list.

  -=- Andrew Klossner   (decvax!tektronix!tekecs!andrew)       [UUCP]
