The RISKS Digest
Volume 7 Issue 48

Friday, 9th September 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

COMPASS 88
Bev Littlewood
Safety Engineering
WHMurray
Technical naivete revealed by responses to VINCENNES incident
Jon Jacky
Vincennes: Rules of engagement violated by AI heuristic?
Clifford Johnson
ANI Response
Patrick A. Townson
Proposed ANI Enhancement
Rob Boudrie
ANI blocking defeats purpose
Bob Philhower
Credit Card Loss Woes
Clay Jackson
Info on RISKS (comp.risks)

COMPASS 88

B.Littlewood <sd396@CITY.AC.UK>
9 Sep 1988 16:05:35-WET DST
Nancy says she is going to let me have the last word in this saga.  Unfortunately,
it is not clear whether my last comments represented this 'last word': after all,
Nancy only responded with a mere two-page reply - this probably doesn't count!

She is right that we agree on more things than we disagree.  But it is the
disagreements that are much more interesting to discuss.  So here goes once
more . . .

I ended my previous note by asking Nancy how she would "reduce catastrophic
failure rates to acceptable levels and demonstrate the achievement of such
levels".  Her reply falls far short of answering this question.  Indeed, it
is largely a recital of elementary 'good practice' (in England we say "teaching
your granny to suck eggs").

Let me spell it out again.  First of all, by the phrase "catastrophic failure
rate" (to which Nancy takes exception) I meant merely the rate at which her
"catastrophic failure modes" show themselves in operational use.  It is this
rate that determines, in a formal way, how we can talk quantitatively about
safety for some systems (it is not appropriate for all systems).

Even quite unsophisticated members of society can appreciate concepts like
this when they are presented appropriately, so it represents part of a
formalism which also has intuitive appeal.  A safe system is one that does
certain specified nasty things SUFFICIENTLY INFREQUENTLY.

In my earlier note I agreed with Nancy that we only want "an acceptable
level of safety".  We seem to be in dispute about what this means, how we
can get it, and how we can assure ourselves that we got it.  I think there are
three stages to this:

1.  We need to decide what the nasty undesirable events are (e.g. loss of life,
or loss of airframe, in civil aviation).

2.  We need to decide how frequently we can tolerate these events (e.g. 10**-7
per hour for some airliner events; a rough numerical illustration follows this
list).

3.  We need ways of ACHIEVING such an "acceptable level of safety", and of
DEMONSTRATING ITS ACHIEVEMENT in each particular context.
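To make stages 1-3 concrete, here is a rough numerical sketch (mine, not part
of the COMPASS discussion) of what a target like the 10**-7 per hour figure in
stage 2 implies.  The fleet size, utilisation, and service life below are
purely illustrative assumptions, written as a small Python calculation:

    # Illustrative only: the fleet figures are assumptions, not data from the article.
    TARGET_RATE = 1e-7        # tolerated catastrophic failures per operating hour
    FLEET_SIZE = 500          # assumed number of aircraft of the type
    HOURS_PER_YEAR = 3000     # assumed flying hours per aircraft per year
    SERVICE_LIFE_YEARS = 25   # assumed service life

    fleet_hours = FLEET_SIZE * HOURS_PER_YEAR * SERVICE_LIFE_YEARS
    expected_events = TARGET_RATE * fleet_hours

    print(f"Total fleet exposure: {fleet_hours:.3g} hours")              # 3.75e+07
    print(f"Expected events at the target rate: {expected_events:.2f}")  # 3.75

Even at the target rate, a few catastrophic events are expected over the life
of such a fleet, which is why stage 3 - demonstrating that the rate has
actually been achieved - carries so much weight.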

As I understand it, Nancy does not wish to define "acceptable level of safety"
in a way akin to 1 and 2.  I remain puzzled, therefore, as to what she does
mean by such a phrase.  It is difficult, therefore, to know whether her
claims to be able to achieve an "acceptable level of safety" by "other
solutions" should be given any credence.

Certainly, the methodology in her latest note (whilst being good practice and
probably necessary) falls woefully short of satisfying 3 above.  I am prepared
to accept that use of these techniques is better than not using them: they
are likely to improve safety/reliability.  Knowing that they will increase
safety, however, is far short of knowing that their use will be sufficient
to achieve a particular goal of "acceptable safety" as defined in 1 and 2.
They do not assist at all in telling us what level has been achieved.

No, it seems to me that Nancy is claiming that certain "good practice" is
a solution to our problems.  I agree that her "good practice" is a lot better
than most ACTUAL practice, but I remain sceptical about its efficacy.

In case this sounds merely academic (must stop using that pejoratively), a cynic
would observe that actual practice for civil airliners is even worse than Nancy's
suggestions.  As I understand, the A320 fly-by-wire was certificated against RTCA
DO-178A.  This appears to have no definition of "acceptable level of safety" and,
worse, lays down only very minimal "good practice".  To give them their due,
Airbus Industrie seem to have been sufficiently embarrassed by this state of
affairs that they got embroiled in 10**-9 and all that.  The system is
certificated in Europe, the thing is carrying passengers, yet, I believe, it
cannot be asserted in any scientifically meaningful way that it has an
"acceptable level of safety".


This brings me to Peter Neumann's elucidation of John Cullyer's original
remarks at COMPASS.  Now that they are clarified they seem ever more appalling!
Certification seems merely to mean that a certain formal test (e.g. conformance
to DO-178A procedures) has been passed.  This test might even relate only to
"good practice", I suppose, and need not involve any evaluation of product
behaviour (as is the case for DO-178A).  Yet Cullyer suggests that such a
certification should be used instead of evaluation of achieved
safety/reliability.

It is clear that certification of this kind will not assure us that a particular
product will be sufficiently safe.  Could John tell us how he would go about
getting such an assurance?


Finally, a slightly mischievous plug for the probabilistic/statistical approach.
I suppose one of the greatest sources of difficulty in the assurance problem
lies in the correctness (or not) of the specification.  And one of the most
difficult problems there concerns omissions from the specification: things that
should have been thought of, but weren't.  I think it is clear that none of the
techniques suggested by John Cullyer or Nancy Leveson can attempt to quantify
the impact of such omissions on system reliability (although they may help to
identify some of them).  Only the statistical approach can say anything here:
if the system has operated for a certain time without revealing the effects of
such omissions, we can estimate their contribution to its unreliability.  This
is the flip side of the excellent work of Doug Miller described at the COMPASS
meeting.  OF COURSE, such evidence can support only modest claims of
reliability (Miller's point).  OF COURSE, it occurs too late in the life
cycle (we want the assurance at a time when we can do something about impending
problems).  Even so, I don't think any of the other approaches do anything
about evaluation here.
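As a concrete illustration of the statistical argument (my sketch, not
Littlewood's or Miller's actual derivation), here is the standard bound on how
small a failure rate can be claimed from failure-free operation alone,
assuming failures arrive as a Poisson process:

    import math

    def upper_bound_rate(failure_free_hours: float, confidence: float = 0.99) -> float:
        """Upper confidence bound on a constant failure rate when no
        failures at all have been observed in the given operating time."""
        # Rates ruled out: those making P(no failure) = exp(-rate*t) < 1 - confidence.
        return -math.log(1.0 - confidence) / failure_free_hours

    for hours in (1e3, 1e5, 1e7):
        print(f"{hours:.0e} failure-free hours -> "
              f"rate <= {upper_bound_rate(hours):.1e} per hour (99% confidence)")

With these numbers, even 10**7 failure-free hours only support a claim of
roughly 4.6 x 10**-7 per hour; a figure like 10**-9 per hour would demand on
the order of 10**9 hours of failure-free operation, far more than can be
accumulated before a system enters service.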

Bev Littlewood
Centre for Software Reliability
London EC1V 0HB


PS  I'm out next week.  Since I'm betting Nancy can't resist breaking her
uncharacteristic vow of silence, a reply might take some time.  A relief to
everyone, no doubt . . .


Safety Engineering

<WHMurray@DOCKMASTER.ARPA>
Fri, 9 Sep 88 08:54 EDT
Nancy Leveson's latest was a breath of fresh air.  It put some
rationality into what was becoming a silly discussion.

In it she comments:

>..again, the goal is to require as many independent failures as
>possible for a hazard to occur.    

Before anyone gets too carried away with that strategy, I would simply
point out that there are limits to its effectiveness.  There is a point
at which adding additional safeguards and redundancies begins to add
complexity and failure modes of their own.  The great northeastern
blackouts are examples of what happens when this strategy is carried too
far.
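A toy calculation (my illustration, not Murray's) makes the limit visible:
independent safeguards drive down the chance that all of them fail together,
but suppose each added safeguard also contributes a small complexity-induced
failure probability of its own.  Both probabilities below are made-up
assumptions:

    P_SINGLE = 1e-2        # assumed failure probability of one safeguard
    P_COMPLEXITY = 1e-4    # assumed extra failure probability added per safeguard

    def hazard_probability(n_safeguards: int) -> float:
        independent_term = P_SINGLE ** n_safeguards    # all safeguards fail at once
        complexity_term = n_safeguards * P_COMPLEXITY  # failures the extra machinery brings
        return independent_term + complexity_term

    for n in range(1, 7):
        print(f"{n} safeguards -> hazard probability ~ {hazard_probability(n):.2e}")

With these made-up numbers the total risk stops falling after the second
safeguard and then creeps back up - roughly the effect the blackout example
illustrates.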

William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
2000 National City Center Cleveland, Ohio 44114                          
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840                


Technical naivete revealed by responses to VINCENNES incident

Jon Jacky <jon@june.cs.washington.edu>
Fri, 09 Sep 88 09:36:54 PDT
Last night (Thurs. Sept 8, 1988) I heard another story about the VINCENNES
/Iran Air incident on the NPR radio news program, "All Things Considered."
The occasion was a presentation by the Navy to the Senate Armed Services
Committee.  No new information was presented, but several comments by the
participants and by the commentators were quite revealing of attitudes about
computing among lay people.  First there was a tape of a Navy official (whose
name I did not catch) telling the Committee (this is a close paraphrase;
I took notes immediately after hearing the story):

"We have determined that the Aegis radars and computers functioned 
correctly and that the misidentification of an Airbus airliner as an F-14
was due to human error induced by combat stress.  ... The operator interpreted
a display indicating the Airbus was at 12,000 feet and flying level as
indicating it was at 7,500 feet and descending toward the ship ... However,
we are looking at the user interface - what we show on the displays - there
may be some room for improvement there, to make it even more user-friendly
than it is now..."

The interesting bit in this passage is calling the misinterpretation of the
display "operator error" rather than "design error."

Even more interesting was an interview with retired Navy Commander James
Meachum (I'm unsure of the spelling of the last name).  Meachum is now the
defense reporter for THE ECONOMIST, a highly regarded British weekly news
magazine.  The interviewer asked if Meachum agreed with the "human error/
combat stress" explanation for the incident.  Meachum replied:

"There's an aircraft out there, what's its heading and speed?  That's 
a very straightforward problem and I can't believe the system could have
gotten that wrong ... It's been very thoroughly tested.  It is possible that
some other part of the system might have failed, but I doubt it..."

This statement reveals two very common misconceptions about computer systems
that I encounter all the time.  The first is, "if it seems simple to me, then
the computer must get it right."  The other is, "if the system passes a test,
then it will also perform correctly on other cases that seem similar to me."

Both of these attitudes are simply anthropomorphic superstitions, but they
are very deeply seated in lay people.  I have found myself trying to convince
my colleagues at work that results from a program that I wrote were probably
in error and should be investigated, while they maintain the results "must
be right" because "we tested a case just like that."

The effect of these misconceptions is to discourage thorough investigations
of possible problems.  I now doubt the frequently heard assertion that
the Vincennes actually did correctly identify the altitude and heading of
the Airbus.  This assertion is supposed to have been proved by examining
"files" or "tapes" from the Vincennes.  Does anyone know if these records
include videotapes of what was actually shown on the displays at the time of 
the incident?  If not, why do they think they know what the operator saw?
Or, do the tapes actually capture data from some point nearer the signal
source, "upstream" from the displays?  Have the investigators assumed that the
displays "certainly must have" shown images consistent with the data on the
tapes?  I have always thought the operator's report of the altitude and
heading sounded a bit too specific to explain away as the result of stress.

Why is it that people "just can't believe" that the computers might have 
scrambled this up, but can easily believe that the operators did scramble it 
up?

- Jonathan Jacky, University of Washington


Vincennes: Rules of engagement violated by AI heuristic?

"Clifford Johnson" <GA.CJJ@Forsythe.Stanford.EDU>
Fri, 9 Sep 88 09:28:27 PDT
>     "Not transmitting proper IFF will be taken into account"
>              AND
>     "Not remaining in proper airways will be taken into account"
>              IMPLIES
>     "Not responding to warnings will be taken into account"
> doesn't hold.

What I said does hold, if the declaration is to be given a reasonable
construction.  Note my use of the word "seems":

        So it seems a post-facto revision of the
        rules of engagement to assert that failure to respond to
        warnings is per se sufficient cause for deadly force *until
        proven otherwise.*

The U.S. declaration that Captains would take into account the fact that
commercial planes might behave *very* irregularly, a fortiori implies they
would strongly credit that such a plane behaving only *slightly* irregularly
(and ignoring or delaying response to warnings was in fact usual) would be
construed as commercial.  Thus, the rule that conclusively defined every plane
taking off from Bandar Abbas as hostile until *proven* otherwise squelches this
promised caution.

True, there was no declaration that the U.S. would not shoot down a plane
thought probably or possibly commercial, but if this were not the case, then
the declaration would be purposeless, and misleading at best.


ANI Response

<sun!portal!cup.portal.com!Patrick_A_Townson@unix.SRI.COM>
Thu Sep 8 17:48:01 1988
Recent correspondents in RISKS have challenged my comment 'no good reason to
conceal telephone number'. Examples of 'good reasons' include calls to hot
lines, counseling services, police officials, and others.

Here in Illinois, the law which enabled 911 Service and required its
implementation in all communities in the state also required that every Police
Department have a seven digit administrative telephone number to receive
non-emergency calls and calls made 'in confidence' by the caller. The Chicago
Police Department & Fire Department can be reached through the main centrex
number for the City of Chicago Offices: 312 - PIG - 4000. To reach individual
police officers, etc, just dial PIG and the desired 4 digit extension. And
not that I would expect everyone to know it, but you *can* override ANI on
911 calls in most cases by knowing which *seven digit number* 911 is translated
into by your local phone office. Here in Chicago it is (or was) 312-787-0000.
Calling that number reaches 'Chicago Emergency' just as surely as 911, and
with only a blank screen for the dispatcher to look at in return. Apparently
when you dial 911, your central office translates it into a seven digit number
and sends encoded information containing *your number, and address* to the
dispatcher when it puts the call through to the ACD (automatic call
distributor) at the police station.

Since posting my original article a couple of days ago, I have researched this
a bit further, and the general thinking among the folks I have contacted at
Illinois Bell is that there will be specific exemptions in the tariff for
calls to crisis lines, counseling services and the like, whereby those groups
will NOT be permitted to subscribe to ANI signaling service.  And those
exceptions mentioned by the writers here do make good sense.

As for stockbrokers and others who are likely to try and make a hard sell,
what do you do now when these people routinely ask for your phone number in
the process of taking your order/giving information? Refuse to give it? Give
a false number? Whatever happened to your spines? Just say NO to the broker.
Just say no to the Operator Who Is Standing By To Take Your Call Now....

Patrick Townson


Proposed ANI Enhancement

<ROB.B%te-cad.prime.com@RELAY.CS.NET>
09 Sep 88 00:30:10 EDT
If digital data is going to be transmitted with phone calls, why not
add a "classification code" (perhaps 3 digits) which may optionally be sent
by the caller?  Add to this legislation which requires all human telephone
solicitors to send a digital class code of "001" with their calls, and all
calls generated by tape-playing sales machines to carry a class code of "002".
The phone company could then offer a "class selection" service whereby the
subscriber could make his phone inaccessible to selected classes of calls.
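A minimal sketch of how the subscriber's side of such a "class selection"
service might behave; the codes "001" and "002" come from the proposal above,
while the function and data structures are my own illustration:

    from typing import Optional

    BLOCKED_CLASSES = {"001", "002"}   # classes this subscriber chooses to refuse

    def accept_call(caller_number: str, class_code: Optional[str]) -> bool:
        """Return True if the incoming call should be put through."""
        if class_code is None:         # ordinary caller, no class code sent
            return True
        return class_code not in BLOCKED_CLASSES

    print(accept_call("617-555-0123", "001"))   # False: human solicitor, refused
    print(accept_call("617-555-0123", None))    # True: ordinary call

The scheme of course depends on solicitors actually sending the codes the
legislation would require, which is the same enforcement problem as the
Massachusetts list described next.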

This is not without (manual) precedent.  All companies using tape-playing sales
machines within Massachusetts are required by law to check the numbers they
will call against a phone-company-maintained list of subscribers who have
requested not to be bothered by these machines.  This list must really work -
I was on such a list and have only recently begun to receive that form of
harassment, commencing right after my area code was changed from 617 to 508.

                                        Rob Boudrie


ANI blocking defeats purpose

Bob Philhower <philhowr@unix.cie.rpi.edu>
Fri, 9 Sep 88 10:09:55 EDT
It is naive to think that an ANI system with a blocking feature
(i.e. you prepend the number you dial with something like *21 to prevent
 your own phone number from being available to the party you call) would
have any effect on those who abuse the anonymity of the current system.
Anyone that concerned about his/her privacy would purchase a device to
sit on the phone line, recognize the first dialed number, delay it, and
send *21 before it.  (If these don't appear immediately, I would certainly
market them myself.)
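For what it's worth, the transformation such a device would perform is
trivial; a sketch (mine, not Philhower's design), with *21 assumed to be the
blocking prefix as in the note above:

    BLOCKING_PREFIX = "*21"   # the blocking code assumed above

    def redial_with_blocking(dialled_digits: str) -> str:
        """Digits the device would actually send to the exchange."""
        return BLOCKING_PREFIX + dialled_digits

    print(redial_with_blocking("5551234"))   # -> "*215551234"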


Credit Card Loss Woes

<microsof!clayj@beaver.cs.washington.edu>
Fri Sep 9 08:49:04 1988
Here's yet another potential problem when one loses a credit card:

Last Monday (9/5) I left my cash machine "Access Card" hanging in an ATM.
Fortunately, this particular machine (a Fujitsu) is smart enough to capture
cards left in it by forgetful users. 

The real problems started when I discovered the card missing about 1900
on Tuesday (9/6).  Since I had no idea where I might have left it, I decided
to be safe, and called the phone number given on the back of my wife's card
for reporting lost or stolen cards. I was greeted at that number by a recording,
which informed me that the issuing bank's offices were closed for the day,
and gave me ANOTHER 800 number to call to report a missing/stolen card.  I
called the second number, and a human answered "National Credit Center", and
took down the expected information about my lost card, including my
work and home phone numbers, address, and mother's maiden name.  She did seem
a bit confused as to the type of card I was reporting as missing, and had
no idea if my wife's Access Card (with the same account number) would be
blocked as a result of this action.

On Wednesday, during business hours (about 1300, or 18 hours AFTER I reported
the card missing), I called the issuing bank's card office, and checked to
see if they had in fact received the report.  They had NOT yet received the
report, and had no other indication that the card was missing.  I gave
them another report, and they "blocked" the card. On a hunch, I called the
branch (of the same bank) where the Cash Machine I had last used was located,
and they told me "Yes, the machine did capture your card, and we destroyed it,
since we were unable to contact you".  I have no idea why they were 
"unable to contact" me, since they have on file (I verified them) there
at the branch (it's our "home" branch) correct phone numbers for me, both
at work and at home, we have an answering machine at home, and my wife was
home all day on both Tuesday and Wednesday!  It also amazes me that the
Card Dept of the same bank had NO IDEA that the card had in fact been
recovered on Tuesday, OVER 24 hours before I called (the bank) to report
the card missing.

The last, and worst, part is that on Thursday, in the space of about 3
hours, we received 4 separate phone calls (3 at home and one at work) from
"Credit Card Protection" services.  They ALL began with some variation
on the theme "We understand you've just lost a credit card" (two of them
knew the NAME of the card we had lost, one just said "credit card" and
one said "MasterCard).  Obviously, SOME organization (either the bank or
the "National Credit Center" had, in less than 48 hours, given (or sold)
our name, phone number, and the fact that we had lost some sort of card
to not one, but FOUR separate companies.

To the bank's credit, when I called their "Corporate Affairs Officer", he
was almost as unhappy as I was, and has promised me a full investigation
and return phone call. He assured me that the selling of that information
was "against corporate policy, and possibly state or federal laws".  I'll
post a followup to this forum with the results of his findings.

Clay Jackson, Microsoft, Redmond, WA    {...microsoft!clayj}
