The RISKS Digest
Volume 10 Issue 38

Friday, 14th September 1990

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

The Weakest Link
Amos Shapir
Relatively Risky Cars
Martin Burgess
Re: The need for software certification
Theodore Ts'o
Re: Expert System in the loop
Steven Philipson
Brinton Cooper
Re: Computer Unreliability and Social Vulnerability: critique
Dan Schlitt
Large analog systems and NSW railroads
David Benson
Analog vs Digital reliability
Bill Plummer
Re: ZIP code correcting software
Bernard M. Gunther
Dave Katz
Info on RISKS (comp.risks)

The Weakest Link

Amos Shapir <amos@taux01.nsc.com>
13 Sep 90 23:13:15 GMT
Tel Aviv, Sep. 13 -

Twelve private investigators and a few employees of the Income Tax bureau were
arrested on suspicion of bribery and breach of trust.  The investigators
allegedly bribed income tax employees in return for computer data, which
included details about taxpayers' income and assets.  Also under investigation
are suspicions that some investigators - most of them former policemen - used
the same means to obtain data from computerized police files as well.

As usual, the human factor is the weakest link in any security system.
Luckily, in this case the data was handed over as printouts; one can easily
imagine what could have happened if the suspects had had their own computers
and modems to contact the compromised systems directly.

Amos Shapir, National Semiconductor (Israel) P.O.B. 3007, Herzlia 46104, Israel
Tel. +972 52 522255  TWX: 33691, fax: +972-52-558322


Relatively Risky Cars

"Martin Burgess" <burgess@sievax.enet.dec.com>
Fri, 14 Sep 90 04:42:44 PDT
Yesterday a program on British T.V. pointed out (again) that car buyers in
Sweden and the U.S.A. can obtain information about the relative safety of
different models, and that this information is not available in the U.K.

The RISKS forum would seem to be an ideal place to post the details, if the
copyright laws allow.

It would also be interesting to see if there were differences between cars sold
into different markets - for example, does the widespread fitting of air
conditioning in the U.S.A. affect the safety of passengers when there is an
accident?
                                        Martin Burgess


Re: The need for software certification

Theodore Ts'o <tytso@ATHENA.MIT.EDU>
Thu, 13 Sep 90 22:11:06 -0400
I am against the "certifying" of software professionals.  My objections fall
basically into two areas.  The first is that there is no valid way to measure
software "competence".  How do you do it?  There are many different software
methodologies out there, all with their own adherents --- trying to figure out
which of them are ``correct'' usually results in a religious war.

For example, all computer science students at MIT are required to take
6.170 (also known as Software Engineering) as a graduation requirement.
(I just graduated in June 1990; the last time this topic came up, I was
afraid to air my opinions because I would shortly be applying to
graduate school.)  But in any case, my personal opinion of that course
is that it is so completely dated that it isn't even funny.  For
example, the course is taught in an archaic language, CLU, instead of a
more modern object-oriented language such as C++.  In the class, we're
told that global variables are always evil --- there's no excuse for
them, ever; yet in order to build the linker (which was written in
CLU), the sources turned on a magic flag so that it could have global
variables to store the symbol table.  I suppose the performance hit of
passing the symbol table object to every single procedure in the linker
was too much to handle.

We were told that the One True Way to program involved keeping a design
notebook and not even trying to code until we had sketched out the whole
thing in pseudo-code, which I guess is the current "in" way to do
structured coding.  (Remember when people said that flow-charting was
the only way to write error-free programs?)  When I and my fellow
students in my group took the course, we used an emacs buffer as our
design notebook, and our pseudo-code was written in CLU itself.

Surprisingly, version control (such as RCS) was never discussed at all.
I suppose the theory was that if we designed everything in pseudo-code
from scratch, we would never need to rewrite or revise any of it, so
version control wasn't considered important.  I will leave it to the Gentle
Reader's judgement as to whether or not you can teach a reasonable
Software Engineering course in today's environment, when several people can be
changing files on a networked filesystem, without at least mentioning
version control.

Our conclusion was that the religion which was preached to us was
developed in the days of teletypes and punched cards, when actually
coding several different algorithms and trying them out was too
expensive; when only one person could modify a file at a time because of
physical limitations, so version control wasn't important; and when
interactive computers were nearly nonexistent, so the only kind of One
True Design Notebook was a spiral bound one.

In any case, we (the students in my programming group) didn't buy any of
it.  So by the deadline at which we were supposed to have produced a design
document detailing how we would do things (and which would be used to
penalize us if we deviated from it in our final implementation), we
wrote an almost completely working prototype.  We then wrote our design
document from our implementation, instead of the other way around.  We
ended up with one of the cleanest implementations and received one of
the highest scores in the class.  In fact, we received a letter of
commendation saying that we were in the top 5% of the class, and that we
deserved some recognition beyond merely getting an "A" in the class.

The point of all of this?  My group managed to get an A in this class
without absorbing any of the religious tenets of Professor Liskov's
programming methodology.  (This is not to say that everything in the
class was bad; but a lot of it was trash, and I had learned most of the
good parts by being a student systems programmer at Project Athena, so
the class was essentially a waste of time for me.)  So how do you
certify someone?  If required to, I can parrot back all of the ``right''
answers on a written exam.  Those answers would also mean very little
about how I really go about my programming work.  (I won't go into the
flame wars about how my personal style is better-or-worse than the
traditional "top-down", or whatever else is in vogue today.  My style
works for me --- I write generally bug-free code, and I won't dictate to
you how to write bug-free code if you won't dictate to me how I should
write mine.)

The second general objection that I have against the certification of
software professionals is that it might very well become a guild.  In my
mind, there is great danger that once you have the people who are
``IN'', they will try to maintain a competitive advantage and keep most
other people ``OUT''.  Mr. Whitehouse has already granted that a college
degree cannot be used to distinguish those who can program well from
those who cannot.  I am very much afraid that any system of
software certification will be used to push one person's pet software
methodology and to exclude people who don't agree with him or her.

Worse yet, it could become like many unions today, and be used to protect
mediocrity within the group against people who are actually better qualified,
but who aren't in the appropriate magic group.  This could be extremely
dangerous if management types were to actually believe that being
``certified'' would mean that the code that person generates is "guaranteed" to
be bug-free, when in fact the code might be much worse than that of someone who
didn't have the magic blessing.  Knowing human nature, this is probably a more
clear and present RISK than the current method, which depends on the free market.

                        - Ted


Re: Expert System in the loop (Thomas, RISKS-10.37)

Steven Philipson <stevenp@decwrl.dec.com>
Thu, 13 Sep 90 16:56:27 PDT
   In Risks 10.37, Martyn Thomas <mct@praxis.co.uk> writes about a Ferranti
study on the "feasibility of integrating a knowledge-based expert system
into naval command systems, to advise commanders in battle".

   Thomas concludes:

>The commander is unlikely to ignore the advice of the expert system,
>unless it is clearly perverse.  This means that the decision (say,
>to launch a weapon) is being taken, in practice, by the expert system.

   Neither conclusion follows.  First, the proposed system is intended
to "advise commanders".  It is NOT stated that the system is intended
to act on its own or to be blindly followed.  Commanders will be very
likely to ignore the advice of such a system — they tend to be very
wary of automated systems, and regard themselves as experts.  Commanders
often get contradictory advice from their human advisors.  The essence
of their job is to evaluate recommendations and make the best decisions
that they can.

   An advisory system that recommends action that is at variance with
other sources will likely be disregarded UNLESS the system makes a
strong case to support its recommendation.  A system that cannot
justify its recommendations will be of little use, as a commander can
not follow a blind recommendation; he has to know the line of reasoning
and facts on which the recommendation is based, regardless of whether
that recommendation comes from a human or automated source.

   Such a system could be of great value.  IF an expert system were
to detect a trend or correlate bits of information that indicate a
significant development, it could issue a recommendation and support
it with the data points and chain of reasoning that were used to
arrive at that conclusion.  The commander could then review that
recommendation and decide whether or not to act upon it given his
evaluation of the validity of the reasoning.  All this may not be
possible to do in practice, but there is the potential for an advance
here in implementing a more effective and safe decision making process.
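
   As a hedged illustration of that idea (the names, rules, and thresholds
below are invented, not taken from the Ferranti study), such an advisory
system might return its evidence and chain of reasoning alongside the
recommendation, so the commander has something to evaluate:

    # Hypothetical sketch: an advisory rule set that reports its recommendation
    # together with the evidence and the rules that fired, so the commander can
    # judge *why* the advice was given.  All names and thresholds are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Recommendation:
        action: str
        confidence: float                               # rough belief, 0..1
        evidence: list = field(default_factory=list)    # data points used
        reasoning: list = field(default_factory=list)   # rules that fired

    def assess_contact(contact):
        rec = Recommendation(action="continue monitoring", confidence=0.2)
        if contact["speed_kts"] > 450 and contact["altitude_ft"] < 1000:
            rec.evidence.append("fast (%d kts), low (%d ft)" %
                                (contact["speed_kts"], contact["altitude_ft"]))
            rec.reasoning.append("fast, low profile is consistent with an attack run")
            rec.action, rec.confidence = "treat as possible threat", 0.7
        if contact.get("civil_transponder"):
            rec.evidence.append("civil transponder code received")
            rec.reasoning.append("civil transponder argues against hostile intent")
            rec.action, rec.confidence = "do not engage; seek confirmation", 0.6
        return rec

    rec = assess_contact({"speed_kts": 480, "altitude_ft": 900, "civil_transponder": True})
    print(rec.action)
    for item in rec.evidence + rec.reasoning:
        print("  -", item)

The point of the sketch is only that the explanation travels with the advice;
whether such rule bases can be made trustworthy in battle is exactly the open
question.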

>The Aegis system on the USS Vincennes led to the death of
>several hundred people when a civil airbus was shot down, on a scheduled
>flight, in weather conditions where the aircraft would be clearly
>recognisable from the bridge of the Vincennes through binoculars.  That
>tragedy was ascribed to the poor user-interface of Aegis, combined with an
>atmosphere of eager tension on board which made a decision to fire more
>likely. How can we stop people building ever-more-complex decision-support
>systems, and thereby losing their ability to take decisions themselves?

   The facts above are incorrect and/or incomplete.  The Vincennes did
not acquire visual contact with the Airbus nor could it have — visibility
was restricted by haze, and the aircraft was shot down before it entered
visual range.  The term "eager tension" is misleading — the transcripts
of the incident and investigation indicate that there was an atmosphere
of tension *and fear*.  It should also be noted that the Aegis system did
not make a decision to fire — that was purely a human decision based on
available (albeit misinterpreted) information.

   A case can be made that an automated system might have concluded
from the available data that the Airbus was NOT conducting an attack,
and could have *advised* the commander to NOT fire.  We should keep
in mind though that warnings were given by a human operator that the
aircraft might be a commercial flight.  These warnings were not heeded --
the preponderance of information available to the commander indicated
that a decision to fire was necessary for the protection of the ship.

   We all wish to minimize risk, but we must recognize that we can
not eliminate it; there are significant risks in human activities
regardless of how they are undertaken.  There will be grave errors
in military operations regardless of the technology that we use.

   We have good cause to be wary of automated systems in critical applications,
but we should not dismiss them out of hand.  Blind trust is dangerous, but so
is blind distrust.  It's our responsibility as computer professionals to see to
it that any computer technology that is developed is done so in a way that
minimizes risks, and that the end users are cognizant of the limitations and
hazards associated with such systems.
                        Steve Philipson


Re: Expert system in the loop (Thomas, RISKS-10.37)

Brinton Cooper <abc@BRL.MIL>
Fri, 14 Sep 90 10:32:35 EDT
Martyn Thomas reports:

>According to Electronics Weekly (Sept 12th, p2):

>"Ferranti will study for MoD the feasibility of integrating a
>knowledge-based expert system into naval command systems, to advise
>commanders in battle.

He then objects to the use of automated launch systems, asserting:

>The Aegis system on the USS Vincennes led to the death of
>several hundred people when a civil airbus was shot down, on a scheduled
>flight, in weather conditions where the aircraft would be clearly
>recognisable from the bridge of the Vincennes through binoculars. That
>tragedy was ascribed to the poor user-interface of Aegis, combined with an
>atmosphere of eager tension on board which made a decision to fire more
>likely.

In fact, he has contradicted his own assertion that the Aegis system was
responsible by pointing out the shortcomings in human judgement, human
psychology, and human I/O.  The principal (and significant) shortcoming
of Aegis in this scenario is that its database apparently did not
include a readily available schedule of commercial airline flights for
the region in which Aegis was deployed.
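
As a purely illustrative sketch (and not a claim about how the real Aegis
database or its interface worked), the kind of cross-check described above is
a small lookup against a published schedule; the flights, times, and tolerance
below are invented for the example:

    # Purely illustrative: cross-check a radar contact against a table of
    # scheduled commercial flights before classifying it.

    from datetime import datetime, timedelta

    SCHEDULE = [
        # (flight, origin, scheduled departure, published airway) -- invented data
        ("XY123", "Airport A", datetime(1990, 9, 14, 9, 50), "A59"),
        ("XY456", "Airport B", datetime(1990, 9, 14, 10, 30), "A59"),
    ]

    def scheduled_flights_near(airway, observed, window=timedelta(minutes=45)):
        """Flights whose airway and departure window plausibly fit the contact."""
        return [(flight, origin) for flight, origin, dep, route in SCHEDULE
                if route == airway and abs(observed - dep) <= window]

    hits = scheduled_flights_near("A59", datetime(1990, 9, 14, 10, 20))
    if hits:
        print("WARNING: contact may be a scheduled commercial flight:", hits)

Even so simple a check can only raise a warning flag; it cannot by itself
resolve the identification.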

If humans insist upon creating conflict situations where decisions
depend upon evaluating large numbers of interacting variables, NOT
to use automated decision support systems would be the tragedy.

He concludes:

> How can we stop people building ever-more-complex decision-support
>systems, and thereby losing their ability to take decisions themselves?

So long as there are ever more complex decisions to be made, we must
either improve the decision maker or give the decision maker some help.

The only other choice is to avoid the complex decisions altogether.

[Because of the nature of this topic, I feel compelled to disclaim any
relationship between what I have written and for whom I work.  These opinions
are my own <...>]
                                                         _Brint


Re: Computer Unreliability and Social Vulnerability: critique

Dan Schlitt <dan@sci.ccny.cuny.edu>
Tue, 11 Sep 90 14:37:29 -0400
Pete Mellor <pm@cs.city.ac.uk> writes in RISKS-10.28:
>The authors do not make it clear initially which of two different meanings of
>'catastrophic' they intend:
>
> a) sudden and unpredictable, 'anything can happen', and
> b) having appalling consequences.
>
>The first is a classification of the effect (which I prefer to call a 'wild
>failure' to avoid confusion), and the second is a measure of the cost. For
>example, when arguing that computers are inherently unreliable because they are
>prone to 'catastrophic failure', they quote the Blackhawk example. The cause of
>this series of accidents was eventually traced to electromagnetic interference,
>as the authors state. While it is probably true that only a *digital*
>fly-by-wire system would exhibit a wild mode of failure in response to EMI, it
>is not until half-way down the next page, where the authors point out that
>digital systems have far more discontinuities than analogue systems, that it
>becomes clear that they are using 'catastrophic' in sense a), and not in sense
>b).
>
It might be well to note that the natural assumption that discrete
systems, such as digital ones, are more prone to wild mode failure
than nice continuous analogue systems is a dangerous one.  The term
"chaos" has become a buzzword and as a result much of the real meaning
has been lost to the world.  One of the important observations was
that, except for some exceptional cases, "wild behavior" is generic
for conservative mechanical systems.  For non-conservative systems the
situation is not too different.  It is only because the most familiar
and most analyzed cases are special that we make the "natural"
assumption of smooth nice behavior.

The unfamiliar but common case is one where a system exhibits smooth
regular behavior punctuated by wild jumps to a wildly different smooth
state.  The timing of the jumps is highly unpredictable because it
depends critically on initial conditions.
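
A small numerical illustration of that sensitivity, using the logistic map (a
standard textbook example, not anything drawn from the article under review):

    # Sensitive dependence on initial conditions: two trajectories of the
    # logistic map x -> r*x*(1-x), started a millionth apart, soon disagree
    # completely.

    r = 3.9                       # parameter value in the chaotic regime
    x, y = 0.500000, 0.500001     # initial conditions differing by 1e-6

    for step in range(1, 41):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print("step %2d:  x=%.6f  y=%.6f  |x-y|=%.6f" % (step, x, y, abs(x - y)))

    # By step 30-40 the difference is of order one: the tiny initial error has
    # been amplified until the two trajectories bear no resemblance to each other.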

>The authors are right to claim that computer systems are too complex to be
>tested thoroughly, if by this they mean 'exhaustively'. It is apparent from
>their example of a system monitoring 100 binary signals that they do mean this.
>
>Exhaustive testing in this sense is well known to be impossible. Even in
>modestly complex systems one can only test a representative sample of inputs.
>Provided the selected sample is realistic, one can, however obtain a reasonable
>degree of confidence that a reliability target has been reached, but *only if
>the target is not too high*.
>
This sort of exhaustive testing is also impossible for general
mechanical systems.  It requires good design to make a mechanical
system which falls in the class of those that are well behaved.  It is
not clear to me why it is not also possible to use similar good design
to construct sufficiently stable digital systems.  The main difference
between the two is the difference in the amount of design experience
we collectively have had in the two cases.
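
For scale, the 100-binary-signal monitor quoted above already defeats
exhaustive testing on back-of-the-envelope arithmetic (the test rate below is
an assumption chosen only to make the point):

    # The monitor of 100 binary signals has 2**100 input combinations.  Even
    # at an assumed billion tests per second, enumeration is hopeless.

    combinations = 2 ** 100                    # about 1.27e30
    tests_per_second = 10 ** 9                 # assumed, generously fast
    seconds_per_year = 60 * 60 * 24 * 365

    years = combinations / (tests_per_second * seconds_per_year)
    print("%.3e combinations -> about %.1e years to enumerate" % (combinations, years))
    # roughly 4e13 years, thousands of times the age of the universe, which is
    # why only a representative sample of inputs can ever be tested.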

Catastrophic failure in the second sense was not allowed to stop the
development of mechanical systems in the past.  Collapsing cathedrals,
the failure of the Tacoma Narrows Bridge, or airplanes falling out of
the sky did not stop architecture, bridge building, or aviation
-- and bridges are still known to collapse.  It would be tragic if
these concerns were to halt development of digital systems.

Mellor is right that the challenge is to develop the tools for
testing and analysis that are required.

Dan Schlitt, Manager, Science Division Computer Facility City College of New
York, New York, NY 10031, (212)650-7885  dan@ccnysci.uucp dan@ccnysci.bitnet


Large analog systems and NSW railroads

David Benson <benson_d@maths.su.oz.au>
Fri, 14 Sep 90 11:21:50 +10
  Having just arrived in NSW I have yet to experience the safe railroads — but
certainly recommend visiting Australia.
  David Parnas comments that the railroads are not properly "large analog
systems".  I certainly agree.  But his examples of "large analog systems" seem
rather small scale stuff to those of us who think about spreadsheets, general
ledger, Star Wars, and other large scale systems on or containing digital
computers.
  So perhaps there are no "really large analog systems."  After all, the
747-400 I rode on for the 15 hour overwater flight from LAX to SYD isn't a
"really large analog system", is it?
  Seriously, I doubt that in the bad old days before computers the systems
in use were as safe, as reliable, or as inexpensive as in the good new days
with computers.  This is certainly the case for manual spreadsheets, manual or
semi-manual general ledger, and other inherently discrete systems often
associated with accounting — or anyway, bookkeeping.  Indeed, there is
something rather inhumane about bookkeeping; it seems far better for the spirit
to relegate this task to mere computers.
  I assert, without even bothering to review the data, that air travel is
vastly safer post-computer (say, since 1970) than pre-computer.
  And so it goes...

  From this casual review of nineteenth and twentieth century technology I am
going to baldly state that the largest analog systems I can think of are all
basically static — such as bridges under load — and are seriously
overdesigned by the standards of this day.  These standards are manifested in
software, airplanes, etc.
  I hope, by this statement, to generate some discussion and thus perhaps some
enlightenment.


Analog vs Digital reliability

Bill Plummer <Plummer@DOCKMASTER.NCSC.MIL>
Fri, 14 Sep 90 10:09 EDT
Can somebody provide a clear explanation of the role of delay in analog
and digital circuits?  We can attempt to compare the two by making
associations such as "the voltage on an (analog) capacitor is the
equivalent of the number held in a digital register".  Then in the
digital world we worry a lot about when the register gets changed and try
to prove that only one writer can exist at any time.  Analog systems
don't seem to have the same concerns.  Or do they?  --Bill Plummer, Wang
Laboratories, Inc.
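
One concrete way to see the digital-side worry about multiple writers is the
classic lost-update race.  The sketch below is generic, not tied to any
particular hardware, and uses threads only as a stand-in for concurrent
writers:

    # Why "only one writer at a time" matters: two threads doing unsynchronised
    # read-modify-write on a shared "register" can lose updates; a lock restores
    # the single-writer discipline.

    import threading

    register = 0
    lock = threading.Lock()

    def bump(n, use_lock):
        global register
        for _ in range(n):
            if use_lock:
                with lock:
                    register += 1
            else:
                register += 1    # load, add, store: another writer can sneak in between

    def run(use_lock):
        global register
        register = 0
        threads = [threading.Thread(target=bump, args=(500000, use_lock)) for _ in range(2)]
        for t in threads: t.start()
        for t in threads: t.join()
        return register

    print("without lock:", run(False))   # may fall short of 1000000: updates lost
    print("with lock:   ", run(True))    # always 1000000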


Re: ZIP code correcting software

Bernard M. Gunther <bmg@mck-csc.UUCP>
Fri, 14 Sep 90 11:00:10 EDT
>Does anyone have experience with this kind of software?  My concerns
>would be as follows:
>  1. What is the error rate with this process?
>  2. What happens when additions and changes are made by the Post
>     Office to their tables but the vendor has not yet gotten the
>     updates out to the end user of the software?  Will the software
>     keep "correcting" a ZIP code which is in fact already correct?

I have had two sets of experiences with this sort of problem:

1 - There must exist a tape which has determined that my ZIP code
02141 is in Boston and not in Cambridge.  I give my address with the
Cambridge town name, and I get letters back with Boston listed
as the town name.  Nothing I do seems to correct this problem.

2 - I have used a PC software package which takes street addresses and town
names and plots the points on a digital map.  In matching a number of bank
branch locations, the success rate of this program without any human guidance
is around 60-65%.  Using a limited amount of intelligence, this goes up to
75-80%.  Achieving 90+% requires calling the branches and getting better
addresses and cross streets.  The key limiting factors are the quality of the
address given and quality of the source maps.
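
One conservative answer to the concerns quoted at the top of this item is to
flag, rather than silently rewrite, anything the local table does not
recognise (a hypothetical sketch; no particular vendor package is being
described):

    # Hypothetical policy: verify against the local table, report mismatches
    # for human review, and never "correct" a code that the (possibly stale)
    # table simply does not contain.

    ZIP_TABLE = {            # tiny stand-in for a vendor lookup table
        "02139": "CAMBRIDGE",
        "02141": "CAMBRIDGE",
        "02108": "BOSTON",
    }

    def check_zip(zip_code, city):
        known_city = ZIP_TABLE.get(zip_code)
        if known_city is None:
            return ("unverified", zip_code)   # possibly newer than the table: leave it alone
        if known_city == city.upper():
            return ("ok", zip_code)
        return ("mismatch", zip_code)         # flag for review rather than auto-rewrite

    print(check_zip("02141", "Cambridge"))    # ('ok', '02141')
    print(check_zip("02142", "Cambridge"))    # ('unverified', '02142')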
                                                     Bernie Gunther


Re: ZIP code validation software

Dave Katz <katz@merit.edu>
Fri, 14 Sep 90 10:27:36 EST
I recently stopped into a stereo/appliance retail outlet to pick up
a $15 accessory for which I wanted to pay cash.  Being in a hurry, I
tossed down exact change and gave my thanks to the salesman.  Shortly
thereafter, lights flashed and beepers beeped, and I was told that
in no uncertain terms that I had to wait for the purchase to be entered
into the computer.

My annoyance grew to anger as the salesman fumbled with the computer,
apparently having difficulty even "logging in."  (If it wasn't the
guy's first day on the job, the store is in real trouble.)  He then
asked for my name and address.  Not wanting to be on their mailing
list, and having real objections to giving my vital statistics away
for a small cash purchase, I protested.  "I have to type it in or
the computer won't allow the sale."  Feeling chagrined, I made up
a fake name and address in an obscure town in central Michigan.
The salesman dutifully typed in the phony address.  The computer
beeped and displayed "Invalid ZIP code."  "The computer must have
made a mistake," said I, feeling like an unmasked felon.  The
salesman looked badly confused, I got even more upset, and finally
another salesman told him to enter "00000", satisfying the machine.

After all that, the item had to be taken to another counter where
a sales slip was printed in triplicate, each copy separately filed,
the inventory control bar code read, and the anti-theft device demagnetized.
[The computer then announced that there were -5 of these items left in stock.]

Total elapsed time was nearly 15 minutes.

The risks are twofold--one being a privacy risk, and the other being
the encroachment of inappropriate, excessively complex technology.

 --Dave Katz
   University of Michigan
