The RISKS Digest
Volume 10 Issue 71

Wednesday, 19th December 1990

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Re: "Computer Models Leave U.S. Leaders Sure of Victory"
Marcus J. Ranum
Karl Lehenbauer
Bob Estell
Compass Airlines disrupted by possible computer attack
Sarge
Punny user interface [anonymous]
Process control risks discussed in IEEE Software
Andy Oram
Unexpected effects of PC's
P.J. Karafiol
ark
Missing reference
Jerry Leichter
A semi-folk tale on the risks of being well-known
Daniel P Dern
Re: the risks of being well known
Scott Schwartz
Re: Organizational Aspects of Safety
Charlie Martin
Info on RISKS (comp.risks)

Re: "Computer Models Leave U.S. Leaders Sure of Victory"

Marcus J. Ranum <mjr@decuac.DEC.COM>
Wed, 19 Dec 90 00:26:20 -0500
    I've always been particularly interested in wargaming, and was really
happy the one time I managed to game with someone who also did occasional
work on Navy wargaming systems.  His descriptions of how gaming parameters are
derived may have been (I hope!) exaggerated, but I don't think I'd place much
confidence in the results the games give. They may reflect a battle - but it's
more likely the game will reflect the relative lobbying skills of the various
groups who set the parameters. I gather that some arguments put forward run
along the lines of:
    "It's impossible to sink a carrier in this scenario," say the carrier
drivers.
    "Well, ok," reply the submariners, "then the probability of
detecting a sub in case X is only Y%".

    The end result is a game - but does it have anything to do even with
the situation it is supposed to be simulating?  (And that's before you
consider that in a real war, you are *never* presented with the situation
you think you will be.)
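
    A toy illustration of how much the negotiated numbers matter (a minimal
Monte Carlo sketch in Python; the scenario and every probability in it are
invented): the sub-detection parameter being argued over all but dictates
the outcome.

    import random

    def engagement(p_detect, p_hit=0.7, trials=100000):
        """Toy carrier-vs-submarine duel: if the sub is detected it is
        driven off; otherwise it gets one shot at the carrier."""
        losses = 0
        for _ in range(trials):
            if random.random() >= p_detect and random.random() < p_hit:
                losses += 1
        return losses / trials

    for y in (0.3, 0.5, 0.7, 0.9):          # the lobbied-over "Y%"
        print(f"P(detect)={y:.1f}  carrier loss rate={engagement(y):.1%}")

    Move Y from 30% to 90% and the carrier loss rate falls from about 49%
to 7%; the "result" is largely whatever the parameter meeting decided.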

    Apparently many of the parameters for weapons systems, etc., are drawn
from manufacturers' specs, and the "actual" combat capability is extrapolated
from published MTBFs, etc. This doesn't always take into account little things
like grains of sand that may skew a value.

    There's also the story of a wargame ("Firefight", by Jim Dunnigan, I
think) developed for the U.S. Army to teach infantry firefight tactics.
Apparently it was almost impossible, under the game rules, for the OPFOR to
win.  (The game has since been released as a commercial boardgame, with the
values "fixed" - or "unfixed".)

    If you're interested in the subject, there's a pretty good book,
_War_Games_ by Thomas B. Allen, that glosses over the military's
flirtation with game theory. It looks like another sad case of trying to make a
"science" out of something, without having any real scientific way of
quantifying the experimental subject. How *do* you make a meaningful statement
about the result of a battle?

    It sure would be rich if we learned, years down the road, that some
ivory-tower theoretical soldiers in the Soviet Union were the REAL driving
force behind perestroika because they couldn't get a 100% win in a conventional
war simulation in Europe, because all their specs were drawn from the malarkey
glossies defense contractors give the Pentagon.
                                                            mjr.


Re: Computer Models Leave U.S. Leaders Sure of Victory

Karl Lehenbauer <karl@ficc.ferranti.com>
19 Dec 90 09:34:58 CST (Wed)
No doubt this article is going to generate a lot of responses.  Peter da
Silva finds this reminiscent of the infamous Club of Rome models that
reduced pollution to a single numerical value and predicted that we'd
all be dead by now.

Rolling the clock back twenty years or so, had the ADE (Armored Division
Equivalent) measure been around, we could be fairly certain that the
assigned ADE of US forces in Viet Nam far exceeded that of the enemy and
that, therefore, a quick victory would be certain.

For a more recent example, consider that the ADE of the USSR forces
in Afghanistan surely exceeded that of the Afghan rebels, yet they lost.

-- uunet!sugar!ficc!karl (wk)


Risks of believing war game models

"FIDLER::ESTELL" <estell%fidler.decnet@scfb.nwc.navy.mil>
19 Dec 90 10:04:00 PDT
A comment on the RISKS-FORUM Digest item of Tuesday 18 December 1990
RE: "Computer Models Leave U.S. Leaders Sure of Victory" is in order.

1. Computer models are usually naive, or even stupid.  That does NOT
necessarily mean that the one(s) cited in the referenced article were of poor
quality; but it DOES mean that these models do NOT speak for intelligent,
thoughtful PEOPLE - such as Gen. Colin Powell, Sec. Def. Dick Cheney, Senator
Sam Nunn, etc.

2. How do I know that models are so often naive?  First, I have read much
available literature, from the RAND Corp., the GAO, etc. (e.g., "DOD
SIMULATIONS: Improved Assessment Procedures Would Increase the Credibility of
Results", GAO/PEMD-88-3, Dec. 1987; and "Analysis for Military Decisions", E.S.
Quade, editor, Rand McNally & Co., 1964; and "Handbook of Systems Analysis",
Hugh Miser and E.S. Quade, eds., North-Holland, 1988.  Once you get into these,
follow the trail that the bibliographies give.  Like me, you can focus your
interest, and still read three dozen books and conference papers.)

Moreover, I have used some of the "better regarded" models, including
one that has been "certified" by the Navy.  (I found and fixed several
fatal bugs in the code; the model has not since been recertified.)

Finally, I have designed and written and used my own model; it is as "good"
(based on comparison of results) as many of the better known larger, older
models.  I call it "Ape" because it mimics the others.  It is robust and
elegant, but no genius.  In particular, it is subject to the "garbage in
garbage out" syndrome; i.e., if you tell Ape that your aircraft cannot be shot
down by their tanks, Ape will let you do that.  Mother Nature may not; as the
late Richard Feynman noted, She always enforces all of Her rules, even when we
(scientists et al) forget.
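
A minimal sketch of the syndrome (my toy code in Python, not Ape's): a
Lanchester-style attrition loop will cheerfully accept a kill probability of
zero for one side, and then "prove" that side's aircraft invincible.

    def attrition(blue, red, p_blue_kills, p_red_kills, rounds=20):
        """Crude attrition loop: each round, a side's expected losses are
        proportional to the other side's surviving shooters.  The model
        never questions the probabilities it is handed."""
        for _ in range(rounds):
            blue_losses = red * p_red_kills
            red_losses = blue * p_blue_kills
            blue = max(0.0, blue - blue_losses)
            red = max(0.0, red - red_losses)
        return blue, red

    # Tell the model their tanks cannot shoot down our aircraft...
    print(attrition(blue=100, red=100, p_blue_kills=0.1, p_red_kills=0.0))
    # ...and it obligingly reports a flawless victory: (100.0, 0.0)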

3.  In grad school, one of my profs had worked for Getty Oil, as an Ops
Researcher.  He told of the rivalry between the Getty half brothers.  One had
done a "study" to prove something; the other asked my prof to "... prove my
brother wrong."  They ran "their best model" and it said that the other brother
was right; so, they "tuned" the input data and reran the model; and it still
said the other brother was right; so ...  on the 242nd retry, it finally said
that the other brother was wrong.  Remarkably, when these "results" (sans the
history of 243 runs) were presented to J.P. Getty, he believed them, and
cancelled the first brother's plans.  Does DoD *ever* do things like that?  You
read the Wall Street Journal and Aviation Week articles about SDI, B-2, A-12,
etc., and draw your own conclusions.  My invitation to those so sure of
success is: why not go over to Arabia and lead the troops to victory?  You
could be a hero!
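
A caricature of that 243-run procedure (the "model" and its numbers here are
pure invention, in Python): perturb the inputs until the answer comes out
"right", then report only the final run.

    import random

    def model(demand, cost):
        """Stand-in for 'their best model': projected profit of the plan."""
        return 2.0 * demand - cost

    random.seed(1)
    demand, cost, runs = 100.0, 250.0, 1   # honest inputs: plan loses money
    while model(demand, cost) <= 0:        # rerun until brother comes out "wrong"
        demand *= random.uniform(0.99, 1.02)    # "tune" the input data
        cost *= random.uniform(0.98, 1.01)
        runs += 1
    print(f"run {runs}: profit = {model(demand, cost):.1f}"
          f"  ({runs - 1} earlier runs quietly discarded)")
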
                                        Bob


Compass Airlines disrupted by possible computer attack

Sarge <sarge@batserver.cs.uq.oz.au>
19 Dec 90 06:27:48 GMT
Compass Airlines, a new airline in Australia (operating since deregulation
a couple of weeks ago), has reported that its reservation system was being
jammed.  On one day alone, 25,713 calls had been made.

The Chief Executive suspects that a computer was used to dial the reservation
numbers repeatedly, aborting each call as soon as it was answered.
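
For scale (a back-of-the-envelope check, in Python), 25,713 calls in a single
day works out to one call roughly every 3.4 seconds, sustained around the
clock - a pace hard to credit to human dialers:

    calls = 25713
    seconds_per_day = 24 * 60 * 60
    print(f"one call every {seconds_per_day / calls:.1f} seconds, on average")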

Note that they do NOT think any rival airline is involved.


Punny user interface

<[anonymous]>
19 Dec 90
While reviewing the design of an important control system, I came across the
following (slightly edited):

      "The OK flags are used to request operator input for an operation.  If
      bit 2 is set by the operation, the keyboard control task displays a
      message, eg. OPERATION OK? If the operator types Y (yes), bit 1 is set.
      If (s)he types 9 (no - in German) bit 3 is set."

I don't know the full history of the decision to use "9" to represent "no",
but it seems to me that someone with a fondness for puns and a lack of concern
about the user interface managed to get a little joke included in the
design.  Please don't accuse me of lacking a sense of humour; if I had come
across this in a game program I would have laughed heartily.  But the user
interface to a critical control system must be as clear and understandable as
possible.  An operator wanting to abort an operation should not have to
remember that for this particular system, the opposite of "Y" is "9".
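
A minimal sketch of the alternative (my Python, not the reviewed system's
code): accept only an unambiguous yes/no vocabulary and re-prompt on anything
else, rather than overloading "9" as a pun on "nein".

    def confirm(prompt="OPERATION OK? (Y/N) "):
        """Ask until the operator gives an unambiguous answer."""
        while True:
            answer = input(prompt).strip().upper()
            if answer in ("Y", "YES"):
                return True
            if answer in ("N", "NO"):
                return False
            print("Please answer Y or N.")   # no puns in a control room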

The field of computer programming abounds in little jokes like this.  Some can
actually be useful (eg. the little trash can on the Macintosh screen which
bulges when full) but others are tiresome or actually dangerous.  People who
want to use computers to accomplish an important job should not have to learn
all the "in" jokes of the fraternity of programmers.


Process control risks discussed in IEEE Software

Andy Oram <andyo@westford.ccur.com>
Wed, 19 Dec 90 10:37:28 EST
The following recent article should interest all RISKS readers:

    Leveson, Nancy G., "The Challenge of Building Process-Control
    Software," IEEE Software, 7:6, November 1990, pp. 55-62.

Her subject matter touches on almost every application where a computer
interacts with real-life activities -- including interaction with a user who
supposedly has final control over the situation.  Compared to some posters on
this forum, her premise is an optimistic one: she takes for granted that
computers should be used to control airplanes, factory production, power
plants, etc.  But she's very open about the difficulties of predicting and
handling events.

The article includes some classic examples of computer-aided processes that
went haywire.  More significantly, she carefully tries to distinguish
different levels of human and technological risk, and proposes research areas
for dealing with each one.

I'm sure others have read this article, since it's in a major journal.  Since
I'm still a novice in the issues involved, I'd enjoy seeing comments -- please
mail them to andyo@westford.ccur.com.

Andrew Oram, Concurrent Computer Corporation, One Technology Way, Westford, MA
01886 (508) 392-2865                      {harvard,uunet,petsd}!masscomp!andyo

This is not an official statement from Concurrent, but my own opinion;
therefore you can count on having one person who stands behind it.


Unexpected effects of PC's

P.J. Karafiol <karafiol@husc8.harvard.edu>
Wed, 19 Dec 90 11:55:29 -0400
In RISKS 10.59, Jerry Leichter says,

>each [lawyer]  is expected to do most of the word processing for his own jobs.
>The comment from a friend - also a lawyer, but one very involved in the
>future of the profession - is that any lawyer starting out today had better
>learn to use a word processor; it'll be part of his job within a few years
>at most

I find this somewhat hard to believe.  My father is a lawyer at a largish firm,
and, indeed, many of the lawyers do some of their own word processing and are
encouraged to know how to do it *in case of emergency*.  But considering that
the average lawyer there bills in the vicinity of $150/hr (if not more!) and
the secretaries bill $30/hr to the client, most clients would rather the firm
use more secretarial help than have lawyers spend precious time and money word
processing.  Since these billing schedules are even more skewed in the large NY
firms, I hardly think that self-wordprocessing lawyers are going to be a major
trend in the legal profession.
                            == pj karafiol


Unexpected effects of PC's

<ark@research.att.com>
Wed, 19 Dec 90 15:17:14 EST
I have a lawyer friend who told me that she is forbidden to use a computer,
word processor, typewriter, or any other similar labor-saving device.  The
firm's rationale is that they charge by the hour.


Missing reference

Jerry Leichter <leichter@lrw.com>
Wed, 19 Dec 90 09:09:12 EDT
An article of mine in RISKS 10.69 referred to another article in the same
issue, pointing out the difficulty of tracking down an apparently-solid
bibliographic reference given in the latter.  However, the second article
didn't include the reference!  Apparently it was lost in editing.

For completeness, that reference was:  Business Week, 17 Dec 90, page 96C.

                            -- Jerry


A semi-folk tale on the risks of being well-known

Daniel P Dern <ddern@world.std.com>
Tue, 18 Dec 90 22:05:56 -0500
Jerry Leichter's (leichter@lrw.com) story on many printers within a network
sharing the same name calls to mind a story I heard back when I was at BBN.
This was in the period when the company was just starting to market/sell packet
networks to commercial customers (i.e., private nets that weren't part of the
ARPA/Internet).  To initialize the IMP (older name for packet switch), the
network installer made a tape from an active ARPAnet machine, and loaded that
into the customer's single node.

I could all but hear this poor lonely packet switch waking up, as it were, and
soliloquizing, "OK, I guess somebody had to restart me.  Let's see who else is
out there, and tell them I'm awake again.  Hello?  Hello?  Is anyone out there?
Hey, where is everybody?  Dagnab, I'm a stub again... say, you don't suppose
that war broke out and I'm the only one left do you?  Ah, here's packets to
handle!  That's a relief.  Yo! Guys!  Anybody home! ..."

Sure, it worked.  But how cruel...

Daniel Dern


Re: the risks of being well known (Leichter, RISKS-10.69)

Scott Schwartz <schwartz@groucho.cs.psu.edu>
Tue, 18 Dec 90 23:37:53 EST
How many machines are there out there named, say, "vax1"?  We have one here.
There is one at the University of Delaware.  I think Digital used to have one.
Now of course these all have their names qualified so that they are unique, but
if you happen to have a friendly terminal server that caches recently accessed
hosts, and also does name completion, then ``telnet vax1'' may not get you
where you expect to be.
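
The failure mode is easy to sketch (hypothetical cache contents, and a
Python caricature of such a server's matching rule): completion simply hands
back whichever cached name happens to match first.

    # Hypothetical cache of recently accessed hosts, in arrival order.
    cache = ["vax1.udel.edu", "vax1.dec.com", "vax1.cs.psu.edu"]

    def complete(prefix):
        """Return the first cached host starting with the prefix, the way
        a friendly name-completing terminal server might."""
        for host in cache:
            if host.startswith(prefix):
                return host
        return None

    print(complete("vax1"))    # vax1.udel.edu - perhaps not where you meant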

On a related note, psuvax1 hasn't been a vax for many years, but the name has
stayed the same: the machine gateways lots of mail and the system administrators
wanted to avoid any surprises.


Re: Organizational Aspects of Safety (Hoffman, RISKS-10.69)

Charlie Martin <crm@cs.duke.edu>
Wed, 19 Dec 90 10:22:19 -0500
Lance Hoffman noted the SCIENCE article pointing out that the cost of fixing
structural problems in drill rigs is two orders of magnitude greater than the
cost of engineering reviews to catch the problems before construction.  It's
interesting that this is about the same cost saving reported in software
engineering (errors caught early in design versus errors caught during O,E&M).

It looks like "it cost 1/100th as much to do it right as it costs to do
it over" may be a general rule.

Does anyone know of similar cost data in other engineering fields?

Charlie Martin (...!mcnc!duke!crm, crm@summanulla.mc.duke.edu)
O: NBSR/One University Place/Suite 250/Durham, NC  27707/919-490-1966

    [By the way, I noticed yesterday that the ENTIRE Contents section was
    omitted from the distributed version of RISKS-10.69.  I added it to
    the CRVAX archive copy.  Sorry.  PGN]
