The Risks Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 12 Issue 31

Thursday 12 September 1991


o Re: Export controls on workstations
Neil W Rickert
Brinton Cooper
Haakon Styri
o "Checkless society"
Daniel B Dobkin
o Re: Multinational Character sets
Dik T. Winter
Robert Ullmann
Hugh Davies
o Re: +&*#$
Mike Morris
o Re: M16 and James Fallows' "Two Weapons"
Jon Jacky
Tom Faller
o Junk Mail -- In memoriam, Dave Sharp
Peter Mellor
o Risks of assumptions?
R. Cage
o The seriousness of statistics mistakes
Jeremy Grodberg
o Risk Assessment: a specific experience
Justine Roberts
o Re: risk analysis
Victor Yodaiken
o Averages and distributions
Jerry Leichter
o Info on RISKS (comp.risks)

Re: Export controls on workstations (Markoff, RISKS-12.30)

Neil W Rickert <>
Wed, 11 Sep 91 13:09:29 -0500
This proposal should properly be referred to as the "full employment for
people in Singapore, Taiwan, Hong Kong and Japan" bill.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115

Export controls on workstations (Markoff, RISKS-12.30)

Brinton Cooper <abc@BRL.MIL>
Wed, 11 Sep 91 16:40:24 EDT
The real absurdity here is the chauvinistic attitude in DoD that US-made
computer workstations are the only "inexpensive but powerful" products on the
world market or that such US-made products are even cost-competitive.  The net
result of such export controls may be one more nail in the coffin of US-based
manufacturing and is likely to do absolutely nothing to thwart terrorism.

The computer-based RISK here is based upon permitting morons to make decisions
about computers.

Re: Export controls on workstations (Markoff, RISKS-12.30)

Yu No Hoo <>
Thu, 12 Sep 91 12:23:59 BST
Why not?  The European computer industry probably needs something like this to
make a comeback.  Having the DoD create a niche in the market sounds like a nice
thing to me.  The end result for the DoD paper pushers will probably be *less*
control, not more.
                    Haakon Styri

"Checkless society"

Daniel B Dobkin <>
Thu, 12 Sep 91 10:30:28 EDT
The 11Sep91 New York Times carried an article on the first business page about
the growing use of imaging systems by banks: instead of returning cancelled
checks to the customer, they return scanned images (sometimes as many as
eighteen on a page).  This form of confirmation has been familiar to American
Express customers for some time now.

To be sure, the technology offers some advantages: for example, the images
can be reduced, or they can be enlarged for sight-impaired customers; they
are reproduced on standard cut-sheet paper, which can be drilled for use in
a ring binder.  (The banks' marketing people see even more advantages:
offering free binders to new customers, printing on drilled paper, printing
marketing messages between the checks; the list goes on and on.)

Many banks offer reduced fees to customers who choose this option; indeed,
the article reports that further fee reductions are offered to customers
who don't want any checks (scanned images or otherwise) returned to them at
all.  If a cancelled check becomes necessary (as proof of payment, in case
of a dispute with a credit card company, etc.), the bank will provide an
image free of charge.

There was some discussion here lately of the ease with which bogus checks
can be created by use of relatively cheap technology (laser printers and
desktop publishing software).  It seems that as the imaging technology
gains more public acceptance, and as the banks push it more aggressively
to reduce the costs of check processing, there is a further RISK: if
the scanned images are acceptable proof of payment, can the use of
the same cheap technology to create bogus records be far behind?

Re: National characters on car plates

Dik T. Winter <>
12 Sep 91 01:55:00 GMT
Torsten Lif writes about the possible risk because Finnish car plates from the
A*land Islands (to follow his spelling) have a national character, and wonders
what problems that might give in other countries.  I think the problem is
moot in this case, as there are (as far as I know) no duplicates in Finland,
whether you leave off the ring or not.  There is, however, a serious risk for
people from Yugoslavia.  One of their national characters is S with haček
(an upside-down circumflex).  There are indeed cars where the single
distinction between two number plates is that haček.  E.g., cars from Sarajevo
always start with the letters SA, while cars from Šabac also start with SA,
but there the S has a haček.

By the way, in Torsten Lif's own country (Sweden) until recently no national
characters were used on car plates (A ring, A diaeresis and O diaeresis).
With the introduction of vanity plates these are allowed, which again might
result in confusion.
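
The collision is easy to demonstrate.  Here is a sketch (in Python, with
hypothetical plate strings; the actual Yugoslav plate format may differ) of
what happens when a registration database strips national characters down to
ASCII:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Decompose characters, then drop the combining marks.

    This mimics what an ASCII-only database does to national
    characters, deliberately or otherwise.
    """
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

# Hypothetical plates: one from Sarajevo (SA), one from Sabac (S with haček).
sarajevo_plate = "SA 123-45"
sabac_plate = "\u0160A 123-45"   # Š = S with haček

assert sarajevo_plate != sabac_plate            # distinct on the metal
assert strip_diacritics(sarajevo_plate) == strip_diacritics(sabac_plate)
# After stripping, the two records collide.
```

Two physically different vehicles become a single record once the haček is
discarded.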

dik t. winter, cwi, amsterdam, nederland

re: Universal Character Set

Robert Ullmann <Ariel@Relay.Prime.COM>
11 Sep 91 14:11:16 EDT
What Hugh Davies and Kim Greer write on character sets (RISKS 12.30) was mostly
correct, but is now out of date.

An ISO working group is working on a new version of DIS 10646 (eventually to
become IS 10646), a multi-byte code set that attempts to be comprehensive.

The Unicode Consortium and the ISO WG have agreed to merge their efforts, to
create one (draft) standard.  (The previous DIS 10646 failed in the balloting
for, among other things, not addressing Unicode: several NO votes, including
the U.S., stated that having two different codes, 10646 and Unicode, was not
acceptable.)

One of the proposed representations of the code set is "upward compatible" with
ASCII-7, and useable in mail (with 8-bit support).  (Send a message to
ISO-Char-Subscribe@List.Prime.COM to subscribe to a demonstration list.)
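
For readers curious how such an "upward compatible" representation can work,
here is a sketch using the variable-length encoding that eventually emerged
from the 10646/Unicode merger (UTF-8); the byte layout of the 1991 proposal
may well have differed, so take this as an illustration of the principle only:

```python
# Sketch of an ASCII-compatible multi-byte representation, using the
# scheme that eventually came out of the 10646/Unicode merger (UTF-8).
ascii_text = "RISKS Digest"
mixed_text = "\u00c5land"   # A-ring, as in the Finnish islands

ascii_bytes = ascii_text.encode("utf-8")
mixed_bytes = mixed_text.encode("utf-8")

# Pure ASCII is encoded as itself: a 7-bit mail path passes it untouched.
assert ascii_bytes == ascii_text.encode("ascii")

# Non-ASCII characters become multi-byte sequences whose bytes all have
# the high bit set, so they can never be mistaken for ASCII characters.
assert mixed_bytes == b"\xc3\x85land"
assert all(b >= 0x80 for b in mixed_bytes[:2])
```

The design choice worth noting: because the extension bytes never overlap the
ASCII range, old software that only understands ASCII degrades gracefully
instead of corrupting the stream.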

Two other points: there is a (set of) 8-bit sets defined by IS 8859, which
"solve" the substitute-character problems that Davies laments (created by
ECMA-35).  8859 is used by (among other things) the X Window System and
PostScript Level 2.  (Of course, lots of people still use ECMA-35.)

The ASCII/EBCDIC problem is "solved": SHARE (the IBM users group) has defined
an invertible (reversible) mapping table, used by BITNET to Internet gateways.
(I will supply a copy to anyone who wants it)
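
The property that makes the SHARE table work is invertibility.  A toy
round-trip check of the kind such a table must pass (the byte values below
are a small illustrative fragment, not the SHARE table itself):

```python
# A toy round-trip check for an invertible ASCII<->EBCDIC mapping.
# The byte values below are a tiny illustrative fragment, not the
# actual SHARE table.
ascii_to_ebcdic = {0x41: 0xC1, 0x42: 0xC2, 0x4E: 0xD5, 0x6E: 0x95}
ebcdic_to_ascii = {e: a for a, e in ascii_to_ebcdic.items()}

# Invertible means: no two ASCII codes map to the same EBCDIC code,
# so translating out and back always returns the original byte.
assert len(ebcdic_to_ascii) == len(ascii_to_ebcdic)
for a in ascii_to_ebcdic:
    assert ebcdic_to_ascii[ascii_to_ebcdic[a]] == a
```

A gateway that uses a non-invertible table silently merges distinct
characters, which is exactly the damage the SHARE work was meant to stop.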

Robert Ullmann, Prime Computer, Inc.                    +1 508 620 2800 x1736

Re: Multinational Character sets

Hugh Davies <>
Thu, 12 Sep 1991 02:21:37 PDT
Firstly, an apology, related to the topic under discussion. In my last posting,
I used a dagger character to reference the footnote on the XCCS. I am writing
this on a Xerox 6085 workstation, which uses the XCCS character set, and I
forgot that the dagger would not be translated correctly by our mail gateway,
so a string of weird characters appeared in the digest. A case in point, as if
we needed one.

Secondly, Kim Greer writes:

>  Perhaps we have overlooked the risk of forgetting the origin of words and
>what an acronym *originally* meant.  "ASCII", as we all remember, stands for
>American Standard Code for Information Interchange, the key word being
>"American".  Would it not be stretching things a bit to expect
>non-"American" language nuances (like umlauts) to automatically fit in?

This would be entirely true, except that American computer manufacturers
cheerfully exported their computers all over the world, without making any
changes for the local language. It was ASCII or nothing. (Or EBCDIC or
nothing!). They also didn't (and in most cases, I suspect still don't)
translate their manuals into the local language. You don't read (American)
English? Tough.

Perhaps this should be considered as another RISK that I hadn't considered?
What happens when a standard is applied well outside its original area?  In the
case of ASCII, the shambles we have today.  The fact that the 'A' in ASCII
stands for "American" is irrelevant today. I suspect there are far more ASCII
based computers outside America than inside it, and it's about time that we all
realised that it is quite simply not good enough to expect a customer to learn
a foreign language in order to use a product. You might also like to think
about the fact that the majority of the people in the world don't speak English
anyway. Does *your* computer "do" Pin-Yin and Cyrillic? (Mine does!)

Incidentally, the designers of ASCII wrought better than we might think. The
ESCAPE character is supposedly intended to allow a system to insert non-ASCII
characters (to "escape" from the ASCII set). Pity it's never used that way.

Hugh Davies, Rank Xerox, Multinational Customer & Service Education- Europe,
Welwyn Garden City, Herts. England.

Re: +&*#$ (Moore, RISKS-12.21)

Mike Morris <>
Thu, 12 Sep 1991 08:26:10 GMT
This is true in California - which has a 7-character plate format.  My amateur
radio callsign has 6 characters (note that ham calls can be from 4 to 6
characters).  _Almost_ all the dispatchers know that a plate of fewer than 7
characters includes a trailing space by default.  If you run my callsign plate
on the state DMV (Dept of Motor Vehicles) computer as WA6ILQ or WA6ILQ<space>
it comes up just fine.  If you run it as <space>WA6ILQ, or WA<space>6ILQ, or
any other combination, it comes up with "Record not on file".  This has caused
me serious problems.  Once I was pulled over by a cop who was as fascinated as
I was when my plate wouldn't come up and we spent some time with his patrol car
terminal discovering this quirk.  You can imagine the reaction I get now when I
tell the cops "Tell the dispatcher to run it as 'WA6ILQ<space>'".
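
The behaviour described is consistent with an exact match against fixed-width
records.  A hypothetical sketch of such a lookup (the field width and record
contents are assumptions, not the DMV's actual schema):

```python
# Hypothetical sketch of a fixed-width plate lookup of the kind the
# DMV terminal behaviour suggests: records are keyed on exactly seven
# characters, with short plates padded by trailing spaces.
PLATE_WIDTH = 7
records = {"WA6ILQ".ljust(PLATE_WIDTH): "Morris, Mike"}

def lookup(query: str):
    # Exact match against the padded key: no other normalization is done.
    return records.get(query.ljust(PLATE_WIDTH)[:PLATE_WIDTH])

assert lookup("WA6ILQ") == "Morris, Mike"      # trailing pad added for you
assert lookup("WA6ILQ ") == "Morris, Mike"     # explicit trailing space
assert lookup(" WA6ILQ") is None               # leading space: not on file
assert lookup("WA 6ILQ") is None               # embedded space: not on file
```

Padding only on the right reproduces the quirk exactly: any space anywhere
but the tail shifts the key and the record is "not on file".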

Mike Morris   WA6ILQ   PO Box 1130    Arcadia, CA. 91077  818-447-7052 evenings

Source for Fallows' "Two Weapons"

Wed, 11 Sep 1991 12:02:29 -0700 (PDT)
"Two Weapons" (about the M16 and F16) is a chapter in James Fallows' book,
NATIONAL DEFENSE, Vintage Books, 1982.  I think the hardcover edition was from
Random House, 1981, but I'm not sure.

Much of Fallows' book is a critique of technically complex weapons systems,
which many RISKS readers would find interesting.  Another excerpt from the
book, describing Fallows' boyhood visit to a SAGE installation, appeared in
RISKS a few years ago.

- Jon Jacky, University of Washington, Seattle

Re: M16 and James Fallows

Tom Faller <>
Thu, 12 Sep 91 09:10:10 CDT
James Fallows' article "Two Weapons" is actually a chapter of his book "National
Defense".  The book discusses the perceptions used in forming a national
defense policy, shows where these conflict with reality and with how the average
person mistakenly perceives military life and its tools, and discusses trends
in future military policy.  The book recently went through a revised edition, I
believe.
Other good books on this subject include James Dunnigan's "How to Make War",
and a book called "The Great Rifle Debate", by an author whose name I forget,
but who does an excellent job of showing how the military armorers mind works.

The tie-in with computers is that most of these books include examples of
sloppy war-gaming, over-reliance on favorable models, and an "if it's got more
electronics, it's got to be better" attitude.  A little-discussed fact is
brought out: our own electronic Maginot Line of "smart" electronic warfare.
One thing nobody wants to admit too loudly is that we may be back to
rifle-based warfare real soon if attacked with a nuclear weapon, due to the
Electro-Magnetic Pulse (EMP) given off by a nuclear explosion. There are
estimates that one good nuke, exploded in near-space over Kansas could fry most
of the missile controls, computers, radios, phone switches, smart weapons,
late-model automobile engine electronics, and other items this country depends
on, nearly coast-to-coast. Nobody's really sure how serious this is, although a
lot of testing and "hardening" goes on. And it's a losing game trying to keep
ahead by shielding, a bigger bomb is just a lot cheaper than building defenses
against it. There's some concern that any nuclear war will only last until the
first few shots, as they will screw up the rest of the system, and any other
missiles in the air. It kind of acts as a deterrent if you know that you only
get one shot at it, and then you have to rebuild your arsenal from the chassis
up.
                                  Tom Faller

Junk Mail -- In memoriam, Dave Sharp

Pete Mellor <>
Wed, 11 Sep 91 19:52:16 PDT
My apologies if this is of marginal relevance to the main subject matter of the
lists to which I have mailed it.

UK readers might be interested to watch the forthcoming edition of Equinox on
Channel 4 this coming Sunday, entitled "Junk Mail".

It was originally scheduled for 14th Oct., but was announced last week to
be broadcast on 15th Sep. (I forget the time at which it will go out.)

The blurb on the advertising postcard reads: "How much do direct marketers
know about us and how do they get our names? Why would they want to put a
brightly coloured fish in our mail?". (Photo on reverse of strange-looking
man holding the fish in question over a glass of water.)

The programme was produced by Orlando Television Productions Ltd., for WGBH
Boston in association with Channel 4.

Orlando was essentially Dave Sharp.  As well as being a very good friend of
mine, he was an extraordinarily talented film maker, and his one-man company
established an excellent reputation for scientific (and other) documentary
films.

"Junk Mail" promises to be a very witty and thought-provoking piece of TV
journalism. It is one of the last films Dave completed before his untimely
death in the collision between a 737 and a private aircraft at Los Angeles in
February this year.

My thanks to those who responded to my e-mail request for information about
the accident.

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq.,London EC1V 0HB +44(0)71-253-4399 Ext. 4162/3/1 (JANET)

Risks of assumptions? (Re: Chase, RISKS-12.28)

R. Cage <fmsrl7!>
11 Sep 91 21:58:31 GMT
>People don't compute the crash-safety of new automobiles (well, I'm sure that
>they do at some early stage), they run them into walls to see what happens.

As it turns out, this is almost exactly backwards.  Running a car, especially a
hand-built prototype car, into a wall is horrendously expensive.  Exercising a
FEA model inside a Cray is very cheap in comparison, and it takes a lot less
work to reconstruct a computer model after a crash, or modify it to try a
design change.

About the only crash-testing we do these days is to confirm the results of the
computer models.  The sanity-checking is done; we have no chance of GIGO
resulting in bad products getting out.  The effectiveness of the models is a
result of a great deal of work in building and testing them.  It's a good thing
that the properties of sheet metal are not very difficult to determine.
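
The economics Russ Cage describes can be illustrated with a deliberately tiny
stand-in for a crash model: one lumped mass (the vehicle) hitting a crush zone
modeled as a linear spring.  Real FEA models have thousands of elements; the
point is only that a design change costs a rerun, not a prototype.  All
numbers below are made up for illustration:

```python
def peak_deceleration(mass, stiffness, speed, dt=1e-5):
    """Integrate m*x'' = -k*x from first contact until the car stops."""
    x, v = 0.0, speed
    peak = 0.0
    while v > 0.0:                  # until the crush zone stops the car
        a = -stiffness * x / mass   # spring force from the crushed zone
        v += a * dt                 # semi-implicit Euler step
        x += v * dt
        peak = max(peak, -a)
    return peak                     # peak deceleration, m/s^2

# "Two prototypes", differing only in crush-zone stiffness:
soft = peak_deceleration(mass=1200.0, stiffness=2.0e5, speed=15.0)
stiff = peak_deceleration(mass=1200.0, stiffness=8.0e5, speed=15.0)

# A stiffer structure stops the car harder: "modifying the model" is
# one changed argument, not a rebuilt prototype.
assert stiff > soft
```

For this idealized spring the analytic peak is v*sqrt(k/m), so the integration
can be sanity-checked against closed form, which is the same confirmation role
the physical crash test plays for the real models.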

Having people just assume that climate models, or drug models, or
population models are just as reliable is, IMHO, a big RISK.

Russ Cage

The seriousness of statistics mistakes

Jeremy Grodberg <>
Wed, 11 Sep 91 20:33:05 PDT
RISKS-12.28 contained two instances of statistical fallacy, the less important
of which was corrected in 12.29, and for which the moderator referred us back
to earlier threads to which I had contributed.  Lest you think I whine about
people's misuse and misunderstanding of statistics just so I have something to
complain about, I want to point out that the second statistics mistake was
truly a life-or-death decision.

In RISKS-12.28, Mark Fulk writes:
>The Maternal Serum Alfa-fetoprotein (MSAFP) test is administered to pregnant
>women in order to screen for a broad range of congenital defects of the fetus
>[which I will simply call "the disease" -- JG][...]

Let's presume Mr. Fulk's base data are correct: that the MSAFP test has a 10%
False Positive rate and is confirmed by amniocentesis, which carries a 1% chance
of inducing abortion; and let's also say that the chance that someone
taking the test actually has the condition tested for is 1 in 10,000, which is
the high end of the risk range he gives.

I believe he made the wrong decision about having the test based on an
incorrect analysis of the data.  He claims a .1% chance of the MSAFP leading to
an inadvertent abortion of a healthy fetus.  I can only guess that his reasoning
was: 10% of people taking the test have healthy babies but will test positive,
and 1% of that 10% will lose their babies because of the amnio, and 1% of 10%
is .1%, so there is .1% chance of killing a healthy fetus.  Unfortunately, this
analysis is wrong, because of an important, less common error (which is
becoming more common as people deliberately try to mislead with statistics):
misunderstanding the definition of a statistic.

Mr. Fulk made his mistake when he assumed a 10% False Positive rate meant that
10% of the people taking the test get positives that are really negative.
However, it actually means that 10% *of the positive results* are really
negative.  Putting this together with the 1 in 10,000 chance for a True
Positive, we come up with a 1 in 90,000 chance of taking the test and getting a
False Positive (90,000 tests yield, on average, 9 True Positives and hence 1
False Positive), or a 1 in 9,000,000 chance of the MSAFP test leading to the death of
a healthy fetus.  Thus the test will detect 900 afflicted babies for every 1
healthy one it harms.  This is the real decision making criterion, and speaks
much more highly for the utility of the test.
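
The arithmetic under the two readings of "10% False Positive" can be checked
mechanically.  The following sketch uses exact fractions and the figures
quoted above (prevalence 1 in 10,000, no False Negatives, a 1% amniocentesis
loss):

```python
from fractions import Fraction

prevalence = Fraction(1, 10_000)
amnio_loss = Fraction(1, 100)

# Reading 1 (Mr. Fulk's): 10% of *all tests* are False Positives.
fp_per_test_fulk = Fraction(10, 100)
risk_fulk = fp_per_test_fulk * amnio_loss
assert risk_fulk == Fraction(1, 1_000)          # his "0.1%" figure

# Reading 2 (the definition argued here): 10% of *positives* are false,
# i.e. 9 True Positives for every 1 False Positive.  With 9 True
# Positives per 90,000 tests, that is 1 False Positive per 90,000 tests.
fp_per_test = prevalence * Fraction(1, 9)
assert fp_per_test == Fraction(1, 90_000)
risk = fp_per_test * amnio_loss
assert risk == Fraction(1, 9_000_000)

# Afflicted fetuses detected per healthy fetus harmed:
detected_per_harmed = prevalence / risk
assert detected_per_harmed == 900
```

The same inputs give answers 10,000-fold apart, which is the whole point
about knowing the definition of the statistic before using it.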

Let me quickly add that the above analysis is inaccurate (but close), because I
don't have all the necessary data.  False Negatives need to be factored in
correctly (which can be tricky), and there may be other data which is a better
basis for predicting the possibility that a specific individual (such as a 29
year old healthy woman) will have a false positive versus having the disease.
Also, a True Positive refers to someone who has the disease and tests positive,
which is a subset of the people that have the disease, although the above
analysis assumes that they are one and the same (no False Negatives), which Mr.
Fulk tells us is not correct.  My point is that the above analysis brings us a
lot closer to the best information than Mr. Fulk's did, because of one simple
mistake he made.

There are a number of other interesting aspects to this story which I
want to point out, in no specific order.  Medical professionals have
difficulty with these statistics, too.  I asked a few people who have
been involved with clinical drug testing (where the data for such statistics
is gathered and analyzed), and none of them were sure off the top of their
head of which of the two versions of % False Positive was correct, although
they all knew where to look it up, and most made the right guess.  Clearly
the people Mr. Fulk talked to were not conversant enough with the statistics
to correct his mistake.

What is worse, for some reason (which I leave to the reader to wonder about),
Mr. Fulk did not find it unbelievable that his doctor would recommend a test
which was 10 times more likely to kill his fetus than the disease was (.1% or 1
in 1,000 by Mr. Fulk's analysis, vs. 1 in 10,000 for the disease), and 1,000
times more likely to give an erroneously positive result than it was to detect
the disease (10% or 1 in 10 vs. 1 in 10,000).  I'm surprised and dismayed that
he did not notice this and check further to find his mistake.  Although we in
the General Public have problems with statistics, our medical and scientific
establishment, through researcher care, peer review, and governmental
regulation have a very good record on handling the statistics carefully and
correctly before the medical public policy decision is made.  If the test was
as bad as Mr. Fulk thought, standard practice would have been formulated to
recommend against testing in his case.  For example, because the prevalence of
smallpox is so low, you are now more likely to get it from the vaccine than
from anywhere else, so only people with higher-than-average risk factors (like
people who work around smallpox-infected patients) are given the vaccine.  If
anything, public policy decisions are more likely to deprive you of beneficial
tests because of the monetary cost (e.g. physicals for people in their 20s)
than to suggest spending money on tests with high risk/reward ratios.

So here is another lesson on the risks and dangers of innumeracy.  This is why
I'm on a mini-crusade about statistics.  This stuff *is* hard, and we can't all
be experts on it, but let us at least learn to know when and why we need to ask
the experts, what we need to ask them, and what we can do to check on what they
tell us.  The life you save may be your own.

Jeremy Grodberg

Risk Assessment: a specific experience (Wayner, RISKS-12.29)

Wed, 11 Sep 91 20:12:31 PDT
In RISKS-12.29, Peter Wayner writes that amniocentesis-caused abortions are "a
violation of the Hippocratic Oath. The patient died because the doctor was
curious..." This is a distortion. The amniocentesis procedure is NOT carried
out because a doctor is curious.  It is requested by parents and/or recommended
by physicians because there is reason to believe that there may be a problem
with the pregnancy or the fetus. Any halfway good doctor will inform parents of
the abortion risk which accompanies the procedure, and the parents can then
refuse the procedure if they wish. Wayner seems to assume that abortions are
caused only by human intervention. The percentage of naturally occurring
abortions is much higher than 1-2%.

Justine Roberts, 152 Sycamore Ave., Mill Valley, CA 94941
jroberts@ucsfvm.bitnet      (415) 388 6814

Re: risk analysis

victor yodaiken <>
Thu, 12 Sep 91 08:03:51 -0400
At least some of the post-Three-Mile-Island nuclear energy risk assessment
literature has, IMHO, a properly humble tone.  Here are three examples
(transcription errors are mine; apologies to the authors):

     The formulation of societal risk as an expectation value runs into
     difficulties when the probability of the event is low, but the
     consequence is high if it occurs. In this case, there would be no
     consequence or a very large consequence. Therefore the use of
     expectation value does not adequately reflect the real societal risk
     because the numerical value does not reflect a consequence that would
     actually occur.          [...]

     The criteria recommended in this article have no fundamental
     basis. Indeed, there is no fundamental approach to this issue
     and no way of proving whether any proposed criteria are right or
     wrong except by using them over a period of time and discovering
     whether the costs, risk, and other consequences of their use meet the
     requirements of society.

D.J. Higson, Nuclear Safety Assessment Criteria,
Nuclear Safety 31-32 April-June 1990 193-185
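
The point of the first passage can be made concrete with two hypothetical
hazards (the numbers are invented for illustration) that share an expectation
value:

```python
from fractions import Fraction

# Two hypothetical hazards with the same "societal risk" when risk is
# computed as an expectation value: E[consequence] = p * C.
# Hazard A: frequent, small consequence (10 deaths, once a decade).
p_a, c_a = Fraction(1, 10), 10
# Hazard B: rare, catastrophic (a million deaths, once per million years).
p_b, c_b = Fraction(1, 1_000_000), 1_000_000

# Identical expectation values: one expected death per year for each.
assert p_a * c_a == p_b * c_b == 1

# Yet in any given year Hazard B yields either no consequence at all or
# an enormous one -- exactly the case the quoted passage says the single
# expectation number fails to reflect.
```

Collapsing both hazards to "one death per year" hides everything society
actually cares about in choosing between them.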

     Finally, and perhaps the most important lesson learned, risk analysis
     helps recognize questions that can be posed in scientific terms but
     cannot be answered by science (page 102)

Paolo F. Ricci in the Brookhaven/EPRI workshop on "Health and Environmental
Risk Assessment" (Pergamon Press, 1985)

[In reference to Probabilistic safety analysis methods]

  The modeling of dependent events, particularly human error and external
  events, is still less advanced. It should be noted that  the qualitative
  aggregate results of PSAs, e.g. probability for core melt, for releases of
  radioactive materials or for health effects on the public should not be
  interpreted as frequencies in a statistical sense, although they are
  expressed in like units. Rather, probability is a numerical measure of a state
  of knowledge, a degree of belief, a state of confidence.

L.V. Konstantinov "On the Safety of Nuclear Power Plants"
Nuclear Engineering and Design 114 (1989) 2 Page 183

Averages and distributions

Jerry Leichter <>
Thu, 12 Sep 91 10:16:34 EDT
A recent RISKS article repeated the old platitude that "by definition, half of
all people have below average intelligence" (or are "below average drivers",
or whatever).  This led to the ritual replies, truncated by the editor,
pointing out that, if by "average" you mean "mean" (the usual case), then this
need not be true.  In fact, it's easy to construct distributions that make it
"as false as you like".
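
The construction is easy to make explicit:

```python
# A distribution where almost everyone is above the mean:
# ninety-nine people score 100 and one scores 0.
scores = [100] * 99 + [0]
mean = sum(scores) / len(scores)
assert mean == 99.0

above = sum(1 for s in scores if s > mean)
below = sum(1 for s in scores if s < mean)
assert above == 99 and below == 1
# 99% of the population is "above average": the platitude fails badly
# for a sufficiently skewed distribution.
```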

That's all very true, but it's important not to replace one mantra by another.
Many measures of the real world have normal distributions.  Most deliberately
constructed measures have normal distributions, essentially by construction.
For a normal distribution, or anything at all close to it, it is a fact that
half of all measured values will be below the mean.

If you think the "average" in "average intelligence" really refers to "mean",
then you need to have a numeric measurement of intelligence to make any sense
of the remark.  While it's been years since I looked at the literature in the
field, all the various IQ scales I know of have very close to normal
distributions.  (They can't be EXACTLY normal since, if nothing else, an IQ
can't be less than 0, and a normal distribution has infinite tails.)  For IQ's,
"by definition, half of all people have below average intelligence" is true.
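
For a symmetric distribution the mean and the median coincide, so the
statement holds; a sketch using an IQ scale modeled as a normal distribution
with mean 100 and standard deviation 15 (the usual Wechsler-style norming):

```python
from statistics import NormalDist

# An IQ scale modeled as N(100, 15).
iq = NormalDist(mu=100, sigma=15)

# By symmetry, exactly half of all scores fall below the mean.
assert iq.cdf(100) == 0.5

# And the familiar normal shape: roughly 68% within one standard deviation.
within_one_sigma = iq.cdf(115) - iq.cdf(85)
assert abs(within_one_sigma - 0.6827) < 0.001
```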

If by "intelligence" you mean some vague idea about how bright people are,
then you can only interpret "average" as a qualitative English term.  My
Roget's Thesaurus has the following "cluster" under MEAN:  mean, middle state,
middle ground; golden mean, juste-milieu [F.]; medium, happy medium; average,
balance, normal, rule, run, generality; middle term [logic.], mezzo termine
[It.].  Another cluster, under GENERALITY, lists:  The generality, average,
ruck, run, general ~, common ~, average or ordinary run.  From this it's clear
that English speakers use average for a cross between "mode" and "median",
depending on context.  Actually, I'll argue that when we say something is
"average", we aren't just picking a sense of "mode" or "median" at random; we
are assuming that the two are roughly the same.  After all, the rough opposite
of "average" is "extreme" or even "unusual".  Think about exactly what you
are saying when you describe something as of "average" quality.

For a purely qualitative "average", a statement about how many items are above
or below the average is difficult to interpret.  In one way, it's pretty
meaningless:  Most things will be "average"; if we don't attempt to sub-divide
those, then we're only talking about the outliers, which are presumably rare.
Saying about half are above and half below the big central block means little.

In fact, however, I suspect most people, if pushed to divide things up into,
say, three groups - below average, average, and above average - will put more
things in average than either of the others, but will put roughly equal
numbers in the "above" and "below" groups.  This seems fundamental to what we
mean by "average".  (This would make an interesting and easy experiment.  Any
social scientists or linguists want to follow up on it?)  To the degree that
my prediction is right, the statement that "half are below average" isn't
quite true, since so many will turn out to BE average; but of the ones that
AREN'T average, it WILL be true.

If I remember right, the place this issue first came up was the statement
that more than half of all drivers (particularly men) believe they are "above
average" drivers.  It would be quite reasonable for a large fraction of
drivers to believe that they are "average or above" - that simply requires a
broad, fuzzy middle ground, typical of qualitative measures.  But it requires
a very bizarre and unlikely measure of driving ability for a large fraction to
actually be ABOVE average:  It requires that some small number of drivers be
EXTREMELY bad.  Not only can I see no plausible evidence for this, I can
instead see plausible evidence for the opposite:  Race drivers and other
professionals are clearly MUCH better than most drivers.

Mathematics is all well and good, but the APPROPRIATE APPLICATION of
mathematics is what's useful!
                            -- Jerry
