The RISKS Digest
Volume 14 Issue 10

Wednesday, 25th November 1992

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Police and Database [another name confusion]
Stanley (S.T.H.) Chow
Nuclear-plant risks in the US
Alan Wexelblat
Re: Election HW/SW Problems
Bill Murray
Voting-machine humor
submitted by Joshua E. Muskovitz from rec.humor.funny
Re: Smart cars?
Brinton Cooper
Re: Installer problems
Richard Wexelblat
Re: How to tell people about risks?
Richard Stead
John A. Palkovic
Arthur Delano
Phil Agre
George Buckner
Chaz Heritage
Re: Stock price too high?
John R. Levine
Randall Davis
Info on RISKS (comp.risks)

Police and Database [another name confusion]

"Stanley (S.T.H.) Chow" <schow@bnr.ca>
Tue, 24 Nov 1992 13:18:00 +0000
In the Nov 23 (today) edition of "The Ottawa Citizen", there is a story
attributed to the "SouthamStar net". The story describes the problems of one
Steven Reid - there appear to be two people by that name, with the same
birthday, living in the same city (Montreal). It recounts the usual identity
mix-up and problems well known to the readers of this forum. The scary part is
the quote attributed to Lt. Gerard Blouin of the Montreal Police:

  "it's up to him to change his name somehow. If he can modify his name,
   just by adding a middle initial or something, it would help him."

For those unfamiliar with Canada, Ottawa is the capital, and Montreal and
Toronto are the two biggest cities in Canada, each with millions of people.
One would have expected the computer system to be able to deal with this
problem, but it would appear that in at least one public institution, the
computer rules supreme.
                              Stanley Chow        (613) 763-2831

BNR, PO Box 3511 Stn C       schow%BNR.CA.bitnet@cunyvm.cuny.edu
Ottawa Ontario Canada  K1Y 4H7 ..!uunet!bnrgate!bcarh185!schow


Nuclear-plant risks in the US (Re: Ilieve, RISKS-14.09)

Hip but Harried <wex@MEDIA-LAB.MEDIA.MIT.EDU>
Tue, 24 Nov 92 14:50:44 -0500
Peter Ilieve gives a wonderful summary of the implementation changes which
led to a near-incident in Britain.  In the US, we used to have a process
which was supposed to avoid this.  Hearings and design reviews were held
before a plant was to be built, and then a second set of hearings and reviews
was held after the plant was built, before licensing took place.

Unfortunately, and with very little fanfare, the Congress bowed this year to
pressure from the Bush Administration and the nuke industry and passed a law
which (among other measures) eliminates the post-construction hearings and
reviews before licensing takes place.  Thus we now have a system which is
guaranteed to produce unknown flaws from implementation changes such as the
door-interlock changes Ilieve reported on.

--Alan Wexelblat, Reality Hacker and Cyberspace Bard, Media Lab - Advanced
Human Interface Group   wex@media.mit.edu 617-258-9168 wexelblat.chi@xerox.com


Re: Election HW/SW Problems (Mercuri, RISKS-14.09)

<WHMurray@DOCKMASTER.NCSC.MIL>
Wed, 25 Nov 92 13:23 EST
>Note that election equipment does not come under the Computer Security Act,
>and hence it is not required to conform to any Orange Book standards. The
>question that concerned citizens have been asking for years is: WHY NOT?

The implication of Ms. Mercuri's rhetorical question is that the Computer
Security Act should apply to computer-based machines for recording,
tabulating, or reporting votes, that the Orange Book would then apply to such
systems, and that all of that would be helpful.  (Be careful what you ask for,
you might get it.)

The explicit purpose of the Computer Security Act was to limit the influence
of the Department of Defense over non-defense uses of computers.  While
assuming that NIST would not re-invent the wheel, it provided that standards
for non-defense government and the private sector would be promulgated by
NIST.  However, control over voting procedures is reserved to the states.

The Orange book was developed almost two decades ago to respond to a
Department of Defense problem.  It deals with the protection of national
security classified data in "shared resource" (i.e., multiuser) computing
systems.  It was intended to deal primarily with the operating system on the
assumption that the policies of interest could be most effectively and
efficiently enforced there.  It was not intended to deal with problems, like
those in vote recording, tabulating, and reporting, that are specific to the
purpose for which the computer was being used.

While we have learned a great deal from the Orange Book about how to enhance
the trust in computers, neither the Orange Book nor any of its progeny is
directly applicable here.  Any attempt to apply them would be at best
misguided and inappropriate, and perhaps counter-productive or even
mischievous.

To a great extent, the Orange Book has been overwhelmed by changes in computer
economics and styles of use.  It was based upon the implicit assumption that
computers were expensive; it did not anticipate cheap computers.  It assumed
that operating systems were dear, limited in number, and practically free from
non-management interference.  It assumed that the operating system could be
made trusted, at least for limited purposes.  It did not anticipate operating
systems of millions of lines of code that sell in the tens of millions of
copies for tens of dollars and that are not under the control of management.

It assumed that the portions of the operating system that were responsible for
controlling access to operating system resources could be separately
identified, isolated, and so limited in scope and complexity that they could
be made effective and reliable and demonstrated to the satisfaction of a third
party (not user or vendor).  It assumed that all of these things could be done
cheaply enough to be covered by the reduction in risk that would result.

It assumed that control of access to the resources known to the operating
system would be sufficient, i.e., that all of the resources of interest were
identified to and known by the operating system.  It assumed that
applications could not economically be made trusted and that uses which
required that they be so were intractable.

It assumed that preserving the confidentiality of data was the problem of
interest.  While it dealt with the integrity of the operating system, it was
essentially silent on the integrity of the data or of the results produced by
the application.  Of course, these are exactly the problems of interest in
recording, tabulating, and reporting votes.

The requirement for trust in the use of computers for recording, tabulating,
and reporting votes is very high.  Such systems must be developed with
extraordinary, not "standard," rigor and they must be developed in the light
of independent scrutiny.  While the operating system, if any, must protect the
application controls, it is the controls specific to and implemented in the
application that are of interest.  The problem is much more analogous to the
problem of ATMs than the class of systems dealt with in the Orange Book.

These systems should be enabled and adopted under law that is specific to
them, not generalized across more generic problems.  While the Computer
Security Act and the Orange Book have much to teach, they are neither
applicable to nor sufficient for this problem.

William Hugh Murray, Information System Security, 49 Locust Avenue, Suite 104;
New Canaan CT 06840    1-0-ATT-0-700-WMURRAY    WHMurray@DOCKMASTER.NCSC.MIL


A rec.humor.funny post about voting machines

"Joshua E. Muskovitz" <rocker@vnet.ibm.com>
Tue, 24 Nov 92 13:24:32 EST
      A tale from my first experience as a poll worker last Tuesday:

          On Election Day the sash cord broke on one of the voting
      machines in the precinct where I was working as a poll
      worker.  The curtain couldn't be closed to permit a secret
      ballot.  The Judge of Elections took the machine out of
      service and sent for a technician who arrived an hour later
      and spent about 10 minutes working on the machine.  As he
      came out of the polling station another poll worker asked:
      "Well, is the machine fixed?"

          The technician replied as he hurried on to his next
      assignment:  "Now, now, we don't like to use the 'F-word' on
      Election Day.  The word is 'repaired'."

Selected by Maddi Hausmann.  MAIL your joke (jokes ONLY) to funny@clarinet.com
Attribute the joke's source if at all possible.  A Daemon will auto-reply.


Re: Smart cars? (Mestad, RISKS-14.06)

Brinton Cooper <abc@BRL.MIL>
Wed, 25 Nov 92 14:56:35 EST
Steve quotes from the December issue of Popular Mechanics on the installation
of on-board radar to identify obstacles in the path of a vehicle and to
monitor steering, braking, speed, and closing rates.  He reports on work to
link the radar and cruise control, then the radar and the brakes.  He
comments:

   "The RISKS seem obvious enough to me..."

I grant the risk.  Again, however, I make a plea for identifying the risk of
NOT doing something like this.  At present, there simply is no way for
authorities in certain western and southwestern states to know of near
zero-visibility (due to fog, sand, dust,...) on some portion of an interstate
highway.  This deficiency leads to multiple, telescoping collisions involving
dozens of vehicles and severe injury and death.  The risk of not doing
something like this is the unabated continuation of such injury and death.
                                                                   _Brint


Re: Installer problems (Thorson, RISKS-14.08)

Richard Wexelblat <rlw@ida.org>
Tue, 24 Nov 92 14:40:02 EST
This is one of those features/flaws that occur in user-friendly (ha) systems.
I installed the new MS Windows 3.1 and discovered that the drivers for my
Brand-X video controller didn't work under 3.1.  No problem, temporarily I can
use the system in bigprint mode.  So I call the Brand-X distributor.  No
problem, they'll send me the new drivers as soon as they come in.  "I dunno,
maybe a few months."

Who makes the chips on the Brand-X board?  Trident.  Aha! a reputable company.
I call Trident.  No problem, they'll ship the W/3.1 drivers immediately.

(A few days pass).

The drivers install easily, but guess what, the otherwise quite adequate
documentation doesn't list Brand-X so which of the various options shall I
select?  I call Brand-X.  "Oh, _those_ drivers?  Yeah, we have them."
Documentation?  No problem, they'll ship it to me in a few months...  Customer
assistance?  "Sorry, we don't make that model any more.  Just try whatever
resolution you want."

So I use the hyper-handy install-it-yourself function under Windows.  No
problem, the driver installs fine.  Oops!  It was an unsupported driver; the
screen is 100% rectangular garbage.  How do I back off?  Easy, just delete all
of Windows and install it all over again.  Since I can't read the screen, that
good old install-it-yourself utility is unusable.

Turn sharply left twice in France, Brand-X!

--Dick Wexelblat  (rlw@ida.org) 703 845 6601   This message is not copyright.
  Please feel free to use it in any context, at any time, with or without
  attribution.  Quotes out of context and parodies are encouraged.


Re: How to tell people about risks?

Richard Stead <stead@seismo.CSS.GOV>
Mon, 23 Nov 92 11:30:14 EST
Mike Coleman recommends an analogy to the Richter scale as a means to relate
risks to ordinary people:

> communicating statistical risks to ordinary people.  The Richter scale seems
> to be successful in allowing people to think about and compare seismic events

While his proposed risks scale may be useful in this respect, it would not
reflect the success of the Richter scale.  As a seismologist who has attempted
to explain quake magnitude to laymen on countless occasions, I can vouch for
the fact that it does not communicate well to laymen.  Few understand the
concept of logarithms, and beyond that, they have a very hard time
understanding what is being measured.

The Richter scale was developed as a device to aid in cataloging quakes, based
on analogy to star magnitudes; it was not designed with the public in mind.
It measures the "size" of a quake based on the amplitude of shaking it
can induce (with many constraints).  It does not measure energy, frequency,
duration, etc. (although these can be related to magnitude).  Yet I have
met few laymen who understand why uttering "That felt like a magnitude 6!"
after feeling a quake makes no sense.  The notation is difficult, too, without
knowledge of logarithms: "If a 6 is 10 times as big as a 5, then is a 5.5
five times as big?"  I find that it is the single most misunderstood aspect
of seismology.  I suspect that the problems (logs and what value is measured)
would similarly be problems with risks reported this way.
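
A rough sketch of the arithmetic behind that confusion (a deliberately
simplified model, added only for illustration, which treats magnitude as a
pure base-10 logarithm of relative shaking amplitude):

    # Simplified model: magnitude ~ log10 of relative shaking amplitude,
    # so ratios follow powers of ten, not a linear rule.
    def amplitude_ratio(m1, m2):
        return 10 ** (m1 - m2)

    print(amplitude_ratio(6.0, 5.0))   # 10.0  -> a 6 shakes ~10x a 5
    print(amplitude_ratio(5.5, 5.0))   # ~3.16 -> not "5 times as big"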

Richard Stead   stead@seismo.css.gov


Re: How to tell people about risks?

John A. Palkovic <john@warped.phc.org>
Sat, 21 Nov 92 21:16:56 -0600
Mike Coleman writes:

>The Richter scale seems to be successful in allowing people to think
>about and compare seismic events (earthquakes); perhaps we can develop
>a similar scale for statistical risks.

It seems to be successful in demonstrating that newspeople are ignorant of
logarithms. Many times I have heard a newsperson say knowingly that an
increase of 1 on the Richter scale represents an increase of 10 in the
strength of the quake. This is incorrect.

The equation defining the Richter scale is log E = 11.4 + 1.5M, where E is the
energy released in ergs and M is the magnitude. Thus a delta M of 1 represents
an increase of 10^(1.5) or about 32 in the amount of energy released. People
who say otherwise do not really know what they are talking about.
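
Plugging the equation in directly makes the point concrete (a minimal worked
sketch using the relation quoted above; illustrative only):

    import math

    def energy_ergs(magnitude):
        # log10(E) = 11.4 + 1.5 * M, with E in ergs
        return 10 ** (11.4 + 1.5 * magnitude)

    ratio = energy_ergs(6.0) / energy_ergs(5.0)
    print(round(ratio, 1))              # ~31.6, i.e. roughly 32x the energy
    print(round(math.log10(ratio), 2))  # 1.5, as expected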

John Palkovic      home: john@phc.org || work: jp@ssc.gov


Re: How to tell people about risks?

Arthur Delano <ajd@oit.itd.umich.edu>
Sun, 22 Nov 92 17:58:20 EST
In Risks 14.08, several contributors (Stuart Wray <scw@cam-orl.co.uk>,
Rob Cameron <cameron@cs.sfu.ca>, and Mike Coleman <coleman@rocky.CS.UCLA.EDU>)
suggest a standardized table of comparative risks.

The problem with presenting potential of risk by analogy is with the
implications of the analogy.

Even though, statistically, one is as likely to be killed in an airplane
collision as to identify a blade of grass and then hit it with a golf
ball, one _seems_ more probable than the other.  Airplane collisions are
real tragedies that many people are continuously working to avert; the silly
golfing bet is trivial and may never have happened.  Although these
events suggest extreme differences in emotional content, almost any two
events of similar probability can carry different implications, based as
much on personal experience as on the general significance of the event.

Citing the very small odds for an absurd situation (hitting the blade of grass)
as being similar to those of failure for an item one is defending can make the
chance of failure seem absurd.  When the mathematics cannot be fudged, their
relevance can be changed through the context in which they are presented.  The
audience could then determine the significance (to them) of the event
regardless of the odds and decide if the risk is worth taking.
                                                                    AjD


telling people about risks

Phil Agre <pagre@weber.ucsd.edu>
Mon, 23 Nov 92 15:42:57 -0800
RISKS-14.08 contains a number of suggestions about how to inform people about
technological and other risks.  Those interested in the professional literature
about the subject should look at various books and manuals by Peter Sandman and
his colleagues.  The state of the art in implementing such "risk communication"
schemes is (believe it or not) the "CAER" (Community Awareness and Emergency
Response, pronounced "care") program of the Chemical Manufacturers Association.
I personally have grave reservations about this entire field, but it's
certainly much better than what it replaced.

Stuart Wray <scw@cam-orl.co.uk> suggests that we "tell people the odds and
compare them with the odds of various every-day disasters".  This is a very
common approach, and something that Sandman (for example) tends to recommend
against (though not as strongly as he recommends against comparing the odds of
getting cancer from your local factory to getting cancer from eating a peanut
butter sandwich, which tends to infuriate people).  One of the many problems
with numbers like Wray's (for example, "Odds of dying in a car accident in
the next year: 1 in 10000") is that they're not very meaningful.  They don't
reflect "my risk" of dying in a car, but rather the average across some large
population (Wray doesn't say which one).  If we computed the statistics for
incrementally more closely defined groups (urban, light drinker, no previous
accidents, and so forth), we would get a lot of different numbers, some of
which would probably be more impressive than others.
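
A toy calculation shows how far the subgroup figures can stray from the
population average (every number below is invented purely for illustration;
they are not Wray's, and not real accident statistics):

    # Invented figures: group -> (people in group, deaths per year)
    groups = {
        "urban, light drinker, no prior accidents": (600000, 30),
        "rural, long daily commute":                (300000, 45),
        "prior accidents, heavy mileage":           (100000, 50),
    }
    people = sum(p for p, d in groups.values())
    deaths = sum(d for p, d in groups.values())
    print("population average: 1 in", people // deaths)   # 1 in 8000
    for name, (p, d) in groups.items():
        print(name + ": 1 in", p // d)                     # 20000, 6666, 2000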

My impression in talking to people who communicate risks professionally is that
auto-accident statistics (and many others, especially drowning for some reason)
are a form of urban myth.  The numbers circulate among the community of people
who, for one reason or another, tend to think of risks from industrial
activities as minimal, and they serve largely as self-reassurance, since
ordinary people have a strong tendency to reject such statistics as
self-serving nonsense.  (On this subject I strongly recommend the organizing
materials of the Citizens' Clearinghouse on Hazardous Waste, which you can read
about in William Greider's brilliant and highly relevant new book, "Who Will
Tell the People?: The Betrayal of American Democracy", New York: Simon and
Schuster, 1992.)
                         Phil Agre, UCSD


Re: Telling people about risks (RISKS-14.09)

"George Buckner" <GRB@nccibm1.bitnet>
Wed, 25 Nov 92 10:21 EST
On the subject of how to inform the public of risks:

ABC News had a story this week about work in progress to fit automobiles with
computerized roadmaps and sensing/control systems which are intended to
automate car driving - leaving even the driving to the computer.  They quoted
one man involved (I didn't catch who it was or who he was with) as saying
(paraphrased):

  "a certain" (high) percentage of auto accidents are caused by driver
  error.  If we can replace the driver with a computer, then we can
  reduce the accident rate accordingly.

Ah yes, the computer to the rescue - again - replacing us fallible humans with
the perfectly functioning machine, and reducing the accident rate due to
operator error (whoever or whatever that operator may be) to zero.

Perhaps this is best filed under "Risks of assuming non sequiturs".
Regardless of which school of thought we subscribe to re: software quality
measurement/safety assurance, this kind of simplistic and misleading language
- and thinking - should never be heard coming from systems designers.


Risks of fashionable risk-metaphors

<chaz_heritage.wgc1@rx.xerox.com>
Wed, 25 Nov 1992 06:23:02 PST
In RISKS-14.08 Rob Cameron suggests the following risk-metaphors for
"communicat[ing] risk information to the public in a meaningful manner":

>"The risk of your child seriously injuring himself in 3 hours of
playing with this toy is about the same as that of being an automobile
passenger for 4 minutes."<

>"The risk of long-term liver damage from this medication is approximately
the same as the risk of cancer from smoking 2 packs of cigarettes."<

Despite this being essentially a good idea, he could not have picked two worse
metaphors. The perception, on the part of the majority of the general public,
of these two activities - smoking and riding in cars - is about as slanted as
it can possibly be by an endless barrage of moral homilies from the health &
fitness weirdos and the environmentalist ban-it-all brigade, who have singled
out the cigarette and the automobile, respectively, as Global Enemy No. 1.

Mr. Cameron seems to be idealistic enough to imagine that people will read the
numbers and possibly even calculate with them. They will not. They *may* see
that the risks of the toy can be compared in some way with those of cars - and
leave the toyshop at once. They *may* see that the risks of the medication can
be compared in some way with those of cigarettes - and sue the doctor (unless,
of course, they are unregenerated and unrepentant smoking drivers like me!).

For myself I do not believe that there is any point in attempting to convey
numerical, let alone statistical, information to a 'general public'
increasingly deskilled, de-educated, aggressive and litigious.

An example of the perceptions of the general public: European food
manufacturers, who are subject to labelling requirements, find it expensive to
produce a different label for each country, translating the complex chemical
names of food additives for each language group.

Solution: give each additive - they are more or less standardised - a European
code number (e.g. "E123"). This can then be looked up in a language-specific
list to provide the chemical name in the required language, if anyone is
really that interested.

Result: the general public, egged on by extremely ill-informed and
sensationalist 'consumer investigative journalists', boycott products that are
now labelled as containing what they call 'E-numbers' - exactly the same
additives as they placidly accepted previously. Indeed, the term 'E-numbers'
is now used by the general public to describe universally vile, infallibly
carcinogenic and, above all, *secret* additives with which mad-scientist
food-technologists try covertly to poison their kiddies.

Consequence: the food manufacturers, shaking their collective heads in
disbelief, are rapidly going back to printing complex chemical names in each
of the European languages, and passing on the cost of this exercise to their
few remaining customers.
                                   Chaz


Re: Stock price too high?

John R. Levine <johnl@iecc.cambridge.ma.us>
19 Nov 92 00:09:17 EST (Thu)
The company with the $10,000 share price is Warren Buffett's
Berkshire-Hathaway.  He personally owns enough stock to control the company
and doesn't want a lot of shareholders, so he refuses to split the stock.
It's been trading in the thousands for years (shoulda bought a share or two at
$7K), so it's pretty stupid if the exchange didn't see a $10K price coming.

For quite a while, its price has been at least an order of magnitude greater
than the next most expensive listed stock, and its entry in newspaper stock
price lists, which are invariably generated by computer from data sent by the
Associated Press, is often mangled.

Share prices in the thousands of dollars are quite common in unlisted stocks
and foreign companies.  The Swiss drug company Hoffman-Laroche long ago had a
share price in the $12,000 range.

John Levine, johnl@iecc.cambridge.ma.us, {spdcc|ima|world}!iecc!johnl


Re: Stock price too high? (Wittenberg, RISKS-14.06)

Randall Davis <davis@ai.mit.edu>
Thu, 19 Nov 92 18:13:42 est
[...] Then let's tell everyone the solution so they can avoid this mistake in
the future.  How many bits should the internal representation be?

Seems like all you have to do to answer is predict the future.

Some years ago the most expensive stock on the NY exchange was Superior Oil,
which sold for around $800/share.  Clearly an order of magnitude bigger than
that should be plenty, right?  And it would have been, for about 20 or so years.
Now, two decades later, you can point out how terribly nearsighted that design
decision was.

OK, let's fix it once and for all: let's make it two orders of magnitude
bigger than Berkshire is now; 7 digits should work.

Uh oh, what about the growing international market?  What if the NY market
starts providing quotes on Japanese stocks in yen and Italian stocks in lira
(or quoting US stocks like Berkshire in lira)?  So maybe we do need a few more
orders of magnitude.  Ok, let's use 11 digits; surely there won't be a currency
with more than 10,000 units to the dollar.

Ooops, here comes Eastern Europe with 15,000 Polish Zloty to the dollar.

Ok, so maybe we can figure no currency will have more than 100,000 units to
the dollar... and then someone gets hit with hyperinflation....

The point should be clear: hindsight provides perfect vision for criticizing
design decisions.  That's the easy part.  The difficult part is making design
decisions now, attempting to design a system for now and the future.
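
A minimal sketch of the kind of failure being described (the 4-digit dollar
field below is a hypothetical record layout chosen purely for illustration,
not a description of any exchange's actual system):

    # Hypothetical legacy quote record with a fixed 4-digit dollar field.
    def format_price(dollars):
        text = str(dollars)
        if len(text) > 4:
            raise ValueError("price %d does not fit in a 4-digit field" % dollars)
        return text.rjust(4)

    print(format_price(800))     # fine in the Superior Oil era
    print(format_price(10000))   # ValueError: the Berkshire-Hathaway case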
