The RISKS Digest
Volume 12 Issue 28

Monday, 9th September 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

FAA on 767 thrust reversers
PGN
Inmate, working for TWA, steals credit card numbers
Rodney Hoffman
Re: Salomon Brothers — Database Design
William Dye
Fax machine IDs
Robert Morris
Re: Unusual characters in addresses
Bob Frankston
Failsafe mode for 3.5" Floppies
Don Phillips
Re: The RISKS of Superiority
John Hobson
Re: A Danger ... with Intelligent Terminals
Randolph Bentson
Risk assessment: a specific experience
Mark Fulk
Re: Risk Perception
Geoff Kuenning
Chuck via Phil Agre
David Chase
Dan Drake
Craig Partridge
William P Gardner
Phil Agre
Fred Heutte
Info on RISKS (comp.risks)

FAA on 767 thrust reversers

"Peter G. Neumann" <neumann@csl.sri.com>
Mon, 9 Sep 91 10:23:36 PDT
Today's New York Times notes that the Federal Aviation Administration is
expected this week to require changes to the design of the engine thrust
reversers on some Boeing 767s, based on computer simulations at Boeing that
indicate "the accidental activation of thrust reversers could be a far worse
problem than previously believed."  That is, the plane may not be
aerodynamically controllable, despite previous thinking.  The cause of the
crash in Thailand is still unknown, however.


Inmate, working for TWA, steals credit card numbers

Rodney Hoffman <Hoffman.El_Segundo@Xerox.com>
Mon, 9 Sep 1991 07:58:58 PDT
Writing in the September 8, 1991 `Los Angeles Times' (page A3), Mack Reed
reports that Carl Simmons, a 20-year-old California Youth Authority inmate,
working as a TWA telephone reservation agent, stole dozens of customer credit
card numbers and used them for thousands of dollars of personal charges.  He is
now serving two years in state prison for the thefts.

TWA has used CYA inmates in a special program since 1986.  The story says the
program "has been touted as a way to help young criminals learn a trade and
repay their debt to society.  It has raised more than $500,000 for victims'
restitution and the cost of incarceration.  And the program's 213 graduates,
many of whom now work at airlines and travel agencies, are one-tenth as likely
to commit new crimes as nongraduates, CYA officials said."  [Sure makes ME feel
secure about making airline reservations!]

CYA has tightened security, including more frequent searching of rooms and
occasional strip-searches.  Inmates have always been forbidden from taking pen
and paper into the computer room, and now not even instruction manuals can be
taken out.  But Simmons and another inmate said that won't stop inmates from
stealing card numbers or illegally charging airline tickets.

Fred Mills of the CYA says, "There's always going to be an exception, but 99.9
times out of a hundred in a program you're not going to get that.  For every
person we can keep out of the institution for a year, that's saving the state
about $31,000.  That's the thing we have to look at and balance."

One victim, New Hampshire businessman Phillip Parker, said, "I don't want to
begrudge someone a chance to make it back into a productive life, but giving
them a chance where there's a significant amount of potential for financial
fraud or risk — maybe there's other things that would make more sense."

TWA says it will now re-evaluate the program.


Re: Salomon Brothers — Database Design (RISKS-12.24)

William Dye <wdye@cse.unl.edu>
Mon, 9 Sep 1991 18:14:41 GMT
Jeff Berkowitz writes:

>It is incredible to me how we have moved away from the concept of individual
>responsibility and toward reliance on various societal "mommies and daddies" to
>watch over behavior.

Bravo.  The database programmers made a mistake.  The Salomon traders committed
a crime.


Fax machine IDs

Robert Morris <ram@cs.umb.edu>
Sun, 8 Sep 91 20:10:04 EDT
Recently I faxed highly confidential information to a bank.  Following their
instructions, I telephoned their switchboard and asked for the extension with
the fax machine on it, then connected my fax and sent my material. I telephoned
a few minutes later to verify that my material had arrived and it had. The
arrangement was slightly annoying because my "manual" long distance call had to
wait on hold for several minutes (at my expense) waiting for the fax to become
free. Shortly afterwards, I needed to send additional material to the same fax
machine. Thinking myself quite clever, I simply telephoned the number with
which the first fax identified itself (it was in the right area code and
central office, so I assumed it was really the same machine). My machine
connected to a fax at that number and my new material was transmitted. But it
was never received at the bank! The bank's fax was identifying itself with the
number of another machine (the fax machine vendor who delivered the machine
configured for testing? a fraudulent information thief? I have no idea, but in
retrospect I can see that one machine was identifying itself with the number
"aaabbbccccc" and the other "aaa bbb cccc").

Fortunately for me, my second transmission was not sensitive information. It's
also true that I did not follow the bank's instructions in sending the second
fax.  But in any case, as with "automatic replies" to email, it is clear that
a fax sender is at risk when dialing the telephone number with which a machine
identifies itself.  And the owner of a fax machine might potentially be liable
for the consequences of that machine mis-identifying itself.

By the way, another small risk became evident in this case. In order not to
bill the owner of the sending fax, I made the call using the local access
number of my long distance calling card. The fax audit report produced by the
Canon fax machine reflected the answering (long distance) fax machine number,
not the local number I actually called.  That number will not appear on the
telephone bill of the originating fax, and it may be difficult to reconcile the
fax audit trail with the telephone bill.
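
Reconciling such logs means reducing every number to a canonical digit string
before comparison.  A minimal sketch of that step (my illustration in Python,
with made-up numbers, not anything a fax machine or billing system actually
runs):

    import re

    def canonical(number):
        """Strip everything but digits from a phone number string."""
        return re.sub(r"\D", "", number)

    # "aaa bbb cccc" and "aaabbbcccc" canonicalize identically, so any
    # mismatch that survives this step means the remote machine is claiming
    # to be a genuinely different number.
    dialed      = "617 555 0123"    # the number actually called
    reported_id = "6175550199"      # the ID the answering fax announced

    if canonical(dialed) != canonical(reported_id):
        print("warning: remote fax identifies itself as another number")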

   [Nice opportunity for scams with call forwarding, user settable
   identifications, hidden twin-tailed conferencing, etc.  PGN]


Re: Unusual characters in addresses (Re: RISKS-12.26)

<frankston!Bob_Frankston@world.std.com>
8 Sep 1991 22:37 -0400
The issue of spaces in a name deserves a long (and thoughtful) response because
it gets into serious issues of representation.  But I'll be (relatively) brief.
One cause is the accidental sharing of command line parsers (in particular, the
Unix one) because they work "well enough".  In common programming languages the
values are stored within variables and the names are the handles for the
variables.  In macro languages there isn't a distinct separation between the
handles and the values.  One is supposed to get around this problem using
quoting, but multilayer quoting combined with expanded character sets wreaks
havoc on this approach.  Especially when the simple solution worked for the
first few years.
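
For instance, here is the failure in miniature (a sketch using Python's
shell-style tokenizer as a stand-in for the Unix command-line parser):

    import shlex

    # A space-containing address handed to a shell-style parser splits in two:
    print(shlex.split('mail John Smith@host'))    # ['mail', 'John', 'Smith@host']

    # One layer of quoting repairs it:
    print(shlex.split('mail "John Smith@host"'))  # ['mail', 'John Smith@host']

    # But each pass through another parser needs its own layer of quoting,
    # and the layers compound rapidly:
    once  = shlex.quote("John Smith@host")   # 'John Smith@host'
    twice = shlex.quote(once)                # ''"'"'John Smith@host'"'"''
    print(once, twice)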

More to the point, the ability to build a system out of text streams is a very
powerful construction technique and eliminates the need for "professional
maintenance".  The consequences is that when the systems break there is no one
to fix them.  The fact that this systems are cobbled together and the
components "not aware" of their context also means that failures are not
diagnosed.  (Email to the bit bucket)

Instead, the problems are solved by layering on additional workarounds such as
"sendmail", "_", and "%!".  These actually exacerbate the problem by dulling
the acuteness of the pain and thus forestalling solutions.

Now try pushing Unicode addresses through the usenet mail network!  CCITT is
no better, unless you view 10 years for changes as quick turnaround.

To be fair, try extending the North American Number Plan phone numbering
scheme.  You can, but it will take 40 years.

Welcome to the world of ad hoc solutions that allow us to be fleet of foot
until we stumble, and of standards that allow us to run fast as long as we
don't try to turn.

Back to the Story of O: it isn't just naive programmers, but also those who
are trying to be helpful by adding "smart" heuristics.


Failsafe mode for 3.5" Floppies

Don Phillips <don@blkhole.resun.com>
Mon, 9 Sep 1991 00:00 PDT
Recently, in another newsgroup, there was a plea for help from somebody whose
floppy drive was writing on write-protected floppies!  After thinking about
the use of opto-electronic sensing mechanisms for write-protect detection, it
seems to me that having the open position of the plastic tab signify
"protected" is backwards from a fail-safe point of view.  If dust prevents
sensing the position, or the detector or light source fails, the drive will
incorrectly assume that the disk should be writable.  In the days of the
5 1/4" diskettes, the sense was reversed: an open notch implied writable, a
closed one implied protected.
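
The asymmetry is easy to state in code.  A sketch of the two sensing
conventions (my illustration in Python; real drives do this in hardware):

    # 3.5" convention: light passing through the open hole means PROTECTED.
    def writable_3_5_inch(light_detected):
        # Dust or a dead LED also yields "no light", so a failed sensor
        # silently reports the disk as writable: the unsafe default.
        return not light_detected

    # 5.25" convention: light passing through the open notch means WRITABLE.
    def writable_5_25_inch(light_detected):
        # Here the same failures yield "no light" and hence "protected",
        # which is the fail-safe direction.
        return light_detected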

Don Phillips, Research Unlimited, Escondido, Calif.     ...!ncr-sd!blkhole!don


Re: The RISKS of Superiority

<spl1!hcfeams!hobs@dale.cts.com>
Thu, 5 Sep 91 14:48:57 GMT
I can give you a specific example of the problems of rushing a weapons system
into combat before it is completely tested.  It also gives a good example of
how political considerations can screw things up.

In the mid to late '60s, I was an infantry officer in the US Army.  At the time
that I went into the army, the standard infantry rifle was the M14.  The M14
was developed by taking a successful rifle, the M1 Garand, and saying "how can
we improve this?"  They made it lighter, increased the size of the magazine from
8 to 20 rounds, made reloading simpler and quicker, and made the gas system more
robust.  There were only two major flaws — it could only be fired
semi-automatically (well, there was the M14A1 which was capable of fully
automatic fire, but they were few and far between), and the stock was made for
people with long arms (at 5'7", with a sleeve length of 32", the stock was
about 3" too long for me).

Well, simultaneously, the US was involved in supplying the Army of the Republic
of Viet Nam (ARVN) with weapons.  (I'm sure you remember the Viet Nam War, it
was in all the papers :-)).  Unfortunately, the average height of Vietnamese
men is about 5'5", so that if the stock of the M14 was too long for me, consider
what it must have been like for them.  The ARVN wanted another rifle, something
a bit smaller.

Some years previously, a Mr. Stoner, working for the Armalite company, developed
a rifle called the AR-15.  This was shorter and lighter than the M14, fired a
5.56mm round (.223 in), as opposed to the 7.62mm (.308 in) round of the M14,
and could be fired either semi-automatically or fully automatically.  The US Air
Force got a few of these for use by its Air Police.  When the ARVN saw these,
they said "We want these rifles."

So, some were given to the US Army to test — I personally was not involved in
the testing and evaluation.  The Combat Arms Development Board has traditionally
suffered from a bad case of the NIH disease — Not Invented Here.  They saw the
AR-15 as a weapon that had been wished on them by others and, even worse, used
by the AIR FORCE!  Also, the Army Special Forces (Green Berets) loved the
AR-15, and the Green Beanies (the polite nickname) were despised by most of the
Army establishment (while I was an Airborne Ranger, I was never a snake eater
(the less polite nickname)).  Thus, while it was obvious that the AR-15 was
going to be accepted in some form or other by the Army, it had a number of
strikes against it.  So, the CADB OKed the rifle, but put in a long list of
changes that it wanted made.  These changes were made, over the strenuous
objections of Stoner, and the rifle came out as the M16 and was immediately
rushed into combat.

Let me tell you of my first experience of the M16 under combat conditions.  My
platoon was dropped off our helicopters into an ongoing firefight — what was
called a "hot LZ".  It was a hot, dry day, and the helicopters were kicking up a
lot of dust.  As soon as I hit the ground, I started firing.  I got off one
burst, then my rifle jammed.

You see, right next to where the bolt and the end of the barrel join, there is
the ejection port, a hole in the side of the rifle through which the spent
cartridges are thrown.  In most rifles there is a small gap between the bolt
and the end of the barrel, called "headspace".  The M16 does not have
headspace; rather, it has lugs on the end of the bolt which fit into matching
lugs on the barrel.  Some of the dust and dirt kicked up by the helicopters
had gotten in through the ejection port and between the lugs on the bolt and
the barrel, and the bolt could not close.

Fortunately, I survived the experience, but I was no fan of the M16.  Within a
few years an improved version — the M16A1 — which was much closer to Stoner's
original design, came out.  But we who had to use the original M16 had to keep
it scrupulously clean, not always the easiest thing to do in a combat
situation.  And the fault lay with the good old boys of the CADB, who never
took the rifle outside of Georgia and who insisted on all sorts of design
changes, apparently more to gratify their own egos than to meet any real
combat requirement.

John Hobson, Ameritech Services, 225 W Randolph  HQ 17B, Chicago, IL 60606
312-727-3490                                        hobs@hcfeams.chi.il.us


Re: A Danger ... with Intelligent Terminals (Stachour, RISKS-12.23)

Randolph Bentson <bentson@grieg.UCAR.EDU>
4 Sep 91 04:29:28 GMT
The good products are usually (and rightly) more expensive.  Sometimes that
price inhibits their selection.  Good features that are seen as unneeded can't
overcome this.

Any risk assessment must weight possible damage by likelihood, but the list of
failures can never be complete.  Initial approximations are often focused on
the "more likely" failures.  At some point, the effort to get good numbers
becomes prohibitive: "disasters" are assigned zero likelihood, or their costs
aren't seriously investigated.  As a consequence, only the cost of "normal
use" of the system enters further price/performance comparisons.

I worked for a time-share/computer service firm in the mid-1970's.  At that
time we were evaluating a Multics system for use by a client currently using a
DECsystem-10.  The '10 could support 50 concurrent users with about 10-15
running jobs.  Our purchasing agent fell out of his chair when he heard that
the comparably priced Multics system could support only six users.

While I appreciate the Multics system (and believe Unix will one day recover
most of the Multics features it lost), there is often a cost associated with
these features.  In our case, the cost of "doing it right" was simply too
high.  Another DECsystem-10, running Tenex, was added to our machine room.

Before one counters with lists of things wrong with Tenex, remember
that Multics _could_ have had comparable undiscovered failures and
there was no good way to determine the likelihood of their discovery.

    Randolph Bentson                 Colorado State University
    bentson@grieg.CS.ColoState.Edu   Computer Science Department
    303/491-5792                     Ft. Collins, CO 80523


Risk assessment: a specific experience

<fulk@cs.rochester.edu>
Thu, 05 Sep 91 09:06:28 -0400
The Maternal Serum Alpha-fetoprotein (MSAFP) test is administered to pregnant
women in order to screen for a broad range of congenital defects of the fetus.
It is primarily useful against neural tube defects (spina bifida,
hydrocephaly), secondarily against Down's syndrome.  When my son was on the way
three years ago, our doctor suggested we have the test done.

Unfortunately, the MSAFP has a fairly high false positive rate, about 10%.
(It has a higher false negative rate, but that is not especially germane
here.)  A positive result, false or not, tends to be repeated on retest.
The response is amniocentesis, which has about a 1% probability of inducing
an abortion.  The probability of a 29-year-old woman having a child with
Down's or a neural tube defect is quite a bit less than 1 in 10000.

It was very hard for us to assess whether or not we wanted the test.
Certainly, if we didn't intend to have amniocentesis for a positive, we
shouldn't bother with the MSAFP.  Since it was hard to sort things out, I
decided to do some utility calculations, which clearly indicated that the MSAFP
was a loser for us.  This was because of the .1% or so probability that we
would have a false positive, have amniocentesis, and lose the baby to an
induced abortion.  (We wanted the baby, very much.)  That expected negative
utility easily outweighed the expected negative utility of having a baby with a
problem, especially since the MSAFP's false negative rate was so high.  The
result was quite insensitive to the numbers I used, within a factor of 10 or
so, because the differences were so large.
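
For concreteness, here is the shape of that calculation with the numbers
given above (a reconstruction in Python, not the original calculation; the
utilities attached to each outcome are the part every family must supply for
itself):

    # Probabilities as given in the post (approximate)
    p_false_pos  = 0.10     # MSAFP false positive rate
    p_amnio_loss = 0.01     # chance amniocentesis induces an abortion
    p_defect     = 1e-4     # generous upper bound at age 29

    # Chance the testing pathway costs a healthy, wanted baby:
    p_lose_healthy = p_false_pos * p_amnio_loss    # 0.001, the ".1% or so"

    # Chance the test catches a real problem, even crediting it with perfect
    # sensitivity (it is worse than that; the false negative rate is high):
    p_catch = p_defect                             # at most 0.0001

    # Weighting both outcomes as comparably bad, testing loses by an order of
    # magnitude, which is why the result was insensitive to the exact numbers.
    print(p_lose_healthy, p_catch)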

So why did the doctor suggest the test?  Why were hospitals and doctors'
offices full of ads for it at the time?

The answer is simple: they, and perhaps society at large, considered induced
abortions as essentially neutral, and did not assign them the large negative
utility that we did.  Of course, they didn't say that in their literature, but
it was not hard to figure out.  I called a genetic counsellor at the hospital
and asked about this.  She was dumbfounded that I had even done the
calculation, and a brief conversation quickly confirmed my explanation.

The point is this: risk assessment depends not only on probabilities, but on
the perceived utilities of various outcomes.  A risk assessment by someone who
doesn't care about spotted owls won't impress a member of Earth First!, simply
because they have different values.  This point is often ignored in risk
assessments!  Nearly all of the published assessments I've seen assume that
everyone shares the same utilities for various outcomes.  The above example is
meant to illustrate the pervasiveness of this phenomenon.
                                                                Mark Fulk


Re: Risk perception (RISKS-12.24)

Geoff Kuenning <desint!geoff@uunet.UU.NET>
Sat, 31 Aug 91 17:06:42 PDT
pagre@weber.ucsd.edu (Phil Agre) writes:

> I tend to be suspicious about any theory that treats ordinary people
> as irrational ...

In a world where tabloids like the National Enquirer have a larger circulation
than any "serious" newspaper, I find this suspicion surprising.  Remember that,
by definition, half of the population is of below-median intelligence, and that
even intelligent people are often irrational.  To a first approximation, I'd
take such a theory as true.

My own observation is that many people only believe in risks they have
personally experienced.  A college friend only started wearing his seat belt
after he was thrown to the floor in a minor accident, even though he knew all
of the equations involving inertia and friction.  My firefighting friends are
nuts about fire safety, in sharp contrast to others who have no personal
contact with fires.

Although Mr. Agre's comments about public suspicion of large organizations are
well-considered and valid, I think that this relatively recent phenomenon is
merely an aggravating factor.  In the not-so-early days of nuclear power,
utilities had to fight public association of the word "nuclear" with bombs.  It
is now generally known that, while nuclear plants pose many serious risks,
massive explosions are not high on the list.

My favorite example of public misperception of risks is magnetic
resonance imaging, MRI, which was formerly called nuclear magnetic
resonance, NMR.  The name had to be changed to keep patients from
getting nervous about the word "nuclear."  Yet many of those same
patients will happily sit down in a dentist's chair and don a lead
apron for a full-mouth X-ray, without giving a moment's thought to the
possible negative effects of the radiation dose on their brain.

So yes, I tend to believe a theory that treats ordinary people as irrational.
All of us are, at least occasionally.

    Geoff Kuenning   geoff@ITcorp.com   uunet!desint!geoff


Corporate vs. individual risk perception (Agre, RISKS-12.24)

<paussav@sc.lafb.af.mil>
Thu, 05 Sep 91 10:08:00
Phil, I read your recent post to RISKS re: risk assessment etc. etc.  This is
an area that has bothered me for some time. I came up with the following
formula you might be interested in:

Corporate perception:

   Anxiety of the individual  <>  actuarial table statistics
       ==> the individual is irrational
       ==> fears of the individual can be discounted or negated through
           education

Individual perception:

   Anxiety toward risk  ==  inconvenience to the individual of negating
   the risk
       ==> the individual is rational
       ==> it is necessary for the corporation to come to an agreement with
           the individual on how to lessen the risk or lessen the
           inconvenience

Example where the customer is not the complaining party:

(Let's say) a nuclear power plant.  The actuarial risk of a nuclear meltdown or
serious release of radiation is very low (just ask your local power provider.)
But the inconvenience caused to the individual attempting to avoid that
possibility is very high. Plot all nuclear power plants on a map and then move
200 miles away from any plant and 50 miles from the air and water contamination
vectors.  You end up outside most of the U.S.  (The customer here is the
utility regulators, not the citizen.)

Solution: The utilities and government ignore complaints and attempt to educate
the public as to the real risks of a meltdown.

Example where the customer is the complaining party:

Alar-contaminated apples.  The inconvenience is that you can't eat any apples:
there is no way to tell whether an apple is contaminated by looking at it.

Solution: Growers stopped using it, regulations were passed etc. etc.

Note that corporations formulate their response based on actuarial risk. If the
person complaining does not affect the corporation's bottom line then that
person will be ignored. If the complainer can act to reduce the corporation's
profit then those concerns are accommodated.  (Accommodation vs. Ignoration.)

That is because, IMHO, decision makers rarely have the technical knowledge to
rationally evaluate technical risks, but they do have the knowledge to evaluate
monetary risks.

Chuck

   [I think that something like what you say is right.  The puzzle is how to
   make something so technically complicated into a more participatory
   activity, so that people can know what risks they're getting into and so
   forth, and so that they're freely chosen risks and not things that descend
   from the heavens with actuarial labels on them.  Phil]


Re: Risk assessment high priesthood

David Chase <David.Chase@eng.sun.com>
Wed, 4 Sep 91 18:19:20 PDT
I think there are some simpler explanations for apparent public distrust of
"risk assessment".

First, sometimes the claims are misleading in that they confuse the average
with the individual.

Second, in those cases where a constant low rate of deaths is compared to
occasional catastrophe, note that the constant stream of deaths provides data
against which the risk assessors can be checked, and provides additional
information that people can use to reduce their own risk of death.  People
don't compute the crash-safety of new automobiles (well, I'm sure somebody
does at some early stage); they run them into walls to see what happens.

As an example of individuals and averages, consider the safety of driving to
the airport versus flying in the airplane.  Airplane crash statistics are
fairly generic, and thus here the average makes some sense (some airports are
more dangerous than others, of course, but we can get that data).  However,
auto death/injury statistics are not.  Lumped into the average are those people
driving drunk, those people driving sleepily, tailgaters, lanehoppers, seatbelt
non-users, and Single Young Men Under the Age of 25.  I'm none of those people,
yet I have no doubt that someone advising me on the risks of driving would
quote a figure based on a sample that included them.
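
A toy version of that arithmetic (in Python, with invented rates, purely
illustrative):

    # Hypothetical annual fatality rates per 100,000 drivers
    groups = [
        ("high-risk drivers (drunk, sleepy, young, unbelted...)", 0.15, 60.0),
        ("everyone else",                                         0.85,  8.0),
    ]

    pooled = sum(share * rate for _, share, rate in groups)
    print(pooled)         # 15.8, the average an advisor would quote
    print(groups[1][2])   #  8.0, the rate relevant to a careful driver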
                                                         David Chase, Sun


Risk research & risk takers: the aphorism (Kerns, RISKS-12.24)

Dan Drake <autodesk!gilroy!drake@fernwood.mpk.ca.us>
Thu, 5 Sep 91 09:57:06 PDT
Robert W. Kerns <rwk@Crl.dec.com> lists among the characteristics that
make people risk-averse,

  *  Low amount of individual control over individual risk factors.

The importance of this point cannot be exaggerated.  And the
risk-assessment-as-PR people avoid exaggerating it by consistently and
completely ignoring it.

The last word on the subject of the Mobil attitude of "We risk our money, the
world's whales, and your lives" was spoken in the 1930's by the woman who swept
David Low's office.  As he quoted it in a notable cartoon:

   "The trouble with that Mussolini is that he not only bets his shirt, he
   bets everyone else's, too."


Re: Risk Assessment High Priesthood (Kerns, RISKS-12.24)

Craig Partridge <craig@sics.se>
Thu, 5 Sep 1991 09:22:17 GMT
> ...  Whenever the public at large doesn't agree with this, "public reaction"
> is labeled as being irrational....

Adding to this comment, I'd point out that deaths are not the only metric by
which to measure risk.  For example, I'm currently living in Sweden, which has
a public-access law that, among other things, permits anyone to pick wild
berries and mushrooms on anyone's property (I'm slightly simplifying the
rules).  One effect of Chernobyl, even now, is that if you pick some types of
mushrooms, you have to take them to a testing center to check their cesium
levels.  Quality-of-life issues like these also matter (one might also talk
with the Lapps about the effects of radiation on their reindeer herds and
their economic livelihood).
                                         Craig Partridge


Re: `Risk perception'

"William P Gardner" <wpg1@unix.cis.pitt.edu>
Fri, 6 Sep 91 7:49:00 EDT
Phil Agre's (pagre@weber.ucsd.edu) rejoinder (RISKS-12.24) to my posting
(RISKS-12.22) has three sections.  First, he gracefully qualifies his previous
posting (RISKS-12.21) and makes it clear that he was not suggesting that risk
perception researchers work in bad faith. Next he attributes a belief to me --
``WG's message argues in effect that we can judge `risk' research in isolation
from its social context'' — that I did not state and do not hold.  Finally,
Agre discusses an example that I proposed concerning risk perception and sexual
risk taking among gay adolescents. The example showed how concepts from the
risk perception literature are used by public health scientists doing AIDS
prevention research, as opposed to the pernicious uses that Agre discussed in
his posting and I discussed in mine.

Agre's comments, however, suggest that this research is also pernicious:
``About 1985 the gay community decided that it was not going to wait around
while people with generalized expertise about `risk' and the like designed
studies `that can powerfully discriminate among many competing plausible
explanations' all of them founded in ignorance and likely to be wrong.'' There
is a lot of invective here and a claim that Agre has knowledge not shared by
AIDS prevention researchers.  A couple of sentences later, there is also a
significant risk.

The risk is premature declaration of victory in the effort to prevent HIV
infection. Agre says that this campaign has been ``highly successful'', but
what terrifies a lot of us is that it isn't clear, from the data, whether this
is true.  Recent reviews of the epidemiological and behavioral studies — Phil,
you did read this literature before you disparaged it, right? — show that
there have always been groups of men who have not changed their behavior,
others who have relapsed, and little evidence that AIDS education works with
young men.  The lesson I derive, one that may be relevant to many other risks,
is that both AIDS prevention efforts and the empirical study of their efficacy
must be perpetual.

William Gardner, Law & Psychiatry Research, Department of Psychiatry
University of Pittsburgh School of Medicine (wpg1@unix.cis.pitt.edu)


Re: risk perception (Gardner, this issue)

Phil Agre <pagre@weber.ucsd.edu>
Mon, 9 Sep 91 10:27:22 pdt
I do have some familiarity with the literature on AIDS `risk perception',
though I am not an expert.  My point is not that existing education programs
solve all problems, but that the process by which gay community activists
have developed education programs in the past is a good model for future work.
People should, wherever possible, be studied by their own, using concepts
geared for their particular complicated situation, and not generic concepts
like `risk perception' which only support very crude generalizations.  My
language was no doubt unduly harsh in arguing this view, but the point is
terribly important.
                                   Phil Agre, UCSD


Re: Risk Perception

Fred Heutte <well!phred@well.sf.ca.us>
1 Sep 91 02:21:17 GMT
The research findings referred to by LA Times writer Janny Scott are
valid but hardly 'recent.'  Similar findings were made by Decision Research
(a Eugene, Oregon research firm) and others who did risk assessment
studies for the AEC/NRC and nuclear utilities in the mid-1970s.

See, for example, "Report to the US Nuclear Regulatory Commission,"
Risk Assessment Review Group, NUREG/CR-0400, 1978.
