The RISKS Digest
Volume 12 Issue 3

Monday, 8th July 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Alcor/Email suit pays off!
Henson
Computer based estimation of mortality
Richard I. Cook
On finding a coding bug in the Time Server Daemon
Martin Minow
Animated hieroglyphics on telco operators' terminals
Dan Jacobson
Dutch Phreaks and Chaos Congress 90
Klaus Brunnstein
Risks Forum and Vulnerability
Klaus Brunnstein
Re: Global warming: Not so funny.
Victor Yodaiken
Re: "On the Danger of Simple Answers"
Chuck Karish
The advantages of posting to RISKS
Brian Tompsett
Info on RISKS (comp.risks)

Alcor/Email suit pays off!

<hkhenson@cup.portal.com>
Sat, 6 Jul 91 13:23:56 PDT
The long-running Alcor/email case against the County and City of Riverside, CA
was settled out of court in April of this year.  The announcement was delayed
until all parties had signed off, and the check had cleared the bank :-).

The Alcor Life Extension Foundation (a non-profit cryonics organization
--alcor@cup.portal.com) ran a BBS for members and prospective members from
early 1987 through January 12, 1988.  On that day, the BBS computer was
removed under a warrant that named the computer (but made no mention of any
email it contained), in connection with the investigation into the death of
83-year-old Dora Kent.
(Mrs. Kent was placed into cryonic suspension by Alcor in December of 1987.
During and following the investigation, Alcor staff members were publicly
accused by county officials of murder, theft, and building code violations.  No
charges were ever filed and the investigation was officially closed three years
later.)

In December of 1988 Keith Henson filed a civil suit to force the FBI to
investigate the apparent violations of the Electronic Communications Privacy
Act, but the case was dismissed by the since-convicted Judge Aguilar.

In early 1990, just before the statute of limitations ran out, Henson and
14 others (of the roughly 50 people who had email on the system) filed a
civil action against a number of officials and the County and City of
Riverside, CA, under Section 2707 of the Electronic Communications Privacy
Act, which forbids inspecting or denying access to email without a warrant.

Some time after the case was filed, the Electronic Frontier Foundation came
into existence in response to law enforcement abuses involving a wide spectrum
of the online community.  EFF considered this case an important one, and helped
the plaintiffs in the case by locating pro bono legal help.  While the case was
being transferred, the County and City offered a settlement which was close to
the maximum damages which could have been obtained at trial.  Although no
precedent was set because the case did not go to trial, considerable legal
research has been done, and one judgment issued in response to the Defendants'
Motion to Dismiss.  The legal filings and the responses they generated from the
law firm representing the County/City and officials are available by email from
mnemonic@eff.org or (with delay) from hkhenson@cup.portal.com.  (They are also
posted on Portal.)

The Plaintiffs were represented by Christopher Ashworth of Garfield, Tepper,
Ashworth and Epstein in Los Angeles (408-277-1981).  The only significant item
in the settlement agreement was the $30k payment to the plaintiffs.


Computer based estimation of mortality

<cook@csel4.eng.ohio-state.edu>
Mon, 8 Jul 91 19:52:18 EDT
The 12 June 1991 issue of JAMA (the Journal of the American Medical
Association) contains an important article and editorial.

Article: Blumberg Mark S (1991).  Biased Estimates of Expected Acute Myocardial
Infarction Mortality Using MedisGroups Admission Severity Groups.  JAMA 265
(22): 2965-2970.

Editorial: Iezzoni Lisa I (1991).  'Black Box' Medical Information Systems: A
Technology Needing Assessment.  JAMA 265 (22): 3006-3007.

Blumberg evaluated the actual mortality in a group of patients and compared
those values with the predictions of a standardized computer model which is
mandated for use in stratifying Medicare-aged patients with acute myocardial
infarction (AMI, i.e. heart attack) in Pennsylvania and (with modifications) in
other states.  The stratification of patients is important for evaluating the
quality of care.  If your hospital sees mostly older patients with multiple
chronic diseases, your mortality associated with AMI will be greater than that
found in a hospital which treats generally healthier patients.  The use of a
complex model to adjust for the differences in patient populations between
hospitals should, in an abstract sense, leave only those differences which
relate to quality of care factors specific to an individual hospital and allow
people (mainly health care administration and hospital accreditation bureaus)
to rate the hospitals for quality.  In these days of increasing health care
regulation, attempts to define quality by creating models which normalize
mortality (or some other measure) as a means of comparing hospitals (or
doctors, surgical procedures, methods of treatment, etc.) are being introduced
more and more often.

The system Blumberg tested depends on the evaluation of multiple key clinical
factors (KCFs) which are coded by technicians from the medical record and run
through a proprietary computer program resulting in assignment of the patient
to a risk class.  Note that the system is far too complex to permit any
individual to assess the correctness of its predictions.  In addition, the
system is proprietary, and so the algorithm and coding scheme are not generally
available.  [Even if it were, it is doubtful that anyone outside the
corporation that developed it would be able to comprehend it.]
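
To make the mechanism concrete, here is a minimal sketch (in C) of the kind
of computation such a system performs: assign each patient to a severity
class, derive the hospital's expected deaths from per-class mortality rates,
and compare with the deaths actually observed.  Every class, rate, and count
below is invented for illustration; this is not the MedisGroups algorithm,
which is proprietary and vastly larger.

    #include <stdio.h>

    #define NCLASSES 4

    /* invented expected AMI mortality rate for each severity class */
    static const double expected_rate[NCLASSES] = { 0.03, 0.08, 0.20, 0.45 };

    int main(void)
    {
        /* one hospital's (invented) case mix: AMI admissions per class */
        static const int admissions[NCLASSES] = { 120, 80, 40, 10 };
        int observed_deaths = 28;          /* invented figure */
        double expected = 0.0;
        int i;

        for (i = 0; i < NCLASSES; i++)
            expected += admissions[i] * expected_rate[i];

        /* A ratio near 1.0 suggests "as expected" care; Blumberg's point
         * is that bias in the classification step distorts this ratio
         * for reasons unrelated to quality of care. */
        printf("observed/expected mortality ratio: %.2f\n",
               observed_deaths / expected);
        return 0;
    }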

Blumberg found that the classification method is statistically biased and
concluded that the bias arose from a number of sources (e.g.  missing KCFs,
KCFs which can vary a great deal in a short time, failure to distinguish
missing from normal data, and incorrect weighting).  This bias has significant
implications for hospitals and physicians and (by extension) for patients.
Since the output of the system is used as a measure of quality, some hospitals
look better than others for reasons unrelated to the quality of care.  There is
an incentive to try to improve the 'numbers', i.e. to restructure care to make
the quality measure appear more attractive [although doing this is made
difficult by the fact that it is hard to determine what controls the evaluation
system's output].

In an accompanying editorial, Iezzoni notes that the system is an example of a
broad class of health care information product "about which very little is
known: the redoubtable 'black box'." She goes on to point out that these
systems "typically derive from fields about which users may have little
knowledge, such as health services research.  Many systems base their
determinations on complicated statistical formulas that are dependent on the
computational wizardry of today's powerful personal computers.  Hence, even if
complete technical information is available concerning the system's internal
algorithm, it often remains an enigma to those whom it affects...  Another
factor fostering the black box mystique is the proprietary nature of some of
these information products, and the consequent reluctance to reveal 'inner
workings' and perceived 'trade secrets'."  She then notes that there are no
applicable standards which require that these systems be safe and effective (in
a manner analogous to the requirements for new drugs) and observes that "[f]ew
independent evaluations have been performed, and little is known about their
validity for use in new policy initiatives, such as statewide hospital quality
assessment.  While most developers are at least rhetorically committed to
improving the health care provision system, this commitment is not enough,
given the tremendous responsibility vested in these systems.  Opening the black
box is only the first step.  We need to know that the information generated is
valid and used in a safe and effective way."

The article and editorial face squarely what has been dealt with in only a
peripheral manner before: whatever the _theoretical_ potential for producing
effective decision support systems using computer models to compare different
groups or organizations, the _practical_ effect of such systems is likely to be
quite problematic.  (1) Because the systems represent such a large computation,
there is no practical way in which users of the systems can understand their
output.  These systems are _oracular_, that is, their output stands apart from
their input in incomprehensible ways (just like the pronouncements from the
Oracle at Delphi: there was no conceivable way to know if they were
correct).  Some black box systems are comprehensible.  We use a number of
measuring devices whose internal workings are not easily understood (e.g.  the
pulse oximeter used to measure oxygen saturation in the operating room) but
these are mostly black boxes whose operation is meaningfully connected to the
observable (and observed) world and whose performance can be understood by
individuals.  When the black box reaches a certain size, or when its
performance is determined by lots of data or data over a long period of time,
its output necessarily becomes oracular.  The new TCAS aviation collision
avoidance system is likely to be a case in point.  (2) Because of the great
effort required to generate and integrate these systems (many man-years,
including
massive data collection) it is practically impossible to test them in a
meaningful way.  Blumberg's effort is one example, but he tested just one
medical diagnosis out of many supported by the system; it's hard to imagine
someone going on to test all the others.  (3) The systems depend on information
which is easy to acquire (e.g.  review of the patient's chart) but may be only
remotely related to the issues at hand (e.g. quality of care).  The systems
have a kind of self-qualifying nature: the choice of data and pragmatics of
acquisition and processing limit the validity of the conclusions in ways which
most of us will agree are unrealistic and misrepresent important features and
characteristics of the real underlying process, yet there are no easy ways to
improve the data.  (4) The systems are always potentially open to successive
refinement, but their size and complexity make such refinement unlikely.  The
makers of such systems will always admit that they are not perfect but that the
imperfections could be reduced through improvements or modifications.  In
reality, however, the system is more likely to remain largely fixed or to track
changes in the world only slowly.  (5) Paradoxically, these systems may obscure
the search for quality by providing an objective, computationally efficient,
quantitative metric which is related to quality in only a complicated and
remote way.  In developing "safety" or "quality assurance" systems, there is a
tendency to redefine the goal in terms of the system's output.

What are the implications of this trend in health care (and other domains')
information systems for professionals developing decision support systems?
When Iezzoni refers to "powerful personal computers" she surely means the
computers and those system designers, integrators, programmers, coders, and
administrators that make these complex systems possible.  What are the
responsibilities of designers of such decision support systems?  After all,
they are well situated to assess the impact of decision support tools on the
performance of the large system.  Do they perhaps have a responsibility to
refrain from creating a large complex information processing system which is
likely to have the qualities of the one investigated by Blumberg?

    Copyright (c) 1991 by Richard I. Cook


On finding a coding bug in the Time Server Daemon

Martin Minow 06-Jul-1991 1428 <minow@ranger.enet.dec.com>
Sat, 6 Jul 91 11:26:50 PDT
While reading through the source code to the Berkeley Time Server (which runs
in the background of a group of Unix workstations and keeps their system clocks
adjusted to "network average time"), I discovered an interesting code sequence
in the networkdelta function.  That function takes a set of time delay
measurements and computes the network average time change.  I.e., it is the
core of the time server algorithm:

    /* this piece of code is critical: DO NOT TOUCH IT */
    ...
        i++;
        if (i = j)
            j++;
    ...
Those of you familiar with C programming will recognize the classic error
(I make it frequently) of writing "i = j" (assignment) rather than "i == j"
(equality test) in an if statement. Both are legal in this context: "i = j"
meaning "assign the value of j to i and then test for inequality to zero."

Some reflections:

-- Burying an erroneous statement in a paragraph that says "don't touch"
   makes matters worse. I only found the bug when I went back to the
   30-year-old "Math 295" tools of pencil and paper and walked through the
   algorithm one statement at a time to see how it worked.
-- The error will only manifest itself if one or more systems is wildly
   out of agreement with the other systems being served by the time daemon.
   I.e., it is a classic "normal error" in that it is triggered by some
   other error and makes matters worse.
-- The error results in an incorrect calculation of the network average
   time, which will be corrected (if anyone notices it) when the entire
   network is re-synchronized to a standard clock (several time-standard
   clocks are readily accessible by dial-up modem).
-- If Berkeley (the copyright holder) didn't distribute source code,
   I wouldn't have found the error. Instead, I'd have written my own
   procedure, which would almost certainly have been a poorer algorithm.
-- Code reviews — having your software carefully reviewed by a competent
   outside consultant — are useful. (What is the computer engineering
   equivalent of a pathologist?)
-- Beware of language constructions, such as C's "if (i = j)", that are
   error prone. Having once tried to add a warning for this to a C compiler,
   I can attest to it being extremely difficult: you want to warn on
   "if (i = j)" but not on "if ((i = j) != 0)" (a sketch of the two forms
   follows this list).  Ultimately, I decided the cure (heuristics in the
   compiler) might be worse than the disease.
-- "Beware of language constructions" is a warning to the programmer, and
   one that belongs in a "Manual of Programming Style."  It is an engineering
   statement, not one of "Computer Science."  I.e., it is at a different
   level of discourse than "beware of bubble-sorts."
-- My university sent me to a remedial writing course because I couldn't
   spell or distinguish between "who" and "whom," not to mention "that"
   and "which" — should there be remedial programming courses, taught
   by writing teachers, that concentrate only on style?
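
As promised in the list above, a sketch of the two forms such a compiler
heuristic must distinguish (variables invented for illustration):

    #include <stdio.h>

    int main(void)
    {
        int i = 0, j = 7;

        if (i = j)                 /* suspicious: probably meant "=="    */
            printf("i = %d\n", i);

        i = 0;
        if ((i = j) != 0)          /* explicit: assignment is deliberate */
            printf("i = %d\n", i);

        return 0;
    }

Compilers that implement such a heuristic (e.g. gcc with -Wall) warn on the
first form and accept the second silently.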

It is possible to write quality bug-free software in error-prone languages
(just as it is possible to write poor software in languages that would prevent
"if (i = j)" errors).  However, I am beginning to suspect that this requires
the obsessive attention to detail of a contract lawyer, combined with the grace
of expression of an essayist.
                                                  Martin Minow


Animated hieroglyphics on telco operators' terminals

<Dan_Jacobson@ATT.COM>
Thu, 4 Jul 91 12:40 CDT
>>>>> In RISKS 12.02, 3 Jul 91 03:29:41 GMT, Fernando Pereira said:

F>   [A subsequent revised version of the AP story summarized above reports
F>   on speculation that the cause of the phone disruptions may be sabotage
F>   originating in the Middle East. The alleged reason for this is the claim
F>   that in most cases the network failures followed the appearance of
F>   animated hieroglyphics on operators' terminals.]

More likely, when the big crash came, an ESC ( } or ESC ( 0 or the like came
over the line (along with other accidental garbage characters); these are
common ways of turning on many terminals' "line drawing character set", etc.
Good thing the terminals didn't also have an alternative Russian character
set, or else there seems a slight RISK that we might be fighting World War III
now.
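
For the curious, a minimal demonstration of the effect on a VT100-family
terminal; the sequences below are the standard DEC character-set designators,
not anything recovered from the actual failures:

    #include <stdio.h>

    int main(void)
    {
        printf("\033(0");       /* ESC ( 0: designate DEC Special Graphics */
        printf("lqqqqk\n");     /* displays as box-drawing corners/lines   */
        printf("\033(B");       /* ESC ( B: restore US ASCII               */
        printf("lqqqqk\n");     /* now displays literally                  */
        return 0;
    }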


Dutch Phreaks and Chaos Congress 90

Klaus Brunnstein <brunnstein@rz.informatik.uni-hamburg.dbp.de>
5 Jul 91 14:08 +0100
At the Chaos Computer Club's most recent Congress (1990), a Dutch group and a
few other phreaks reported on some techniques for "travelling inexpensively on
international networks" (see my report in January 1991).  In contrast to its
usually detailed descriptions of the content of each session, CCC's electronic
Congress newspaper describes the reports and discussion only in general terms;
no details regarding frequencies or the computer programs (which have
meanwhile replaced the "blue boxes", and more flexibly) were given.

According to a report in the ("usually well-informed") German weekly magazine
Der SPIEGEL, the Dutch group HAC-TIC has now published a detailed report on
how to "use" special methods, dialing tones (with frequencies and sequences of
operation) and telephone numbers (in Germany: 0130) in diverse areas of the
world to establish toll-free phone connections via specific programs.  As the
magazine reports, HAC-TIC aims with its detailed description to undercut
people who sell such tone-dialing programs (e.g. for the AMIGA) for up to
1,000 DM (about $520 at current rates).

Comment: In discussions with CCC people about their surprisingly careful
publication behaviour (enough detail to warn of developments, but not enough
to directly aid an attack), I found some response to the international
discussion of CCC-related attacks; compared with CCC's behaviour in earlier
years (e.g. selling the NASA attack protocols for 100 DM), this restrictive
policy seemed quite honest.  Now that HAC-TIC has published details of the
seminar discussion, another debate may well arise within CCC as to whether
such a restrictive approach was well advised in CCC's spirit.

Klaus Brunnstein, University of Hamburg, FRG


Risks Forum and Vulnerability

Klaus Brunnstein <brunnstein@rz.informatik.uni-hamburg.dbp.de>
5 Jul 91 15:03 +0100
As an enthusiastic reader of the Risks Forum, I strongly appreciate the
information carried and the discussion, which is usually maintained at a high
level of competence.  Moreover, the diversity and timeliness of the topics
demonstrate that if the Risks Forum had not been invented by PGN, it would
have to be invented now.  So, RF is a success!

Unfortunately, in discussing critical developments, as well as in preparing
the "Diminishing Vulnerability of the Information Society" substream of the
IFIP 1992 World Conference (Madrid, September 7-11, 1992), I find it ever more
difficult to contribute essential information which I would earlier have sent
immediately.  One example: for some time we have been experimenting with how
to detect and prevent worm accidents; to this end, we had to analyze the
minimum requirements of worm mechanisms.  Our experiments in NETBIOS and
DECNET environments showed clearly that it will be *extremely difficult* to
prevent or even detect worms!  In this situation, I need to start a discussion
in a responsive and responsible community on how to proceed from here; such a
discussion would need more knowledge to assess the threat.

At this point, I learn from several discussions at universities and
conferences that the Risks Forum is now widely read and has even attracted the
interest of scenes such as the Chaos Computer Club, and of diverse BBSs read
by youngsters.  Not that I yet know of a single case where an actual
discussion in the Risks Forum has produced a new threat; but observing other
electronic news media, I find it highly desirable that a discussion of the
limits of electronic discussion of risks be undertaken.

My examples are from the anti-virus scene.  Recently a discussion started (in
Virus-L) on how viruses can propagate when commands such as DIR are invoked;
this technique is well established in Mac viruses but virtually unknown in the
MS-DOS arena.  What is the impact of such discussion among experts on the
virus authors?  Another example: when observing recent developments in new
threats, our analysis shows all too clearly that their authors know the
current discussion rather well; for example, the TEQUILA virus exhibits
several aspects of older, less widely distributed viruses which have been
described in detail in Virus-L.  Therefore, any constructive discussion in
publicly accessible electronic news media must bear in mind the risk of
transferring sensitive information to the wrong parties.

Evidently, insecurity and unsafety are an essential part of the message which
the Risks Forum communicates to the public.  This mission is an essential
contribution of the ACM committee behind RF (whose chairman Peter is), and as
chairman of IFIP TC-9 "Computers and Society", I strongly favour any such
discussion.  On the other side, responsible behaviour also requires asking
whether reports on computer-induced vulnerability, by giving too much detail,
generate new vulnerabilities, thus endangering the development of methods to
counter vulnerability.  I personally think that a responsible message must be
more complex than simply arguing "information technology produces
vulnerability; by eliminating IT, you immediately reduce vulnerability": such
a simple suggestion is ahistorical.

My suggestion for a Code of Discourse Ethics:
               1) Concerned experts should agree not to enhance vulnerability
                  through their discussion.
               2) Apart from questions of experience and basic paradigms,
                  aspects of prevention and countermeasures should be
                  discussed all the more.
               3) For critical technical details, successful electronic media
                  are not well suited; they may even be counterproductive.

To avoid any misunderstanding: the guarantee of the "free flow of information"
is one of the essential values of modern societies, especially IT-based ones.
But the "trust" which I assume in my communication partners cannot simply be
established via electronic media; trust (defined differently than in TCSEC
contexts!) is a personal relation which minimizes the risk of
misinterpretation and of unwished-for conclusions being drawn.  By its very
nature, trust is hard to establish via email!  The medium therefore limits its
own responsible use.  (Example: I personally just received Bill Gates's memo
on Microsoft's performance and future problems; highly interesting, no doubt,
but I assume that Bill Gates will not be glad that I have it.  I am quite sure
that the community through which I received this information is trustworthy,
and neither they nor I will disclose any details; but the mere fact that I, a
non-Microsoft employee, got it demonstrates the problem!)

If any of the highly respected active participants in this discussion feel
that it is inappropriate, I apologize for stimulating it.

Klaus Brunnstein, University of Hamburg, Germany


Re: global warming: Not so funny. (Slade, RISKS-12.02)

victor yodaiken <yodaiken%chelm@cs.umass.edu>
Wed, 3 Jul 91 15:39:55 -0400
>This kind of thinking is, unfortunately, all too common, even in the scientific
>community.  If I disagree with it, it must be wrong.  If it supports what I
>believe, it must be right.

I'm not sure of the relevance of this note to computing risks, but I am
offended by the use of a borderline dishonest debating trick in which one
mis-attributes an obviously absurd opinion to an opponent. Which members of the
"environmental movement" have have exhibited unlimited confidence in weather
forecasting? One can certainly be alarmed at the prospect of global warming
while maintaining a great deal of skepticism about the current state of the art
in weather prediction.  Again, I'm not sure why this material found its way
into the RISKS digest, but I don't want to leave it unchallenged.


Re: "On the Danger of Simple Answers" (Slade, RISKS-12.02)

Chuck Karish <karish@mindcraft.com>
Wed, 3 Jul 91 09:22:08 PDT
[ To the moderator:
    I don't really get the joke here, and I'm not sure the
    reference to the risk of uncritical acceptance of
    simulation results justifies opening this can of worms in
    this forum, but I don't think it would be appropriate to
    let Reisman's trivialization of real problems go unanswered. --crk ]

In RISKS 12.01, Rob Slade quoted a poster at the University of Michigan
who quoted George Reisman's "The Toxicity of Environmentalism":

>The environmental movement maintains that science and technology cannot be
>relied upon to build a safe atomic power plant, to produce a pesticide that is
>safe, or even bake a loaf of bread that is safe, if that loaf of bread contains
>chemical preservatives.
>
>When it comes to global warming, however, it turns out
>that there is one area in which the environmental movement displays the most
>breathtaking confidence in the reliability of science and technology,
>[ in the area of very-long-term weather prediction ].

Aside from my reservations about Reisman's rhetorical trick of taking examples
from the views of extremists to trivialize a broad-based movement with many
thoughtful adherents, this analysis ignores well-founded concerns.  Goods and
services aren't produced by disinterested scientists in a laboratory.  They're
produced by real-world corporations that have to balance risk against cost.  In
many well-documented instances, the financial stakes have been so high that
real risks have been covered up and safety improvements have been foregone.

The best example of this is the nuclear power industry, in which it's difficult
to make any operational regulation more stringent because to do so would be to
acknowledge that previous regulations had been inadequate and that
currently-operating plants are less safe than is technically possible.  In
cases like this and like global warming where the perceived hazard is extreme,
the public is unwilling to accept grandfather clauses and assurances that "this
subject needs further study; we're not sure what to do yet".

I'll argue that the public concern over global warming is less a result of
trust of simulations than of distrust of the technologies that are being blamed
for the process.  As for the punchline about predicting the weather, I doubt
that the current level of concern over global warming would have arisen except
that the last decade has shown us (at least in North America) the warmest
weather in the past century and a half.  The argument that this warm trend is
not yet statistically significant as evidence of greenhouse-gas-caused global
warming is less persuasive than is personal experience that the weather is
warmer than it used to be.  --

    Chuck Karish        karish@mindcraft.com
    Mindcraft, Inc.     (415) 323-9000


The advantages of posting to RISKS

Brian Tompsett <bct@cs.hull.ac.uk>
Wed, 3 Jul 91 15:47:35 BST
In RISKS DIGEST 12.02 <hollombe@ttidcb.tti.com> (The Polymath) wrote an
anecdote on how an old risks posting had been resurrected for posterity in an
upcoming book. It is precisely these activities that make RISKS one of the most
valued forms of electronic publication. A RISKS posting can find its way from
the electronic media into the paper form (via SIGSOFT notices and CACM) quite
rapidly. These items can then be cited by other workers in the field.

It is this phenomenon that makes RISKS almost accepted as a form of refereed
publication.  It is the efforts of our moderator that make this so.

   [Included in all modesty...  Keep the good stuff coming.  PGN]
