The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 1 Issue 9

Sunday, 8 Sep 1985

Contents

o Risks of omission
Nancy Leveson
Nicholas Spies
Herb Lin
Dave Parnas
o Hot rodding your AT and the weather
John McCarthy
o Re: Good Risks and Bad Risks
Brint Cooper
o SDI reliability
Herb Lin
o Viruses, Trojan horses, and worms
Herb Lin
PGN
Herb Lin
PGN

Risks of omissions

Nancy Leveson <nancy@uci-icsd>
08 Sep 85 14:58:56 PDT (Sun)
I had not intended to get involved in the RISKS Forum discussions, but
despite my great respect for John McCarthy's accomplishments, I just cannot
let his latest message (RISKS-1.8, Sept. 8) pass without comment.

Some important points that need to be considered:

1) Nothing is completely safe.  All activities and technologies involve
risk.  Getting out of bed is risky -- so is staying there.  Nitrates have
been shown to cause cancer -- not using them may mean that more people will
die of food poisoning.

2) Technology is often introduced for the mere sake of using the "latest,"
sometimes without considering the fact that the situation may not really be
improved.  For example, everybody seems to be assuming lately that machines
will make fewer mistakes than humans and there is a frantic rush to include
computers and "artificial intelligence" in every new product.  Where speed
is the determining factor, they may be right.  Where intelligent decision
making in the face of unforeseen situations and factors is foremost, it may
not be.  Some electro-mechanical devices may be
more reliable than computers.  Since I am identified with the area of
"software safety," I am often consulted by those building safety-critical
software systems.  It is appalling how many engineers insist that computers
do not make "mistakes" and are therefore safer than any other human or
electro-mechanical system.  We (as computer scientists) have often been
guilty of condoning or even promoting this misconception.  Often it seems
that the introduction and use of non-scientific and misleading terminology
(e.g. "intelligent," "expert", "proved correct") has far outstripped the
introduction of new ideas.

3)  Technology introduced to decrease risk does not always result in 
increased safety.  For example, devices which have been introduced into
aircraft to prevent collisions have allowed reduced aircraft separation
with perhaps no net gain in safety (although there is a net gain in efficiency 
and profitability).  There may be certain risk levels that people are
willing to live with and introducing technological improvements to reduce
risks below these levels may merely allow other changes in the system which
will bring the risks up to these levels again.

4) Safety may conflict with other goals, e.g. productivity and efficiency.
Technology that focuses on these other goals may increase risk.

John McCarthy suggests that people ignore the risks of not using technology.
I would suggest that it is not that these risks are ignored, but that they
are known and we have learned to live with them while the risks of using new
technology are often unknown and may involve great societal upheaval in
learning to adapt to them.  Yes, wood smoke may cause lung cancer, but note
that recent studies in Great Britain show that the incidence of prostate
cancer in men who work in atomic power plants is many times that of the
general population.  Delaying the introduction of a technology in order to
assure that it does not impose greater risks than those the population
currently bears seems justifiable.  Yes, delay may cause someone's death, but
the introduction may cause even more deaths and disruption in the long run.

The solution is to develop ways to assess the risks accurately (so that
intelligent, well-informed decision-making is possible) and to develop ways
to reduce the risk as much as possible.  Returning to the topic of computer
risk, citizens and government agencies need to be able to make informed
decisions about such things as the safety of fly-by-wire computer-controlled
commercial aircraft, or the choice between computer-controlled Air Traffic
Control with human assistance and human-controlled Air Traffic Control with
computer assistance.  To do this, we need to be able to assess the risks and to
accurately state what computers can and cannot do.

Forums like this one help to disseminate important information and promote
the exchange of ideas, but we also need to start new initiatives in computer
science research and practice.  I have been writing and lecturing about this
for some time.  For example,

  1) we need to stop considering software reliability as a matter of
     counting bugs.  If we could eliminate all bugs, this would work.  But
     since we cannot at this time, we need to differentiate between the
     consequences of "software failures."

  2) Once you start to consider the consequences of failures, it becomes
     possible to develop techniques that assess risk.

  3) Considering consequences may affect more aspects of software than just
     assessment.  Some known techniques, such as formal verification and
     on-line monitoring, which are not practical for detecting all faults, may
     be applied in a cost-effective manner to subsets of faults.  It may also
     become possible to make decisions about the use of competing methodologies
     in terms of the classes of faults that they are able to detect, remove, or
     tolerate.
     But most important, by stating the "software problem" in a different way 
     (in terms of consequences), it may be possible to discover new approaches
     to it.  My students and I have been working on some of these.  Most
     software methodologies involve a "forward" approach which attempts to
     locate, remove, or tolerate all software faults.  An alternative is to
     take a backward approach which considers the most serious failures,
     attempts to determine if and how they could occur, and attempts to
     prevent the software from taking those actions (a brief sketch follows
     this list).
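
To make this backward approach concrete, here is a minimal sketch in Python
(chosen only for brevity); the hazard table, state fields, and actuator below
are entirely invented for illustration and do not describe any real system.
It blocks identified hazardous actions at the point of command, regardless of
which software fault requested them:

    # Start from the identified severe failures and guard against them,
    # rather than trying to find and remove every individual fault.
    HAZARDOUS_COMMANDS = {
        # command                  condition that must hold for it to be safe
        "retract_landing_gear":    lambda s: not s["on_ground"],
        "start_treatment_beam":    lambda s: s["shield_in_place"],
    }

    def command_actuator(command, state, actuator):
        """Issue `command` only if it is not a known hazard or its safety
        condition holds in the current `state`."""
        safe = HAZARDOUS_COMMANDS.get(command)
        if safe is not None and not safe(state):
            # The request matches one of the identified severe failures;
            # refuse it and record the attempt for later analysis.
            print(f"REFUSED hazardous command {command!r} in state {state}")
            return False
        actuator(command)
        return True

    # A faulty module asks to retract the gear while still on the ground:
    command_actuator("retract_landing_gear",
                     {"on_ground": True, "shield_in_place": True},
                     actuator=lambda c: print("issued", c))

The point of the sketch is the direction of the analysis: the hazard table is
derived by working backward from the most serious failures, and the guard
holds no matter which part of the software misbehaves.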

If, using some of these techniques (or despite their use), it is determined
that the software would be riskier than conventional systems or would exceed
an acceptable level of risk, then we can present decision makers with these
facts and force them to consider them in the decision-making process.
Just citing horror stories or past mistakes is not enough.  We need ways of
assessing the risk of our systems (which may involve historical data
presented in a statistically proper manner) and ways to decrease that risk
as much as possible.  Then society can make intelligent decisions about
which systems should and should not be built for reasons of acceptable or
unacceptable risk.
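
As a small illustration of what "historical data presented in a statistically
proper manner" might look like, here is a hypothetical sketch (the exposure,
failure counts, and severity weights are invented) that reports failure rates
by consequence class and uses a rule-of-three upper bound where no failures
have been observed:

    OPERATING_HOURS = 50_000.0

    # consequence class: (observed failures, relative severity weight)
    HISTORY = {
        "nuisance":      (120, 1.0),
        "mission_abort": (4,   1_000.0),
        "loss_of_life":  (0,   1_000_000.0),
    }

    for name, (count, severity) in HISTORY.items():
        if count == 0:
            # "Rule of three": with no events observed over the exposure,
            # the 95% upper confidence bound on the rate is about 3/exposure.
            rate = 3.0 / OPERATING_HOURS
            note = "95% upper bound, none observed"
        else:
            rate = count / OPERATING_HOURS
            note = "observed"
        print(f"{name:>13}: {rate:.2e} failures/hour ({note}), "
              f"severity-weighted risk {rate * severity:.3f}/hour")

The particular numbers do not matter; the point is that a zero count is
reported as a bound rather than as "no risk", and that failures are weighted
by their consequences instead of being lumped together as bugs.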


Risks of omissions

<Nicholas.Spies@CMU-CS-H.ARPA>
8 Sep 1985 12:00-EST
To: JMC@SU-AI
Cc: risks@sri-csl

The question of responsibility for non-use of computers is largely
meaningless in terms of law unless the dangers of non-use were known to
substantially increase the probability of greater harm. In the case of your
three short examples:

(1) If the ACLU had acted in good faith in seeking to limit sharing of
police information and a court had looked favorably on their argument after
weighing the possible risks, then the court is responsible because only the
judge had the ability to decide between two courses of action. To make the
ACLU responsible would be to deny it and its point of view access to due
legal process. To make it necessary for the ACLU to anticipate the court's
response to its bringing suit would have the same chilling effect on our
legal system.

(2) The same argument applies to the Sierra Club and US 101.  If US 101 had
been built and then some people were killed, one could as easily conclude
that the Sierra Club (or anyone else) might be sued for NOT obstructing the
highway!

(3) The "Split Wood not Atoms" poster-vendor might be sued if it could be
conclusively proven that he was a knowing party to a conspiracy to give
people lung cancer. But we might assume that his motivation was actually to
prevent a devastating nuclear accident that might have given 10,000 people
lung cancer...

Again, a risks-of-computers organization can only present its case to the
courts and the public and, so long as no malfeasance is involved, cannot be held
responsible for its failure to predict future consequences. There are far
more important "unsymmetric" relationships than that of the press vs. the
legal system that pertain to issues of responsibility, namely, that of past
vs. future and known vs. unknown. I feel that you are correct in pointing
out how computer people would do well to apply their expertise to solving
problems of society. In this case the moral imperatives are quite clear.


Risks of omissions (not using some technology)

Herb Lin <LIN@MIT-MC.ARPA>
Sun, 8 Sep 85 15:51:44 EDT
To: JMC@SU-AI.ARPA
cc: RISKS-FORUM@MIT-MC.ARPA, risks@SRI-CSL.ARPA

        The problem with a forum on the risks of technology is that
    while the risks of not using some technology, e.g. computers, are
    real, it takes imagination to think of them....

You raise an interesting point that deserves more discussion.
However, as on
