The RISKS Digest
Volume 1 Issue 8

Sunday, 8th September 1985

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Risks of omissions
  Nancy Leveson, Nicholas Spies, Herb Lin, Dave Parnas
Hot rodding your AT and the weather
  John McCarthy
Re: Good Risks and Bad Risks
  Brint Cooper
SDI reliability
  Herb Lin
Viruses, Trojan horses, and worms
  Herb Lin, PGN, Herb Lin, PGN

Risks of omissions

Nancy Leveson <nancy@uci-icsd>
08 Sep 85 14:58:56 PDT (Sun)

I had not intended to get involved in the RISKS Forum discussions, but despite my great respect for John McCarthy's accomplishments, I just cannot let his latest message (RISKS-1.8, Sept. 8) pass without comment.

Some important points that need to be considered:

  1. Nothing is completely safe. All activities and technologies involve risk. Getting out of bed is risky — so is staying there. Nitrates have been shown to cause cancer — not using them may mean that more people will die of food poisoning.
  2. Technology is often introduced for the mere sake of using the “latest,” sometimes without considering the fact that the situation may not really be improved. For example, everybody seems to be assuming lately that machines will make fewer mistakes than humans and there is a frantic rush to include computers and “artificial intelligence” in every new product. Where speed is the determining factor, then they may be right. Where intelligent decision making in the face of unforeseen situations and factors is foremost, then it may not be true. Some electro-mechanical devices may be more reliable than computers. Since I am identified with the area of “software safety,” I am often consulted by those building safety-critical software systems. It is appalling how many engineers insist that computers do not make “mistakes” and are therefore safer than any other human or electro-mechanical system. We (as computer scientists) have often been guilty of condoning or even promoting this misconception. Often it seems that the introduction and use of non-scientific and misleading terminology (e.g. “intelligent,” “expert,” “proved correct”) has far outstripped the introduction of new ideas.
  3. Technology introduced to decrease risk does not always result in increased safety. For example, devices which have been introduced into aircraft to prevent collisions have allowed reduced aircraft separation with perhaps no net gain in safety (although there is a net gain in efficiency and profitability). There may be certain risk levels that people are willing to live with and introducing technological improvements to reduce risks below these levels may merely allow other changes in the system which will bring the risks up to these levels again.
  4. Safety may conflict with other goals, e.g. productivity and efficiency. Technology that focuses on these other goals may increase risk.

John McCarthy suggests that people ignore the risks of not using technology. I would suggest that it is not that these risks are ignored, but that they are known and we have learned to live with them, while the risks of using new technology are often unknown and may involve great societal upheaval in learning to adapt to them. Yes, wood smoke may cause lung cancer, but note that recent studies in Great Britain show that the incidence of prostate cancer in men who work in atomic power plants is many times that of the general population. Delaying the introduction of a technology in order to assure that it does not impose greater risks than those currently borne by the population seems justifiable. Yes, delay may cause someone's death, but the introduction may cause even more deaths and disruption in the long run.

The solution is to develop ways to assess the risks accurately (so that intelligent, well-informed decision-making is possible) and to develop ways to reduce the risk as much as possible. Returning to the topic of computer risk, citizens and government agencies need to be able to make informed decisions about such things as the safety of fly-by-wire, computer-controlled commercial aircraft, or the choice between computer-controlled Air Traffic Control with human assistance and human-controlled Air Traffic Control with computer assistance. To do this, we need to be able to assess the risks and to state accurately what computers can and cannot do.

Forums like this one help to disseminate important information and promote the exchange of ideas, but we also need to start new initiatives in computer science research and practice. I have been writing and lecturing about this for some time. For example,

  1. we need to stop considering software reliability as a matter of counting bugs. If we could eliminate all bugs, this would work. But since we cannot at this time, we need to differentiate between the consequences of “software failures.”
  2. Once you start to consider consequences of failures, then it is possible to develop techniques which will assess risk.
  3. Considering consequences may affect more aspects of software than just assessment. Some known techniques, such as formal verification and on-line monitoring, which are not practical for detecting all faults, may be applied in a cost-effective manner to subsets of faults. Decisions may then be made about the use of competing methodologies in terms of the classes of faults that they are able to detect, remove, or tolerate. But most important, by stating the “software problem” in a different way (in terms of consequences), it may be possible to discover new approaches to it. My students and I have been working on some of these. Most software methodologies involve a “forward” approach which attempts to locate, remove, or tolerate all software faults. An alternative is to take a backward approach which considers the most serious failures and attempts to determine if and how they could occur and to protect the software from taking these actions (a small illustrative sketch of this backward approach follows this list).
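As a rough illustration of the backward approach in point 3, here is a minimal sketch in Python. The command fields and hazard predicates are invented for illustration; this is not a transcription of any actual methodology, only the general idea of enumerating the most serious failures and checking every commanded action against them.

    # Minimal sketch of a consequence-driven ("backward") safety check:
    # enumerate the most serious failures first, then refuse any commanded
    # action that could lead to one of them.  The hazard predicates and the
    # command fields below are invented examples, not a real hazard list.

    HAZARDS = [
        lambda cmd: cmd.get("valve") == "open" and cmd.get("temperature", 0) > 400,
        lambda cmd: cmd.get("rods") == "withdraw" and cmd.get("coolant_flow", 0) < 10,
    ]

    def is_safe(cmd):
        """A command is allowed only if it triggers no known hazard predicate."""
        return not any(hazard(cmd) for hazard in HAZARDS)

    def actuate(cmd):
        if is_safe(cmd):
            print("actuating:", cmd)      # stand-in for the real actuator interface
        else:
            print("hazard detected; entering safe state instead of issuing", cmd)

    # Example: the second command matches the first hazard predicate and is blocked.
    actuate({"valve": "closed", "temperature": 350})
    actuate({"valve": "open", "temperature": 450})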

If, using some of these techniques (or despite their use), it is determined that the software would be more risky than conventional systems or is above a minimum level of acceptable risk, then we can present decision makers with these facts and force them to consider them in the decision-making process. Just citing horror stories or past mistakes is not enough. We need ways of assessing the risk of our systems (which may involve historical data presented in a statistically proper manner) and ways to decrease that risk as much as possible. Then society can make intelligent decisions about which systems should and should not be built for reasons of acceptable or unacceptable risk.
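One hedged illustration of “historical data presented in a statistically proper manner” (the scenario and numbers below are invented): even a long failure-free operating history supports only a bounded claim about failure probability. With n independent failure-free demands, the exact 95% upper confidence bound on the per-demand failure probability is 1 - 0.05^(1/n), roughly 3/n for large n (the so-called rule of three).

    # Upper confidence bound on per-demand failure probability, given n
    # failure-free demands.  The exact 95% bound solves (1 - p)**n = 0.05,
    # i.e. p = 1 - 0.05**(1/n), which is approximately 3/n for large n.

    def failure_prob_upper_bound(n_trials, confidence=0.95):
        """95% upper bound on failure probability after n_trials with no failures."""
        return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

    for n in (100, 10_000, 1_000_000):
        print(n, "failure-free demands ->", failure_prob_upper_bound(n))

Even a million failure-free demands justify a claim of only about 3e-6 per demand; an operating record alone cannot establish the much smaller figures sometimes asserted for safety-critical systems.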


Risks of omissions

<Nicholas.Spies@CMU-CS-H.ARPA>
8 Sep 1985 12:00-EST

The question of responsibility for the non-use of computers is largely meaningless in terms of law unless the dangers of non-use were known to substantially increase the probability of greater harm. In the case of your three short examples:

  1. If the ACLU had acted in good faith in seeking to limit sharing of police information and a court had looked favorably on their argument after weighing the possible risks, then the court is responsible because only the judge had the ability to decide between two courses of action. To make the ACLU responsible would be to deny it and its point of view access to due legal process. To make it necessary for the ACLU to anticipate the court's response to its bringing suit would have the same chilling effect on our legal system.
  2. The same argument applies to the Sierra Club and US 101. If US 101 had been built and then some people were killed, one could as easily conclude that the Sierra Club (or anyone else) might be sued for NOT obstructing the highway!
  3. The “Split Wood not Atoms” poster-vendor might be sued if it could be conclusively proven that he was a knowing party to a conspiracy to give people lung cancer. But we might assume that his motivation was actually to prevent a devastating nuclear accident that might have given 10,000 people lung cancer…

Again, a risks-of-computers organization can only present its case to the courts and the people and, so long as no malfeasance is involved, cannot be held responsible for its failure to predict future consequences. There are far more important “unsymmetric” relationships than that of the press vs. the legal system that pertain to issues of responsibility, namely, that of past vs. future and known vs. unknown. I feel that you are correct in pointing out how computer people would do well to apply their expertise to solving problems of society. In this case the moral imperatives are quite clear.


Risks of omissions (not using some technology)

Herb Lin <LIN@MIT-MC.ARPA>
Sun, 8 Sep 85 15:51:44 EDT
    The problem with a forum on the risks of technology is that while the
    risks of not using some technology, e.g. computers, are real, it takes
    imagination to think of them…

You raise an interesting point that deserves more discussion. However, as one who is concerned that the major problem arises from an uncritical acceptance of technology, I strongly disagree with your suggestion that the scales are stacked AGAINST those who are “pro-technology.”  The reason that anyone adopts any given technology is that it provides benefits that he can see; the problem is getting those individuals to see that there are costs as well.  In other words, the bias in the system is towards acceptance of technology, not rejection of it.  It is this general bias that “risks” people are trying to correct.

    … Is a risk-of-computers organization that successfully sues to delay a
    use of computers either MORALLY or LEGALLY LIABLE if the delay causes
    someone's death?  Is there any moral or legal requirement that such an
    organization prove that they have formally investigated whether their
    lawsuit will result in killing people?  As the above examples indicate,
    the present legal situation and the present publicity situation are
    entirely unsymmetric.

There are such precedents; organizations can be held liable for using technology that is not as up to date as is “generally applicable”.  On the more general point, it is much harder to establish liability as the result of someone's inaction as compared to the result of someone's action; this does happen, but it is harder to prove.

    The harm caused by tape-to-tape batch processing as opposed to on-line
    systems.

I like this example; let's have more discussion of it.  There was a time when on-line systems were totally unreliable, but I think that time has passed.

    Shouldn't computer professionals who pretend to social responsibility
    take an interest in an area where their knowledge might actually be
    relevant?

This is a cheap shot unworthy of pioneers in computer science.  More than anyone else, computer professionals are the ones who are in the best position to assess the limits as well as the promises of technology.  You once said that the responsibility of the scientist is to take his science wherever it may lead.  I agreed with you then, and I agree with it now.  But science is part of a social and cultural context, and cannot be separated as cleanly as one might imagine.  If it is proper for the developers of science and technology to propose new applications of their work (because they see these potential applications as beneficial to society), it is also appropriate for them to suggest the possible consequences of their use.


Risks of omissions

vax-populi!dparnas@nrl-css.arpa (Dave Parnas)
Sun, 8 Sep 85 09:06:10 pdt
  1. McCarthy's contribution can be summarized simply,  “Life is  full of tough decisions about when to use technology; we have to consider both sides.”  Those who have been concerned with the risks of computer technology have been saying, “There are risks to using computer technology; we have to consider both sides”.  I sense  agreement on the obvious conclusion.
  2. I can disagree only with one aspect of Weizenbaum's contribution. He says that he would be against SDI even if it would work, but his arguments mainly show even more reasons why it won't “make nuclear weapons impotent and obsolete.”  It is probably useless to argue about how we would feel about the system if it would work, but I feel the decision would be much harder to make than it is now.

    While I would prefer not to waste technological resources on such a thing, I would see some truth in the argument that the non-technological solutions  also have a clear risk of failure.  

 Dave


Hot rodding your AT and the weather

John McCarthy <JMC@SU-AI.ARPA>
08 Sep 85  1108 PDT

Assuming, contrary to fact, that putting a faster clock in your AT would cause it to catch fire, the RISKS contributors are quite wrong about who would be sued.  According to the new legal doctrine of  bursae profundae,  it isn't the poor BBOARD operator that would have to pay but rich IBM.  It wouldn't take much of a lawyer to figure out that IBM should have anticipated that someone might do this and warned against it or even somehow made it impossible.

Changing the subject, consider the effect of the court decision that NOAA was liable for not fixing the buoy.  Up to now weather predictions have been offered as a non-guaranteed service with the user taking responsibility for the extent to which he relies on it.  The court has said that this is no longer possible.  Any institution with “deep pockets” cannot safely offer information on a user-responsibility basis.  What if Stanford University has negligently failed to replace a stolen book from its medical library, and someone dies who would have been saved had his doctor found the book?  Stanford's lawyers should advise Stanford to deny access to its medical library to practicing physicians.


Re:  Good Risks and Bad Risks

Brint Cooper <abc@BRL.ARPA>
Sun, 8 Sep 85 11:23:16 EDT

Another example of “good news/bad news” is the use of computerized axial tomography (CAT scans) as a substitute for exploratory surgery in the head and body.

Advantages:
  elimination of much surgical risk;
  negative diagnoses (disease NOT present) without surgery;
  much lower cost.
Disadvantages:
  because of the advantages, CAT scans may be over-used;
  increased exposure to X-rays which, itself, can be disease-enhancing.

Brint


SDI reliability

Herb Lin <LIN@MIT-MC.ARPA>
Sun,  8 Sep 85 15:58:27 EDT

My primary complaint about your otherwise interesting table is that it assumes independent failure modes.  I think it is much more likely that the effects of coupled failures are larger.  In particular, given the failure of one platform, it is more likely that others will fail as well.
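A toy calculation (with invented, round numbers; nothing here models SDI itself) shows how much the independence assumption matters: losing many platforms at once is astronomically unlikely if failures are independent, but a single shared cause can take them all out with quite ordinary probability.

    # Toy comparison of independent vs. common-cause (coupled) platform failures.
    # All numbers are invented for illustration.

    from math import comb

    n_platforms = 100        # hypothetical number of platforms
    p_each = 0.01            # per-platform failure probability, independent case
    p_common = 0.005         # probability of one event disabling ALL platforms

    def prob_at_least_k_fail(n, p, k):
        """Binomial probability that at least k of n independent platforms fail."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print("independent, >= 10 platforms fail:", prob_at_least_k_fail(n_platforms, p_each, 10))
    print("one common cause, all platforms fail:", p_common)

With these made-up numbers the coupled case is tens of thousands of times more likely than the comparable independent loss, which is why a table built on the independence assumption can be badly misleading.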


Viruses, Trojan horses, and worms

Herb Lin <LIN@MIT-MC.ARPA>
Sun,  8 Sep 85 16:02:40 EDT
    … The example of squirreled control characters and escape characters that
    do not print but cause all sorts of wonderful actions was popular several
    years ago, and provides a very simple example of how a message can have
    horrible side-effects when it is read.

But if I have designed my operating system to display only what it receives, how is this possible? 


Viruses, Trojan horses, and worms

Peter G. Neumann <Neumann@SRI-CSLA.ARPA>
Sun 8 Sep 85 13:35:05-PDT

The example of squirreled non-printing characters is of course just  one example; that one was relatively easy to fix once it was recognized.  But until you have discovered a flaw, you are vulnerable.  And you never know how many flaws remain undiscovered.

Sure, you may be smart, but large operating systems generally have many security flaws.  You probably aren't smart enough to design and build a system that has none.  (In fact, your solution is not quite good enough by itself — without some other assumptions on the rest of your system.)

Peter


Viruses, Trojan horses, and worms

Herb Lin <LIN@MIT-MC.ARPA>
Sun,  8 Sep 85 16:40:44 EDT
    Sure, you may be smart, but large operating systems generally have many
    security flaws.  You probably aren't smart enough to design and build a
    system that has none.  (In fact, your solution is not quite good enough by
    itself — without some other assumptions on the rest of your system.)

I certainly recognize that operating systems have security flaws in them, having been an OS hacker myself at one time.  But for the problem of messages (as opposed to programs) doing bad things to my system, I guess I never figured out a way of doing that.

My naive model is that I have a special program that intercepts the raw bit stream that comes in from my communications port.  It then translates this into ASCII, and then prints it on my screen.  

If this is all that my program does, I can't see what harm can be done.

Now, if my program writes the message to disk, then I can see potential problems.  So I simply count the characters that are sent to me over the line, and save exactly that many characters on disk.
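For concreteness, here is a minimal Python sketch of roughly the kind of special program being described, with an explicit filter that drops non-printing bytes, which is essentially the “relatively easy fix” mentioned above. The port interface and file name are hypothetical stand-ins; PGN's reply below explains why even a correct program of this sort does not settle the matter.

    # Sketch of the naive model: read a fixed number of raw bytes from a
    # communications port, keep only ordinary printable ASCII (plus newline
    # and tab), display that, and save exactly that many bytes to disk.

    import string

    PRINTABLE = set(bytes(string.printable, "ascii")) - set(b"\x0b\x0c")

    def read_port(expected_length):
        """Hypothetical stand-in for the raw communications-port interface."""
        return b"hello\x1b[2Jworld"[:expected_length]   # contains an embedded ESC sequence

    def receive_message(expected_length):
        raw = read_port(expected_length)
        # Drop non-printing bytes, so the ESC that introduces a terminal
        # control sequence never reaches the screen.
        cleaned = bytes(b for b in raw if b in PRINTABLE)
        print(cleaned.decode("ascii"))
        with open("message.txt", "wb") as f:
            f.write(cleaned[:expected_length])   # never store more than was announced

    receive_message(32)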

What am I missing?


Viruses, Trojan horses, and worms

Peter G. Neumann <Neumann@SRI-CSLA.ARPA>
Sun 8 Sep 85 14:47:21-PDT

Herb, indeed your model is a bit naive, in that you are looking at the problem much too narrowly, and assuming that everything else works fine. Suppose that your special program would work as you wish in intercepting the raw bit stream.  Suppose, however, that there is an operating system flaw that lets me change your special program to do what I wish! (There are many examples of how this might happen.)  Now your program can be effectively bypassed.  The point is NOT whether you can seal off one hole, but rather that you are dealing with the tip of an iceberg and there may be titanic holes you don't even know about.  Besides, as I said earlier, the message squirreling is only one example — and hopefully completely cured.  So I hope you don't reply that you could use seals to guarantee that your program is unchanged.  That would still miss the broader point.  PGN

