The RISKS Digest
Volume 17 Issue 22

Tuesday, 1st August 1995

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

10th anniversary of RISKS
Peter J. Denning
"The Net"
Andrew Marc Greene
Ten years still too soon to tell
Raymond Turney
Which risks to fight first?
Raymond Turney
Where do we go from here? — A Sermon for the Converted
Karl W. Reinsch
Limits to Software Reliability
Dick Mills
Software Development
Dave Schneider
R&D on the dependability of human-computer interfaces
Jack Goldberg and Roy Maxion
Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

10th anniversary of RISKS

Peter J. Denning <pjd@cne.gmu.edu>
Thu, 27 Jul 95 17:37:48 EDT

From the perspective of one of the members of the ACM Council that approved the startup of the Risks Forum, I can say that RISKS has succeeded far beyond what we dared to hope. We hoped that it would be an opportunity for people to speak up on issues about risks of computers that concerned them. We hoped that this one forum would bring some organization to the chaotic discussions about risks and would allow for reasonable and constructive conclusions to emerge. At the same time we were somewhat fearful that the forum might be taken over by a small set of people who monopolized the discussions (as was common on BBs), or that the interest in it would die out in a couple of years. But look at what has been accomplished:

  1. RISKS is vibrant, alive, and well after 10 years.
  2. RISKS is taken seriously by many people as a reasonable ongoing discussion of risks. It has made ACM a key player in this aspect of the Internet.
  3. RISKS has wide participation.
  4. RISKS has very wide distribution.
  5. RISKS is a good observation post for what is on people's minds about computer-related risks.
  6. RISKS has enabled the documentation of a large number of mishaps, malfunctions, and catastrophes, giving designers a solid base of information about what can go wrong with their systems. This documentation has been summarized and analyzed in Peter's ACM Press book (Computer-Related Risks, Addison-Wesley), and all back issues of RISKS are available to anyone at any time on the Web.
All this is in no small measure due to the gentle but firm hand of Peter Neumann, our moderator, who has brought a clear vision of what a constructive, serious on-line discussion about an important topic can be. And he has made it real.
Peter Denning
[PJD is the "Peter Sellers" of the ACM: at one time or another, he's played all the major roles. He has also been a regular contributor to RISKS, dating back to the very first issue, 1 August 1985. PJD, Many thanks for your kind and thoughtful words. PGN]


"The Net"

"Andrew Marc Greene" <amgreene@mit.edu>
Tue, 1 Aug 1995 10:07:02 -0400

I saw "The Net" last night. For those of you who don't know about this new movie, its basic plot is that a "computer analyst" named Angela Bennett runs into the software equivalent of the "Sneakers" chip — it will break into anyplace. It's an ok movie, but worthy of Risks' attention because it is a popularization of the sort of worries that we discuss all the time.

And it does a good job of that, I think. It's realistic but not too alarmist. The resolution is a bit contrived, but aside from that the technology is believable (and the IP equivalent of a 555-xxxx number is xx.xxx.345.xxx).

But I kept wondering why, since this is set in California, Angela didn't try to call PGN for help.... :-)

- Andrew Greene
[An article in this morning's SanFranChron indicates that the writers were inspired by a Japanese case of purloined identity (which has not yet appeared in RISKS). Evidently, their advice came from elsewhere, although RISKS readers will recall Terry Dean Rogan's misadventures as a primal case of being spoofed. Also, in the movie, Angela sought almost no technohelp from anyone, including law enforcement. But as I watched the film, I was ready, just in case she called. BTW, the folks who brought you WarGames and Sneakers are now working on a new film. Stay tuned. PGN]


Ten years still too soon to tell

<Raymond.Turney@ncal.kaiperm.org>
Fri, 23 Jun 1995 13:11 -0700 (PDT)

The ten years that comp.risks has been in existence is not long enough to get a good grip on trends involving risks in computers and related systems. To see why, consider the history of nineteenth-century railroading, another technology whose use grew explosively and which evolved rapidly. By 1870 or so, which is about as long after railroading was first developed as 1995 is after digital computers came into use, it was very evident that there were risks in railroading. One of the most prominent of them was the combination of derailments, wooden passenger car bodies, and oil lamps.

Basically, in a derailment {or other wreck} the passengers would be pinned inside the car by blocked exits. The oil lamps, which had been dislodged, would set the body of the car on fire, and the passengers trapped inside would be burned alive. In 1870 this was a serious problem that, with the steadily increasing number of rail passengers and passenger-miles, looked like it could only get worse. By 1910 this problem had been basically solved. The solution was steel passenger car bodies and electric lighting.

An optimist can take comfort from the fact that this risk is now gone. So far gone that only railfans with an interest in train wrecks {like my father who bought a book on train wrecks, and I who read it} are even likely to remember it. Further, the solution was in large part technical, and did not involve massive changes in people or social institutions. As a final point of encouragement the widespread adoption of electric lighting aboard railroad passenger trains was not an evident possibility in 1870. So sometimes new technology does save you.

A pessimist could point out that forty years is a long time to wait for a solution to a problem as obvious as this one. He could also point out that my selected example of a railroading risk was carefully chosen to expose a problem which had been forgotten. If I had focused not on the narrow and specific risk of passengers being burned alive in derailed cars, but had instead focused on the more general problem of derailments, I would be discussing a problem that is still with us. Indeed, derailments could be offered as a problem which has been aggravated by the widespread adoption of a new technology. The technology referred to of course is the much more widespread use of toxic chemicals carried in tank cars. Likewise, collisions between trains, a risk evident in 1840, are still happening and being discussed in comp.risks as of June 1995.

Believers in upgrading and improving systems could use my example risk as support for their position. The adoption of steel passenger-car bodies depended on the development of more powerful locomotives, which in turn required heavier rail and a better quality of roadbed to operate on. Thus the solution of an apparently narrow and specific problem turned out to hinge on a substantial upgrade of the system as a whole {which was actually done for other reasons}. Supporters of system upgrade have a proposal to mitigate if not solve the problem of derailments, too: track maintenance. A partial explanation for the rise in problem derailments {actually, all derailments are a problem, but not all derailments make CNN, which is what I am referring to} is the reduction of maintenance-of-way expenditures to the barest possible minimum by many railroad companies. The results of this will be predictable to most engineers.

Cynics, of course, can point to this risk as evidence that people worry about the wrong things. Another effect of railroading, not as obvious in 1870 as it is now, is to improve the effectiveness of mass mobilization in wartime. This turned out to be a precondition of World Wars I & II, far more destructive events than all the train wrecks of the nineteenth century combined. But while train wrecks made headlines in the nineteenth century, the impact of railroading on military mobilization did not.

The point of all this is that there has been a historical parallel to the introduction of new risks with the mass adoption of computer technology, namely the introduction of new risks with the mass adoption of railroad technology. Many of the positions now being argued with regard to computer related risk have analogues that could be argued with regard to railroad related risk. Since it is not clear even now which position would have been generally correct in 1870 with regard to railroad related risk, I do not think we have the data to reach accurate conclusions about computer related risk.

The railroad analogy does reinforce some recurrent themes in comp.risks, though. If you want to eliminate a risk, your best bet is to analyze the system as a whole, determine the preconditions of the risk, and change the system to remove them. After all, the proximate cause of most of the train wrecks in which passengers were burned alive was operator error. The elimination of the risk involved going to steel passenger car bodies, though, which is not obviously related to the proximate cause of any particular accident.

Measures necessary to reduce or eliminate a risk are often expensive. Replacing the entire US stock of wooden passenger cars with steel ones was not cheap, and the railroad companies were not happy about doing it.

Social reforms that reduce strain on people can also increase their reliability, and thus the safety of a system. The ten-hour day on the railroads {1916, if I remember right} was an important reform that made things safer for passengers. Perhaps a ten-hour day for programmers ... {just kidding, just kidding}.

Finally, extremes of hope and despair should be avoided. If the railroading analogy is any guide, we shall end up in neither perdition nor paradise. Particular risks will be eliminated, risk as a whole will not, and there will still be nice people unhappy about risk in the year 2100.

Raymond Turney


Which risks to fight first?

<Raymond.Turney@ncal.kaiperm.org>
Mon, 10 Jul 1995 12:47 -0700 (PDT)

It has been argued elsewhere that it is too soon to tell which risks are the most significant and how best to deal with them. Unfortunately, decisions about which risks to fight, and how, will have to be made before the verdict of history is in. It would be better if these decisions were made after some consideration of the information available to us, rather than by throwing darts in the dark {though reports are that, as regards the stock market, darts are not as bad a means of decision making as one might at first suppose}.

My suggestion is that those who are concerned about computer-related risks should focus their attention on the risks to privacy resulting from massive data collection and analysis using computer systems. A secondary problem area, coming up fast, is the unknown psychological effects of both the Internet and the increasing availability of virtual reality.

There are a number of reasons for focusing on these risks, among the many risks that are out there. The first and strongest reason is that these risks arise from the intentional use of computers working more or less as designed. Thus there is no natural counterforce working to contain these risks.

By contrast, one can consider a widely discussed {at least in RISKS} risk: fly-by-wire aircraft. Whatever the problems with the design of the A320, or possibly the 777, there are a large number of people working for the manufacturers, airlines, and regulators who are strongly against aircraft crashes. Aircraft crashes are thoroughly investigated and their causes widely reported in magazines devoted to flight, and the air safety community is well developed. Computer related risks as they affect aircraft will be dealt with by the air safety community in the normal course of business. When one also considers that flight is one of the safest means of travel currently available, the conclusion is that the total additional risk to our lives and values resulting from the use of computers ... given the continued existence of a strong aviation safety community ... is not major.

Similarly, the problems related to the use of computers in medical equipment are in effect a subset of the problems raised by iatrogenic disease in general. While the impression I get is that the medical safety community is underdeveloped relative to the air safety community {i.e., there are fewer and less powerful people interested in medical safety, and the medical equivalent of the FAA is much weaker}, it is very doubtful that medical safety would be significantly increased by focusing more of the existing medical safety effort on computer related aspects of the overall risk of iatrogenic disease. Thus it is not clear that a demand by computer people that medical safety effort be reallocated to deal with computer problems would be a welcome thing. After all, in arguing that computer related risks make only a minor contribution to the overall risk, I am merely assuming that the small number of reports in RISKS of computer related death or injury is a realistic sample of the problem.

It is possible to go through a number of the other risks discussed in comp.risks and make similar arguments. While a knowledge of computer related risks in these areas is valuable, and should be made available to the relevant safety communities, these risks are already being dealt with. It may be that we need to strengthen the medical safety community, but we probably do not want the computer safety community to replace it. Look at the whole system, not merely the computer related portion of the risk.

By contrast, the risks to privacy posed by modern computer technology are nobody else's responsibility. And since they arise from the use of the machines as designed, they will not be reduced by the normal efforts of engineers to make things work better. They will be increased by the normal efforts of engineers to make things work better.

The reason for the interest in the effects of the Internet and VR technology is simply that they are unknown. People are reporting problems with "Internet addiction", and as an old gamer I can see where there might be a problem with people preferring VR flight simulators to their real lives. Not being a member of the media, I will not suggest panic. Some psychological studies, to see if this risk is real and how big it is, do seem appropriate though.

In short, my recommendation is that efforts to increase the use of computers in ways which will invade and reduce personal privacy be resisted with all of the power available to the Computer Science community. The extent of computer related contribution to medical risks should be studied. Psychological studies of the effects of the Internet, and VR, should be supported if the methodology is reasonably sound and the intent of the authors is not sensationalistic. What has been learned about computer related risk should be communicated to members of the appropriate safety communities.

Raymond Turney

P.S. I suspect a lot of readers will question my assumption that sound methodology exists in psychological studies. From a scholarly perspective, I might even agree. But this is about policy, not scholarship, and while proof would be nice, clues will do.


Where do we go from here? — A Sermon for the Converted

"Karl W. Reinsch" <kreinsch@radix.net>
Sun, 23 Jul 1995 02:26:14 -0400 (EDT)

What is the problem that causes computer-related risks today? Is it the technology? The people? Those of us who read RISKS on a regular basis understand that it is people expecting too much from technology.

Why do people expect too much from technology? Sometimes it is lack of understanding. A large part of the problem is that the computer profession has oversold its product. Computers, people are told, can do anything. And if they can't do it now, they will tomorrow. Computers are perfect and foolproof.

Wrong.

Things are better today. In our profession, the awareness of the limitations of computers is at an all-time high. There exist forums such as the Risks Forum, and Peter Neumann's own columns in CACM. A look into bookstores reveals many recent books, such as the second edition of Theodore Roszak's "The Cult Of Information", Lauren Wiener's "Digital Woes", Clifford Stoll's "Silicon Snake Oil", and Peter Neumann's "Computer-Related Risks", along with many newer titles. At the university I attended, the reading for my course-work included Frederick Brooks' "The Mythical Man-Month", the ACM TWA case study, Donald Norman's "The Design of Everyday Things", and the ACM Code of Ethics. Awareness of the computer's limitations and risks is certainly being passed on to the next generation of computer professionals. So, why do the risks still exist?

It goes back to the computer having been oversold in the past. People have been told that the computer can do things that it can not or should not do. The government of the United States once attempted to develop the Strategic Defense Initiative (SDI or "Star Wars"). It took a group of computer professionals to come forward and point out how impossible such a project is. SDI is just one of many things that people assume computers can do. The problem is not only in government. The problem is everywhere. Corporate leaders and futurists such as Alvin Toffler tell the public what the computer can, can not, and will do. United States Congressman Newt Gingrich often cites Alvin Toffler's visions of technology and the future. It is my thinking that Congressman Gingrich should balance his reading diet with the works of Neumann and Roszak.

So what do we, as computer professionals, do now? Our primary problem is that we have been preaching to the converted. The computer professional is the primary reader of RISKS and related subject matter. We need to share the message of RISKS. We need to preach to the unconverted. As Justin Wells suggested in RISKS-17.19, we need to push for a risks segment in our evening news. We need television programs and newspaper columns. We need to get the message out to everyone.

The RISKS tidal wave is only beginning to come in. It needs to wash over everything. Happy Birthday, RISKS! Here's to 10 years and hopefully many more.

Karl Reinsch, kreinsch@radix.net


Limits to Software Reliability

Dick Mills <rj.mills@pti-us.com>
Thu, 20 Jul 1995 15:01:32 -0400

For decades I've heard that software can't ever be as safe or reliable as hardware. That makes me feel uneasy, because it rings untrue. Instinct tells me it has nothing to do with software per se, but merely with how we structure it. The following is a little thought exercise. Risks readers can amuse themselves picking it apart.

Complex electric circuits are composed of networks of connected components. Each component (like a resistor, or a transistor) is very simple.

Suppose we wrote a simulation of one of these simple components, say a resistor. The simulator would have a CPU, memory, and A/D and D/A converters for each of the device's ports. It would also have a battery, if we're going to avoid external power connections. Now package the simulator in epoxy with just two external ports, just as if it were a real resistor. Within a limited domain, the simulated resistor ought to be able to pass a Turing test, in that one could not distinguish it from a real resistor.

Given bins of identical simulators for the needed basic components, we would have the raw materials to build circuit boards of arbitrary complexity. The question is, how do the hardware and simulator based versions of the same board inherently differ in complexity and reliability?

Granted, a simulator of a resistor is more complex than an actual resistor. That gives hardware a slight edge from the start. However, [and this is the real point], the simulator-hardware complexity gap would *not* grow in proportion to the number of components on the circuit board. All instances of the resistor simulator are alike. The complexities of the circuit interconnections are identical for both implementations.
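
To make the thought exercise concrete, here is a minimal sketch of the idea (my own illustration, not part of the original note; the class and function names are invented): a software stand-in for a resistor that obeys Ohm's law at its two ports, plus a trivial series network built from identical instances. The per-component simulator adds a fixed overhead, but the interconnection logic is the same whether the parts are real or simulated.

    # Hypothetical sketch of the simulated-resistor thought exercise.
    # Names are invented for illustration only.

    class SimulatedResistor:
        """Behaves like an ideal resistor across its two ports (Ohm's law)."""
        def __init__(self, ohms):
            self.ohms = ohms

        def current(self, volts_across):
            # I = V / R -- within this limited domain the simulator is
            # indistinguishable from the component it stands in for.
            return volts_across / self.ohms

    def series_current(resistors, supply_volts):
        # Wiring identical simulators in series: the interconnection
        # complexity is the same as it would be for real resistors.
        total_ohms = sum(r.ohms for r in resistors)
        return supply_volts / total_ohms

    board = [SimulatedResistor(100.0), SimulatedResistor(220.0)]
    print(series_current(board, 5.0))   # about 0.0156 A through the pair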

My conclusion is that there is no *a priori* reason why software-based systems cannot have the same risks as hardware-based implementations of the identical external requirements and identical design (i.e., circuit-level design). Is that correct?

Dick Mills +1(518)395-5154 http://www.albany.net/~dmills


Software Development

<d_schneider@emulex.com>

Fri, 21 Jul 95 14:36:00 PDT

While thinking about trends shown in the history of the Risks Digest, I also came across an article in _Software Development_, Vol 3 No. 7, July 1995 (a Miller-Freeman publication). I think it has some very good comments in it.

The article is "Project-Level Design Archetypes", by David Bond. The sub-head is, "Choosing the 'best practices' touse for your software development project depends on the type of project you're dealing with." He goes on to say, "If you are tuned into the debate over software development practices, you are aware of how polarized this debate can be. Each side trumpets its view of what development is all about and largely ignores what the others have to say. Each point of view is based on real experiences. Each side mistakenly assumes all software development projects are similar enough that one set of strategies works for all."

He goes on to discuss Constrained Software (and SEI Maturity Levels), Internal Client Software, Vertical Market Software, and Mass Market Software. In each section, he lists the critical success factors, in approximate order of importance. For instance, under Mass Market Software, the list is "Marketing, Timeliness, Features, Cost, Ease of Use". For Constrained Software, as in embedded software or government contracts, it is "Adherence to the contract, including schedule and cost, Quality, Maintainability and extendibility".

He further describes the impact of these different lists on choosing a "best practice" strategy (which is why SEI Maturity Levels are under Constrained Software). Since many of the "failure to deliver" anecdotes in Risks relate to Information Systems, the key items in this article are

  1. "If you look at the tradition of systems engineering, requirements changes are considered undesirable."
  2. "On the other hand, most information systems departments have learned...that they must accommodate change."
  3. "One tremendous advantage of systems engineering is that it appears to scale up better. Various studies have indicated a high failure rate of large information systems projects. This is probably because large projects often involve issues beyond just software. Systems engineering is aimed at managing precisely these kinds of projects."
I strongly recommend the entire article.
/dps Dave Schneider, Emulex Corp.

P.S. I have been reading a lot of back issues over the past several months, because you just can't produce new issues fast enough for my appetite. So I have noticed some trends, as with the IS systems mentioned. Another trend is that there is often a lot of discussion of the anticipated impacts of regulatory changes before they occur, and rarely any followup after they occur — or on whether they occurred at all.

But keep up the good work. I often find pithy things in the digests to share with my coworkers. /dps


R&D on the dependability of human-computer interfaces

Jack Goldberg <goldberg@csl.sri.com>
Mon, 31 Jul 95 10:50:54 -0700

Readers of Risks need no awareness-raising about dependability problems in human-computer interaction. There are too many reports of transportation, power, or weapons systems in which operator misunderstanding or abuse of the control interface had tragic consequences. Serious, if not tragic, economic losses or inefficiencies can be traced to ambiguities or opacities in the information supplied to users or in the rules governing system operation. Some reports have attributed failures to the human interface that are really due to system designs that place impossible demands on operators. The maintenance interface is also a source of failures, from small to gigantic. Use of computers by groups introduces problems of human communication to the human-computer interface.

Design of the human-computer interface for dependability has been attended to seriously in numerous applications; for example, several Risks reports have described very diligent attention to human interface issues by aircraft designers, and there are other serious industrial efforts. Most of these are concerned with highly critical applications, and it is not clear that results from one application domain can be used in others.

It would seem appropriate to have some general results about dependability and the human interface that can be applied across different domains and over different degrees of criticality. What are the fundamental issues? How should we characterize problems at the human interface? How can we measure and observe interface risks and errors? How should interfaces be designed for low risk and for tolerating interface faults? How can systems be designed to prevent unreasonable requirements at the interface? Answers to these questions are not evident in current professional communications.

How should this study start? What are the right questions? What good results exist that can be generalized? What would a good testbed contain? What are the right forums for communicating results? Is anyone working on these issues?

Despite many years of research in user interfaces, their dependability doesn't seem to be improving. (Yes, we are aware of all the research done in the CHI community.) This leaves us with the somewhat rhetorical question: how would someone demonstrate that an interface could be depended upon for a certain mission-critical application (such as a display in a hospital operating room)?

Jack Goldberg <goldberg@csl.sri.com>
Roy Maxion <maxion@K.GP.CS.CMU.EDU>
[Please reply directly to Jack, cc: Roy. Jack will cull the responses for RISKS. One of the key cases that might motivate this effort is the Vincennes-Aegis shootdown of the Iranian Airbus. PGN]
