The RISKS Digest
Volume 14 Issue 25

Monday, 11th January 1993

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Organizational Analysis in Computer Science — PART TWO
Rob Kling
Info on RISKS (comp.risks)

 

   [CONTINUED FROM RISKS-14.24]

System Security and Reliability

In a simplified engineering model of computing, the reliability of products is
assured through extensive testing in a development lab. The social world of
technology use is not perceived as shaping the reliability of systems, except
through irascible human factors, such as "operator errors." An interesting and
tragic illustration of the limitations of this view can be found in some
recent studies of the causes of death and maiming by an electron accelerator
which was designed to help cure cancer, the Therac-25 (Jacky, 1991, Leveson
and Turner, 1992).

The Therac-25 was designed and marketed in the mid-1980s by a Canadian firm,
AECL, as an advanced medical technology. It featured complete software control
over all major functions (supported by a DEC PDP-11), among other innovations.
Previous machines included electro-mechanical interlocks to raise and lower
radiation shields. Several thousand people were effectively treated with the
Therac-25 each year. However, between 1985 and 1987 there were six known
accidents in which several people died in the US. Others were seriously maimed
or injured.

Both studies concur that there were subtle but important flaws in the design
of the Therac-25's software and hardware. AECL's engineers tried to patch the
existing hardware and (finally) software when they learned of some of the
mishaps. But they treated each fix as the final repair.
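
Leveson and Turner trace several of the overdoses in part to race conditions
between the operator-interface task and the treatment tasks, which shared
state without synchronization. The Python sketch below (with hypothetical
names, and obviously not AECL's actual code) illustrates that class of
check-then-act bug in miniature: the safety check passes on stale data, and
by the time the beam fires, a concurrent edit has already changed the
machine's configuration.

    import threading
    import time

    class BeamController:
        def __init__(self):
            self.mode = "xray"            # X-ray mode needs the target in place
            self.target_in_place = True

        def operator_edit(self):
            # A fast operator re-edits the prescription to electron mode.
            self.mode = "electron"
            self.target_in_place = False  # hardware begins retracting the target

        def fire(self):
            # Unsafe: the check and the action are not one atomic step.
            if self.mode == "xray" and self.target_in_place:
                time.sleep(0.01)          # scheduling gap between check and act
                # A concurrent edit may have retracted the target by now,
                # yet the full-power beam still fires.
                print("firing full-power beam; target_in_place =",
                      self.target_in_place)

    ctl = BeamController()
    t1 = threading.Thread(target=ctl.fire)
    t2 = threading.Thread(target=ctl.operator_edit)
    t1.start(); time.sleep(0.005); t2.start()
    t1.join(); t2.join()

A lock around the check-and-fire sequence, or the electro-mechanical
interlocks of the earlier machines, would remove this particular hazard; the
organizational failures described below are what allowed it to persist.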

Both studies show how the continuing series of mishaps was exacerbated by
diverse organizational arrangements. Jacky claims that pressures for speedy
work by low-skilled machine operators, coupled with an interface design that
did not highlight important error messages, were among the many causes of the
accidents. Leveson and Turner differ in downplaying the working conditions of
the Therac-25's operators; they emphasize instead the flawed social system for
communicating the seriousness of problems to Federal regulators and other
hospitals. Both studies observe that even the best of companies are unlikely
to develop error-free systems without high-quality feedback from users. Their
recommendations differ: Jacky emphasizes licensing system developers to ensure
minimal standards of competence, while Leveson and Turner propose extensive
education and training of software engineers and more effective communication
between manufacturers and their customers.

However, both studies indicate that an understanding of the safety of computer
systems must go beyond the laboratory and extend into the organizational
settings where they are used. In the case of the Therac-25, this required
understanding a complex web of interorganizational relationships, as well as
the technical design and operation of the equipment. The need for this kind of
organizational understanding is unfortunately slighted in the CS academic
world today. CTF discusses only those aspects of computer system reliability
which are amenable to understanding through laboratory-like studies (Hartmanis
and Lin, 1992:110-111). But cases of safety critical systems, like the
Therac-25, indicate why some Computer Scientists must be willing to undertake
(and teach) organizational analysis.

   [From the title of the above section, I presume Rob had not yet gotten
   around to commenting on system security in complementary context to the
   discussion on human safety.  But it seems that similar conclusions can
   be drawn.  Apologies for the interjection.  PGN]

Worldviews and Surprises about Computerization

These few paragraphs barely sketch the highlights of a fertile and significant
body of research about computer systems in use.  Perhaps the most important
simplification for traditional computer scientists is to appreciate how people
and their organizations are situated in a social world and consequently
compute within a social world. People act in relationship to others in various
ways and concerns of belonging, status, resources, and power are often
central. The web of people's relationships extends beyond various formally
defined group and organizational boundaries (Kling and Scacchi, 1982; Kling,
1992).  People construct their worlds, including the meanings and uses of
information technologies, through their social interactions.

This view is, of course, not new to social scientists. On the other hand,
there is no specific body of social theory which can easily be specialized for
"the case of computing," and swiftly produce good theories for Organizational
Informatics as trivial deductions. The best research in Organizational
Informatics draws upon diverse theoretical and methodological approaches
within the social sciences with a strong effort to select those which best
explain diverse aspects of computerization.

       ORGANIZATIONAL INFORMATICS WITHIN COMPUTER SCIENCE

CTF places dual responsibilities on Computer Scientists. One responsibility is
to produce a significant body of applicable research. The other responsibility
is to educate a significant fraction of CS students to be more effective in
conceiving and implementing systems that will enhance organizational
performance. It may be possible to organize research and instruction so as to
decouple these responsibilities. For example, molecular biologists play only a
small role in training doctors. However, CS departments act like an integrated
Medical school and Biology department. They are the primary academic locations
for training degreed computing specialists, and they conduct a diverse array
of less applicable and more applicable research. In practice, the research
interests of CS faculty shape the range of topics taught in CS departments,
especially the 150 PhD-granting departments. CS curricula mirror major areas
of CS research and the topics which CS faculty understand through their own
educations and subsequent research. As a consequence, CS courses are likely to
avoid important CS topics which appear a bit foreign to the instructor.

An interesting example of this coupling appears in CTF's brief
description of public-key encryption systems and digital signatures (Hartmanis
and Lin, 1992:27). In the simple example, Bob and Alice can send messages
reliably if each maintains a secret key. Nothing is said about the social
complications of actually keeping keys secret. The practical problems are
similar to those of managing passwords. In real organizations, people lose or
forget their passwords. Also, some passwords are shared by a group with
shifting membership, and the "secret key" can readily become semi-public. In
practice, the management of keys is a critical element of system security.
But Computer Scientists are prone to teach courses on cryptography as
exercises in applied mathematics, such as number theory and Galois theory, and
to skirt the vexing practical problems of making encryption a practical
organizational activity.
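
To make the contrast concrete, the following few lines of Python contain the
whole of the textbook arithmetic behind an RSA-style digital signature (with
deliberately tiny, insecure numbers; this illustration is mine, not CTF's).
Every line silently assumes that the private exponent d stays secret, which
is exactly the organizational problem the mathematics cannot address.

    # Toy RSA signature with textbook-sized (hopelessly insecure) numbers.
    p, q = 61, 53
    n = p * q                   # public modulus: 3233
    phi = (p - 1) * (q - 1)     # 3120
    e = 17                      # public exponent, coprime with phi
    d = pow(e, -1, phi)         # private exponent: 2753 (Python 3.8+)

    m = 42                      # a (numerically encoded) message
    sig = pow(m, d, n)          # Alice signs with her PRIVATE key d
    assert pow(sig, e, n) == m  # anyone verifies with her PUBLIC key (e, n)

    # If d leaks -- through a shared password, a forgotten file, or a
    # departing employee -- every signature it ever produced becomes suspect.

The five lines of number theory are the easy part; key custody, revocation,
and recovery are where real organizations struggle.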

Today, most of the 40,000 people who obtain BS and MS degrees in CS each year
in the U.S. have no opportunities for systematic exposure to reliable
knowledge about the best design strategies, common uses, effective
implementation, and assessments of value of computing in a social world
(Lewis, 1989). Yet a substantial fraction of these students go on to work for
organizations attempting to produce or maintain systems that improve
organizational performance without a good conceptual basis for their work.
Consequently, many of them develop systems that underperform in organizational
terms even when they are technically refined. They also recommend
implementation procedures that are ineffective and sometimes even
counterproductive.

One defensible alternative to my position is that CS departments should not
take on any form of organizational analysis. They should aggressively take a
role akin to Biology departments rather than taking on any instructional or
research roles like Medical schools. Held sincerely, this position requires a
high level of restraint by academic Computer Scientists. First and foremost,
they should cease talking about the uses, value, or even problems of
computerized systems that would be used in any organizational setting.
Research proposals would be silent about any conceivable application of
research results. Further, they should make effective efforts to ensure that
anyone who employs their graduates is aware that the graduates may have no
special skills in understanding organizational computing. It would take an
aggressive "truth in advertising" campaign to make it clear that Computer
Scientists have no effective methods for understanding computerization in the
social world. Further, Computer Scientists would forsake their commitments to
subfields like software engineering, which tacitly deal with ways to help
teams of systems developers work effectively (Curtis et al., 1988). Computer
Scientists, in
this view, would remove themselves from addressing organizational and human
behavior, in the same way that molecular biologists are removed from
professionally commenting on the practices of cardiologists and obstetricians.
CTF argues that this view would be self-defeating.  But it would be internally
consistent and have a distinctive integrity.

In contrast, CS faculty are often reluctant to wholly embrace Organizational
Informatics. But some CS subfields, such as software engineering, depend upon
organizational analysis (Curtis et al., 1988). Further, CS faculty do little
to advertise the distinctive limitations in the analytical skills of our
programs' graduates. Part of the dilemma develops because many CS faculty are
ambivalent about systematic studies of human behavior. Applied mathematics and
other modes of inquiry which seem to yield concise, crisp and concrete results
are often the most cherished. As a consequence, those who conduct behaviorally
oriented research in CS departments are often inappropriately marginalized.
Their students and the discipline suffer as a result.

Between 1986 and 1989, the total number of BS and MS CS degrees awarded
annually in the US declined from about 50,000 to approximately 40,000. The
number of students majoring in CS rapidly declined at a time when
computerization was becoming widespread in many fields. A significant fraction
of the decline can be attributed to many students finding CS programs insular
and indifferent to many exciting forms of computerization. The decline of
military R&D in the U.S. can amplify these trends or stimulate a more
cosmopolitan view in CS departments. The decline in military R&D is shifting
the job market for new CS graduates towards a markedly more civilian
orientation. This shift, along with the trend towards computing distributed
into diverse work groups, is leading to more job opportunities for people with
CS education who know Organizational Informatics.

The situation of CS departments has some parallels with Statistics
departments. Statistics is widely used and taught in many academic
disciplines. But Statistics departments have often maintained a monkish
isolation from "applications." Consequently, the application of statistics
thrives while Statistics departments have few students and modest resources.
Might the status of Statistics indicate a future possibility for an insular
approach to CS?

The best Organizational Informatics research in North America is conducted by
faculty in the Information Systems departments in business schools and by
scattered social scientists (cf. Boland and Hirschheim, 1987; Galegher, Kraut
and Egido, 1990; Cotterman and Senn, 1992; Sproull and Kiesler, 1991). But
Computer Scientists cannot effectively delegate the research and teaching of
Organizational Informatics to business schools or social science departments.

Like Computer Scientists, faculty in these other disciplines prefer to focus
on their own self-defined issues.  Computer Scientists are much more likely to
ask questions with attention to fine grained technological nuances that
influence designs. For example, the professional discussions of computer risks
have been best developed through activities sponsored by the ACM's Special
Interest Group on Software (SIGSOFT). They are outside the purview of business
school faculty and, at best, only a few social scientists are interested in
them. Generally, technology plays a minor role in social science theorizing.
And when social scientists study technologies, they see a world of
possibilities: energy technologies, transportation technologies, communication
technologies (including television), medicinal drugs and devices, and so on.
They see little reason to give computer-related information technologies a
privileged role within this cornucopia. As a consequence, the few social
scientists who take a keen interest in studying computerization are
unfortunately placed in marginal positions within their own disciplines. Often
they must link their studies to mainstream concerns as defined by the
tastemakers of their own fields, and the resulting publications appear
irrelevant to Computer Scientists.

Further, faculty in these other disciplines are not organized to effectively
teach tens of thousands of CS students, students who are steeped in technology
and usually very naive about organizations and about systems development and
use within them. In North America there is no well-developed institutional
arrangement for educating students who can effectively take leadership roles
in conceptualizing and developing complex organizational computing projects
(Lewis, 1989).

CTF is permeated with interesting claims about the social value of recent and
emerging computer-based technologies. While many of these observations should
rest on an empirically grounded scientific footing, Computer Scientists have
deprived themselves of access to such research. For example, the discussion of
systems risks in the ACM rests on a large and varied collection of examples
and anecdotes. But there is no significant research program to help better
understand the conditions under which organizations are more likely to develop
systems using the best risk-reducing practices. There is an interesting body
of professional lore, but little scholarship to ground it (See Appendix).

Computer Scientists have virtually no scholarship to utilize in understanding
when high performance networks, like the National Research and Education
Network, will catalyze social value proportional to their costs. Consequently,
many of the "obvious" claims about the value of various computing technologies
that we Computer Scientists make are more akin to the lore of home remedies
for curing illness. Some are valid, others are unfounded speculation. More
seriously, the theoretical bases for recommending home medical remedies and
new computer technologies cannot advance without sound research
programs.

                         WHAT IS NEEDED

CTF sets the stage for developing Organizational Informatics as a
strong subfield within Computer Science. CTF bases the expansion
of the discipline on a rich array of applications in which many
of the effective technologies must be conceived in relationship
to plausible uses in order to provide attractive social value for
multi-billion dollar public investments.

The CS community needs an institutionalized research capability to produce a
reliable body of knowledge about the usability and value of computerized
systems and the conditions under which computer systems improve organizational
performance. In Western Europe there are research projects about
Organizational Informatics in a few Computer Science departments and research
funding through the EEC's ESPRIT program (Bubenko, 1992; Iivari, 1991; Kyng
and Greenbaum, 1991). These new research and instructional programs in Western
Europe give Organizational Informatics a significantly more effective place in
CS education and research than it now has in North America.

The CS community in the U.S. has 30 years of experience in institutionalizing
research programs, especially through the Defense Advanced Research Projects
Agency and the National Science Foundation (NSF). There are many approaches,
including national centers and individual investigator research grants. All
such programs aim to develop and sustain research fields with a combination of
direct research funds, the education of future researchers, and the
development of research infrastructure. They are all multimillion dollar
efforts. Today, NSF devotes about $125K annually to Organizational Informatics
as part of the Information Technology in Organizations program. This start is
far short of the level of funding required to develop this field within CS.

The North American CS curricula must also include opportunities for students
to learn the most reliable knowledge about the social dimensions of systems
development and use (Denning, 1992).  These opportunities, formed as courses,
can provide varied levels of sophistication. The most elementary courses
introduce students to some of the key topics in Organizational Informatics and
the limitations of Systems Rationalism as an organizing frame (for example,
Dunlop and Kling, 1991a). More advanced courses focus on specific topics, such
as those I have listed above. They teach about substantive problems and
theoretical approaches for analyzing them. While many of these approaches are
anchored in the sociological theory of organizations, CS students usually
won't grasp the importance of the theories without numerous computing examples
to work with. They also have trouble grasping the character of computing in
organizations without guided opportunities for observing and analyzing
computerization in practice. Consequently, some courses should offer
opportunities for studying issues of computerization in actual organizations.

Fortunately, a few CS departments offer some courses in Organizational
Informatics. In addition, some CS faculty who research and teach about human
behavior in areas like Human-Computer Interaction and Software Engineering
can help expand the range of research and instruction. Unfortunately, only a
fraction of the CS departments in the U.S. have faculty who study and teach
about computing and human behavior.

While the study of Organizational Informatics builds upon both the traditional
technological foundations of CS and the social sciences, the social sciences
at most universities will not develop it as an effective foundational topic
for CS. On specific campuses, CS faculty may be able to develop good
instructional programs along with colleagues in social sciences or Schools of
Management.

But delegating this inquiry to some other discipline does not provide a
national scale solution for CS. Other disciplines will not do our important
work for us. Mathematics departments may be willing to teach graph theory for
CS students, but the analysis of algorithms would be a much weaker field if it
could only be carried out within Mathematics Departments. For similar reasons,
it is time for academic Computer Science to embrace Organizational Informatics
as a key area of research and instruction.

                           REFERENCES

Bentley, Richard, Tom Rodden, Peter Sawyer, Ian Sommerville, John
     Hughes, David Randall and Dan Shapiro.  1992.
     "Ethnographically Informed Systems Design for Air Traffic
     Control." Proc. Conference on Computer-Supported Cooperative
     Work, Jon Turner and Robert Kraut (ed.) New York, ACM Press.
Boland, Richard and Rudy Hirschheim (Eds). 1987. Critical Issues
     in Information Systems. New York: John Wiley.
Bullen, Christine and John Bennett. 1991. "Groupware in Practice:
     An Interpretation of Work Experience" in Dunlop and Kling, 1991b.
Bubenko, Janis. 1992. "On the Evolution of Information Systems
     Modeling: A Scandinavian Perspective." in Lyytinen and
     Puuronen, 1992.
Cotterman, William and James Senn (Eds). 1992. Challenges and
     Strategies for Research in Systems Development. New York:
     John Wiley.
Curtis, Bill, Herb Krasner and Neil Iscoe. 1988. "A Field Study
     of the Software Design Process for Large Systems,"
     Communications of the ACM. 31(11):1268-1287.
Denning, Peter. 1991. "Computing, Applications, and Computational
     Science." Communications of the ACM. (October)
     34(10):129-131.
Denning, Peter. 1992. "Educating a New Engineer" Communications
     of the ACM. (December) 35(12):83-97
Dunlop, Charles  and Rob Kling, 1991a. "Introduction to the
     Economic and Organizational Dimensions of Computerization."
     in Dunlop and Kling, 1991b.
Dunlop, Charles and Rob Kling (Ed). 1991b. Computerization and
     Controversy: Value Conflicts and Social Choices. Boston:
     Academic Press.
Ehn, Pelle. 1991. "The Art and Science of Designing Computer
     Artifacts." in Dunlop and Kling, 1991b.
Fish, Robert S., Robert E. Kraut, Robert W. Root, and Ronald E.
     Rice. 1993. "Video as a Technology for Informal Communication."
     Communications of the ACM. 36(1)(January):48-61.
Galegher, Jolene, Robert Kraut, and Carmen Egido (Ed.) 1990.
     Intellectual Teamwork: Social and Intellectual Foundations
     of Cooperative Work.  Hillsdale, NJ: Lawrence Erlbaum.
Greif, Irene (Ed). 1988. Computer Supported Cooperative Work: A
     Book of Readings. San Mateo, CA: Morgan Kaufmann.
Grudin, Jonathan. 1989. "Why Groupware Applications Fail:
     Problems in Design and Evaluation." Office: Technology and
     People. 4(3):245-264.
Hartmanis, Juris and Herbert Lin (Eds). 1992. Computing the
     Future: A Broader Agenda for Computer Science and
     Engineering.  Washington, DC. National Academy Press.
     [Briefly summarized in Communications of the ACM,35(11)
     November 1992]
       [[TO BE DISCUSSED AT ACM CSC in Indianapolis, 16-18 Feb 1993... PGN]]
Hewitt, Carl. 1986. "Offices are Open Systems" ACM Transactions
     on Office Information Systems. 4(3)(July):271-287.
Iivari, J. 1991. "A Paradigmatic Analysis of Contemporary Schools
     of IS Development." European J. Information Systems
     1(4)(Dec): 249-272.
Jacky, Jonathan. 1991. "Safety-Critical Computing: Hazards,
     Practices, Standards, and Regulation" in Dunlop and Kling
     1991b.
Jarvinen, Pertti. 1992. "On Research into the Individual and
     Computing Systems," in Lyytinen and Puuronen, 1992.
King, John L. and Kenneth L. Kraemer. 1981. "Cost as a Social
     Impact of Telecommunications and Other Information
     Technologies." In Mitchell Moss (Ed.) Telecommunications and
     Productivity, New York: Addison-Wesley.
Kling, Rob. 1992. "Behind the Terminal: The Critical Role of
     Computing Infrastructure In Effective Information Systems'
     Development and Use." Chapter 10 in Challenges and
     Strategies for Research in Systems Development. edited by
     William Cotterman and James Senn. Pp. 153-201. New York:
     John Wiley.
Kling, Rob  and Charles Dunlop. 1993. "Controversies About
     Computerization and the Character of White Collar Worklife."
     The Information Society. 9(1) (Jan-Feb)
Kling, Rob and Lisa Covi. 1993. Review of Connections by Lee
     Sproull and Sara Kiesler. The Information Society, 9(1)
     (Jan-Feb, 1993).
Kling, Rob, Isaac Scherson, and Jonathan Allen. 1992. "Massively
     Parallel Computing and Information Capitalism" in A New Era
     of Computing. W. Daniel Hillis and James Bailey  (Ed.)
     Cambridge, Ma: The MIT Press.
Kling, Rob and Walt Scacchi. 1982. "The Web of Computing:
     Computing Technology as Social Organization," Advances in
     Computers. Vol. 21. New York: Academic Press.
Kraemer, Kenneth L., Siegfried Dickhoven, Susan Fallows-Tierney,
     and John L. King. 1985. Datawars: The Politics of Modeling
     in Federal Policymaking. New York: Columbia University Press.
Kyng, Morten and Joan Greenbaum (Eds). 1991. Design at Work:
     Cooperative Design of Computer Systems. Hillsdale, NJ:
     Lawrence Erlbaum.
Ladner, Sharyn and Hope Tillman. 1992. "How Special Librarians
     Really Use the Internet: Summary of Findings and
     Implications for  the Library of the Future" Canadian
     Library Journal, 49(3), 211-216.
Leveson, Nancy G. and Clark S. Turner. 1992. "An Investigation of
     the Therac-25 Accidents".  Technical Report #92-108.
     Department of Information and Computer Science, University
     of California, Irvine.
Lewis, Philip M. 1989. "Information Systems as an Engineering
     Discipline."  Communications of the ACM
     32(9)(Sept):1045-1047.
Lucas, Henry C. 1981. Implementation: The Key to Successful
     Information Systems. New York: Columbia University Press.
Lyytinen, Kalle and Seppo Puuronen (Ed.) 1992. Computing in the
     Past, Present and Future: Issues and approaches in honor of
     the 25th anniversary of the Department of Computer Science
     and Information Systems. Jyvaskyla, Finland: Dept. of CS and
     IS, University of Jyvaskyla.
Orlikowski, Wanda. 1992. "Learning from Notes: Organizational
     Issues in Groupware Implementation." Proc. Conference on
     Computer-Supported Cooperative Work, Jon Turner and Robert
     Kraut (Ed.) New York, ACM Press.
Sarmanto, Auvo.  1992. "Can Research and Education in the Field
     of Information Sciences Foresee the Future of Development?"
     in Lyytinen and Puuronen, 1992.
Sproull, Lee and Sara Kiesler. 1991. Connections: New Ways of
     Working in the Networked Organization. Cambridge, Mass.: MIT Press.
Suchman, Lucy. 1983. "Office Procedures as Practical Action: Models
     of Work and System Design."  ACM Transactions on Office
     Information Systems. 1(4)(October):320-328.
Winograd, Terry and Fernando Flores. 1986. Understanding
     Computers and Cognition. Norwood, NJ: Ablex Publishing.



                        ACKNOWLEDGEMENTS

This paper builds on ideas which I've developed over the last decade. But they
have been deepened by some recent events, such as the CTF report. They were
also sharpened through a lecture and follow-on discussion with colleagues at
the University of Toronto, including Ron Baecker, Andy Clement, Kelly
Gotlieb, and Marilyn Mantei. Rick Weingarten suggested that I write a brief
position paper reflecting those ideas. At key points, Peter Denning and Peter
Neumann provided helpful encouragement and sage advice. I also appreciate the
efforts of numerous other friends and colleagues to help strengthen this paper
through their comments and critical assistance. The paper is immeasurably
stronger because of the prompt questions and suggestions that I received in
response to an evolving manuscript from the following people: Mark Ackerman,
Jonathan P. Allen, Bob Anderson, Lisa Covi, Brad Cox, Gordon Davis, Phillip
Fites, Simson Garfinkel, Les Gasser, Sy Goodman, Beki Grinter, Jonathan
Grudin, Pertti Jarvinen, John King, Heinz Klein, Trond Knudsen, Kenneth
Kraemer, Sharyn Ladner, Nancy Leveson, Lars Matthiesen, Colin Potts, Paul
Resnick, Larry Rosenberg, Tim Standish, John Tillquist, Carson Woo and Bill
Wulf.

                       APPENDIX

            Published Materials about Computer Risks

Unfortunately, there is no single good book or comprehensive review article
about the diverse risks of computerized systems to people and organizations,
and ways to mitigate them. The Usenet newsgroup comp.risks is the richest
archive of diverse episodes and diverse discussions of their causes and cures.
While its moderator, Peter Neumann, does a superb job of organizing discussions
of specific topics each year and creating periodic indices, there is no
simple way to sift through the megabytes of accumulated comp.risks files.

Computerization and Controversy edited by Charles Dunlop and Rob Kling (1991)
includes two major sections on "security and reliability" and "privacy and
social control" which identify many key debates and reprint some key articles
and book excerpts which reflect different positions.  Another major source is
a series of articles, "Inside Risks," which Peter Neumann edits for
Communications of the ACM.

Here is a list of the articles in this series, to date:
(All articles are by Peter Neumann unless otherwise indicated.)

Jul 90.  1. Some Reflections on a Telephone Switching Problem
Aug 90.  2. Insecurity About Security?
Sep 90.  3. A Few Old Coincidences
Oct 90.  4. Ghosts, Mysteries, and Risks of Uncertainty
Nov 90.  5. Risks in computerized elections
Dec 90.  6. Computerized medical devices, Jon Jacky
Jan 91.  7. The Clock Grows at Midnight
Feb 91.  8. Certifying Programmers and Programs
Mar 91.  9. Putting on Your Best Interface
Apr 91. 10. Interpreting (Mis)information
May 91. 11. Expecting the Unexpected Mayday!
Jun 91. 12. The Risks With Risk Analysis, Robert N. Charette
Jul 91. 13. Computers, Ethics, and Values
Aug 91. 14. Mixed Signals About Social Responsibility, Ronni Rosenberg
Sep 91. 15. The Not-So-Accidental Holist
Oct 91. 16. A National Debate on Encryption Exportability, Clark Weissman
Nov 91. 17. The Human Element
Dec 91. 18. Collaborative Efforts
Jan 92. 19. What's in a Name?
Feb 92. 20. Political Activity and International Computer Networks, Sy Goodman
Mar 92. 21. Inside "Risks of 'Risks'"
Apr 92. 22. Privacy Protection, Marc Rotenberg
May 92. 23. System Survivability
Jun 92. 24. Leaps and Bounds (Leap-year and distributed system problems)
Jul 92. 25. Aggravation by Computer: Life, Death, and Taxes
Aug 92. 26. Fraud by Computer
Sep 92. 27. Accidental Financial Losses
Oct 92. 28. Where to Place Trust
Nov 92. 29. Voting-Machine Risks, Rebecca Mercuri
Dec 92. 30. Avoiding Weak Links
Jan 93. 31. Risks Considered Global(ly)
Feb 93. 32. Is Dependability Attainable?
Mar 93. 33. Risks of Technology
