The RISKS Digest
Volume 7 Issue 98

Thursday, 22nd December 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

The Fetzer Paper in CACM
Brian Randell
Computers in mathematical proof
Dale Worley
Teaching students about responsible use of computers
Jerome H. Saltzer
Responsible use of computers
PGN
Info on RISKS (comp.risks)

The Fetzer Paper in CACM

Brian Randell <B.Randell@newcastle.ac.uk>
Thu, 22 Dec 88 12:43:08 WET DST
The following message is intended to serve as a response to the comments by
Nancy Leveson et al. (in RISKS-7.84) on our original brief message in
RISKS-7.61 about Fetzer's paper.  It is extracted from a much more detailed
statement that will appear in the Communications of the ACM.

The furore that the article has aroused in the program verification community
has caused us to go back and re-read Fetzer's paper more carefully, to see
whether it really deserves the opprobrium which the program verification
community has been heaping on it.  Part of the problem seems to us to be the
(unfortunately justifiable) fear that the paper will be misinterpreted by
laymen, particularly those involved in research funding.  We have good reason
to sympathise with such reactions, given the misunderstandings that arose in
some quarters from the Knight and Leveson paper on design diversity.  Indeed,
both papers demonstrate the care that needs to be taken in assessing how
different readerships will react to one's writings.  However, we now believe
that there are also significant philosophical issues underlying the present
debate, which is not helped by the terms in which some of the reactions have
been couched.  [In our CACM paper, we go on and analyze one such issue, that
caused by the difference between explanatory and evidential reasoning.]

In essence, our view is that the dispute has arisen from a discrepancy between
the program verification community's view of their practice and objectives on
the one hand, and the outside world's view of those objectives on the other. 
It is in fact a public image problem, and public image is not to be corrected
simply by the writing of intemperate letters.  We suspect that a major
underlying reason for the undesired image is that there has been a good
measure of overselling of the concepts.  (This is not of course the first
occurrence of this practice in computer science, nor will it be the last.) A
recent and all too clear example of this overselling can be seen, in the case
of hardware verification, by contrasting exaggerated advertising claims for
the VIPER microprocessor (e.g.  "VIPER is the first commercially available
microprocessor with both a formal specification and a proof that the chip
conforms to it") with the careful and balanced discussion, in Avra Cohn's
report on her (very impressive) work on formal verification of the VIPER
`block model', of what was actually proven, and what was out of scope of the
proving techniques.  (The above advertisement is in fact quoted by Cohn, who
rightly goes on to state that "such assertions, taken as assurances of the
impossibility of design failure in safety-critical applications, could have
catastrophic results".)

If our belief that formal verification has been oversold is correct, then
articles such as `Program Verification - The Very Idea', misconceived though
we now accept that paper undoubtedly is, might be seen in the longer term as
providing a stimulus for a strategy of corrective measures necessary to
counter the false sense of security that is, we fear, all too commonly
promoted by vendors of verification technology and/or verified programs. 

Finally, to avoid any misunderstanding as to our own views, let us make it
clear that we in no way dispute that program verification can have a
significant role to play in the vital task of clarifying specifications and
removing design faults. That, however, is not the issue.

John Dobson and Brian Randell, 
Computing Laboratory, University of Newcastle upon Tyne  PHONE +44 91 222 7923


Computers in mathematical proof

Dale Worley <worley@compass.UUCP>
Thu, 22 Dec 88 10:43:19 EST
    From: Henry Cox  <cox@spock.ee.mcgill.ca>
    Subject: Supercomputer used to "solve" math problem

    [ Article about use of computer search to demonstrate the non-existence
    of a "projective plane of order 10" deleted. — drw]

    The RISKS are obvious. The willingness of people to accept a computer's 
    answer on faith (whether at the cash register at the grocery store or in
    the university environment) remains disturbing.          Henry Cox

This replays a dispute that arose when the Four-Color Problem was solved
via computer search: if no human mind has comprehended the proof, has it
truly been proved?  In the case of the Four-Color Problem, the result is generally
believed to be true, since other people (independently) wrote programs to check
the solution.
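
A purely illustrative sketch, not part of the original discussion: checking a
claimed result can be far simpler than the search that produced it.  A few
lines of C suffice to verify a proposed coloring of a small, hypothetical
graph, even though finding such a coloring (or showing that none exists) may
require an enormous search.

    /* Illustrative sketch only: a checker for a claimed 4-coloring of a
     * small, hypothetical graph.  The adjacency matrix and the coloring
     * below are made-up example data, not anything from the actual proof.
     */
    #include <stdio.h>

    #define N 5                      /* number of vertices in the toy graph */

    static const int adj[N][N] = {   /* adjacency matrix of the toy graph   */
        {0,1,1,0,0},
        {1,0,1,1,0},
        {1,1,0,1,1},
        {0,1,1,0,1},
        {0,0,1,1,0}
    };

    static const int color[N] = {0, 1, 2, 0, 1};  /* claimed coloring, 0..3 */

    int main(void)
    {
        int i, j, ok = 1;

        for (i = 0; i < N; i++) {
            if (color[i] < 0 || color[i] > 3)
                ok = 0;              /* uses more than four colors          */
            for (j = i + 1; j < N; j++)
                if (adj[i][j] && color[i] == color[j])
                    ok = 0;          /* adjacent vertices share a color     */
        }

        puts(ok ? "claimed 4-coloring checks out"
                : "claimed 4-coloring is invalid");
        return ok ? 0 : 1;
    }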

However, from a broader perspective, this ignores the fact that many
published mathematical proofs have significant errors in them, and not
infrequently turn out to be "proving" entirely false statements.  For instance,
there is a famous theorem whose proof was 300 pages long, in which 100 or so
errors have been found (and corrected).  Consider how many errors are probably
in the proof of the classification theorem for finite simple groups, which is a
compendium of papers by 100 or so mathematicians, totalling about 2000 pages.

In cases not involving computers, no one would say "The willingness of people
to accept other people's answers on faith remains disturbing" — after all, we
can't re-prove *everything* we read.  When computers are involved, often the
disturbing change is not that we take *more* on faith, but that it becomes *so
obvious* that we take things on faith.

Similarly, when a human controller screws up and causes a train collision, we
consider this an unfortunate accident.  When a computer control system does the
same, we wonder why people would rely on computers...

Personally, I'd rather depend on the cash register to do the addition than the
minimum-wage clerk operating it.
                                                  Dale


Teaching students about responsible use of computers

Jerome H. Saltzer <Saltzer@ATHENA.MIT.EDU>
Thu, 22 Dec 88 15:51:46 EST
There has been some discussion in RISKS recently about what action universities
might appropriately take to instill a sense of ethics in the use of computers
by their students.  M.I.T.'s Project Athena provides some 800 networked
engineering workstations for undergraduates to use in any way they find helpful
to their education.  Accordingly, Project Athena has assumed that one of its
responsibilities is to open a discussion of ethical use with its user
community.  The primary action that Project Athena has taken is the publication
of a set of principles, a copy of the current version of which is attached.

These principles are for the most part general, following M.I.T.'s usual
approach of appealing to basic concepts rather than spelling out many detailed
rules.  There is no claim that publicizing these principles completely solves
any problem or completely answers any question, but it does represent one
organization's attempt to take a step in the right direction.  Some version of
these principles has been posted for about four years, and whenever we have an
incident serious enough that we ask a student to talk to the Director, these
principles have provided a very useful starting point for the conversation.

                                         Jerry Saltzer

     ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- ----- 

Principles Of Responsible Use Of Project Athena

  Project Athena is M.I.T.'s computing facility for education.  It consists of
a networked system of workstations and services, and includes communication
features that offer many opportunities for members of the M.I.T. community to
share information.  With that ability to share comes the responsibility to use
the system in accordance with M.I.T.'s standards of honesty and personal
conduct.  Those standards, outlined in the M.I.T. Bulletin under Academic
Procedures, call for all members of the community to act in a responsible,
ethical, and professional way.  This note offers guidelines in applying those
standards to use of Project Athena facilities.

INTENDED USE

  The hardware granted to Project Athena, and the software licensed for that
hardware, are intended for educational use, broadly construed, by members of
the M.I.T. community.  Use of Athena resources by anyone outside M.I.T.
requires approval of the Provost, and the sale of such use is improper.  The
use of Athena resources for immediate financial gain is similarly improper.
Use of Project Athena's facilities for sponsored research activities that
normally would make use of other M.I.T. facilities requires specific
authorization of the Director.

PRIVACY AND SECURITY

  The operating systems used by Project Athena encourage sharing of
information.  Security mechanisms for protecting information from unintended
access, from within the system or from the outside, are minimal.  These
mechanisms, by themselves, are not sufficient for a large community in which
protection of individual privacy is as important as sharing.  Users must
supplement the system's security mechanisms by using the system in a manner
that preserves the privacy of others.

  For example, users should not attempt to gain access to the files or
directories of another user without clear authorization from the other user
(typically that authorization is expressed by setting file access permissions
to allow public or group reading).  Nor should users attempt to intercept any
network communications, such as electronic mail or user-to-user dialog.  A
shared program should not secretly collect information about its users.
Personal information about individuals, which a user would not normally
disseminate, should not be stored or communicated on the system.  Examples of
such personal information are grades or letters of recommendation.

SYSTEM INTEGRITY

  Actions taken by users intentionally to interfere with or to alter the
integrity of the system are out of bounds.  Such actions include unauthorized
use of accounts, impersonation of other individuals in communications, attempts
to capture or crack passwords or encryption, and destruction or alteration of
data or programs belonging to other users.  Equally unacceptable are
intentional efforts to restrict or deny access by legitimate users to the
system.

INTELLECTUAL PROPERTY RIGHTS

  Some software and data that reside on the system are owned by users or third
parties, and are protected by copyright and other laws, together with licenses
and other contractual agreements.  Users must abide by these restrictions.
Such restrictions may include prohibitions against copying programs or data for
use on non-Athena systems or for distribution outside M.I.T., against the
resale of data or programs or the use of them for noneducational purposes or
for financial gain, and against public disclosure of information about programs
(e.g., source code) without the owner's authorization.  It is the
responsibility of the owner of protected software or data to make any such
restrictions known to the user.


Responsible use of computers

Peter Neumann <neumann@csl.sri.com>
Thu, 22 Dec 1988 16:01:42 PST
There are still many would-be contributions in the RISKS pipeline on the
general topic of computer misuse.  Some reiterate points already made several
times before.  Others elaborate on relatively minor points.  Thus, I have been
a more rigorous enforcer of the RISKS guidelines than usual on this topic --
hopefully without unbalancing the representativity of the dialogue.  I am
grateful to Jerry for providing a general, principled view of part of the
problem.

The bottom line is that responsibility is of necessity widely distributed.  We
certainly need better ethics and better laws, and better teaching of their
implications.  We also need a society that respects them.  But our society
needs much more than that; those of us who look at the world only from the
vantage point of our computers are missing much.  (I say this not just in the
spirit of the season.)

Unfortunately, reality suggests that even the best ethics and laws will be
violated by people responding to misplaced incentives (financial or otherwise).
(The drug situation and insider trading are two examples that are difficult to
combat with ethics and laws alone.  They are deep social problems.  And hacking
can certainly become a social disease!) Thus, we also need computer systems
that can enforce security and integrity much more thoroughly, and we need those
systems to be administered and used intelligently.  (That already may be
expecting too much.)  But by now it is generally realized that we cannot depend
only on computer security controls — there are just too many ways to break
them in the systems that we must use.

So, to end the soap-box for this year, security, integrity, software safety
and system safety (e.g., in the sense of Nancy Leveson, Avra Cohn, and
others) are system-wide (and network-wide) concepts.  There are many
life-critical or other critical applications in which we must rely on both
people and technology (especially computers) to perform properly (i.e.,
within tolerances); as we have seen in many cases discussed in RISKS,
deviations may result in catastrophes that are unacceptable to the affected
individuals, if not to society as a whole.  This applies to disastrous
design flaws, programming errors, acts of God, and many other problems in
addition to intentional system misuse.  One rotten apple (human or
technological) can spoil the barrel.  The message for RISKS readers is once
again that we must never blindly trust that computers and people will always
do the right thing, and that we must plan accordingly.  The problems require
a comprehensive approach.  If the real risks (as opposed to the perceived
risks) are too great, then we had better rethink the use of computers in
those applications.  But let's honestly address the REAL RISKS.

Happy holidays!  Peter
