The RISKS Digest
Volume 1 Issue 21

Thursday, 10th October 1985

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Public Accountability
Jim Horning
Peter Neumann
The Titanic Effect
JAN Lee
Databases, Grades, etc.
Brian Borchers
Andy Mondore
Mark Day [twice]
Alan Wexelblat
Ross McKenrick
Randy Parker

Public Accountability

Jim Horning <horning@decwrl.ARPA>
10 Oct 1985 1244-PDT (Thursday)
It has now been more than a year since the ACM Council passed its
resolution on computer systems and risks to the public. Quoting from
RISKS-1.1:

  The second part of the resolution includes a list of technical questions
  that should be answered about each computer system.  This part states that:

    While it is not possible to eliminate computer-based systems failure
    entirely, we believe that it is possible to reduce risks to the public
    to reasonable levels.  To do so, system developers must better
    recognize and address the issues of reliability.  The public has the
    right to require that systems are installed only after proper steps
    have been taken to assure reasonable levels of reliability.

    The issues and questions concerning reliability that must be addressed
    include:

    1. What risks and questions concerning reliability are involved when
       the computer system fails?

    2. What is the reasonable and practical level of reliability to
       require of the system, and does the system meet this level?
  
    3. What techniques were used to estimate and verify the level of
       reliability?

    4. Were the estimators and verifiers independent of each other
       and of those with vested interests in the system?

In the intervening year, I am not aware that the developers of ANY
computer system have made public their answers to these four questions.
Readers of this forum will surely not leap to the conclusion that no
computer system presents risks to the public worthy of discussion.

I would like to start a discussion on how we can change this.
What can we do as professionals to make it the norm for system
developers to present risk assessments to the public?

  * What can we do to make it more attractive to present a risk assessment?

    - Explicitly invite developers of particular systems to publish draft
      assessments here in the RISKS Forum, with the promise of constructive
      feedback.

    - Inaugurate a section in CACM for the publication of refined risk
      assessments of systems of great public interest or importance.

    - Publish the risk assessments without refereeing. This gives the
      developers "first shot," gets the material out more quickly, and lowers
      the barrier to publication, without limiting the opportunity for public
      discussion and debate.

    - Encourage developers to also address John McCarthy's question:
    
        5. What are the risks inherent in the best available alternative
           to the system in question?

    - Encourage the exploration of legal steps that would make
      Risk Assessments as routine as Environmental Impact Statements.
      (This could be useful even if most of the former are as pro forma and
      unenlightening as most of the latter; they provide a starting point.)

  * What can we do to make it less attractive not to present a risk assessment?

    - First, make it very clear that we, as a profession, believe that it
      is incumbent on system developers to present their risk assessments,
      and that delay or refusal amounts to malpractice of a very high order.

    - Periodically publish (and publicize) the status of all outstanding
      invitations to present risk assessments.

    - Bring cases of persistent noncompliance to the attention of ACM
      Council for appropriate action.

I present these suggestions as a springboard for discussion, not as a
definitive program for action. I welcome suggestions for improvement.

Jim H.


Public Accountability

Peter G. Neumann <Neumann@SRI-CSL.ARPA>
Thu 10 Oct 85 13:30:24-PDT
To: RISKS@SRI-CSL.ARPA
cc: Horning@DECWRL.DEC.COM

Perhaps we can learn something from some steps that have been taken by the
National Computer Security Center (although for security and not
reliability).  The NCSC (formerly the DoD CSC) has established a set of
criteria for trusted computer systems, in the form of a range of
increasingly stringent requirements.  They have evaluated various systems
against those criteria.  They have also explored what kinds of applications
should have which requirements.  To date, two systems have been accorded
high rankings:  (1) the highest existing rating [A1] to the Honeywell SCOMP
-- whose kernel design and trusted subsystems have undergone formal design
proofs demonstrating that the kernel specifications satisfy a security
condition of no adverse flow, and that the trusted subsystems satisfy other
relevant properties involving privilege, integrity, and functional
correctness; (2) a somewhat lower rating [B2] to Multics, which has not been
subjected to any formal analysis, but which satisfies certain of the
hardware and software-engineering criteria and which has withstood extensive
penetration attacks.  Other systems have also been evaluated, but given
lesser ratings — implying greater potential security risks.  (The NCSC also
publishes an Evaluated Products List.)  The literature on this subject is
extensive, but I note a few items here on the Criteria and on the SCOMP
proofs.  Other than that, you can look at the IEEE Proceedings for the
Security and Privacy conferences each April and the National Computer
Security Conferences held at NBS roughly once a year (previously called the
DoD/NBS Computer Security Conference).

I of course need to add that the work on secure and trusted computing
systems is only a very small part of that addressing the potential "risks to
the public", which of course also involve reliability in various forms,
human safety, timely responsiveness, etc.  But my point is that some steps
in the right direction are actually being taken in the security community --
although those are still only small steps with respect to the overall
problem.  Nevertheless, something can be learned from the security work
in addressing the ACM goals that Jim has reminded us of once again.

So, let us try to address some of Jim's points specifically in the future.


REFERENCES:

  CRITERIA: Department of Defense Trusted Computer System Evaluation
  Criteria, CSC-STD-001-83.  Write to Sheila Brand, National Computer
  Security Center, Ft Geo G Meade MD 20755.  (SBrand@MIT-Multics)

  SCOMP KERNEL DESIGN PROOFS:  J.M. Silverman, Reflections on the Verification
  of the Security of an Operating System, Proceedings of the 9th ACM Symposium
  on Operating Systems Principles, October 1983, pp. 143-154.

  SCOMP TRUSTED SUBSYSTEM DESIGN PROOFS: T.C.V. Benzel and D.A. Tavilla,
  Trusted Software Verification: A Case Study, Proceedings of the 1985
  Symposium on Security and Privacy, IEEE Computer Society, Oakland CA, April
  1985, pp. 14-31.

(Note: In a proof of design consistency, the proof must show that a formal
specification satisfies a set of requirements, e.g., for security or fault
tolerance.  The difference between requirements and specifications in that
case is generally that the former tend to be simply-stated global
properties, while the latter tend to be detailed sets of constraints defined
[e.g.,] functionally on state transitions or algebraically on inputs and
outputs.)
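
A minimal sketch, in Python and invented entirely for this note, of how a
"no adverse flow" requirement can be stated as a check over individual state
transitions -- it is not the SCOMP security model or its proof machinery; a
design proof would show that every transition the specification permits
passes such a check:

    # Toy Bell-LaPadula-flavored check -- illustration only.
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def no_adverse_flow(transitions):
        """True if no transition moves information toward a lower level."""
        for t in transitions:
            subj = LEVELS[t["subject_clearance"]]
            obj = LEVELS[t["object_classification"]]
            if t["operation"] == "read" and subj < obj:
                return False   # subject would observe data above its clearance
            if t["operation"] == "write" and subj > obj:
                return False   # subject could leak data down to a lower level
        return True

    # The proof obligation: every transition the specification permits passes.
    trace = [
        {"operation": "read", "subject_clearance": "secret",
         "object_classification": "confidential"},
        {"operation": "write", "subject_clearance": "confidential",
         "object_classification": "secret"},
    ]
    print(no_adverse_flow(trace))   # True for this toy trace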

The Titanic Effect

Jan Lee <janlee%vpi.csnet@CSNET-RELAY.ARPA>
Tue, 8 Oct 85 16:26 EST

THE TITANIC EFFECT  (Source unknown):

    The severity with which a system fails is directly proportional
    to the intensity of the designer's belief that it cannot.

COROLLARY:

    The quantity and quality of built-in redundancy is directly
    proportional to the degree of concern about failure.


JAN

Databases, Grades, etc.

Brian Borchers <Brian_Borchers%RPI-MTS.Mailnet@MIT-MULTICS.ARPA>
Wed, 9 Oct 85 09:43:20 EDT

Here at RPI, the Registrar's database and the Bursar's systems are on the
same machine that we use for academic work.  We've put a lot of effort into
making MTS secure...

   [By the way, some of this issue's discussions on the low risks of putting
    grades on student computers reflect overall a benevolent naivete about
    the system security risks.  I have not tried to comment on each message
    individually, but find them intriguing.  One could turn the students
    loose to see how many flaws they can find, perhaps as a part of a
    course: give them all F's in the database, and let them try to earn 
    their A's by breaking in!  On the other hand, you might find that the
    last one who breaks in changes some of the A's to F's!  It is dangerous to
    announce in public that you as an administrator believe you have made 
    unauthorized alterations difficult or impossible; knowing how badly
    flawed most of the systems are, that is a very large red flag to wave.
    But then you apparently need to be shown that you are wrong.  Audit trails 
    are great too, but watch out for the penetrated superuser interface.
    (This comment only touches the tip of the iceberg.  Judging from this
    issue's contributions, I imagine the discussion might run for a while.
    Let's get down to specific cases if we can, but not just students' 
    grades — which are only an illustrative problem.)  PGN]  


Databases, Grades, etc.

<Andy_Mondore%RPI-MTS.Mailnet@MIT-MULTICS.ARPA>
Wed, 9 Oct 85 11:36:54 EDT
Hal Murray in RISKS-1.20 asks whether any schools are "brave/stupid enough"
to have students and grades on the same computer.  Well, that is the
situation here.  Administrators and students use the same mainframe.
Administrative users here generally use several extra layers of security, such as
extra passwords, allowing sign-ons to certain accounts only from specific
terminals, and logging all successful and unsuccessful sign-ons to our
accounts.  So far, we in the Registrar's Office have not detected any
unauthorized sign-ons (and we have never noticed any strange changes to
files) although we occasionally detect an unsuccessful sign-on attempt from
an unauthorized location.  Generally, changing passwords frequently and
using non-words and special characters for passwords seems to take care of
the unauthorized access problem.
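
A purely hypothetical sketch of that kind of layering, in Python, with all
account names, terminal names, and passwords invented for illustration (this
is not a description of RPI's actual MTS configuration):

    import datetime

    # Per-account list of permitted terminals, plus an extra password layer.
    PERMITTED_TERMINALS = {"REGISTRAR1": {"TTY04", "TTY07"}}
    SECOND_PASSWORDS = {"REGISTRAR1": "example-only"}

    def attempt_signon(account, terminal, second_password, log):
        """Accept the sign-on only from an authorized terminal with the extra
        password, and log every attempt, successful or not."""
        allowed = (terminal in PERMITTED_TERMINALS.get(account, set())
                   and second_password == SECOND_PASSWORDS.get(account))
        log.append({"time": datetime.datetime.now().isoformat(),
                    "account": account,
                    "terminal": terminal,
                    "result": "accepted" if allowed else "rejected"})
        return allowed

    signon_log = []
    attempt_signon("REGISTRAR1", "TTY99", "guess", signon_log)   # rejected, logged
    attempt_signon("REGISTRAR1", "TTY04", "example-only", signon_log)   # accepted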

One thing that Hal did not mention is security problems when students and
grades are on separate computers but on the same network, perhaps to share a
special device or certain files. It seems to me that there could be almost
as many potential security problems with this configuration as with ours.
Does anyone have experience with this type of configuration, and if so,
what problems have you had?


Databases, Grades, etc.

Mark S. Day <MDAY@MIT-XX.ARPA>
Wed 9 Oct 85 09:50:12-EDT
To: risks@SRI-CSL.ARPA

I tend to think that stories about crackers changing their grades should be
taken with a fairly large dose of salt.

I once had the occasion to interview a vice-chancellor at my undergraduate
university, and he explained the system there in some detail.  Basically,
they handled grades the way a bank handles money — that is to say, there
were 'audit trails' and paper copies of everything.

Discrepancies between what was in the computer and what was in the paper
records would eventually get caught, even if there was a period of time when
the wrong grades would show up on an official transcript.  Correctly faking
ALL the records so as to escape an audit would have required a great deal of
knowledge (basically, it would have to be an inside job — and they didn't
have students working in these offices).

--Mark


Databases, Grades, etc.

Peter G. Neumann <Neumann@SRI-CSL.ARPA>
Wed 9 Oct 85 11:39:54-PDT
To: MDAY@MIT-XX.ARPA
cc: Neumann@SRI-CSL.ARPA

Mark, I find myself disbelieving some of what you say.  Manual comparisons
tend to get done (if at all) when the data is entered.  Auditing tends to
get slighted, since people often tend to assume the computer is right! (The
Santa Clara inmate who changed his release date did get caught, but perhaps
only because a guard overheard him bragging about how he was going to get
out early.)  Most vice-chancellors (and many administrators) would probably
tell you they are doing things right, and better than everyone else.  That
is the head-in-the-sand view of computing — everything is just fine.  But
audit trails and paper copies of everything are not sufficient, even if they
are diligently used.  "Discrepancies ... would eventually get caught" is
certainly hopeful, at best.  Peter


Databases, Grades, etc.

Mark S. Day <MDAY@MIT-XX.ARPA>
Wed 9 Oct 85 14:51:40-EDT
To: Neumann@SRI-CSL.ARPA

Peter,

I still maintain that the scenario of "student changes grades to A's and
lives happily ever after" seems dubious.  It seems to be one of these
apocryphal stories where everyone knows about someone who's done it, but
there's little evidence floating around about it (sort of like stories about
people putting cats in microwave ovens to dry them).

Your other comments are well-taken.  It certainly requires administrators,
etc., to have a perception of the problem; that was one reason I was
impressed by my interview with the vice-chancellor.  Several years ago,
crackers were not yet in the public eye, so I was pleasantly surprised to
find out that anyone cared about the overall integrity of the grades
database and talked about the need to regularly audit it.

--Mark


Databases, Grades, etc.

Alan Wexelblat <wex@mcc.ARPA>
Wed, 9 Oct 85 12:40:43 cdt
The university I attended had its database on a computer that students used
for coursework.  However, it was (is) a very well-kept secret.  It is hard
(under this system's OS) to find out who *might* log on.  Also,
it seems that hacker-types don't get the work-study jobs in the registrar's
office.  I'm not sure how long they can keep it up, but it's worked well for
at least 5 years.  (I only found out because I got to be good friends with
the facilities people, and one of them happened to mention it.)

I guess this just reinforces the belief that human beings can make a system
risky or less risky depending on how they use it.  --Alan Wexelblat  WEX@MCC.ARPA

Databases, Grades, etc.

Ross McKenrick <CRMCK%BROWNVM.BITNET@WISCVM.ARPA>
Wed, 09 Oct 85 16:12:15 EDT

Students and grades can exist on the same machine.  Attempting to partition
the students' and the administration's computing environments is a crude form of
security which eliminates the possibility of providing students online
services such as class lists and registration changes.

We take a two-fold approach to database security at Brown University:
1) prevention and 2) detection.  We minimize our window of vulnerability
by making it nearly impossible to gain unauthorized access to the
programs which update grades, etc., and by making it nearly impossible
to change a grade (even when authorized) without generating audit
records, security log entries, and notification slips for the professor.
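
As an illustrative sketch only (not Brown's actual system), the
prevention-plus-detection idea means that even an authorized grade change
mechanically produces an audit record, a security log entry, and a
notification slip; all names below are invented:

    import datetime

    audit_trail, security_log, notification_slips = [], [], []

    def change_grade(student, course, old_grade, new_grade, operator, professor):
        """Record the change in three places before applying it."""
        stamp = datetime.datetime.now().isoformat()
        audit_trail.append((stamp, operator, student, course, old_grade, new_grade))
        security_log.append((stamp, operator, "GRADE-UPDATE", student, course))
        notification_slips.append(
            "To %s: grade for %s in %s changed %s -> %s by %s on %s."
            % (professor, student, course, old_grade, new_grade, operator, stamp))
        # ... the database update itself would go here ...

    change_grade("J. Doe", "CS 101", "B", "A", "registrar-clerk", "Prof. Smith")
    print(notification_slips[0])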

We are not dealing with an EFT-type environment where "smash-and-grab" might
work.  A student is at Brown for four years, and his/her transcript is
maintained by the Registrar forever.  A student could theoretically change
his/her grade for a day or two *in the computer database* (which does not
mean that the grade, which is intangible, was actually changed).  However, a
system of checks and balances, built into and outside of the computer
system, would eventually result in discovery, correction, and punishment.

Suppose I could design a security system which was 90% (OK, 80%) secure.
Then, rather than spend more time making it more secure (point of
diminishing returns), I could spend my time on a recording/reporting system
which was 80% secure.  Now, I finish by dovetailing the two systems to make
it very unlikely that a hacker could survive both levels.  Meanwhile, the
Registrar, who is rightfully suspicious of computer security anyway, devises
manual checks and balances which add another level of security beyond the
computer entirely.
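
Taking the percentages above at face value, and assuming (generously) that
the two layers fail independently, the arithmetic behind the dovetailing
looks like this -- the numbers are the ones quoted above, not measurements:

    # Chance that misuse survives both an 80%-effective prevention layer and
    # an 80%-effective recording/reporting layer, if failures are independent.
    p_prevention_fails = 0.20
    p_detection_fails = 0.20
    p_both_fail = p_prevention_fails * p_detection_fails
    print("Survives both layers:   %.0f%%" % (100 * p_both_fail))          # 4%
    print("Combined effectiveness: %.0f%%" % (100 * (1 - p_both_fail)))    # 96%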

Wouldn't it simply be easier for the hacker to forge a grade change slip
from his/her professor?  Now, considering all of this, you've got certain
expulsion and prosecution under Rhode Island State Law hanging over your head.

Why would someone who's got all of this figured out have trouble passing
courses, anyway?

A colleague of mine likens a malicious hacker in a fairly-well-secured
computer environment to a bull in a china shop.  The china is replaceable,
the bull is dead meat.

Now comes the auto-theft-prevention-device philosophy: why pick on
a fairly-well-secured environment when there are so many unsecured
environments to fool with?

Security in computer record-keeping is a very serious subject.  But
you must keep it in perspective against the alternatives: manual
record-keeping, locked doors and desks, etc.


Databases, Grades, etc.

Randy Parker <PARKER@MIT-REAGAN.ARPA>
Thu, 10 Oct 85 09:34 EDT
To: RISKS@SRI-CSL.ARPA

Regarding the subject of the risks of computer technology (in general)
and large databases (FBI's NCIC, TRW, NSA, tenant/landlord database in
California, and the Census), there is a reporter from the New York Times
who has done quite a bit of research into it.  His name is David
Burnham, and his results a few years back are published in his book "The
Rise of the Computer State" (paper/hardback).

The book is a call to arms to the millions who don't realize the serious
threat to our freedom that this poses.  Its shortcoming, as such, is
that it is mostly a quite comprehensive series of anecdotal reports on
various errors and abuses (incorrect warrants, Nixon abuse of phone
company records, abuse by politicians, etc.) of large information
stores.  However, if you need convincing that this is a problem, you
will find it in this book.

Burnham spoke here this past Tuesday and updated his list of "horror
stories", as he puts it.  One number he gave, if my notes are right, is
that an FBI audit of their NCIC showed that ~10% of the warrants proposed
by this system are somehow based on incomplete or invalid information.
He also mentioned proposals to increase the NCIC to include a White
Collar Crime Index that would specifically contain the names of SUPPOSED
white collar criminals and their ASSOCIATES.  As Burnham pointed out, if
they can't maintain the actual criminal information properly, what are
the consequences when they start accumulating hearsay and gossip?

A serious problem is that no one has a real interest in keeping this
information accurate, except for the falsely accused citizen who, as
Burnham pointed out, has no way to correct it.  I also
agree quite heartily with Matt Bishop that "the greatest risk is not
from the technological end but the human end."  The technology itself is
not to blame, but we need, especially as computer professionals whose
opinions in the matter would be seriously regarded, to recognize the
growing threat to our liberties and to act accordingly.

Randy Parker
MIT AI Lab
PARKER@MIT-REAGAN
