The RISKS Digest
Volume 3 Issue 27

Tuesday, 29th July 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Whoops! Lost an Area Code!
Clayton Cramer
Comet-Electra (RISKS-3.25)
Stephen Little
Comparing computer security with human security
Bob Estell
Info on RISKS (comp.risks)

Whoops! Lost an Area Code!

Clayton Cramer <voder!kontron!cramer@ucbvax.Berkeley.EDU>
Mon, 28 Jul 86 11:29:10 pdt
I had an interesting and aggravating experience this last Saturday.  The 707
area code billing system failed.  Completely.  For over five hours.

During that time, you could not dial into the 707 area code, dial out
of it, make local calls billed to a credit card, or get an operator.  
The ENTIRE area code.  Fortunately, the 911 emergency number doesn't 
go through the billing system, so I doubt any lives were lost or
threatened by this failure, but I shudder to think of how this could
happen.  My guess is someone cut over to a new release of software
and it just failed.

No great philosophical comments, but one of those discouraging examples
of the fragility of highly centralized systems.

Clayton E. Cramer


Comet-Electra (RISKS-3.25)

S Little <munnari!gucis.oz!edsel@seismo.CSS.GOV>
Tue, 29 Jul 86 15:14:30 est
Initial design studies for a trans-Atlantic turbojet-powered mail plane
were begun during World War II by de Havilland.  These eventually grew into
a much larger airliner, the DH-106 Comet, whose prototype flew in 1949, so
computer involvement in the design is not an issue.  The test program may
have been adequate for forties technologies, but the jet-age mileages and
altitudes obviously revealed a new range of problems, which have resulted in
the more stringent certification procedures now applied.

Whatever the source of the disastrous crack propagation (said in one case to
be possibly a radio antenna fixing), the design change to rounded windows
was in response to this danger.  The remaining square-window Comets stayed in
RAF service without pressurization for many years (Air International vol. 12
no. 4, 1977).

Given that computer representation is limited by our understanding of a
design situation, is there a general concern with the performance of, inter
alia, flight simulators, which may accurately represent an inadequate
understanding of the behaviour of the system modelled?  I have been told of
one major accident in which the pilot followed the drill for a specific
failure, as practiced on the simulator, only to crash because a critical
common-mode feature of the system was neither understood nor incorporated in
the simulation.  I highly recommend Charles Perrow's "Normal Accidents" for
an analysis of the components of complexity in such situations.

I understand that the Shuttle autopilot is the subject of re-appraisal,
including expert-system derivation of responses to the large number
of relevant variables.  What are people's feelings about the induction
of knowledge in such areas?  Is it felt to increase or decrease risk
via computer?

Stephen Little, Computing & Information Studies,
                Griffith Uni, Qld, Australia.


Comparing computer security with human security

"143B::ESTELL" <estell%143b.decnet@nwc-143b.ARPA>
29 Jul 86 08:29:00 PST
The question has been raised: are there significant differences in the
quality of security between computer systems based on elaborate software
models [passwords, access lists, et al.] and systems with human guards at
the door?  E.g., humans can be bribed and computers can't; but computers
can fail.

Hmmmmm... First let me admit a bias: I think the "MIT rule" applies: 
 No system can be better than the PEOPLE who design, build, and operate it.
[I call it that because that's where I first heard it in '68.]

Aside from that bias, there seem to be some assumptions:
(1) People don't "fail" [at least not like computers do]; and
(2) Computers can't be "diverted" in the manner of a bribe.

Seems to me that people DO FAIL, somewhat like computers; i.e., we have
memory lapses [similar perhaps to incorrect data fetches?]; and we make
perception errors [similar perhaps to routing replies to the wrong CRT?].

And computers can be diverted.  Examples:

(1) A malicious agent, only wanting to deny others service on a computer,
    rather than gain access himself, can often find ways to exploit the
    priority structure of the system; e.g., some timesharing systems give 
    high priority to "login" sequences; attacking these with a "faulty
    modem" can drain CPU resources phenomenally.

(2) There are some operating systems/security packages that fail in a
    combination of circumstances; I'm going to be deliberately vague here, in
    part because the details were shared with me with the understanding
    that I not broadcast them, and in part because I've forgotten them,
    and in part because the exact info is not key to the discussion;
    but to continue:

    If the terminal input buffer is overrun [e.g., if the user-id or
    password is VERY long], and if the "next" dozen [or so] bytes
    match a "key string", then the intruder is allowed on; not only
    that, but at a privileged level.

    In other words, the code gets confused.  But isn't that what a person
    suffers when he trades his freedom, his honor, and all his future
    earnings [hundreds of thousands of dollars?] for a few "easy" tens of
    thousands of dollars now for one false act?  I'm saying that most "bribes"
    aren't nearly large enough to let the "criminal" relocate somewhere
    safe from extradition, and live a life of luxury ever after; instead,
    most bribes are only big enough to "buy a new car" or pay an overdue
    mortgage or medical bill.
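The overrun flaw in example (2) can be sketched in a few lines of modern
Python.  Everything here is invented for illustration -- the buffer size,
the "key string", and the names -- since the real system's details were
deliberately withheld above; the point is only the shape of the failure:

```python
BUF_SIZE = 64                    # fixed input buffer size (invented)
KEY = b"PRIV-OVERRIDE"           # imagined "key string" the flawed check matches

def login(password: bytes) -> str:
    # Fresh "memory": the input buffer with the checked region right after it.
    memory = bytearray(BUF_SIZE + len(KEY))
    # Unchecked copy -- the overrun: bytes past BUF_SIZE spill into
    # the adjacent region (mimicking the missing length check).
    memory[:len(password)] = password
    # The flawed check: if the spilled bytes happen to match the key
    # string, the caller is let on -- and at a privileged level.
    if memory[BUF_SIZE:BUF_SIZE + len(KEY)] == KEY:
        return "privileged"
    # Ordinary (toy) password check for comparison.
    if password == b"secret":
        return "normal"
    return "denied"

# A VERY long "password" whose tail lands exactly on the key string:
print(login(b"A" * BUF_SIZE + KEY))   # -> privileged
print(login(b"secret"))               # -> normal
```

A correct implementation would reject any input longer than BUF_SIZE before
copying; the sketch omits that check precisely because its absence is the bug.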

------

OR is the real risk in both cases [human and computer] that the most potent
penetrations are those that never come to light; e.g., the computer "bug"
that is so subtle that it leaves no traces; and the "human bribe" that is
so tempting that authorities [and victims] don't talk about it - precisely
because they don't want folks to know how much it can be worth?

Discussion and comments, please.             Bob
