The RISKS Digest
Volume 2 Issue 50

Thursday, 8th May 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Refocus the discussion, please!
Bob Estell
[Response.] Also, Delta rocket shutdown
Peter G. Neumann
Large systems failures & Computer assisted writing
Ady Wiernik
DESisting
dm
William Brown II
Failure to Backup Data
Greg Brewster
Info on RISKS (comp.risks)

Refocus the discussion, please!

<estell@nwc-143b>
8 May 86 12:44:00 PST
I want to discourage RISKS contributors from discussing at length how Capt.
Midnight jammed the HBO signal - UNLESS there is reason to suspect that
(mis)use of computers was a contributing factor.  Similarly, I want to
discourage the continued discussion of the Challenger disaster, unless there
is reason to suspect that computer error - or human error of omission
because of reliance on computers - contributed materially to the failure.

Up to a point, these discussions are relevant; they demonstrate that we
cannot trust our lives naively to fully automated systems.  SDI, BART, FAA,
NYSE, etc. must be aware of that.  As computer professionals, we have the
duty of admitting our own humanity, and the frailty of our creations.
Otherwise, the sophisticated technology can fool the public too easily.

Instead, I would encourage RISKS contributors to pursue topics like data
encryption, which appeared recently [RISKS 2.49]; and to wrestle with the
question raised by Dave Weiss in that same issue, viz. CAN Star Wars ever
be made to work?  Kept in technical focus, this question could lead to
research and application of genuine benefit.

It is very easy for us, the readers and contributors, to rely on the
moderator to filter our contributions.  But I think it unfair to put him in
the position of sorting lots of interesting items of questionable relevance.
To the extent that these topics (including the ones that interest me) should
be pursued, perhaps that should occur in another electronic forum.  Comment?

Bob


Refocus the discussion, please? Also, Delta rocket shutdown.

Peter G. Neumann <Neumann@SRI-CSL.ARPA>
Thu 8 May 86 20:02:32-PDT
Bob, Thanks.  Contributor self-discipline is greatly appreciated.  However,
when in doubt about a contribution, I have a bias toward the holistic view
-- we are using computers to control physical environments, and relying on
ordinary mortals to do it.  RISKS exists because of the computers and
communications.  But we must not forget the global nature of the problems.

Captain Midnight reminds us again of a type of communication vulnerability
that is vastly more widespread than many of our readers suspect.  The
Challenger disaster (28 Jan) is only the tip of an iceberg, although RISKS
has not had much on it lately — or on the Titan 34D (18 Apr) or the Delta
rocket (3 May).  (We hope that the Atlas-Centaur fares better on 22 May, in
which case it might get dubbed the At-Last-Centaur!  Fortunately, it is
NASA's most reliable, with 43 successful launches dating back to September
1977.)

The type of issue that I raised after the Challenger disaster regarding the
possibility of accidental or malicious triggering of self-destruct
mechanisms in general recurs in a slightly different form in the Delta
failure, in which the rocket's main engine mysteriously shut itself down 71
seconds into the flight — with no evidence of why! (Left without guidance
at 1400 mph, it had to be destroyed.)  The flight appeared normal up to that
time, including the jettisoning of the first set of solid rockets just after
one minute out.  Bill Russell, the Delta manager, was quoted thus: "It's a
very sharp shutdown, almost as though it were a commanded shutdown."  Could
this have been an accidentally generated internal shutdown signal (software
bug or comm interference)?  (There was no evidence of a transmitted
shutdown, so it was very unlikely that it was maliciously generated.)
Before you answer, recall the local CB interference problem on automobile
microprocessors, the microwave side-effects on pacemakers and other devices,
RF interference on computer buses (an older problem), the alleged Sheffield
communication interference problem, etc...

Peter


Large systems failures & Computer assisted writing.

Ady Wiernik <wiernik@nyu-acf8.arpa>
Thu, 8 May 86 16:36:07 edt
I hope that I'm not contributing too much to the (growing) link between the
RISKS forum and net.sf-lovers; however, please let me add my two cents'
worth of comments:

1. In his article, Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
   asked:

> From: Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
> Subject: Normal Accidents and battle software
>
> >According to
> >
> >   Charles Perrow
> >   Normal Accidents: Living with High-Risk Technologies
> >   Basic Books, New York, 1984
> >
> >we should expect to see large-scale accidents such as the loss of the
> >space shuttle Challenger.  Perrow's thesis, I take it, is that the
> >complexity of current technology makes accidents a 'normal' aspect
> >of the products of these technologies.
> >
> >We may view space shuttles launches, nuclear reactors, power grids,
> >transportation systems, and much real-time control software as lacking
> >homeostasis, "give", forgiveness.  Perhaps some of these technologies
> >will forever remain "brittle".
> >
> >Questions: Does anybody have a good way to characterize this brittleness?
> >To what extent is existing battle software "brittle"?

The question was beautifully answered in a science-fiction book named "Dome"
(I don't remember the author's name).  In this book, a large fast-breeder
reactor was built in Pittsburgh, and on the day before the ceremonial
opening it had a meltdown-like accident as a result of a malfunction in the
control computers caused by human error.  The story contained many other
things, but the interesting point (at least to readers of this forum) is
that in the story a young mathematician had predicted, before the reactor
accident, that such an accident would happen within a predicted time from
the start of operations, based on calculations relating the complexity of
the nuclear power plant to the laws of probability theory.  His opinion was
suppressed by the power-company officials (he worked there).

The "brittleness" is related to the amount of interdependencies between the
various subsystems of the power-plant and the chance of failure of each sub
subsystems. This argument is similar to the argument made in this forum
about the operation of SDI.
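
A back-of-the-envelope calculation (my own illustration; the numbers are
hypothetical and not from the book) shows why tight coupling breeds
brittleness: if the system fails whenever any one of N subsystems fails,
even very reliable parts add up quickly.  A minimal sketch in modern Python:

    # Probability that a system of n independent subsystems, each
    # failing with probability p, suffers at least one failure.
    # Independence is assumed; real interdependencies are worse.
    def system_failure_probability(n_subsystems: int, p_fail: float) -> float:
        """P(at least one failure) = 1 - (1 - p)^n."""
        return 1.0 - (1.0 - p_fail) ** n_subsystems

    for n in (10, 100, 1000):
        # With 99.9%-reliable parts: about 1%, 10%, and 63% respectively.
        print(f"N={n:4d}: P(failure) = {system_failure_probability(n, 0.001):.3f}")

With a thousand 99.9%-reliable interdependent parts, a failure somewhere is
more likely than not.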

2.  In another article, Dave Platt <Dave-Platt%LADC@HI-MULTICS.ARPA>
    (why are there so many Daves on this forum? Is HAL9000
    responsible? :-) states:

> Date: Tue, 06 May 86 13:10 PST
> From: Dave Platt <Dave-Platt%LADC@HI-MULTICS.ARPA>
> To: Risks@SRI-CSL.ARPA
> Subject: Proofreading vs. computer-based spelling checks
>
>        [Edited out - related to typos in current SF literature]
>
> I seem to recall a passage in "Imperial Earth", by Arthur C. Clarke,
> concerning the pitfalls of cybernetic voice-to-type memowriters about 150
> years in the future.  He wrote that everybody who uses (will use?) such
> systems was careful to proofread the output of the voice-recognition
> modules, as some "hilarious" malaprops had occurred during the early years
> of these systems' availability.

A similar gadget is used in the second book of Isaac Asimov's Foundation
trilogy (Foundation and Empire).  In that book, words with similar
pronunciations were distinguished by how they were accented, and even then
the machine sometimes had to be corrected.

                            Ady Wiernik.
In two weeks: ady@taurus.BITNET or: ady%taurus.BITNET%wiscvm.ARPA


DESisting

<dm@BBN-VAX.ARPA>
08 May 86 14:07:35 EDT (Thu)
There was an article in Science about this several months ago (perhaps it
was just in the proposal stage then, and now is fact, or maybe it was a slow
news day at the Times...).

Since the volume of data transmitted using DES is so large, and the
information protected by it is so valuable (e.g., HBO audio tracks...,
Department of Agriculture Hog reports, electronic funds transfers between
Federal Reserve Banks...), NSA now feels that it is worthwhile for someone
to spend, e.g., $10 billion to build a DES-breaker, because the potential
payoff will be so great.  For that reason, they intend to decertify DES by
1990.
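
For scale (my own back-of-envelope arithmetic, not from the Science
article): DES uses a 56-bit key, so a brute-force attack must try at most
2^56, about 7.2 x 10^16, keys.  A sketch of the arithmetic in Python,
assuming a purely hypothetical machine rate:

    # Back-of-envelope: average days to brute-force the 56-bit DES key
    # space at a given trial rate (on average the key is found halfway
    # through the space).
    DES_KEYSPACE = 2 ** 56  # about 7.2e16 candidate keys

    def average_search_days(keys_per_second: float) -> float:
        return (DES_KEYSPACE / 2) / keys_per_second / 86_400

    # A hypothetical machine of a million chips, each testing a million
    # keys per second, finds an average key in under half a day.
    print(f"{average_search_days(1e12):.2f} days")

which is why a sufficiently motivated (and funded) adversary changes the
economics of DES entirely.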

To replace DES, NSA will offer their own little encryption boxes, with
secret encryption algorithms, and possibly protected so that snooping will
destroy the evidence of the encryption algorithm.  They will offer several
different kinds of encryption boxes, using several different algorithms, so
that there won't be so much reliance on a single algorithm.

What about keys?  Well, in decreasing order of security (says NSA,
disingenuously): you can buy them from NSA; you can (I think) buy
instructions from NSA on how to generate your own keys; or you can make up
your own.
Buying them from NSA is more secure because NSA knows the pitfalls of the
algorithms, knows the general pitfalls of key generation, etc.  Of course,
if you buy the keys from NSA, maybe NSA keeps a copy of the keys, and maybe
they'll use their copy to keep tabs on what you're encrypting...


DESisting (RISKS-2.49)

Wm Brown III <Brown@GODZILLA.SCH.Symbolics.COM>
Thu, 8 May 86 13:35 PDT
    RISKS-LIST: RISKS-FORUM Digest,  Tuesday, 6 May 1986  Volume 2 : Issue 49
    From: jon@uw-june.arpa (Jon Jacky)
    Subject: NSA planning new data encryption scheme - they'll keep the keys

My own knowledge of cryptology is limited and mostly theoretical; however,
there are some additional bits of information available in the public-domain
literature which lead me to draw slightly different conclusions from this
news item.

    The following excerpts are from a New York Times story "Computer code shift
    expected - eavesdropping fear indicated," by David E. Sanger, April 15,
    1986, pp. 29 and 32.  The story described plans by the National Security
    Agency (NSA) to replace the current Data Encryption Standard (DES) with a
    new system of its own design.

    Details of the new system are still unclear.  But ... unlike the Data
    Encryption Standard, the new algorithms will not be publicly available.
    Instead, they will be buried in computer chips manufactured to NSA
    specifications, and encapsulated so that any effort to read the code with
    sophisticated equipment would destroy the chip.

It is a long-standing ground rule of the crypto biz that the adversary
will sooner or later obtain the basic algorithm used in any cypher system.
Traditionally, security is **always** based only on the knowledge of keys,
not on keeping the theory of operation secret.

A system which depends upon the secrecy of its algorithm is effectively a
single-key code.  Eventually it will be compromised and the other side
will be able to read all those tapes of encrypted messages which they have
been saving.  Unless everything ever sent over the system has gone stale by
that time, this is generally an unacceptably large loss.  Not the way to
design a system for long-term use.

By the time such a system is in general use, there will be many thousands of
devices in circulation and hundreds of people who know how it works.  Sooner
or later, the guys in black hats will get hold of one or the other and pry the
top off to find out what's inside.  It may be possible to make the packages
tamper-resistant, but tamper-PROOF is a big order (ask the makers of Tylenol).

    ... By some accounts, under the new system the NSA would distribute the
    keys —  probably limiting them to companies in the United States. ..."

Many recent systems use keys consisting of very large numbers chosen from a
set which is too large to try exhaustively (100 digit primes, cubes, etc.).
This category includes most of the "Public Key" cryptosystems (in which the
encryption and decryption keys are different).  It seems very possible that
NSA intends to create a subset (still very large) of some such class and then
distribute devices with these individual keys built into them.  Disassembling
such a chip would compromise only one possible key from a large universe, and
few if any humans can remember many such keys, eliminating that source of risk.

One of the fringe benefits (from NSA's viewpoint) is that they would know
the entire universe of assigned keys.  An outsider would have to try all of
the theoretical possibilities; NSA, however, could exhaustively try every
one of a few million relatively quickly.
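
To make the scale concrete, here is a toy sketch (my own illustration; real
systems use vetted cryptographic libraries and far more care) of drawing a
key from a space of 100-digit primes.  No outsider can enumerate that
space, but whoever generates the keys knows exactly which ones were issued:

    import random

    def is_probable_prime(n: int, rounds: int = 40) -> bool:
        """Miller-Rabin probabilistic primality test."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d, r = d // 2, r + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def random_100_digit_prime() -> int:
        while True:
            candidate = random.randrange(10**99, 10**100) | 1  # odd only
            if is_probable_prime(candidate):
                return candidate

    # By the prime number theorem there are roughly 4 x 10^97
    # hundred-digit primes -- far too many to try exhaustively, yet the
    # key ISSUER holds a complete list of the few million it assigned.
    print(random_100_digit_prime())

A distributor who keeps its (comparatively tiny) list of issued keys need
search only that list, which is exactly the asymmetry described above.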


Re: Failure to Backup Data

Greg Brewster <brewster@nacho.wisc.edu>
Thu, 8 May 86 11:25:28 CDT
I must agree that the importance of regular backup of data on
microcomputers is very much underemphasized to many nontechnical
users.  However, in cases where individuals are solely responsible for
particular data files (as in the example of a scholar using a
microcomputer to write a book), I don't believe that incremental backups
are prohibitively difficult.

As Jim Coombs correctly states in RISKS-2.48:
> Since a complete backup of a 10
> megabyte hard disk on an IBM XT can take a half-hour, I am sure that backing
> up a 40 megabyte hard disk on a workstation will require more time (and
> diskettes) than the majority of our scholars will invest.

However, there is absolutely no need for any single scholar to be concerned
with a complete epoch dump of a 40-megabyte hard disk.  The data files for
most books will fit on one or two floppy disks.  I believe that, if the
dangers of data loss were emphasized enough, any writer would be happy to
copy each day ONLY the files s/he changed that day.  If the microcomputer
has a reliable clock and files are marked with modification times, then any
experienced programmer could write a simple command file that automatically
backs up all the files changed while the current user has been logged in,
along the lines sketched below.
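
Here is a minimal sketch of such an incremental backup, written in modern
Python rather than the DOS command file the author would have had in mind;
the directory names are hypothetical:

    import shutil
    import time
    from pathlib import Path

    SOURCE = Path("manuscript")      # hypothetical working directory
    BACKUP = Path("backup_floppy")   # hypothetical backup destination
    STAMP = BACKUP / ".last_backup"  # marks the time of the last run

    def incremental_backup() -> None:
        BACKUP.mkdir(parents=True, exist_ok=True)
        last_run = STAMP.stat().st_mtime if STAMP.exists() else 0.0
        for src in SOURCE.rglob("*"):
            # Use the file system's modification times, as suggested
            # above, to copy only files changed since the last run.
            if src.is_file() and src.stat().st_mtime > last_run:
                dest = BACKUP / src.relative_to(SOURCE)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)
                print("backed up", src)
        STAMP.write_text(time.ctime())  # advance the timestamp marker

    incremental_backup()

Run once per session, it copies only what changed, which keeps the daily
backup well within a floppy or two.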

This is a case where the risk of data loss can be decomposed into a
risk of loss of particular data for each system user.  I believe a
reasonable approach then is to require each user to deal with his/her
'individual risk' as s/he wishes.  However, the magnitude of this risk
of data loss must be emphasized to inexperienced users.


Greg Brewster               brewster@nacho.wisc.edu  (ARPA)
University of Wisconsin - Madison   ..ihnp4!uwvax!brewster   (UUCP)
