The RISKS Digest
Volume 3 Issue 61

Sunday, 21st September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Computers and Ethics
Robert Reed
Autonomous weapons
Wayne Throop
Simulation risk
Rob Horn
Viking software
James Tomayko
Risks of passwords on networks
Bruce
More on digital jets; Sanity checks
Eugene Miya
Info on RISKS (comp.risks)

Computers and Ethics

<bobr%zeus.tek.csnet@CSNET-RELAY.ARPA>
19 Sep 86 13:36:54 PDT (Fri)
In RISKS-3.54 Mark S. Day writes:
> ...people will read mail and data files stored on a timesharing system, even
> though it's unacceptable to rifle through people's desks. [...]

It occurs to me that each of these suggested mechanisms can be interpreted
in different ways which may provide new insights into the problem.

Novelty.  Social conditioning aside, the thrill of adventure in a new
        environment leads many people to explore the system in a
        quest for new understanding about it.  It is perhaps easier
        to lay the moral questions aside when caught in the fervor
        of covering new ground.  In fact the thrill is enhanced by
        doing something slightly larcenous.

Distance.  Certainly the distance between people is greater, but the
        distance between private pathways is shorter.
        Psychologically, I feel closer to your portion of the file
        system than I do to the contents of your desk drawers.
        Especially in an environment where limited sharing of files
        is part of the norm, the sense of territorial lines is less
        distinct within such an electronic medium.

There is a third aspect which is related to the thrill factor, and that is
the threat of being caught.  If I am found in your office with my hand in
your desk, the evidence is pretty compelling and not easy to hide.  Within a
computer system, we are all little "virtual people", moving silently around
the directory tree and far less likely to arouse suspicion.  So even when
ethical considerations are present, the concern about getting caught is
lessened by the nature of the medium.

Robert Reed, Tektronix CAE Systems Division, bobr@zeus.TEK


Autonomous weapons

<rti-sel!dg_rtp!throopw%mcnc.csnet@CSNET-RELAY.ARPA>
Fri, 19 Sep 86 16:46:17 edt
> eugene@AMES-NAS.ARPA (Eugene Miya)
> Most recently, another poster brought up the issue of autonomous weapons.

It is worth pointing out that we are *currently* using autonomous weapons
and they are *not* smart enough to distinguish signs of surrender.  Give up?
I'm talking about, for example, hand grenades or landmines.  These are
autonomous (after being thrown or buried) and their mission (guided by a
particularly simple "computer") is to saturate their environment with
shrapnel after a suitable delay.  Bombs with proximity fuses, self-guided
missiles, and so on, where there is "intelligence" in the weapon and a
significant time delay between the decision to deploy and the weapon's
effective discharge, can all be considered cases of "autonomous weapons".  We
are (in this view) simply trying to make the beasties smarter, so that they
eventually *will* be able to recognize signs of surrender or cease-fire or
other cases of cessation of hostilities.  (Picture land-mines getting up and
"trooping" back to an armory after the war is over... )

Perhaps this is more apropos to one of the "arms" lists, but I think it is
worth noting that we are allowing some *very* simple "computers" to be in
charge of some *very* powerful weapons right now.  It is an interesting
question to ask if we really *want* to make the weapons smarter.  But I
don't think it is a question of whether to use autonomous weapons at all...
we're already using them.

Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw


Simulation risk

Rob Horn <harvard!wanginst!infinet!rhorn@seismo.CSS.GOV>
Sat, 20 Sep 86 16:11:42 edt
One kind of risk that I have not seen discussed here is the problem posed
by using computer simulation models that are not adequate.  In particular I
am referring to situations where, due to either insufficient computer
resources or insufficient mathematical analysis, really accurate model
results are not available.  Usually more primitive, inaccurate model results
are available and being used by the ideologues on both sides of an issue.
This places the responsible scientists and engineers in a difficult
situation.  How do you say ``I don't know yet,'' and how do you deal with
making recommendations in the absence of adequate information?

I can think of two such situations that have major public decision-making
impact.

The first is the ``nuclear winter'' situation.  I remember many years ago
reading the sensitivity analysis of the one-dimensional and two-dimensional
climate models to solar input.  They were hyper-sensitive, with variations
on the order of measurement error causing massive climate change.  It was
not until recently (1982) that the vectorized Climate Model was analyzed and
shown to be reasonably well behaved.  And even it has some contentious
approximations.  This model requires 15 hours on a CRAY-1 to analyze one
situation for one season.

When the nuclear winter stories came out I had my doubts.  Where did these
people find a solid month (12 seasons x 4(?) test cases) of CRAY time?  Had
they used one of the hyper-sensitive 1- or 2-dimensional models?  What would
the accurate models find?  And how should I respond when I knew that it
would probably be a year or more before that much CRAY time and post-
simulation analysis could be finished?  (Fortunately I only had to handle
party conversation with people who knew that I had worked in that field.)
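
A minimal Python sketch of that back-of-envelope arithmetic, using only the
figures quoted above (the 4 test cases is the author's own guess):

    # Figures from the text: 15 CRAY-1 hours per season per test case,
    # 12 seasons, and roughly 4 test cases (the "(?)" above).
    hours_per_run = 15
    seasons = 12
    cases = 4
    total_hours = hours_per_run * seasons * cases   # 720 hours
    print(total_hours / 24)                         # 30.0 days -- a solid month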

The same kind of problem occurred in the ozone layer issues during the
mid-70s.  The more accurate model had two extremely severe problems: 1) it was
unconditionally unstable when phrased as a finite difference problem or
exceedingly temperamental when phrased as an implicit differencing problem.
2) It involved solving extremely stiff differential equations.  In this case
the official answer given was ``we don't know.  It will take several years
of mathematical research effort to make this problem tractable.  The real
answer is anyone's guess.  The published model answers are meaningless.''  A
truthful answer but of little value to decision makers.  (There was a brute
force throw-computers-at-it solution.  Estimated run-time on a CRAY was
about 1,000 years per simulated year.  Easier to wait and see what
happened.)
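
A toy Python sketch of what stiffness does to a naive difference scheme (not
the ozone model itself; the decay constant and step size are made-up
illustrative values): at this step size the explicit scheme explodes while
the implicit one stays bounded.

    # Toy stiff ODE: y' = -1000*y, true solution y = exp(-1000*t).
    k = 1000.0   # stiff decay constant (illustrative)
    h = 0.01     # step size, far too large for the explicit scheme
    y_explicit = y_implicit = 1.0
    for _ in range(10):
        y_explicit += h * (-k * y_explicit)  # y *= (1 - k*h) = -9: oscillates, explodes
        y_implicit /= (1.0 + k * h)          # y /= 11: stays bounded, decays
    print(y_explicit)   # ~3.5e+09 -- numerically unstable
    print(y_implicit)   # ~3.9e-11 -- well behaved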

How often are we placed in a situation where the inaccurate computer
simulation is available, but the accurate simulation unavailable?  
What is an appropriate way to deal with this problem? 

                Rob  Horn
    UUCP:   ...{decvax, seismo!harvard}!wanginst!infinet!rhorn
    Snail:  Infinet,  40 High St., North Andover, MA


Viking software

<James.Tomayko@sei.cmu.edu>
Sunday, 21 September 1986 09:25:25 EDT
The Viking software load for the lander was 18K words stored on plated wire
memory. The Martin Marietta team decided to use a 'software first' approach
to the development of the flight load. This meant a careful examination of
the requirements, a serious memory estimate, and then commitment by the
project team to stay within that memory estimate. The software was developed
on an emulator that used microcoded instructions to simulate the
as-yet-unpurchased computer. My source for this is a Rome Air Development
Center report that studied software development, later summarized in a book
by Robert L. Glass. The Viking software documents for the orbiter, developed
by JPL, are so good that I use them as examples of traceability in my current
software engineering courses.


Risks of passwords on networks

<BRUCE%UC780.BITNET@WISCVM.WISC.EDU>
20 SEP 86 14:57-EST
A few thoughts about networks which ask for passwords to send files.  Take a
computer network with three computers; call them A, B, and C.  A user on A
wants to send a file to their account on C through
computer B.  No problem, we invoke the command to send files, supply it with
a password (and maybe a username at computer C) and off the files go.  But,
on computer B, there is a "smart" systems programmer who monitors all
network traffic through his/her node.  How interesting... A file copy
operation with a user name/password attached.

The point?  Just a password is not a good solution.  Maybe one
would need to encrypt the packets through the network (so that
intermediate nodes couldn't read them).
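
A minimal Python sketch of that idea, assuming a modern third-party library
(the `cryptography` package) and a key already shared out of band between A
and C; the relaying node B then sees only ciphertext:

    # Sketch: end-to-end encryption between A and C; node B relays ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # shared secret held by A and C only
    cipher = Fernet(key)

    # On computer A: credentials and file travel as one opaque blob.
    packet = b"user=bruce password=secret <file contents>"
    ciphertext = cipher.encrypt(packet)   # all that node B ever sees

    # On computer C, holding the same key:
    plaintext = Fernet(key).decrypt(ciphertext)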

        Bruce


More on digital jets; Sanity checks

Eugene Miya <eugene@AMES-NAS.ARPA>
Sat, 20 Sep 86 11:40:44 pdt
Talk about timing:

In the latest copy of IEEE Spectrum (why didn't anyone else post this?):

%A Cary R. Spitzer
%Z NASA LaRC, Hampton, VA
%T All-digital Jets are Taking Off
%J IEEE Spectrum
%V 23
%N 9
%D September 1986
%P 51-56
%X Covers F-14D, F-16[CD], A-3[012] Airbus, 7J7, MD-11, and
other 1st and emerging 2nd generation digital systems.
Has good basic references.

Added note.  I will be contacting some old Viking friends for a further
detailed description and references as requested (probably next Tuesday or
Wednesday when they come up here).

On Sanity checks:

I had a similar incident in a Silicon Valley Mexican restaurant, which I
reported in an early RISKS as a risk to the pocketbook.  This issue has
appeared in other newsgroups such as mod.comp-soc on the USENET.
I offer the following reference:

%A Jon L. Bentley
%Z ATT BL (research!)
%T The Envelope is Back
%J Communications of the ACM
%S Programming Pearls
%V 29
%N 3
%D March 1986
%P 176-182
%K rules of thumb, cost, orders of magnitude, quick calculations,
Little's Law
%X JLB's principles include:
Familiarity with numbers
Willingness to experiment [actively, discussing this one with Denning]
Discipline in checking answers
Mathematics, when you need it
He also gives the "Back of the Envelope" column in the
American Journal of Physics as good reading.
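
A minimal Python sketch of one such quick calculation, Little's Law (items
in system = arrival rate x average time in system), with hypothetical
numbers:

    # Little's Law: L = lambda * W.
    # Hypothetical numbers: a server taking 50 requests/second, each
    # spending 0.2 seconds in the system, holds about 10 requests in flight.
    arrival_rate = 50.0      # requests per second (illustrative)
    time_in_system = 0.2     # seconds per request (illustrative)
    in_flight = arrival_rate * time_in_system
    print(in_flight)         # 10.0 concurrent requests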

I am reminded of a quote by Eric Shipton, an early English Mt. Everest veteran
who died recently: (paraphrased) Never go on an expedition which you can't
plan on the back of an envelope.  I know this is how spaceflight is
frequently done.

--eugene miya
  NASA ARC
