The RISKS Digest
Volume 1 Issue 36

Tuesday, 7th January 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

PLEASE READ Weapons and Hope by Freeman Dyson.
Peter Denning
Wolves in the woods
Jim Horning
Dave Parnas
"Certifiable reliability" and the purpose of SDI
Michael L. Scott
Putting a man in the loop
Michael L. Scott
SDI Testing
Jim McGrath
Jim McGrath
Jim Horning
Dec. 85 IEEE TSE: Special Issue on Software Reliability--Part I
Jim Horning
Masquerading
R. Michael Tague

PLEASE READ Weapons and Hope by Freeman Dyson.

Peter Denning <pjd@RIACS.ARPA>
Tue, 7 Jan 86 10:26:44 pst
A number of correspondents have brought up points analyzed in great detail
and with great clarity by Dyson in WEAPONS AND HOPE.  Examples:  Dyson
argues against building cases for or against particular defense systems by
counting the dead or the living.  The number who may survive or die is
incalculable and arguments based on such calculations are meaningless.
Dyson analyzes in some depth the fact that satellites are easy prey compared
to missiles.  His objections to Star Wars have little to do with the
technical points raised by most computer scientists.  They are based on an
analysis of many possible U.S. strategies arrayed against Soviet and
other strategies.  Dyson believes that there is a place for nonnuclear
ABMs in a nuclear-free world, an argument with which CPSR should
familiarize itself (CPSR has issued position statements against SDI
based on the premise that SDI is nothing more than an ABM system).  Dyson
argues that the ``soldiers'' are the ones to be won over to new persuasions
about weaponry, which leads to the interesting conclusion that closer ties
between academic researchers and DoD middle managers would be beneficial,
rather than separation as now advocated by some academicians.  I recommend
that Risks Forum correspondents who wish to comment on weapons systems risks
begin by reading WEAPONS AND HOPE before entering the debate.  That way,
some new ground might get covered.

Peter Denning


Re: RISKS-1.34, Wolves in the woods

Jim Horning <horning@decwrl.DEC.COM>
6 Jan 1986 1032-PST (Monday)
Dave,

I'm sure you meant "shooting at wolves" rather than "shooting at woods"?

Jim H.


Re: RISKS-1.34, Wolves in the woods

Dave Parnas <vax-populi!dparnas@nrl-css.arpa>
Mon, 6 Jan 86 13:06:20 pst
    I am glad that you are sure.  I did mean wolves and not woods.  I hope
that everyone else is sure too.  We aim for perfection but we don't do to vel.

Dave


"Certifiable reliability" and the purpose of SDI

<scott@rochester.arpa>
Tue, 7 Jan 86 07:29:21 est
Jim McGrath's comments on reliability worry me quite a bit.  He seems to
understand, as do most of the technically literate, that there is no chance
of protecting population centers with enough certainty to eliminate the
Soviet incentive to own missiles.  Yet this is precisely what the public
thinks SDI is all about, and it is what our president proposed when he asked
us to make nuclear weapons "impotent and obsolete."

At the risks forum at last month's ACM SOSP, someone pointed out that one of
the most important responsibilities of a scientist or engineer is to MANAGE
THE CUSTOMER'S EXPECTATIONS.  It is both dishonest and EXTREMELY dangerous
to proceed with SDI research if we and the public have fundamentally
different understandings of the nature of that work.

It seems perfectly plausible to me that we could build space-based
installations that would disable enough incoming missiles to, say,
significantly increase the survivability of our own land-based defenses.
Such installations seem to be the most likely product of SDI research.  They
may or may not be a good idea on strategic, economic, political, or social
grounds.  Any decision to proceed with their development should be preceded
by public discussion and debate.  Unfortunately, current debate is being
focussed on the entirely different goal of omnipotent nuclear defense.  We
are marketing a product that we have no intention to deliver in order to
build a product that we did not want to market.

Michael L. Scott, University of Rochester  (716) 275-7745
scott@rochester.arpa         {decvax, allegra, seismo, cmcl2}!rochester!scott
scott%rochester@CSNET-RELAY


Putting a Man in the Loop

<scott@rochester.arpa>
Tue, 7 Jan 86 07:03:08 est
I find it remarkable that this suggestion is taken seriously.  Time scales
simply do not permit it.  As has been pointed out by previous contributors,
human intervention during boost phase is out of the question, since it's all
over in a minute or two.  But even for later phases of missile flight, it is
ridiculous to expect competent, cool-headed behavior from human operators
who 1) have presumably been watching their consoles for years with no
action, 2) have the fate of the world in their hands, and 3) have only a few
minutes in which to make their decisions.

Michael L. Scott


Re: SDI Testing

Jim McGrath <J.JPM@LOTS-A>
Mon 6 Jan 86 18:24:44-PST
    From: horning@decwrl.DEC.COM (Jim Horning)
    Thanks for your comments. However, I do have to disagree with your
    cheerful assessment of the chances for REALISTIC testing of any
    SDI system or major subsystem. The problem is not that there is no
    testing that could be done, but that any substantial change in the
    system's environment is likely to provoke a new set of unexpected
    behaviors.... (Several very valid points are made about the
    difficulty of realistically testing SDI).

You are, of course, correct.  The problem is that your points could
also be (and are) made about any complex weapons system (or indeed,
any complex system at all).  It is NEVER possible to fully test ANY
system until it is actually used in battle (and even then it can fail
in future battles).

While the size of SDI makes some testing problems harder, others may
be made easier.  Specifically, SDI operates in a far more predictable
environment than any earth-based weapons system.  Enemy
countermeasures, so often cited as a problem, are a problem for ANY
battle system, and once again the possible modes of response by the
enemy need not be correlated to the size of the SDI system.  Thus I
would expect "realistic" testing (i.e., to a certain acceptable degree
of reliability) of the Aegis carrier defense system to be as hard as
testing SDI, even though the former is perhaps an order of magnitude
smaller than the latter.  (Note that software testing and system
testing are not the same thing.)

My point was more that SDI (always excepting boost phase) could be
tested according to the same type of standards we currently use to
test other complex weapon systems (or computer systems, etc...).  That
is, the SDI testing problem is indeed a problem, but not one radically
different from those that have already been encountered (and
"solved"), or those likely to be encountered in the future.  Thus
attention should be focused on HOW to do the tests, not on decrying
the testing problem as somehow inherently impossible to solve.

Jim [McGrath, that is]


Re: Testing SDI

Jim McGrath <J.JPM@LOTS-A>
Mon 6 Jan 86 18:40:51-PST
    From: vax-populi!dparnas@nrl-css.arpa (Dave Parnas)
    The comments that you reported on testing SDI, proposing that we
    test it by shooting at periodic meteor swarms make me wonder how
    many of the people in our profession have trouble discriminating
    between real problems and arcade games.  Shooting at an easily
    predictable non-evasive meteor has about the same relation to the
    real job of SDI as shooting at a target in a booth at a county
    fair has to shooting at woods in heavy brush from a moving
    airplane.

It is fairly obvious that Professor Parnas did not read my original
message, nor the follow-up messages.  The original message dwelt on
the COST of testing, and used meteor swarms as a simple example, not a
serious proposal for exhaustive testing (as was clear in the context
of the message).  Indeed, I specifically stated that the example given
would be good primarily for a basic test of tracking and "hard kill"
capability.

Moreover, in more recent contributions I made clear that my central
point is that testing of SDI is not somehow impossible, if you accept
that we can test such systems as Aegis.  In particular, I think it
incredibly naive to equate size of code with system complexity.  (See
my previous message on this.)

Now, one can dispute that such systems as Aegis are testable.  Or one
can hold SDI to higher reliability standards than systems such as
Aegis.  Both are defensible positions.  But the first removes SDI from
some unique class, and the second is quite debatable (depending upon
your mission definition for each system).

Jim


Re: SDI Testing

Jim Horning <horning@decwrl.DEC.COM>
6 Jan 1986 1911-PST (Monday)
Jim,

If you think I consider the kind of testing done for present weapons
systems acceptable for a system of the importance, power, and risks of
SDI, then we have been failing to communicate at a very fundamental
level.

The kinds of risk that are acceptable for the loss of an aircraft
carrier and for the loss of all of human civilization are many orders
of magnitude apart.  The complexities of SDI and Aegis (and hence the
difficulty of adequate testing) are many orders of magnitude apart.
And the military hasn't done so well at testing in the past (DIVAD,
the Bradley Fighting Vehicle, plus the examples you cite) that I am
willing to trust them to decide what testing would suffice here.

I do not consider arcade games to be a suitable model for this
exercise.

Jim H.


Dec. 85 IEEE TSE: Special Issue on Software Reliability--Part I

Jim Horning <horning@decwrl.DEC.COM>
6 Jan 1986 1903-PST (Monday)
[This note is primarily for that small fraction of Risks readers who may not
be regular readers of IEEE Transactions on Software Engineering.  --Jim H.]

The December 1985 issue of TSE is devoted to software reliability. As
has been noted many times in this forum, unreliable software is one of
the principal sources of risks from computer systems.

A few things that caught my eye in this issue:

  "The occurrence of major software failures is strongly correlated with the
  type and level of workload prior to the occurrence of a failure.  This
  indicates that software reliability models cannot be considered
  representative unless the system workload environment is taken into account.
  The effect of workload on software reliability is highly nonlinear; i.e.,
  there is a threshold beyond which the software reliability rapidly
  deteriorates. ... Although we may intuitively expect to find a higher
  software error rate with higher workload (partly attributable to greater
  execution), there appears to be no fundamental reason why both software and
  hardware should exhibit a similar nonlinear increase in the load-hazard with
  increasing workload."
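
[To make the nonlinearity concrete: a toy model in C.  The functional
form and every parameter below are invented for illustration; nothing
here is taken from the paper.  The point is only the shape: the hazard
is nearly flat at light load and climbs rapidly past a threshold.]

  #include <stdio.h>
  #include <math.h>

  /* failure hazard as a function of workload u in [0,1]; invented model */
  double hazard(double u)
  {
      const double base      = 0.001; /* hazard at light load        */
      const double threshold = 0.8;   /* load where hazard takes off */
      const double steepness = 15.0;  /* how sharply it climbs       */
      return base * (1.0 + exp(steepness * (u - threshold)));
  }

  int main(void)
  {
      for (int i = 0; i <= 10; i++) {
          double u = i / 10.0;
          printf("load %.1f  hazard %.5f\n", u, hazard(u));
      }
      return 0;
  }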

  "Recognizing the problem is only the first step; knowing what to do to
  achieve high confidence software still remains elusive. This is especially
  true in the area of software reliability, one of the prime factors affecting
  confidence (or lack of it) in software systems.  Current DOD development
  programs are unable to achieve satisfactory software reliability in a
  consistent fashion because of the lack of understanding of what conditions
  truly affect reliability. This situation is compounded when you consider
  that software reliability requirements for future DOD systems will be much
  higher as functional demands on the software become more complex, as
  criticality of the software increases and as system components become more
  distributed."

  "Abstract--When a new computer software package is developed and all
  obvious erros [sic] removed, a testing procedure is often put into
  effect to eliminate the remaining errors in the package."

  "Successful Missions ... Proportion of fault-tolerant runs which
  completed without failing: 56 percent; Proportion of nonfault-tolerant
  runs which completed without failing: 47 percent."

  "An intensity function, called the intensity of coincident errors, has
  a central role in this analysis. This function describes the propensity
  of programmers to introduce design faults in such a way that software
  components fail together when executing in the application environment.
  ... We study some differences between the coincident errors model ...
  and the model that assumes independent failures of component verions [sic]."

  "Certain intensity functions can result in an N-version system, on
  average, being more prone to failure than a single software component.
  ... The effects of coincident errors, as a minimum, required an
  increase in the number of software components greater than would be
  predicted by calculations using the combinatorial method which assumes
  independence. ... It is clear we need empirical data to truly assess
  the effects of these errors on highly reliable software systems."
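
[To make the last point concrete: a toy calculation in C, with invented
failure probabilities.  Under the independence assumption, 2-out-of-3
majority voting looks superb; letting even a small fraction of inputs
fail all three versions together erases most of the gain.]

  #include <stdio.h>

  /* probability that at least 2 of 3 independent versions, each
     failing with probability q, fail on the same input */
  double majority3(double q)
  {
      return 3.0 * q * q * (1.0 - q) + q * q * q;
  }

  int main(void)
  {
      double p = 0.010;               /* per-version failure probability */
      double c = 0.008;               /* coincident part: all three fail */
      double q = (p - c) / (1.0 - c); /* residual independent part       */

      printf("single version:          %.6f\n", p);
      printf("3 versions, independent: %.6f\n", majority3(p));
      printf("3 versions, coincident:  %.6f\n", c + (1.0 - c) * majority3(q));
      return 0;
  }

With these (invented) numbers, independence promises better than a
thirty-fold improvement over a single version; the coincident-errors
model leaves a factor of barely 1.25.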


Re: Masquerading

"R. Michael Tague" <Tague@HIS-PHOENIX-MULTICS.ARPA>
Mon, 6 Jan 86 11:39 MST
In Risks Volume 1 Issue 34 it was suggested that a "trusted path" could
be implemented by having trusted terminal driver software recognize "a
combination of keystrokes from the user that would be defined to mean
'enter trusted path'."  I would like to suggest that anyone implementing
such a mechanism make the "enter trusted path" signal a single
keystroke, i.e., one character or the out-of-band BREAK/ATTENTION key
signal, not a "combination of keystrokes".

The reason is that it would be relatively easy for a malicious program
to interfere with a multi-character signal.  For example, most
terminals can be made to send an answerback string to the host; a
program trying not to be defeated by an "enter trusted path" signal
could constantly send answerback requests to the terminal.  The
terminal device driver would then be unable to recognize the
combination of keystrokes that makes up the "enter trusted path"
signal, due to the interleaving of answerback messages.

I suggest that anyone implementing such a trusted path mechanism use
the out-of-band BREAK/ATTENTION key signal for the "enter trusted path"
signal, so that the user and applications will still have the full
character set to work with.  Whatever character or signal is used, it
should not be possible to change or disable it.
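
[A minimal sketch of the recommendation, assuming a Unix-style driver
that reports BREAK as an out-of-band line condition rather than as a
character; the event structure and function names below are
placeholders, not any real kernel interface.]

  #include <stdio.h>

  #define EV_CHAR  0  /* in-band character from the terminal   */
  #define EV_BREAK 1  /* out-of-band BREAK/ATTENTION condition */

  struct tty_event {
      int type;   /* EV_CHAR or EV_BREAK             */
      int ch;     /* valid only when type == EV_CHAR */
  };

  static void start_trusted_path(void)       /* placeholder stub */
  {
      printf("[switching to trusted path]\n");
  }

  static void deliver_to_application(int ch) /* placeholder stub */
  {
      putchar(ch);
  }

  void tty_receive(struct tty_event *ev)
  {
      if (ev->type == EV_BREAK) {
          /* BREAK is a line condition, not a character, so no
             program-generated data (forced answerbacks included)
             can forge it or interleave with it. */
          start_trusted_path();
          return;
      }
      deliver_to_application(ev->ch);
  }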

                              Tague@CISL
