The RISKS Digest
Volume 1 Issue 37

Thursday, 9th January 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

IEEE TSE Special Issue on Reliability — Part 1
Nancy Leveson
SDI Testing
Nancy Leveson
Dave Parnas
Multiple redundancy
Henry Spencer
On Freeman Dyson
Gary Chapman
Jon Jacky

Re: IEEE TSE Special Issue on Reliability — Part 1

Nancy Leveson <nancy@ICSD.UCI.EDU>
07 Jan 86 19:54:29 PST (Tue)
Jim Horning's note about Part 1 of the IEEE TSE special issue on software
reliability cites the following:

  "An intensity function, called the intensity of coincident errors, has
  a central role in this analysis. This function describes the propensity
  of programmers to introduce design faults in such a way that software
  components fail together when executing in the application environment.
  ... We study some differences between the coincident errors model ...
  and the model that assumes independent failures of component verions [sic].

  "Certain intensity functions can result in an N-version system, on
  average, being more prone to failure than a single software component.
  ... The effects of coincident errors, as a minimum, required an
  increase in the number of software components greater than would be
  predicted by calculations using the combinatorial method which assumes
  independence. ... It is clear we need empirical data to truly assess
  the effects of these errors on highly reliable software systems."

Jim, in the next issue of TSE (part 2 of the special issue on software
reliability) my paper with John Knight appears, in which we describe our
experiment (involving 27 programmers at two universities), which provides
some of this empirical data.  We found that independently produced software
showed many more coincident failures than would be expected under the
assumption of statistical independence.  In later papers, which look at the
actual bugs (faults) in the software produced in the experiment, we found
that people tended to make very similar mistakes or, at the least, to make
mistakes on the same hard parts of the problem, which led to the programs
failing on the same input data.  In summary, our data support the use of
the coincident errors model rather than the independent failures model.
The first paper on this will appear in February; two others are still in
preparation or under review.

None of the experimental evidence that I have seen to date supports the
conclusion that software with ultra-high reliability can be achieved by
producing independent versions and voting the results, although some
reliability improvement may be possible.  The question is whether enough is
gained to justify the enormous added cost, or whether the money and
resources could better be spent in other ways.
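
To make the contrast concrete, here is a minimal sketch (illustrative only,
not code or data from the experiment; the per-version failure probability p
and the fraction q of "hard" inputs are invented numbers) comparing the
2-out-of-3 voting failure rate predicted by the combinatorial independence
calculation with a simple model in which a small fraction of inputs causes
all versions to fail together:

    # Minimal sketch: combinatorial prediction under independence vs. a
    # simulation with coincident failures on "hard" inputs.
    import random

    def independent_prediction(p):
        # Probability that 2-of-3 majority voting fails when the three
        # versions fail independently with probability p on each input.
        return 3 * p**2 * (1 - p) + p**3

    def simulate_coincident(p, q, trials=1000000):
        # Model: a fraction q of inputs hits a hard part of the problem
        # where all versions fail together; on the remaining inputs the
        # versions fail independently with probability p.
        failures = 0
        for _ in range(trials):
            if random.random() < q:
                correct = [False, False, False]
            else:
                correct = [random.random() >= p for _ in range(3)]
            if sum(correct) < 2:          # the majority is wrong
                failures += 1
        return failures / trials

    p, q = 0.01, 0.001
    print("independence prediction:", independent_prediction(p))
    print("with coincident errors: ", simulate_coincident(p, q))

With these invented numbers the simulated failure rate comes out several
times higher than the independence prediction, which is the qualitative
effect described above.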

We are just about to start another experiment which will attempt to get some
data on whether people are able to write self-test statements (acceptance
tests, assertions, exceptional conditions) which will detect errors at
execution time.  We will then be able to compare this type of fault
detection with voting.
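
As an illustration (hypothetical code, not the design of the planned
experiment; the names acceptance_check and majority_vote are made up), the
two kinds of run-time fault detection being compared look roughly like
this: an acceptance test checks one version's output against a property
required by the specification, while voting compares the outputs of several
independently written versions:

    # Illustrative sketch: acceptance test vs. majority voting as run-time
    # fault detection for a square-root routine.
    from collections import Counter

    def acceptance_check(x, result, tol=1e-6):
        # Self-test: accept the result only if it satisfies a property
        # demanded by the specification (result squared equals x).
        return abs(result * result - x) <= tol * max(1.0, x)

    def majority_vote(results, tol=1e-6):
        # Voting: group results to a comparison tolerance and return the
        # value produced by a majority of versions, if there is one.
        counts = Counter(round(r / tol) for r in results)
        value, count = counts.most_common(1)[0]
        return value * tol if count > len(results) // 2 else None

    # Three hypothetical "versions" of the routine, one of them faulty.
    versions = [lambda x: x ** 0.5,
                lambda x: x ** 0.5,
                lambda x: x / 2.0]          # wrong for most inputs

    x = 2.0
    outputs = [v(x) for v in versions]
    print([acceptance_check(x, r) for r in outputs])  # flags the bad version
    print(majority_vote(outputs))                     # voting masks the fault

Note that the acceptance test needs only a single version but requires a
checkable property of the specification, while voting needs multiple
versions but no such property; how well people actually write such checks
is exactly the empirical question.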

Nancy Leveson
University of California, Irvine


SDI Testing

Nancy Leveson <nancy@ICSD.UCI.EDU>
07 Jan 86 20:19:06 PST (Tue)
   >From: Jim McGrath <J.JPM@LOTS-A>
   >I would expect that a "realistic" (i.e. to a certain acceptable degree
   >of reliability) testing of the Aegis carrier defense system to be as
   >hard as testing SDI, even though the later is perhaps an order of
   >magnitude smaller than the former. (note that software and system
   >testing are not the same thing.)

After speaking to some of the people working on Aegis, I would not use
that as an example of a reliable system.  I have heard that Aegis is anything
but reliable.  What is your source that shows it is highly reliable?  

You seem to imply that because we have a system and it has been tested,
it is reliable.  Unfortunately, my experience has not supported this
hypothesis.  Some have argued that our current ABM and early warning
systems are so complex that there is very little chance that they will
ever work if so called upon.  For example, in "Normal Accidents: Living
with High-Risk Technologies," Charles Perrow writes about our current
launch-on-warning systems:
  "...if there were a true warning that the Russian missiles were coming,
   it looks as if it would be nearly impossible for there to be an intended
   launch, so complex and prone to failure is this system.  It is an 
   interesting case to reflect upon: at some point does the complexity
   of a system and its coupling become so enormous that a system no longer
   exists?  Since our ballistic weapons system has never been called upon
   to perform (it cannot even be tested), we cannot be sure that it really
   constitutes a viable system.  It just may collapse in confusion!"
   [Perrow, pp. 257-258]

Citing other systems which have been tested, but never used enough to
measure their reliability, is not a very convincing argument that SDI can
be tested.  How do we know how effective that testing
has been?  If you could give me a system which exists, is equivalent, AND
has been in use enough to have good information on its reliability,
then I would have to listen to you.

     >My point was more that SDI (always excepting boost phase) could be
     >tested according to the same type of standards we currently use to
     >test other complex weapon systems (or computer systems, etc...).  That
     >is, the SDI testing problem is indeed a problem, but not one radically
     >different from those that have already been encountered (and
     >"solved"), or those likely to be encountered in the future.  

If you have information on the solution to the testing (i.e. software
reliability) problem, please tell me.  I could make a fortune because
no one I know of seems to know that it has been solved or knows the
solution.  I have been involved in research in the area of software reliability
and safety for many years because I thought it was an unsolved problem.   
Could you provide some references to the solution? (I will split my first 
million dollars in consulting fees with you.)

     >Moreover, in more recent contributions I made clear that my central
     >point is that testing of SDI is not somehow impossible, if you accept
     >that we can test such systems as Aegis.  Particularly, I think it
     >incredibly naive to equate size of code with system complexity.  (see
     >my previous message on this).

Of course you can test any system.  That is not the point.  The question
is whether the testing is complete and effective.  For that, we need to
know not whether Aegis was tested, but what is the reliability of Aegis.

     >Now, one can dispute that such systems as Aegis are testable.  Or one
     >can hold SDI to higher reliability standards than systems such as
     >Aegis.  Both are defendable positions.  But the first removes SDI from
     >some unique class, and the second is quite debatable (depending upon
     >your mission definition for each system).

I can believe that Aegis is not testable without removing SDI from the unique
class.  I don't understand your logic — one has nothing to do with the
other.  No one is arguing that SDI is unique solely because it is
untestable.  Most complex software systems cannot be made ultra-reliable
using our current technology and, more important, there is no known way
to verify or measure that these systems are ultra-reliable even if they
are.  That does not mean that they do not involve unique problems or 
that they are of equal difficulty to build.  They may all have
different reliability — but none of them can be proven to be ultra-reliable.

Moreover, I am not concerned about holding SDI to higher reliability
standards than systems such as Aegis or others.  I am concerned about just
meeting those standards.  My work is in the area of safety-critical systems 
where reliability requirements range from 10^(-5) to 10^(-9).  There exist
no current software technologies which will guarantee that the software
meets these requirements or can be used to accurately measure software
reliability in this range.
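
A rough calculation (assuming, for illustration, that these figures are
per-hour failure rates and that failures arrive as a Poisson process)
shows why such requirements cannot be demonstrated by testing alone:
bounding a failure rate at r per hour with confidence c requires roughly
-ln(1-c)/r hours of failure-free operation.

    # Back-of-the-envelope sketch: failure-free test time needed to bound
    # a failure rate at r per hour with confidence c, assuming a Poisson
    # failure process.  The assumptions are illustrative, not from the
    # message above.
    import math

    def required_test_hours(rate_per_hour, confidence=0.99):
        return -math.log(1.0 - confidence) / rate_per_hour

    for r in (1e-5, 1e-9):
        hours = required_test_hours(r)
        print("rate %g/hour: %.3g failure-free hours (about %.3g years)"
              % (r, hours, hours / 8766.0))

At 10^(-9) per hour this comes to several billion failure-free test hours,
on the order of hundreds of thousands of years of operation.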

Nancy Leveson
University of California, Irvine


SDI Testing

Dave Parnas <vax-populi!dparnas@nrl-css.arpa>
Tue, 7 Jan 86 17:40:32 pst
    I regret that I gave Jim McGrath the impression that I had not read
his many submissions to RISKS before I responded to his suggestion that 
meteors could be used to test SDI-like systems.  I wish to assure him
that I had read them twice.  

    Given the importance of discrimination between legitimate warheads
and decoys in a "Star Wars" battle, I would consider the defensive system
to have failed if it fired at meteors.  The capability to shoot down 
meteors could be considered, by normal engineering standards, an overdesign
as well.

    It is a mistake to consider "testable" as a predicate.  Some
systems are easy to test; others are hard to test.  Systems like Aegis
are difficult to test, but SDI would be more difficult.  Further, the 
SDI requirements are stricter.


Multiple redundancy

<ihnp4!utzoo!henry@ucbvax.berkeley.edu>
Thu, 9 Jan 86 12:14:34 PST
Advocates of multiple redundancy through independently-written software
doing the same job might be interested in an incident involving complete
failure of such a scheme.

During the development of the Handley Page Victor jet bomber, roughly a
contemporary of the B-52, the designers were concerned about possible
problems with the unusual tailplane design.  They were particularly
worried about "flutter" — a positive feedback loop between slightly-flexible
structures and the airflow around them, dangerous when the frequency of the
resulting oscillation matches a resonant frequency of the structure.  So
they tested for tailplane flutter very carefully:

    1. A specially-built wind-tunnel model was used to investigate the
    flutter behavior.  (Because one cannot scale down the fluid properties
    of the atmosphere, a simple scale model of the aircraft isn't good
    enough to check on subtle problems — the model must be carefully
    built to answer a specific question.)

    2. Resonance tests were run on the first prototype before it flew,
    with the results cranked into aerodynamic equations.

    3. Early flight tests included some tests whose results could be
    extrapolated to reveal flutter behavior.  (Flutter is sensitive to
    speed, so low-speed tests could be run safely.)

All three methods produced similar answers, agreeing that there was no
flutter problem in the tailplane at any speed the aircraft could reach.

Somewhat later, when the first prototype was doing high-speed low-altitude
passes over an airbase for instrument calibration, the tailplane broke off.
The aircraft crashed instantly, killing the entire crew.  A long investigation
finally discovered what happened:

    1. The stiffness of a crucial part in the wind-tunnel flutter model
    was wrong.

    2. One term in the aerodynamic equations had been put in wrongly.

    3. The flight-test results involved some tricky problems of data
    interpretation, and the engineers had been misled.

And by sheer bad luck, all three wrong answers were roughly the same number.

Reference:  Bill Gunston, "Bombers of the West", Ian Allan, 1977(?).

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,linus,decvax}!utzoo!henry


Reply to Dr. Denning on Freeman Dyson

Gary Chapman <PARC-CSLI!chapman@glacier>
Tue, 7 Jan 86 18:07:26 pst
To: glacier!RISKS@SRI-CSL.ARPA

I have considered Dr. Peter Denning's thoughts on strategic defense with
some care, and I have appreciated the thought he has put into the issue.  I
also have a great deal of respect for Freeman Dyson, whose book, Weapons and
Hope, Dr. Denning recommended we all read, and particularly those of us
connected with the policies of CPSR.

I would like to respond to some of those comments.  First, the concept of an
anti-ballistic missile defense in a "non-nuclear" world is contradictory.  If
nuclear weapons and ICBMs have been eliminated, there is no need for an ABM
system with all its expense and uncertainties.  Dr. Dyson says in his book that
he favors a strategic defense only after a radical cut in offensive nuclear
weapons on both sides.  Very few critics of the SDI would quarrel with this,
including me.  But we are so far from the kind of cuts that Dr. Dyson and Dr.
Sidney Drell, who also favors this goal, feel are required before strategic
defense is considered that it is hardly worth considering at all at this point.
It is particularly dangerous to consider upsetting the strategic balance by
developing and deploying a strategic defense before those cuts are made.

What CPSR has said is that the software problems inherent in the development
of an SDI-type ABM battle management system contribute to strategic
instability, and therefore the deployment of the SDI will leave us worse off
than without it.  An ABM system that is deployed AFTER achievement of
substantial and trustworthy strategic stability would not have the same
mission as the SDI.  It would be added insurance for a stability already
achieved by other means; it would be a safeguard against ICBM accidents and
against third-party powers.  It would not be a radical change in the
strategic balance the way the SDI would be if it were to be deployed in the
current strategic situation.

Despite the good will of the President and his honorable intentions, the SDI is
clearly a war-fighting program.  I have never gotten the impression from Dr.
Dyson that he is in favor of such an effort.  CPSR has opposed the SDI for the
program that it is, not for the program that it could be.

                        Gary Chapman
                        Executive Director, CPSR


Peter Denning on WEAPONS AND HOPE

Jon Jacky <jon@uw-june.arpa>
Wed, 8 Jan 86 20:56:23 PST
> (Peter Denning writes: )
> ...CPSR has issued position statements against SDI based on the premise that
> SDI is nothing more than an ABM system...

I think the point of those statements was that systems with limited 
capabilities, such as defense of hardened missile silos, fall far short of
the sort of comprehensive defense implied by the SDI's stated goal of 
"eliminating the threat posed by nuclear ballistic missiles." 

> Dyson argues that the 'soldiers' are the ones to be won over to new 
> pursuasions about weaponry, which leads to the interesting conclusion
> that closer ties between academic researchers and DoD middle managers
> would be beneficial, rather than separation as now advocated by some 
> academicians.

I am not aware of any academicians who say that soldiers and scholars should
not try to understand each other.  Some do argue that developing capabilities 
for future weapons systems should not be the primary motivation for funding
scholarly research, and that it is incorrect to portray such projects as 
basic research.

Jonathan Jacky
