The RISKS Digest
Volume 3 Issue 84

Tuesday, 28th October 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Risks of using an automatic dialer
Bill Keefe
Re: Missing engines & volcano alarms
Eugene Miya
False premise ==> untrustworthy conclusions
Martin Harriman
USN Automated Reactors
Dan C Duval
Keep It Simple as applied to commercial nuclear power generation
Martin Harriman
Works as Documented
Martin Minow
Re: Editorial on SDI
Michael L. Scott
Risks from Expert Articles
Herb Lin
Stealth vs. ATC / SDI Impossibility? / Missing Engines?
Douglas Humphrey
Info on RISKS (comp.risks)

Risks of using an automatic dialer

Bill Keefe <keefe%milrat.DEC@decwrl.DEC.COM>
Wednesday, 22 Oct 1986 10:09:16-PDT
I wonder if it's significant that they are willing to talk about payment 
for aggravation but not for lost business.  Unfortunately, it was not 
reported whether the failure was due to a hardware or software problem.

  Computerized Sales Call Gets Stuck, Ties Up Phone for Three Days

     GREENWICH, Conn. (AP) - A shipping broker who does all his work
  on the phone says he lost at least one deal because a computerized
  sales pitch called him nearly every two minutes for 72 hours, tying
  up his lines.
     The voice-activated computer message bedeviling Joern Repenning
  was shut off Monday after he had complained to New York Telephone's
  annoyance bureau, the Better Business Bureau, AT&T, police and the
  state attorney general.
     The problem was in a computer at Integrated Resources Equity
  Corp. in Stamford, said William Banks, an employee of the company.
  The repeated calls blocked all other incoming calls to Repenning's
  office with a busy signal.
     ``There was no way we could conduct business,'' Repenning said.
  ``We can't shut off our telephone. That's our business.''
     He said he lost at least one deal because he could not reply by a
  certain deadline on a shipping-cargo transaction.
     Integrated is willing to talk with Repenning about payment for
  aggravation he suffered, Banks said.


Re: Missing engines & volcano alarms

Eugene Miya <eugene@AMES-NAS.ARPA>
Wed, 22 Oct 86 09:34:51 pdt
Martin Ewing gives an example of "absence of signal" as an indication
that something may be wrong.  He concludes by precisely indicating
the problem, but then glosses over it with "What you do with this data is
another matter."  That last statement is an unacceptable way to complete
the argument for aircraft manufacturers.

This is precisely the problem with planes, spacecraft, and other
highly constrained systems.  How do we adequately know something?  And,
almost as bad: how do we know our instrument is not malfunctioning?
Do we perennially "tap" the instrument?  Designers of aircraft prefer
"indicator/effector" systems, rather than putting mere "indicators" into
planes.  "Great, my wings fell off" -- so what are you going to do?

There is a wind tunnel across the street from where I lunch.  This tunnel
has a set of sensor wires which enter a plate.  When I first saw it, this
struck me as the nerve system of the wind tunnel.  How inadequate it
appears.  The metal hull of the tunnel isn't a sensory tool like our skin
(able to sense heat, pressure, and other things to much better precision).
Some day, perhaps.

On the posting on the safety of Stealth aircraft: I was visiting a friend
on the day of the recent non-crash of the non-existent F-19.  We were
assured (not-assured?) by authorities [my friend lived within a few miles
of the non-site] that, since the non-F-19 only flew at night, it ALWAYS
flew with a radar-detectable chase plane (not a non-plane).

--eugene miya


False premise ==> untrustworthy conclusions

Martin Harriman <"SRUCAD::MARTIN%sc.intel.com"@CSNET-RELAY.ARPA>
Wed, 22 Oct 86 14:54 PDT
There seems to be a misconception floating around in RISKS regarding the
degree of automation in Navy and civilian nuclear reactors.  Civilian
reactors are not significantly different from Navy reactors in this
respect; both types have a single form of automated control.
Both Navy (propulsion) and civilian (electric power generation) reactors
have a reactor protection system--a system rather like a circuit breaker
that automatically shuts the reactor down if some parameters (such as
reaction level or temperature) exceed defined limits.  If you've ever
seen the reactor jargon "scram" or "trip" (as in, "we had three unplanned
trips this year"), that's what is being referred to.

Everything else is manual, in either system.
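
In case it helps readers unfamiliar with the jargon, here is a minimal
sketch of that "circuit breaker" behavior; the parameter names and limit
values are hypothetical, chosen purely for illustration:

    # Hypothetical sketch of a reactor protection ("trip") system.
    # Parameter names and limit values are illustrative, not real plant data.

    TRIP_LIMITS = {
        "neutron_flux_pct":  110.0,   # percent of rated power
        "coolant_temp_degF": 620.0,
    }

    def scram(reason):
        # Open the "breaker": sound the klaxon, insert the control rods.
        print("REACTOR TRIP (scram):", reason)

    def protection_system(readings):
        # Like a circuit breaker: trip if any parameter exceeds its limit.
        for param, limit in TRIP_LIMITS.items():
            if readings[param] > limit:
                scram(reason=param)
                return
        # Within limits: do nothing.  Everything else is up to the operators.

    protection_system({"neutron_flux_pct": 98.0, "coolant_temp_degF": 655.0})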

At least in civilian systems, the protection system is tested regularly
(planned trips), and the reactor's responses are noted.  I am not sure
whether the Navy has planned trips; I know they have unplanned trips often
enough to annoy the reactor operators (the scram alarm is a *very* loud
klaxon in a *very* small compartment).

Reactors are not a good paradigm for a debate on the risks of automated
controls.  Arguments based on the safety record of one class of reactors
versus another will miss the point; the reactors differ in many interesting
respects (training, discipline, nature of the task, ...), but the nature of
the control mechanisms is not one of them.


USN Automated Reactors

Dan C Duval <dand@tekigm.TEK.CSNET>
21 Oct 86 12:10:27 PDT (Tue)
Arguments over the Navy's choice NOT to use automated safety systems on
USN reactors overlook one major point: the choice to use or not to use
safety equipment of any kind also has to meet a weight/benefit tradeoff.

If you design a reactor with built-in automated safety features, you
have the weight of the reactor, the weight (and bulk) of the safety
systems, the reactor operators (and the systems to support them, such
as galleys, bunk space, food stores, etc), and the personnel to maintain
the safety equipment (with support for them as well).

A "manual" reactor requires only the reactor and the operators (plus
their support).

Adding the automated safety gear adds weight, requiring a larger boat,
a larger power plant, more support for boat and crew, etc, all for no
added war-fighting capability. Meantime, adding training to a human
being does not add appreciable weight to the human being, nor require
further support systems.

Though this weight consideration is paramount for subs, it holds as well
for surface ships (or "targets", as my ex-submariner buddy calls them).
Thus, I think the argument that the USN doesn't trust automation is
weakened, since the USN has more to worry about than just the automated
vs. non-automated safety tradeoff.

This weight-consideration argument also has some bearing on the aircraft
sensor question. More weight in sensors means a larger plane, more systems
that can break, more potential for overlooking problems during maintenance,
and more ways to confuse the flight crew. (Scenario: Crew cannot see wing
to tell if engine has fallen off, but sensor says it has; did it fall off
or did the sensor fail? Did anyone ever see the movie where the flight crew
shut down their last remaining engine because coffee, spilled into the control
panel, caused the "Engine Fire" warning to sound? So we have sensors to
check the sensors, to check those sensors, etc.)
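
The classical way to cut off that regress is redundancy plus voting rather
than an endless tower of checkers.  A minimal sketch of a two-out-of-three
vote, with hypothetical readings:

    # Hypothetical two-out-of-three vote over redundant sensors.
    # Trades extra sensor weight for tolerance of a single sensor failure.

    def majority_vote(a, b, c):
        # Return the value at least two of the three sensors agree on.
        if a == b or a == c:
            return a
        if b == c:
            return b
        return None   # all three disagree: flag it for the crew, don't guess

    # Example: one failed sensor claims the engine is gone.
    print(majority_vote("engine present", "engine present", "engine missing"))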

Dan C Duval, Tektronix, Inc
uucp: tektronix!tekigm!dand


Keep It Simple as applied to commercial nuclear power generation

"Martin Harriman" <"SRUCAD::MARTIN"@sc.intel.com>
Fri, 17 Oct 86 17:05 PDT
I think it might be rather amusing if the nuclear power generating plants
in the US were all run by some (reasonably competent) admiral.  Oh well...

The nuclear power (design) industry--the folks who design the nuclear
steam supply systems and their controls--uses a very similar approach to
that used in the Navy.  The automated controls on the reactors I am
familiar with are limited to the reactor protective systems--the system(s)
that detect a fault condition, and trip the reactor.  These systems are
kept very simple (on the same principle as keeping a circuit breaker as
simple as possible for the job it does).

Control of reaction rate and profile is accomplished through manual adjustments
of the control rods and the water chemistry.

The reliability of this system (and its safety) depends on the quality of
the reactor operator (that is, the power company operating the reactor).
One of the more encouraging signs in recent years has been the NRC's
willingness to suspend the operating licenses of operators with poor
safety records: the TVA suspension is the most obvious example.

  --Martin Harriman, Intel Santa Cruz


Works as Documented

Martin Minow, DECtalk Engineering ML3-1/U47 223-9922 <minow%regent.DEC@decwrl.DEC.COM>
22-Oct-1986 0842
> When was the last time you used a mailer, operating system, compiler,
> etc.. that you trusted to work *exactly* as documented on all kinds of
> input?  (If you have, pls share it with the rest of us!)

The problem is not whether the software (etc.) works as documented, but
whether it works as we *expect* it to.

This distinction has wider applicability.  We *expect* SDI to protect us
from a Russian missile attack.  SDI is *documented* to protect some large
percentage of our missiles from a Russian missile attack. 

Martin.


Re: Editorial on SDI

<scott@rochester.arpa>
Wed, 22 Oct 86 11:51:50 edt
RISKS-3.82 contains a response from Andy Freeman to an editorial
I posted to RISKS-3.81.  Andy and I have also exchanged a fair amount
of personal correspondence in the past couple of days.  In that
correspondence he maintains that I have disguised a political argument
as expert opinion.  This from his posting to RISKS:

> Most op-ed pieces written by experts (on any subject, supporting any
> position) simplify things so far that they're actually incorrect.  The
> public may be ignorant, but they aren't stupid.  Don't lie to them.
> (This is one of the risks of experts.)

I do not believe that I have oversimplified anything.  I certainly haven't
lied to anybody (let's not get personal here, ok?).

When technical arguments disagree with government policy, it is standard
practice to dismiss those arguments as "purely political."  Almost everything
that a citizen says or does in a democratic society has political overtones,
but those overtones do not in and of themselves diminish the technical
validity of an argument.  "The emperor has no clothes!" can be regarded
as a highly political statement.  It is also technically accurate.

In my original editorial, I declared that we could not be certain that
the software developed for SDI would work correctly, 1) because we don't
know what 'correctly' means, and 2) because even if we did, we wouldn't
be able to capture that meaning in a computer program with absolute
certainty.  Andy takes issue with point 1).  My words on the subject:

   > Human commanders cope with unexpected situations by drawing on their
   > experience, their common sense, and their knack for military
   > tactics.  Computers have no such abilities.  They can only deal with
   > situations they were programmed in advance to expect.

This is the statement Andy feels is 'actually incorrect'.  His words:

> Operating systems, compilers, editors, mailers, etc. all receive input
> that their designers/authors didn't know about exactly.  Some people
> believe that computer reasoning is inherently less powerful than human
> reasoning, but it hasn't been proven yet....
>
> It can be argued that SDI isn't understood well enough for humans to
> make the correct decisions (assuming super-speed people), let alone
> for them to be programmed.  That's a different argument and Dr. Scott
> is (presumably) unqualified to give an expert opinion.

Very true, the designers of everyday programs don't know about their
input *exactly*, but they *are* able to come up with complete
characterizations of valid inputs.  That is what counts.  The "inputs"
to SDI include virtually anything the Soviets can do on the planet or
in outer space.  It does not require an expert to realize that there is
no way to characterize the set of all such actions.  A command interpreter
is free to respond "invalid input; try again"; SDI is not.
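
To make the distinction concrete, here is a toy sketch (the command set is
invented, purely for illustration) of why complete characterization of the
inputs matters:

    # A command interpreter can enumerate its valid inputs in advance
    # and safely reject everything else -- the option SDI does not have.
    # The command set here is invented, purely for illustration.

    VALID_COMMANDS = {"list", "send", "delete", "quit"}

    def interpret(line):
        words = line.strip().split()
        if not words or words[0] not in VALID_COMMANDS:
            return "invalid input; try again"   # the safe default
        return "executing " + words[0]

    print(interpret("send mail"))    # -> executing send
    print(interpret("???"))          # -> invalid input; try again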

I stand by the technical content of my article: SDI cannot provide
an impenetrable population defense.  Impenetrability requires certainty,
and that we can never provide.  Though the White House has kept
debate alive in the minds of the public, it is really not an issue
among the technically literate.  Almost no one with scientific credentials
is willing to maintain that SDI can defend the American population
against nuclear weapons.  There are individuals, of course (Edward Teller
springs to mind), but in light of the evidence I must admit to a personal
tendency to doubt their personal or scientific judgment.  Certainly
there is no groundswell of qualified support to match the incredible
numbers of top-notch physicists, engineers, and computer scientists
who have publicly declared that population defense is a myth.

What we do see are large numbers of individuals who believe that the
SDI program should continue for reasons *other* than perfect population
defense.  It is possible to make a very good case for developing
directed energy and kinetic weapons to keep the U.S. up-to-date in
military technology and to enhance our defensive capabilities.

My editorial is not anti-SDI; it is anti-falsity in advertising.
Those who oppose SDI will oppose it however it is sold.  Those who
support it will find it very tempting to allow the "right" ends to
be achieved (with incredible budgets) through deceptive means, but
that is not how a democracy is supposed to work.  Let the public know
what SDI is all about, and let us debate it for what it is.


Risks from Expert Articles

<LIN@XX.LCS.MIT.EDU>
Tue, 21 Oct 1986 22:43 EDT
    LIN@XX.LCS.MIT.EDU (Herb?) writes:
        When was the last time you used a mailer, operating system, compiler,
        etc.. that you trusted to work *exactly* as documented on all kinds of
        input?  (If you have, pls share it with the rest of us!)

    From: Andy Freeman <ANDY at Sushi.Stanford.EDU>
    The programs I use profit me, that is, their benefits to me exceed
    their costs.  The latter includes their failures (as well as mine).  A
    similar metric applies to weapons in general, including SDI.

But you can bound the costs of using a faulty mailer.  You can't with
missile defense for population.

    Dr. Scott's expertise applies to the question of
    whether a given spec can be programmed acceptably, not whether there
    is a spec that can be implemented acceptably.  Much of the spec,
    including the interesting parts of the definition of "acceptable", is
    outside CS, and (presumably) Dr. Scott's expertise.

Are you saying that computer scientists should not be calling attention to
the problem of writing specifications?  Or that they have no expertise in
knowing the consequences of faulty specs?  I think quite the contrary --
computer scientists know, probably better than anyone else, how important
the specs are to a functional program.  I agree that CS background does not
grant people particular knowledge about which specs are proper, but in my
view CS people are entirely proper to holler about lousy specs and what
would happen if they were bad.

    Another danger (apart from simplification to incorrectness) of expert
    opinion articles is unwarranted claims of expertise.  Dr. Scott
    (presumably) has no expertise in directed energy weapons yet he claims
    that they can be used against cities and missiles in silos.

Reports that space-based lasers can be used against cities were
recently published, and a fairly simple order of magnitude calculation
that anyone can do with sophomore physics suggests that city attack
with lasers is at least plausible.  You're right about silos.
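
For the curious, here is one version of that back-of-envelope calculation;
every figure below is a rough assumption for an order-of-magnitude check,
not a claim about any real system:

    # Back-of-envelope: could a space-based laser start fires in a city?
    # All figures are rough assumptions, good only to an order of magnitude.

    laser_power_W        = 1e7    # assume a ~10-megawatt continuous laser
    ignition_J_per_m2    = 1e5    # ~10 J/cm^2 to ignite dry combustibles
    delivery_efficiency  = 0.1    # guess at atmospheric/pointing/spot losses

    area_per_second = laser_power_W * delivery_efficiency / ignition_J_per_m2
    print(area_per_second, "m^2 ignited per second")        # ~10 m^2/s
    print(area_per_second * 3600, "m^2 ignited per hour")   # ~36,000 m^2/hr

    # Even with generous losses, hours of dwell time could start a great
    # many fires -- enough to make city attack "at least plausible".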

    Both proponents and opponents of SDI usually agree that it doesn't
    deal with cruise missiles.  If you can kill missiles in silos and attack
    cities, cruise missiles are easy.

Hardly.  The problem with cruise missiles is finding the damn things.
Cities and silos are EASY to find.


Stealth vs. ATC / SDI Impossibility? / Missing Engines?

Douglas Humphrey <deh@eneevax.umd.edu>
Wed, 22 Oct 86 12:52:44 EDT
This is kind of a grab bag of responses to the last RISKS. 

Stealth vs. ATC - The general public does not seem to know a lot about the
Air Traffic Control system and how it works.  In controlled airspace, such
as around large airports, a Terminal Control Area (TCA) is defined, which
only aircraft equipped with a transponder may enter.  In reality, the rules
and flavors concerned with this whole process are very complex and aren't
needed here.  If you are really interested, go to Ground School.  The
transponder replies to the interrogation of the ATC radar, providing at
least a bright radar image and, in more sophisticated systems, the call
sign of the aircraft, heading, altitude, etc.  Thus, the concept of Stealth
vs. ATC is not real.  If the stealth aircraft is flying under Positive
Control of ATC, then it will have the transponder.  If it does not have
one, then it had better stay out of busy places; otherwise it is illegal,
and the pilot sure as hell will have his ticket pulled.

    [Peter Ladkin also responded on this point.  However, if the stealth
     plane is foreign/unfriendly/hostile/sabotage-minded/..., and NOT flying
     under positive control of ATC, then this argument does not hold.  PGN]
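
For readers who have not met transponders, a minimal sketch of the
interrogation/reply exchange described above; the squawk code and altitude
are made up:

    # Sketch of an ATC secondary-radar interrogation (Mode A/C style).
    # Values are hypothetical; real avionics are considerably more involved.

    class Transponder:
        def __init__(self, squawk, altitude_ft):
            self.squawk = squawk            # 4-digit code assigned by ATC
            self.altitude_ft = altitude_ft

        def reply(self, mode):
            if mode == "A":
                return {"squawk": self.squawk}            # identity
            if mode == "C":
                return {"altitude_ft": self.altitude_ft}  # pressure altitude
            return None   # no reply: the radar sees only a raw skin paint

    xpdr = Transponder(squawk="4521", altitude_ft=11000)
    print(xpdr.reply("A"), xpdr.reply("C"))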

SDI Impossibility? - I have a good background in physics and computing
(software and VLSI hardware), and considerable experience with DEW
(Directed Energy Weapons), and I have yet to hear ANYONE explain WHY SDI
is impossible.  I hear all this about the complexity of the software, but
I used to be part of a group that supported a software system of over 20
million lines of code, and it rarely had problems.  Admittedly, we wrote
simulators for a lot of the load, since we did not want to try experimental
code out on the production machines, but we never had a simulator fail to
correctly simulate the situation.  There were over 100 programmers
supporting this stuff; it was properly managed, and it all worked well.
Is someone suggesting that the incoming target stream cannot be simulated?
Why not?  We do it now on launch-profile simulations involving the DEW
(Distant Early Warning) network and a lot of other sensor systems.  Is
someone suggesting that PENAIDS (Penetration Aids) cannot be simulated?
Why not?  We do that now also.  Worst-case studies just treat all of the
PENAIDS as valid targets.  If you can intercept THAT mess, then you can
stop anything!
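
A hedged sketch of that worst-case accounting, with invented numbers, may
make the claim concrete:

    # Worst-case sizing: treat every penetration aid as a valid target,
    # as the argument above suggests.  All counts are invented.

    warheads         = 1000
    decoys_per_rv    = 10
    kill_probability = 0.8    # assumed per-shot interceptor effectiveness

    worst_case_targets = warheads * (1 + decoys_per_rv)
    shots_needed = worst_case_targets / kill_probability
    print(worst_case_targets, "objects to engage")     # 11000
    print(int(shots_needed), "shots at Pk = 0.8")      # 13750

    # Perfect discrimination would cut that to warheads / kill_probability,
    # about 1250 shots -- a factor-of-ten difference in required capacity.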

I get the feeling that people are assuming that the SDI software is going
to be one long chunk of code running on one machine, and that if it ever
sees anything that is not what it expects, it's going to do a HALT and
stop the entire process.  Wrong.  I wouldn't build a game that way, much
less something like SDI.

So.  The Challenge.  People out there who think it is Impossible, please
identify what is impossible.  Pointing systems?  Target acquisition?
Target classification?  Target discrimination?  Destruction of the targets?
Nobody is saying that it is easy.  Nobody is saying that our current level
of technology is capable of doing it all perfectly.  But it sure isn't
(in my opinion) impossible.

   [We've gone around on this one before.  DEH's message is somewhat fatuous,
    but needs a serious response.  Before responding further, make sure you
    have read the Parnas Papers from American Scientist, Sept-Oct 1985, also
    reprinted in ACM Software Engineering Notes, October 1985, and the
    Communications of the ACM, December 1985.  But remember that we never seem
    to converge in these discussions.  Parnas does not PROVE that SDI is
    IMPOSSIBLE.  He gives some good reasons to worry about the software.  No
    one else can prove that it CAN BE IMPLEMENTED to satisfy rigorous
    requirements for reliability, safety, security, nonspoofability, etc.,
    under all possible attack modes and environmental circumstances -- even
    with full-scale deployment in real combat.  Especially when operating
    under stressed conditions, things often fail for perverse reasons not
    sufficiently anticipated.  (That should be particularly clear to long-time
    readers of RISKS.)  Think about OVERALL SYSTEM TESTING in the absence of
    live combat as one problem, among others.  Remember, this Forum exists as
    part of a social process, and contributions according to the masthead
    guidelines are welcome.  But SDI debates seem to degenerate repeatedly
    into something like religious wars.  So bear with me if I try to close
    the Pandora's box that I have again reopened.  I would like to see some
    intelligent open discussion relating to computers and related
    technologies in SDI, but perhaps that is a futile wish.  Once again,
    much discussion has taken place before, on both RISKS and ARMS-D; new
    RISKS participants might want to check back issues.  See the summary
    issues at the end of Volumes 1 and 2 noted above, and the end of Volume
    3, which will happen soon.  Material relevant to computers belongs in
    RISKS; otherwise, send it to ARMS-D.  PGN]

Missing Engines - In most aircraft the loss of a major component of
the control system is pretty obvious, generally announced by an abrupt
change in the flight characteristics of the aircraft.  The same goes for
the loss of an engine.  I am not sure why a pilot would need a video monitor
to tell him that Number 2 just fell off the wing, or that he no longer
has a left horizontal stabilizer.  He will no doubt understand this from the
way the aircraft is acting.  Most pilots have a good understanding of
why they are flying and how, and are able to discern the condition of their
aircraft from how it behaves.  Certainly I know of airline pilots who have
been able to tell from the handling of a DC-9 that a cargo door was
partially open, even though the indicator in the cockpit said it was closed.

   [See above note from Dan Duval.]

I might mention that the landing gear could be a good place for some sort
of camera system.  Pilots get rather paranoid about the state of the
landing gear when they fail to get 3 green lights up in the cockpit.

Doug Humphrey
Digital Express Inc.
