The RISKS Digest
Volume 3 Issue 64

Wednesday, 24th September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Sane sanity checks / risking public discussion
Jim Purtilo
More (Maybe Too Much) On More Faults
Ken Dymond
Re: Protection of personal information
Correction from David Chase
Towards an effective definition of "autonomous" weapons
Herb Lin
Clifford Johnson [twice each]
Info on RISKS (comp.risks)

Sane sanity checks / risking public discussion

Jim Purtilo <purtilo@brillig.umd.edu>
Tue, 23 Sep 86 12:54:10 EDT
  [Regarding ``sanity checks'']

Let us remember that there are sane ``sanity checks'' as well as the other 
kind. About 8 years ago while a grad student at an Ohio university that 
probably ought to remain unnamed, I learned of the following follies:

The campus had long been doing class registration and scheduling via
computer, but the registrar insisted on a ``sanity check'' in the form of
hard copy.  Once each term, a dozen guys in overalls would spend the day
hauling a room full of paper boxes over from the CS center, representing a
paper copy of each document that had anything to do with the registration
process.  [I first took exception to this because their whole argument in
favor of "computerizing" was based on reduced costs, but I guess that should
be hashed out in NET.TREE-EATERS.]

No one in that registrar's office was at all interested in wading through
all that paper. Not even a little bit.

One fine day, the Burroughs people came through with a little upgrade to the
processor used by campus administration.  And some "unused status bits"
happened to float the other way.

This was right before the preregistration documents were run, and dutifully
about 12,000 students' preregistration requests were scheduled and mailed
back to them.  All of them were signed up "PASS/FAIL".  This was
meticulously recorded on all those trees stored in the back room, but no one
wanted to look.

I suppose a moral would be ``if you include sanity checks, make sure a sane
person would be interested in looking at them.''
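
Purely as an illustration (the field name, threshold, and code below are
invented; nothing like this ran at that registrar's office), a sanity check
that a sane person might actually look at could summarize each run and flag
implausible aggregates, rather than printing every record:

    from collections import Counter

    # Hypothetical record format: each registration carries a "grading"
    # field that is normally "GRADED" and only occasionally "PASS/FAIL".
    def sanity_check(registrations, max_pass_fail_fraction=0.10):
        """Flag implausible aggregates instead of dumping raw hard copy."""
        counts = Counter(r["grading"] for r in registrations)
        total = sum(counts.values())
        pf_fraction = counts.get("PASS/FAIL", 0) / total if total else 0.0
        print(total, "registrations; breakdown:", dict(counts))
        if pf_fraction > max_pass_fail_fraction:
            # 12,000 students all marked PASS/FAIL trips this immediately.
            raise RuntimeError(
                "%.0f%% of registrations are PASS/FAIL -- suspect "
                "corrupted status bits; hold the mailing."
                % (100 * pf_fraction))

    # The flipped-bit scenario: the check fails before anything is mailed.
    sanity_check([{"grading": "PASS/FAIL"}] * 12000)

A one-page summary that someone will actually read beats a room full of
boxes that no one will.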


  [Regarding break-ins at Stanford]

A lot of the discussion seems to revolve around ``hey, Brian, you got what
you asked for'' (no matter how kindly it is phrased).  Without editorializing
further either way, I'd like to make sure that Brian is commended for
sharing the experience.  It sure would be a shame if ``coming clean'' about
a bad situation came to be viewed as itself constituting a risk...

               [I am delighted to see this comment.  Thanks, Brian!  PGN]


More (Maybe Too Much) On More Faults

"DYMOND, KEN" <dymond@nbs-vms.ARPA>
23 Sep 86 09:18:00 EDT
The intuitive sense made by Dave Benson's argument in RISKS 3.50, that

  >We need to understand that the more faults found at any stage of
  >engineering software the less confidence one has in the final product.  
  >The more faults found, the higher the likelihood that faults remain.

seems to invite a search for confirming data, because there are also
counterintuitive possibilities.  For example, there is the notion that the
earlier in the life cycle errors are detected, the cheaper they are to
remedy; there is a premium on finding faults early.  And there is the
further notion that, with tools for writing requirements in some kind of
formal language that can be checked for syntactic and semantic completeness
and consistency, it is possible to detect at least some errors at the
requirements stage that might not otherwise have been caught until later.
So SE projects using these and similar methods for other stages in the life
cycle would tend to show more errors earlier.  Would the products of these
projects therefore be less reliable than others made with, say, more
traditional, less careful, design and programming practice?
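
To make the formal-requirements point concrete, here is a toy sketch (the
requirement format and checker are invented for illustration; real
requirements languages and analyzers are far richer) of a mechanical
consistency check that surfaces a fault at the requirements stage:

    # Toy requirements: (id, signal, required_value).
    REQS = [
        ("R1", "alarm_on_overtemp", True),
        ("R2", "alarm_on_overtemp", False),   # contradicts R1
        ("R3", "log_all_commands", True),
    ]

    def find_contradictions(reqs):
        seen, conflicts = {}, []
        for rid, signal, value in reqs:
            if signal in seen and seen[signal][1] != value:
                conflicts.append((seen[signal][0], rid, signal))
            seen.setdefault(signal, (rid, value))
        return conflicts

    for first, second, signal in find_contradictions(REQS):
        print("Inconsistent requirements %s and %s on %s"
              % (first, second, signal))

Catching R1/R2 here counts as one more fault found early; the question is
whether that count should lower or raise our confidence in the final
product.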

Dave makes the further argument in RISKS 3.57:

  >Certain models of software failure place increased "reliability" on
  >software which has been exercised for long periods without fault. [...]

The models of software reliability exist to order our thinking about
reliability and to help predict behavior of software systems based on
observation of failure up to the current time.  The models that show
failures clustered early in time and then tapering off later do indeed model
an intuition but maybe not the one that more faults mean yet more faults.
Hence the need for data.  I suspect that the reality as shown by data, if it
exists, would be more complex than intuition allows.  More errors discovered
so far may just mean better software engineering methods.  As for other
engineering fields, the failure-versus-time curve for manufactured products
is often taken to be bathtub-shaped, not exponentially decaying: more
failures are expected at the beginning and near the end of the useful life
of a "hard" engineered product.  Of course, "an unending sequence of
irremediable faults" should be the kiss of death for any product, whether
from hard engineering or soft.  But the trick is in knowing that the
sequence is unending.  The B-17, I seem to remember reading, had a rather
rocky development road in the 1930s, yet was not abandoned.  Was it just
that the aeronautical engineers at Boeing then had in mind some limit on
the number of faults, and that this limit was not exceeded?  It might be
easy to say in hindsight.  On the other hand, sometimes foresight, in terms
of spotting a poor design at the outset, makes a difference, as with the
only Chernobyl-type power reactor outside the Soviet bloc.  It was bought
by Finland (perhaps this is what "Finlandization" means?).  However, the
Finns also bought a containment building from Westinghouse.
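
For readers who have not seen it, the bathtub shape can be sketched
numerically (the curves below are invented for illustration, not fitted to
any product's data): the hazard is high during infant mortality, roughly
flat in mid-life, and rises again at wear-out, whereas a decaying hazard of
the kind some software models assume only falls:

    import math

    def bathtub_hazard(t, life=100.0):
        # Toy curve: infant-mortality term + constant term + wear-out term.
        return (0.5 * math.exp(-t / 5.0) + 0.02
                + 0.5 * math.exp((t - life) / 5.0))

    def decaying_hazard(t):
        # Toy "reliability growth" style hazard that only decreases.
        return 0.5 * math.exp(-t / 20.0) + 0.02

    for t in (0, 10, 50, 90, 100):
        print("t=%3d  bathtub=%.3f  decaying=%.3f"
              % (t, bathtub_hazard(t), decaying_hazard(t)))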

Ken Dymond


Re: Protection of personal information

David Chase <rbbb@rice.edu>
Tue, 23 Sep 86 08:56:18 EDT
                       [The two participants requested this clarification 
                        be included for the record...  PGN]

You misinterpreted my message in a small way; I was writing about a
university attended by a friend, NOT Rice University.  To my knowledge, Rice
has been very good about protecting its students' privacy.  My student
number is NOT my social security number, though the university has that
number for good reasons.  I do not want anyone to think that I was talking
about Rice.       David


Towards an effective definition of "autonomous" weapons

<LIN@XX.LCS.MIT.EDU>
Tue, 23 Sep 1986 18:00 EDT
         [THE FOLLOWING DISCOURSE INVOLVING CLIFF AND HERB IS LIKELY
          TO CONTINUE FOR A WHILE ON ARMS-D.  PLEASE RESPOND TO HERB LIN, 
          NOT TO RISKS ON THIS ONE.  HERB HAS VOLUNTEERED TO SUBMODERATE,
          AND THEN SUBMIT THE RESULTS TO RISKS.  PGN]

    From: Clifford Johnson <GA.CJJ at Forsythe.Stanford.Edu>

    An "autonomous weapon" [should be] defined to be any weapons system
    which is de facto preprogrammed to take decisions which, under the law 
    of nations, require the exercise of political or military discretion.

It's not a bad first attempt, and I think it is necessary to get a
handle on this.  Recognizing that you have done us a service in
proposing your definition, let me comment on it.

I don't understand what it means for a weapon to "take a decision".  Clearly
you don't intend to include a depth charge set to explode at a certain
depth, and yet a depth charge could "decide" to explode at 100 feet given
certain input.
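
The distinction might be put schematically (both routines below are
invented for illustration, not drawn from any real system): a depth charge
merely applies a threshold a human chose in advance, while the troublesome
case is a system that itself chooses which target to engage:

    # Schematic only: a preset trigger versus machine target selection.

    def depth_charge(depth_ft, preset_ft=100):
        # All discretion here was exercised by the person who set preset_ft.
        return depth_ft >= preset_ft          # detonate?

    def autonomous_selector(contacts, classify):
        # The machine itself chooses WHICH contact to engage -- the step
        # that looks like target SELECTION rather than a preset condition.
        hostiles = [c for c in contacts if classify(c) == "hostile"]
        return max(hostiles, key=lambda c: c["threat_score"], default=None)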

What I think you object to is the "preprogrammed" nature of a weapon,
in which a chip is giving arming, targeting and firing orders rather
than a human being.  What should be the role of the human being in
war?  I would think the most basic function is to decide what targets
should be attacked.  Thus, one modification to your definition is

    An "autonomous weapon" [should be] defined to be any weapons 
    system which is preprogrammed to SELECT targets.

This would include things like roving robot anti-tank jeeps, and
exclude the operation of LOW for the strategic forces.

But this definition would also exclude "fire-and-forget" weapons, and
I'm not sure I want to do that.  I want human DESIGNATION of a target
but I don't want the human being to remain exposed to enemy fire after
he has done so.  Thus, a second modification is 

    An "autonomous weapon" [should be] defined to be any weapons 
    system which is preprogrammed to SELECT targets in the absence of
    direct and immediate human intervention.

But then I note what a recent contributor said — MINES are autonomous
weapons, and I don't want to get rid of mines either, since I regard
mines as a defensive weapon par excellence.  Do I add mobility to the
definition?  I don't know.


Towards an effective definition of "autonomous" weapons

Clifford Johnson <GA.CJJ at Forsythe.Stanford.Edu>
Monday, 22 September 1986 21:43-EDT
There's great difficulty in defining "autonomous weapons" so as to isolate
the element that seems intuitively "horrible" about robot-decided death.
But a workable definition is necessary if, as CPSR tentatively proposes,
such weapons are to be declared illegal under international law, as chemical
and nuclear weapons have been.  (Yes, the U.N. has declared even the
possession of nukes illegal, but it's not a binding provision.)

The problem is, of course, that many presently "acceptable" weapons already
discriminate targets indiscriminately, e.g., target-seeking munitions and
even passive mines.  Weapons kill, and civilians get killed too; that's war.
Is there an element exclusive to computerized weapons that is meaningful?

I don't have an answer, but feel the answer must be yes.  I proffer two
difficult lines of reasoning, derived from the philosophy of automatic
decisionmaking rather than extant weapon systems.  First, weapon control
systems that may automatically target-select among options based upon a
utility function (point score) that weighs killing people against destroying
hardware would seem especially unconscionable.  Second (though this
presumes a meaningful definition of "escalation"), any weapons system that
has the capability to automatically escalate a conflict, and is
conditionally programmed to do so, would also seem unconscionable.
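
The first category can be shown in skeleton form (entirely schematic; the
names and weights below are invented).  The structural point is that the
trade-off has been frozen into constants chosen by a programmer long before
any engagement, which is where the discretion has gone:

    # Schematic only.  The "decision" is an argmax over a preprogrammed
    # point score whose weights were fixed in advance.
    WEIGHTS = {"expected_casualties": -1.0, "hardware_value": 0.7}

    def utility(option):
        return sum(w * option[key] for key, w in WEIGHTS.items())

    def select_target(options):
        return max(options, key=utility)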

Into the first bracket would conceivably fall battle management software
and war games; into the second would fall war tools that in operation (de
facto)
would take decisions which according to military regulations would otherwise
have required the exercise of discretion by a military commander or
politician.  The latter category would embrace booby-trap devices activated
in peacetime, such as mines and LOWCs; and here there is the precedent of
law which prohibits booby traps which threaten innocents in peacetime.
Perhaps the following "definition" could stand alone as *the* definition of
autonomous weapons to be banned:

An "autonomous weapon" is defined to be any weapons system which is
de facto preprogrammed to take decisions which, under the law of
nations, require the exercise of political or military discretion.

This might seem to beg the question, but it could be effective: military
manuals and international custom are often explicit about each commander's
degree of authority/responsibility, and resolving whether a particular
weapon was autonomous would then be a CASE-BY-CASE DETERMINATION.  Note that
this could, and would, vary with the sphere of application of the weapons
system.  This is reasonable, just as there are circumstances in which
blockades or mining are "legal" or "illegal."

Of course, a case in point would be needed to launch the definition.
Obviously, I would propose that LOWCs are illegal.  How about battle
management software which decides to engage seemingly threatening entities
regardless of flag, in air or by sea?  Any other suggestions?  Does anyone
have any better ideas for a definition?


Towards an effective definition of "autonomous" weapons

<LIN@XX.LCS.MIT.EDU>
Tue, 23 Sep 1986 18:09 EDT
In thinking about this question, I believe that ARMS-D and RISKS could
perform a real service to the defense community.  There is obviously a
concern among some ARMS-D and RISKS readers that autonomous weapons
are dangerous generically, and maybe they should be subject to some
legal restrictions.  Others are perhaps less opposed to the idea.

It is my own feeling that autonomous weapons could pose the same danger to
humanity that chemical or biological warfare poses, though they may be
militarily effective under certain circumstances.

I propose that the readership take up the questions posed by Cliff's recent
contribution:

    What is a good definition of an autonomous weapon?  

    What restrictions should be placed on autonomous weapons, and why?

    How might such limits be verified?

    Under what circumstances would autonomous weapons be militarily
    useful?

    Should we be pursuing such weapons at all?

    How close to production and deployment of such weapons are we?

Maybe a paper could be generated for publication?
