The RISKS Digest
Volume 3 Issue 55

Monday, 15th September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Hardware/software interface and risks
Kevin Kenny
F-16
Holleran
Eugene Miya
Ihor Kinal
Doug Wade
Info on RISKS (comp.risks)

Hardware/software interface and risks

Kevin Kenny <kenny@b.cs.uiuc.edu>
Thu, 11 Sep 86 10:37:05 CDT
In RISKS 3.53 mlbrown at ncsc-wo writes:

>In RISKS 3.51 Bill Janssen writes of errors made in failing to consider
>the interaction of the hardware and the software under design.  This
>failing was all too common in the writing of assembly and machine code 
>during the early days of programming.  Discrete wired machines often had
>OP codes that were not generally well known (i.e. the computer designers
>kept it secret)...

This posting raises another interesting issue; in any system with a
long service life, there is a likelihood that the underlying hardware
technology will change.  Use of anything undocumented on a particular
machine is asking for trouble when that machine is replaced with a
``compatible'' one that lacks the undocumented feature.

In fact, the undocumented op-codes on 4-pi and 360 were not ``kept
secret'' by the machine designers; in many cases they simply were not
foreseen.  It turned out that the combinations of operations that were
performed by certain bit patterns did something useful.  The modern
microprocessors have this tendency also; witness the plethora of
undocumented opcodes on the Z80.

The modern mainframe manufacturers have all been burned at one time or
another by users who take advantage of undocumented features and then
have their programs fail when transported to a ``compatible'' machine
using newer technology; the IBM 1401 compatibles brought out by
Honeywell after IBM dropped the product line are the classic example.
Some of the manufacturers now consider it worth the cost to add logic
to verify that a program is using only documented instructions
(generate a machine fault rather than an undocumented result); their
experience is that documenting something to be forbidden doesn't keep
the hackers from using it.  There's some justification for the
``everything not permitted is forbidden'' attitude; I've seen
mysterious failures years after a machine conversion caused by hardware
incompatibilities in little-used areas of the software.
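
A rough sketch of that check, in C and against an entirely hypothetical
instruction set (none of the opcode names or values below belong to any
real machine), might look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustration only: a toy decoder for a made-up instruction set.
     * Anything outside the documented set raises a fault instead of
     * doing whatever the circuitry happens to produce. */

    enum { OP_LOAD = 0x01, OP_STORE = 0x02, OP_ADD = 0x03, OP_HALT = 0x0F };

    static void fault(unsigned char op)
    {
        fprintf(stderr, "illegal instruction fault: opcode 0x%02X\n",
                (unsigned) op);
        exit(1);
    }

    static void execute(unsigned char op)
    {
        switch (op) {
        case OP_LOAD:   /* documented behavior goes here */  break;
        case OP_STORE:  /* documented behavior goes here */  break;
        case OP_ADD:    /* documented behavior goes here */  break;
        case OP_HALT:   exit(0);
        default:        fault(op);  /* not permitted, therefore forbidden */
        }
    }

    int main(void)
    {
        unsigned char program[] = { OP_LOAD, OP_ADD, 0x6B, OP_HALT };
        unsigned i;
        for (i = 0; i < sizeof program; i++)
            execute(program[i]);
        return 0;
    }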

I have also discovered successful penetrations of security on systems
in which undocumented opcodes allowed user programs to perform
privileged operations.  I will deliberately refrain from discussing
these further since some of the designs thus penetrated are still in
service in the field.

The goal hardware designers should aim for is to provide predictable
results under all circumstances, even in the cases that are documented
to be illegal.

Kevin Kenny


F-16 exceeding 55 mph

<Holleran@DOCKMASTER.ARPA>
Thu, 11 Sep 86 00:49 EDT
  I would like to provide some diversion on Jon Jacky's comments.

         >Date: Mon, 8 Sep 86 16:55:19 PDT
         >From: jon@uw-june.arpa (Jon Jacky)
         >Subject: Upside-down F-16's and "Human error"

         >... should prevent any unreasonable behavior.  This way lies
         >madness.  Do we have to include code to prevent the speed from
         >exceeding 55 mph while taxiing down an interstate highway?

  I agree with this and subsequent statements about the capabilities of
the operator (the pilot).

  Let's examine a silly analysis of providing that particular code.  After you
code the routine to prevent "exceeding the speed", you are going to have
to test it.  Thus, the F-16 will have to "attempt" to exceed 55 mph on the
expressway.  Whether the code is there or not, the trooper is still going to
give you a ticket.  You have already made his day, but no one will believe him
without the pilot getting a ticket.  Besides, he has to make his quota.  So you
may as well save your money for more important coding.  Then the pilot will
appear on either 60 Minutes or Johnny Carson to explain his side of the
problem.  The analysis could go further, but it belongs in a comedian's
dialogue now.

  I would say that many "unreasonable behavior" situations, analyzed in a
silly mode, would show that some coding efforts should not be done.  You may
find out that certain situations cannot be tested in a justifiable fashion.
As Jon Jacky and others have concluded, let's be reasonable about the
questions responsible people should be addressing, vice situations which
have little chance of occurring.  Good analysis will be better if common
sense helps us to prioritize these situations.


Re: F-16 software

Eugene Miya <eugene@AMES-NAS.ARPA>
10 Sep 1986 1317-PDT (Wednesday)
It seems F-16's are a hot topic everywhere.  I think it's a novelty
thing, like computers, except for aeronautics.

> I am trying to make the point that the gross simplification of
> "preventing bomb release while inverted" doesn't map very well to what I
> assume the actual goal is: "preventing weapons discharge from damaging
> the aircraft".  This is yet another instance where the assumptions made
> to simplify a real-world situation to manageable size can easily lead to
> design "errors", and is an architypical "computer risk" in the use of
> relatively simple computer models of reality.
> 
> Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw

Excellent point.
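
As a rough illustration of that distinction, in C and with hypothetical
sensor names and a made-up threshold, an interlock keyed to the measured
normal load factor at release comes closer to "don't let the store damage
the aircraft" than one keyed to roll attitude:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustration only -- hypothetical names and a made-up threshold.
     * The question asked is whether a released store will actually fall
     * away from the aircraft, not whether the aircraft is inverted. */

    #define MIN_SEPARATION_G 0.5   /* made-up minimum normal load factor */

    /* the oversimplified rule: "don't release while inverted" */
    static bool release_ok_by_attitude(double roll_deg)
    {
        return roll_deg > -90.0 && roll_deg < 90.0;
    }

    /* closer to the real goal: positive g at the rack, store separates */
    static bool release_ok_by_load_factor(double normal_load_factor_g)
    {
        return normal_load_factor_g >= MIN_SEPARATION_G;
    }

    int main(void)
    {
        /* inverted over the top of a loop but still pulling +2 g: the
         * attitude rule refuses a release the load-factor rule permits */
        printf("attitude rule:    %d\n", release_ok_by_attitude(180.0));
        printf("load-factor rule: %d\n", release_ok_by_load_factor(2.0));
        return 0;
    }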

Several things strike me about this problem.  First, the language used
by writers up to this point doesn't include words like "centrifugal force"
and "gravity."  This worries me about the training of some computer people
for jobs like writing mission-critical software [Whorf's "If the word
does not exist, the concept does not exist."].  I am awaiting a paper
by Whitehead which I am told talks about some of this.

While it can certainly be acknowledged that there are uses which are novel
(Spencer cites "lob" bombing, and others cite other reasons [all marginal]),
equal concern must be given to straight-and-level flight AND those
novel cases.  In other words, we have to assume some skill on the part of
pilots [Is this arrogance on our part?].

Another problem is that planes and Shuttles do not have the types of sensory
mechanisms which living organisms have.  What is damage if we cannot
"sense" it?  Sensing equipment costs weight.  I could see some interesting
dialogues a la "Dark Star."

Another thing is that the people who write simulations seem to have
great difficulty discriminating between the quality of their simulations
and the "real world" in the presence of incomplete cues (e.g., G-forces,
visual cues, etc.) when relying solely on things like instrument displays
[e.g., pilot: "Er, you notice that we are flying on empty tanks?"  disturbed
pilot expression,  programmer: "Ah, it's just a simulation."].
Computer people seem to be "ever the optimist."  Besides, would you ever
get into a real plane with a pilot who has only been in simulators?

Most recently, another poster brought up the issue of autonomous weapons.
We had a discussion of this at the last Palo Alto CPSR meeting.
Are autonomous weapons moral?  If an enemy shows a white flag or has his
hands up, is the weapon "smart enough" to know the Geneva Convention (or is
that too moral a question for programmers of such systems)?

On the subject of flight simulators: I visited Singer Link two years
ago (We have a DIG 1 system which we are replacing).  I "crashed" underneath
the earth and the polygon structure became more "visible."  It was like
being underneath Disneyland.

--eugene miya           sorry for the length, RISKS covered a lot.
  NASA Ames Research Center
  President
  Bay Area ACM SIGGRAPH


Re: RISKS-3.53

<cbosgd!mtung!ijk@ucbvax.Berkeley.EDU>
Fri, 12 Sep 86 07:44:24 PDT
Mike Brown wrote:

<>  Its fine to tell the pilot "Lower your wheels before you land, not
<> after" but we still have gear up landings.  We should not concern ourselves
<> with checking for and preventing any incorrect behavior but we should preclude
<> that behavior which will result in damage to or loss of the aircraft or the
<> pilot.  We do not need to anticipate every possible mistake that he can make
<> in this regard either - all we need to do are to identify the hazardous
<> operational modes and prevent their occurrence.

I disagree that the software MUST prevent this: what about the case when
an aircraft can lower only ONE side of its landing gear?  A belly landing
is then the only way to go [assume combat damage, or something,
so that the pilot can't eject, and the computer INSISTS on lowering
the landing gear whenever you attempt to go under 50 feet, or
something stupid like that].

On the other hand, some of the latest experimental planes are
totally UNFLYABLE under normal human control -- for those planes,
the software had better be reliable, because there is no backup!
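
Returning to the one-gear-leg case: here is a minimal sketch, in C with an
entirely hypothetical interface, of the kind of interlock I would rather
see -- one that warns, but leaves the pilot the last word for the cases
the designers did not foresee:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustration only, hypothetical interface: the interlock warns,
     * but the pilot keeps the last word for the cases the designers
     * did not foresee (e.g., only one gear leg will extend). */
    static bool descent_below_50_ft_permitted(bool gear_down_and_locked,
                                              bool pilot_override_armed)
    {
        if (gear_down_and_locked)
            return true;
        puts("WARNING: gear not down and locked");
        return pilot_override_armed;  /* pilot may accept a belly landing */
    }

    int main(void)
    {
        /* combat damage: gear will not extend, pilot lands anyway */
        printf("permitted: %d\n",
               descent_below_50_ft_permitted(false, true));
        return 0;
    }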

Obviously, one can present arguments for each side [human vs. computer
having the last say -- at TMI, the computers were right, but ...].  I
would say that if humans do override CRITICAL computer control [like
TMI], then some means of escalating the attention level must be
invoked [e.g., have the computers automatically notify the NRC].
Again, there are lots of tradeoffs to be made [seriousness of the problem,
timeliness of the response necessary, etc.], which means that there's
NO PAT answer in most cases; we can just hope that the people involved in
these cases realize the possible consequences of their work.  In that
case one could argue for professional certification in these fields
[we're software ENGINEERS, right?!?: you wouldn't want to go over a bridge
built by an uncertified mechanical engineer, would you?  What if the
software he used was written by a flake?]; if not certification,
then perhaps the software should undergo wide scrutiny by independent
evaluators [I'd feel a lot better if I knew that the software controlling
nuclear plants had undergone such scrutiny].
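
A minimal sketch of that escalation idea, in C with hypothetical names
(the override is honored, but it is logged and escalated so that more
people are immediately looking at the decision):

    #include <stdio.h>

    /* Illustration only, hypothetical names: a critical override is
     * honored, but it is logged and escalated so that more people are
     * immediately looking at the decision. */
    typedef enum { SEV_ROUTINE, SEV_CRITICAL } severity;

    static void record_override(severity sev, const char *what,
                                const char *who)
    {
        fprintf(stderr, "OVERRIDE: %s (by %s)\n", what, who);
        if (sev == SEV_CRITICAL)
            fprintf(stderr,
                    "ESCALATION: notify shift supervisor and regulator\n");
    }

    int main(void)
    {
        record_override(SEV_CRITICAL,
                        "blocked automatic coolant injection", "operator 7");
        return 0;
    }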

Enough said, I believe.
Ihor Kinal
ihnp4!mtung!ijk.


re. F-16 Software.

<Doug_Wade%UBC.MAILNET@MIT-MULTICS.ARPA>
Wed, 10 Sep 86 11:42:14 PDT

Reading comments about putting restraints on jet performance within
the software reminded me of a conversation I had a few years ago
at an air show.  A pilot who flew F-4's in Vietnam mentioned that
the F-4 specs said a turn exerting more than, say, 8 G's would cause
the wings to "fall off".  However, in avoiding SAMs or ground fire
they would pull double? this with no such result.
  My comment on this is: what if an 8G limit had been programmed into
the plane (had it been fly-by-wire)?  Planes might have been hit and
lost which otherwise were saved by violent maneuvers.  With a SAM targeted
on your jet, nothing could be lost by exceeding the structural limitations
of the plane, since it was a do-or-die situation.
I'm sure 99.99% of the lifetime of a jet is spent within designed
specifications, but should software limit the plane the one time
a pilot needs to override this constraint?
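
A minimal sketch of such a limiter with a pilot override, in C and with
made-up numbers (nothing here describes any real flight control system):

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustration only, made-up numbers: the limiter normally clamps
     * the commanded load factor to the structural limit, but the pilot
     * can override it when overstressing the airframe beats taking a hit. */
    #define STRUCTURAL_LIMIT_G 8.0

    static double limit_commanded_g(double commanded_g, bool pilot_override)
    {
        if (pilot_override)
            return commanded_g;           /* do-or-die: the pilot's call */
        if (commanded_g > STRUCTURAL_LIMIT_G)
            return STRUCTURAL_LIMIT_G;    /* normal case: protect airframe */
        return commanded_g;
    }

    int main(void)
    {
        printf("%.1f g\n", limit_commanded_g(12.0, false));  /* prints 8.0 */
        printf("%.1f g\n", limit_commanded_g(12.0, true));   /* prints 12.0 */
        return 0;
    }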
