The RISKS Digest
Volume 10 Issue 34

Saturday, 8th September 1990

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Risks of shutdown?
PGN
French prisoners use "smart cards"
Robert Nagler
Instrument Software failure in BAe aircraft
Sean
BMW's 'autopilot'
Michael Snyder
Re: "wild failure modes" in analog systems
Henry Spencer
Re: Dealing with software complexity
Martin Minow
Re: Software bugs "stay fixed"?
Robert L. Smith
Re: Computers and Safety
John (J.G.) Mainwaring
Re: Object Code Copyright Implications
Dan Bernstein
Randall Davis
Re: Accidental Disclosure of non-published phone numbers
Jeff Johnson
Info on RISKS (comp.risks)

Risks of shutdown?

"Peter G. Neumann" <neumann@csl.sri.com>
Sat, 8 Sep 1990 12:49:01 PDT
In August 1989 the town of Tisbury, Massachusetts, signed a $113,000 contract
for a computer system, but a town meeting in October nixed the deal.  The
vendor refused to take the system back, and sued the town.  The town left the
CPU and printer plugged in, because they were uncertain whether just pulling
the plug would cause any damage, and because they did not know the "secret
password".  Finally, this August the vendor agreed that if the town would
negotiate, they would divulge the password to the town consultant, Eric Ostrum,
who could then shut down the system.

While the attorneys were negotiating, the consultant read the manual
instructions, got the system to ask "Do you want to shut down the system?",
and typed "yes".  Then, the system shut itself down, with no password required!
So he called the town's attorney and told him not to bother to negotiate for
the vendor's assistance.  [Source: An article by Rachel Orr in the Vineyard
Gazette, 28 August 1990.]

[MORAL: The next time you want to hot-dog a vendor, let Ostrum hire weenies.]


French prisoners use "smart cards"

Robert Nagler <nagler@olsen.UUCP>
Sat, 8 Sep 90 18:41:10 +0200
"Prison `a la carte", The Economist, 8 Sep 1990, p 33:

  "Barely had the first miscreants arrived at France's newest prison, at Neuvic
in the Dordogne, than they started a riot.  Keen though the appetite of French
prisoners is for insurrection, their protest did not result from force of
habit.  The inmates of the ultra-modern jail had taken unkindly to exactly the
measures designed to make their lives more bearable.
  "In particular they objected to being locked up by computers.  Neuvic, which
opened two months ago and is still filling up, is a technological strongbox
containing the best ideas the penal service has so far come up with.  Every
prisoner carries a personalised ``smart card'', which must be used to enter or
leave any part of the prison.  The cards tell the central computer where all
the prisoners are, [or at least where their cards are :-] all the time, and
make the constant vigil of warders unnecessary.
  "In other words, the prisoners inform the security system.  It was hoped this
might give them a satisfying sense of responsibility, but the signs are not
promising.  Inmates liken the jail to a computerised warehouse where they are
stored as goods.  For example, if a man is a minute late for breakfast, he
cannot get out of his cell: the smart card works only to pre-programmed times.
The convicts also complain of prices in the prison shop that are 20% higher
than in other jails; cynics point out that Neuvic is being experimentally run
by private enterprise."  [After recently having been robbed in the south of
France, I have absolutely no sympathy.--RJN]

              [Excerpted by PGN, who thought it must have been a smarte garde.]


Instrument Software failure in BAe aircraft

<sean@aipna.edinburgh.ac.uk>
Sat, 8 Sep 90 13:19:49 BST
From The Guardian, September 8, 1990, page 2, column 7

`Serious Failure' on BA planes (Patrick Donovan, Transport editor)

Instrument panels on three British Airways aircraft `crashed' on eleven
occasions shortly after take-off or landing because of computer software
faults, according to company documents.  The problems occurred on short-haul
European and domestic flights by three of the airline's British Aerospace
Advanced Turbo Prop aircraft in a period between July 9 and August 20 last
year.  Special restrictions were imposed on the 64-seater aircraft by the Civil
Aviation Authority because of the problems.  BA was told that pilots must be in
their seats when the aircraft flew below 5,000 feet in case emergency action
was needed.  Takeoff and landing were restricted to a minimum of 1,000 metres
visibility, the documents say.

Two of the aircraft affected continued flying although they had experienced
failure of `captain's and co-pilots primary displays' twice in one day.  One of
the aircraft, call sign G-BTPJ, suffered instrument failure five times on
flights between Belfast, Manchester, Glasgow, Aberdeen, Bremen, Berlin and
Manchester [sic].  Pilots were alerted by the caption `I/O Fail' flashing onto
a blank screen.  Aircrew managed to restore instrument readings after following
emergency technical procedures, the papers say.

A BA spokeswoman said last night that the problems had been rectified.  She
said that safety was not compromised as backup systems were available despite
loss of `primary instrument panels'.  However, the BA document described the
problems as being of a `serious nature'.  According to the papers the `cause of
the failures was due to a design shortfall' in instruments made by Smiths
Industries.  `Three incidents involved failure of all four displays which
required the aircraft to be flown using the standby instruments until the
displays were restored.  All the failures reported occurred during the initial
climb, or climb phase with the exception of one incident which occurred during
the approach.'

The documents also explain how BA is developing a `totally new code of practice
for dealing with major accidents'.  The report, circulated among BA
operational staff, says: `it is an inevitable fact of life that pilots,
engineers, operational personnel, indeed everyone involved in the operation of
aircraft will from time to time make professional errors.  Some of these
mistakes may result in an incident.'

Captain Colin Seaman, BA's head of safety, urges staff to be frank about
accidents.  `No matter what temptation there is after an accident to be
economical with the truth when rationalising it with hindsight, please remember
it would be unforgivable if, by not revealing the facts or the complete truth,
a similar incident became an unavoidable accident.'

    [What a wonderful sentence!  Reminds me of my favorite legal phrase,
    "Nothing in the foregoing to the contrary notwithstanding, ..."  PGN]


BMW's 'autopilot'

<SNYDER@csi.compuserve.com>
05 Sep 90 17:42:04 EDT
In regard to BMW's "Heading Control System", my understanding (based solely
on hearsay) is that the guidance system provides more of a gentle nudge than
an irresistible shove when it thinks the car is approaching a lane divider
line.  Thus, it should not be difficult to leave the lane on purpose, either
to pass or to get out of danger.  However, has anyone thought of a more
distressing problem?  I refer you to countless "Roadrunner" cartoons.  Picture
a nice shiny white line leading directly into the side of a mountain...
                Michael Snyder, Compuserve


Re: "wild failure modes" in analog systems

<henry@zoo.toronto.edu>
Fri, 7 Sep 90 14:27:22 EDT
>... we would never attempt to build systems of the complexity
>of our current digital systems if we had only analogue engineering to rely on.

Unfortunately, this is not entirely true.  Very complex analog systems were
not at all unknown in pre-digital days.  Concorde's engine computers are
analog circuitry, which makes them a maintenance nightmare by modern
standards — each setting interacts with others, so readjusting them
is a lengthy and difficult chore.  (It is not worth developing a digital
replacement for such a small number of aircraft.)  For another example,
I believe the US Navy's four ex-WWII battleships are still using their
original *mechanical* analog fire-control computers.  For still another,
although the recent fuss about fly-by-wire systems for aircraft has
focused mostly on digital versions, analog fly-by-wire systems were
not at all unknown:  Avro Canada's Arrow interceptor was flying at
Mach 2 with analog fly-by-wire in 1958.

Certain jobs simply require complex systems, and will be done one way or
another.  It is probably true that digital implementations make it easier to
add *unnecessary* complexity, but they also make it easier to do a better and
(potentially) more foolproof job on the basics.  This argument has many
parallels to the old ones about whether word processors lead to poorer writing,
or whether 16-bit address spaces forced better programming.  Analog circuitry
does encourage simplicity, yes... by making design, testing, and maintenance of
even simple systems more difficult.  This is not necessarily a virtue.

                         Henry Spencer at U of Toronto Zoology utzoo!henry


Re: Dealing with software complexity

"Martin Minow, ML3-5/U26 07-Sep-1990 1555" <minow@bolt.enet.dec.com>
Fri, 7 Sep 90 13:12:24 PDT
Several Risks postings have discussed system complexity and whether "risks"
are affected by, say, using analog components or finite-state automata.

One advantage to putting parts of the system into an analog component
is that it separates the problem into isolated (and smaller) units that can
be implemented and tested independently.  The benefits of isolation might
outweigh the disadvantages of analog designs.  Of course, such a
system could also be built out of all-digital components with a/d and d/a
convertors at the final stages.

Much the same can be said for finite-state automata.  Hand-encoded automata
offer a good development methodology for many problems as it is simple to know
when all state/action pairs are covered.  On the other hand, finite-state
automata generated by compilers (such as Unix' lex and yacc) can be fiendishly
difficult to hand-check as the "intelligence" is distributed among a number of
extremely obscure tables.  (In passing, I would make the same complaint about
object-oriented programming techniques.)
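
As a hedged illustration of the hand-encoded style (invented C code, not taken
from any system discussed here; the toy protocol and the names are assumptions
made up for the example), here is an automaton whose entire control logic sits
in one table, so a reviewer can check every state/input pair by reading the
table row by row:

#include <stdio.h>

/* Toy recogniser: count the non-empty comma-separated fields on the first
 * (newline-terminated) line of standard input.  The task is unimportant;
 * the shape is the point -- all of the control logic is the `next' table. */

enum state { IDLE, IN_FIELD, DONE, NSTATES };
enum input { CH_TEXT, CH_COMMA, CH_NEWLINE, NINPUTS };

static const enum state next[NSTATES][NINPUTS] = {
    /*             TEXT       COMMA  NEWLINE */
    /* IDLE     */ {IN_FIELD,  IDLE,  DONE},
    /* IN_FIELD */ {IN_FIELD,  IDLE,  DONE},
    /* DONE     */ {DONE,      DONE,  DONE},
};

static enum input classify(int c)
{
    if (c == ',')  return CH_COMMA;
    if (c == '\n') return CH_NEWLINE;
    return CH_TEXT;
}

int main(void)
{
    enum state s = IDLE;
    int c, fields = 0;

    while (s != DONE && (c = getchar()) != EOF) {
        enum input i = classify(c);
        if (s == IN_FIELD && i != CH_TEXT)  /* action on leaving a field */
            fields++;
        s = next[s][i];
    }
    printf("non-empty fields: %d\n", fields);
    return 0;
}
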
                                                                 Martin Minow

   [At least you were not CoOOPTed by a lex barker.  PGN]


Re: Software bugs "stay fixed"?

Robert L. Smith <rlsmith@mcnc.org>
Fri, 7 Sep 90 21:17:40 -0400
    Mr. Thomas implies that it is impossible to be certain, other than by
nonrecurrence, that a nontrivial software bug is fixed.  My experience
indicates otherwise.
    Just last week I was faced with misbehavior in a new program to analyze
tablet input, where the analysis depended upon prior selection of an input
sequence two or more PELs apart on either coordinate.  But close examination
revealed that points were occasionally being selected in adjacent PELs.  This
could occur because, having selected a point, instead of using that point as
the basis for succeeding comparisons, the program chose the next following
point.  The fix was easy -- as it happened, by removing a line from the
program -- and the misbehavior was eliminated.
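
A hypothetical sketch of that general class of bug, in C (this is not Smith's
tablet program; the structure, the names, and the two-PEL threshold are
assumptions made up for illustration):

#include <stdlib.h>   /* abs(), size_t */

struct pt { int x, y; };

#define MIN_SEP 2     /* keep points two or more PELs apart on a coordinate */

/* Copy into `out' only those points far enough from the last point kept;
 * returns the number of points kept. */
size_t select_points(const struct pt *in, size_t n, struct pt *out)
{
    size_t i, kept = 0;
    struct pt last;

    if (n == 0)
        return 0;
    last = in[0];
    out[kept++] = in[0];

    for (i = 1; i < n; i++) {
        if (abs(in[i].x - last.x) >= MIN_SEP ||
            abs(in[i].y - last.y) >= MIN_SEP) {
            out[kept++] = in[i];
            last = in[i];  /* later comparisons use the point just selected */
            /* The buggy variant had one extra line here, in effect
             *     last = in[i + 1];
             * so succeeding comparisons were made against a point that was
             * never selected, and adjacent PELs could slip through.  Deleting
             * that line is the sort of one-line fix described above. */
        }
    }
    return kept;
}
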
    This is my point: I am as certain as a man can be that the noted
misbehavior will never recur in that program for that reason, and I don't have
to run it five years to convince myself, either.
    Over the years I've tackled thousands of software bugs and noted that they
divide into two fundamental classes: those that I hoped I'd fixed and those
that I knew I'd fixed.  I've seldom been wrong about the latter but often about
the former.  In retrospect it seems to me that all of the latter were in code
that I'd written myself.  Maybe Mr. Thomas's "software rot" would be less
evident if the original writers were held responsible for quality throughout
the life of the code.  Maybe that's where we ought to have a law!
    Properly maintained software — by the original writers — asymptotically
over time approaches faultlessness of execution and design.  The reason for
this is that truly fixed bugs stay fixed.  Of all control logic media, only
software exhibits this characteristic.
    That is, when we let it.  In practice we never let that approach continue
for very long before all must be redone to fit the next generation of hardware.
rLs


Re: Computers and Safety

John (J.G.) Mainwaring <RM312A@BNR.CA>
Fri, 7 Sep 90 15:42:00 EDT
In his otherwise well rounded summary in RISKS-10.32 of the computers,
complexity and safety discussion, Bill Murray raises the question of GOTOs,
admitting that it will probably be a red rag to someone.  I happen to feel
bullish on GOTOs at the moment, so here goes.

I don't for one minute dispute Dijkstra's original thesis that GOTOs can be
replaced by other control structures, and that they allow the creation of
programming monstrosities.  Deeply nested IF/THEN/ELSE and loop constructs are
also error prone. In C (which many people mistakenly believe to be a
programming language), BREAK is probably a worse time bomb than GOTO as
programs are maintained and modified, because the target of the branch is not
explicit.  Of course BREAK and also RETURN from the middle of a subroutine are
actually forms of GOTO.  Before using any of them, it's worth examining the
whole page to see if the flow of control can be improved.  However, some
strategies such as introducing new control variables may increase the
complexity of the program rather than reduce it.
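
A minimal C illustration of that hazard (invented code, not from any of the
postings): the first routine does what its author intended; in the second, a
later revision has wrapped the loop body in a switch, and the very same BREAK
now terminates the switch rather than the loop.

#include <stdio.h>

enum kind { KEYED, OTHER };
struct item { enum kind type; int key; };

/* Original intent: index of the first matching item, or -1. */
static int find_first(const struct item *it, int n, int wanted)
{
    int i, found = -1;

    for (i = 0; i < n; i++) {
        if (it[i].type == KEYED && it[i].key == wanted) {
            found = i;
            break;          /* leaves the for loop, as intended */
        }
    }
    return found;
}

/* Later revision: a switch is added to handle more kinds of item.  The
 * break names no target, so it now leaves the innermost construct -- the
 * switch -- and the loop runs on; a later match silently replaces the
 * first one. */
static int find_first_revised(const struct item *it, int n, int wanted)
{
    int i, found = -1;

    for (i = 0; i < n; i++) {
        switch (it[i].type) {
        case KEYED:
            if (it[i].key == wanted) {
                found = i;
                break;      /* leaves the switch, not the loop */
            }
            break;
        default:
            break;
        }
    }
    return found;
}

int main(void)
{
    struct item items[] = { {KEYED, 7}, {OTHER, 0}, {KEYED, 7} };

    printf("original: %d   revised: %d\n",
           find_first(items, 3, 7), find_first_revised(items, 3, 7));
    /* prints "original: 0   revised: 2" -- same BREAK, different meaning */
    return 0;
}

The same trap awaits a RETURN from the middle of a routine once code is moved
into or out of it; a GOTO, whatever its other sins, at least names where it is
going.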

There is a parallel with the attitude of British Rail to safety as quoted by
Brian Randell in Risks-10.31: if the machine is built not to fail, you don't
have to worry about the operator.  Of course programming languages should be
designed to avoid as many programmer pitfalls as possible.  GOTO is a dubious
construct.  So are razors, but many people use them every day.  Changes in
design, caution in use, and readily available bandaids make them an acceptable
risk to society.

If we want safer computer software, we will have to concentrate on the formation
of programmers.  You can train people to keep their fingers away from the pointed
end of a chisel and not to drive screws with it, but it takes years to develop
a cabinet maker.  Likewise, you can train people what to do or not to do in a
language that provides GOTO, but to get good software for safety critical
applications requires the development of a body of people familiar with both
the application and software technology.  Of course improved software
technology will help too.

A cautionary example would be the design of bridges.  We now have much better
methods for designing bridges than we did in the nineteenth century.  Bridges
don't fall down nearly as often as they used to.  Nevertheless, people were
willing to use bridges then, and they still occasionally get killed using them
now.  We should remember that the design of a bridge is an abstraction, just
like a computer program.  Society has learned a good deal about managing the
life cycle of the bridge abstraction and the artifact derived from it.  We will
learn more about using computers safely over the next few lifetimes.  In the
meantime, perhaps the most useful function those of us who work with computers
can perform, apart from maintaining high standards in our individual endeavors,
is to continue to urge caution on those who, with less understanding, have
become entranced with the possibilities.


Re: Object Code Copyright Implications

Dan Bernstein <brnstnd@KRAMDEN.ACF.NYU.EDU>
Thu, 6 Sep 90 02:27:44 GMT
Three further comments on reverse engineering:

1. Decompilation isn't necessary for cloning; I don't think that a good
programmer gets anything useful out of decompilation anyway. So a law
prohibiting reverse engineering may not have much commercial effect.  (It's a
shame that economic concerns demand object-only distribution.)

2. Decompilation can be very useful in, e.g., figuring out the inner workings
of a virus as quickly as possible. So a law prohibiting reverse engineering may
add new risks to an already dangerous world.

3. Current copyright laws have special exemptions for each different case of
computer translation. A general rule like ``It is fair use to apply a
translation to a copyrighted work for personal use if the translation may be
defined precisely and axiomatically'' would embrace both compilation and
decompilation. It would also be in the spirit of recently expressed sentiments
on the unpatentability of software---and, unfortunately, nearly as contrary to
current practice.
                                           ---Dan


Object Code Copyright Implications (Biddle, RISKS-10.24)

Randall Davis <davis@ai.mit.edu>
Wed, 5 Sep 90 18:01:32 edt
> 1) If object code is copyrightable, what *exactly* is it that is subject
>   to the copyright? Magnetic patterns? Ones and Zeros? Source code?

Copyright covers the way information is expressed and in this case the key to
expression is any binary alphabet (for which 1's and 0's are merely
conventional notation).  The only thing important about the magnetic patterns
of course is that there are two of them.  So it is in fact that particular
collection of 1's and 0's (or up's and down's or left's and right's...)  that
is copyright.

The same thing is true of text: it's the expression (the way information is
conveyed) that's copyright, not the shape of the letters nor the alphabet
that's used (using a different type font or a foreign alphabet won't change
anything important as far as copyright is concerned).

> Most importantly, these people seem to be arguing that if you have (legally)
> an object-code program protected by copyright, and even though you *do* have
> the "fair use" right to execute the program, you may *not* have the right to
> inspect the program itself by disassembling or reverse compilation, to
> determine how it may work in future circumstances.
>  ... of course programs may be protected by other legal mechanisms which
> are not addressed here. But copyright is usually the minimum.

Copyright law (in the US and probably elsewhere) is currently unclear on the
subject of reverse engineering: different lawyers argue it in different
directions and the recent Lotus case stemmed in part from the uncertainty
surrounding reverse engineering.

Hence in practice copyright is not invoked to deal with it: contract law is.
Most code is sold under the explicit agreement that it will not be reverse
engineered.  You agree to that under most shrink wrap contracts.


Re: Accidental Disclosure of non-published phone numbers

Jeff Johnson <jjohnson@hpljaj.hpl.hp.com>
Wed, 05 Sep 90 10:13:01 PDT
In RISKS DIGEST 10.29, Peter Jones provided an excerpt from a Bell
Canada mailing that warned that, when calls are billed to other than
the calling phone (i.e., collect calls), the calling number is given to
the billed number.  The mailing stated that "this is necessary because
the customer being billed must be able to verify whether or not he or
she is responsible for the charge."

While I agree with this practice, I question the use of the words "necessary"
and "must".  What sounds like a logical requirement is in fact merely a
practice of North American culture.  I don't think it is common worldwide.  In
particular, as I recall from living in West Germany, residential phone bills
there are completely unitemized, they simply say: "Pay this amount."  If you
think your bill is out of line, you dispute it, and the (govt. run) phone
company double-checks for you.  Not a great system from my (American) point of
view, but it proves that it is merely a matter of managing people's expectations,
rather than a question of necessity and requirement.
                                                            JJ
