The RISKS Digest
Volume 3 Issue 68

Friday, 26th September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

VDU risks — Government changes its mind, perhaps
Stephen Page
"Drive by wire" systems
Charles R. Fry
Viking Landers worked the first time and met the specs
Dave Benson
Unix breakins - secure networks
David C. Stewart
Comment on the reaction to Brian's Breakin Tale
Dave Taylor
Reliability, complexity, and confidence in SDI software
Bob Estell
Info on RISKS (comp.risks)

VDU risks — Government changes its mind, perhaps

Stephen Page <sdpage%sevax.prg.oxford.ac.uk@Cs.Ucl.AC.UK>
Fri, 26 Sep 86 21:28:47 GMT
>From "Computer News" no. 141 (September 25, 1986):

                    Executive does U-turn on VDU risk

The [UK] government's Health and Safety Executive is spending nearly 1.5m
pounds on research into the hazards of using VDUs — just five months after
assuring users that there is no danger.

The Executive has commissioned five reports into the possible health problems
which may arise from working with VDUs.

The studies, which typically last three years, will look at topics such as
repetitive VDU work, discomfort and optimum rest periods. It has contracted the
work out to a number of universities at a cost of 475,000 pounds.

[...]

Earlier this year, the Executive issued a booklet aimed at dispelling fears
that VDU work can lead to health risks and denying that radiation from
terminals would lead to birth defects and miscarriages.

Part of the new research will look at the possible effects of VDU strain and
stress on pregnant women.

                       [Of course, the US Government had previously  
                        cancelled some ongoing work in this area!  PGN]


"Drive by wire" systems

Charles R. Fry <Chucko@GODZILLA.SCH.Symbolics.COM>
Tue, 23 Sep 86 08:59 PDT
From Henry Spencer:

  Doug Wade notes:

    >   My comment to this, is what if an 8G limit had been programmed into
    > the plane (if it had been fly-by-wire)...

  My first reaction on this was that military aircraft, at least front-line
  combat types, obviously need a way to override such restrictions in crises,
  but civilian aircraft shouldn't.  Then I remembered the case of the 727 ...
  It would seem that even [commercial] airliners might need overrides.

The "drive-by-wire" features now appearing in some cars, ostensibly to make
them "safe to drive in all conditions," also seem to require overrides.  For
instance, the most common of these systems is anti-lock braking.  The first
such system available to the public, introduced by Audi on its original
Quattro, could be disabled by a switch on the dashboard.  Why?  Because
under some conditions (e.g.  on gravel roads) the best braking performance
is obtained when the wheels are locked.  This was especially important on
the Quattro, a street-legal rally car which was intended for high speed
driving on all types of roads.  (But as Detroit catches on, look for such
switches to disappear in order to design some cost out of the systems.)

Now several European manufacturers (Mercedes-Benz, BMW) are introducing cars
with "accelerative anti-skid systems," with no direct linkage between the
gas pedal and the throttle on the engine.  The intent is to prevent the
engine from seeing full throttle when it would just cause excessive
wheelspin, especially in slick, wintry conditions.  However, on rear wheel
drive cars (only!! — don't try this with your Honda) such wheelspin can be
used to make the car turn more tightly than it would without, and I can
easily imagine circumstances in which this maneuver could save some lives.

No matter how many automated controls we install on cars (and airplanes)
to prevent operators from exceeding their vehicles' limits, there will
always be a need to allow the deliberate violation of these limits.  

  [Chuck added an aside on the value of high performance driving schools.]

    — Chuck Fry 
       Chucko@STONY-BROOK.SCRC.Symbolics.COM


Viking Landers worked the first time and met the specs

Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Wed, 24 Sep 86 18:01:18 pdt
Both Viking Landers worked in their first (and only) operation.  The
pre-operation testing simply ups one's confidence that the actual
operation will be successful.  Since the Viking Landers were the
first man-made objects to land on Mars, Murphy's Law should suggest
to any engineer that perhaps something might have been overlooked.
In actual operation, nothing was.

Both Viking Mars shots had specifications
for the length of time they were to remain in operation.  While I
do not recall the time span, both exceeded the specification by years.
I do recall that JPL had to scrounge additional funds to keep the
data coming in from all the deep-space probes, including the Vikings,
as the deep space mechanisms were all working for far longer than expected.

Surely any engineered artifact which lasts for longer than its
design specification must be considered a success.  Nothing
lasts forever, especially that most fragile of all artifacts, software.
Thus the fact that the Viking 1 Lander software was scrambled beyond
recovery some 8 years after the Mars landing only reminds one that
the software is one of the components of an artifact likely to fail.
So I see nothing remarkable about this event, nor does it in any way
detract from judging both Viking Mars missions as unqualified engineering
successes.


Unix breakins - secure networks

"David C. Stewart" <davest%tektronix.csnet@CSNET-RELAY.ARPA>
24 Sep 86 13:46:39 PDT (Wed)
    One of the observations made in the wake of the Stanford breakin is
that Berkeley Unix encourages the assumption that the network itself is
secure when, in fact, it is not difficult to imagine someone tapping the
Ethernet cable and masquerading as a trusted host.

    I have been intrigued by work that has been going on at CMU to
support the ITC Distributed File System.  (In the following, Virtue is
the portion of the filesystem running on a workstation and Vice is
that part running on the file server.)

    The authentication and secure transmission functions are
    provided as part of a connection-based communication package,
    based on the remote procedure call paradigm.  At connection
    establishment time, Vice and Virtue are viewed as mutually
    suspicious parties sharing a common encryption key.  This key
    is used in an authentication handshake, at the end of which
    each party is assured of the identity of the other.  The final
    phase of the handshake generates a session key which is used
    for encrypting all further communication on the connection.
    The use of per-session encryption keys reduces the risk of
    exposure of authentication keys. [1]

    The paper goes on to state that the authorization key may be
supplied by a password (that generates the key but is not sent along
the wire in cleartext) or may be on a user-supplied magnetic card.
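
    As a rough illustration only (this is not the ITC code; the names, the
use of HMAC, and the password-based key derivation below are assumptions on
my part), such a mutually suspicious handshake with a per-session key might
be sketched in modern Python as:

    import hmac, hashlib, os

    def derive_key(password: bytes, salt: bytes) -> bytes:
        # The long-term key can come from a password that never crosses
        # the wire in cleartext.
        return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

    def proof(key: bytes, challenge: bytes) -> bytes:
        # Answering a random challenge proves knowledge of the key
        # without revealing the key itself.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    shared_key = derive_key(b"user password", b"illustrative-salt")

    # Vice and Virtue, mutually suspicious, each issue a challenge and
    # verify the other's reply.
    challenge_from_vice   = os.urandom(16)
    challenge_from_virtue = os.urandom(16)
    reply_from_virtue = proof(shared_key, challenge_from_vice)
    reply_from_vice   = proof(shared_key, challenge_from_virtue)
    assert hmac.compare_digest(reply_from_virtue,
                               proof(shared_key, challenge_from_vice))
    assert hmac.compare_digest(reply_from_vice,
                               proof(shared_key, challenge_from_virtue))

    # A session key derived from both challenges encrypts all further
    # traffic, limiting exposure of the long-term authentication key.
    session_key = hmac.new(shared_key,
                           challenge_from_vice + challenge_from_virtue,
                           hashlib.sha256).digest()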

    This is one of the few systems I have seen that does not trust
network peers implicitly; that makes it a nice possibility when trying
to reduce the risks involved with network security.

Dave Stewart - Tektronix Unix Support - davest@tektronix.TEK.COM

[1] "The ITC Distributed File System: Principles and Design",
Operating Systems Review, 19, 5, p. 43.


Comment on the reaction to Brian's Breakin Tale

Dave Taylor <taylor%hpldat@hplabs.HP.COM>
Fri, 26 Sep 86 17:55:53 PDT
I have to admit I am also rather shocked at the attitudes of most of the
people responding to Brian Reid's tale of the breakin at Stanford.  What
these respondents are ignoring is The Human Element.

Any system, however secure and well designed, is still limited by the
abilities, morals, ethics, and so on of the Humans that work with it.  Even
the best paper shredder, for example, or the best encryption algorithm, isn't
much good if the person who uses it doesn't care about security (so they shred
half the document and get bored, or use their husband's first name as the
encryption key).

The point here isn't to trivialize this, but to consider, and indeed PLAN
FOR, the human element.

I think we need to take a step back and think about it in this forum...

                        — Dave


Reliability, complexity, and confidence in SDI software

"ESTELL ROBERT G" <estell@nwc-143b.ARPA>
26 Sep 86 13:22:00 PST
I apologize in advance for the length of this piece.  But it's briefer
than the growing list of claims and counter-claims, made by respectable
folks, based on sound theory, actual experience, or both.
And we're dealing with a critical question: 
    Can very large systems be reliable?


The "bathtub curve" for MECHANICAL "failures" has always made sense to me.
I've heard lectures about how software follows similar curves.  
But I've really been stumped by the notion that "software wears out."

I'd like to attempt to "bound the problem" so to speak.
SUPPOSE that we had a system composed of ten modules; and suppose that
each module had ten possible INTERNAL logical paths, albeit only one 
entry and only one exit.

 The MINIMUM number of logical paths through the system  is ten (10); 
 i.e., *IF* path #1 in module A INVARIABLY invokes path #1 in modules 
 B, C, ... J; and likewise, path #2 in A INVARIABLY invokes path #2 
 in B, C, ... J; etc. then there are only ten paths.
 NOTE I'm also assuming that the modules invariably run in alphabetical
 order, always start with A, and always finish with J; and never fail
 or otherwise get interrupted.  [I'm trying to avoid nits.]
 Some residential wiring systems are so built; there are many switches
 and outlets on each circuit; but each circuit is an isolated loop to the
 main "fuse" box; "fuses" for the kitchen are independent of the den.

 The MAXIMUM number of logical paths through the system is ten billion
 (10**10); i.e., *IF* each module can take any one of its ten paths in
 response to any one of the ten paths from any one of the other modules,
 there are 10**10 possibilities.
 AGAIN assuming that the system always starts with A, runs in order, etc.
 *IF SEQUENCE IS SIGNIFICANT*, and if the starting point is random, THEN
 there are 10! * 10**10 paths; i.e., ten factorial times ten billion, or
 36,288,000,000,000,000 possible paths in the system.

 Further, *IF INTERRUPTS* are allowed and are significant, then I can't
 compute the exact number of possible paths; but I can guarantee that it's
 *MORE* than 10! * 10**10.

End of bounds.  The scope reaches from the trivial to the impossible.
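
For concreteness, the arithmetic behind those bounds can be checked in a few
lines of Python (an illustrative sketch only, using the same assumptions:
ten modules, ten internal paths each, run in a fixed order unless noted):

  from math import factorial

  modules, paths_per_module = 10, 10

  minimum  = paths_per_module                # path i in A forces path i in B..J
  maximum  = paths_per_module ** modules     # every combination allowed: 10**10
  shuffled = factorial(modules) * maximum    # random module order: 10! * 10**10

  print(minimum, maximum, shuffled)          # 10  10000000000  36288000000000000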

The GOAL of good engineering practices [for hardware, software, and firmware]
is to design and implement modules that control the possible paths; e.g.,
systems should *NOT* interact in every conceivable way.
It does NOT follow that the interactions should be so restricted that 
there are only ten paths through a ten module system.
BUT there is some reason to HOPE that systems may be so designed, in a tree
structure such that:

 a. AT EACH LEVEL, exactly one module will be "in control" at any instant; 
 b. and that each module will run independently of others at its level; 
 c. and that there are a finite [and reasonably small] number of levels.

In "Levels of Abstraction in Operating Systems", RIACS TR 84.5, Brown,
Denning, and Tichy describe 15 levels, reaching from circuits to shell;
applications sit at level 16.  If one must have a layered application,
then add layers 17, 18, et al.

I will conjecture that at levels 1 and 2 [registers, and instruction set],
there are only five possible states (each):
 (1) not running; 
 (2) running - cannot be interrupted;
 (3) running - but at a possible interrupt point;
 (4) interrupted; and
 (5) error.

I will further conjecture that the GOAL of writing modules at each of the
other layers, from O/S kernel, through user application packages, can
reasonably be to limit any one module to ten possible states.  NOTE that
purely "in line code" can perform numerous functions, without putting the
module in more than a few states.  [e.g., Running, Ready to run, Blocked,
Interrupted, Critical region, or Error.]

Such a system, comprised of say 15 applications layers, would assume maybe
290 possible states; that's the SUM of the number of possibilities at each
layer, given the path that WAS ACTUALLY TAKEN to reach each layer.

Yet the number of functions that such a system could perform is at least
the sum of all the functions of all the modules in it.  If you're willing
to risk some interaction, then you can start playing with PRODUCTS [vice
SUMS] of calling modules, called modules, etc.  EVEN SO, if the calling
module at layer "n" can assume half a dozen states, and the called module
at layer "n+1" can assume a similar number, then the possible states of
that pair are about 40; that's more than a dozen, but it's still manageable.
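
A quick sketch of that arithmetic (the split of layers below is my reading
of the numbers above, not something spelled out explicitly):

  # Two low-level layers (registers, instruction set) at five states each,
  # plus 28 higher layers at ten states each (15 system levels and roughly
  # 15 application layers in all).
  sum_of_states = 2 * 5 + 28 * 10
  print(sum_of_states)    # 290 -- states ADD when layers don't interact

  # When a calling module and a called module do interact, states multiply.
  print(6 * 6)            # 36, i.e., "about 40" for one interacting pair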

In real life, both humans and computers deal with enormously complex systems
using similar schemes.  For instance, two popular parlor games: chess, and
contract bridge.  Each admits millions of possible scenarios.  But in each,
the number of possible sensible *NEXT plays* is confined by the present 
state of affairs.  So-called "look ahead" strategies grow very complex; 
but once a legal play has been made, there are again a small number of 
possible legal "next plays."

In bridge, for instance, there are 635,013,559,600 possible hands that can
be dealt to ONE player [combinations of 52 things, 13 at a time].  That one
hand does not uniquely determine the contents of the other three hands.
Whether the hands interact is not a simple question in pure mathematics;
in many cases, they do; but in one unique case, they don't; 
e.g., if dealer gets all 4 aces, and all 4 kings, all 4 queens, and any
jack, then he bids 7 no trump; and it doesn't matter who else has what
else; it's an unbeatable bid.  [Non-bridge players, accept both my word
for it and my apology for an obscure example.]
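
The bridge numbers are easy to check (Python again, for the curious):

  from math import comb

  print(comb(52, 13))                                # 635013559600 hands for ONE player
  print(comb(52, 13) * comb(39, 13) * comb(26, 13))  # roughly 5.4e28 complete deals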

We've been playing bridge a lot longer than we've been writing large, real-
time software systems.  I'll conjecture that we don't know nearly as much
about "SDI class systems" as we do about the card game.
But in either case, if we aren't careful, the sheer magnitude of the
numbers can overwhelm us.

BOTTOM LINEs:

1. The curve for debugging software has a DOWNslope and length that is 
some function of the number of possible paths through the code.

2. Good software engineering practice says that one checks the design
before writing lots of code.  ["Some" may be necessary, but not "lots."]
*IF* errors show up in the design, fix them there.
*IF* the DESIGN itself is flawed, then change it.  [e.g., Rethink a design
that allows modules to interact geometrically.]

3. Confidence builds as one approaches the 90% [or other arbitrary level]
point in testing the number of possible paths.

4. The reason that we haven't built confidence in the past is that we've
often run thousands of hours, without knowing either:

 a. how many paths got tested; or
 b. how many paths remained untested.

5. INTERACTIONS do occur - even ones that aren't supposed to.
[Trivial example: My car's cooling and electrical systems are NOT supposed
to interact; and they don't - until the heater hose springs a leak, and
squirts coolant all over the distributor and sparkplugs.]
In "The Arbitration Problem", RIACS TR 85.12, Dennning shows that 
computers are fundamentally NOT ABSOLUTELY predictable; it may be that
an unstable state is triggered ONLY by timing idiosyncracies such as:
 At the same minor cycle of the clock, CPU #1 suffers a floating 
 underflow in the midst of a vector multiplication, AND CPU #2 takes an 
 I/O interrupt from a disk read error, while servicing a page fault.

6. Since interactions do occur, experiences that many have had with small
programs in a well-confined environment do *NOT* necessarily "scale up"
to apply to very large, real-time codes, that run on raw hardware in a
hostile [or just "random"] environment.  NOTE that I'm claiming that in
such a system, the O/S kernel is part of the real-time system.

7. The "problem space" we've been discussing is at least triangular. 
In one corner, there are assembly language monoliths, running on second-
generation computers, without hardware protection; such systems convince
Parnas that "SDI won't ever work."  Written that way, it won't.
[Important aside: It's one thing to argue that *if* SDI were built using
modern software techniques, it would work.  It's another thing to realize
that in DOD, some (not all) tactical systems run on ancient computers that 
cost more to maintain than they would to replace; and offer less power than 
a PC AT.  Such facts, known to Parnas, understandably color his thinking.]

In another corner, there are small [1000 or so lines] modules, running
in a controlled environment, that have been "proven" to work.
Most of us doubt that such experience scales up to SDI sizes.

In another corner, there are 100,000-line systems that work, in real life,
but without formal proofs, probably built using good S/W Eng practices.

8. The KISS principle ["Keep It Simple, Stupid"] eliminates lots of problems.
Prof. Richard Sites, at UCSD in 1978, told of a talk given by Seymour Cray.  
In answer to audience questions about "how to make the circuits run at those
speeds", Cray explained that circuit paths were all of known, fixed lengths; 
and that all paths were terminated cleanly at both ends; and other 
"good EE practices" taught to undergrads.  Less successful builders were 
so wrapped up in megaFLOPS that they got careless.

We would do well to adopt Cray's philosophy for hardware as we build our
software; e.g., build "RISC" programs; write code that does only a few tasks,
but does them very well, both quickly and reliably.
Maybe that's one reason why UNIX systems are so portable, powerful, and
popular?  [Each module is simple; power comes from piping them together.]
NOTE that I'm claiming that "RISC" computer architecture is not new;
look at almost every machine that Cray has designed; instruction sets are
limited, and their implementation is superb.

Bob
For the record, I'm speaking "off the record" and expressing personal opinion.
