The RISKS Digest
Volume 3 Issue 91

Thursday, 30th October 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Evolution, Progress
Jim Horning
System Overload
David Parnas
"Perfect" systems from imperfect parts
Bob Estell
The software that worked too well
Dave Benson
Assessing system effectiveness
Dave Benson
Risks of raining computer printout
Alan Wexelblat
Martin Ewing
PGN
Info on RISKS (comp.risks)

Evolution, Progress

Jim Horning <horning@src.DEC.COM>
Thu, 30 Oct 86 15:15:51 pst
"guthery%ascvx5.asc@slb-test.CSNET" asks:

    What do we know about scaling up and evolving software?  Are there any
    empirical studies of the evolution of large code bodies (5+ million
    lines, 10+ years)? Do we know how to engineer global behavior from
    local function?  How do we recover functional descriptions and
    domain-specific knowledge from large, mature software systems?  

There have been at least a few such studies. The one I can retrieve most
quickly is "Programs, Cities, Students--Limits to Growth?" reprinted in
PROGRAMMING METHODOLOGY: A COLLECTION OF ARTICLES BY MEMBERS OF IFIP
WG 2.3, Edited by David Gries, Springer-Verlag, 1978. Belady and Lehman
published a number of other articles based on their studies of the
metadynamics of systems in maintenance and growth. (Their studies are to
most studies of programming as Thermodynamics is to Classical Mechanics:
They stand back far enough that the activities of individual programmers
can be treated statistically.)

Scott Dorsey comments

    that no one can really foresee how far software engineering technology
    will advance in the next few years, and how far simulation technology
    will advance.

I agree, and certainly am in favor of research. However, the recent past
is often a good predictor of the near future. A good measure of the
progress of software engineering in the last 18 years is to compare the
proceedings of the two NATO conferences in 1968 and 1969 with the contents
of RISKS. The NATO proceedings were reprinted in SOFTWARE ENGINEERING:
CONCEPTS AND TECHNIQUES, edited by J. M. Buxton, Peter Naur, and Brian
Randell, Petrocelli/Charter, 1976. I think many people will be surprised
and disappointed at how little the problems and approaches have changed
in that time. I interpret this to mean that our ambitions for computer
systems have grown at least as rapidly as our abilities to produce them.

Jim H.


System Overload (RISKS-3.87)

<parnas%qucis.BITNET@WISCVM.WISC.EDU>
Tue, 28 Oct 86 07:25:40 EST
  Mike McLaughlin raises the interesting issue of system overload in software
systems.  I think RISKS readers should focus on that issue with regard to
hard real-time systems, systems in which an answer too late is worthless.
A long-time-scale example of a hard real-time system is a weather forecast.
If you receive it after you have experienced the weather, it is of little
value.  A more familiar example is a bomb-release computation.  If you
are told "release 10 ms ago," the information is useless.  Overload in such
systems can make them useless.  The only solution is to make sure that
overload cannot happen.  However, this is not the same as making sure that
the system will not be aware of the overload.
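
The hard real-time property can be stated in a line of code.  A toy sketch
in Python (mine, not Parnas's): the value of an answer is a step function
of time, and correctness cannot buy back a missed deadline.

    # Toy sketch of the hard real-time property: an answer delivered
    # after its deadline is worth nothing, however correct it is.

    def value(answer_quality, finished_at, deadline):
        """Utility of a result in a hard real-time system."""
        return answer_quality if finished_at <= deadline else 0.0

    print(value(1.0, finished_at=9.99, deadline=10.0))   # 1.0: usable
    print(value(1.0, finished_at=10.01, deadline=10.0))  # 0.0: too late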

  According to the BSTJ articles on the ABM system known as SAFEGUARD, the
system protected against overload by knowing its limits and refusing to
attempt to deal with a new attacking missile if this would cause overload.
This guaranteed the capacity to handle the load already accepted and to
meet the real-time deadlines.
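
A minimal sketch of that admission-control policy (the names and capacity
figure are mine; the BSTJ articles describe the policy, not this code):
track the committed load and refuse any new work that would push it past
the limit that can be handled on deadline.

    # Sketch of SAFEGUARD-style admission control: refuse new work
    # rather than overcommit and miss hard real-time deadlines.

    class AdmissionController:
        def __init__(self, capacity):
            self.capacity = capacity   # max tracks handled on deadline
            self.committed = 0         # tracks currently being handled

        def offer(self, new_tracks=1):
            """Accept new load only if deadlines can still be met."""
            if self.committed + new_tracks > self.capacity:
                return False           # decline, but stay aware of overload
            self.committed += new_tracks
            return True

        def release(self, finished=1):
            self.committed = max(0, self.committed - finished)

    radar = AdmissionController(capacity=100)
    for missile in range(150):
        if not radar.offer():
            print("track", missile, "refused; committed load protected")
            break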

  The same approach is often used for handling overload in telephone
switching.  If calls exceed the capacity, users are asked to wait.  There are
delays in getting a dial tone or in a call going through.

  Clearly, there are differences between the two situations.  In the
telephone situation the callers wait; they have little choice, and the
delay, while it may be annoying, is seldom critical.  In the ABM situation,
the missiles don't wait; they do not need the services of the defense
system anyway.

  In fact, the solution of ignoring newly arriving "users" gives rise to an
effective countermeasure: send your decoys first.  Thus, our inability to
provide infinite capacity in real-time systems gives rise to an unavoidable
weakness when dealing with an enemy.  The finite limit is always there, and
there are often cheap ways to exploit it.  We should note that the same
situation arises in a telephone system.  I am told that when President
Kennedy was shot, many Washington telephones did not respond because of
overload.  Rock concerts have been known to have similar effects.  If you
are planning a live version of "The Mouse That Roared," announce the
availability of a large number of cheap tickets for a popular group or
groups just as you attack.
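
A toy illustration of the decoys-first tactic (all numbers invented): once
cheap decoys have consumed the defense's fixed capacity, every real warhead
arriving behind them is ignored.

    # Toy illustration (invented numbers): decoys launched first exhaust
    # the defense's fixed capacity, so later real threats are ignored.

    CAPACITY = 100
    committed = 0
    ignored_warheads = 0

    for kind in ["decoy"] * 120 + ["warhead"] * 30:
        if committed < CAPACITY:
            committed += 1          # a tracker is committed to this object
        elif kind == "warhead":
            ignored_warheads += 1   # no capacity left for the real threat

    print(ignored_warheads, "of 30 warheads ignored")   # prints: 30 of 30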

  There is a simple but important lesson here.  There are clear limits on
what we can do and in an adversary situation those limits can be exploited.
Nobody would suggest that we should not have built the telephone system
because of these inherent weaknesses, but we would laugh out loud if those
who make their living by developing telephone systems were to advertise a
system that could not be defeated by a determined and sophisticated enemy.


"Perfect" systems from imperfect parts

"ESTELL ROBERT G" <estell@nwc-143b.ARPA>
30 Oct 86 13:47:00 PST
Did I *really* read in a recent RISKS that for a system to work perfectly,
each component in it must work perfectly?

Well, if by "perfect" one means no errors anywhere, no matter how minor;
and if by "system" one means a collection of parts connected in series;
then I guess I agree.

But if "perfect" can be defined as "don't let any runs score" then the recent
World Series offers a counter-example.  The Mets got hits in game #1; there
were base runners - just none of them got all the way around to score.
What's more, balls that got by one infielder were scooped up by another,
with the result that the batter was still thrown out.

It's been a long time now since I had to rely on a computer system that was
a single-thread series of non-redundant parts.  Our systems do have troubles:
memory modules fail; CPU's fail; mag tapes and disks and printers fail;
communications lines and modems fail.  So the system comes up [stays up?]
in degraded mode, and I get my work done.
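
Estell's point can be made quantitative with textbook reliability
arithmetic (the part reliabilities below are invented for illustration):
a series chain is weaker than its weakest link, while modest redundancy
lets a system of imperfect parts approach "perfect" behavior.

    # Textbook reliability arithmetic (invented figures): series chains
    # multiply failure opportunities; redundant parts mask each other.

    def series(reliabilities):
        """Chain works only if every link works."""
        r = 1.0
        for x in reliabilities:
            r *= x
        return r

    def parallel(reliabilities):
        """Redundant group works unless every copy fails."""
        p_all_fail = 1.0
        for x in reliabilities:
            p_all_fail *= (1.0 - x)
        return 1.0 - p_all_fail

    parts = [0.99] * 10
    print("10 parts in series:  %.4f" % series(parts))        # 0.9044
    duplexed = [parallel([0.99, 0.99])] * 10
    print("duplexed, in series: %.6f" % series(duplexed))     # 0.999000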

Maybe we should abandon the debate about SDI, and just roll up our sleeves
and make something work acceptably.  Doesn't have to be high energy beam;
probably should not be space based.  Undoubtedly should be a collage of
over-lapping and co-operating subsystems.  Those subsystems that get done
first can be deployed first; maybe some off-the-shelf technology is ready
now.  Some of the subsystems can be used against targets other than ICBM's;
e.g., cruise missile defenses might also work well against drug runners.

The RISK I'm beginning to see is that if we who know well enough how to
design redundant systems don't help, others may design SDI as a "chain
no better than its weakest link."  If they do: (a) the links will be VERY
strong; (b) gold-plated, so they won't corrode; (c) it will cost way too
much; and (d) it still won't work.

RGE
Opinions expressed are entirely personal.


The software that worked too well

Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Wed, 29 Oct 86 17:31:59 pst
This story is nth hand, thus to be classified as rumor.  But it is
relevant to RISKS, so I pass it on, if only as a parable.

SeaTac is the main Seattle-area airport.  Ordinarily aircraft landings are
from the north, and this end of the runway is equipped with all the sensing
equipment necessary to do ALS (Automatic Landing System) approaches.

The early 747 ALS worked beautifully, and the first of these multi-hundred-ton
aircraft set down exactly at the spot in the center of the runway that the
ALS was heading for.  The second 747 set down there.  The third 747 landed
on this part of the runway. ... As did all the others.

After a while, SeaTac personnel noticed that the concrete at this point at
the north end of the ALS runway was breaking up under the repeated impact of
747 landings.  So the software was modified so that, 3 miles out on the
approach, a random number generator is consulted to choose a landing spot --
a little long, a little short, a little to the left or a little to the right.
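
If the story is accurate, the fix amounts to something like the following
sketch (the interface and offset limits are invented; real autoland code
is surely nothing like this): add a small random dispersion to the aim
point.

    # Hypothetical sketch of the described fix: disperse the ALS aim
    # point so repeated 747 landings don't pound one patch of concrete.
    import random

    def dispersed_aim_point(nominal_x_ft, nominal_y_ft,
                            max_along_ft=200.0, max_across_ft=20.0):
        """At ~3 miles out, offset the touchdown target a little long or
        short, a little left or right (offset limits invented)."""
        dx = random.uniform(-max_along_ft, max_along_ft)    # long/short
        dy = random.uniform(-max_across_ft, max_across_ft)  # left/right
        return nominal_x_ft + dx, nominal_y_ft + dy

    print(dispersed_aim_point(1000.0, 0.0))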

   THE MORAL:
   Don't assume you understand the universe without actually experimenting.


Assessing system effectiveness

Dave Benson <benson%wsu.csnet@CSNET-RELAY.ARPA>
Wed, 29 Oct 86 17:31:42 pst
(sp == Scott Preece)
 sp> Dave Benson argues that it is more reasonable and conservative to assume
 sp> that an overloaded system will fail entirely than to assume it will either
 sp> perform at its design limit but no more or perform above its design limit.

 sp> That's unarguably the conservative assumption.  I would deny that ANY
 sp> assumption was reasonable, given only a performance ceiling and the
 sp> knowledge that performance demand will exceed that ceiling.

Might be helpful to look to the history of engineered artifacts, especially
military artifacts, and most especially military software artifacts.  Then
your "givens" are no longer the only data to bring to bear on the problem.

 sp> It is obvious that the system could be designed to perform in any of
 sp> the suggested ways when unable to cope with load.

While it might be possible to DESIGN the system to perform in any of a
number of ways, there is no particularly good reason to believe that
a software system would, in fact, meet those design goals.  There is
plenty of evidence to suggest that military software can only meet design
goals after repeated operational testing and rework.  

 sp> Suggesting one response or another is simply
 sp> expressing an opinion of the designers' competence

Yup, but not "simply".  It is an expression of the thirty years' history
of software engineering.  It is an expression of the difficulty of
understanding the informational milieu, both external and internal, of
software.  It is an expression of the historical fact that we consistently
fail to predict all the relevant factors, and are thus forced to learn
from experience.  It is not a claim that even the most brilliant team
of individuals could do better.

 sp> rather than any realistic assessment of the risks of SDI.

History certainly suggests this is a realistic assessment — although
I admit that a complete assessment of the risks requires greater length
than our Dear Moderator would be willing to allow, or than many mailers
could stand.
     [[DM = Dear Moderator]]
     [DM> THERE IS NO SUCH THING AS A COMPLETE ASSESSMENT OF RISKS.  PGN]

 sp> Given that neither the design nor the
 sp> designers are determined yet, this is a silly exercise.

Nope.  It is called looking to history for guidance.


Risks of raining computer printout

Alan Wexelblat <wex@mcc.com>
Thu, 30 Oct 86 10:40:59 CST
This is an old one from my viewpoint.  At Penn, there is an event called
Primal Scream Night, which occurs on the Sunday night before the first
Monday of finals.  Students are encouraged to let off steam by yelling and
tossing paper (an occasional notebook or Econ text has been known to fly).

Anyway, in anticipation of this event, students raided the waste bins at
the computer center, acquiring many reams of junked output as well as boxes
full of punch-card holes.  The next morning, we went down to breakfast early
and to relieve the boredom we started reading some of the fanfolded output:

    "Gee, here's a list of all the CSE110 accounts" [ > 300 names]
    "And here are the randomly-generated passwords."
    "I'll bet nobody's bothered to change their passwords"

Sure enough, we found dozens of "available" accounts.  It seems that the
monthly accounting run had been done that Sunday and the output had been
appropriated before the janitorial service had come around to dispose of it.

Several RISKS violations can be seen here:

    - leaving a paper trail of information that should be secure
    - not disposing of said paper in a secure manner
    - not forcing users to change their passwords (ever)
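
The third violation is the easiest to automate away.  A sketch of the
audit the computer center could have run itself (field names and hashing
are invented stand-ins for whatever the real system stored): flag every
account whose stored password hash still matches the password generated
at account creation.

    # Sketch: flag accounts still using the password that was generated
    # (and printed!) when the account was created.
    import hashlib

    def hash_pw(password, salt):
        # stand-in for whatever one-way hash the system actually stores
        return hashlib.sha256((salt + password).encode()).hexdigest()

    def unchanged_accounts(accounts):
        """Return logins whose stored hash still matches the issued
        password (dict field names are invented)."""
        return [a["login"] for a in accounts
                if hash_pw(a["issued_password"], a["salt"]) == a["stored_hash"]]

    roster = [
        {"login": "cse110_01", "issued_password": "xq7f2p", "salt": "ab",
         "stored_hash": hash_pw("xq7f2p", "ab")},      # never changed
        {"login": "cse110_02", "issued_password": "m9k3rt", "salt": "cd",
         "stored_hash": hash_pw("newpass", "cd")},     # user changed it
    ]
    print(unchanged_accounts(roster))   # ['cse110_01']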

Still, it was lots of fun to see the look on the comp center director's face
when we handed him the printout and he realized what it was.

Alan Wexelblat
ARPA: WEX@MCC.ARPA or WEX@MCC.COM
UUCP: {seismo, harvard, gatech, pyramid, &c.}!ut-sally!im4u!milano!wex


Risks of raining computer printout

Martin Ewing <mse%Phobos.Caltech.Edu@DEImos.Caltech.Edu>
Thu, 30 Oct 86 09:43:26 PST
How many thousand sheets per printout dropped?  Indeed, this seems like a
brutal risk if the sheets aren't burst and/or shredded first.


Risks of raining computer printout

Peter G. Neumann <Neumann@CSL.SRI.COM>
Thu 30 Oct 86 10:37:48-PST
I might have noted in RISKS-3.90 that I once littered New York's Central
Park West with TWO MILES of printout during Charlotte Moorman's Avant Garde
Festival in 1967 — a two-mile long continuous computer-printed
human-composed visual poem.  My poet friend Emmett Williams and I did a
bunch of such computer-aided visual poetry in the late 60's.  (That year
Charlotte led the parade playing her 'cello suspended from helium balloons.)
I had rigged up my station wagon to have Bell Labs' computer music emanating
from roof-mounted speakers and computer-generated murals of Ken Knowlton on
the sides of the car, with Emmett nursing the printout out of the back
window to cover the middle stripe of CPW.  It was wonderful to see kids
rushing out between the moving vehicles, tearing off some of the printout
for souvenirs!  The computer RISK lay in the fact that our then-developing
Multics system bellied up for a day or so when — having used Ken Thompson's
QED to context-edit an incredibly lovely 7-language interwoven visual pun —
I was ready to prepare the printout.  A simpler substitute had to be used,
produced by an alternative means.  (The show must go on.)  PGN
