The RISKS Digest
Volume 9 Issue 2

Monday, 10th July 1989

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Re: A "model" software engineering methodology?
PGN
Stan Shebs
Victor Yodaiken
Dave Davis
Gideon Yuval
Jon Loux
Re: UK Defence Software Standard
Eugene Miya
Joshua Levy
Norm Finn
Exxon file deletions
Anonymous
Stalking the wary food shopper
David Gursky
Info on RISKS (comp.risks)

Re: A "model" software engineering methodology? (RISKS-8.86)

Peter G. Neumann <Neumann@KL.SRI.COM>
Mon, 10 Jul 89 09:01:23 PDT
I apologize to Stan Shebs and to RISKS readers for not having sufficiently
exercised my responsibility as moderator regarding the Shebs/D'Ippolito
exchange.  Sorry.  Occasionally one slips by.  Rich D'Ippolito's message was
less than circumspect.  I probably should have let them try to agree on a
counterposition and a rebuttal before troubling you all.  However, there are
some important issues raised.

A response from Stan Shebs follows, along with four other commentaries.


Re: A "model" software engineering methodology? (RISKS-8.86)

Stan Shebs <shebs@apple.com>
Fri, 7 Jul 89 12:02:57 PDT
I didn't expect such a violent reaction to my comments on cruise missile
software that appeared in RISKS-8.86, but several of my remarks really ought
to be clarified.  First off, I worked on ALCM 1981-82, fresh out of Texas
A&M, with no programming experience outside of research projects, and my
only education in software engineering having been to read "The Mythical
Man-Month".  Boeing did not require any training in their software
methodology (they probably offered some, but I don't remember), we just
learned on-the-job.  (Friends who are still there tell me that things have
improved in the past several years.)  My original article was written as
personal impressions, for a non-expert audience, and unfortunately Jon's
quoting couldn't make it into more of a learned treatise.

>  From: rsd@SEI.CMU.EDU

>  In RISKS 8.86, Jon Jacky quotes Stan Shebs:

<>    We supposedly had a "model" software engineering methodology; what I
<>    remember most clearly is that half the work was done on one flavor of IBM
<>    OS, and the other half done on a different flavor, and file transfer
<>    between the two was tricky and time-consuming.

>  The coupled clauses are unrelated, a compositional practice Mr. Shebs is
>  apparently quite fond of.

In this case, it was a subtle (clearly too subtle) literary device; there was
almost no relation between the official methodology and daily practice.  The
process of conforming to the DoD's requirements resulted in a fanatical
adherence to the letter of the standards, with almost no concern for actual
quality.  I had written some tricky code and proved its correctness; it got
thrown away because it was "too general".  We had an independent auditor who
checked over the documentation and found missing commas, while completely
ignoring the parts of the program design that parroted Fortran code without
saying anything about its structure.  I could go on, but this is supposed to be
a clarification, not an independent message.

>  [...]
>  Of course the program design should have been documented beforehand, but
>  recognizing that it is necessary to have for testing and maintenance
>  purposes is not stupid.  I have seen many systems where the software is very
>  old (or inherited) and must be re-documented to current standards.  What was
>  the case here?

The software was about a year old, I believe; probably the same standards
were in effect then, but I can't say for sure.  It was written under
considerable time pressure and almost entirely free of comments.  The
experience of trying to puzzle out the workings of 30k lines of mixed
Fortran, Cobol, and assembler was truly horrible.  Management knew it was
all a sham, but they just asked us to go along with it.  (Funny, 30k lines
doesn't seem like a lot anymore - times change...)

<>    (Regarding) statistics on software quality, the closest thing we had was
<>    maybe a count of problem reports (hundreds, but each report ranged from
<>    one-liners to one-monthers in terms of effort required).

>  Sigh.  There is no mention here whether this applies to _delivered_ product
>  or corrected production errors.  Is this his view of what constitutes
>  quality?

I believe the numbering of software problem reports started after delivery
of the first version, but I don't know for sure. The whole setup seemed pretty
disorganized; for instance, there was no formal system for module-level
testing, before or after delivery, and regression testing was limited to
rerunning a set of standard flight paths, which weren't complicated enough
even to test all the paths in the program code, let alone boundary cases.
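
To make the coverage point concrete, here is an invented fragment of the
flavor involved (the function and names are mine, not from the ALCM code):
a nominal flight path never drives the code into its limit branch, so
rerunning the standard paths proves nothing about that branch.

    /* Invented illustration of the coverage gap: standard flight
     * paths stay well inside the climb limit, so the clamping branch
     * is never executed by the regression suite, and a bug in it
     * would survive every rerun. */
    double commanded_climb_rate(double desired, double max_rate)
    {
        if (desired > max_rate)   /* boundary case: never reached by  */
            return max_rate;      /* nominal paths, never retested    */
        return desired;
    }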

>  This article was posted to illuminate "the accuracy/quality of strategic
>  weapons guidance systems", presumably by offering a coherent and reasoned
>  exposition.  Instead, it presents jokes, innuendo, and unsubstantiated
>  charges and conclusions in indefinite (and sloppy, such as the 1/2 inch
>  diameter missile) language such as:

It never occurred to me that anybody could possibly misinterpret
"6 meters long and 1/2 in diameter", but it must be possible.  For the
record, it's 1/2 *meter* (actually more like 2/3, but who cares).

<>    The difficulty of all this apparently didn't occur to anybody until after
<>    the missile was working...

This was what I heard from the people who were there at the time.  They may
not have been entirely accurate, but the objective evidence and the history
of the ALCM program support this.  Old documents seem to assume that laying
out a mission by hand in the traditional way is good enough.  In actuality,
the missile is so sluggish that it has to begin climbing long before it
reaches a hill, or it will end up plowing into the side (this actually
happened during testing).

<>    ...error accumulation over 2000 km is immense,...

We had graphs of just how bad inertial navigation gets over long distances.
The actual numbers are probably classified, but in general an inertially
guided missile would be useful only against cities; silos and other hardened
sites require much more accurate navigation techniques.
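
For a back-of-the-envelope sense of the scale (the instrument numbers here
are made-up round figures, not the real, probably-classified ones): a
constant accelerometer bias b integrates twice into a position error of
(1/2)bt^2, so even a tiny bias becomes enormous over a long flight.

    /* Back-of-the-envelope inertial drift: a constant accelerometer
     * bias integrates twice into 0.5 * b * t^2 of position error.
     * The 100 micro-g bias and 800 km/h cruise speed are illustrative
     * numbers only. */
    #include <stdio.h>

    int main(void)
    {
        const double bias  = 100e-6 * 9.81;    /* 100 micro-g, in m/s^2 */
        const double speed = 800e3 / 3600.0;   /* ~800 km/h, in m/s     */

        for (double km = 500.0; km <= 2000.0; km += 500.0) {
            double t   = km * 1000.0 / speed;          /* flight time, s     */
            double err = 0.5 * bias * t * t / 1000.0;  /* position error, km */
            printf("%6.0f km flown: ~%4.1f km drift\n", km, err);
        }
        return 0;
    }

With these figures the drift at 2000 km comes out in the tens of kilometers,
which is city-sized error, and is why unaided inertial guidance is credible
only against cities.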

>  Rich D'Ippolito

stan shebs                             shebs@apple.com


Ad hominem arguments and "model" s.e. methodology

victor yodaiken <yodaiken%ccs2@cs.umass.edu>
Fri, 7 Jul 89 01:55:27 EDT
I found the comments of Rich D'Ippolito in RISKS 9.1 ("A 'model' software
engineering methodology") to be unconvincing and ill-mannered.  Mr. D'Ippolito
is offended by some anecdotes posted in RISKS 8.86 which reflect poorly on
the design and implementation of a cruise missile (Jon Jacky, quoting Stan
Shebs).  The points which Mr. D'Ippolito references are:
    1) The development was carried out on 2 incompatible
    development platforms --- file transfer between the 2
    systems was "tricky and time-consuming".
    2) The "Program Design" was written after the program
    was written.
    3) The failure conditions of the program seem to have been
    poorly planned for, e.g., error accumulation over a long
    flight path, and the effects of snowfall on topography.
    4) Program quality was measured by a raw count of problem reports
    without reference to the complexity of the reported problems.

These are all significant problems, which cannot be dismissed by the ad
hominem arguments presented by Mr. D'Ippolito. Mr. Shebs is not the first
person to point out instances of sloppy software engineering in D.O.D.
projects, but I found his story interesting and informative. Mr. D'Ippolito
argues in the mode of a press secretary, rather than as a scientist.
Comments such as:

>Really, if Mr. Shebs's rambling demonstrates anything, it shows that the
>greatest risk is hiring inarticulate and confused programmers like himself
>who don't have the faintest idea what software engineering is.

are out of place and offensive.

    Victor Yodaiken


Shebs/D'Ippolito

Dave Davis <davis@community-chest.mitre.org>
Fri, 07 Jul 89 08:16:47 -0400
In the 6 July RISKS, Mr. D'Ippolito somewhat justifiably takes Mr. Shebs to
task for his criticisms of the cruise missile project of which Mr. Shebs was
a part.

However, one comment, relating to producing the design specification after
coding was completed, brought back memories of my time as one of the senior
staff during the mid-80s on the Navy's version of this missile, the Tomahawk.

In the view of the project's management, the required documentation was an
unnecessary burden placed on us hardworking developers by government
bureaucrats.  An example was the spec, which provided an English description of
the guidance software line-by-line.  It was essentially useless.  The guidance
software had been developed in the usual atmosphere of intense deadline
pressure, with the highest priority given to making the thing work.  As a
result, the code was not a textbook example of modularity and
understandability.  In fact, the documentation probably was irrelevant, since
the engineers and computer scientists used simulations of the guidance system
and software models of the guidance equations to develop the system.

My point is that these projects show that industry is only beginning to
recognize the size of the investment it is making in software, and the
technologies that investment requires.  Most defense contractors (there is a
Carnegie-Mellon survey on this) are only now coming out of the Stone Age of
producing software in marathon coding sessions while churning out reams of
documentation.

However, it is by no means clear that by applying what we think of as the
state of the art we will do much better.


personalities

Gideon Yuval 1.1114 x4941 <gideony@Microsoft.UUCP>
Fri Jul 7 11:20:34 PDT 1989
I think a large part of the comments by rsd@sei.cmu.edu in the 6/Jul/89
RISKS digest consisted of personal attacks on Stan Shebs; such attacks are
relevant to RISKS only to the extent that they discourage whistle-blowers,
lest the ayatollahs attack them too.  Was that the intention of these remarks?

Gideon Yuval, gideony@microsoft.UUCP, 206-882-8080 (fax:206-883-8101;TWX:160520)
             Paper-mail: Microsoft, 16011 NE 36th way, Redmond, Wa., 98073-9717


Re: A "Model" Software Engineering Methodology?

Jon Loux <JLOUX@UCONNVM.BITNET>
Fri, 7 Jul 1989 09:52:25 EST
In RISKS Digest 9.1, a certain gentleman responds to and quotes liberally
from another gentleman's account of his experiences in software engineering
for missile guidance systems.  The gist of the article seemed to be that the
original poster was incompetent and that his remarks were irrelevant,
derogatory, and totally without merit.  Ahem.  Isn't this just possibly
another example of shooting the messenger because of the message?

I once worked for a defense contractor that manufactured submarines for the
U.S. Navy (there are only two in this country; take your pick).  Quite often
there was a tremendous difference between The Way It Is and The Way It Is
Supposed To Be.  Sometimes design and manufacturing errors were found.
Sometimes they were buried.  But usually, the reputation (of the company and
the Navy) was more important than the viability of the product.

The RISKS involved are probably as old as engineering itself; that is, creating
an absurd or impossible situation, and then deriding the first person who
brings attention to the problem.  Where would the Emperor be today if someone
had not pointed out the 'design flaw' in his new clothes?

If we cannot learn from our mistakes, we just rename them; "Success". (1/2 :-)

                                      Jon Loux
                                      The University of Connecticut
Flames? Go ahead, bake my Cray.


Re: UK Defence Software Standard (RISKS-9.1)

Eugene Miya <eugene@eos.arc.nasa.gov>
Fri, 7 Jul 89 11:15:27 PDT
Sean Matthews brings back shades of when I was working on flight projects.
Dicey stuff.  All the examples he documents (starting with no dynamic memory
[I was amused that the list starts with this]) are standard operating
procedure (or so it is believed in the defense and aerospace arenas).  These
ideas date back decades (not just years).  Since NASA was mentioned (and
projects differ), I should mention that the policy of explicitly reading
machine code dates from the days of smaller memories.  The Agency is trying
to get away from this.  The code is still generated (I know where this
example comes from) by HAL/S compilers for the shuttle (HAL/S, BTW, does not
have dynamic memory or any of the other features in question [no
multiprocessor support, etc.]).

Sean's posting does raise quite a few other questions:

    Is there explicit range checking?
    Are there comments on performance?
    Are there specifications on clocks and interrupt handling?
    Etc.

As the classic summary of Sean's misgivings about the merchants of killing
and mayhem: it was Andy Mickel at U. Minn. who, in the early days of Ada,
published "Reliable software must kill people reliably."  [Pascal News]
                                                                 --eugene miya


Re: UK Defence Software Standard (RISKS-9.1)

Flame Bait <hplabs!joshua@Atherton.COM>
Fri, 7 Jul 89 13:41:14 PDT
Here are some comments on Sean Matthews's posting:

>1. There should be no dynamic memory allocation (This rules out explicit
>recursion - though a bounded stack is allowed).

As a practical matter, this requirement rules out all real software systems.
Also, I do not see why this makes software safer.  I think this will result in
less safe software, since there will be arbitrary limits on the length of
everything.  All last names must be 30 chars or less, all place names must be
40 chars or less, etc.  Larger buffer sizes will lessen the impact, but take up
more space.
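
To make that concrete, here is a sketch (the limits and names are mine, not
from the standard) of the fixed-bound style point 1 forces: every capacity
is a compile-time constant, and overlong input must be rejected or truncated
rather than grown.

    /* Sketch of the style "no dynamic allocation" forces: all storage
     * is static, every capacity is a compile-time constant, and input
     * that exceeds a limit is rejected.  Limits are illustrative. */
    #include <string.h>

    #define MAX_RECORDS  512   /* arbitrary ceiling, fixed up front         */
    #define MAX_NAME_LEN  30   /* "all last names must be 30 chars or less" */

    struct record { char name[MAX_NAME_LEN + 1]; };

    static struct record table[MAX_RECORDS];   /* no malloc() anywhere */
    static int n_records;

    /* Returns 0 on success, -1 when a fixed limit is hit. */
    int add_record(const char *name)
    {
        if (n_records >= MAX_RECORDS || strlen(name) > MAX_NAME_LEN)
            return -1;          /* cannot grow: reject the input */
        strcpy(table[n_records++].name, name);
        return 0;
    }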

>2. There should be no interrupts except for a regular clock interrupt.
>3. There should not be any distributed processing.
>4. There should not be any multiprocessing.

As a practical matter, these requirements rule out most (all?) real systems.
Also, I do not see why software which does none of these things is safer than
software which does all of them.  These requirements seem to be designed to
compensate for the (huge) limitations of current program verification
techniques.

>10. No optimising compilers will be used.

What is an optimising compiler?  This is a serious question.  GCC with no
arguments produces better code than PCC or Sun's CC; does that make it an
optimising compiler?  What if you give it a -O1 option, or a -O3 option
(which turns on more optimization)?  All compilers do some optimization.
Point 10
should say something like "All compilers used must be verified to the same
level of certainty as the programs they are compiling."  It is kind of useless
to verify a program and then run it through a compiler which is not verified.
Of course writing a compiler which does not use dynamic memory (see point 1)
would be an interesting exercise!
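
A classic illustration of why certifiers mistrust optimisers (a minimal
sketch; whether a given loop is deleted depends on the compiler and flags,
e.g. gcc typically does it at -O1 and above): a busy-wait delay has no
visible side effects, so an optimising compiler is entitled to remove it
entirely, while an unoptimised build spins as the author intended.

    /* The counted loop below has no visible side effects, so an
     * optimising compiler may legally compile it to nothing; making
     * the counter volatile forces every store to happen. */
    void delay_broken(void)
    {
        long i;
        for (i = 0; i < 1000000L; i++)   /* may vanish at -O1 and up */
            ;
    }

    void delay_kept(void)
    {
        volatile long i;                 /* stores must be performed */
        for (i = 0; i < 1000000L; i++)
            ;
    }

The same verified source can thus behave differently at different
optimisation levels, which is presumably what point 10 is groping at.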

Overall, I found these standards funny, not useful.  My conclusion is that
no safety-critical software can be written in the UK, since points 1 and 2
mean that none of it will be up to standard.

Joshua Levy   joshua@atherton.com   work:(408)734-9822   home:(415)968-3718


Re: UK Defense software standard

Norm Finn <ultra!norm@ames.arc.nasa.gov>
Thu, 6 Jul 89 19:01:05 PDT
Sean Matthews writes of the UK safety-critical software standard:

> 2-4 seem to leave no option but polling.  This is impractical,
> especially in embedded systems.  No one is going to build a fly by
> wire system with those sorts of restrictions.

Polling is absolutely the safest and most-used method for programming embedded
systems for safety-critical applications.  It is a commonly held fallacy that
the essence of a real-time embedded system is FAST response to external stimuli
-- that interrupts are therefore important.  The essence of real-time embedded
systems is a GUARANTEE of a specified, finite response time to EACH stimulus.

Many embedded systems divide real time into a hierarchy of time slots, and
execute fixed subroutines in each time slot (i.e. "polling").  This makes it
relatively easy to verify that the program can meet all response time
requirements.  Stimuli requiring slower responses are handled by routines that
run less often, and routines running more often handle stimuli requiring faster
response.  Every routine can be proven to run in less time than the length of
one time slot.
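
In rough outline (the routine names, rates, and tick source are invented for
illustration, not from any particular system), such a cyclic executive can
be as simple as:

    /* Minimal cyclic-executive sketch: a fixed-rate clock tick opens
     * each minor slot; fast routines run every slot, slower ones every
     * Nth slot.  Each routine must be shown to finish within a slot. */
    static void wait_for_clock_tick(void) { /* block until next timer tick */ }
    static void poll_fast_sensors(void)   { /* runs every slot             */ }
    static void poll_slow_sensors(void)   { /* runs every 4th slot         */ }
    static void update_displays(void)     { /* runs every 16th slot        */ }

    void scheduler(void)
    {
        unsigned slot = 0;
        for (;;) {                       /* the clock is the only stimulus */
            wait_for_clock_tick();
            poll_fast_sensors();
            if (slot % 4 == 0)
                poll_slow_sensors();     /* slower response requirements   */
            if (slot % 16 == 0)
                update_displays();
            slot++;
        }
    }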

Additional advantages:  The trivial scheduling algorithm makes the interaction
between the routines vastly easier to write and verify than is the case in a
system that can switch tasks at random times.  Response time bottlenecks can be
identified and addressed early in the development cycle.  It is easy to match
the requirements of the sources/sinks of stimuli/responses and the capabilities
of the embedded computer.  And on and on ...

In short, when a life depends on your system ...
   KISS (Keep It Simple [,|and] Stupid)
                                                            Norm Finn
Ultra Network Technologies, 101 Daggett Dr., San Jose, CA 95134 (408) 922-0100


Exxon file deletions

<[Anonymous]>
Fri, 7 Jul 89 09:45:56 PDT
While the previous posting in RISKS didn't mention this, the "destroyed
files" regarding the Exxon oil spill were apparently just the routine
recycling of a month-old (or older) backup dump tape.

Exxon has continued to state that they believe all files are still online or in
paper form.  Obviously the correct procedure would have been to explicitly
order the preservation of those dumps, and nobody wants to condone Exxon's
handling of this whole affair, but it is worth noting that the deletion in
question was apparently the result of routine computer center operations common
to many environments, not some "explicit" act of destruction.


Stalking the wary food shopper

David Gursky <dmg@lid.mitre.org>
Sun, 9 Jul 89 13:36:14 EDT
Today's (09 July 89) Washington Post has several articles about the latest
trend in foods:  Frequent Shoppers Programs.  For those of you not aware of
this, it is similar to Frequent Flyer Programs used by the airlines.  When you
buy an item at your local mega-chain supermarket, your purchase is noted in a
large customer database and you are given a credit on your account, and when
your account reaches certain defined levels, you receive coupons for various
foodstuffs.  Simple, right?  Wrong.

First, the markets are proposing to record more than a purchase of so many
dollars and cents.  They intend to record the specific brands and items you
bought.  Based on this information and your address, markets want to start
targeting mailings of flyers and coupons to you.  If their records show you
have started to buy baby food recently, the AI routines that examine the
database will note this and flag your record to be sent coupons on other
baby products, such as diapers and lotions.
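
Stripped of the AI label, the flagging rule is presumably something of this
shape (a sketch; the structures and category codes are invented, since the
Post article gives no technical detail):

    /* Hypothetical flagging pass over one account's purchase history:
     * seeing a trigger category marks the account for a targeted
     * mailing.  All names and codes are invented for illustration. */
    #include <stdbool.h>

    enum category { CAT_BABY_FOOD, CAT_DIAPERS, CAT_FLOUR /* ... */ };

    struct purchase { enum category cat; /* plus brand, price, date */ };

    struct account {
        struct purchase history[1000];
        int n_purchases;
        bool send_baby_coupons;          /* the flag on "your record" */
    };

    void flag_for_mailings(struct account *a)
    {
        for (int i = 0; i < a->n_purchases; i++)
            if (a->history[i].cat == CAT_BABY_FOOD)
                a->send_baby_coupons = true;
    }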

Targeted mailings are in fact not a terrible thing.  If markets want to
spend money sending paper through the mail to people who may or may not use
it, there are far more useless ways to spend money.  The problem is the
second application.

It seems that the markets want to sell this information to the manufacturers
(the Campbell Soups, the Nabiscos, the General Foods, and so on) for use by
their own marketing people.  This also raises the traditional right to privacy
issues.

I wonder how much longer it will be until I cannot go into my local Giant
Food and buy a bag of flour, because their demographic survey shows that no
one eats in my apartment building.
