The RISKS Digest
Volume 8 Issue 13

Sunday, 22nd January 1989

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Gigabit superhighway/worms
Vint Cerf
IAB Ethics DRAFT
Vint Cerf
Space shuttle computer problems, 1981--1985
Jon Jacky
F-16 that can't stall falls from sky
Scot E Wilcoxon
Re: China accused of software piracy
Jim Olsen
Losing systems
Dale Worley
Chris Lewis
Re: Structured Programming
John Mainwaring
Mark Rosenstein
Steve Pozgaj
Info on RISKS (comp.risks)

Gigabit superhighway/worms (RISKS-8.9)

<CERF@A.ISI.EDU>
23 Jan 1989 00:09-EST
In RISKS-8.9, Brad Blumenthal asks whether our legislators and their staff are
aware of the similarity between the internet and the proposed multi-gigabit
superhighway. I can assure Mr. Blumenthal that the question has arisen and has
been posed to several members of the research community by members of the
Congress responsible for scientific and technical matters.  The worm affair
made a strong impact on policy makers.
                                                  Vint Cerf


IAB Ethics DRAFT (RISKS-8.8)

<CERF@A.ISI.EDU>
23 Jan 1989 00:09-EST
The copy of the IAB statement on ETHICS was a DRAFT copy circulated for
internal comment by the IAB before final editing and release.  I would be
very interested to know how a copy happened to fall into Mr. Stoll's hands.
Readers should be advised that the copy they saw is still subject to change
until ratified by the IAB.
                                           Vint Cerf

  [By the way, subsequent to the appearance of RISKS-8.8, Cliff noted to me
  that he had accidentally omitted the author's name from the DRAFT.  PGN]


Space shuttle computer problems, 1981--1985

<jon@june.cs.washington.edu>
20 Jan 1989 15:27:13 EST
Here are excerpts from an article that appeared a week before the flight
of the space shuttle Discovery last September:

NASA's close calls: lessons learned? by Richard Doherty, ELECTRONICS
ENGINEERING TIMES, September 6, 1988, pp. 4, 8.

... The House Science and Technology committee convened its own investigation
of the (January 1986 Challenger accident) just days after the Rogers commission
concluded its four-month effort.  (Its findings are reported in) `Investigation
of the Challenger Accident, House Report 99-1016'.  That report indicates ...
dozens of failures in the shuttle's General Purpose Computer (GPC) and Avionics
systems. ... NASA has reviewed these flight anomalies and decided that they fit
within the acceptable risk criteria.  Thus, it has not made any significant
changes to system hardware or software for Discovery's launch. ... Most
engineers tracking the shuttle program can recall very few reported avionics
and computer-system failures during the program's 24 completed missions.
Nevertheless, more than 700 anomalies involving computers and avionics have
been logged by NASA. ... 

[Here follow just a few of many examples from the EE TIMES article.  Most seem
to involve hardware or sensor failures. Several examples in the article are not
computer- or even avionics-related. ] 

STS-6, April 4, 1983: ... Landing gear must be manually deployed after computer
fails to trigger its descent.

STS-9, November 28, 1983: Four hours before re-entry, pilot orients orbiter
using RCS (Reaction Control System) steering jets. After jets fire, one
computer crashes.  A few minutes later, a second goes down [ There are four
redundant GPC computers running identical software plus a fifth GPC running
different backup software - JJ].  Pilot John Young delays landing while craft
drifts in space.  Then one of three Inertial Measurement Units fails.  (Young
testified three years later: `Had we then activated the Backup Flight Software,
loss of vehicle and crew would have resulted.'  He now says problems have been
resolved.  Post-flight analysis shows each GPC failed when RCS jet motion
jarred a piece of solder, shorting CPU boards). 

Before landing, the second of three APUs (Auxiliary Power Units) fails.  Fire
and explosion occur while orbiter is parked at its landing site.  ... NASA
engineers label this incident a `double-failure scenario that just beat all the
probability odds.' ...

STS-19 (51-F) July 29, 1985:  Three minutes into ascent, a failure in one
of two thermocouples directs computer shutdown of center engine.  Two minutes
later, engine chamber pressure is indicated as zero.  Mission control decides
to Abort to Orbit. ... Challenger is in orbit 70 miles up, 50 miles lower
than planned.  Had shutdown occurred a half-minute earlier, mission would
have had to abort over the Atlantic.  (NASA has reset some of the binary
thermocouple limits via software changes).

STS-13 (41-G), November 5, 1984: ... Landing gear must be manually deployed
after computer fails to trigger its descent.

- Jonathan Jacky, University of Washington 
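
The bracketed note above describes the shuttle's redundancy scheme: four GPCs
running identical software, cross-checked against one another, plus a fifth
running independently written backup code.  At its core that is majority
voting over redundant channels.  The fragment below is only a minimal sketch
of the idea, in Python, with invented channel names, values, and tolerance;
it is not NASA's actual voting logic.

    # Minimal sketch of majority voting over redundant channels.
    # Channel names, values, and the tolerance are invented for illustration.
    def vote(outputs, tolerance=0.0):
        """Return (majority_value, suspect_channels) for a dict mapping
        channel name -> that channel's computed output."""
        groups = []                       # list of (representative_value, [channels])
        for channel, value in outputs.items():
            for rep, members in groups:
                if abs(value - rep) <= tolerance:
                    members.append(channel)    # agrees with an existing group
                    break
            else:
                groups.append((value, [channel]))
        rep, majority = max(groups, key=lambda g: len(g[1]))
        suspects = [c for c in outputs if c not in majority]
        return rep, suspects

    # Example: GPC-2 produces a bad value after a (hypothetical) hardware fault.
    outputs = {"GPC-1": 101.0, "GPC-2": 250.0, "GPC-3": 100.9, "GPC-4": 101.1}
    print(vote(outputs, tolerance=0.5))     # -> (101.0, ['GPC-2']): GPC-2 voted out

Because the four primary machines run identical software, a common software
fault defeats the vote; that is the reason the fifth GPC runs separately
developed backup software.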


F-16 that can't stall falls from sky

Scot E Wilcoxon <sewilco@datapg.MN.ORG>
Fri, 20 Jan 89 17:07:16 CST
Reprinted with permission from the Tampa Tribune, 23 December 1988, Page 1B

Crash of F-16 still unexplained by MacDill staff
By STEVE HUETTEL, Tribune Staff Writer

   TAMPA - Moments after an instructor warned 1st Lt. David S. Johnson that he
might be flying too slowly, the student pilot's F-16 fighter stalled and
plummeted into the Gulf of Mexico.  Johnson ejected safely Sept. 9 and was back
in the cockpit within a month.  A month before he is due to graduate, MacDill
Air Force Base officials still won't say whether the 24-year-old pilot was
negligent or if an onboard computer designed to keep the $9.5 million jet from
stalling failed.  But, crashing an F-16 isn't necessarily grounds for dismissal
from the six-month course, they say.
   "It's an environment where they're still learning the airplane, and
mistakes can happen," said Capt. Dian Lawhon, a MacDill spokeswoman.  "At
that point, they might not have acquired some of the skills they need."
Students can be yanked at any time, she said, for "gross pilot error."
   Word that Johnson's jet stalled surprised the F-16's manufacturer and
former pilots familiar with the fighter.
   A computer inside the fighter should override any commands that would cause
a stall, said Joe Thornton, a spokesman for General Dynamics in Fort Worth,
Texas.  "If a pilot tells the airplane to do anything the airplane doesn't want
to do, the computer will take control of the airplane from the pilot," he said.
"The pilots I talked to said you can't (stall) it."
   But the computer is programmed only for common, dangerous flight
configurations, said 1st Lt. Susan Brown, a MacDill spokeswoman.  "It's not set
up for every possible way you can get yourself into trouble," she said.
Johnson, of Parker, Colo., did not return telephone calls to comment on the
accident.
   On the morning of Sept. 9, he was practicing fighter maneuvers with an
instructor in a second plane over the Gulf west of Fort Myers.  It was
Johnson's seventh solo flight in the F-16.  He had flown more than 200 missions
in trainer aircraft, earning outstanding evaluations from his teachers at basic
flight and fighter preparation schools.
   Four times, the pilots flew a downward corkscrew maneuver in which Johnson
tried to get behind the other aircraft to line up a gun or missile shot.
Something went wrong as they broke off the exercise the last time.  The Air
Force won't release an investigation board's findings or statements by the
pilots.  But a heavily censored version of the report obtained under the
federal Freedom of Information Act states that Johnson's F-16 stalled after he
finished the last maneuver.
   A drawing of the maneuver doesn't show Johnson's speed or altitude.  The
instructor pilot he trailed started at more than 400 mph but slowed to 150 mph
as he climbed to 14,700 feet at the end of the exercise.  "We got a little slow
there, check your air speed," the instructor warned in a transcript of his
radio conversation with Johnson.  The student acknowledged the message, then
disappeared from the instructor's sight.
   The F-16 can stall at speeds of 230 mph or slower, depending on its weight
and angle of flight, MacDill officials said.  They declined to say what speed
Johnson was flying when his plane stalled or the altitude at which he ejected.
The report drawing depicts Johnson's jet in a near-vertical climb just before
it stalled.  "That should never have happened," said Howard Acosta, a former
Navy pilot and St. Petersburg attorney who successfully sued General Dynamics
on behalf of an F-16 pilot's widow last year.  "The computer should change the
angle of attack and get the wing flying again."
   Unlike older aircraft, the F-16 has a fly-by-wire system that controls the
flaps and engines through electrical impulses.  The pilot's commands go through
a computer that prevents the aircraft from getting into situations where it
could stall or break apart from excessive gravity forces, say pilots.  "It's
designed not to stall," said a former F-16 pilot.  "It's made to recover.  You
can take your hands off, and it'll fly."

Scot E. Wilcoxon  sewilco@DataPg.MN.ORG    {amdahl|hpda}!bungia!datapg!sewilco
Data Progress    UNIX masts & rigging  +1 612-825-2607    uunet!datapg!sewilco
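
The behavior the General Dynamics spokesman and the former pilots describe -
a flight computer sitting between the stick and the control surfaces,
refusing commands that would take the aircraft past its limits - is usually
implemented as an envelope (angle-of-attack) limiter in the fly-by-wire
control loop.  The fragment below is a minimal sketch of that idea only; the
limit, gain, and numbers are invented and bear no relation to the F-16's
actual control laws.

    # Toy angle-of-attack (AoA) limiter for a fly-by-wire pitch channel.
    # The limit and gain below are invented for illustration.
    MAX_AOA_DEG = 25.0      # hypothetical stall-protection limit
    LIMITER_GAIN = 0.5      # nose-down command per degree past the limit

    def limited_pitch_command(pilot_pitch_cmd, measured_aoa_deg):
        """Pass the pilot's pitch command through, unless the measured angle
        of attack exceeds the protection limit, in which case override it."""
        if measured_aoa_deg <= MAX_AOA_DEG:
            return pilot_pitch_cmd
        overshoot = measured_aoa_deg - MAX_AOA_DEG
        # Past the limit: ignore nose-up demands and command nose-down.
        return min(pilot_pitch_cmd, 0.0) - LIMITER_GAIN * overshoot

    # Pilot pulls full back (+1.0) while AoA is already 30 degrees:
    print(limited_pitch_command(1.0, 30.0))   # -> -2.5: the computer lowers the nose

A limiter of this sort protects only against the situations its designers
anticipated, which is Lt. Brown's point that the computer is "not set up for
every possible way you can get yourself into trouble."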


Re: China accused of software piracy

Jim Olsen <olsen@XN.LL.MIT.EDU>
Sun, 22 Jan 89 13:50:51 EDT
>American companies are losing "many millions" of dollars in potential
>business in China because the companies' computer software has been
>widely pirated...  China has no copyright law of its own...

This is an example of the risk in assuming that the laws of one's own country
apply (or ought to apply) everywhere.  Copyright and patent protection are,
fundamentally, matters of internal law for each country.  Foreign copyrights
exist only via international copyright convention.

In a nation which is not signatory to a copyright convention, foreign copyright
is invalid.  However, authors in such a nation receive no international
copyright protection.  Each nation decides if such a tradeoff is in its best
interests.

Thus, copying American computer programs in China is perfectly legal,
and therefore does not deserve the term 'piracy'.  American law does
not apply in China, even if some American companies would like it to.


Losing systems

Dale Worley <worley@compass.UUCP>
Fri, 20 Jan 89 11:27:40 EST
Jerome Saltzer remarks that the domains of application of computers have been
enlarging at an enormous rate.  The rate at which computers become cheaper
relative to the number sold (the "experience curve") is no different from that of
any other product.  What is different about computers is the extraordinary
price-sensitivity of potentially computerizable applications - a tiny drop in
the price of computers introduces whole new application domains.  This puts the
computer industry into a rapid positive feedback loop of dropping prices,
widening applications, and increasing unit sales.
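
As a rough illustration of that feedback loop (all numbers below are
invented, not industry data): suppose unit cost falls a fixed fraction each
time cumulative volume doubles - the usual form of an experience curve - and
suppose demand is highly elastic because each price drop brings new
application domains within reach.

    # Toy experience-curve feedback loop: falling prices widen the market,
    # and higher volume drives prices down further.  All numbers are invented.
    import math

    price = 100.0             # unit price, arbitrary units
    cumulative_units = 1.0    # cumulative units shipped
    annual_units = 1.0        # units shipped this year

    LEARNING = 0.8            # cost falls to 80% each time cumulative volume doubles
    ELASTICITY = 2.5          # fractional demand growth per fractional price drop

    for year in range(1, 6):
        cumulative_units += annual_units
        new_price = 100.0 * cumulative_units ** math.log(LEARNING, 2)
        price_drop = (price - new_price) / price
        annual_units *= 1.0 + ELASTICITY * price_drop
        price = new_price
        print(f"year {year}: price {price:6.1f}  units/yr {annual_units:6.2f}")

Even with these made-up figures the loop is visible: each round of price
decline enlarges the market enough to drive the next round.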

I also agree with his remarks on how to manage computer-based projects, but you
must remember that one result of these policies is that one will get somewhat
less bang for the buck than the state of the art would tempt one to expect.

As far as managers are concerned, I'm reminded of a comment by Lester Thurow,
dean of the MIT Sloan School of Management, regarding their new "managing technology"
degree program:  Many managers want to learn how to manage technology, but few
want to learn about technology.

Certainly, American managers have little technical training, and few
(especially in the upper echelons) want to acquire any.  This has been blamed
for the inability of American companies to deal with rapidly changing
technology.  In contrast, German and Japanese managers often have technical
training and are reputed to be better at dealing with changing technology.  Do
they have a lower rate of computer project failure?

Dale Worley, Compass, Inc.                      compass!worley@think.com


Re: Losing systems....

Chris Lewis <clewis%ecicrl%gate%tmsoft@csri.toronto.edu>
20 Jan 89 20:12:09 EST (Fri)
In Risks 8.6, Vince Manis postulates a number of hypotheses about how "megabuck
systems ... go into the trashcan" after seeing reports of two such in Risks 8.4.

I can add another reason with an example (actually a "near failure"):

    A government creates the system by executive edict, without
    any technical study - especially in non-technical areas
    where the problem isn't well understood.

I expect this is the actual reason for the failure of the first example
in Risks 8.4.

The second example in Risks 8.4 is probably simply that there was *no* design
control.  In projects where there are multiple "customers", it is extremely
important to have firm control vested in *one* person or small group of people.
If you have dozens of people yammering for this feature or that, and nobody can
or will say "no" to some of them, you're in *deep* trouble.  The report on this
system implied that this was one of the main reasons for failure.

Which is also why some language standards are so big....

My example:

The current incarnation of the Ontario Health Insurance Plan (Government run
health insurance system, OHIP for short) was created by Government legislation
(Ontario Health Insurance Act, 1972 - I think), to be in operation
approximately 6 months later.  At the time, the Ministry of Health didn't have
much of a DP dept., nor had there been *any* technical study.

When I studied this system for a Royal Commission back in '79, I can't help 
remembering how awestruck I was that they actually had the monster
in operation at 6 months.  Startup from zero staff, resources, or facilities.
Awesome.  They then paid for it with two or three years of continuous
firefighting.  The only reason that they succeeded was that they had
lucked into some of the best DP people/managers I've *ever* met.  

As well as some of the worst senior administrative people I've ever had the
misfortune to meet.

This application is still probably the biggest single DP application in
the entire province - 13 master files (one of which was 70 reels of 6250
BPI tape back in '79), oh, about 12 main programs, and it had to be run on a
48-hour cycle.  It took somewhere near 24 hours to run on their machine
as of '77: a 370/168, I believe.

The head analyst gave me a report discussing a lot of this, including
the comment "systems usually are obsolete and need to be replaced within
3-5 years - this one has already outlived its lifespan by 5".

Last I heard, they're still running effectively the same stuff.... 
(10 years later)

Chris Lewis, Elegant Communications Inc., Markham, Ontario, Canada, 
{uunet!attcan, utzoo}!lsuc!{gate!eci386, ecicrl}!clewis


re: Structured Programming

John (J.G.) Mainwaring <CRM312A@BNR.CA>
20 Jan 89 16:16:00 EST
In the replies to Karsh's article published to date, several interesting points
were made, but one clear statement of objective was missing.  The methods which
have come to be known as structured programming were intended to avoid the use
of what were then recognized as error-prone constructs.  This is in the spirit
of analyzing aircraft accidents and changing instrument or control designs
which pilots tended to misunderstand or misuse.

There may well be those who have lost track of the idea that the structuring of
a program should break the job down into segments which are small enough to
understand, and eliminate hidden interactions between segments which make it
difficult to understand how they fit together.  It is possible to indent
beautifully, avoid gotos, keep modules under a page, and use data structures
that make it totally impossible to understand how it all fits together.  The
larger the system, the greater the difficulty of creating an overall design and
ensuring that the parts really fit together in an understandable way, i.e., that a
structure really exists at ALL levels.
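
To make that concrete, here is a small, deliberately contrived example (in
Python; the scenario and names are invented).  Both versions are short,
indented, and goto-free, but the first hides an interaction through shared
state while the second makes the dependency visible in the interface.

    # Version 1: tidy-looking, but two routines interact through a hidden global.
    _discount = 0.0

    def set_seasonal_discount(rate):
        global _discount
        _discount = rate                  # silently changes behavior elsewhere

    def invoice_total(prices):
        return sum(prices) * (1.0 - _discount)    # result depends on call history

    # Version 2: the same job with the dependency made explicit.
    def invoice_total_explicit(prices, discount):
        return sum(prices) * (1.0 - discount)     # everything it uses is visible

    print(invoice_total([10.0, 20.0]))                  # 30.0 ... for now
    set_seasonal_discount(0.1)
    print(invoice_total([10.0, 20.0]))                  # now 27.0 - but why?
    print(invoice_total_explicit([10.0, 20.0], 0.1))    # 27.0, and the reason shows

Neither version breaks any of the superficial rules, but only the second
keeps the interaction between segments where a reader can see it.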

Does anyone know of recent studies, based on current languages used in a
nominally structured fashion, of what errors are most common and what
disciplines seem most likely to avoid them?  Such articles used to be common
at one time.
Perhaps now would be a good time for a few more, preferably in some of the
popular as opposed to academic magazines.  The converted may enjoy a good
sermon, but it has the chance to do more good when it reaches a wider audience.

My views are probably my own but only coincidentally those of my
employer or anyone else.


Structured Programming, Object Oriented Programming: A quote

Mark Rosenstein <rosenstein@mcc.com>
Sat, 21 Jan 89 08:27 CST
David A. Moon from the foreword to "Object-Oriented Programming in
Common Lisp" by Sonya E. Keene [an interesting book, by the way]:

  The nature of object-oriented programming is such that it is most beneficial
  for large programs that are written by multiple authors and are expected to
  last a long time. The ease of implementing a small, simple program does
  not much depend on what programming methodology is employed, and one who has
  dealt only with small programs may not see any point to the object-oriented
  discipline. However, anyone who has been through the design, development,
  documentation, testing, and maintenance of a large software system in a
  non-object-oriented fashion, and then has experienced the same process in an
  object-oriented system, will understand why there is so much interest in
  object-oriented programming. It isn't magic, but it is a good technique for
  organizing large software systems and making them comprehensible.

I believe the above is also exactly true with respect to structured programming.
Mark.


Specious Arguments and Structured Programming

Steve Pozgaj <ames!uunet!dmnhack!dmnboss!steve%pasteur.Berkeley.EDU@ucbvax.Berkeley.EDU>
Fri, 20 Jan 89 08:55:40 EST
I have always enjoyed controversial debate, *but*, there is a major
difference between controversial debate and provocation.  I must say that I
find Bruce Karsh's posting in RISKS 8.8 simple provocation.  It is the kind
of statement that forces a "bite your tongue and count to 10" reaction.  Why?

Provocation can only lead to "heat" in arguments, not "light".  In this
regard, I agree wholeheartedly with Jim Horning's subsequent reply that we'd
all be better off, if we're to discuss structured programming, having a
discussion based on "light" issues, not "heat" issues.  You know, I'd bet
Karsh had his tongue just about puncturing his cheek when he wrote his
piece.  Surely he can't have been serious?  Either that or he's never
produced code bigger than a student programming assignment. (Wonder how it
passed with all those left-margin aligned GOTO's?-)

In the real world of programming, systems are often very complex, as well
as complicated.  To not bring a disciplined attitude to their construction
is suicide.  I learned structured programming at University, from people
such as Jim Horning.  It is one of many disciplines.  It works, as do
others.  In my opinion, it works better ... but, maybe, not for everybody.
However, I cannot imagine program construction without *some* discipline.

In any case, I view Karsh's provocation as one of "form", not "substance".  He
argues [very speciously] about indenting, variable naming, and other "rules"
which all pertain only to form.  This is like attacking literature based on
rules of grammar (e.g. saying that only "free form" prose is valid poetry
and that rhyming couplets produce garbage).  Why waste the time?  Any
conclusion can be drawn from incorrect premises, which is exactly what Karsh
does.  By stating that SP isn't about correctness, but about maintainability,
he goes on to draw all sorts of silly conjectures.  So what?           [...]
