The RISKS Digest
Volume 13 Issue 51

Wednesday, 20th May 1992

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Autopilot Flaw
Jaap Akkerhuis
GAO report on C-17 software
James Paul
Big Brother in The Netherlands
Jan L. Talmon
Keystroke capture
Mark Rasch
Risk of serving lunch to the First Lady
Timothy Petlock
Re: TRW
Willis H. Ware
comp.risks WAIS servers available
Scott Draves
Re: Not enough trained computer experts
Fred Cohen
Re: Yet more Software-in-the-Air scares
Pete Mellor
Martyn Thomas
REMINDER on COMPASS '92: Conference on Computer Assurance
Laura Ippolito
Info on RISKS (comp.risks)

Autopilot Flaw

Jaap Akkerhuis <jaap@research.att.com>
Mon, 18 May 92 12:31:33 D
Older Boeing 747 Airplanes Suspected of Diving Due to Design Flaw

   SEATTLE (AP) - The Boeing Co. should redesign the autopilot system on
hundreds of 747 jumbo jets because of a flaw that could send the planes into a
dive, the National Transportation Safety Board said.
   The board has asked the Federal Aviation Administration to order the
redesign, NTSB chairwoman Susan Coughlin said Thursday.
   The board investigated an incident in December in which a 747-100 cargo jet
rolled to the right and dove 10,000 feet from an altitude of 31,000 feet while
on a flight from Anchorage to New York.
   Stray signals told the autopilot to put the plane into a roll, according to
Coughlin, who cited tests by Canadian authorities on the Evergreen
International Airlines aircraft. It is unknown what caused the signals, she
said.
   The plane landed safely.
   The FAA will decide within 90 days whether to order the redesign, spokesman
Dave Duff said.
   ``We believe the autopilot system is safe,'' Boeing spokesman Chris Villiers
said.
   The NTSB request includes the systems on 724 airplanes delivered between
1969 and the late 1980s. A different autopilot system is used now.


GAO report on C-17 software

James Paul <jpaul@nsf.gov>
Mon, 18 May 92 11:58:45 EDT
Those interested in the C-17 software report from GAO can get a free copy by
calling (202) 275-6241 and asking for report IMTEC-92-48, dated May 7, 1992.
The title is "Embedded Computer Systems: Significant Software Problems on C-17
Must be Addressed."  Alternatively, call your Congressman's or Senator's
district office and ask them to get it for you from GAO.

P.S. [for PGN, but relevant here in case you try to send mail to James!]
  [For reasons DELETED, ] messages going to PAUL@NOVA.HOUSE.GOV have
  been wafting off into the electronic ether somewhere.  We are supposed to be
  back on-line again sometime soon, but until then I've been downloading RISKS
  from the archives to try and keep up.  You'll love the C-17 report — GAO
  says "The C-17 is a good example of how _not_ to approach software
  development when procuring a major weapons system."  It has most of the usual
  problems — underestimated risks, failure of the customer to exert control,
  poor documentation.  Great reading material for classes in software
  development.

-- James Paul (House Science Committee)


Big Brother in The Netherlands

"Jan L. Talmon" <MFMISTAL@rulimburg.nl>
Tue, 19 May 92 09:18 MET
Today's [19th of May] issue of the "Volkskrant", a quality Dutch newspaper,
carries an article reporting that the Departments of Justice and Traffic are
studying the possibility of introducing smart cards to detect, among other
things, violations of the traffic laws such as speeding (quite common on the
Dutch highways), running red lights, number-plate fraud, etc.  The system
could also check whether the car owner has paid his insurance premium
(obligatory in The Netherlands) and his road tax, and whether the car has had
its yearly technical inspection (APK).

The system under study consists of a smart card to be attached to the car,
detectors in or near the roads and a central computer system.

The article says: "A major drawback is the possible feeling that `Big Brother
is watching you'.  By installing a privacy code, data on law violations will
only be transferred to the relevant organizations.  Another problem is the
value in court of information obtained by electronic means and stored in
magnetic form.  Currently, it is up to the judges to weigh the information
provided by computer systems."

The article ends with: "The organizational consequences, however, are
still completely unclear.  Of course, attention should be paid to the
protection of the system against tampering."

Risks.... obvious!!!

Jan Talmon, Dept. of Medical Informatics, University of Limburg, Maastricht
The Netherlands       EMAIL: Talmon@MI.Rulimburg.nl      [Translated by JT]


Keystroke capture

<Rasch@DOCKMASTER.NCSC.MIL>
Wed, 20 May 92 16:03 EDT
There has been a lot of talk on the net (and off the net) about whether it is
legal or proper for a system administrator to capture the keystrokes of
intruders/trespassers who are using their system to break into the systems of
others.  We all remember Cliff Stoll's exploits in "The Cuckoo's Egg", in
which he traced the German hackers through LBL by keystroke capture and then
notified downstream users that they were being attacked.

Several people (and organizations) have taken the position that keystroke
capture both violates privacy rights and constitutes illegal electronic
surveillance.  I believe that, with respect to *intruders*, both of these
arguments are specious.

Fourth Amendment

The principal protection against *governmental* intrusions into privacy rights
is the Fourth Amendment to the constitution which provides that:

  The right of the people to be secure in their persons, houses, papers, and
  effects, against unreasonable searches and seizures, shall not be violated,
  and no Warrants shall issue, but upon probable cause, supported by Oath or
  affirmation, and particularly describing the place to be searched, and the
  persons or things to be seized.

It is important to note that this applies only to searches performed by the
government, Burdeau v. McDowell, 256 U.S. 465, 475 (1921), even if the
government is not acting in a law enforcement capacity, New Jersey v. T.L.O.,
469 U.S. 325, 336 (1985).  Thus, to the extent a sysop is not a "government
agent", the Fourth Amendment is not implicated.

Also, in order for there to be a Fourth Amendment violation, the individual
must have exhibited an actual subjective expectation of privacy (Katz v. U.S.,
389 U.S. 347, 361 (1967) (Harlan, J., concurring)) and society must be prepared
to recognize that expectation as objectively reasonable.  An intruder should
have neither a subjective expectation of privacy, nor should society recognize
any expectation of privacy as "reasonable."  Thus, if you break into my system,
I should be able not only to kick you off, but also to monitor what you do on
my system.

Finally, the general sanction for violation of the Fourth Amendment is
suppression of the illegally seized evidence and its fruits. Weeks v. U.S., 232
U.S. 383, 398 (1914) (federal search); Mapp v. Ohio, 367 U.S. 643, 655 (1961)
(state search).  Thus, a private keystroke capture of an intruder would not
violate the Fourth Amendment.

Electronic Surveillance

In 1986 Congress amended the Electronic Communications Privacy Act to prohibit
the unlawful interception of electronic communications, including e-mail and
the like.  In general, the law, contained in Title 18 of the United States
Code, Section 2511, prohibits the interception of wire, oral or electronic
communications.  HOWEVER, there are several provisions which would permit
keystroke monitoring in certain circumstances.

First, 18 U.S.C.  2511(2)(a)(i) notes that:

  It shall not be unlawful under this chapter for an operator of a switchboard,
  or an officer, employee, or agent of a provider of wire or electronic
  communication service [bbs operator] . . . to intercept, disclose or use that
  communication in the normal course of his employment while engaged in any
  activity which is necessarily incident to the rendition of his service or to
  the protection of the rights or property of the provider of that service,
  except that a provider of wire communication service to the public shall not
  utilize service observing or random monitoring except for mechanical or
  service quality control checks.

While this statute is not a model of clarity, and fails to define key terms
such as who is a *provider* of electronic communication service (the network
administrator? the sysop?), it appears to permit electronic interception and
keystroke capture if this is necessary to protect the rights and property of
the provider of the service.  If the intruder is breaking into the computer of
*another* (not the provider) and the provider can easily terminate this
unauthorized use, then it could be argued that the keystroke capture is not
necessary to protect *his* property.  However, the statute uses the term
"necessarily incident to ...", not "necessary to", and, in light of the strong
possibility of downstream liability to the provider for somehow permitting the
intruder to use his system to break into another's, a strong argument can be
made that keystroke monitoring of intruders is reasonable, prudent, and
necessarily incident to the protection of rights and property.

In addition, 18 U.S.C.  2510(13) defines a "user" of electronic
communications as:

     any person or entity who -

     (A)  uses an electronic communication service; and

     (B)  is duly authorized by the provider of such service
          to engage in such use.

Since an intruder is not authorized to use the service, he is not a "user"
entitled to protection under the statute.  Finally, while warning banners are
helpful to demonstrate a lack of authorization to use a particular system, they
are not required to demonstrate a lack of authorization any more than "No
trespassing" signs are necessary to demonstrate a lack of authorization for an
individual to, for example, break into your house (admittedly a simplistic
analogy).

This is, of course, only part of the story.  Many states have privacy statutes,
and their own definitions of illegal electronic interception, and this does not
address potential civil liability to users for excessive keystroke capture.
However, I believe that if keystroke monitoring is accomplished in a reasonable
and prudent fashion, it would not run afoul of either the constitutional or
statutory provisions.  Let the trespasser beware!!!

Mark Rasch, Esq., Arent Fox Kintner Plotkin & Kahn    [Std. Disclaimer]


Risk of serving lunch to the First Lady

Timothy Petlock <timdude@cs.wisc.edu>
Mon, 18 May 1992 17:25:32 GMT
Yes, that's right.  Serving lunch to Barbara Bush can cause all sorts of
problems from the past to be dug up.  My roommate and I found out firsthand on
Saturday afternoon.  He called me 20 minutes after I dropped him off at work,
saying "It seems I'm in jail.  Can you come downtown and bail me out?  Bring
$220."

The cause?  It seems they did a background check on all the hotel employees
that would be involved in the function that day.  He had bounced one too many
checks at a grocery store in the northern Wisconsin town where he lived — a
year and a half ago.  The checks were all paid before he moved down here and he
had no idea that any charges had been filed.


Re: TRW (Culnan, RISKS-13.48, Loshin, RISKS-13.50)

"Willis H. Ware" <willis%iris@rand.org>
Mon, 18 May 92 11:05:38 PDT
Mary Culnan reported the lengthy list of items that TRW asks for in order
to receive the free credit report.  Peter Loshin reported that he found
his credit report satisfyingly "sparse".  TRW is reported to be in the
information-sales business, which would imply large and quite complete
records.  How does one reconcile all of that?

The Fair Credit Reporting Act requires that a copy of the "credit report"
be given to anyone upon request [or for fee] or upon denial of credit.  At
the time the FCRA was passed [roughly 1970], things were simple in the
credit-reporting and information business.  They are not today.

One must wonder what the definition of "credit report" would be in today's
world.  I'm sure that the TRWs of the world would argue that it would be
just that part of a data-subject's record that is pertinent to a credit
decision.  It is unlikely that any data-subject gets the full content
of the record by requesting a credit report, although it is tempting to
believe to the contrary.

I know of no law compelling the credit-reporting industry to go beyond
furnishing the most simplistic form of an individual's record, namely, that
part of the record pertinent to credit matters.  One does not know what the
status of the individual's total record might be.  We're not seeing them, but
it's obvious that they're available for sale.  Might they be subject to
subpoena without knowledge of the data-subject?  One wonders what corporate or
industry policy is on that count.

Of course not everything need be included in a single record.  A company could
maintain separate databases — although perhaps less efficiently — for credit
reporting vs. general information sales.  And modern software can easily
subset a record for printing, but it really doesn't matter from the viewpoint
of the data-subject, who is not seeing everything.
                              Willis Ware, Santa Monica, CA


comp.risks WAIS servers available

Scott Draves <spot@FORTRAN.FOX.CS.CMU.EDU>
Sun, 17 May 92 22:33:54 EDT
The RISKS digest is available via WAIS.  There are two servers, one run by TMC
and one by me.  TMC's is better, but is available only during restricted hours.

WAIS is a distributed full-text search system based on the Z39.50 protocol.
There are on the order of 200 public servers scattered around the world which
provide a straightforward way to search through mailing list and newsgroup
archives, network directories and catalogs, poetry, weather, back issues of the
Communications of the ACM, and a variety of other stuff.

For more information, ftp to think.com and then "cd wais", or see the
comp.infosystems.wais newsgroup.
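
For example (a hypothetical session: the exact client varies with your WAIS
distribution, but the waissearch program from the free WAIS release takes
host, port, and database arguments), a keyword search of the CMU server
described below might look like:

    waissearch -h gourd.srv.cs.cmu.edu -p 6000 -d comp.risks autopilot

which would return a ranked list of matching digest issues.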

Below are the "source" files for the two servers:

(:source
   :version  3
   :ip-address "128.2.206.11"
   :ip-name "gourd.srv.cs.cmu.edu"
   :tcp-port 6000
   :database-name "comp.risks"
   :cost 0.00
   :cost-unit :free
   :maintainer "spot@cs.cmu.edu"
   :description "Server created with WAIS release 8 b4.1 on
      May 9 21:58:25 1992 by spot@gourd.srv.cs.cmu.edu
The files of type mail_digest used in the index were:
   /gourd/usr0/spot/wais-db/comp.risks

This server contains issues 1.00 to (at least) 13.47 of the comp.risks
newsgroup/mailing list:

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

This server runs 24 hours everyday (cf risks-digest.src).
"
)

(:source
   :version  3
   :ip-address "131.239.2.110"
   :ip-name "cmns-sun.think.com"
   :tcp-port 210
   :database-name "RISK"
   :cost 0.00
   :cost-unit :free
   :maintainer "bug-public@think.com"
   :description
"Connection Machine WAIS server.  Operated between 9AM and 9PM EST.

Risk Digest collection from the arpa-net list, but this is so far an unofficial
archive server.  It contains all issues, but is not updated automatically yet.
"
)


Not enough trained computer experts (Marshall, RISKS-13.50)

fc <FBCohen@DOCKMASTER.NCSC.MIL>
Mon, 18 May 92 07:16 EDT
How true - but the root cause of the current software crisis (which is widely
unknown to the user community) is that it's bad for business to talk about the
down side.  Let me give a few examples:

I was writing a monthly column for a rag in the Unix world, and it was
cancelled because (according to the publisher) the advertisers threatened to
pull their ads if the security problems with Unix were published in the rag.
This despite the fact that I included code to fix every problem in the same
article in which I described it.

I was writing a monthly column for another rag in the NetWare world, and the
Novell lawyers told the publication they would sue if my articles were not
stopped!  It seems they were upset that I was pointing out how NetWare could be
abused and how to avoid the problems inherent in the implementation.  There
goes another forum for the public.

Over the last several years, I have applied for positions in over 100
universities, and the universal response is that protection is not of interest
to the university community.  This despite the recent report from the US
National Research Council that calls for increased university research in the
field.

You cannot find a single US university (and only a few outside the US) with
more than 2 computer security experts on the same faculty.  You also cannot
find a university in the US where more than one person was hired with the prior
knowledge that they had an interest in computer security (again, I am talking
about faculty positions).  With the educators woefully ignorant, we can only expect
the students to be equally ignorant.

I have dealt with literally hundreds of companies over the last 10 years in the
area of computer security consulting, and the universal feeling seems to be
that you only invoke security after a disaster forces you to, and then you back
away from it as soon as possible afterwards.  It's like insurance, except that
the board won't force you to get it, and the stockholders are never told that
it's imprudent not to have it.

The media constantly push the idea that any security system can be broken and
that the human end of things saves the day.  This makes the technological end
of protection a negative in most people's minds.  The social implication is
that regular people rarely see any benefit from protection systems.  How come
we never hear on the risks forum about any successes where the computer
security system saves people?  It happens every day you know.  How about a few
nuclear reactor stories where human error was detected and corrected by a
computer and saved us from a meltdown?  How about some airplane stories where
fly-by-wire kept a small plane from (large plane?)  crashing when the pilot was
hurt in a collision with a flock of birds?  How about the people saved every
day by airbags and anti-lock brakes?  I know that the risks forum is intended
to help us see and understand the risks, but many in the media view this forum
as well, and perhaps we should consider the risks of only discussing failures
and ignoring successes.

So, this whole thing was inspired by the British story about BA incidents.  I
have been on a 747-400 flying from DC to Heathrow (it starts in Pittsburgh)
several times, and once we were told, after a very smooth landing on a day
with fairly poor landing conditions, that the computer had made the landing
for us.  I was pleased to know that I was an unwitting part of the great
experiment, but the landing was far smoother than most pilot landings in
similar conditions, so I should feel pretty good about it.

The point is that the reason we don't have enough experts to do QA is that we
don't teach that stuff in schools, we punish those who follow that line (as a
society), and not enough executives lose their children in these incidents to
cause them to care about it.  After all, we can't even detect a bomb on a 747
flying out of the most security-conscious airport in the world (Heathrow -
commercial, that is), so why do we think we can track down minor computer
software bugs that don't even kill hundreds of people?

Well, that's enough space for now - I'll continue my ravings at a later date.

P.S.  How can we expect these computer systems to work so well when the
computer I use to talk to the network doesn't let me see the last line when I
reach the end of a page, doesn't let me backspace past a line break, and
doesn't automatically check for spelling errors before sending my mail out?
And this is a computer operated by the NSA designed for multilevel secure
operation.  It obviously has some major integrity problems.  Until we get
computers past these problems, I doubt if we will be able to design a truly
safe fly-by-wire system to control aircraft.
                                                         FC


Re: Yet more Software-in-the-Air scares [RISKS-13.50]

Pete Mellor <pm@cs.city.ac.uk>
Mon, 18 May 92 14:30:04 BST
Simon Marshall drew our attention to an article on:

> the front page of the UK ``Sunday Telegraph'' May 17, 1992, a so-called
> ``quality newspaper''.

This was rather useful, since the only Sunday paper I normally read is the
``Observer'' (a *real* quality newspaper :-).

I would like to make a few comments on the article itself and on Simon's
comments.

The article states:

<> But leading computer experts are worried that there is no adequate way of
<> testing the enormously complex software routinely used by the aircraft
<> industry.

and Simon comments:

>   Given the number of years these systems have been
> around, it worries me to think that these relatively simple systems fail at
> this frequency, while fly-by-wire, with its increased complexity and the
> increased reliance upon those systems for the safety of the aircraft, is now
> being applied to commercial aircraft.

So are the systems complex or simple?

It depends on which on-board system we are talking about.

Fly-by-wire systems (according to my best information - the probability of an
outsider being allowed to see the source code is slightly less than the
required maximum probability of failure of a critical avionics system, i.e.,
10^-9 :-) are *relatively* simple. They accept an input vector of flight
parameters, multiply it by a matrix whose elements represent the "flight
control laws" in force at that moment, and output a vector of signals to
the hydraulic actuators which move the flight control surfaces. They control
the second-by-second behaviour of the aircraft, and they don't have to
"remember" too much: in the event of a transient failure, it is usually
acceptable to reset them by switching off and on (if you have time! :-).
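
To make the structure concrete, here is a minimal sketch in C (purely
illustrative: the dimensions, state variables, and gain values are invented,
not taken from any real FBW system):

   #define NSTATES  4   /* e.g., pitch rate, roll rate, yaw rate, airspeed */
   #define NOUTPUTS 3   /* e.g., elevator, aileron, rudder commands */

   /* One iteration of the control loop: u = K * x, where the gain
      matrix K encodes the flight control laws in force at this moment. */
   void control_step(const double K[NOUTPUTS][NSTATES],
                     const double x[NSTATES],   /* flight parameters in */
                     double u[NOUTPUTS])        /* actuator signals out */
   {
       int i, j;
       for (i = 0; i < NOUTPUTS; i++) {
           u[i] = 0.0;
           for (j = 0; j < NSTATES; j++)
               u[i] += K[i][j] * x[j];
       }
       /* u[] would then drive the hydraulic actuators that move
          the flight control surfaces. */
   }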

On the other hand they *are* regarded as "critical" from a certification
point of view: they possess modes of failure which can crash the 'plane.

Flight Management Systems, on the other hand, are (by any standards)
horrendously complex. They not only include the function of the traditional
autopilot, but can also guide the aircraft over its entire route, including
performing an automatic landing. To do this, they can in some cases access a
database of airport information and topographical details of the approach
terrain, and can select the cheapest or fastest route.

However, they are *not* regarded as critical: the pilot can override them,
and if you just happen to be a few miles off-course, it's not necessarily
fatal.

>  at least some of the "software errors" were within
> auto-pilot control systems (it may be all, the article is not clear -

The incidents cited in the article all involve sudden unintended manoeuvres.
They *could* be due to the FMS "telling" the FBW to do something stupid, but
might conceivably arise within the FBW. (Where the under/over thrust
conditions are concerned, bear in mind that the FBW must "talk to" the
engine controller, these days almost invariably a Full-Authority Digital
Engine Controller (FADEC), and that the software in the FADEC is a single
point of failure, since there is no diversity of the software in the duplex
channels of the FADEC.)

>  maybe BA does not fly any fly-by-wire aircraft anyway, I don't know).

It operates some A320s. The aircraft in the three incidents described are all
recent Boeings, which don't have quite the same degree of automation at the
FBW level, but still have some.

> The third is that the software had "CAA approval", as if this is meant to
> make us feel any better, and that it had been tested by BA themselves
> (not the CAA),

A few facts about "CAA approval":

To start with, the regulations stipulate quantitative demonstration of
reliability for *systems*, i.e., the manufacturer must convince the
Airworthiness Authorities that an on-board system has the famous maximum
probability of catastrophic failure of 10^-9 per flying hour. (For less
serious failure modes, higher probabilities are allowed.)
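
(To see what 10^-9 per flying hour means in practice, a back-of-envelope
calculation with invented but plausible fleet figures: 1000 aircraft each
flying 3000 hours per year accumulate 3 x 10^6 flying hours per year, so the
requirement permits about 3 x 10^-3 catastrophic failures of that system per
year - roughly one, fleet-wide, every 300 years.)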

For the *software* in those systems, however, NO FIGURE IS PLACED ON
RELIABILITY. Instead, a *process* certification, rather than a *product*
certification, is employed: the manufacturer has to provide the Authority
with documents which show that a "good job" has been done in developing
the software. These include (for software in critical systems) test plans
and reports, details of inspections carried out, summary of achievement
(whatever that is), etc., etc. These procedures and the required documents
are specified in a set of guidelines: ``Software Considerations in Airborne
Systems and Equipment Certification'', referred to as RTCA/DO-178A.

In no case do these guidelines oblige the manufacturer to make available to
the Authority a machine-readable copy of either the source code or the object
code.
The question of the Authority (or anyone else) doing independent verification
and validation (IV&V) therefore simply does not arise with the regulations as
they stand.

> for "100 hours before entering service".  This does not seem particularly
> rigorous to me!

I disagree that this is "not particularly rigorous". It's pathetic! In fact,
I suspect that a proof-reader failed to spot a missing zero or two here.
The A320 systems were run on a ground simulator for a year before the first
test flight, and run for a further year in flight and on simulators before
service.

> The fourth came with the old call for qualifications for those working in
> safety-critical software design; lack of suitably trained people.

What John Cullyer meant here was training in formal mathematical methods:
the use of formal specification languages such as Z or VDM, together with
mathematical proof of correctness as part of verification. He's right, except
that the use of such methods does not guarantee perfect software. In fact,
no method that we know of, and possibly no *conceivable* method, would ever
enable us to claim the incredibly low failure probability required by the
regulations, where software is concerned (which would have been the main
message imparted by Bev Littlewood in his talk to the ACM).

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq., London EC1V 0HB, Tel: +44(0)71-477-8422, JANET: p.mellor@city.ac.uk


Re: yet more software-in-the-air scares

Martyn Thomas <mct@praxis.co.uk>
Mon, 18 May 92 15:47:51 +0100
The Sunday Telegraph report was based on an issue of British Airways'
newsletter FlyWise, which seems to be a monthly safety awareness newsletter
for BA pilots. This issue covered January, and more incidents were caused by
software than by any other cause except ground handling (eg trucks
colliding with parked aircraft, baggage handlers denting the hull).

The UK trade paper, Computer Weekly, is covering the story in this Thursday's
issue. BA have apparently told them that the faults in the 747-100 and -200
were the result of a maintenance upgrade to the FMS, and were not
safety-critical.

So, do these incidents provide any reason for concern? Do they reveal process
failures in software maintenance? Do they reveal failures in recertification?
Will DO-178B help? Should all airlines make their safety-records available for
statistical analysis, so that Bev Littlewood can predict *next* year's
reliability figures? These are some of the questions which probably won't be
found in next week's Sunday Telegraph.


Conference on Computer Assurance REMINDER

Laura Ippolito <ippolito@swe.ncsl.nist.gov>
Mon, 18 May 92 09:19:47 EDT
                       Final Announcement

                           COMPASS '92
         SEVENTH ANNUAL CONFERENCE ON COMPUTER ASSURANCE

    Systems Integrity, Software Safety, and Process Security

                        June 15-18, 1992
                        Gaithersburg, Md.

         National Institute of Standards and Technology
                   Technology Administration
                   U.S. Department of Commerce

              FOR MORE INFORMATION, SEE RISKS-13.45.
