The RISKS Digest
Volume 18 Issue 27

Tuesday, 23rd July 1996

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Problems with Olympic Information System
Edupage
Re: *Primary Colors* and Joe Klein
Joel Garreau
Ariane 5 failure: specification and design flaws
Pat Lincoln
Remote software changes are here
David Cassel
*The Logic of Failure*, Dietrich Doerner
PGN
Addendum to the complexity of everyday life
Don Norman
Re: The increasing complexity of everyday life
John Pescatore
Re: Western power outages
PGN
Jonathan Corbet
Tracy Pettit
Re: 56-Bit Encryption Is Vulnerable
Barton C. Massey
Steven Bellovin
Centre for Software Reliability: Design for Protecting the User
Pete Mellor
Info on RISKS (comp.risks)

Problems with Olympic Information System (Edupage, 23 July 1996)

Edupage Editors <educom@elanor.oit.unc.edu>
Tue, 23 Jul 1996 17:04:28 -0400 (EDT)
The "Info'96" IBM computer system designed to deliver instantaneous results
of Olympic competitions to the worldwide press is working for journalists in
Atlanta but not for the journalists worldwide who are supposed to be getting
information from the World Press Feed.  Some journalists are angrily
referring to the "Info'96" system as "Info'97."  An IBM spokesman said that
"we expect people to judge us from our performance over the long haul of the
games, instead of the first two days."  Results are available quickly at
the web site maintained by IBM, < http://www.atlanta.olympic.org >.  (*Atlanta
Journal-Constitution*, Atlanta Games, p25) [presumably 22 July 1996]

[See also a fine article by Jerry Schwartz in *The New York Times*, 22 July
1996, C1 in the National edition.  This situation was attributed to
``start-up problems.''  Jerry's article noted that ``Olympic technology
officials were organizing a manual results system, part of which the ancient
Greeks might have appreciated.  Results are to be transmitted by facsimile
machines from outlying venues to a central office and distributed by
runners.''  The article also noted the 12-minute blackout at the Georgia
Dome during Saturday night's Dream Team appearance — blamed on a technician
who pulled the wrong switch.  Transportation problems are also quite severe,
and cellular telephone systems were seriously overloaded at the opening
ceremonies.  I presume everything will be ironed out nicely just in time for
the final ceremonies.  PGN]


Re: *Primary Colors* and Joe Klein (PGN, RISKS-18.26)

Joel Garreau <garreau@well.com>
Sun, 21 Jul 1996 06:48:04 -0700 (PDT)
PGN makes excellent points about the difficulty of living a lie in his
report on Joe Klein being unmasked as the author of "Primary Colors."  But
as the editor of *The Washington Post* team that had a lot of fun and a lot
of pain reporting the "Primary Colors" story, allow me to cough a little
dryly about the positive spin you put on the role of computers in the
eventual success of our efforts.

For openers, the lesson I drew from my experience was that I would *never*
trust a computer text analysis again.  We ran a massive effort of our own,
independent of Professor Foster and *New York* magazine, and ours turned up
results that at the time seemed fascinating, but in retrospect were
ludicrous.

Even Foster didn't trust his results enough to bet the ranch on them.  As
recently as the day we finally broke the story, he was saying he thought it
was Klein plus somebody else, and was still berating *New York* magazine for
editing into his copy the flat statement that Klein was the author.  Said
flat statement was inserted by an editor with no special computer
experience.  Klein, however, first gained notice as a political columnist
for the very same *New York* magazine.  I suspect, therefore, that human
intuition if not specific knowledge had more to do with that piece than the
computer did.

We at *The Post* *did* get a frightening amount of financial information on
Klein and his wife by computer, including the cost of his house, the amount
of his mortgage, his address, his previous address, everything there is to
know about his cars, and so forth.  And we did it in a startlingly short
period of time.  It's amazing what you can do when you have a person's
social security number and date of birth, and equally sobering how easy it
is to get that information.  Only our sense of journalistic propriety
prevented us from pursuing and using further information that was readily
available.  But again, the information so gathered ended up being largely
tangential to the final report.

I find it marvelous that what finally broke the case was good old-fashioned,
if imaginative, gumshoe reporting.  David Streitfeld, a Washington Post
reporter with eclectic literary interests, receives all sorts of snail-mail
catalogues from tiny second-hand bookstores.  He saw offered for sale a copy
of the manuscript...and the rest you can read in your newspapers.  The
handwriting analyst was an expert human.  No computers were significantly
involved.

Also, the reason Klein is in hot water today is that back when the *New
York* article ran, we had our junk-yard dog, my boss, David Von Drehle, put
him up against the wall by reminding him that credibility is the only asset
a journalist has.  Von Drehle then asked him to swear on his journalistic
credibility that he was not the author of "Primary Colors."  That's when he
most memorably lied, as Klein himself acknowledged at his press conference.

In short, we put an extraordinary amount of computer effort into this story,
including a password-protected spreadsheet to keep track of all our reporting.
But
the cyberheroics ended up at best a sideshow if not a distraction, at least
in our experience.

It finally was cracked and developed by old-fashioned means.

Joel Garreau

   [And in subsequent elections, Joe may now be saddled with Primary Collars.
   Somehow, I am reminded of a quote from the cast party after the final
   episode of an early TV serial, Peyton Place, in which one of the actors
   who had been on the show longest was asked,

     ``To what do you owe your success in acting?''

   The answer was this:

     ``Honesty.  Once you've learned how to fake that, you've got it made.''

   PGN]


Ariane 5 failure: specification and design flaws (RISKS-18.24)

Pat Lincoln <lincoln@csl.sri.com>
Tue, 23 Jul 1996 10:55:22 -0700 (PDT)
A recent press release contained a good quote about the cause of the
Ariane 5 failure.  The two key sentences were these:

> The failure of Ariane 501 was caused by the complete loss of guidance
> and attitude information 37 seconds after start of the main engine
> ignition sequence (30 seconds after lift-off). This loss of
> information was due to specification and design errors in the software
> of the inertial reference system.

The full text is at
http://www.esrin.esa.it/htdocs/tidc/Press/Press96/press33.html
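
The inquiry board's full report traces that loss to an unhandled
operand-error exception, raised when a 64-bit floating-point value related
to horizontal velocity was converted to a 16-bit signed integer — Ariane 4
code reused on Ariane 5's steeper trajectory, which produced values outside
the old range assumptions.  A minimal sketch of that failure mode (in Python
purely for illustration; the flight software was written in Ada):

    # Illustrative only: the point is an unchecked narrowing conversion.
    INT16_MIN, INT16_MAX = -(2**15), 2**15 - 1

    def to_int16(x: float) -> int:
        """Narrow a float to a 16-bit signed integer, raising on overflow,
        much as the unprotected Ada conversion did aboard flight 501."""
        n = int(x)
        if not INT16_MIN <= n <= INT16_MAX:
            raise OverflowError(f"value {x} does not fit in 16 bits")
        return n

    to_int16(20000.0)   # in range: fine on an Ariane 4-like trajectory
    to_int16(64000.0)   # raises -- the unhandled exception that shut down
                        # both inertial reference units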


Remote software changes are here

David Cassel <destiny@wco.com>
Fri, 19 Jul 1996 20:46:29 -0700
Tonight when I logged onto AOL, I was told something like "New Features!
America Online is being updated."  They then downloaded a software change.

What's disturbing is that they didn't give me a chance to opt out of the upgrade
first.  (The disclaimer wasn't readable.  All but a corner of the text block
was hidden behind the "Welcome" screen as the bar indicating "download in
progress" snaked its way to 100%...at which time the disclaimer vanished.)


*The Logic of Failure*, Dietrich Doerner

"Peter G. Neumann" <neumann@csl.sri.com>
Fri, 19 Jul 96 14:41:37 PDT
  Dietrich Doerner
  The Logic of Failure:
    Why things go wrong and what we can do to make them right
  Metropolitan Books (Henry Holt), New York, 1996

      [Previously published in German as: Dietrich Dörner,
      *Die Logik des Misslingens*, Rowohlt, 1989.]

This is a book that appeals to me very much because of its system-oriented
viewpoints.  ``Faced with problems that exceed our grasp, we pile small
error upon small error to arrive at spectacularly wrong conclusions.  We too
often ignore the big picture and seek refuge in what we know how to do --
fiddling while Rome burns.''  The problems under consideration are largely
not computer problems, but the lessons are all generally relevant to RISKS
readers.

Of some importance here is the logic of analysis, which in this case
includes some creative simulation studies of very diverse failures — among
which are some that are familiar to RISKS readers.  The results are quite
far reaching.  The book provides a very interesting quasimathematical
approach.  The author is a distinguished German researcher, professor of
psychology at the University of Bamberg, and winner of the Leibniz prize.


Addendum to the complexity of everyday life (RISKS-18.26)

Don Norman <dnorman@apple.com>
Sun, 21 Jul 1996 11:14:25 -0700
As an addendum to my original posting, I'd like to recommend an excellent,
just-published book that describes many aspects of the ever-increasing
complexity of everyday life:

        Tenner, E. (1996). Why things bite back: Technology and the revenge
        of unintended consequences.  New York: Alfred A. Knopf
        (www.randomhouse.com/).

The main emphasis is on the unintended side effects of the human introduction
of items alien to a culture or environment — this includes new
technologies, but also natural things such as the eucalyptus tree (imported
into Southern California) or kudzu (introduced into the southeastern United
States). Or the computer, supposedly an enhancement of productivity, but
instead a time sink. Or ...

Table of contents:
      1. Ever since Frankenstein
      2. Medicine: Conquest of the catastrophic
      3. Medicine: Revenge of the chronic
      4. Environmental disasters: Natural and Human-made
      5. Promoting Pests
      6. Acclimatizing pests: Animal
      7. Acclimatizing pests: Vegetable
      8. The computerized office: The revenge of the body
      9. The computerized office: Productivity puzzles
    10. Sport: The risks of intensification
    11. Sport: The paradoxes of improvement
    12. Another look back, and a look ahead.

Donald A. Norman, VP Apple Research, Apple Computer, Inc MS 301-4D, 1 Infinite
Loop, Cupertino, CA 95014 +1 408 862-5515 http://www.atg.apple.com/Norman/


Re: The increasing complexity of everyday life (RISKS-18.26)

John Pescatore <johnp@tis.com>
Tue, 23 Jul 1996 08:46:27 -0400
I think increasing complexity is an inevitable result of our nature as
tool builders and users. Most animals (of the non-domesticated type) have very
non-complex lives: find food, eat, find food, eat, sleep, procreate - kinda
like folks who retire to swinging retirement communities in Florida.

As humans developed tools, they found all kinds of things to do with the
tools: cook food, build shelter, develop polyester to construct leisure
suits to support the pursuit of procreation, etc. We could have simply used
the tools to increase sleep time, much the way computers were supposed to
increase our leisure time, but our nature seems always to lead to the building
of things (or as George Carlin calls it, "stuff") which then demands the
maintenance of things. This feeds the complexity upward spiral, since the
old things never seem to go away and constantly interact with the new things
in odd ways. Witness voice mail.

From a RISKS perspective, I think increased communications paths mitigate
many risks. In my experience on the system-engineering side of software
development, most bugs occurred at interfaces, either between systems or
subsystems, or between people or organizations. While one approach might be
to eliminate interfaces, the end result is a lot more work inside each
element. My life would be less complicated without a telephone, but I would
spend a lot of time calculating what I could find out in one phone call. I'm
not sure which scenario is more complex or more error-prone.

Putting big pipes between elements and maximizing interconnections can
certainly lead to unpredictable results, but we can rarely predict the
future anyway, so outside of relatively small systems unpredictability is
not always bad. I think the United States melting pot model was an example
of a highly interconnected system that led to many unpredictable results -
what war simulation would have predicted a recovery from Pearl Harbor? I
fear that as American society swings back towards a less interconnected set
of systems/cultures, many risks and interface errors will emerge with
serious consequences. Similarly, any business that tried to reduce, rather
than increase, its level of "connectiveness" would be a very risky investment.

John Pescatore, Trusted Information Systems, 3060 Washington Road
Glenwood, MD  21738  301-854-5710 johnp@tis.com 301-854-5363 (fax)


Re: Western power outages (RISKS-18.25)

"Peter G. Neumann" <neumann@csl.sri.com>
Sun, 21 Jul 96 21:07:05 PDT
In RISKS-18.25, I noted the Western power outages of 2-3 July 1996, and
Jerry Saltzer commented on the evident confusion among the differing
reports of what might have happened.

It took until 20 July 1996 — 18 days later — for the cause to be
identified officially: an Idaho transmission line that short-circuited when
electricity jumped to a tree that had grown too close.  The tree, which has
since been removed, caused a flashover in an area about 100 miles east of
the Kinport substation in southeastern Idaho.  The line carried 345
kilovolts.  [Source: Associated Press item in the *San Francisco Sunday
Examiner and Chronicle*, 21 July 1996, p.A-8.]

  [I did not hear anyone say, ``Of course, we've fixed everything and it
  will never happen again.'' (But I thought I heard that in the 1960s.)]


Re: Western power outages (RISKS-18.25)

Jonathan Corbet <corbet@stout.atd.ucar.edu>
Fri, 12 Jul 1996 09:02:36 -0600
Just a quick pointer: for those of you interested in how power outages like
the one we experienced could happen, I highly recommend getting and reading
a copy of "Brittle Power" by Amory Lovins and Hunter Lovins.  I found it to
be a high-quality discussion of a certain class of technology-related
risks — the failure modes of our energy distribution systems.

Jonathan Corbet, Nat'l Center for Atmospheric Research, Atmospheric Technology
Division   http://www.atd.ucar.edu/rdp/jmc.html  corbet@stout.atd.ucar.edu


Re: Western power outages (RISKS-18.25)

tracy pettit <tnpetti@nppdnet.com>
Fri, 19 Jul 96 08:32:19 CDT
The United States has two major power grids, the Eastern and the Western,
split approximately along the Nebraska-Colorado/Wyoming border.  The Western
grid is complicated by the population
pattern of the western U.S.  The grid there tends towards a large "donut".
The two grids are tied together by a few relatively small DC ties.

This is not the paltry amount of electricity flowing through the wires in
your walls, or even down your neighborhood alleys.  This is the massive
amount of POWER used by entire sections of the U.S. each second.  The
physics and engineering involved cover multiple college courses.

The power flows according to the laws of physics, not according to who is
selling how much to whom.  For any given load level, the grid systems have
many bottlenecks. Power companies go to great lengths to constantly balance
generation to minimize these problems.  Take a couple of lines out of the
grid (an overload, or a fallen tree, a local storm, or just plain equipment
failure), and if conditions are right, as the power attempts to get to the
load, remaining lines become overloaded, protection systems open circuit
breakers before the equipment can "burn down", which either causes outages
(relieving the load), or causes more overloads as the power instantly takes
other routes through the grid. As the outages mount, you have generators
creating massive amounts of power with no place for it to go. The generators
own protection systems knock them off line before they damage themselves and
the humans around them.  Enough generators trip off, and now you have load
not being served, so it attempts to flow from the remaining generators,
overloading more lines, etc.  (And you thought all those dominoes were
impressive.)  This energy is flowing at a significant portion of the speed
of light.  These "system disturbances" can occur in seconds, crossing the
boundaries of all interconnected utilities in 2 blinks of an eye.  A unit
trip in one part of the grid can cause problems in another part, without
affecting the intervening portion.
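
A toy model gives the flavor of that domino effect (this is only a sketch of
my own: real grids redistribute flow according to impedance and AC power-flow
physics, not evenly across the surviving lines):

    # Toy cascade: lines share a load; any line whose share exceeds its
    # capacity trips, and the load re-routes over the survivors.
    def cascade(capacities, total_load):
        live = list(capacities)
        while live:
            share = total_load / len(live)
            survivors = [cap for cap in live if cap >= share]
            if len(survivors) == len(live):
                return live        # grid has restabilized
            live = survivors       # breakers opened; load re-routes
        return live                # empty list: total blackout

    print(cascade([40, 35, 30, 25], 100))  # 25 per line: all four survive
    print(cascade([40, 35, 30, 25], 120))  # 30 per line trips the weakest,
                                           # and the cascade empties the grid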

Now try and do a post-mortem on this.  You have hundreds of pieces of
electric power system equipment changing state, logged by multiple (several
hundred ?) computer systems at numerous utilities all using separate clocks
(you need to figure the sequence timed in fractions of a second).

Yes, utilities talk to each other, but even our computers can't talk that
fast or accurately.  Reliability costs money, and that has to come out of
the electric rates.  Nobody is willing to pay for us to have the constant,
instant type of communications necessary to be able to determine the cause
of a large outage such as this.  It also costs money to get all this
information and people from across the U.S. together to figure out what
happened. Is it worth it?  The initial reports are most likely each utility
trying to make a determination looking at it from their end of the pipe.

Utility rates have always been partly a balance of reliability versus
economics. Now throw the Federal Energy Regulatory Commission (FERC) and
their brand new rules 888 and 889 into the mix.  This is driven entirely by
an idea to lower rates by increasing competition, with very little
consideration for the cost of reliability.  These rules are very lengthy and
complicated, but one of the many requirements is that our Transmission
System Operators and our Energy Marketers (who buy and sell bulk power,
determining generation levels) must not have any contact with each other
except through computer bulletin boards called the Open Access Same-Time
Information System (OASIS).  They cannot even see each other at the coffee
machine.  Any contact between these people has to be reported on the OASIS
(this is not interpretation; it is in the rules).  In an area that is basic
to our reliability operations, we can no longer talk to ourselves, much less
other utilities.  Also, we have to allow anyone with power to market (who
knows where they got it) equal access to our transmission systems.  The
transmission paths (even though you cannot herd electrons) are bought and
sold via the OASIS.  This is all driven by Independent Power Producers and
intermediate Power Brokers with the idea that more competition will lower
rates.  You may have read about "open access" in the newspapers.  To keep
the transmission system owners from giving preference to themselves or
anyone else (equated to "insider trading"), all information about our
transmission system capabilities has to be posted on the OASIS, including
any engineering studies used to determine those capabilities.  All using
``standard'' Internet methods with Internet connections.  Yes, the word
*Internet* is used in the rules.  These transmission paths do not go cheap.
We are talking about millions of dollars of transactions a day being done
over the Internet.

Volumes have been written and discussed in Internet Forums about the
possible repercussions to the electric power industry.  FERC is not to be
swayed.  They say they did it with the gas industry and the
telecommunications industry.  Gas can be stored and controlled with valves.
 Telecommunciations is point to point.  Electric power is instantaneous, and
"pooled" into the grids.  They just don't seem to get it.  In some places
rates will drop.  In others where they are already low, they will rise.
Everywhere, as utilities cut corners to "compete", reliability will go down.

It was asked whether our energy infrastructure is vulnerable to attack.  Anyone
with a buck-three-ninety-eight to buy a power broker's license and a copy of
Netscape can surf the net and get access to the "OASIS" sites.  With a
college minor in Electrical Engineering you can deduce the major power
bottlenecks in the power grids. Most of the transmission system lies in very
remote or rural areas.  [...]

Anybody see the 17 July 1996 issue of *USA TODAY*?  Page 2 has an article
talking about how the President of the U.S. is concerned about the security
of the infrastructure, including the Internet and the power grids.  It could
get a whole lot worse.

It has been said that my view is alarmist.  I've read the rules (I have to
write programs to help my company conform), and I have 20 years' experience
with electric power companies.  Please, somebody prove to me I'm wrong.

Tracy Pettit


Re: 56-Bit Encryption Is Vulnerable (Peterson, RISKS-18.26)

Barton C. Massey <bart@time.cirl.uoregon.edu>
19 Jul 1996 19:18:04 GMT
Several articles in RISKS-18.26 discussed the conclusion of a blue-ribbon
panel that DES's 56b key is vulnerable to brute-force attack.  This issue
is, to me, totally surreal; I am frightened by the way US policy decisions
are being made.

As I understand it, the panel's motive in making the argument that DES is
vulnerable to brute-force attack is *not* to encourage people to switch to
better ciphers.  Instead, it is to argue that DES should be exportable,
since NSA can easily and cheaply break it!

IMHO two key facts ignored by this argument are:

1) Everybody in the world already has access to DES.  Exporting it will give
no one any technology access they don't already have.

2) Two-key 3DES, with its near-112b effective keylength, is *way* outside
any brute-force attack I have ever heard of.  Anybody with unbundled DES
software or with 3 pieces of DES hardware can trivially use it to do 3DES.
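
As a back-of-the-envelope check on point 2 (my arithmetic and an assumed
machine speed, not the panel's figures), compare average search times at a
hypothetical 10^12 keys per second, roughly the scale discussed for
dedicated DES-cracking hardware:

    # An average brute-force search tests half the keyspace before success.
    KEYS_PER_SECOND = 10**12            # assumed cracker speed
    SECONDS_PER_YEAR = 365 * 24 * 3600

    for name, bits in [("DES, 56b", 56), ("two-key 3DES, 112b", 112)]:
        years = 2 ** (bits - 1) / KEYS_PER_SECOND / SECONDS_PER_YEAR
        print(f"{name}: about {years:.1e} years on average")

    # DES falls in about ten hours; two-key 3DES comes out near 8e13
    # years -- *way* outside brute force, as claimed above.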

NSA seems to fear that bundling DES in popular American software products
will encourage routine use of good cryptography both inside and outside the
US; this makes traffic analysis much harder for them, and thus requires them
to expend much more effort on decryption.  This is a legitimate concern, and
deserves analysis in its own right — it's much more a moral, social, and
political question than a technical one.

To phrase the debate in terms of details of the cost of specialized
brute-force DES attack hardware is, in my opinion, absurd.  As near as I can
tell, because of facts 1 and 2 above, such hardware is irrelevant to the
*real* argument, whether it costs $100M or $1.00.

Bart Massey  bart@cirl.uoregon.edu


Re: 56-Bit Encryption Is Vulnerable (Peterson, RISKS-18.26)

Steven Bellovin <smb@research.att.com>
Sat, 20 Jul 1996 04:06:31 -0400
Padgett Peterson wrote:

     Actually, the hard part is testing for success - of course if
     you have known plaintext as most cryptographers always
     assume...(can think of several ways to avoid that).

It's pretty trivial, in fact; one can do probable plaintext attacks.  David
Wagner and I wrote a paper a couple of years ago on a programmable plaintext
recognizer, designed to fit onboard a Wiener chip machine
(ftp://ftp.research.att.com/dist/smb/recog.ps).  All it demands as input is
statistical samples from the same distribution — it worked just fine on
both English text and executable files.
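
For readers who want the flavor of the idea, here is a software sketch of
statistical plaintext recognition (not the hardware design in the paper; the
training text and test strings below are invented for illustration):

    from collections import Counter
    import os

    def profile(data):
        """Normalized byte-frequency histogram over all 256 byte values."""
        counts = Counter(data)
        return [counts.get(b, 0) / len(data) for b in range(256)]

    def distance(candidate, reference):
        """L1 distance between profiles; small means 'same distribution'."""
        return sum(abs(c - r) for c, r in zip(profile(candidate), reference))

    # Train on any sample from the target distribution (English here; the
    # same trick works for executables, as the paper reports).
    english = profile(b"the quick brown fox jumps over the lazy dog " * 20)

    plausible = b"meet me at the usual place at noon"  # right key's output
    junk = os.urandom(len(plausible))                  # wrong key's output
    print(distance(plausible, english) < distance(junk, english))  # True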

I also have a new paper — not yet quite available, but it will be in
ftp://ftp.research.att.com/dist/smb/probtxt.ps in a week or two, I think --
presenting an analysis of the probable plaintext available to the attackers
of the IPSEC protocols.  Even in a single packet, plenty of information is
available, it turns out; if the attacker can use traffic analysis to
identify two packets from the stream and has a suitable cracking chip (say,
two Wiener engines on a single chip, with their plaintext outputs fed to the
comparator under a programmed mask), the problem is trivial.


Centre for Software Reliability: Design for Protecting the User

Pete Mellor <pm@csr.city.ac.uk>
Tue, 23 Jul 96 11:18:43 BST
                 CSR, Centre for Software Reliability
                     THIRTEENTH ANNUAL WORKSHOP
                   DESIGN FOR PROTECTING THE USER
                 The Grand Hotel, Burgenstock, Switzerland
                         11th-13th September, 1996.

This is a brief summary of the programme. The complete version, including
full registration details and electronic booking form, is available on the
WWW at URL:-

http://www.csr.ncl.ac.uk/clubs/burgenstock.html

CSR home pages can be found at http://www.csr.city.ac.uk:8080/
and http://www.csr.ncl.ac.uk/

Who should attend?

The workshop will deal with a number of topics that are regularly aired on
the RISKS forum, and should be of great interest to all readers of, and
contributors to, RISKS.

It is intended for researchers, requirements owners and system designers who
are concerned with issues of protecting people from the consequences of
faulty and unsuitable computer and information systems.

Workshop theme

In all the talk about making the roads of information safe and secure, many
wider social issues are ignored in the focus on technical solutions to
technical problems (secure protocols, trustworthy authentication, encryption
of confidential data, and so on). Examples include people who have had their
creditworthiness destroyed, or been made bankrupt or rendered homeless, by
misuse or misinterpretation of data; many computer systems cannot adapt
to human failings and/or have no mechanisms for allowing human attempts to
correct inappropriate actions or inaccurate data. In order to deal with such
problems, recent European legislation has decreed that data can only be used
for the purposes for which it was collected. This is clearly in the data
subject's interests, but how can the subject be reassured that it is being
enforced?

These examples can all be seen as design issues. Can we anticipate bad
consequences in the human system which arise from the computer performing
according to its specification rather than according to what is intended?
And if we could, how would this affect the design process?

There are three components to the workshop. Firstly, a couple of invited
papers will set the scene for discussing how social and ethical issues can
be translated into design. Secondly, submitted papers have been chosen to
reflect how some of the process and design problems can be addressed by
system designers. Plenty of time has been allowed for discussion of the
papers. Finally, a couple of debates will be arranged to give participants a
chance to express their views on the extent to which social concerns demand
trade-offs against efficiency (both process and product efficiency) and on
how the designer strikes a balance between ethical considerations and the
achievement of organisational objectives.

Invited Speakers are:
John Nicholls, University of Oxford: 'Design for protecting the user'.
M. Cavanagh: 'Ethics and system design'.

Other papers are by leading researchers from Europe, USA, and Australia.
They are grouped into five Sessions:

1: The Requirements Process
2: Regulatory Issues
3: Safety-critical Issues
4: Social Issues
5: Privacy Issues

REGISTRATION AND INFORMATION

For further information about the programme, the delightful venue,
and how to register, please see the WWW pages given above, or contact:

Mrs Carol Barrett
Centre Manager
Centre for Software Reliability
The City University
Northampton Square
London
EC1V 0HB

Tel: +44 171 477 8421
Fax: +44 171 477 8585
e-mail: c.barrett@csr.city.ac.uk

The workshop is residential, and the full workshop registration package is
UK Pounds 895.  A specially reduced registration of UK Pounds 825 is
available to academics.
