The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 8 Issue 17

Friday 27 January 1989


o ELIZA and Joe Weizenbaum
Bard Bloom
o Savings, Loans, and Easy Money
o Risks of inept management ["Losing Systems"]
John R. Levine
o MIT Athena Kerberos Authentication System available for FTP
John Kohl via Jon Rochlis
o Single-engine planes
Phil Karn
o Multi-engine airplanes
Craig Smilovitz
o Info on RISKS (comp.risks)

ELIZA and Joe Weizenbaum

Bard Bloom <>
Thu, 26 Jan 89 22:48:52 EST
> Or, there's the story about the guy who falls asleep in front of his
> terminal with an ELIZA program running and his boss logs on and thinks he's
> talking to him but is actually talking to the program, and gets pissed off.

This may have actually happened. Joseph Weizenbaum (MIT professor, author of
_Computer Power and Human Reason_) told the anecdote in a class, with himself
as one of the actors.  It went something like this — some of this is
doubtless my own memory inventing things.  The dialogue is partially courtesy
of GNU Emacs' Eliza program, and the rest is made up.

Weizenbaum had recently written ELIZA on one of the MIT AI Lab's
computers.  In those days, computers were rather weak.  The computer in
question had a time-sharing system on it, yes, but it got rather sluggish
when two people were using it at the same time.  Weizenbaum left ELIZA
running one evening and went home.  

That evening (around 4 a.m.), another AI Lab person was trying to get his
program working for a demonstration to his funding agency the next day, and
it wasn't working very well.  He was using the computer Weizenbaum was logged
on to, and decided that he needed the whole thing.  He went to Weizenbaum's
office, hoping that he could persuade Weizenbaum to log off.

When he got there, Weizenbaum was nowhere to be found, and his terminal was
on (and blank).  The AIist thought that Weizenbaum was working from home, and
had slaved his office terminal to his home one.  So, he typed "Joe, please
log off."


"I need the computer for an AI demo tomorrow, Joe"


"Joe, I'm serious.  I've got a demo here tomorrow and I need the computer."


After a few more exchanges like this, the AIist decided that Joe was being
very obnoxious, and called him at home to scream at him.  "Joe!  You *******!
Why are you doing this to me?"

Recall that it was four in the morning, and that Weizenbaum had no idea that
his creation was running amuck in the AI lab.  He quite reasonably replied,
"Why am I doing _what_ to you?"  This sounded so much like what ELIZA had
been saying that it was hard to convince the AIist that it hadn't been
Weizenbaum on the terminal.

Savings, Loans, and Easy Money

Peter Neumann <>
Fri, 27 Jan 1989 10:16:40 PST
Although the computer roles are probably insignificant, the scope of the abuses
in the savings and loan insolvencies (estimates are approaching $100 billion
just in bail-out money) is such that upwards of 20% of the cases are alleged
to involve fraud.  The incentives seem rather simple — set up an apparently
legitimate S&L, make all sorts of loans to friends, let them all default, and
then let the government pick up the pieces for the legitimate investors.  Three
of the nation's largest CPA firms (Deloitte Haskins & Sells, Coopers &
Lybrand, and Touche Ross & Co.), plus smaller firms, have been sued for their
roles in failing to detect fraud.  Another large firm, Arthur Young, proclaimed
Vernon S&L of Dallas clean shortly before federal regulators declared it
insolvent — because 90% of its loans were bad.  Whatever the mixture of
mismanagement, incompetence, fraud, and other factors turns out to be, the
situation seems pervasive.  Why were the auditors out to lunch?

Even if the era of decontrol were ended, it seems that such a widespread
problem could not be aided by better computerization (knowing what we know
about rigging computer systems, it might make fraud even easier!) — except
possibly in providing better on-line data for the auditors that might simplify
their task of reconciling computer records with reality.  Overall, enormous
amounts of money seem to encourage fraud and creative mismanagement.  Computer
systems designed to withstand misuse by one user will no longer suffice.
Separation of duties and the principle of least privilege help a little, but
massive collusions may become the order of the day, in which case checks and
balances — even on the auditors — become critical.  Who checks the checkers?

As far as who pays, I imagine that because of the S&L incorporation rules there
will be no deep pockets other than the taxpayers and S&L customers.  So the
real culprits will probably go untapped.  But recall the advice of Deep Throat:
``Follow the Money.''

Risks of inept management, was "Losing Systems"

John R. Levine <>
Sun, 22 Jan 89 23:16:13 EST
In issue 12, Keane Arase details the story of a botched manufacturing
data-collection package at a large company, which I assume to be Procter and
Gamble.  He reported on staff turnover, bad hiring, insufficient resources,
bad design, and a host of other terrible problems.  He points out that some
of the trouble could be traced to bad management.  It sounds to me like all
of the trouble was due to bad management.  Although large computing projects
are often plagued by management problems, such difficulties are by no means
unique to the computing business.

For example, he points out that his department was made a profit center with
profits measured quarterly even though the system wasn't expected to be
profitable for two years.  Normally under the profit center model, separate
centers are supposed to deal with each other as though they were separate
businesses, i.e. the client department should be making progress payments or
the computer department should have some provision for treating the
progressing project as a growing asset.  Accounting for multi-year projects is
hardly an unknown art; the construction business has been doing it at least
since the time of the Pyramids.
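The percentage-of-completion idea hinted at above (treating the progressing project as a growing asset) can be shown with a toy calculation; all the figures below are invented for illustration, not taken from the case discussed:

```python
# Toy percentage-of-completion accounting: recognize revenue on a
# multi-year project in proportion to the cost incurred so far.
# All numbers here are invented for illustration.
total_contract = 2_000_000   # agreed price of the finished system
total_cost_est = 1_600_000   # estimated total cost to build it

def revenue_to_date(cost_incurred):
    """Revenue the developing department may recognize so far."""
    pct_complete = cost_incurred / total_cost_est
    return total_contract * pct_complete

# After the first year the project has spent half its budget, so half
# the contract value counts as earned -- no need to show a quarterly loss.
print(revenue_to_date(800_000))  # 1000000.0
```

Under this convention the computer department's profit center shows progress each quarter instead of two years of pure cost followed by a windfall.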

Finally, problems of under- or mis-specification aren't unique to the
computer industry either.  In New Haven CT there is (was? it may have been
torn down by now) an extremely badly built pre-fab housing project called
Oriental Gardens.  It had no rain gutters, letting in the rain and snow to
cause all sorts of damage.  Why?  The houses were partially constructed at a
factory, then transported and assembled on-site.  The factory expected the
gutters to be added on-site; the on-site crew expected them to be on the
houses already when they arrived.

The message here is that project management is a real problem, but it isn't
really a technological problem except where traditional project management
techniques fail to handle unique aspects of computer systems.  There is a lot
of management knowledge to be had for those who want it.

MIT Athena Kerberos Authentication System available for FTP

Jon Rochlis <jon@BITSY.MIT.EDU>
Thu, 26 Jan 89 22:18:39 EST
What is Kerberos and why is it needed?

In an open network computing environment a workstation cannot be trusted to
identify its users correctly to network services.  Software on the workstations
may not be trustworthy, so being a privileged user on a workstation is not a
meaningful test of authenticity.  Source network addresses are so easily forged
that they are not meaningful either.  Passwords sent unencrypted on the network
are vulnerable to wiretappers.  Kerberos provides an alternative approach
whereby a trusted third-party encryption-based authentication service is used
to verify users' identities.  Much more information is available with the
documentation (see below).
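The trusted-third-party idea can be sketched in miniature. The toy below is NOT the Kerberos protocol (which uses DES-encrypted tickets, timestamps, and ticket-granting services); it only illustrates the core notion that a server sharing secrets with every party can vouch for a user to a service without any password crossing the network. The names, and the use of an HMAC in place of encryption, are illustrative simplifications:

```python
# Toy trusted-third-party authentication -- NOT real Kerberos.
# HMAC stands in for the encryption a real system would use.
import hashlib
import hmac
import os

# The authentication server shares a secret key with every user and
# every service in advance (registered out of band).
keys = {"alice": os.urandom(16), "printer": os.urandom(16)}

def issue_ticket(user, service):
    """The trusted server vouches for `user` to `service`: it tags the
    user's name and a fresh session value with the service's key, which
    only the server and the service itself know."""
    session = os.urandom(16)
    tag = hmac.new(keys[service], user.encode() + session,
                   hashlib.sha256).digest()
    return user, session, tag

def service_verify(service, user, session, tag):
    """The service recomputes the tag; a wiretapper who saw the ticket
    never learned anyone's long-term key or password."""
    expected = hmac.new(keys[service], user.encode() + session,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

user, session, tag = issue_ticket("alice", "printer")
print(service_verify("printer", user, session, tag))  # True
```

A forged or tampered ticket fails verification, which is exactly the property that forged source network addresses lack.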

How to get it:

The first public release of the Kerberos Authentication System is ready
for retrieval.  Initial distribution will be by anonymous
FTP; eventually 9-track tapes will be available.

To retrieve the distribution, ftp to ATHENA-DIST.MIT.EDU (,
login as anonymous (password whatever you like, usually your
username@host), then cd to pub/kerberos.

Retrieve README.ftp; it has directions on how to get the rest of the
distribution.

The distribution is split into compressed tar files (xxx.Z.aa, xxx.Z.ab, ...).

If you would like to retrieve documents separately, you can get them
from pub/kerberos/doc (documents) or pub/kerberos/man (manual pages).
If you prefer hardcopy of the documentation, send your address and request
to "".

If you would like to be put on the Kerberos e-mail list
(""), send your request to 

I would like to thank the following people for their assistance in
getting Kerberos in shape for release:

  Andrew Borthwick-Leslie,  Bill Bryant,  Doug Church,  Rob French,  Dan Geer, 
  Andrew Greene,  Ken Raeburn,  Jon Rochlis,  Mike Shanzer,  Bill Sommerfeld,
  Jennifer Steiner,  Win Treese,  Stan Zanarotti.

FYI, the copyright notice:

  Copyright (C) 1989 by the Massachusetts Institute of Technology

   Export of this software from the United States of America is assumed
   to require a specific license from the United States Government.
   It is the responsibility of any person or organization contemplating
   export to obtain such a license before exporting.

WITHIN THAT CONSTRAINT, permission to use, copy, modify, and distribute this
software and its documentation for any purpose and without fee is hereby
granted, provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in supporting
documentation, and that the name of M.I.T. not be used in advertising or
publicity pertaining to distribution of the software without specific, written
prior permission.  M.I.T. makes no representations about the suitability of
this software for any purpose.  It is provided "as is" without express or
implied warranty.
                     John Kohl, MIT Project Athena/Kerberos Development Team

Single-engine planes (Re: RISKS-8.15)

Phil Karn <>
Thu, 26 Jan 89 02:46:29 EST
My friend, Brian Lloyd, and his dad, former California congressman Jim
Lloyd, flew their single-engine Piper Comanche across the Atlantic from
Gander to Shannon to visit the Paris Air Show a few years ago. They firmly
believe that small planes with single engines are more reliable than small
twin-engine planes, and they decided to demonstrate it.

Halfway across the pond, they're making one of their routine hourly position
reports with a passing British Air 747. After the formalities, the following
conversation ensues:

BA pilot: What're ya flying down there, 448 Poppa?

Brian: A Piper Comanche.

BA pilot: That's a TWIN Comanche, right?

Brian: Nope, single.

(long pause)

BA pilot: You're mad, you're absolutely mad, you know that! One engine??
I've got four!

Brian's dad: Well, that's just three more things to go wrong!

BA pilot: You've got me there, I've had to shut one down already!

As you can see, they lived to tell the tale...


Multi-engine airplanes

Craig Smilovitz <smiley@Think.COM>
Fri, 27 Jan 89 09:39:26 est
    In the discussion about multi-engine aircraft failures, we've seen a
lot of mathematical probability exercises that neglect to examine a basic
assumption of probability theory.  That assumption is the
*independence* of the events in question.

    Taking just the two-engine example, everyone has been talking about
the chance of a single engine failing as p.  Thus the chance of some engine
failing on a two-engine plane is approximately 2p (for small p, as has been
pointed out).  But then it has been assumed that the chance of the second
engine failing is p.  That would be true if the engine failures were
independent.  But this is not the case.  A two-engine plane flying on one
engine is applying more stress and wear to that engine than normal (since it
is probably running at close to full design capacity).  Thus the chance of
this remaining engine failing is more than p.  How much more answers the
question of whether a two- or a three-engine plane is safer.  The second p
is a function of all sorts of mechanical factors that would only be known
through a careful study of the design of an individual airplane type, and
is probably different for every single plane marketed.  (The airframe and
other critical systems are similarly more likely to fail on a plane that is
running without its full complement of engines.)
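   The effect of dropping the independence assumption can be made concrete with a small calculation. The numbers p and k below are invented for illustration; they are not failure rates for any real engine or aircraft:

```python
# Illustrative numbers only -- p and k are invented, not real
# engine-failure rates.
p = 0.001   # assumed chance that a given engine fails during a flight
k = 5.0     # assumed factor by which extra stress raises the surviving
            # engine's failure chance once the other has quit

# If the two failures were independent events:
p_both_independent = p * p           # P(A fails) * P(B fails)

# With the dependence described above, the conditional probability that
# the second engine fails, given that the first already has, is k*p:
p_both_dependent = p * (k * p)       # P(A fails) * P(B fails | A failed)

print(p_both_independent)   # about 1e-06
print(p_both_dependent)     # about 5e-06: k times worse
```

With these made-up numbers the naive independent estimate understates the chance of losing both engines by the full factor k, which is the point of the argument.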
   Engine failures are also not independent in another way.  In a very
recent crash, the pilot of a two-engine plane got an indicator that one
engine was on fire.  He turned off an engine.  Due to an unknown cause
(pilot error, miswiring?) the wrong engine was turned off.  On this flight
two engines 'failed' even though one was in working order.  From an engine
designer's standpoint, you might say that only one engine failed, but the
plane still crashed.  It is even conceivable that a three-engine plane,
after this occurrence, could get enough thrust from its remaining engine
to allow a restart of the engine turned off in error.
   But the survivability of a three-engine plane in this case is not my
point.  The point is that engine failures are not necessarily independent
events when talking about engines on a multiple-engine plane.
