The RISKS Digest
Volume 5 Issue 55

Thursday, 5th November 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Phone prefix change cuts BBN off from world
David Kovar
A simple application of Murphy's Law
Geoff Lane
Wrongful Accusations; Weather
Willis Ware
Weather and expecting the unexpected
Edmondson
UNIX setuid nasty — watch your pathnames
Stephen Russell
Penetrations of Commercial Systems
TMP Lee
PGN
Re: Unix password encryption, again?
Dan Hoey
Software Testing
Danny Padwa
Risks of using mailing lists
Dave Horsfall
Info on RISKS (comp.risks)

Phone prefix change cuts BBN off from world

<dkovar@VAX.BBN.COM>
Thu, 05 Nov 87 15:46:11 -0500
  On the first of November, BBN in Cambridge MA acquired a new prefix to
replace its three old prefixes. Any calls to the old numbers were supposed
to get a message informing the caller of the new number. Standard stuff.
Well, perhaps not quite. The following messages were culled from the
corporate bboard. Corporate and personal risks abound. "Has BBN gone out of
business?" "Oh, you gave us a false work number, that's grounds for a lawsuit."
"I've been trying to reach you all day to inform you that ....". 

-David Kovar

  I have been having problems with people reaching me at my new 873 number.
  It turns out that this seems not to be a fault of the telephone company or
  our PBX, but of a not-up-to-date database of phone exchanges in the caller's
  PBX. The 873 exchange is new to Cambridge! An interesting distributed
  database problem...
  ----------
  As noted in an earlier bboard posting, BBN's new number has to be distributed
  to a lot of telephone switches.  Errors in the distribution can be fixed, but
  only if the BBN folks in charge of voice communications get involved.  So, if
  you learn that anyone is having trouble calling BBN, it would be helpful if 
  you would report this to Curt D'Aguanno (ext. 3845, email "cdaguanno"); he 
  will need to know where the caller is calling from, and what carrier they 
  are using (AT&T, MCI, ...) if you know.

  Please note that the distribution of our new number is really outside of 
  BBN's control; all Curt can do is report the problem to the carrier in an 
  "official" way and keep after the carrier to fix the problem.
  ----------
  NOT ONLY DO SOME NUMBERS NOT WORK, BUT WHEN YOU CALL THE OLD NUMBERS,
  THE RECORDING SAYS THAT THE NUMBER IS OUT OF SERVICE, CALL YOUR
  OPERATOR FOR HELP.  I CALLED HER, SHE SAID CALL INFORMATION FOR THE
  NEW NUMBER.  INFORMATION SAID CALL SOMEONE ELSE,...
  IN OTHER WORDS, IF SOMEONE YOU KNOW TRIES TO CALL YOU AT YOUR OLD 
  NUMBER, TOO BAD!
  ----------
  My daughter's school has been trying to call me all day to tell me she is
  very sick.  But anyone who tries a 497 BBN number gets a NOT IN SERVICE
  message, and calling Information gets no information.  Help!  Is something
  being done about this, hopefully VERY quickly?  It's difficult to imagine
  how much BBN is losing every hour.  I just called a client who said he'd been
  trying to reach me since this morning and wondered when BBN had gone out of
  business.


A simple application of Murphy's Law

"ZZASSGL" <ZZASSGL@CMS.UMRCC.AC.UK>
Thu, 05 Nov 87 11:43:14 GMT
I look forward to reading all the various complicated ways in which computers 
can screw up your life in this forum. There are, however, some very simple
examples of Murphy's Law. The following has just happened to me AND I AM ANGRY!

We received on Tuesday a tape from Purdue University (I mention this because
I'm in Manchester, England).  I entered this tape into our "Stranger Tape"
system, where it was allocated a tape number and two big sticky labels were
placed on the reel showing the tape number to anybody who cared to read
them. The tape was then placed into the tape racks, and I successfully read
all the data from it.  Today, Thursday, I realized that there are in fact
some corruptions in one of the files that I read, so I attempted to re-read
the file from the tape.  This turned out to be impossible: sometime on
Wednesday an operator misread the tape number on the big sticky labels,
mounted my tape instead of the correct one, and wrote someone else's files
onto it.

This occurred on an MVS system and the tape contained a standard label.  When
you attempt to overwrite a labeled tape on our system an operator message
appears asking if you really want to write to the tape. The operator must
have answered yes to this question.

This is of course an example of seeing, hearing, or reading what you expect
to see, hear, or read, rather than what is actually there. There appears to be
nothing that you can do to prevent this kind of error.  The system had
correctly diagnosed that there was a problem of some sort and asked for
assistance to resolve the matter. It was told to carry on and write to the
file.
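
One partial defence is to make the confirmation harder to rubber-stamp: rather
than accepting a "yes", require the operator to retype the volume serial
printed on the label, so that agreement is checked rather than merely
asserted.  Below is a minimal sketch in C of such a console routine; it is a
hypothetical illustration with invented names, not actual MVS code.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical console confirmation: instead of asking
     * "really overwrite? (yes/no)", require the operator to retype
     * the volume serial from the reel.  A reflexive "yes" cannot
     * confirm a label the operator never actually read. */
    int confirm_overwrite(const char *volser)   /* serial from tape label */
    {
        char typed[16];

        printf("Mounted tape is labeled; overwriting destroys it.\n");
        printf("Retype the volume serial on the reel to confirm: ");
        if (fgets(typed, sizeof typed, stdin) == NULL)
            return 0;                           /* no answer: do not write */
        typed[strcspn(typed, "\n")] = '\0';     /* strip trailing newline */
        return strcmp(typed, volser) == 0;      /* write only on exact match */
    }

This would not stop an operator who reads the serial from the wrong reel, but
it does defeat the reflexive "yes".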

The consequences of this little screwup are: I may have to request a new
tape to be sent from Purdue (24 dollars postage each time); I now have access
to someone else's files, as the physical tape is still registered to me and I
can remove it from the tape library by returning the receipt; and someone has
lost their files without any evidence to show why.

Geoff Lane, University of Manchester Regional Computer Centre


Wrongful Accusations; Weather (RISKS-5.54)

<willis@rand-unix.ARPA>
Thu, 05 Nov 87 09:32:01 PST
RE:  Wrongful Traffic Tickets & Changing Computers (David A. Honig)

It's OK to have someone tell you over the phone to ignore some allegedly
wrongful action of a computer-based system.  But do you trust the phone
message?  And do you trust the person telling you the message?  And do
you believe that the person indeed knows the correct and complete facts?

At one time I was suspected of fraudulently using my bank card.  After it
got straightened out, I insisted on an explanation.  It took two letters to
the bank president to get it.

Turns out that the bank issued both MC and VISA cards, and — get this —
used the same numerical identification sequences for both.  I had a VISA;
an MC card of the same number had been lost and fraudulently used.  A data
entry clerk goofed on the digits which distinguish an MC from a VISA and I
got in the barrel.

Through my insistence on an explanation, the bank took corrective action
to avoid future similar problems.

For events beyond some threshold, the potential consequences of some
action alleged to be wrong are too RISKy to depend on phone notification.
It's your individual call when to insist on a written verification, but my
personal threshold is very low.  Only for the most trivial of events will
I accept phone statements.

                    Willis H. Ware, Santa Monica, CA

      - - - - - - - - - - - - - - - - - - - - - - - - - - - -

RE: Weather — or not to blame the computer? (Stephen Colwill)

Just a few other comments to wrap the subject up.  Maybe SE Britain isn't
geared up for serious weather, but neither is the U.S. in some ways.  In
spite of best efforts, we often screw it up.  Example: as a result of a
forecast that indicated only light snow, the Washington (D.C.) government
delayed calling out the snow equipment for an hour or so.  The snow was
heavy, not light, and the city came to a halt.

It also turns out that the most critical forecast that the NOAA Severe
Weather Forecast Center (in the midwest somewhere) has to make for a
hurricane is the prediction of its landfall location.  It's a tricky judgment,
and the WX folks try to get the best and continuous data on the storm.
But since the on-shore preparation time is measured in hours, the storm
can dance around considerably in the last few hours and hence we have the
occasional situation of prepared areas not being hit and unprepared areas,
clobbered.

Re weather data over the North Atlantic, NOAA experimented a few years
back with a package that flew in the hold of trans-Atlantic aircraft and
continuously reported weather observations at jet altitudes through the
GOES satellite back to Suitland, Maryland.  As I recall the argument, the
weight of the package cost the airline the revenue of one passenger seat
(maybe it was two or three seats) so the experiment was short-lived.  I do
not know its present status.

                    Willis H. Ware, Santa Monica, CA


Weather and expecting the unexpected

<EDMONDSON@COMP-V1.BHAM.AC.UK>
5-NOV-1987 12:33:37
It is worth responding to Stephen Colwill's comment that in the USA everyone
is geared up to watch the weather, respond appropriately, or better still
anticipate, and that in any case their larger land mass makes weather
prediction easier... which I take to be the drift of his contribution.

The winter of '82 will be remembered by inhabitants of Minnesota.  Just after
Christmas the airport was closed (for nearly 24 hours, first time in 20 years,
if my memory serves me well)  -  by a freak, and unforecast, snowfall.
Not just a few flakes, you understand - Minnesota folk are hardy stock - but
more than a foot of snow in a few hours, in the middle of the night.

I know, because I couldn't get back to Minneapolis, and when I did everyone
expressed surprise at being caught out, etc.  Note that Minneapolis has/had
the advantage of the KSTP weather-desk in addition to any national forecasts,
and that Minneapolis is just about centrally located in a huge land-mass.
The Norwegian bachelor farmers no doubt munched on their powder-milk biscuits,
but everyone else felt let down.

I add this to the debate because the usual British arguments  -  we don't
suffer extremes frequently enough (the usual complaint of railway service
when the points (switches) freeze, as they do most winters), we're too small
for good predictions, etc., just don't add up.

The moral - and comments welcomed on this - would surely be something along
the lines of: people are not psychologically prepared for extremes, of any
sort, and thus don't countenance them readily.  What this means for RISKS
is that we (designers of expert systems, or autopilots, or...) need to be
aware that it is at the margins of human experience that our experience
fails us!  We're least able to contribute the design data for those areas
of performance where they are most needed.

OK, so someone is going to bring up Three Mile Island - fine  -  but note that
I'm not restricting my comments to simple monitoring failures, or to pressures
of time, etc.  A fly or two on the Met. Office walls may well have heard
comments to the effect that 'this looks like a hurricane - but surely not
here' or some such.  Does anyone know of observational or other studies on
this aspect of human behaviour - it is particularly relevant to understanding
human response to data provided by computers when those data are unusual,
and thus potentially a component in a risky situation?

          [Readers may recall our mention of the notion of Henry Petroski 
          ("To Engineer is Human") that we tend not to learn from our
          successes, but have a great opportunity to learn from our
          failures.  PGN]


UNIX setuid nasty — Watch your pathnames

Stephen Russell <munnari!basser.cs.su.oz.au!steve@uunet.UU.NET>
5 Nov 87 02:10:40 GMT
A major security bug in our student assignment submission system was exposed
recently. This system, which allows students to submit solutions for
selected assignments, consists of two programs. The first checks that
the assignment is current, etc., then constructs the file pathnames for
the student's file and the destination (in the appropriate directory for
that assignment). It then invokes the second program, which is basically
a setuid root file copy program. It needs to be setuid root, as it writes
into a protected directory. The copy program is publicly executable, although
(we believed) reasonably well hidden.

Now for the bug. The copy program attempts to prevent incorrect use by
checking that the pathname for the destination begins with
"/user1/bags/GIVE/", which is the standard place for putting assignments.
However, _it checks no more than that_. It didn't take long for
some students to discover that "/user1/bags/GIVE/../../.." was also
acceptable. This allowed them to `back up' out of the correct directory to
the root directory, and from there to anywhere they liked!
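
For concreteness, the failing test amounts to a bare string-prefix comparison.
The sketch below is a reconstruction in C, not the actual submission program;
it shows the broken check alongside the obvious repair of rejecting any ".."
pathname component.

    #include <string.h>

    #define GIVEDIR "/user1/bags/GIVE/"

    /* The broken check: a prefix match happily accepts
     * "/user1/bags/GIVE/../../../etc/passwd". */
    int dest_ok_broken(const char *path)
    {
        return strncmp(path, GIVEDIR, strlen(GIVEDIR)) == 0;
    }

    /* A repaired check: the same prefix test, plus a scan that
     * rejects a ".." component anywhere in the pathname. */
    int dest_ok_fixed(const char *path)
    {
        const char *p;

        if (strncmp(path, GIVEDIR, strlen(GIVEDIR)) != 0)
            return 0;
        /* The prefix check guarantees path[0] == '/', so any
         * strstr() hit lies strictly past the start of the string. */
        for (p = path; (p = strstr(p, "..")) != NULL; p++)
            if (p[-1] == '/' && (p[2] == '/' || p[2] == '\0'))
                return 0;               /* "/../" or a trailing "/.." */
        return 1;
    }

dest_ok_broken() passes the students' "/user1/bags/GIVE/../../.." without
complaint; dest_ok_fixed() rejects it.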

Thus, they had a program that allowed them to read or overwrite any file
on the system. You can imagine the consequences - modified password file,
trojan horses installed in various system utilities, etc. When we finally
discovered this, it took many, many hours to clean up the mess. Worst of all,
we cannot be 100% sure we haven't missed something.

Moral: breaking normal security to provide a needed feature is a risk.


Penetrations of Commercial Systems

<TMPLee@DOCKMASTER.ARPA>
Fri, 30 Oct 87 14:18 EST
Does anyone know if there exists, apart from Donn Parker's publications,
any compendium of recent (last ten years) cases of theft, fraud, or
unauthorized disclosure, modification, or destruction of information in
commercial computer systems (analogous government ones OK too) wherein
it is at least plausible that better computer security would have
prevented, or helped detect (sooner?) the incident?  (I'm aware of
PGN's lists too, of course.)  ("Better computer security" means
technical measures, excluding pure physical security and communications
security where weaknesses in those were exploited only in a direct attack
on the end information; if they were exploited as part of an indirect
attack (e.g., stealing a password, planting modified software), that
would be of interest.)

(Individual reports also welcome; don't send them directly to either forum.)

Ted


Re: Penetrations of Commercial Systems

Peter G. Neumann <Neumann@KL.SRI.Com>
Sat 31 Oct 87 15:51:12-PST
Ted, I presume you are fishing for justifications for better system security
and network security in the face of many people trying to argue that most
breakins are not the result of poor system security, but rather of weaknesses
in administrative, operational, and user practices.  That argument needs
attacking.  Having better systems with more humane interfaces and with
nonbypassable and nontamperable auditing could help to diminish the sloppy
practice.  But having a system with inadequate security and integrity means
that the audit trails — and indeed some of the system controls themselves —
can be readily compromised.  Also, many of the external breakins and
internal misuses have been inspired by system weaknesses — even if they
resulted directly from sloppy practice.

One particularly horrible case involved administratively turning off the
audit trail in order to permit the computer systems to cope with the backlog:

$H Removal of Wall St audit trail enables $28.8M computer fraud (SEN 12 4)

But suitably efficient, nonbypassable, and nontamperable audit trails are
included under the notion of adequate security controls; a certain minimum
level of auditing should not be possible to turn off.

Following are just a few cases in which better system security controls
might have helped (including sounder operating systems, better enforcement of
separation of privileges in system use and application design, better user
identification and authentication, better audit trails and real-time
analysis, etc.):

SH Stanford network breakins (SEN 11 5)
SH Crackers break into AT&T computer systems (SEN 12 4)
SH W.German crackers plant Trojan horses, attack NASA, DoE systems (SEN 12 4)
SH Various other Trojan horses (SEN 12 4, etc.)
$H Volkswagen lost $260M, computer tampering foreign-exchange fraud (SEN 12 2)
$SH Phone credit-card numbers stolen from computer.  $500M total? (SEN 12 3)
$H  N-step reinsurance cycle; software checked for N=1 and 2 only (SEN 10 5)
    [although this would have required application-level integrity controls]
$SH 18 arrested for altering cellular mobile phones for free calls (SEN 12 2)

(SEN References are to Software Engineering Notes.  Most of these also
appeared in RISKS on-line.)

There are lots more cases.  These are just a few to get you started.  Peter


Re: Unix password encryption, again?

Dan Hoey <hoey@nrl-aic.ARPA>
5 Nov 1987 11:39:10 EST (Thu)
In Risks 5.48, Russ Housley asks whether Unix's ``modified DES'' is a
one-way hash.  Let us first pick nits:  he was not asking this of the
modified DES per se, but of the Unix password mapping algorithm.  The
DES and modified DES are cryptosystems, while the password mapping is a
transformation from the user's password to an ``encrypted'' version.
Confusion arises because the password mapping algorithm uses the
modified DES as a subroutine, so there is a strong temptation to say that
the password has been ``encrypted by the modified DES''.  This usage of
the term ``encrypt'' is at odds with the common cryptological concept of
a transformation by which information is transformed for later
decryption by a secret algorithm, or an algorithm that uses a secret key.

A second point of terminology concerns the term ``one-way hash'', which
has been interpreted in four different ways by me and the three people I
have discussed it with in private communication.  Russ Housley used the
term for the composition of a hash function and a one-way function.  (A
``hash'' function is a function that maps a large domain to a smaller
range.  A ``one-way'' function is a function that is computationally
infeasible to invert.)  When Matt Bishop (Risks 5.49) answered that the
password mapping was not a one-way hash, he was referring to the fact
that the password mapping is not a good hash function--the only hashing
that goes on is ignoring all but the first eight characters.  Peter
Neumann interpreted ``one-way'' as referring to a function that maps
many-to-one (a reading invited by the term ``hash'').  Certainly, it is
impossible in a sense to invert a many-to-one function F, since X cannot
be determined from F(X).  My understanding of a one-way function is one
for which it is hard to find any X' for which F(X')=F(X).  Such a
function can be either many-to-one or one-to-one; I suspect that the
password mapping is many-to-one even on eight-character passwords.

So, in answer to Russ's question, the password mapping is designed to be
a one-way function, but we have no proof that it succeeds.  In a
practical sense, no one knows whether the password function is hard to
invert, though no one has reported an easy way.  In a theoretical sense,
if P=NP (and perhaps if not) then no one-way functions exist.

But even if the password mapping algorithm is a one-way function, it is
not very secure.  Any one-way function can be broken by trying all of
the possible inputs.  In his message, Matt illustrated this with an
example of 100 users who chose passwords from a 25,000-word dictionary.
He described the effect of the modified DES, noting that a search for
the passwords would require 2,500,000 password mappings, and concluded
that ``the time for such a search should be unacceptably high.''

In a later private communication, he clarified this.  The example he
gave was intended only to describe the purpose of the modification to
DES, and not to claim that 2,500,000 password mappings are a serious
barrier to password breaking.  You might not realize that if you use the
software distributed with Unix, which would require ten days for the
task on a SUN 3.  But last year, Robert W. Baldwin announced a way of
speeding up the password mapping by a factor of 300, using VAX assembler
code and some tricks.  Using his tricks and some of my own, I wrote a
fairly fast password mapping in C, and in fact Matt Bishop has his own
fast C implementation.  So those 2,500,000 mappings can be undertaken by
Matt on his SUN 3 in about eight hours, or by Bob on his VAX 8600 in
about forty minutes.  And if Rick Gumpertz is still out there with his
Cray, carry a laser.

The clear and present risk to a Unix system is that the users may have
chosen passwords that can be found in a list of words, of words spelled
backwards, of first names, of the first letters of famous quotations, of
possible license plate numbers, of the six-letter strings, of the
eight-digit strings, of the geometrical keyboard patterns, or any other
fairly short machine-accessible list.
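
Such a search is entirely mechanical.  Here is a minimal sketch in C of the
dictionary check against a single stored password, using the classic crypt(3)
interface; the stored string is invented for illustration, and on some
systems crypt() is declared in <crypt.h> and needs -lcrypt at link time:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>     /* crypt() on many systems */

    int main(void)
    {
        /* Hypothetical encrypted field from /etc/passwd; the first
         * two characters are the salt. */
        const char *stored = "abJnggxhB/yWI";
        char salt[3], word[64];

        strncpy(salt, stored, 2);
        salt[2] = '\0';

        /* Try every candidate on standard input: a word list, first
         * names, reversed words, license plates, whatever. */
        while (scanf("%63s", word) == 1)
            if (strcmp(crypt(word, salt), stored) == 0)
                printf("password found: %s\n", word);
        return 0;
    }

Feeding it /usr/dict/words reproduces exactly the search that Morris and
Thompson warned about.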

I am horrified at the amount of verbiage it takes to straighten out these
simple misunderstandings.  If you want to know more about the issues, read
Morris and Thompson's ``Password Security: A Case History'' in the November
1979 CACM or your Unix Manual Set (volume 2, or System Manager's Manual,
depending on how your set is organized).  Please do not court the wrath of
the S. P. F. D. H. by further flogging this dead horse, or me.

Dan Hoey
          [This message is the result of extensive trialogue among Dan, Matt
          Bishop, and PGN.  There were enough confusions exhibited by other
          readers in other messages — including a bunch which have not been
          included in RISKS  — that it seemed worthwhile to try to set the
          record straight.  I hope this won't lead to further confusion.  PGN]


Software Testing

Danny Padwa <padwa%harvsc3@harvard.harvard.edu>
Thu, 5 Nov 87 15:53:39 est
    John Haller mentioned the question of software testing an issue or two
ago. Last summer I worked at a financial information company which (for obvious
reasons) takes software reliability very seriously. They had a testing system,
which, although sometimes tedious, seems to work extremely well.

    When the development group is ready with a software release, they
forward it to the quality assurance group, which puts it up on a test system
and tries very hard to break it (e.g., we simulated market conditions that
made "Blue Monday" look like nothing). Very detailed test plans are written
and carried out, testing all sorts of possible failures.

    When the QA group signs off on it (often after a few trips back to
development for tuning) a software package goes to the Operations Testing
Group, which runs it on a test string exactly the way it would run after
release. If it is consistent with currently operating systems for about a week,
it is then released to the operations teams.

    While this is not a sure-fire solution, it does make reasonably sure
that any software that goes "live" can handle normal conditions (the Ops
testing) and weird ones as well.

    Does anyone out there have similar experiences with multiple redundancy
in testing?  (NOTE: The various testing groups are relatively well separated
administratively, so that pressure on one group usually is not paralleled by
pressure on another.)

            Danny Padwa, Harvard University

 BITnet: PADWA@HARVSC3.BITNET   HEPnet/SPAN: 58871::PADWA (node HUSC3)
 MFEnet: PADWA@MFE.MFENET   UUCP: ...harvard!husc4!padwa          

38 Matthews Hall, Harvard University, Cambridge MA 02138 USA


Risks of using mailing lists

Dave Horsfall <munnari!astra.necisa.oz.au!dave@uunet.UU.NET>
5 Nov 87 13:20:19 +1100 (Thu)
Quoted from the Sydney Morning Herald, 19 Oct 87:

``An Albion Park [Sydney suburb] reader didn't have to open the
letter from a Sydney computer software company to know the status
of his account.  On the envelope, between his name and address,
was printed: "35 days UNFINANCIAL".''

Obviously, the billing software merely took the database details
as a mailing-list label...
                                      — Dave
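
A plausible reconstruction: the account record carries the dunning status in
a field between the name and the address, and the label routine prints
consecutive fields without asking what they are.  A sketch in C, with the
record layout and field names invented:

    #include <stdio.h>

    /* Hypothetical account record: the dunning status sits between
     * the name and the street address. */
    struct account {
        char name[40];
        char status[40];        /* e.g., "35 days UNFINANCIAL" */
        char street[40];
        char city[40];
    };

    /* A label printer that blindly emits consecutive fields
     * reproduces the envelope described above. */
    void print_label(const struct account *a)
    {
        printf("%s\n%s\n%s\n%s\n", a->name, a->status, a->street, a->city);
    }

A routine that named the address fields it wanted, instead of dumping a
slice of the record, would never have leaked the status.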
