The RISKS Digest
Volume 11 Issue 67

Tuesday, 14th May 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



The UK Data Protection Act and email/net and university users
Chris Reynolds
DEC copies system software, charges pirates
Re: Free speech & government control of information
Larry Hunter
Re: Case of the Replicated Errors: An Internet Postmaster's Horror Story
Neil Rickert
Erik E. Fair
Dan Boyd
Re: Netware LOGIN problems
Leonard Erickson
Re: Emergency off switch - IBM 1620
R.I. Cook
Info on RISKS (comp.risks)

The UK Data Protection Act and email/net and university users

Tue, 14 May 1991 09:37:14 +1000
One of the problems of the UK Data Protection Act is that it is only concerned
with the use of the data and not the contents. An author writing a biography is
not covered by the Act because he is word-processing - but if he searches the
text ONCE for the occurrence of a personal name, or creates a name index, the
Act immediately applies.

This applies to all UK users of email and usenet. If they just read their mail
and discard it the Act does not apply. If they keep a copy of the text for
later reference by name (or scan their mailbox to select mail) the Act applies.
(If you have a personal name in a usenet kill file this could well be
processing under the Act!)

The matter gets worse. If you have ANY data under the Act you have to register
(or be covered by an employer's registration), with only a few exceptions. In
addition, the usenet postmaster acts as a bureau under the Act, so that if any
of his users process personal information from usenet he should register as a
bureau.

Registration is a complex, time-consuming and expensive process in which you
have to detail the kinds of personal information you hold, where you get it
from, and what you use it for. There are NO minimum levels. I produced some
schools software, including a dozen teaching examples. One of these used a list
of the English monarchs, and included the personal information that Queen
Elizabeth II came to the throne in 1952. Technically speaking, whenever I sold
a copy of the software to Hong Kong (or the Isle of Man!) I would need to be
registered as an overseas dealer in personal information, and any UK school
using the package should be careful not to reveal the information about the
Queen to a passing adult on an open day display unless their registration
included disclosure to members of the general public.

As far as examination marks are concerned, the Act contains specific provisions
which most UK Universities have chosen to ignore. They escape by using the 40
day maximum period allowed to execute disclosure, by saying they will always
take 40 days to disclose, and because exam marks are never in the computer for
more than 39 days they will never be disclosed. The Data Protection Registrar
(effectively the relevant ombudsman) has commented that this is probably legal
but violates the spirit of the Act.  If universities keep exam data (including
continuing assessment results) on the computer, or use print-out/OCR techniques
to "cheat", they are definitely in breach of the Act. If they manually re-input
a previous year's data, they have deliberately chosen a risky route, with an
obviously increased possibility of error, and may end up violating the
principles that "Personal data shall be accurate ..." and "Appropriate security
measures shall be taken ... against accidental loss or destruction of personal
data".
Needless to say, these aspects of the Act are totally unworkable, and only serve
to encourage people to ignore it, even when it matters, which has a serious
"risks" component. An Act which failed to distinguish between automated and
manual methods, and which avoided the need for registration by allowing anyone
to ask anyone for data (if they have any) would be far less risky.

For further information see my paper "Computer Conferencing and Data
Protection" in The Computer Law and Security Report, March/April 1990, or the
more popular "Letter of the Law" in the (UK) Personal Computer World for May
1990. (If anyone knows of any relevant UK court case developments in the last
year, please let me know.)

Chris Reynolds    CSIRO, Division of Information Technology, PO Box 1599,
NORTH RYDE, NSW 2113, AUSTRALIA                            +61-2-887-9480

DEC copies system software, charges pirates [ Digital Forgery ]

13 May 91 12:16 -0700
RISKS readers will recall the story of a DEC UK employee who attended a seminar
and copied the system software off the machine the seminar was taught on.  The
firm that gave the seminar was subsequently charged with pirating the software.

My concern is not with the cloak and dagger aspect or the actual copying, but
with the admissibility of digital media as evidence.  Audio recordings are not
admissible as evidence in any jurisdiction that I am familiar with; compared
with the tricky job of splicing together an audio tape, forging a copy of a mag
tape with system software on it is trivial.

My conclusion (as a legal layman) would be that the presentation of a tape
with an incriminating serial number on it would have exactly the same weight
as the presentation of a piece of paper with the same serial number scribbled
on it: none (actually, I guess it might convince me that the witness was not
misremembering).  What counts here is the word of the DEC employee who says:
yes, I copied this (to tape or paper) off of the system in question.


Re: Free speech & government control of information (RISKS-11.60)

Larry Hunter <>
Mon, 13 May 91 11:41:45 EDT
I feel compelled to continue the debate that Jerry Leichter and I have been
having on government control of information, particularly as it applies to the
current attempts to regulate effective encryption.  In RISKS-11.60 Leichter

     There are two basic areas in which we differ.  First, Hunter
     believes I'm attempting to prescribe appropriate actions.  If I
     gave this impression, let me correct it: I'm trying to PREDICT.
     My claim is not that stricter controls are a good idea.  Rather,
     I suggest that they are an inevitable result of the direction in
     which our technologies are headed.

If Leichter thought stricter controls on the flow of information were a bad
idea, he certainly fooled me.  The way I read the rest of his message, he
doesn't seem to object that strongly.

     (There's certainly room for a good deal of debate about
     "technological determinism" here.  It's not that I don't believe
     that alternative paths are POSSIBLE; I'm just projecting what I
     think is by far the most likely path.)

I agree that the government is very likely to attempt to dramatically extend
its already quite intrusive control over the flow of information.  Major
corporations and other socially powerful entities will also attempt to control
the flow of information that affects them.  I further agree that alternative
paths are possible.  Does it not seem to follow from those stipulations, and
the implication above that this control is not good, that we, as sophisticated
and privileged (i.e., well-educated, financially secure) computer scientists,
OUGHT to be working to PREVENT new controls on the flow of information,
especially something as nasty and unnecessary as the prohibition of effective
encryption?
     The second issue grows from the first, and Hunter's view of how
     the fundamental laws of our society are determined.  To state it
     starkly: If "society" comes to believe that government controls
     on information are necessary, will constitutional limitations
     still prevent them from coming into being?  Hunter believes so; I
     think he's being naive.

I believe that I (and others) OUGHT to work hard to keep the government from
imposing the controls that it has proposed, and others like them.  The
constitution is a very powerful tool that can be used in this battle to
preserve rights that lawmakers (and even political majorities) may wish to
curtail.  I think it is one of our best tools in this battle.  It may be naive,
but it seems to me that you have a very cynical view of the role of the
constitution in this society.  (To my mind, it is America's most positive
contribution to political history.)

     The Constitution protects "speech", "religion", "the press".  It
     never defines any of these terms; case law does.  We think we
     know what they mean, and that the "clear meaning" will not
     change, but history makes it clear that these terms are quite
     malleable. ....  Note that we don't need a constitutional
     amendment to effectively change the definitions of crucial terms
     in the Constitution - all we need is a majority of the Supreme
     Court.  ... I see little reason to suppose that the courts will
     blindly accept that all computerized information is "speech", if
     society decides that some limitations on it are necessary.

True, and somewhat cynical, but recall that I am not arguing for reliance on
the supreme court to protect the ability of Americans to use effective
encryption.  I think that we ought to be educating, lobbying, FIGHTING to
preserve this aspect of the right to free speech, and that the constitution is
an important tool in this fight.  "Society" is not an entity that believes and
decides; people do.  People who, these days, are being called upon to have
opinions about many issues that were until recently obscure technical minutiae.  I
suggest that we as responsible computer scientists have an obligation to
communicate, educate and act as concerned experts in the political process.

     In the past, we've generally been able to draw the line between
     things or acts and information - "mere speech"....  In the
     information age, this line becomes fuzzy.  For export, a
     description of DES is OK, a chip implementing it is not.  How
     about a good software implementation?  Should a computer virus -
     simultaneously speech (pure information) and a potentially
     dangerous "thing" - be freely publishable?

These are important questions that can (and will) be settled in the
political process, as is the question of whether the government should be able
to break all encryption schemes sold to American citizens.  Some of these
questions are easier than others.  For example, letting loose a virus or any
other kind of destructive program seems clearly to be action; Robert Morris, Jr.
didn't even try a free speech defense at his trial.  Free speech is not an
international guarantee (there are many people who are denied visas to visit
the US because of their opinions or comments), so the export issue seems moot,
although someone from DEC ought to know that export restrictions can include
software as well as hardware.
     Let me give a non-computer example of the kind of problem we will
     face: Mr. M is a numerologist and conspiracy theorist.  He
     believes that he can track down conspiracies in the world by
     examining various numerical data related to people.  He starts a
     magazine, OutNumber, in which he regularly publishes any numbers
     he can find concerning (mainly) the rich and powerful.  Mr. M has
     a following, and he has money to pay for tips, so he has no
     problem finding all sorts of interesting numbers concerning
     people.  Soon he is publishing people's charge account numbers,
     checking account numbers, PIN's, private telephone numbers,
     cellular phone numbers, and so on.  At no time is there any
     question of Mr. M's involvement in any attempt to use this data
     for fraudulent purposes - he is sincerely interested only in his
     numerological research.

     OutNumber, and Mr. M, are probably protected under the
     Constitution as we currently construe it.  My question is, should
     they be?  Do you think there's really a social consensus that
     it's essential to protect the ravings of a Mr. M, even in the
     face of (let us imagine) clear evidence of massive fraud by
     OutNumber readers against those "profiled" in the magazine?  How
     long do you think the courts will stand up in the face of a new
     consensus that says, hey, get rid of this guy?

I leave this intact because I think it is a good example.  I would say
that Mr. M should indeed be allowed to publish his magazine.  I for
one would suspect that anything that shut him down would also be used
to close down David Burnham's Transactional Records Analysis Center
(TRAC) which has made some remarkable inferences about the IRS and
other government activities on the basis of analysis of public
records.  I'm sure there are lots of people in the government who
would like to shut him down, and that such a law would be applied to
TRAC long before it would be applied to Mr. M's hypothetical gossip
rag.  And I suspect that existing law would adequately protect the
celebrities defrauded by readers - that's what fraud laws are for.  As
for social consensus, I recognize that the content of the Bill of
Rights is consistently supported by less than half of the population
in polls, but that does not mean it ought to be overturned or ignored.
It means that we have to act to preserve it.

     Finally, Hunter responds to my suggestion of some fiction stories
     with readings on political theory.  I have no problem with this.
     The reason I suggest fiction is that social consensus, and
     ultimately law, grow as much out of the gut as out of the head.
     Good fiction lets you explore your own gut feelings.

Emerson's book is not political theory, it is a history and explication of free
speech rights.  Read whatever you like. After all, that is the whole point of
this argument, isn't it?

Lawrence Hunter, National Library of Medicine.  Please note that I am not
speaking as a representative of the government.

Re: Case of the Replicated Errors: An Internet Postmaster's Horror Story

Neil Rickert <>
Mon, 13 May 91 13:36:50 -0500
In RISKS DIGEST 11.66, Erik Fair <fair@APPLE.COM> reports on a mail problem
encountered at Apple.COM and at other sites.

 Erik's report made interesting reading, and does raise some issues of concern.

 However, in pointing the finger at the culprit, I believe he has pointed it
fairly and squarely in the wrong place.

>The important part is that the "To:" field contained exactly one "<" character,
>without a matching ">" character. This minor point caused the massive
>devastation, because it interacted with a bug in sendmail.

 This "minor point", as Erik calls it, is a violation of the standard for
Internet addresses (RFC822).  Many would say that this is a MAJOR point.

>Sendmail, arguably the standard SMTP daemon and mailer for UNIX, doesn't like
>"To:" fields which are constructed as described. What it does about this is the
>real problem: it sends an error message back to the sender of the message, AND
>delivers the original message onward to whatever specified destinations are
>listed in the recipient list.
>This is deadly.

  Excuse me, but this by itself is not deadly.

  Let's look at the exact set of conditions which were involved:

  1.  Mail was sent with an invalid "To:" header.

  2.  The mail was completely deliverable, in spite of the syntax error, so
      sendmail proceeded to deliver it.

  3.  Sendmail reported the error to the message originator.

  4.  Sendmail did not "repair" the syntax error.

  5.  The message was destined for a mailing list with many recipients,
      implying that the error would be rediscovered at each of a large
      number of relay points.

 The combination of all of these was involved in the error.
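
To see why item 5 dominates, a back-of-the-envelope sketch (Python, with
hypothetical numbers): each relay that re-parses the broken header sends one
error report back toward the origin while still forwarding the message, so
the reports scale with recipients times relay hops.

```python
# Rough model of the multiplication: one error report per relay hop,
# per recipient path.  The counts below are illustrative, not from the
# actual incident.
def error_reports(recipients, relays_per_recipient):
    return recipients * relays_per_recipient

# An ordinary message: a handful of recipients, a few hops each.
print(error_reports(3, 2))      # 6 stray error messages -- a nuisance

# A large mailing list: the same bug converges on one postmaster mailbox.
print(error_reports(700, 3))    # 2100 error messages
```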

 Erik points his finger only at items 2 and 3.  This, I believe, is incorrect.

 In spite of the syntax error, it is correct to attempt to deliver the mail
if this is still possible.  Robustness requires this.

 Once a serious error has been discovered, it is correct to report this.
Reliability of systems depends on reporting of errors.

 Items 2 and 3, then, are just plain good programming practice.  They cannot
be blamed for this problem.

 Look now at item 4.  There is no question that had 'sendmail' repaired the
problem header this would have avoided the problem.  Unfortunately there are
no standards as to how this should be done.  The RFCs recommend against
modifying headers.  Perhaps some provision should be included that where
an invalid header causes an error to be reported, that header must be
"repaired" in some way before the message is sent on.  Perhaps the best way
to repair the header would have been to relabel it as, say, "Invalid-To:"
or something equivalent, which hopefully would prevent further syntax
analysis at future sites.  But to implement something like this requires
a standard.
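
A sketch of that relabeling idea (the header name "Invalid-To:" is Neil's
hypothetical; no standard defines it, and this is illustrative Python, not
mailer code): once the error has been reported, rename the offending header
so downstream relays stop re-parsing it and re-reporting the same fault.

```python
# Quarantine a known-bad header by renaming it, so later relays do not
# re-discover (and re-report) the same syntax error.
def quarantine_header(headers, name):
    repaired = []
    for line in headers:
        if line.startswith(name + ":"):
            repaired.append("Invalid-" + line)   # e.g. "To:" -> "Invalid-To:"
        else:
            repaired.append(line)
    return repaired

msg = ["From: fred@example.com", "To: <boss", "Subject: report"]
print(quarantine_header(msg, "To"))
# ['From: fred@example.com', 'Invalid-To: <boss', 'Subject: report']
```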

 Certainly sendmail can be indicted for item 4.  But its guilt is secondary
to that of the originating mailer which emitted the erroneous header in the
first place.  Thus the finger here should be pointed back fairly and squarely
to Apple.COM, with only contributory negligence on the part of sendmail.

 The primary problem, however, is in item 5.  For a normal mail message
with a handful of recipients, each relayed through a modest number of hosts,
the number of messages would have been quite small.  It is because this
message is to a mailing list that so many problems arose.

 The conclusion is clear.  Administrators of mailing lists have a special
responsibility.  It is not enough to use an aliases entry to replicate the
original message.  The mailing list must be considered to be creating a new
message based on the contents of the original message.  As such it must
take care to meet the various standards for mail (such as RFC822).  This
should involve validation and repair, if necessary, of any required headers.

Neil W. Rickert, Computer Science, Northern Illinois Univ., DeKalb, IL 60115

Re: Case of the Replicated Errors: An Internet Postmaster's Horror Story

"Erik E. Fair" (Your Friendly Postmaster) <>
Mon, 13 May 91 17:16:06 -0700
I disagree [with Neil].  I would have no problem with sendmail logging that a
syntax error was found. What I object to is that it BOTH reported the error
back in a separate message to the sender, AND forwarded the message onward to
other waiting sendmails, which would do the same thing. This is a recipe for
disaster, as I saw.

Sendmail should either bounce the letter, or deliver it with no further
comment than a log entry. It should NEVER report an error in a return
message when it is not the MTA doing final delivery, unless it is
actually bouncing the letter, and will not forward it further.
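
Erik's proposed rule might be sketched like this (a Python rendering of the
policy, not sendmail itself; the function and return values are invented for
illustration): a relay either bounces or delivers, and never reports an error
for a message it is still forwarding.

```python
# Policy sketch: report errors only at final delivery (as a bounce);
# intermediate relays may log a malformed header but must pass the
# message on silently.
def relay(message, is_final_delivery, header_ok):
    if header_ok:
        return "deliver"
    if is_final_delivery:
        return "bounce"              # report the error, do NOT deliver
    log("syntax error in header")    # note it locally...
    return "deliver"                 # ...but forward without comment

def log(entry):
    pass  # stand-in for a syslog call

print(relay("...", is_final_delivery=False, header_ok=False))  # deliver
print(relay("...", is_final_delivery=True, header_ok=False))   # bounce
```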

And this has nothing to do with mailing lists - it can (and will)
happen if a user just sends out to a list of 100 people, with no
formally set up mailing list involved.

    Erik E. Fair    apple!fair

Re: Case of the Replicated Errors: An Internet Postmaster's Horror Story

Dan Boyd <>
13 May 91 14:43:39
Just goes to show you how hairy sendmail is — a single misplaced open-bracket,
and suddenly your site switches into Craig-Shergold mode...
                                            — Dan
Daniel F. Boyd

Re: Netware LOGIN problems (John Graham-Cumming, RISKS-11.65)

Leonard Erickson <>
14 May 91 01:39:24 EDT
>I'm managing (at least for the moment) a Novell network running Netware
>286. I've recently realised that it is possible to pipe a file into the LOGIN
>command.  This has the rather unfortunate effect that it is possible to
>write a Trojan horse which simulates login

Well, under Netware 2.11 (the oldest version that I've worked with), piping
does *not* work. The password must be entered from the keyboard (or stuffed
into the keyboard buffer).

So the first solution would be to update your software. Stuffing the
keyboard buffer is still a loophole, but the vulnerability is very limited if
proper security is used: for instance, not allowing users to write files
in the LOGIN directory on the network. This forces the Trojan to be
installed on a particular machine, and requires the "owner" of the program
to visit that machine to get the info.

For statistical purposes, we wrote a program that is run as part of the system
login script that saves whatever strings are passed to it to a globally
*writable* file. We save the Physical-ID, the login name, the date, time and a
few other things.
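
The logging program might look roughly like this (a Python analogue for
illustration only; the original was presumably a small DOS utility invoked
from the Netware system login script, and the filename and field order here
are invented):

```python
# Append whatever fields the login script passes to a shared, append-only
# log line, stamped with the current date and time.
import sys
import time

def record_login(logfile, fields):
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(logfile, "a") as f:
        f.write(stamp + " " + " ".join(fields) + "\n")

if __name__ == "__main__":
    # e.g. invoked as: loginlog <station-physical-id> <login-name> ...
    record_login("logins.log", sys.argv[1:])
```

Because each line carries a timestamp and the station's physical ID, a single
grep over the file answers "which stations did this user log in from, and
when" -- exactly the query described below.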

This turned out to be *very* useful the one time someone submitted a "fake"
request for a user account. Once the fake user was called to our attention (he
wrote some objectionable email), it was a matter of a few minutes to grep through
the log and find which stations he'd used and when... from there, it was easy
to find him.

We also limit most users to *one* connection at a time. This makes it very
obvious if anybody tries to use someone else's account at the same time as they
are on line.

As others have noted, it is users ignoring good security practices that is
the biggest problem. I've come in early on a Monday morning and discovered
that users in an open area (office cubicles) had not only left their
machines logged in all weekend, but had left them inside the mail
program. I wandered over and sent them a letter "from themselves" warning
them that I could have sent *anything* to *anyone*. Didn't faze them.

Emergency off switch - IBM 1620

Mon, 13 May 91 15:57:53 EDT
There was real concern in the days of the IBM-1620 and early 360's that the
need to destroy the link to power would arise.  In some versions, pulling the
switch caused a sort of knife blade to sever the cables.

These precautions were seldom needed.  Most centers, including the one in which
I operated the 1620 and 360, had an elaborate power control system which shut
off power to the computer, the lights, the terminals, and the air conditioning.
R.I.Cook, M.D.
