The RISKS Digest
Volume 11 Issue 58

Tuesday, 30th April 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



Hacking, Civil, and Criminal Law
Herman J. Woltring
Utrecht crackers and the law
Fernando Pereira
Larry Seiler
David Collier-Brown
dik t. winter
Richard A. O'Keefe
Re: Rude behavior
Andrew Marchant-Shapiro
Brad L. Knowles
Tim Wood
Re: One-time passwords
Steve VanDevender
Barry Schrager
Info on RISKS (comp.risks)

Hacking, Civil, and Criminal Law

"Herman J. Woltring" <>
Fri, 26 Apr 91 09:54 MET
My recent posting `Hacking a la Neerlandaise' <RISKS 11(52)> gave rise to a
couple of private comments, and an invitation for a telephone interview with
Unix Today.  One commenter thought that I was too subtle; the other accused me
of network abuse:

>Oh, gimme one giant break.  No one cares whether or not you log in to Libra-
>ries on the Internet so you can look up a book.  That's why they are on the
>Internet, and that's part of the reason the Internet exists.  The articles
>that have been flying around lately seem to indicate that {you, your com-
>patriots, someone in The Netherlands} has been doing a little bit more than
>that, and more to the point, bragging about it.

>Stop trying to bull**** the rest of the net.  This isn't a RISK anymore, it's
>an annoyance.

> [names omitted] and   ...harvard!think!...

Maybe PGN had different thoughts, because he released my tongue-in-cheek item
to the readership.  What I wanted to do is to show how the way things are
described in newspapers, in debates like the current one, and in filibustering
can bias arguments in a certain direction — of course, this is standard
practice in (televised ?) litigation: you're accused of all sorts of nasty
things, and the court (or the jury ...) has to make up its mind what facts are
behind the rhetoric.

For one thing, I don't see that my compatriots have been `bragging' — they
merely demonstrated possibilities under current law, in a time that the Dutch
Parliament is trying to make up its mind how to balance between Freedom of
(access to) Information and Computer/Network Trespassing.  Information and its
use have always been `free' unless special protections are provided for.
Patent law (information is public, with free review and research, but its
routine use is protected), trade secrecy law (independent discovery is free,
improperly obtaining trade secrets is not — e.g., through bribes or
blackmail), copyright law (pure information is free, its expression is not, but
the underlying information may be recovered under Fair Use / Fair Dealing in
Anglo-American, Common Law) are examples of this freedom, protected under the
USA's First Amendment and under various International Treaties.  Society is
dynamic, and new possibilities call for (new?) freedoms and new protective
measures.

One hot issue in the current parliamentary proceedings is the extent to which
`computer trespassing' should have appropriate protection as a mandatory
prerequisite.
If you open your vaults, dismiss the guards, turn off the alarm, and if your
name is Dagobert Duck, you are equally liable for soliciting criminal
behaviour as is #789-123 for committing a felony while he purloins your
bullion.  Crying wolf without even guarding the sheep will not raise much
sympathy...  In my book, simplistic passwords, retaining known system passwords
or not plugging known, remote-access loopholes are tantamount to the same.

The point is important because legislators tend to write law books in a generic
fashion, leaving it to the courts to determine which paragraph applies in what
case (that's what appeals are about so often: in The Netherlands, lower courts
assess the facts, and the higher and supreme courts then may have to determine
whether the Law has been applied properly; beyond that, you may even go to the
European Court).  Thus, criminalising `computer trespassing' might, in
principle, apply not only to the Pentagon or Watergate, but also to your
cooking machine or Swiss/Japanese watch.  Determining borderlines in the
no-man's-land between private and common interests is what we should be
concerned with.

We are a property-biased society, where civil law gives us our own
responsibilities to protect our goods (and note that Information is not a
material good but an immateriality which cannot be `stolen' but only kept
secret, shared, or destroyed).  With Information more and more becoming a
valuable `thing', the `owners' wish to increase their grip on what they wish to
hold.  Because they are often too lazy to accept their civil responsibilities
and to sue privately (it's expensive, too!), they try and fall back on criminal
law where public authorities can be called upon for free to protect their
private interests.  However, Criminal Law is (or should be), the Ultimate
Resort only, when all else fails.

I have been told that Shakespeare once described a technique for artificially
creating the sound of thunder in the manuscript for a future play.  When
attending the premiere of a play by a colleague who had privately reviewed his
manuscript for him, he heard, to his exasperation, a very familiar sound.
Jumping up, he roared "YOU STOLE MY THUNDER!", much louder than his `stolen'
thunder.

There are many alternatives to criminalising computer trespassing: breach of
privacy is one of them, but this does not apply if you make public access to
your secrets too easy.

Herman J. Woltring, Eindhoven, The Netherlands, (Former) member, study comms on
s/w protection and computer crime Netherlands Society for Computers and Law
Tel. (private) +31.40.480869, +31.40.413744  ugdist@hnykun53.bitnet

Utrecht crackers and the law

Fernando Pereira <>
Thu, 25 Apr 91 22:30:34 EDT
In my two postings on the Utrecht crackers, I took pains to mention only ethics,
rather than the law (which I do not feel qualified to discuss), with respect to
my suggestion that sites whose *officials* condone Internet-wrecking activities
do not belong in the Internet.  However, several RISKS readers, in particular
Mike Godwin (RISKS 11.54) read my postings as coming on the side of
heavy-handed legal approaches to the problem of cracking. I worry that
discussion of these matters on RISKS often degenerates into fights between
``libertarians'' and ``string-them-uppers''.  I was looking instead at the
Internet as a voluntary association.  My point would have applied to any member
of any club that promoted activities against the club's norms. Most of the
restraints we observe in social intercourse are not in the law, but life
without them would be much, much harder. Those informal restraints and
sanctions form the first line of defence against destructive behavior, when
bringing in the law serves only to radicalize positions and create
                 Fernando Pereira, AT&T Bell Laboratories, Murray Hill, NJ 07974

Comments on computer trespassing (Dutch hackers)

"LARRY SEILER, DTN225-4077, HL2-1/J12 25-Apr-1991 1558" <>
Thu, 25 Apr 91 14:05:13 PDT
People are making several analogies between network breakins and other
sorts of situations that fairly cry out for comment.

1) Common law access.  In my part of the US, if you openly and flagrantly use
someone's property for 20 years, you have the right to keep doing it.  Phil
Agre suggests that this applies to those who don't block security holes.
Nonsense.  If a specific user is on a system, and the administrators know about
it and continue to allow that user to be on the system, this reasoning might
apply.  But crackers in general do not work openly.  A security hole is like a
hole in a fence.  The law does not require me to patch all holes to be
protected, or even to put up a fence.  It requires me to tell people I see
using my property that I don't want them to.  Attempting to prosecute a cracker
surely satisfies that criterion...

2) Tort law.  If you come onto my property and get injured, you can sue me if I
was maintaining an attractive nuisance.  If you come onto my property and injure
me or someone else, you cannot sue me, so it is silly to say that sites should
be punished simply for having security holes.  HOWEVER, if you use my property
to injure someone else, that other person could sue you, and possibly me as
well.  So it is reasonable to cut off from the network sites that are commonly
used to break into other sites — especially if those in charge of that site
express reckless disregard for the problem.  But none of that excuses the
cracker who actually causes the damage.

3) Socializing the youth.  Mike Godwin correctly points out that we have a
responsibility to socialize the younger generation, and putting them in jail
for doing what comes naturally (e.g. cracking?) is a bad way.  I respectfully
point out that saying that it's ok by leaving them unpunished is also a bad
way.  Other sorts of penalties should be applied, such as taking away their
computer access until they learn to be responsible.  Also, while sites should
be more secure, and people should build fences around attractive nuisances, it
isn't a solution to the problem of trespassing to require that *all* property
be surrounded by sturdy fences.

Re: Dutch nation portrayed as a bunch of network bashers

David Collier-Brown <>
Fri, 26 Apr 91 08:22:15 EDT

(Ralph 'Hairy' Moonen) writes:
...                         Not just the issue of one
| single university like Utrecht, but of ALL sites on the internet.  Because you
| do realise that a smart cracker could get away with this just as easily in the
| States as in Holland? So don't lay any guilt-trip on the Dutch will you?

  Alas, it happens all the time on this side of the Atlantic:  earlier this
year some 

Re: Dutch crackers and irresponsible officials

Tue, 30 Apr 91 09:06:22 +0200
>From Volume 11, Issue 53:
 >                                                  Clearly, as with burglary of
 > an unlocked home or theft of a car with keys hanging from the ignition,
 > carelessness by the owner does not set aside the guilt of the perpetrator.
 > Conversely, carelessness by the owner does not relieve her/him of
 > responsibility for the loss.  In the "Dutch cracker" incident, perhaps BOTH the
 > cracker's host and the host with known, repairable security holes should be
 > barred from the Internet.
Some clarification.  Dutch law specifically distinguishes 'insluiping'
(sneaking into) and 'inbraak' (breaking into).  The second being a worse
crime than the first (but both are crimes).  Separate from this again is
'stealing' and 'tampering'.  Also somebody who leaves his car/home unlocked
is guilty of 'uitlokking' (inviting?  I do not know the correct English term),
which serves as a means to diminish the guilt of the guy 'sneaking in'.

So in terms of Dutch law this appears to be a case where some hacker
'sneaked into' a system without actually 'stealing' or 'tampering'.  The
system was guilty of 'inviting'.  But as such it is only a minor offense.

dik t. winter, cwi, amsterdam, nederland

Utrecht (Re: Cooper, RISKS-11.53)

Richard A. O'Keefe <>
29 Apr 91 02:11:10 GMT
In article <>,
several people commented on the Utrecht break-ins.  Brinton Cooper,
for example, wrote
> The time has come to put this debate behind us.  Clearly, as with
> burglary of an unlocked home or theft of a car with keys hanging
> from the ignition, carelessness by the owner does not set aside the
> guilt of the perpetrator.

The underlying attitude here is that some blame _does_ attach to the
sites that were broken into, for being the equivalent of an "unlocked
home".  I'm responding because this attitude rather worries me, and I
think the analogy is invalid.  Let me cite three homely examples.
 1) When I was flatting in Edinburgh, I locked myself out once.
    The locksmith took *seconds* to get in.
 2) When I was working in Palo Alto, I locked my keys in my car once,
    with the engine running.  The AA man took *seconds* to get in.
 3) The flat I'm in now has (several) locks on both doors, and special
    fasteners on the few windows that open.  But a man with a crowbar
    (or a good strong knife) would take seconds to break in, and someone
    who was prepared to smash a window would have as little trouble.

I put it to the readers of comp.risks that running a SparcStation or a
UNIX/386 system "straight out of the box" but with passwords on all
accounts is the moral equivalent of an ordinary home with Yale locks and
latched windows, _not_ the equivalent of an "unlocked" home.  Ordinary
locks hardly even slow down a determined and knowledgeable intruder.
They are, in effect, little more than "keep out" signs which block the
idly curious.

I put it to the net that "well known" security holes in operating systems
as shipped to customers are the moral equivalent of a lock manufacturer
shipping locks known to be defective, _not_ the moral equivalent of a
home-owner failing to install any locks.  I have been using UNIX since the
days of V6+.  But I have no idea what the "well known" security holes are
(well, setuid scripts, uucp setup, a few things).  What is J. Random Luser
to do with his UNIX/386 box that doesn't even come with a C compiler?  Why
should _he_ be held negligent if there are "well known" holes in his O/S,
and why should such people be kept off the net, either by law or by fear?
(I should point out that I have an actual charity in mind, that would
benefit quite a bit from a TCP link, but they aren't programmers, and I
certainly haven't the skill to make their system secure.)

RE: rude behavior

Andrew Marchant-Shapiro <marchana@UNVAX.UNION.EDU>
Thu, 25 Apr 91 08:52 EDT
Bill Murray's suggestions regarding disconnecting rogues from the net make good
sense; they are similar in form to a Japanese management technique called
``reverse feedback'' — in an auto plant, each station has the responsibility
to determine whether or not what it receives from the previous station is
acceptable.  This means that if you send me (for example) a damaged engine
block, I have a choice.  I can either accept that damaged part and do my bit of
work on it, or I can send it back to you and insist that YOU fix it.  If I
choose to do my work and send it on, even though I know it is damaged (or
because I fail to notice) and the next station DOES notice, they can send it
back to ME (and I have to fix it; since I didn't back it up to you, it is now
MY responsibility).

This approach makes quality control a distributed process with real
responsibility, instead of a centralized process that can't attribute
responsibility.  In general, it results in higher-quality products.

The same can be said for controlling net access.  If I find a rogue coming in
through UCB, then I should cut off UCB until they do something about it.  UCB
should then take steps to eliminate the problem from their system and back it
up, and so forth.

This sort of distributed responsibility might at first cause problems (a lot of
people might lose the internet all at once!) but would help create a sense of
responsibility relatively quickly.

I don't know if any of this is useful to you, but I did want to say that I
endorse your approach to maintaining the net (at least in theory!).

Andrew Marchant-Shapiro / Departments of Political Science & Sociology
marchana@union.bitnet / / marchana @unvax@union

Re: RISKS of being rude...

Brad L. Knowles <>
Fri, 26 Apr 91 13:14:26 EDT
    I have read the recent comments on Bill Murray's article in RISKS
Digest 11.53, and find I must side with Mr. Murray.  Perhaps he is being
more radical than I would be, but he has the right idea anyway.  If we
find someone who is committing a crime (trespass of either the physical or
electronic sort), and we cannot prosecute them (for whatever reason), then
we must ostracize them.

    As an example, let us suppose that the front doors to the Pentagon are not
locked, and someone who has Diplomatic Immunity (DI), say Saddam Hussein's
little brother, slips in.  He is caught on the premises a number of times, and
each time he is escorted out of the building (sometimes rather forcefully).
Mind you, the halls of the Pentagon are supposed to be Unclassified, as we have
public tours that go through the Pentagon every thirty minutes from 9:00 in the
morning to 4:30 in the afternoon (when a crisis like Desert Shield/Storm is not
in progress).  Nevertheless, Mr. Hussein continues to try to break into the
Pentagon illegally.  What can we do?  Our only alternative is to send him
packing, and not grant him a Visa (no, not a CitiBank Visa) to return to the
US.  If everyone Saddam Hussein sends to the US under the cover of DI tries to
break into the Pentagon (or where ever), then our only course of action is to
send ALL of his diplomatic personnel back to Baghdad.

    Likewise, if someone from the University of Utrecht keeps breaking into
computers at Lawrence Livermore Labs, or NASA, or where ever, and we can't get
the local police to prosecute them (assuming that we have collected enough
information to identify the perpetrators), then our only course of action is
to cut the University of Utrecht off from the Internet in the US.  If most every
University in the Netherlands has people trying this, then we must cut them all
off.  Since the Defense Communications Agency (DCA, soon to be changing our
name to Defense Information Systems Agency (DISA)) has the right, nay the
RESPONSIBILITY to keep up (and presumably to police) the Internet, then I think
I can safely say that this might actually get implemented if the folks over at
the University of Utrecht keep it up.  Also, the Internet Activities Board
(IAB) might get a little upset with these folks if they don't stop their
activities.

    We must *not* punish the victims for the crimes of others!  As was stated
by another author, some folks simply don't have a choice — they *must* stay
with an old version of their OS because some mission-critical piece of software
runs *only* with that version of the OS.  We must not say to these people
"Sorry, you have an old version of the deadbolt lock, and therefore any crime
of trespass or resulting from trespass will not be prosecuted."  Just think if
this were you, how would *you* react to that statement?  You would have two
choices — let the crackers continue to victimise you, or completely disconnect
yourself from the rest of the world.  Could you live with either result?

Re: Response to Rude Behavior

Tim Wood <>
Mon, 29 Apr 91 11:22:33 PDT
In Bill Murray's call for ostracism (sanctions?) against the University of
Utrecht, he may have found the limits of ostracism as an effective tool for
reform of the socially destructive.  Because of the high degree of
interconnectedness of producers and consumers in these times, to implement
fully a policy of ostracism against this institution is infeasible, politically
if not practically.

Recent history shows us the tenuousness of the ostracism argument.  It took
years of economic sanctions against South Africa before the leadership there
changed sufficiently to allow the beginning of the dismantling of apartheid.
It took far more time before there was the international will to begin imposing
them.  And those sanctions were accompanied by continual, vocal and sharp
denouncements of that social policy.  If there was such great difficulty in
coming to terms with the need to oppose such a manifestly cruel system as
apartheid, how long will it take to reach a consensus, from the currently
divided dialogue between reasonable people, that all computer cracking is
reprehensible and evil?

Development of the Persian Gulf war is just the most recent example of the lack
of faith in sanctions.  The UN, steered by the US, decided that the certainty
of mass destruction in Iraq was preferable to the uncertainty of months
(years?) of sanctions as the way to--rightly--dislodge Saddam Hussein's army
from Kuwait.  The argument against sanctions went to the effect that "the world
can't afford to wait for them to start hurting, nor afford the risk of their
not being uniformly observed."  If it is so easy to undermine a sanction by
selling a few tons of military equipment, how much easier is it to do so by
editing a few lines in a communications configuration file?

The call to impose sanctions on China after Tiananmen Square didn't even
approach international consensus, let alone get implemented.

The ethics of computer use are not well-enough understood or agreed-upon for
there to be a political consensus against the University, IMHO.  And as long as
there is even one site that refuses to disconnect, that site can perform the
equivalent of money-laundering on whatever comes out of Utrecht, disguising
their mail and hiding the origins of the connections they make.  Once the
political will is there, the practical part is easy.  But without the will,
which is in short supply these days anyway, the technology can do nothing.

Tim Wood, Sybase, Inc. / 6475 Christie Ave. / Emeryville, CA / 94608
 {pacbell,pyramid,sun,{uunet,ucbvax}!mtxinu}!sybase!tim   415-596-3500

One-time passwords (RISKS-11.53)

Steve VanDevender <>
Sun, 28 Apr 91 15:36:04 PDT
Bill Murray's article on one-time passwords generated by what he calls "tokens"
describes an interesting approach to improving system security, although he
recognizes some of the ways in which such a system could be compromised, such
as stealing a user's password generator.

There are some ways of defeating a one-time password system that he did not
describe, though.  Although a password generator is individually seeded with a
unique value that makes its generated passwords unique, both the host computer
and the password generator must algorithmically generate the same password
values for comparison (or the host computer must be able to extract the unique
identifying information from any password the generator produces).  Therefore,
one-time passwords just add a layer of security by obscurity to the
authentication process, since determining the password generation algorithm
allows one to generate correct passwords without a password generator.

Security always depends ultimately on secret information.  A one-time password
system may reduce the possibility of security breaches by requiring password
crackers to hit a moving target, but I suspect that any widespread
implementation of one-time password systems would cause password crackers to
change from finding ways to obtain passwords to finding one-time password
generation methods.
                              Steve VanDevender

One-Time Passwords

Barry Schrager <>
26 Apr 91 11:04:41 EDT
Regarding:  Risks 11.53 - WHMurray - One time Passwords

Mr Murray is totally correct.  All enforceable security mechanisms are based
on the user knowing something (like a password) or having something (like a
badge for a reader) or having some personal attribute (handwriting, retina
analysis, etc.).  More modern security mechanisms identify the user first
and then via some mechanism determine what he can or cannot access.

But the basis for all this is knowledge of the identification of the user.
The more confident you can be of a user's identity, the more secure your
system will be.

The problem with signature, fingerprint, or retina analysis is that the
hardware is relatively expensive and on a diverse and unsecured network, one
can easily duplicate the information transferred via a simple program in
one's personal computer.

The token Mr Murray refers to — really a small specialized calculator — is
designed to take one key as input and produce another key as output.
Therefore, the security (or lack of it) of the network itself will not
jeopardize the security of the computer system attached to the network.
Someone can listen in on the network and will not be able to project what
the next password produced by the token will be.
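The challenge-and-response token described above ("take one key as input and
produce another key as output") can be sketched like this.  The hash-based
construction, names, and code length are assumptions for illustration; tokens
of the period used their own internal algorithms:

```python
import hashlib
import secrets

def token_response(secret: bytes, challenge: bytes) -> str:
    """The 'small specialized calculator': one key in, another key out."""
    return hashlib.sha256(secret + challenge).hexdigest()[:8]

# Host side: issue a fresh random challenge for every login attempt.
secret = b"shared-token-secret"     # held by the token and by the host
challenge = secrets.token_bytes(8)

# The user keys the challenge into the token and reads back the response.
response = token_response(secret, challenge)

# The host computes the same value from its own copy and compares.
assert response == token_response(secret, challenge)

# An eavesdropper on the network sees (challenge, response) pairs but,
# lacking the secret, cannot predict the response to the next random
# challenge -- which is why an insecure network does not jeopardize
# the host's authentication.
```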

I do happen to disagree with Mr Murray in that it is my opinion that the
tokenized challenge and response should be in addition to a user-changeable
password which he knows.

Thus, we have a secure system in that user identification is based on
something the user knows (his password) and something he has (the token) and
this all works on a network that does not have to be absolutely secure.

And we have all this at a fairly small cost — much less than the costs of
the more sophisticated hardware devices for fingerprint, signature, etc.
