The RISKS Digest
Volume 11 Issue 34

Monday, 25th March 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Re: A cautionary tale
David Shepherd
Re: What the laws enforce
Michael J. de Mare
Gene Spafford
Joe Morris
Neil Rickert
Mike Gore
Bob Johnson [2]
David A. Honig
Security - break-ins and punishment
Chet T Laughlin
Re: Thumbs and authors ...
Herman J. Woltring
Info on RISKS (comp.risks)

Re: A cautionary tale

David Shepherd <des@inmos.com>
Mon, 25 Mar 91 10:54:06 GMT
The problem with a rogue machine updating name service tables at SRC sounds
similar to problems I experienced when we changed our internet address to a
class B address.  On the relevant Friday evening all machines in the company
(then about 100 assorted Suns etc. and a few MicroVAXes) were to be taken
down and updated at the same time.  This included about 4 SparcStations at
our manufacturing plant at Newport, 30 miles away, which is linked to our
network.  Our system administrator here told the administrator in Newport
that the switch would take about half an hour, so their machines should be
shut down between 6 and 6:30 and would be updated over the weekend.
Needless to say, the switch took longer (it didn't start till after 6:30 in
any case), and when I tried to reboot the SparcStations in my group, half of
them failed: they decided to take their NIS information from an NIS server
in Newport (i.e., one 30 miles away rather than just down the corridor -
they had never done this before!) which now contained old internet
addresses that were completely wrong.

After several reboots failed to get the machines to choose a Bristol NIS
server, I gave up and left the computer support staff to get things sorted
out over the weekend, which, fortunately, they did.

It is very infuriating to know that you cannot boot your systems because
of a rogue machine which you cannot switch off because it's 30 miles away!
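
A minimal sketch, in modern Python, of the kind of cross-check that might
have caught this early - assuming the ypwhich(1) command is available on
the host; the server names are hypothetical stand-ins for the local
Bristol NIS servers:

    # Sketch: warn if this host's NIS binding has wandered off to an
    # unexpected server.  Assumes ypwhich(1) exists; server names are
    # hypothetical.
    import subprocess
    import sys

    LOCAL_NIS_SERVERS = {"nis1.bristol", "nis2.bristol"}

    result = subprocess.run(["ypwhich"], capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit("ypwhich failed -- is ypbind running?")

    server = result.stdout.strip()
    if server in LOCAL_NIS_SERVERS:
        print("OK: bound to local NIS server", server)
    else:
        sys.exit("WARNING: bound to unexpected NIS server " + server)

Run periodically (e.g., from cron), a check like this would turn a
mysterious weekend outage into a one-line warning.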

david shepherd: des@inmos.co.uk or des@inmos.com    tel: 0454-616616 x 529
                inmos ltd, 1000 aztec west, almondsbury, bristol, bs12 4sq


Re: What the laws enforce

"Michael J. de Mare" <demarem@clutx.clarkson.edu>
Sat, 23 Mar 91 15:27:40 EST
P.E.Smee@gdr.bath.ac.uk expressed concern that it would be inappropriate to
punish hackers excessively, in spite of the amount of work required to check
the system even after simple trespass, because the hacking has educational
value.  I think that it has value to the system administrator as well as the
hacker, due to the security flaws revealed (although I have noticed that there
tends to be hesitation in correcting such flaws, because the fixes tend to
take a certain degree of flexibility out of the system).  I think that in
spite of the benefits, the costs make instances of hacking intolerable.

I think that sites that are uncomfortable with dealing strictly with hackers
should set up a machine specifically for hacking into.  The idea is that
anything on it should be considered at risk, but anyone who does their hacking
on other machines will be dealt with very severely.  The hacker would get
whatever educational value exists without imposing large amounts of work, cost,
and inconvenience on others, and the system administrators could discover
weaknesses by monitoring the hacking machine.  The catch is that there is an
overhead cost in purchasing and maintaining the system which institutions
would, justifiably, be hesitant to pay.
                                        Mike


Re: "What the laws enforce" (Godwin, RISKS-11.33)

Gene Spafford <spaf@cs.purdue.edu>
25 Mar 91 04:27:23 GMT
Mike Godwin makes the point that intent is part of the usual process of
applying criminal law, and that the usual "hacker culture" (an oxymoron?) does
not condone or encourage malicious intent.  Perhaps that is true, but that does
not convince me of anything - it's like saying that the majority of car thefts
are done by kids with no intent to keep the cars, so auto theft should be a
minor crime.  The act is what is illegal, not the intended result (although the
results may lead to other charges, like vehicular homicide).  The intent
portion is demonstrated by breaking into something/some place that the
offender has no permission to access.

I must defer to Mike on questions of law as he has a law degree and is (I
assume) a member of the bar in some state(s).  However, my understanding is
that cases where people only intended to frighten or intimidate another but had
their plans go awry do happen.  The intent may have been simple hooliganism
(say, assault or malicious mischief at worst), but the charges that result may
be battery, manslaughter, arson, or any number of other more serious charges.
The intent involved was that of doing something criminal, even if the result
was not what was intended; the charges are filed on the act, not on the claimed
intended act.

The same should happen with computer breakins.  The intent part is satisfied if
it can be shown that they intended to access the system and knew they were not
authorized to do so (via an authorized account or standard public mechanism).
There is no question that the intruders know they are straying into territory
they should avoid.

I don't think Neil Rickert's related experience is the same.  He possibly
exceeded his authorized access locally and reported it.  That isn't the same as
trespassing from outside.  I'm assuming he knew something about the local
security policy and had some legitimate access to begin with.  Those are
perhaps matters to be decided in a civil case, as per Mike's comments.

The problem comes from people who do not have legitimate access of any type.
They intrude on systems without authorization and may (or may not) cause the
owners difficulties.  If I discover that people have broken into my office or
my home, it is up to me to determine if they have altered the locks or stolen
my coin collection.  If I decide not to check, that is up to me.  However, that
doesn't alter the fact that breaking and entering has occurred, and is a
chargeable offense, does it?

I'm not entirely clear on the law here, but I believe that the act (breaking
and entering) is the same crime even if I fail to lock my door, correct?  Isn't
it the case that turning the doorknob and pushing open the door is considered
"breaking" under the law? My insurance company may raise my rates or fail to
reimburse me if I failed to use the lock, and as Mike noted, I can pursue civil
action to recover the damages.  I bear the costs and consequences of my
carelessness.  But the intent of the intruders is shown by their presence
without authorization, whether the door was locked or not.

I also don't think the "attractive nuisance" argument makes any sense.  Suppose
I own a Ferrari and lock the doors, and some kids steal it for a joyride.
Are the thieves excused because the car would be considered an attractive
nuisance and I didn't hire an armed guard to protect it 24 hours a day?  I hope
not!  If I use standard locks on something generally accepted to be private
property, I should have the protection of the law.  If I forget to lock the
doors some night, that should not excuse the theft of the car, even if the
thieves return it with a full tank of gas.  Again, my insurance rates may go
up, or my insurer may refuse to pay for damage, if it can be shown that I
failed to lock the car, but that does not result in the case being thrown out
of court, does it?

Whether or not the majority of break-ins have been largely benign in
intent *so far* does not mean they will continue to be so, or that
they are not sometimes damaging in ways not envisioned by the
culprits.  I agree that intent of malicious damage should make the
charges more severe.  However, I do not believe the lack of such proof
means the break-in by itself is a minor crime.  There is *no*
ethically sound reason for people to break into systems they know they
do not have permission to access.  Likewise, such activity should not
be treated lightly by the legal system.

Gene Spafford, NSF/Purdue/U of Florida Software Engineering Research Center,
Dept. of Computer Sciences, Purdue University, W. Lafayette IN (317) 494-7825


Re: What the laws enforce (Rickert, RISKS-11.33)

Joe Morris <jcmorris@mwunix.mitre.org>
Mon, 25 Mar 91 12:21:05 EST
   [...]
>Yet, some people in these discussions of intrusion, etc, would define my
>actions as computer crime, punishable by lengthy prison terms.  [...]

This isn't a valid situation to compare to a break-in by hackers:

(a) Your exploitation of the security exposure was limited to verifying that
    it did in fact exist.

(b) You were aware of the identity of the responsible manager (here, the
    sysprog) and, having verified that the exposure existed, sent that
    person sufficient information to identify and confirm the existence
    of the exposure.

(c) However unintended, the account *does* have special privileges, by
    virtue of the security exposure.

Last, AND (IMHO) MOST IMPORTANT:

(d) You were authorized to be on the system, you were using the account
    issued to you for its intended usage, you became aware of the security
    exposure as a result of using your account for its intended purpose,
    and the computer center could identify you as the warm body behind
    the usage.

By definition, a user who logs into an account has access to the files
authorized to be modified by that account.  If the warm body who has
logged into the account isn't the authorized user, that fits a working
definition of "special privileges": access authority which the user should
not be given but which is available.

Several years ago one of my VMS operators managed to crash the VMS
system so hard that it took us two days to get everything back up to
a stable production status.  If this had occurred due to some unknown
and unauthorized outsider's actions I would have had to expend a
huge amount of resources to verify that no damage had been done to
the system itself or the users' data files, and I would treat the event as
vandalism.  Since the individual who triggered the failure was known to
us, was an authorized and trusted user of the system, and there was
a completely reasonable logic behind his actions there was no reason
to distrust the integrity of the VMS system itself.  (Reliability, of course,
is a different issue.  Thanks, DEC.)

Several other contributors to RISKS-FORUM have pointed out that much of
the cost of electronic intrusions is in the work required to identify what
(if any) files have been tampered with.  Too many observers don't seem to
understand that this is a major consequence of an intrusion, regardless
of the actual damage which might or might not have been done by the
intruder.
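
To make that verification cost concrete, here is a minimal sketch (in
Python, as an illustration only) of the checksum-baseline approach an
administrator would fall back on after an intrusion; the watched
directories and baseline location are assumptions:

    # Sketch: record a baseline of file checksums ("init" mode), then
    # compare against it after a suspected intrusion.  Watched paths and
    # the baseline file location are illustrative assumptions.
    import hashlib
    import json
    import os
    import sys

    WATCHED = ["/etc", "/usr/local/bin"]
    BASELINE = "/var/tmp/baseline.json"

    def checksums():
        sums = {}
        for top in WATCHED:
            for root, _dirs, files in os.walk(top):
                for name in files:
                    path = os.path.join(root, name)
                    try:
                        with open(path, "rb") as f:
                            sums[path] = hashlib.sha256(f.read()).hexdigest()
                    except OSError:
                        pass  # unreadable file; skip it
        return sums

    if sys.argv[1:] == ["init"]:
        with open(BASELINE, "w") as f:
            json.dump(checksums(), f)
    else:
        with open(BASELINE) as f:
            old = json.load(f)
        new = checksums()
        for path in sorted(set(old) | set(new)):
            if old.get(path) != new.get(path):
                print("CHANGED:", path)

Even with such a baseline in place, deciding which of the flagged changes
are legitimate is manual work - which is exactly where the cost lies.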

An apt comparison would be the recent cases in which Sudafed capsules were
tampered with.  Sure, it's likely that only a few capsules actually have
cyanide in them, but if you were the manufacturer or seller, would you really
want to sell them unless you had checked each capsule for tampering?  How
about the Tylenol tampering a few years ago?

You want to have some fun?  Go down to your local drug store and break the
anti-tamper seals on some Sudafed boxes, then put the boxes (no cyanide,
please...) back on the shelf.  You haven't stolen any property, and (you
claim) you haven't put poison in anything... but don't be surprised when the
cops invite you to take a guided tour of the local jail - unless, of course,
you work for the drug store and discovered that the seals were broken during
shipment.

One of the standard RISKS of computerized systems is that of unreliable data
being used in automated systems.  An unauthorized user, not accountable
to the owner of a system, who deliberately enters a system and is in a
position to corrupt data makes that data unreliable and thus of significantly
reduced value to its owner.  That's vandalism, and that in turn is a crime.


Re: What the laws enforce

"Neil Rickert, N Illinois U, CS" <rickert@cs.niu.edu>
Mon, 25 Mar 91 12:23:12 -0600
>This isn't a valid situation to compare to a break-in by hackers:

 Maybe it isn't valid in your eyes, or in my eyes.  But the question is not how
you or I see it, but how lawyers, jurors, and jurists, most of whom know zilch
about computers, will see it.  And they are wording and interpreting some of
these laws in very scary ways.

 -Neil Rickert


Re: "What the laws enforce" (Godwin, RISKS-11.33)

"Mike Gore, Institute Computer Research - ICR" <magore@watserv1.uwaterloo.ca>
Mon, 25 Mar 91 00:02:38 EST
>Oh, yes--*much* wiser to send some 19-year-old kid to prison on the basis of a
>bankrupt theory of deterrence than to make sure he doesn't get in in the first
>place.

    Yet is prevention our *only* concern?  I fully agree that a better
theory of deterrence should be sought, as well as ways of making it harder
for someone to break in.  Yet if we focus only on fixing the system to make
it harder to break into, we may fail to address another, and perhaps even
much bigger, problem: what leads some people to think breaking in is such a
minor social trespass in the first place.  (Not to mention other issues that
may be ignored in the process.)

    There is a risk here that, in trying to solve "the problem", we distill
everything down to a single issue and/or a single source of blame.

# Mike Gore, Technical Support, Institute for Computer Research
# UUCP:      uunet!watmath!watserv1!magore


Re: "What the Laws Enforce" (Godwin, RISKS 11.33)

CDC Contractor Bob Johnson;SCSS; <robjohn@logdis1.oc.aflc.af.mil>
Mon, 25 Mar 91 01:20:47 -0600
> It is precisely this kind of mentality that was so common, once upon
> a time, in concentration-camp guards. It is amazing to me that one
> can admit this sentiment in public without embarrassment.

I'm one who likes to "stir the pot" once in a while - play the "devil's
advocate", so to speak.  I was intending to bring out the "other viewpoint",
that taken by many government and commercial installations.  A very real
problem with breakins is that the collateral damage usually outweighs the
actual damage.  I purposely took the stance I did in order to balance out
the discussion.  Sometimes I feel that we computer professionals tend to
take the distanced, analytical approach a bit too often.  If I offended, I
apologize.  I certainly didn't expect it would elicit such a venomous
response.  However, I'd wager that Mike would harbor (albeit secretly)
quite similar sentiments were he placed in the role of an administrator
resecuring a system after it was compromised.

> It costs much less to practice good security in the first place.

Mike should talk to my superiors.  They already think I'm completely paranoid.
If I practiced any better security, it would be a full-time job.

> ... Few breakins have as much evidence of
> malicious intent as Bob's example here...it takes minimal understanding
> of the "cracker" subculture to determine that very few have malicious intent.

Ok.  Perhaps the malicious slant clouded the issue.  I was trying to point
out (through analogy) that we can't trust a hacker to truthfully tell us what
he/she was doing.  I do feel that I have more than a minimal understanding of
the "cracker" subculture, and I'm not implying that phreakers/hackers (on
the whole) are malicious.  A royal pain, yes.  Malicious, no.

> If kids or anyone else damages a system accidentally and/or non-maliciously,
> the proper legal remedy is civil litigation...It is perfectly appropriate
> to sue to recover this amount [cost of resecuring the system - bj]. This is
> a civil proceeding, not a criminal one, however. Or did you think that the
> purpose of the criminal-justice system was to save you litigation costs?

Perhaps Mike has a very valid point - maybe we should be asking for better
civil laws instead of criminal ones.  However, the laws that we have used to
prosecute our intruders are federal criminal laws.  If Congress can't get a
good enough handle on computer crime to draft effective and timely laws, how
are we going to prosecute these cases in local civil courts?  And most cases
cross interstate boundaries, which makes them federal jurisdiction anyway.
Perhaps we should try to prosecute crackers as federal civil cases?  On what
legislation and precedent should we build our cases?

> It is apparent when one studies the criminal justice system in
> this country that we prefer to criminalize some activities rather than make
> responsible policy decisions.

Probably because our legislators feel that the laws should "scare the H*ll
out of 'em", and that'll keep them from breaking the laws.  The better way
to address adolescent and post-adolescent system cracking would be some
combination of education, instruction in ethics, and providing some other
means of satisfying their curiosity.  Unfortunately, even then there would
be those thrill-seekers who would keep doing it.  And there ARE malicious
crackers (probably very few, but if they're good, they don't get caught).
I guess we're still stuck with criminal laws, unless someone comes up with
a workable way to overhaul the criminal justice system in America.

> More than a million people are in jail (the
> highest rate of imprisonment in the world, when I last checked), yet we don't
> want to build new jails for them.

Irrelevant.

> Oh, yes--*much* wiser to send some 19-year-old kid to prison on the basis of a
> bankrupt theory of deterrence than to make sure he doesn't get in in the first
> place.  And let's remove the requirement of criminal intent for conviction,
> shall we? Let's expand the criminal law to include anything and everything
> that is inconvenient, or that we don't like.

Egad.  Ranting a bit, aren't we?  Must've touched a raw nerve or something.
Purposely connecting to a system, ignoring posted warnings, and hacking into
it by whatever means has been shown, in court, to signify criminal intent.
And, I know of no 19-year-old kid who has gone to prison for computer cracking.
A year or two of suspended sentence, some monetary reparations, confiscation
of equipment, and a few hundred hours of community service are the typical
sentences handed out to date.  Face it, federal judges realize that these
kids aren't hardened felons - many even have their records expunged if they
adequately live up to their sentences.

For the record, I support much of the work of the Electronic Frontier
Foundation.  I am very concerned about issues of privacy, ethics, and
misuse of power and authority during investigations.  However, like Clifford
Stoll in "The Cuckoo's Egg", I have been forced to expand my views to
accommodate "the other side" - i.e., the establishment.  This problem is one
which is not going to be solved soon or easily, and one which must be brought
into open forums for in-depth and thoughtful discussion.  Does anyone have
any ideas for attacking this problem?  Myself, I favor mandatory computer
classes in schools, with instruction on ethics, data ownership, and the
applicable laws, perhaps coupled with some sort of industry-sponsored program
or competition for "gifted computer students" - to give them something to
work for.  But this still doesn't address the social pressures that entice
kids to enter the phreaking/hacking subculture in the first place...

Bob Johnson, Control Data Corporation (contractor to...)
Tinker Air Force Base, Oklahoma


Re: "What the Laws Enforce" (Rickert, RISKS 11.33)

CDC Contractor Bob Johnson;SCSS; <robjohn@logdis1.oc.aflc.af.mil>
Mon, 25 Mar 91 01:24:11 -0600
> Begging your pardon, but there is a great difference between logging into a
> computer using an account with no special privileges, and being found in a
> high-rise with wire and detonators.

(Neil describes a couple of system security holes he found and investigated,
and notes that he felt ethical and proper in doing that.)

> Yet, some people in these discussions of intrusion, etc, would define my
> actions as computer crime, punishable by lengthy prison terms...still
> their definition of computer crime is wrong.  It is a head in the sand
> approach which pretends that if we just punish the 'criminals' the problems
> will go away.  They won't.

I feel Neil was justified in doing what he did, and did the right thing.  As
a valid user, he checked into the problem and reported it to the proper
authority.  Had he been of a "cracker" mindset, he would have kept the
knowledge to himself and shared it only with a few select friends.  Had he
used this newfound knowledge for personal gain, that would have (IMHO) been
computer crime.

As Neil points out, there is a real problem classifying types of illicit
usage of computing equipment.  Consequently, it all gets called computer
crime.  Ergo, Neil's activities could be construed by some as a crime.

As another example - a while back, there was a documented hole in the uucp
software in AT&T System V Unix.  A couple of otherwise-responsible unixoids
dialed up a well-known uucp phone number at the AT&T factory in Oklahoma
City where the 3B's and 5B's are built.  The system they called into is
connected to the entire AT&T computer network - even back to Bell Labs.
The unixoids in question exercised the security hole, gained root access,
poked around enough to determine that they were actually root, created a
file with a message documenting the hole (created as root), and mailed a
message to 'root' telling where to find the file.  They considered it to
be a public service, and they didn't break or change anything.

Recently, we hired one of the administrators who worked at AT&T during that
time.  It seems that the next day, the proverbial fecal matter hit the air
movement inducer.  The resulting crackdown and resecuring effort extended
far beyond the one machine, and must have cost AT&T tens of thousands of
dollars in manpower and programming.

Did they, in fact, do a public service - or did they commit computer crime?
Was there a better way to bring the security hole to the attention of AT&T?
Some have noted that the hole was well known for at least three years, and
that Berkeley had closed it in the BSD versions soon after it was found.
Apparently, "rubbing their noses in it" convinced them to fix it - a patch
was distributed very soon thereafter (according to my sources).

Bob Johnson
Control Data Corporation (contractor to...)
Tinker Air Force Base, Oklahoma


Re: "What the laws enforce" (Rickert, RISKS-11.33)

"David A. Honig" <honig@ICS.UCI.EDU>
Mon, 25 Mar 91 17:37:06 -0800
> ... high-rise with wire and detonators.

Which made me think: Given the presence of serious security holes in basically
every operating system, is there such a thing as "an account with no special
privileges", esp. for someone who's already found a way to break into a system?

I think the "you find someone in your high rise at 3AM with batteries, timers,
and detonating caps" analogy might be amended to say, "you find someone in your
high rise at 3AM who manages to destroy his backpack before you get to him,
such that you have no clue what was in it and only his word as to his
intentions..."


Security - break-ins and punishment

Chet T Laughlin <chetl@sparc19.tamu.edu>
24 Mar 91 23:11:28 GMT
I am somewhat puzzled as to how one can press for prison terms when the system
that was entered is not even being protected by the administration and/or where
there has been no demonstration of malicious intent toward the system or users.
Maybe an example from "The Inner Circle" will help illustrate my confusion...

It seems that a teenage boy and one of his friends decided to trespass on Air
Force property.  I believe the base was NORAD headquarters under the mountain.
Now, they were caught and held in a room until their parents showed up.  They
were guarded by a soldier approximately 2-4 years older than they were.  Their
parents picked them up, both sets of authorities scolded them, and they were
taken home.  The Air Force knew exactly *how much of a threat* they were and,
I would guess, exactly *why* they were there - all those "do not enter" signs.
(EOE end-of-example)

Now.  Some 14-year-old hacker-wanna-be stumbles into my system after seeing an
account name on the net.  He/she guesses there is no password and hits that
one-in-a-quintillion lucky chance.  They stumble around trying "man
this-that-the-other", "catalog", "help", "who", "dir", etc.  Now, this isn't
too hard to detect: a simple program that keeps track of who performed intro
commands, which ones, and when should flag this type of bumbling intruder (a
sketch of such a detector follows).  And it's my fault for letting the system
users have no password at all to start with.
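
A minimal sketch of the detector described above - assuming a simple
per-command log of (user, timestamp, command); the log path, command list,
and threshold are all illustrative:

    # Sketch: count "orientation" commands per user from a command log
    # and flag likely bumbling intruders.  The log format and threshold
    # are assumptions for illustration.
    from collections import Counter

    INTRO_COMMANDS = {"man", "help", "who", "dir", "catalog", "ls"}
    THRESHOLD = 5  # this many intro commands starts to look like probing

    def flag_bumblers(logfile):
        counts = Counter()
        with open(logfile) as log:
            for line in log:
                try:
                    user, _when, command = line.split(None, 2)
                except ValueError:
                    continue  # skip malformed lines
                if command.split()[0] in INTRO_COMMANDS:
                    counts[user] += 1
        return [u for u, n in counts.items() if n >= THRESHOLD]

    for user in flag_bumblers("/var/log/commands"):  # hypothetical log
        print("possible bumbling intruder:", user)

A threshold this crude will also flag legitimate novices, but for the
bumbling intruder of the example that is exactly the signal wanted.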

Maybe this seems long-winded, but pressing for time behind bars for this
(strawman) seems overkill.  Now, perhaps if one could demonstrate a DESIRE TO
DESTROY or MALICIOUS INTENT, I could see otherwise.

   Chet Laughlin        chetl@sparc21.tamu.edu


Re: Thumbs and authors ... (Wonnacott, RISKS-11.24)

"Herman J. Woltring" <UGDIST@NICI.KUN.NL>
Mon, 11 Mar 91 09:23 MET
Joking apart, current US legislation no longer requires a readable Copyright
statement: with the ratification of the Berne Author's Rights / Copyright
convention a couple of years ago by Uncle Sam, such formalities are no longer
required for Copyright to apply in the USA.  For  e n f o r c i n g  copyright
in a US court of law, US citizens (both natural and legal persons) must,
however, register their claim with the US Copyright Office within, I believe,
a reasonable period from legitimate first publication.  Non-US citizens do
not have to do so under Berne in order to maintain their rights in the USA.

I'm not sure what would constitute `(first) publication' of a fingerprint...
And what about a father or mother forbidding his/her offspring to `copy'
themselves via an unfavoured marriage or other relationship?  Can (s)he ask
the court to impound and destroy the illegitimate copies?  I'm sure that the
Fair Use doctrine (US Copyright Act, Section 107) would outlaw such a request.

At any rate, the US Constitution introduces Copyright `to promote the useful
arts and sciences', while continental European Author's Right Law ("Droit
d'Auteur") is more subjectively concerned with the reputation and, secondly,
commercial position of the natural author who must be protected from pirating
publishers or unethical co-authors trying to modify his/her work against his/
her wishes.

This more subjective orientation, by the way, explains why the common "work for
hire" rule under Anglo-American Copyright does not apply in many European
jurisdictions: while an employer will usually be able to claim exploitation
rights (under Labour Law if not under Copyright Law), he will have a hard time
in claiming moral rights such as the right to determine whether and when to
publish, and the right to modify a work, especially if the work made under
employment has the author's personal character.  For `objective works'
(computer programs?), moral rights are less problematic.

Herman J. Woltring, Eindhoven, The Netherlands (Studycommittees on s/w & chips
protection / Computer crime,  Netherlands Society for Computers and Law)
Brussellaan 29,  NL-5628 TB  EINDHOVEN,  The Netherlands Tel. +31.40.480869
