The RISKS Digest
Volume 3 Issue 62

Monday, 22nd September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Massive UNIX breakins at Stanford
Jerry Saltzer
Rob Austein
Andy Freeman
Scott Preece
F-16 Software
Henry Spencer
1,000,000 lines of correct code?
Stephen Schaefer
Info on RISKS (comp.risks)

Massive UNIX breakins at Stanford

Jerome H. Saltzer <Saltzer@ATHENA.MIT.EDU>
Mon, 22 Sep 86 11:04:16 EDT
In RISKS-3.58, Dave Curry gently chastises Brian Reid:

> . . . you asked for it. . . Berkeley networking had nothing to
> do with your intruder getting root on your system, that was due purely
> to neglect.  Granted, once you're a super-user, the Berkeley networking
> scheme enables you to invade many, many accounts on many, many machines.

And in RISKS-3.59, Scott Preece picks up the same theme, suggesting that
Stanford failed by not looking at the problem as one of network security
and, in light of its use of Berkeley software, by not enforcing a
no-attachment rule for machines that don't batten down the hatches.

These two technically- and policy-based responses might be more tenable if
the problem had occurred at a military base.  But a university is a
different environment, and those differences shed some light on environments
that will soon begin to emerge in typical commercial and networked home
computing settings.  And even on military bases.

There are two characteristics of the Stanford situation that
RISK-observers should keep in mind:

     1.  Choice of operating system software is based on many factors,
not just the quality of the network security features.  A university
has a lot of reasons for choosing BSD 4.2.  Once that choice is made,
the Berkeley network code, complete with its casual approach to
network security, usually follows because the cost of changing it is
high and, as Brian noted, its convenience is also high.

     2.  It is the nature of a university to allow individuals to do
their own thing.  So insisting that every machine attached to a
network must run a certifiably secure-from-penetration configuration
is counter-strategic.  And on a campus where there may be 2000
privately administered Sun III's, MicroVAX-II's, and PC RT's all
running BSD 4.2, it is so impractical as to be amusing to hear it
proposed.  Even the military sites are going to discover soon that
configuration control achieved by physical control of every network
host is harder than it looks in a world of engineering workstations.

Brian's comments are very thoughtful and thought-provoking.  He describes
expected responses of human beings to typical current-day operating system
designs.  The observations he makes can't be dismissed so easily.

                    Jerry Saltzer


Massive UNIX breakins at Stanford

Rob Austein <SRA@XX.LCS.MIT.EDU>
Mon, 22 Sep 1986 23:03 EDT
I have to take issue with Scott Preece's statement that "the fault
lies in allowing an uncontrolled machine to have full access to the
network".  This may be a valid approach on a small isolated network or
in the military, but it fails horribly in the world that the rest of
us have to live in.  For example, take a person (me) who is
(theoretically) responsible for what passes for security on up to half a
dozen mainframes at MIT (exact number varies).  Does he have any
control over what machines are put onto the network even across the
street on the MIT main campus?  Hollow laugh.  Let alone machines at
Berkeley or (to use our favorite local example) the Banana Junior
6000s belonging to high school students in Sunnyvale, California.

As computer networks come into wider use in the private sector, this
problem will get worse, not better.  I'm waiting to see when AT&T
starts offering a long-haul packet-switched network as a common carrier.

Rule of thumb: The net is intrinsically insecure.  There's just too much
cable out there to police it all.  How much knowledge does it take to
tap into an ethernet?  How much money?  I'd imagine that anybody with
a BS from a good technical school could do it in a week or so for
under $5000 if she set her mind to it.

As for NFS... you are arguing my case for me.  The NFS approach to
security seems bankrupt for just this reason.  Same conceptual bug;
NFS simply aggravates it by making heavier use of the trusted net
assumption.
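
To make the conceptual bug concrete, here is a deliberately simplified
model of a file server that takes the client's word for the requester's
identity.  This is an illustrative sketch in C, not the real NFS protocol
or any actual server code; the point is only that an uncontrolled machine
on the wire can claim whatever uid it likes.

    #include <stdio.h>

    /* Toy model of the trusted-net assumption: the server believes
       whatever uid the client kernel claims to have authenticated. */
    struct request {
        int claimed_uid;        /* supplied by the client, never verified */
        const char *path;
    };

    static int server_allows(const struct request *req, int file_owner_uid)
    {
        /* The only "check" is against the unverified claim. */
        return req->claimed_uid == 0 || req->claimed_uid == file_owner_uid;
    }

    int main(void)
    {
        /* A compromised workstation simply claims to be root (uid 0). */
        struct request forged = { 0, "/u/someone/private-file" };
        printf("access to %s: %s\n", forged.path,
               server_allows(&forged, 1234) ? "granted" : "denied");
        return 0;
    }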

Elsewhere in this same issue of RISKS there was some discussion about
the dangers of transporting passwords over the net (by somebody other
than Scott, I forget who).  Right.  It's a problem, but it needn't be.
Passwords can be transmitted via public key encryption or some other
means.  The fact that most passwords are currently transmitted in
plaintext is an implementation problem, not a fundamental design
issue.
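
As one sketch of "some other means" (my illustration, not a deployed
protocol): in a simple challenge-response exchange the server sends a
random challenge, the client returns a one-way hash of the challenge
combined with the password, and the password itself never crosses the
wire.  The toy hash below merely stands in for a real cryptographic
function.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Toy stand-in for a cryptographic one-way hash (djb2); a real
       scheme would use a strong hash or public key encryption. */
    static unsigned long toy_hash(const char *s)
    {
        unsigned long h = 5381;
        while (*s)
            h = ((h << 5) + h) + (unsigned char) *s++;
        return h;
    }

    /* Client side: combine the server's challenge with the secret;
       only the resulting hash value goes onto the network. */
    static unsigned long respond(const char *password, unsigned long challenge)
    {
        char buf[256];
        sprintf(buf, "%lu:%s", challenge, password);
        return toy_hash(buf);
    }

    int main(void)
    {
        const char *password = "tiger";     /* shared secret, never sent */
        unsigned long challenge, reply;

        srand((unsigned) time(NULL));
        challenge = (unsigned long) rand(); /* fresh nonce from the server */

        reply = respond(password, challenge);       /* crosses the net */
        if (reply == respond(password, challenge))  /* server's own check */
            printf("login accepted: response matches expected value\n");
        return 0;
    }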

A final comment and I'll shut up.  With all this talk about security
it is important to keep in mind the adage "if it ain't broken, don't
fix it".  Case in point.  We've been running ITS (which has to be one
of the -least- secure operating systems ever written) for something
like two decades now.  We have surprisingly few problems with breakins
on ITS.  Seems that leaving out all the security code made it a very
boring proposition to break in, so almost nobody bothers (either that
or they are all scared off when they realize that the "command
processor" is an assembly language debugger ... can't imagine why).
Worth thinking about.  The price paid for security may not be obvious.

--Rob Austein <SRA@XX.LCS.MIT.EDU>


Massive UNIX breakins at Stanford

Andy Freeman <ANDY@Sushi.Stanford.EDU>
Mon 22 Sep 86 11:07:04-PDT
Scott E. Preece <preece%ccvaxa@GSWD-VMS.ARPA> writes in RISKS-3.60:

    reid@decwrl.DEC.COM (Brian Reid) writes:
    The issue here is that a small leak on some [unknown]
    inconsequential machine in the dark corners of campus was
    allowed to spread to other machines because of the networking code.

    No, you're still blaming the networking code for something it's not
    supposed to do.  The fault lies in allowing an uncontrolled machine to
    have full access to the network.  The NCSC approach to networking has
    been just that: you can't certify networking code as secure, you can
    only certify a network of machines AS A SINGLE SYSTEM.  That's pretty
    much the approach of the Berkeley code, with some grafted on
    protections because there are real-world situations where you have to
    have some less-controlled machines with restricted access.  The
    addition of NFS makes the single-system model even more necessary.

Then NCSC certification means nothing in many (most?) situations.  A
lot of networks cross administrative boundaries.  (The exceptions are
small companies and military installations.)  Even in those that
seemingly don't, phone access is often necessary.

Network access should be as secure as phone access.  Exceptions may
choose to disable this protection but many of us won't.  (If Brian
didn't know about the insecure machine, it wouldn't have had a valid
password to access his machine.  He'd also have been able to choose
what kind of access it had.)  The only additional problem that
networks pose is the ability to physically disrupt others'
communication.

-andy             [There is some redundancy in these contributions, 
                   but each makes some novel points.  It is better
                   for you to read selectively than for me to edit. PGN]


Massive UNIX breakins at Stanford (RISKS-3.60)

"Scott E. Preece" <preece%mycroft@GSWD-VMS.ARPA>
22 Sep 1986 16:24-CST
    Andy Freeman writes [in response to my promoting the view
    of a network as a single system]:

>       Then NCSC certification means nothing in many (most?) situations.
--------

Well, most sites are NOT required to have certified systems (yet?). If they
were, they wouldn't be allowed to have non-complying systems.  Viewing the
network as a single system makes the requirements of the security model
feasible.  You
can't have anything in the network that isn't part of your trusted computing
base.  This seems to be an essential assumption.  If you can't trust the
code running on another machine on your ethernet, then you can't believe
that it is the machine it says it is, which violates the most basic
principles of the NCSC model. (IMMEDIATE DISCLAIMER: I am not part of the
group working on secure operating systems at Gould; my knowledge of the area
is superficial, but I think it's also correct.)  
                   [NOTE: The word "NOT" in the first line of this paragraph
                    was interpolated by PGN as the presumed intended meaning.]

--------
        Network access should be as secure as phone access.  Exceptions may
        choose to disable this protection but many of us won't.  (If Brian
        didn't know about the insecure machine, it wouldn't have had a valid
        password to access his machine.  He'd also have been able to choose
        what kind of access it had.)  The only additional problem that
        networks pose is the ability to physically disrupt others'
        communication.
--------

Absolutely, network access should be as secure as phone access,
IF YOU CHOOSE TO WORK IN THAT MODE.  Our links to the outside
world are as tightly restricted as our dialins.  The Berkeley
networking software is set up to support a much more integrated
kind of network, where the network is treated as a single system.
For our development environment that is much more effective.
You should never allow that kind of access to a machine you don't
control.  Never.  My interpretation of the original note was that
the author's net contained machines with trusted-host access
which should not have had such access; I contend that that
represents NOT a failing of the software, but a failing of the
administration of the network.
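
One way to make that an administrative routine is to audit the trust the
software grants.  The sketch below is mine, not standard software: it
assumes /etc/hosts.equiv in the usual one-hostname-per-line form and a
locally maintained "controlled.hosts" list (an invented name) of machines
the site actually administers, and flags any trusted host not on that list.

    #include <stdio.h>
    #include <string.h>

    /* Return nonzero if `host` appears as a line in `file`. */
    static int listed(const char *host, const char *file)
    {
        char line[256];
        int found = 0;
        FILE *fp = fopen(file, "r");
        if (fp == NULL)
            return 0;
        while (fgets(line, sizeof line, fp) != NULL) {
            line[strcspn(line, "\n")] = '\0';
            if (strcmp(line, host) == 0) { found = 1; break; }
        }
        fclose(fp);
        return found;
    }

    int main(void)
    {
        char host[256];
        FILE *fp = fopen("/etc/hosts.equiv", "r");
        if (fp == NULL) { perror("/etc/hosts.equiv"); return 1; }
        while (fgets(host, sizeof host, fp) != NULL) {
            host[strcspn(host, "\n")] = '\0';
            if (host[0] != '\0' && !listed(host, "controlled.hosts"))
                printf("trusted but not under our control: %s\n", host);
        }
        fclose(fp);
        return 0;
    }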

scott preece
gould/csd - urbana, uucp:   ihnp4!uiucdcs!ccvaxa!preece


F-16 Software

<ihnp4!utzoo!henry@ucbvax.Berkeley.EDU>
Mon, 22 Sep 86 18:07:11 PDT
Doug Wade notes:

>   My comment to this, is what if a 8G limit had been programmed into
> the plane (if it had been fly-by-wire)...

My first reaction to this was that military aircraft, at least front-line
combat types, obviously need a way to override such restrictions in crises,
but civilian aircraft shouldn't.  Then I remembered the case of the 727 that
rolled out of control into a dive a few years ago.  The crew finally managed
to reduce speed enough to regain control by dropping the landing gear.  The
plane was at transonic speed at the time -- there was some speculation, later
disproven, that it might actually have gone slightly supersonic -- and was
undoubtedly far above the official red-line maximum airspeed for the
landing gear.  It would seem that even airliners might need overrides.
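
To make the trade-off concrete, here is a toy sketch (the numbers are
illustrative, not any real aircraft's limits): a fly-by-wire limiter
that clamps the commanded load factor, with an explicit crew override
that raises, but does not remove, the ceiling.

    #include <stdio.h>

    /* Illustrative figures only; not any real flight control law. */
    #define NORMAL_LIMIT_G    8.0   /* routine structural limit */
    #define OVERRIDE_LIMIT_G 12.0   /* higher ceiling under crew override */

    static double limit_g(double commanded_g, int crew_override)
    {
        double ceiling = crew_override ? OVERRIDE_LIMIT_G : NORMAL_LIMIT_G;
        return commanded_g > ceiling ? ceiling : commanded_g;
    }

    int main(void)
    {
        printf("commanded 10.0g, no override: %.1fg\n", limit_g(10.0, 0));
        printf("commanded 10.0g, override on: %.1fg\n", limit_g(10.0, 1));
        return 0;
    }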

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


1,000,000 lines of correct code?

Stephen Schaefer <schaefer%research.bgsu.edu@CSNET-RELAY.ARPA>
Mon, 22 Sep 86 19:15:31 edt
  The Plain Dealer (Cleveland), Tuesday, September 16, 1986
  Excerpted without permission.

  "Protecting the secrets of success"

  Dayton(AP) - [Most of article dealing with foreign contractors
  omitted] [Col. Thomas D.] Fiorino also said a Sept. 5 experiment using
  two satellites that measured the plume of a rocket exhaust in space
  and then collided was a success.  Some critics, noting the experiment
  took 1 million lines of computer code, said a full SDI system would
  take tens or hundreds of millions.
    Fiorino said there was a computer on board that processed 2
  billion operations a second, about four times faster than current
  "supercomputers."
    "It did not represent our full technological potential," he
  said, pointing out that it did not use very high speed integrated
  circuits still under development.

On the one hand, I am incredulous, but on the other, I'd be utterly
horrified to find them directing misinformation to the small number of
people knowledgeable enough to understand.  I hope this ruggedized,
portable, Cray-class machine is commercially available in a couple of
years.  Failing that, I hope the reporter was simply "innumerate"
and heard "billion" for "million" somewhere.

I must repeat the quote of Mark Twain by the original poster:
"Interesting if true - and interesting anyway."
