The RISKS Digest
Volume 3 Issue 58

Wednesday, 17th September 1986

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Massive UNIX breakins
Dave Curry
Brian Reid
"Atlanta's been down all afternoon"
Alan Wexelblat
F-16 software
Herb Lin
Viking Project
Eugene Miya
Protection of personal information
David Chase
Autonomous Weapons
Ken Laws
Re: computers and petty fraud
Col. G. L. Sicherman
Info on RISKS (comp.risks)

Massive UNIX breakins

Dave Curry <davy@ee.ecn.purdue.edu>
Wed, 17 Sep 86 08:01:03 EST
Brian -

    I feel for you, I really do.  Breakins can be a real pain in the
neck, aside from being potentially hazardous to your systems.  And, we
too have had trouble convincing the authorities that anything serious
is going on.  (To their credit, they have learned a lot and are much
more responsive now than they were a few years ago.)

    I do have a couple of comments though.  Griping about the Berkeley
networking utilities is well and good, and yes, they do have their
problems.  However, I think they really had little to do with the
initial breakins on your system.  They merely compounded an already
existing breakin severalfold.

    Two specific parts of your letter I take exception to:

    One of the Stanford campus computers, used primarily as a mail
    gateway between Unix and IBM computers on campus, had a guest
    account with user id "guest" and password "guest". The intruder
    somehow got his hands on this account and guessed the
    password. 

    Um, to put it mildly, you were asking for it.  "guest" is probably
the second or third login name I'd guess if I were trying to break
in.  It ranks right up there with "user", "sys", "admin", and so on.
And making the password to "guest" be "guest" is like leaving the
front door wide open.  Berkeley networking had nothing to do with your
initial breakin; leaving an obvious account with an even more obvious
password on your system was the cause of that.

    There are a number of well-known security holes in early
    releases of Berkeley Unix, many of which are fixed in later
    releases.  Because this computer is used as a mail gateway,
    there was no particular incentive to keep it constantly up to
    date with the latest and greatest system release, so it was
    running an older version of the system.

    Once again, you asked for it.  If you don't plug the holes, someone
will come along and use them.  Again Berkeley networking had nothing to
do with your intruder getting root on your system; that was due purely
to neglect.  Granted, once you're a super-user, the Berkeley networking
scheme enables you to invade many, many accounts on many, many machines.
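
    In case the mechanism isn't familiar, here is a rough sketch of how one
root compromise fans out (the host and user names here are invented).  The
Berkeley r-commands (rlogin, rsh, rcp) consult /etc/hosts.equiv and per-user
.rhosts files; any host or host/user pair listed there is let in with no
password at all.  Suppose the machine "victim" has:

        /etc/hosts.equiv:
            gateway
        (ordinary users on "gateway" may log in here as their same-named
        accounts, with no password asked)

        ~jones/.rhosts:
            gateway root
        (root on "gateway" may log in here as "jones", with no password asked)

    An intruder who becomes root on "gateway" can then simply say
"rlogin victim -l jones", and from "victim" the same files usually extend
the trust to still more machines.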

    Don't get me wrong.  I'm not trying to criticize for the sake of
being nasty here, but rather I'm emphasizing the need for enforcing
other good security measures:

    1. Unless there's a particularly good reason to have one, take
       all "generic" guest accounts off your system.  Why let
       someone log in without identifying himself?

    2. NEVER put an obvious password on a "standard" account.
       This includes "guest" on the guest account, "system" on the
       root account, and so on.

       Enforcing this among the users is harder, but not
       impossible.  We have in the past checked all the accounts
       on our machines for stupid passwords, and informed everyone
       whose password we found that they should change it.  As a
       measure of how simple easy passwords make things, we
       "cracked" about 400 accounts out of 10,000 in one overnight
       run of the program, trying about 12 passwords per account.
       Think what we could have done with a sophisticated attack.
       (A rough sketch of such a scan appears after this list.)

    3. FIX SECURITY HOLES.  Even on "unused" machines.  It's amazing
       how many UNIX sites have holes wide open that were plugged
       years ago.  I even found a site still running the sendmail
       distributed with 4.2 a few months ago...

    4. Educate your police and other authorities about what's going
       on.  Invite them to come learn about the computer.  Give
       them an account and some documentation.  The first time we
       had a breakin over dialup (1982 or so), it took us three
       days to convince the police department that we needed the
       calls traced.  Now, they understand what's going on, and
       are much quicker to respond to any security violations we
       deem important enough to bring to their attention.  The
       Dean of Students office is now much more interested in
       handling cases of students breaking in to other students'
       accounts; several years ago their reaction was "so what?".
       This is due primarily to our people making an effort to
       educate them, although I'm sure the increased attention
       computer security has received in the media (the 414's, and
       so on) has had an effect too.
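
    As promised above, here is a minimal sketch of the kind of "stupid
password" scan mentioned in item 2.  It is an illustration only, not the
program we actually ran: the guess list is invented, and it assumes the
classic setup in which the crypt(3) hash of every password is readable
right in /etc/passwd.

        /*
         * Try a short list of obvious passwords (plus the login name
         * itself) against every account.  Link with -lcrypt on systems
         * that keep crypt() in a separate library.
         */
        #include <stdio.h>
        #include <string.h>
        #include <pwd.h>

        char *crypt(const char *key, const char *salt); /* <unistd.h>/<crypt.h> */

        int main(void)
        {
            static const char *guesses[] = {
                "guest", "password", "system", "admin", "user", "test",
                "manager", "service", "demo", "secret", "operator"
            };
            int nguesses = sizeof guesses / sizeof guesses[0];
            struct passwd *pw;

            while ((pw = getpwent()) != NULL) {
                const char *hash = pw->pw_passwd;
                int i;

                if (hash == NULL || hash[0] == '\0') {
                    printf("%s: no password at all\n", pw->pw_name);
                    continue;
                }
                for (i = -1; i < nguesses; i++) {
                    /* i == -1 means: try the login name as the password */
                    const char *guess = (i < 0) ? pw->pw_name : guesses[i];
                    char *enc = crypt(guess, hash); /* hash supplies its salt */

                    if (enc != NULL && strcmp(enc, hash) == 0) {
                        printf("%s: guessable password \"%s\"\n",
                               pw->pw_name, guess);
                        break;
                    }
                }
            }
            endpwent();
            return 0;
        }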

--Dave Curry
Purdue University
Engineering Computer Network


Massive UNIX breakins

Brian Reid <reid@decwrl.DEC.COM>
17 Sep 1986 0729-PDT (Wednesday)
The machine on which the initial breakin occurred was one that I didn't
even know existed, and over which no CS department person had any
control at all. The issue here is that a small leak on some
inconsequential machine in the dark corners of campus was allowed to
spread to other machines because of the networking code. Security is
quite good on CSD and EE machines, because they are run by folks who
understand security. But, as this episode showed, that wasn't quite good
enough.


"Atlanta's been down all afternoon" (!?)

Alan Wexelblat <wex@mcc.com>
Wed, 17 Sep 86 14:38:59 CDT
Last Friday, we attempted to phone (ATT) long distance to Atlanta.  After
two hours of busy signals we finally decided to try and reach the Atlanta
operator.  She said that Atlanta had been "down all afternoon."

Does anyone have any info about this?

Alan Wexelblat
ARPA: WEX@MCC.ARPA or WEX@MCC.COM
UUCP: {seismo, harvard, gatech, pyramid, &c.}!ut-sally!im4u!milano!wex


F-16 software

<LIN@XX.LCS.MIT.EDU>
Tue, 16 Sep 1986 17:43 EDT
I spoke to an F-16 flight instructor about this business concerning bomb
release when the plane is upside down.  He said the software OUGHT to
prevent such an occurrence.  When the plane is not at the right angle of
attack into the air stream, toss-bombing can result in the bomb being thrown
back into the airplane.  


Re: RISKS-3.57 Viking Project

Eugene Miya <eugene@AMES-NAS.ARPA>
16 Sep 1986 2213-PDT (Tuesday)
Sorry Dr. Benson, I wish to correct you on several points.  First off,
NASA is the CIVILIAN space agency.  NASA takes great pains to emphasize
this.  We are frequently accused of being puppets of the military and
we cannot deny that the DOD are customers and joint researchers, but
the DOD also causes us problems.  Many scientists in NASA (myself included)
work here to try and benefit ALL mankind.

The Viking Project, in particular, is not a military project, and the
scientists I know, such as Conway Snyder and others, would take great
offense at your implication.  (I think Sagan would be amused and offended,
too.)  I can tell you there were bugs in the program.  Not all was
perfect.  Note the mission had redundancy built into it.

What I can tell you about the physical systems is that spacecraft memories
at that period of time were very small and quite crude.  We are talking
hundreds of words of storage, not K.  We are not talking sophisticated
programming either (more like hard-coded routines).  We are not talking
FORTRAN except for the trajectory and orbit determination programs (still
in use with 400K to 1M lines of code: Univac FORTRAN V and now VAX VMS
FORTRAN).  This code may be purchased from COSMIC (I think something like
$2K; I can look this up).  Regarding other project documentation about
the nature of the Viking computers and their software, this is all in the
public domain in the form of NASA TRs.  (Don't ask for all of it; we are
talking TONS of documentation, so you want to ask for specifics, and I
might be able to help a little [emphasis] by giving you contacts to phone
at JPL.)

(Un)happily? no Martians shot at the Landers.  I don't know how we would
have fared.  The system had no AI; it really was not a concurrent system.
It had strictly local real-time processing, but not by choice (one-way
signal time to Mars is 7 minutes).

Valhalla: that place where Viking Project members go to retire.

--eugene miya
  ex-Voyager Program member
  ex-JPL/CIT
  NASA Ames Research Center


Protection of personal information

David Chase <rbbb@rice.edu>
Tue, 16 Sep 86 23:37:47 CDT
A friend of mine attending a large state university is preparing to
interview for jobs.  At this university the powers that bureaucratically
be "require" that you fill out a form that among other things has your
social security number and a statement that (if signed) authorizes release
of transcript to people who might wish to employ you.  Other things on
this form include percentage of college expenses earned, and similar rot
that one might wish to keep private.  No form, no on-campus interviews.

Just to make things interesting, they wish to place this info in an
"experimental" database.

When faced with something like this, what does one person (out of 48000
students, most of them cooperating like sheep) do to get any assurance
that private information is not released to undesirables (where the set of
"undesirables" is defined by the one person, NOT the university)?  This
same problem pops up with utilities in this state also, and the bargaining
position is even worse than the student's ("I'm sorry sir, but we can't
turn on your power until I complete this form, and I can't complete it
without your social security number").

Does anyone have any experience with this sort of thing?  I read a little
blurb while waiting to get my driver's license that told all about how one
should most definitely keep one's social security number in confidence, so
handing out (without permission) even those 9 digits to an alleged
prospective employer is out of line.  Never mind that those same 9 digits
are your "student number" at this school.

(Perhaps this belongs on Human Nets, but I feel this is a risk--if nothing
else, it raises my blood pressure to dangerous levels to hand out private
information to pig-headed idiots.  I'd also rather prevent some of this
now than be the subject of an amusing/shocking anecdote later.)

David


Autonomous Weapons

Ken Laws <Laws@SRI-STRIPE.ARPA>
Wed 17 Sep 86 07:10:43-PDT
Eugene Miya asks whether autonomous weapons can be considered moral.  Brief
thoughts (since Risks may not be the right forum):

Dumb weapons or those guided incompetently are no better -- was the
accidental bombing of the French Embassy in Libya moral?

Autonomous vehicles (or, for that matter, bombs) are not smart enough
to perform trivial civilian duties in cooperative environments (e.g.,
driving to the grocery store or picking weeds in a corn field).
Someday they may be, in which case questions about their intelligence
and morality may be worth debating.  For now, the assumption is that
they are only to be used in situations where anything that moves is
a legitimate target and where taking out the wrong "target" is better
than taking out no target.  This is rather similar to the situation
facing nukes, and the moral choices in initiating use are the same.
The advantages of autonomous weapons over nukes should be obvious,
although there will always be philosophers and humanists who mourn an
isolated wrongful death as much as the destruction of a city.

                    -- Ken Laws


Re: computers and petty fraud

"Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Wed, 17 Sep 86 15:33:21 EDT
In RISKS-3.54 Mark S. Day inquires why computerization encourages people
to defraud shop clerks.

>                            ... Thus, people will talk about switching
> UPC price tags who would view switching non-computerized price tags as
> fraud.

This is partly because it's less easily detected.  Replacing price tags
with bar codes means that the clerk has little or no opportunity to
consider whether the price is reasonable.  The effect resembles what
happened when hand calculators replaced slide rules.  By eliminating
the element of clerical surveillance, the manager increases efficiency
at the cost of security.  It's a typical trade-off.

As for the customers ... perhaps the general run of people were never
very ethical to begin with?

>         Similarly, people will read mail and data files stored on a
> timesharing system, even though it's unacceptable to rifle through
> people's desks.

There are two active changes here.  First, a time-sharing system is
perceived as a shared facility (even if it runs VM! :-), a commune
rather than an apartment house, so to speak.  This has been reinforced
by the development of message systems.  Second, the phenomenal progress
in communication in recent years has undermined public support for
privacy.  The subject of privacy has been vexing and misleading pundits
lately; the best treatment of it is to be found in _The Gutenberg
Galaxy_ by H. M. McLuhan.  (It's nothing like the typical liberal or
illiberal arguments one normally reads.)

A third factor, and I think a significant one, is the re-alignment of
popular loyalty.  Large societies are products of the age of print.
In particular, print provides the inspiration for uniform, stable
laws, language, and conventions; it also creates the necessary illusion
of commonality by virtue of the physical uniformity of print and the
impersonality of publishing.  (One could add that large states and
countries are perceptible chiefly by virtue of printed maps.)
In an age of fast, easy communication, artifacts like countries
grow to appear unreal and arbitrary.  People are coming to prefer
to deal directly with one another, and personal loyalties are
outweighing loyalties to abstractions like country and society.  I do
not believe that this is a bad thing; it increases strife, but
reduces international war.
