The RISKS Digest
Volume 7 Issue 74

Thursday, 10th November 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Air traffic control and safety margins
Steve Philipson
UK vehicle-identification systems
Chaz Heritage
Re: The Computer Jam — How it came about
Mark W. Eichin
The worm and the debug option
Steven Bellovin
Risks of unchecked input in C programs
Geoff Collyer
Worms/viruses/moles/etc. and the risks
Scott E. Preece
Nonsecure passwords/computer ethics
Christine Piatko
PGN
Phone-answerer/ voicemail security & voice-encryption
David A. Honig
University computing
James A. Schweitzer
Info on RISKS (comp.risks)

Air traffic control and safety margins

Steve Philipson <steve@aurora.arc.nasa.gov>
Wed, 9 Nov 88 12:19:13 PST
In RISKS-7.72 ("Computer science unencumbered by fears about cutting safety
margins"), Jeffrey Mogul submitted some quotes from "Airport" magazine:

>    Aviation Scientists in Britain, the US, France and West Germany are
>    now working on a data-exchange system which would reduce or even
>    eliminate the human element in air traffic control and in airport
>    approach, landing and take-off-slot technique.

   I'm not sure what "approach, landing and take-off-slot technique" means
(this is not an expression in use in the field), but I can tell you that many
of us in the field do not see terminal area control being handled by computer
control in the foreseeable future.  The "human element" provides levels of
redundancy, error checking, and distributed reasonableness checks that would
likely be lost in a computer controlled system.

>    Machine-talking-to-machine would enable the system to improve
>    perhaps five-fold, because the precise nature of computer science
>    is unencumbered by fears about cutting safety margins too finely.  A
>    cold dish of comfort, perhaps; one which will not be available until
>    well after 2005.  And anyway, nobody knows yet how much such a system
>    will cost.  But we all know who's going to pay for it, don't we?

   One of the most significant losses in such a system is the loss of the
"party line".  Each flight crew hears the ATC instructions issued to other
aircraft on the frequency (and in their general area), and can recognize
instructions that put two aircraft in conflict.  We may never have a system
where an aircraft is cleared for takeoff by computer, where other aircraft
would not be aware of the issuance of that clearance.  A ground computer to
single aircraft computer system would DECREASE safety, as flight crews would
lose the knowledge of the "big picture", i.e., what other aircraft are being
told to do, and what the general flow of traffic is.  (It is of tremendous
import to know that another aircraft has been cleared for takeoff while you are
taxiing on the same runway.)  There are also substantial questions on the
viability of text-based (as opposed to human voice based) communications with
the attendant transfer from auditory to visual task loads and its impact on
visual traffic scan duties.

   The author of the article seems to believe that computer systems are
inherently more reliable and efficient than human systems, so we can solve our
congestion and safety problems in one fell swoop just by using computer
systems.  RISKS readers recognize the fallacy of that position, as do aviation
safety researchers.  On the other hand, he also seems to believe that the
computer system will be terribly expensive, and that we'll be paying more even
though we see a "five-fold" improvement in capacity.  Sounds like a technically
naive writer to me.

                        Steve Philipson


UK vehicle-identification systems

<"chaz_heritage.WGC1RX"@Xerox.COM>
9 Nov 88 11:21:15 PST (Wednesday)
In his Fri, 4 Nov 88 01:43:49 PST RISKS-7.72 contribution Dave Robinson writes:

> Last night, on BBC's TOP GEAR programme, a device deliberately designed
> to locate cars was described.  Essentially, it is a navigational aid
> designed to take into account traffic congestion.

What Mr. Robinson did not mention (though it has been given scant treatment
by the UK media) was the scheme for 'privatising' roads. This is part of
the general policy of the UK Government that as little as possible of the
national infrastructure should be publicly owned.

Though not necessarily all roads would immediately be sold off to
speculators, the majority of tunnels, bridges and newly-constructed roads
would, were the necessary legislation to be passed, become, or remain,
privately owned.

The Government feel that the present method of collecting tolls is
antiquated and causes congestion at vital points such as the Dartford
Tunnel. They therefore have conceived an automatic toll-collection method,
based, they say, on Japanese practice.

Every vehicle in the country would have to be fitted with what is described
as an 'electronic number-plate'. Descriptions of this equipment are few,
vague and couched in the usual patronising
'you-couldn't-possibly-understand-this' terms. However, its principle
appears to be that of IFF (Identification: Friend or Foe?).

When the IFF-equipped vehicle is driven through a toll point, its IFF is
interrogated by devices installed in the road surface. It then transmits,
by some means, the vehicle's registration number to the interrogation
devices. These communicate directly with the road owner's computer system.
Clearly this computer system must either be connected to, or share a common
database with, the Driver and Vehicle Licensing Centre at Swansea, which
holds all records of registered vehicles. This would allow the road owner
to bill drivers automatically. The Government claim (as they are wont to do
in such cases) that this is 'what the majority of people want'. There has,
of course, been no suggestion that the interrogation devices might also be
connected to the Police National Computer, since such a suggestion would be
either what the Government call 'irresponsible journalism' (if it were not
demonstrably true) or a breach of the Official Secrets Acts (if it were).
However, were I a senior Police Officer, I would find it difficult to
refuse such an opportunity for what is fashionably described as
'pre-emptive policing'.

It would, of course, have to be made a crime to drive without an IFF
device, or with a faulty one (how one is supposed to establish that one's
IFF is working correctly - when its principle of operation is apparently a
secret -  is not clear).   

It is curious that the Government, ostensibly anxious to allow maximum
commercial freedom, should on the one hand declare their intention of
selling off the roads to private investors, and on the other hand prepare
to prescribe for those investors a national system of automatic vehicle
identification. Were I the owner of a road or bridge I should resent being
told by the Government that I had to use a particular,
Government-prescribed toll system (tollbooths still work, and they're
cheap!). One could almost draw the conclusion that something other than
commercial efficiency had prompted the Government's decisions in this
respect.

However, there cannot possibly be RISKS in this system, since, on the few
occasions when it is publicly mentioned, there is always a qualifying assurance
to the effect that 'the innocent have nothing to fear'. Why such assurances
should have to be made in connection with an automatic toll system - totally
unconnected, of course, with the security forces - is not clear.
                                                                      Chaz


Re: NYT/Markoff: The Computer Jam — How it came about

Mark W. Eichin <eichin@ATHENA.MIT.EDU>
Wed, 9 Nov 88 19:58:41 EST
The following paragraph from Markoff's article comes from a telephone
conversation he had with me at the airport leaving the Nov. 8 "virus
conference": 

>   But Morris reasoned that another expert could defeat his program by sending
>the correct answering signal back to the rogue. To parry this, Morris
>programmed his invader so that once every 10 times it sent the query signal it
>would copy itself into the new machine regardless of the answer.
>   The choice of 1 in 10 proved disastrous because it was far too frequent. It
>should have been one in 1,000 or even one in 10,000 for the invader to escape
>detection.

    However, that is incorrect (I thought Markoff had grasped my comments,
but perhaps not).  The design of the virus appears to have been to reinfect
an already-infected machine with a 1 in 15 chance.
    The test in the code was BACKWARD, so it reinfected with a *14* in 15
chance.  Changing the denominator would have had no effect.
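
    To make the arithmetic concrete, here is a small simulation of how
reversing the test flips the odds.  This is my own illustration; the names
and structure are not taken from the worm's source.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        long trials = 1000000, intended = 0, backward = 0, i;

        for (i = 0; i < trials; i++) {
            /* intended design: back off 14 times out of 15, reinfect 1 in 15 */
            if (random() % 15 == 0)
                intended++;
            /* backward test: back off only 1 time in 15, reinfect 14 in 15;
               changing 15 to 1000 here would still reinfect almost always */
            if (random() % 15 != 0)
                backward++;
        }
        printf("intended reinfection rate: %.3f\n", (double)intended / trials);
        printf("backward reinfection rate: %.3f\n", (double)backward / trials);
        return 0;
    }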

Mark Eichin <eichin@athena.mit.edu> SIPB Member & Project Athena "Watchmaker"


The worm and the debug option

smb@research.att.com <Steven Bellovin> <hector!smb>
Wed, 9 Nov 88 23:01:29 EST
Sorry — in both Berkeley's and Sun's standard distribution, debugging comes
enabled.  That's perhaps defensible from Berkeley; they're distributing a
research system, to customers prone to tinker, and sendmail is certainly
complex enough to need lots of debugging.  Nor can I necessarily criticize
it from Sun; it's often useful to be able to trace such a program.  The flaw
is not that debug mode was possible; rather, that sendmail's debug mode (a)
was accessible remotely; and (b) expanded the range of inputs accepted by
the program, rather than just providing extra trace data.  What's even more
amazing is the statement Eric Allman (the author of sendmail) was quoted in
the N.Y. Times as making:  that he added that code to get around restrictive
management policies.  That is, it was a deliberate back door, albeit one
with a nominally-limited intended scope.  10-Nov-88
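
As an illustration of that distinction, here is a made-up sketch (not
sendmail's code) contrasting a debug flag that merely adds trace output with
one that widens the set of inputs a network daemon will accept:

    #include <stdio.h>
    #include <string.h>

    static int debug = 0;     /* imagine a remote client can switch this on */

    static void handle_command(const char *line)
    {
        if (debug)
            fprintf(stderr, "trace: got %s\n", line);  /* harmless: extra trace data */

        if (strncmp(line, "HELO", 4) == 0) {
            puts("250 hello");
        } else if (debug && strncmp(line, "RUN ", 4) == 0) {
            /* dangerous: debug mode accepts a command that normal mode rejects */
            puts("250 would act on arbitrary remote input here");
        } else {
            puts("500 unrecognized command");
        }
    }

    int main(void)
    {
        debug = 1;                       /* simulate a remote client enabling debug */
        handle_command("HELO example");  /* accepted in any mode */
        handle_command("RUN anything");  /* accepted only because debug widened the grammar */
        return 0;
    }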


Risks of unchecked input in C programs

<geoff@utstat.UUCP>
Thu 10 Nov EST 1988 03:13:37
A security bug in the 4.2BSD Unix finger daemon, which permitted its
invoker to obtain a shell with super-user privileges, was exposed during
the recent Internet worm discussion.  The bug was caused by use of the C
standard I/O routine "gets" which is a bug waiting to happen and which
should be stamped out.  (I have deleted gets from my standard I/O
implementation, and the folks at Bell Labs Research have deleted gets
from their C library.)  The bug was that the finger daemon used gets to
read a line of input from its network connection, and gets is unable to
check that the input line fits within the buffer handed to it, so a
suitably-constructed line of input to the finger daemon steps on other
variables, confusing the finger daemon.

gets, as part of standard I/O, is a decade-old hack retained for
compatibility with the Sixth Edition UNIX Portable I/O Library,
which was utterly replaced by standard I/O no later than 1979.  gets
takes one parameter, the input buffer into which a line of input from
the standard input stream is to be stored, and deletes any trailing
newline from the buffer.  Standard I/O contains an alternative to gets,
called fgets, which takes three parameters: an input buffer, its size in
bytes, and the stream to be read.  fgets does not strip trailing
newlines.  Converting programs from using gets to fgets is largely
mechanical, and stripping trailing newlines is trivial to code
yourself.  gets is inherently unsafe due to its inability to check for
overrun of the buffer provided to it.  There is no reason to use gets,
and there are good reasons to avoid gets.
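
As a quick illustration of the conversion described above (the buffer name
and size are made up; this is not the finger daemon's code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[128];

        /* Unsafe: gets() cannot know how big 'line' is, so a longer input
         * line overruns the buffer and tramples adjacent memory:
         *
         *     gets(line);
         */

        /* Safe replacement: fgets() is told the buffer size and stops there. */
        if (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';  /* strip the newline gets() would have removed */
            printf("read: %s\n", line);
        }
        return 0;
    }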

Geoff Collyer   utzoo!utstat!geoff, geoff@utstat.toronto.edu


Worms/viruses/moles/etc. and the risks

Scott E. Preece <preece@xenurus.gould.com>
Thu, 10 Nov 88 09:38:49 CST
  From: "Clifford Johnson" <GA.CJJ@Forsythe.Stanford.EDU>
> As for the trends, despite first appearances, the "Star Wars" system
> would greatly add to U.S. vulnerability rather than to security by
> resting U.S.  strategic execution (as well as warning) upon a huge
> network of systems, much harder to secure than the present execution
> system.  The warning system also becomes much more complex.  The funded
> National Test Bed is in essence the
> development of such vulnerable networks for strategic warning and execution.

It's also interesting to note that many of the people defending the security of
the "really" secure systems pointed to their reliance on physical security --
the lack of network or remote access.  SDI, on the other hand, is going to
depend on space-based components which CANNOT be isolated from remote access.

scott preece, motorola urbana design center
uucp:   uunet!uiucuxc!mcdurb!preece


Nonsecure passwords/computer ethics

Christine Piatko <piatko@svax.cs.cornell.edu>
Wed, 9 Nov 88 15:56:43 EST
I would like to point out that the users themselves can make their passwords
more secure by not using `obvious' (i.e. English word, easily available in the
dictionary) passwords.  At the moment it is too easy to encrypt dictionary
entries and compare them to password files.  People are told this all the time,
but there are many people who use words that can be found in the dictionary.
I'm sure the situation is similar at other sites (even for root passwords).
People pick WORDS because they are easy to remember.  A better technique, to
come up with a safer password, is to pick a phrase and use the initial letters
and numbers:

  'A stitch in time saves nine' for the password asits9.

Perhaps a program should be run every so often to check if people have obvious
passwords and remind them to change them.  If the message is ignored the user
could be inconvenienced by having the administrator change the password for him.
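
A minimal sketch of such a checker, assuming the classic Unix crypt(3)
scheme in which the first two characters of the stored hash are the salt
(an illustration only, not a production tool):

    #include <stdio.h>
    #include <string.h>

    extern char *crypt(const char *key, const char *salt);  /* often needs -lcrypt */

    int main(int argc, char **argv)
    {
        char word[256], salt[3];

        if (argc != 2) {
            fprintf(stderr, "usage: %s encrypted-password < dictionary\n", argv[0]);
            return 2;
        }
        salt[0] = argv[1][0];
        salt[1] = argv[1][1];
        salt[2] = '\0';

        /* read candidate words, one per line, and compare their encryptions */
        while (fgets(word, sizeof word, stdin) != NULL) {
            word[strcspn(word, "\n")] = '\0';
            if (strcmp(crypt(word, salt), argv[1]) == 0) {
                printf("weak: the password is the dictionary word \"%s\"\n", word);
                return 0;
            }
        }
        puts("no dictionary word matched");
        return 1;
    }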

Of course this does not address other issues, like the 'bug' in sendmail (which
seemed more like a door that someone left open for himself) or other issues of
system security.  But this is one measure that users can take to protect
themselves a bit.

In defense of the alleged culprit R. Morris, I would like to say that I know of
people at several universities who have had similar escapades, although on a
smaller scale.  In this case I agree that the 'prank' got out of hand, but
there are many such pranks going on all the time at any system site. For some
reason these kinds of holes are fascinating to some pretty intelligent people.
I think their fascination should be put to good use tracking down such holes.
I don't hold out much hope for completely secure systems (I don't believe there
are break-in proof safes or unsinkable ships either).  However this should
emphasize the fact that we are a community that has to work together, and
sometimes that means learning some very hard lessons together.  If just one
site had been affected, would the sendmail bug have been fixed nationwide?
Evidently not, since from what I've seen on the net this bug was known about
for at least 2 years.

As a community we have a lot to learn about how to work together.  It is
interesting to see so many different perspectives on how 'secure' computers and
networks should be.  I have been amused by people saying that we should require
CS students to take an ethics course.  Is it really so clear in the entire
community what is and isn't ethical behavior?  Obviously not, since some people
think this 'worm' incident was merely stupid, while others think it was
unethical.  We in the computer science community need to figure out a code of
ethics dealing with breaking into systems, just as we as a society are still
figuring out how to deal with people who break into our homes.

Christine Piatko 
usual disclaimers here, and no, I didn't know rtm very well.


Re: Nonsecure passwords/computer ethics

Peter G. Neumann <Neumann@KL.SRI.COM>
Wed, 9 Nov 88 13:09:18 PST
But don't forget that passwords traversing Ethernets and Arpanets are
vulnerable even if they are difficult to guess.  The net communications are
unencrypted and capturable.  Many years ago someone wrote a simple
ID-and-password capture program on the Ethernet.  It still works.  In UNIX,
the /dev/mem vulnerability (a "feature" to some) can be used to capture
passwords in unencrypted form.  Even the Gould UTX/32S C2 version of Unix
still has that vulnerability.  The bottom line is this: beware of relying on
passwords.  By the way, for Unix folks, the AT&T and Sun announcements of
vastly improved security (including multilevel security) should be of
considerable interest.  But they still don't solve all the problems.

     [Ironically, perhaps, it is the classical paper by Bob Morris (Sr.)
     and Ken Thompson, "UNIX Password Security: A Case History", Comm.
     ACM 22, 11 (November 1979), pp. 594-597, that really started the
     increased awareness about password vulnerabilities!]


Phone-answerer/ voicemail security & voice-encryption

"David A. Honig" <honig@bonnie.ICS.UCI.EDU>
Wed, 09 Nov 88 13:42:33 -0800
Unauthorized phone-answering-machine playback and unauthorized
centralized-voicemail message playback could be made more difficult by
encrypting the stored messages.  This could be done at the same time as data
compression preprocessing on digital systems.  (There are analog "encryption"
methods but these days everything's cheaper digitally...)

Of course, the original message could be bugged when recorded, and for
a central-voicemail system the encryption key would have to be sent
over the (insecure) phone lines, so this is not a total solution.  But
it makes it harder for nosy voicemail sysops, including those with
warrants, to playback stored messages.  And it makes unauthorized
home-answering machine playback useless.

Encrypted voicemail and a more secure home answering machine most likely *are*
good selling points, so I will not be too surprised when they become
commercial.  Some of the (e.g., black) market desires these features now, and
when it becomes cheap, everyone will expect it.


University computing (Re: RISKS-7.71)

<"James_A._Schweitzer.STHQ"@Xerox.COM>
Thu, 10 Nov 88 10:23:19 PST
Peter, re: your comment that "But to assume that university computing should be
relatively wide open would be a serious mistake.  Unethical and other abuses
are not uncommon." (Sun, 6 Nov 88 22:01:17 PST).

At a professional meeting last week, we had a presentation by a university data
center manager on a Trojan Horse attack which had shut down his operation. The
last part of his talk was titled "Lessons Learned".  I was dumbfounded that
these "lessons" included only technical conclusions concerning security
controls. There was no thought of teaching the student users about computer
ethics and proper behavior once you are granted computer use privileges.

I told him I thought it was similar to teaching a fifteen-year-old to drive a
car while neglecting to say anything about rules of the road, traffic signals,
and so forth.

Until the universities start telling people about proper behavior, they (and I
guess we) deserve what we get.

Jim Schweitzer
