The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 5 Issue 57

Thursday, 12 November 1987

Contents

o Mobile Radio Interference With Vehicles
Steve Conklin
Bill Gunshannon
o Optimizing for cost savings, not safety
John McLeod
o "Welcome To My World", BBC1 Sundays 11PM -- A Review
Martin Smith
o Re: A simple application of Murphy's Law (Tape Labels)
Henry Spencer
o Overwrite of Tape Data
Ron Heiby
o Misplaced trust
B Snow
o Bar Codes
Elizabeth D. Zwicky
o Password truncation and human interfaces
Theodore Ts'o
o Re: UNIX setuid nasty
Geoff
David Phillip Oster
o How much physical security?
Martin Ewing
Alex Colvin
Mike Alexander
o Info on RISKS (comp.risks)

Mobile Radio Interference With Vehicles

Steve Conklin <b14!steve@uunet.uu.net>
11 Nov 87 08:47:55 CST (Wed)
  My father installed his two meter rig on his new G.M. car three years ago,
and every time he would key it to transmit, the engine would die. The
engine control computer was rendered useless by the radio frequency field.
The reason that the car manufacturers do nothing to fix this is that they
have no incentive to do so. The only laws applicable to this situation
are the ones concerning how much rf energy gets OUT of the computer in the
car. (A related note - this is why when the first P.C.s came out, if you
called Big Blue and said that when you turned on your TV/dryer/etc you got
garbage characters on the screen, they wouldn't help you, but if you said
that every time you turned the system on your neighbor's TV went crazy, they
would replace your keyboard, motherboard, and chassis.)

  My father never found a solution to the problem. Maybe as cellular phones
gain in popularity, the auto manufacturers will have an incentive to take
steps to prevent rf from affecting their computer systems. This will also
become a very important issue when functions other than engine control are
taken over by the computer.

  One final issue is that of outside intentional interference. After this
happened, my father and I speculated that cars with computer engine control
could be disabled from another vehicle by the application of the appropriate
frequency of rf energy with a directional antenna. This could be used by
law enforcement agencies, or by terrorists wishing to kidnap someone, etc.

Steve Conklin, Intergraph Corp., Huntsville, AL 35807, (205) 772-6888
{uunet,ihnp4}!ingr!tesla!steve


Mobile Radio Interference With Vehicles

Bill Gunshannon <bill@trotter.usma.edu>
12 Nov 87 15:14:48 GMT
As a curious aside to Leo Schwab's article:

   A letter was published in an amateur radio oriented magazine called QST
a few years back by a ham who tried to install a UHF mobile radio in his
newly purchased Japanese import.  He too had problems with interference to
the electronic ignition in the car.  A call to the US Service Representative
for the car's manufacturer resulted in a very simple solution to the problem.
They told him "don't install the radio in the car".

  A novel approach to preventing interference.
                                                     bill gunshannon


Optimizing for cost savings, not safety (Re: RISKS-5.56)

John McLeod <jm7@pyr.gatech.edu>
Wed, 11 Nov 87 02:12:56 EST
A brief comment about the cost of items in automobiles.  Anything that costs
a nickel more per car, and does not affect performance under normal conditions
is very likely not to get done, as this will save $50,000 per million cars.  
The shielding of electronic ignitions and engine performance computers will
cost more than a nickel per car, and does not affect performance most of the 
time.

John McLeod VII, Georgia Institute of Technology, Atlanta Georgia, 30332
uucp: ...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!jm7


"Welcome To My World", BBC1 Sundays 11PM -- A Short and General Review

<mcvax!minster.york.ac.uk!MartinSm@uunet.UU.NET>
11 Nov 1987 19:43:33 GMT
The BBC is currently showing a series called "Welcome To My World" which deals
with the future of information technology. It covers areas which seem to be
relevant to readers of this newsgroup, so here is some information on it.

It portrays a world, not too far in the future though the exact date is not
given, where development of technology has taken place without proper
thought and control. Civil liberties are virtually nonexistent. A camera on
every street corner watches for crime, dissent or deviation from the "norm".
Computers direct almost every aspect of industry, commerce and war. Books
are curious collectors' items, though knowledge is more widely available - if
you can pay for it.

The programme is introduced as a fictional documentary: interviews with real
and imaginary people are intercut with news footage of fictional events,
and it presents a very pessimistic view of what life may be like. I have yet
to decide whether this is because they sincerely believe it or because it
makes more interesting TV. The presenter, Robert Powell, likes to say that his
world doesn't have to be ours, if we make the right choices.

Extra topicality was gained a fortnight ago when one of the programmes, made
months earlier, considered the possibility of a worldwide stock market crash
caused by the global computer networks doing the dealing. Impressive pictures
of rooms full of cabinets were shown with the implication that there is
something intrinsically frightening in having computers handle money. What
frightens me is the way people handle it.

Another programme dealt with databases and secrecy. In this one a fictional
organisation called FREDI (freedom of digital information) hacked a top
secret database and released the contents to a public network. Extra spice
was added by official denials that the database existed. An interesting
scene showed the presenter being stopped on the street and made to
state his ID number which was then checked on a terminal.

Last week a somewhat more unusual topic was chosen and interesting questions
were raised. Would artificial intelligence make artists redundant? If a 
computer produces a work of art who owns it? Should a film director be allowed
to electronically recreate actors to get certain scenes how he wants?

In conclusion then I do not think that much in this series would be new
to readers of this newsgroup but it is being shown on BBC1 with
a potential audience of millions. It does go out at 11PM on Sundays but I
can't be the only viewer! Even though I do not agree with the viewpoint of
this programme I regard it as one of the more thought provoking things
to hit the small screen recently.
                                        Martin Smith

Langwith College, University Of York, Heslington, York YO1 5DD England.


Re: A simple application of Murphy's Law (Tape Labels)

<decvax!utzoo!henry@ucbvax.Berkeley.EDU>
Tue, 10 Nov 87 00:26:42 est
> ...When
> you attempt to overwrite a labeled tape on our system an operator message
> appears asking if you really want to write to the tape. The operator must
> have answered yes to this question...
> This is of course an example of the seeing, hearing, reading what you expect
> to see, hear or read rather than what is actually there. There appears to be
> nothing that you can do to prevent this kind of error...

Actually, no, there are things that can be done to prevent this kind of
error.  I don't think you have diagnosed it quite correctly.  I strongly
suspect that the operator saw the question and understood it, but that
he/she sees that question a dozen times a day, and the normal answer has
become a reflex.  *That* behavior is fully predictable and a conscientious
interface designer will avoid such situations.  "Do you really want to do
this?" is a question that should never be asked unless there is truly a
good chance that the answer will be "no".
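One way to defuse that reflex is to make the confirmation itself carry
information: instead of a yes/no prompt, require the operator to type back
the label of the tape about to be destroyed. A minimal sketch in Python
(the function name and labels are hypothetical, not from any actual system):

```python
def confirm_overwrite(tape_label, response):
    """Accept the overwrite only if the operator types the tape's own
    label back.  A reflexive "yes" can no longer succeed; answering at
    all forces the operator to read which tape is at stake."""
    return response.strip() == tape_label

# A reflex answer is rejected:
assert not confirm_overwrite("PAYROLL.W45", "yes")
# Only a deliberate, label-specific answer goes through:
assert confirm_overwrite("PAYROLL.W45", "PAYROLL.W45")
```

The deliberate extra typing is the point: the cost of confirming scales with
the seriousness of the act, instead of being one keystroke for everything.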

Note also that the question was directed to the wrong person:  the operator,
who probably doesn't know enough about the work to judge whether the request
is a reasonable one.  Since the system insists on asking him/her questions
that would require considerable investigation to answer intelligently, the
questions will quite predictably be answered unintelligently.  It is not
unreasonable to request manual intervention when major data destruction
is requested, but it *is* unreasonable to place the decision in the hands
of someone who gets paid for throughput, not thought.

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Overwrite of Tape Data

Ron Heiby <gatech!mcdchg!heiby@RUTGERS.EDU>
11 Nov 87 16:23:44 GMT
About ten years ago, I was doing some work where I had quite a few reels of
tape (and very little disk space by today's standards).  Also, I was working
in an environment where I couldn't trust the operators to *not* insert
write-enable rings in my tapes.  I also couldn't trust them not to mount my
tapes in response to a tape request from another user.  The information on
the tapes contained my master databases and selected subsets which were
monetarily expensive to re-derive from the masters, as I had to pay for my
resource usage.

After being burned once, and losing a tape full of subsets, I ran across a
tape accessory in a computer supply catalog called "write protect rings".
These were thin rings of red plastic that were to be inserted into the write
enable ring slot.  The idea was that they would interfere with the ability
to insert the write enable ring into the tape, yet would not activate the
switch in the tape drive, themselves.  These worked quite well for me and I
had no further incidents.  I took a peek at the November Inmac (major
computer accessory distributor) catalog and did not find these rings.  Now,
I can't recall from whom I purchased them.

These "write protect" rings still wouldn't stop an operator who was
determined to put a write ring in, as they were removable (with a
screwdriver or an overpriced "removal tool" sold by the same company).
However, the operator would have to go to some fairly extreme lengths.
That, coupled with a label threatening the loss of certain body parts if a
write ring were inserted in the tape would probably deter just about
anybody.  A similar approach could be used with the newer tape cartridges.
I'm currently using 3M DC600A cartridges, and they have a rotatable write
protect "notch".  A sticky red label could be placed over the turning slot
to help provide cues that it would be a big mistake to write on the tape.

Ron Heiby, heiby@mcdchg.UUCP    Moderator: comp.newprod & comp.unix


Misplaced trust [Banned AIDS?]

<BSnow@DOCKMASTER.ARPA>
Wed, 11 Nov 87 12:54 EST
An entertaining quote from the Washington Post of November 10, 1987.  It
is from a front page story on Idaho's drive to stop AIDS.

  "Doctors, hospitals, and laboratories would be legally required to
  report the name and address of anyone who tests positive, information
  that would be kept in a locked file and on COMPUTER." (emphasis added)


Bar Codes

Elizabeth D. Zwicky <zwicky@ptero.cis.ohio-state.edu>
10 Nov 87 04:51:57 GMT
>Bruce N. Baker <bnbaker@kl.sri.com>
>  The bar codes are identical ...

When you were comparing bar codes, did you actually compare bars, or only
the numbers across the bottom? UPC *does* encode more numbers than the ones
shown on the bottom; usually two digits, used for check digits.  This is
because in UPC there are four ways to encode a digit, left or right, and odd
or even; the left and right ones are used to tell you whether you read the
barcode forwards or backwards, but the odd/even distinction gives a
meta-code. That is, every time you read a character you have three facts
about it: 1) what number it was, 2) whether it was right or left, 3) whether
it was odd or even. The pattern of odds and evens can encode a digit. I
suppose that if you knew in advance what orientation the barcode would come
by in you could probably use the pattern of rights and lefts to encode
another digit.

To the best of my knowledge, this feature is not used by actual UPC (that
is, in the Universal Product Code standard), but is used in EAN, the European
standard which uses the same bar codes.  If they use the check digits for
something else, then only they will be able to figure out what they are;
anything that reads UPC will reject it, since UPC specifies what the
odd/evens must be, anything that reads EAN will reject it because the check
digits are wrong, and programs that read both will read everything but the
numbers of interest, because they ignore odd vs. even.
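Concretely, this is how EAN-13 squeezes a thirteenth digit out of a
twelve-digit symbol: the odd/even pattern of the six left-hand digits is
looked up in a fixed table, with the all-odd pattern meaning a plain UPC
(an implied leading 0). A sketch in Python; the table is my reading of the
published EAN-13 parity assignments, so check the standard before relying
on it:

```python
# EAN-13 parity table: the pattern of odd (O) / even (E) encodings
# of the six left-hand digits carries an extra digit "for free".
PARITY_TO_DIGIT = {
    "OOOOOO": 0, "OOEOEE": 1, "OOEEOE": 2, "OOEEEO": 3,
    "OEOOEE": 4, "OEEOOE": 5, "OEEEOO": 6, "OEOEOE": 7,
    "OEOEEO": 8, "OEEOEO": 9,
}

def implied_digit(parities):
    """Recover the leading EAN-13 digit from the odd/even pattern of
    the six left-hand characters.  A plain UPC-A symbol is the all-odd
    pattern, i.e. an implied leading 0."""
    return PARITY_TO_DIGIT[parities]

assert implied_digit("OOOOOO") == 0   # a UPC-A symbol read as EAN-13
assert implied_digit("OEEOEO") == 9
```

A reader that ignores parity, as Zwicky notes, simply never sees this digit.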

Of course, they could be doing something even simpler, like printing
a number that is not the number encoded in the barcode above. This 
would probably be easier to see with the naked eye, though.

Elizabeth Zwicky,
The Ohio State University Dept of Computer and Information Science


Password truncation and human interfaces

Theodore Ts'o <tytso@ATHENA.MIT.EDU>
Tue, 10 Nov 87 00:28:59 EST
There is a similar problem with the (Massachusetts) BayBanks teller system:
it truncates your PIN to FOUR numbers (even though they tell you to pick a
PIN between four and six numbers).  Yes, it's still there.  When (or if)
they will ever fix it is unknown.
                        - Ted
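The failure is easy to show in miniature. A sketch modeling the behaviour
Ted describes (the function is hypothetical, not BayBanks' actual code):

```python
def verify_pin(stored_pin, entered_pin):
    """Model of a teller system that silently truncates PINs to four
    digits before comparing (hypothetical code)."""
    return stored_pin[:4] == entered_pin[:4]

# The customer chose a six-digit PIN, but only four digits matter:
assert verify_pin("123456", "123456")
assert verify_pin("123456", "123499")  # wrong last two digits still pass
assert verify_pin("123456", "1234")    # a four-digit guess also passes
```

The effective key space is 10^4, not the 10^6 the customer believes, and
nothing in the interface reveals the difference.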


Re: UNIX setuid nasty -- Watch your pathnames

<munnari!elecvax.oz.au!geoffw@uunet.UU.NET>
Thu, 12 Nov 87 17:01:26 EST
Sydney Uni's fate might be seen as an example of the risk taken
when an originally distributed function is centralised.
In the original give system developed at UNSW, each class
instals a copy of the give/take pair. The second of these
is setuid to the class account and constructs the destination
pathname from entirely validated components: the class directory,
assignment name, and login name. The first two are compiled into the
program, while the last is extracted from the password file.
The purpose of give is to collect the student submission only.

Now the modifications made at SU removed the responsibility for
determining the target from the relative safety of take
to the total insecurity of give, while at the same time increasing
the destructive power of take. No wonder they got into trouble.
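The property Geoff credits the original design with, building the
destination only from validated pieces, can be sketched as follows (the
directory and component rules here are hypothetical; the original give/take
pair was of course not written in Python):

```python
import os
import re

CLASS_DIR = "/home/class/comp1"   # compiled-in constant (hypothetical path)

def submission_path(assignment, login):
    """Build the destination from validated components only, in the
    spirit of the original UNSW design: the class directory is fixed,
    and the assignment and login names must be plain words, so a
    student-supplied component like "../../etc/passwd" is rejected."""
    for name in (assignment, login):
        if not re.fullmatch(r"[A-Za-z0-9_]+", name):
            raise ValueError("invalid component: %r" % name)
    return os.path.join(CLASS_DIR, assignment, login)

assert submission_path("asst1", "smith") == "/home/class/comp1/asst1/smith"
try:
    submission_path("../../etc", "passwd")
    assert False, "should have been rejected"
except ValueError:
    pass
```

Because no student-controlled string can introduce a "/" or "..", the
privileged program simply cannot be steered outside the class directory.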


UNIX setuid stupidity

David Phillip Oster <oster%dewey.soe.Berkeley.EDU@Berkeley.EDU>
6 Nov 87 20:08:36 GMT
munnari!basser.cs.su.oz.au!steve@uunet.UU.NET (Stephen Russell)
describes a problem that happened to him as a result of a fundamental
misunderstanding he has about the way the Unix security system
works. His misunderstanding is so fundamental that he completely
misanalyzed his problem and the moral that should be drawn from it.

He describes a task: He needed a program that would copy a file owned
by a student into a directory owned by a teacher. 

The correct solution is: If the file has read access to all, then the
teacher, himself, could copy the file to his directory.  Unix has a
mechanism called setuid (it stands for "set user id") that lets a user
authorize a program to act as the user's agent. The teacher can write a
program to act as teacher's agent. The student can run it, and the file 
gets copied.

Mr. Russell made two mistakes.
1.) He made his program setuid "root" instead of setuid teacher. As
a result, the program let students copy into any place, not just those
places that the teacher was allowed to. This means that the damage caused
by his second mistake was not contained by the Unix protection system.

2.) When you make a program "setuid" you are giving the program the
ability to act in your name. That means that the program must check, just
as you would, that it is performing a legal act. Mr. Russell kindly explained
that he got this part wrong.

Now, all of the above works only if the teacher can read the student's
file. We need some way of arranging for the teacher to be able to read
it but not the other students. Unix also has a mechanism for doing
this. A "group" on unix is a list of users. Each file has both a user
id and a group id, and both user and group permissions. It is quite
reasonable to have a separate group for each student<->teacher pair.
If a student wants to give a copy of a file to a teacher, he runs a
program that:
1.) changes the group of the file to the student<->teacher group,
2.) runs a  setuid="teacher" program to copy the file to the teacher's 
directory.
3.) changes the group of the file back.
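The three steps can be seen working in a toy model. The sketch below (all
names hypothetical, Python standing in for the real setuid machinery) tracks
only owner, group, and group-read permission, and shows that the teacher can
read the file only during the handoff:

```python
# Toy model of the group-based handoff: a file carries (owner, group,
# group_readable); the copy can succeed only while the file sits in the
# shared student<->teacher group.

def can_read(f, user, groups):
    """True if user owns the file, or belongs to the file's group and
    group read permission is on."""
    return user == f["owner"] or (
        f["group"] in groups.get(user, set()) and f["group_readable"])

def give(f, teacher, shared_group, groups):
    old_group = f["group"]
    f["group"] = shared_group              # step 1: chgrp to shared group
    ok = can_read(f, teacher, groups)      # step 2: teacher's agent copies
    f["group"] = old_group                 # step 3: chgrp back
    return ok

groups = {"teacher": {"pair-s1-t"}, "other": {"staff"}}
f = {"owner": "student1", "group": "students", "group_readable": True}
assert give(f, "teacher", "pair-s1-t", groups)   # copy succeeds mid-handoff
assert not can_read(f, "teacher", groups)        # window closed afterwards
assert not can_read(f, "other", groups)          # other users never had it
```

The shared group acts as a temporary, narrowly scoped capability, which is
exactly why the per-pair groups (and the M*N bookkeeping below) are needed.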

Now, if you have M students and N teachers, this means you need M*N groups.
Groups in turn are defined in a sequentially read text file, owned by root.
One can argue that having all these predefined groups would make the system
slow, but you can use a small, simple setuid=root program to dynamically
create a group with just the membership it needs, use it long enough to do
the copy, then destroy the group again.

The whole thing could almost be packaged as a standard utility for
user A to give copies of his files to user B. User A would run such a
program to give a file to user B. It would or would not do the copy
based on its execution of a set of rules, written by user B, defining
the circumstances that must be true for B to accept A's file.  (For
example: "must be smaller than 100k, must leave at least 1Meg free
space on disk, must not clobber any file already owned by B.")  The
problem with such a packaged utility is coming up with a reasonable
language for user B to express under what conditions he would be
willing to receive files.

Now, why do I need to say this? Why wasn't this all obvious to Mr. Russell?
All of this is implied by the standard Unix manuals. Perhaps there should
be some test you must pass before they let you have the root password.

--- David Phillip Oster            --A Sun 3/60 makes a poor Macintosh II.
Arpa: oster@dewey.soe.berkeley.edu --A Macintosh II makes a poor Sun 3/60.
Uucp: {uwvax,decvax,ihnp4}!ucbvax!oster%dewey.soe.berkeley.edu


How much physical security? (Re: RISKS-5.48)

Martin Ewing <mse%Phobos.Caltech.Edu@DEImos.Caltech.Edu>
Sat, 24 Oct 87 01:59:00 PDT
In reply to Brent Chapman and PGN on the subject of Computer Center physical
security:

I also recall the situation at MIT in the early 70's.  The key punches and
job submission area were on the second floor, while the CPUs were on the
"secure" third floor.  (The elevator wouldn't stop there.)  This worked OK,
until Vietnam-related movements escalated.  (I was a minor participant.)  At
that point an actual guard was posted on the first floor, and you had to show
ID to go beyond.  Expensive, but apparently effective.

[The MIT CC was my first introduction to computer bulletin boards.  There
was a big newsprint pad on the wall, along with felt-tip pens with which
to vent your spleen.  The pads would disappear after some days and reappear
later with Staff's annotations added.  Good, appropriate technology.]

These days, I have some responsibility for a departmental facility (2 Vax
780s, a Convex C-1).  Inside doors are never locked, and an exterior door to
a loading dock is only about 6 feet away from the computer room.  I
don't foresee risks from political protests (astronomy being perceived as
benign, I think), but there is always the deranged ex-student or -employee,
not to mention old-fashioned vandals off the street.

There is some fractional risk from physical assault.  The cost of significant
improvements seems high.  The value of the facility, including data files,
is high also.  How does one rationally decide whether the risk is acceptable?

As far as I can see the absolute risk from power surges, flooding, and network
breakin is greater.  We have had instances of all of these.

My tentative answer is not to do anything about physical security.  The 
Institute is insured against equipment losses.  The one thing we don't do
is to keep copies of valuable files stored in an independent environment.  This
can be done for fairly low cost, although it goes against the grain for
researchers to make backups at all.

I'd appreciate comments.

Martin Ewing, Caltech Astronomy


How much physical security?

"Milton A. Colvin" <mac3n@babbage.acc.virginia.edu>
Tue, 27 Oct 87 10:21:21 EST
> In RISKS-5.45, Brent Chapman (koala!brent@lll-tis.arpa) writes:
> >Have there been any cases of terrorist or political attacks on comp centers?

At Dartmouth in 1969 the College was closed for a day of political activity.
Instead of attacking the computer center, hordes of students headed for
the terminals and used the computers to generate mail to Congress.
Dartmouth had always made an effort to demystify computers.

I have this on hearsay.  Perhaps someone who was there could comment.


How much computer room security?

<Mike_Alexander@um.cc.umich.edu>
Fri, 23 Oct 87 17:19:46 EDT
The various stories in Risks recently about the effects of the student
unrest of the 60s and 70s on computer room security remind me of an incident
that occurred at the University of Michigan during that period.  It is
somewhat amusing and might be of interest to Risks readers.

The University of Michigan was the scene of a number of student
demonstrations and other activities (SDS was founded at UM, for example, and
it was the site of the first teach-in), although there wasn't much physical
damage or other real violence here.  One of these incidents involved a small
group of students who were attempting to shut down the University by seizing
control of the University power plant, an effort that proved ineffective.
The incident I have in mind occurred during this protest.

At the time, the Computing Center was directly across the street from the
power plant (which meant we had clean power, by the way).  While the
students were milling around outside the power plant, a few of them broke
off from the main group and headed toward the Computing Center.  Since
university computing centers elsewhere had been the object of some violence
by then, the CC staff members who were watching this were somewhat
concerned.  However, it turned out that the students were just going to pick
up some of their output, not to trash the Computing Center.  Fortunately,
that was as close as we came to real trouble at the Computing Center during
that period.

         [These last three contributions were backlogged, and reflect old
         history.  Nevertheless I think terrorism and vandalism represent an
         important area to be aware of, so I dusted them off.  PGN]
