The RISKS Digest
Volume 7 Issue 29

Wednesday, 27th July 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Comparison of hazards
Henry Spencer
NASTRAN and the order-of-magnitude bug
David E. Bakken
via Mark Brader
"Person In The Loop"
Clifford Johnson
"Person In The Loop" — A BarCode example
David A. Honig
Security vs. Cost of Breakin
David A. Honig
Hacking central office switches - too easy?
Skip Montanaro
Re: PIN on PNB calling card
Roy Smith
Re: IRS Illinois Experiment
Allan Pratt
Info on RISKS (comp.risks)

Comparison of hazards

<attcan!utzoo!henry@uunet.UU.NET>
Wed, 27 Jul 88 14:04:40 EDT
>  * The hazards at risk in Star Wars should rule out its development.

Grace Hopper gave a talk here some years back in which she made a point that
is relevant to this discussion, and more generally as well.  I forget the
details, but she was in some bureaucratic situation where she was required
to take correspondence courses in *something*, and most of the possibilities
were ruled out because she was ineligible or had already taken them... so
she ended up taking War College courses, intended for training admirals and
such.  One of the exercises was to plan an invasion of an island, given some
details on overall situation, available manpower, etc.  Actually, that was
just the first part of the exercise.  The second part was "What would be the
consequences if your plan failed?".  The third, and most relevant, part was
"What would be the consequences of not attempting this plan?".  (Her
comment was that she'd seen many plans for information systems, very few
of which attempted to answer either of these questions.)

Comparisons of hazards should always be made against the real alternatives, not
against some hypothetical absolute standard (especially if the standard is the
mythical "absolute safety").  For example, using a jet fighter's ejection seat
is a dangerous act, with spinal injury not uncommon (ejection is a very violent
process), but it is usually preferable to the alternative of riding a crippled
aircraft down.  On the other hand, if the aircraft is near the ground and
upside down, it is safer to stay with the aircraft unless you have a very
modern ejection seat.  A realistic evaluation of the hazards of SDI must
compare them against realistic alternatives, rather than just saying "they are
too great".  Much of the popular support for SDI comes from the perception that
the alternative is a continuation of the current situation, which is perceived
to be unacceptably dangerous.  I don't think this is an appropriate forum for
discussion of the accuracy of these perceptions, but one should not forget that
the alternative to risk often involves risks of its own.
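
To make the point concrete, here is a minimal back-of-the-envelope sketch (in
Python, with purely invented probabilities and harm weights) of comparing an
option against its real alternative rather than against absolute safety:

    # A minimal sketch of the argument above: compare options against each
    # other, not against "absolute safety".  All probabilities and harm
    # weights here are invented for illustration only.

    def expected_harm(p_bad_outcome, harm_if_bad, harm_if_ok=0.0):
        """Expected harm of an option = p*harm_bad + (1-p)*harm_ok."""
        return p_bad_outcome * harm_if_bad + (1.0 - p_bad_outcome) * harm_if_ok

    # Hypothetical numbers for the ejection-seat example:
    eject = expected_harm(p_bad_outcome=0.3, harm_if_bad=30)          # injury risk
    ride_it_down = expected_harm(p_bad_outcome=0.9, harm_if_bad=100)  # crash risk

    best = min(("eject", eject), ("ride it down", ride_it_down), key=lambda t: t[1])
    print(f"eject: {eject:.1f}, ride it down: {ride_it_down:.1f} -> prefer {best[0]}")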

Henry Spencer @ U of Toronto Zoology                  henry@zoo.toronto.edu


NASTRAN and the order-of-magnitude bug

David E. Bakken <bakken@hrsw2.UUCP>
19 Jul 88 21:23:43 GMT
I can't stand it anymore.

I was assigned the task of performing stress analysis on some roof bolts. I was
supposed to do the NASTRAN run on the UNIX machine because it has some special
math hardware, but I was in the middle of a game of rogue, so I used the IBM-PC
on my desk.  Unfortunately it was one of the really old ones, and it had the
divide by 10 bug. As a result, my calculations were off, and I'm afraid a
terrible thing happened.  So, effective Friday, I'm leaving Boeing, to
go work in a position where I can't hurt anybody.  Monday I start my new
position with Suzuki Motors Inc., in the suspension department.

Dave Bakken   Ex-Boeing Commercial Airplanes        (206) 277-2571

   [Assessment of which planes NOT to fly deleted by your moderator.  PGN]


"Person In The Loop" (Clifford Johnson)

Clifford Johnson <GA.CJJ@Forsythe.Stanford.EDU>
Tue, 26 Jul 88 13:43:05 PDT
>  I completely agree with Will Martin and Bill Murray when
>  they each insisted on adding to Zimmerman's piece a stronger
>  statement about HUMAN fallibility.

I completely disagree.  It's meaningless to construe the ignorance of a human
to know an unknowable fact as a human "failure."  Captain Will Rogers did not
fail - he made his guess, as was required.  Whether such a computer-ordered
guess should ever be required is the real issue, and if human fallibility
is a factor, it lies in the stupidity of those who mandated this
computer-driven gamble in advance of its execution.

>  there WAS a man in the loop on the Vincennes — the computers
>  did not automatically fire the missiles without human
>  intervention.

This seemingly logical statement is wrongheaded and very dangerous since it is
a conception shared by congressional armed services committees and military
alike.  A Person-in-The-Loop (PTL) is, as a matter of logic, no more than a
random number generator when, because of the shortness of time, computers
provide essentially all of the information upon which an immediate "decision"
is *required* from that person.  In such a case, the real role that a person in
the loop plays is to gamble whether an attack warning is real.  To pretend that
the human element means any more than this is the stuff of fairy-tales — and
whether a President, a military commander, or a computer operator makes the guess
is beside the point.

It is strictly incorrect to label such a response as NOT automatic.  The
response IS automatic.  True, it is *randomized* by the human element, but it
is certainly not made *discretionary* by the token interjection of a guess.
The definition of "automatic" (in my Oxford American dictionary) is "working of
itself without direct human control, done without thought, done from habit or
routine."  If time is insufficient for proper thought, it is improper to class
a procedure as not automatic.  True, routine participation of humans makes a
response "not mechanized", but this is different from "not automatic".  As
General Ellis stated with regard to nuclear launch on warning drills, which are
of the same computer-governed nature, "the purpose of that conference is to get
a decision" - i.e. automatically, procedures force a guess in time to act.

Captain Will Rogers did no more than perform the function of adding an element
of randomization to the Vincennes response, because he could not exercise
proper judgment in that time frame.  His response was inherently automatic.
The gamble was not sanctified because it was made by a Captain.  Rather, the
Captain's role was debased by his being required to gamble.

>  Congress's "man in the loop" mandate is an unthinking
>  palliative, not worth much, and it shouldn't lull people into
>  thinking the problem is fixed.

This portrayal is far too sanguine.  The so-called "PTL amendment" is positively
nauseating — it states that 100% mechanical lethality for Star Wars is A-OK as
long as someone somewhere sometime switches it on.  Herb Lin informed me that
he lobbied to make a CINC responsible for switching on the auto-boost phase SDI
defense — and failed.  The amendment is a dumb green-light to automation,
masquerading as a restriction.


Person in the Loop

"David A. Honig" <honig@bonnie.ICS.UCI.EDU>
Tue, 26 Jul 88 17:01:09 -0700
Will Martin in RISKS 7.26 mentions how the popular media does not explain
the risks of using computers and the costs and benefits of including
humans in the control loop.  Here is a (true) homey anecdote illustrating
this principle that perhaps the press ought to be aware of:

I went to a supermarket and separated a soft drink from a package of them.
When the UPC label was scanned, a price of $2.00 showed up.  At one store, the
checkout woman didn't believe me when I interrupted her to say that the price
was wrong, and she sent someone to check; if I had bought a lot of groceries at
the time I probably wouldn't have noticed.  At another store, the checkout
person did notice the unusual price (i.e., sanity checking) and corrected it
on her own -- her vigilance was probably due to the fact that I again wasn't
buying many items and she was in the express lane, where the throughput is lower.
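
For what it's worth, the kind of sanity check the second checker applied by eye
could be sketched as follows; the item names, prices, and tolerance are all made
up for illustration:

    # A rough sketch of the "sanity check" the second checker applied by eye:
    # flag a scanned price that is far outside the expected range for the item.
    # Item names, prices, and the tolerance are hypothetical.

    EXPECTED_PRICE = {"single soft drink": 0.50, "six-pack": 2.00}

    def sanity_check(item, scanned_price, tolerance=0.25):
        """Return True if the scanned price is within tolerance of what we expect."""
        expected = EXPECTED_PRICE.get(item)
        if expected is None:
            return True  # unknown item: nothing to compare against
        return abs(scanned_price - expected) <= tolerance * expected

    # The single can rings up at the whole package's price:
    if not sanity_check("single soft drink", 2.00):
        print("price looks wrong -- ask before charging it")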

The important point is, the "human in the loop" issue is NOT esoteric or
complex: the PROBLEM is that the popular press either does not understand this
or will not communicate it to the mobs; thus, the layman continues to
misunderstand and mystify computers.

And when an operator fails to interact with a computer correctly, no-one in the
public wonders whether the computer programmer knew anything about man-machine
interfacing and human factors engineering.

Another example: the term "computer virus" is a valid analogy, but most laymen
don't understand viruses well enough to see the similarity.  Why can't the press use
"self-copying program" or some other informative term?  (Because it makes less
exciting headlines!)


Security vs. Cost of Breakin

"David A. Honig" <honig@bonnie.ICS.UCI.EDU>
Tue, 26 Jul 88 17:01:09 -0700
There is no such thing as absolute security; one tries to make a
break-in as expensive as possible, more costly than the benefits of
success.  Relevant to recent RISKS issues, notice:

Encoding a bank-card PIN on the card magnetically IS secure for your
average person and your average wallet thief.  (Of course, a card
reader is a pretty simple device; also, a thief could go to a
bankcard-reading house (do they exist?), just like thieves go to pawn
shops that sell stolen goods and car thieves go to junkyards that sell
stolen parts.  But that's a lot of effort for limited (e.g., $300/day)
returns, and besides, the owner will stop the card quickly, yielding no further
return.)
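
A back-of-the-envelope version of that argument, with every number an
assumption chosen only for illustration:

    # An attack is only worth mounting if its expected return exceeds its
    # cost.  All figures below are assumptions for illustration.

    def expected_return(daily_limit, days_until_card_stopped, p_success):
        return daily_limit * days_until_card_stopped * p_success

    attack_cost = 500.0   # e.g. obtaining a card reader and fencing the goods
    gain = expected_return(daily_limit=300.0, days_until_card_stopped=1, p_success=0.8)

    print(f"expected return ${gain:.0f} vs cost ${attack_cost:.0f}:",
          "not worth it" if gain < attack_cost else "worth it")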

Since NMR and CAT machines cost hundreds of thousands of dollars (and there
are no small versions of them, and they are expensive to run), it doesn't
matter if they can detect winning lottery cards.


Hacking central office switches - too easy?

Skip Montanaro <steinmetz!vdsvax!montnaro@uunet.UU.NET>
27 Jul 88 16:57:32 GMT
John T. Powers wrote concerning the problems Pac Bell was having with
crackers accessing their switches:

    A simple callback system (something I introduced at IBM about 10 years
    ago, and common now) would, if used correctly, make it *much* harder to
    gain unauthorized access to a CO switch.  In addition, it would probably
    warn of interest by unauthorized persons.  Today, much more
    sophisticated security systems are not only available but cheap.

The problem, as I understand it from the article that was posted in Risks,
is that the Pac Bell repair people need to dial in from wherever problems
exist, in order to set parameters, run tests, etc. Callback modems are only
useful if the party wishing access always calls from the same (or at most a
few) location(s). User A dials in, says "I'm user A", and hangs up. The
callback modem then calls the phone number associated with user A. A Pac
Bell repair person won't have a fixed location at which s/he can be called.
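
A minimal sketch of the callback logic makes the limitation plain; the user
table and phone numbers are hypothetical:

    # The modem will dial only a number registered in advance for that user;
    # a repair person calling from an arbitrary CO has no such number on file.

    CALLBACK_NUMBERS = {"userA": "518-555-0100"}   # registered numbers only

    def callback(user_id, claimed_location_number):
        registered = CALLBACK_NUMBERS.get(user_id)
        if registered is None:
            return "access denied: no callback number on file"
        # The modem ignores where the caller actually is and dials the
        # registered number -- which is the whole security property.
        return f"hanging up, calling back {registered}"

    print(callback("userA", "518-555-0100"))          # fixed site: works
    print(callback("repair_tech_7", "916-555-0199"))  # roving tech: denied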

Skip Montanaro, GE Corporate Research & Development (montanaro@ge-crd.arpa)


Re: PIN on PNB calling card

Roy Smith <roy@phri>
26 Jul 88 14:10:46 GMT
In RISKS, Volume 7 : Issue 27, Mark Mandel <Mandel@BCO-MULTICS.ARPA> said:
> a credit card still provides a security barrier of sorts in the signature.

    Don't be fooled into thinking that signatures are any kind of
security barrier.  I had my AmEx card stolen once (well, actually, I think
I forgot it on the table in a restaurant when I left, but that's another
RISK).  You would not believe the charges that came through (with AmEx you
get back one of the copies of the charge slip so you can see exactly what
is what).  Some charges came through with signatures which don't resemble
mine in the least.  Some came through with no signature at all.  One even
came through with "signature on file" hand-printed on the signature line.

Roy Smith, System Administrator
Public Health Research Institute, NY NY


Re: IRS Illinois Experiment

Allan Pratt <atari!apratt@ames.arc.nasa.gov>
Tue, 26 Jul 88 12:06:23 pdt
> [ Discussion of security issues in filling out tax forms online. ]

Forgive me if I'm wrong, but I thought the whole problem with computer
security was keeping unauthorized people out of sensitive information
and places where they can do damage.  On a computer where there IS NO
WAY to get access like that, what's the problem?

Set up a front end which fills out forms from the remote users.  Then dump a
day's forms to magtape, carry the tape to the processing computer, and
process it! The magtape is probably not necessary: any data channel will do.
The point is to leave no "trapdoor to the OS" commands on the front end...
There is no security door, just a blank wall!
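
A minimal sketch of such a "blank wall" front end follows; the field names and
spool file are placeholders, not anything the IRS actually uses:

    # The front end accepts form fields, appends them to a spool file, and
    # offers nothing else -- no shell escape, no browsing, no privileged mode.

    import json

    SPOOL = "daily-forms.batch"                      # later dumped to tape/channel
    FIELDS = ("ssn", "name", "wages", "deductions")  # hypothetical form fields

    def accept_form(raw_fields):
        """Validate and append one return; the front end can do nothing else."""
        record = {k: raw_fields[k] for k in FIELDS if k in raw_fields}
        if len(record) != len(FIELDS):
            return "rejected: incomplete form"
        with open(SPOOL, "a") as spool:
            spool.write(json.dumps(record) + "\n")
        return "accepted"

    print(accept_form({"ssn": "000-00-0000", "name": "A. Taxpayer",
                       "wages": "30000", "deductions": "2000"}))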

The reason a system like UNIX is insecure, I thought, is that there are
trusted users (esp.  root) and non-trusted users, and ways for anybody to
masquerade as a trusty by guessing the password or otherwise violating
security AND GAINING PRIVILEGED ACCESS.  If there is NO SUCH THING as
privileged access, where can you go wrong?

The only hole I can see is using a bogus SSN to screw up somebody else's
taxes, but you can do that no matter how you get into the system or how
secure the actual access is.  I could do that on paper, too, until they
match the signatures.  How much flak would come down on the poor slob
before they figured that out?

If there is a fundamental flaw in my reasoning, please enlighten me.

Opinions expressed above do not necessarily — Allan Pratt, Atari Corp.
reflect those of Atari Corp. or anyone else.      ...ames!atari!apratt

    [Let me suggest a few problems.  Suppose it runs on a nonsecure system.
    You can now browse through the other returns stored on the system and
    not yet dumped to magtape.  Or, you might install Trojan horses that record
    other people's data even after it is dumped to tape, or delete some of their
    income or claim phony deductions if you wanted to cause them grief.
    Or, you might change the program to accept State Disability deductions when
    the IRS had claimed they were nondeductible.  Or, suppose the program was 
    proprietary; you might purloin it and set up your own value-added-service.
    Also, see the comment in the previous note on eyeballing signatures.   
    Top-of-the-head stuff, but you get the idea...  PGN]
