The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 4 Issue 86

Monday, 18 May 1987

Contents

o ATM Fraud
Chuck Weinstock
o Between Iraq and a Hard Place [Protect Your Phalanx]
William D. Ricker
o Wozniak Scholarship for Hackers
Martin Minow
o Information Overload and Technology?
David Chess
o Passwords, thefts
Andrew Burt
o Passwords, sexual preference and statistical coincidence?
Robert W. Baldwin
o Info on RISKS (comp.risks)

ATM Fraud

Chuck Weinstock <Chuck.Weinstock@sei.cmu.edu>
18 May 1987 10:59-EDT
The Wall Street Journal (18 May 87) has a page-one article about one Robert
Post, a 35-year-old former ATM repairman who has beaten New York City ATMs
out of $86,000.  He'd spy over customers' shoulders to get their PINs, and
if they left the receipt he'd take it to get their account numbers.  Then
he'd go home and forge a card using a $1,800 machine he bought, and return
to the ATM to make withdrawals.

He was caught because his encoding of the account number and the PIN, while
good enough to work in the machine, was flawed.  Manufacturers Hanover
managed to program its network to detect the flawed cards and capture them.
After capturing two and verifying that they were fake, they reprogrammed the
machine to notify security when one was being used, and dispatched guards to
catch Mr. Post.
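[The article does not say what was wrong with Mr. Post's encoding.  One
plausible kind of consistency test a network could run -- purely a
hedged illustration, not Manufacturers Hanover's actual method -- is the
standard Luhn mod-10 check on the account number encoded on the stripe,
which a forger who miskeys even one digit will fail:

```python
def luhn_valid(pan: str) -> bool:
    """Luhn mod-10 check used on real card account numbers.
    A forged stripe with a miskeyed digit fails this test."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # e.g. 14 -> 1 + 4 = 5
        total += d
    return total % 10 == 0

print(luhn_valid("4539578763621486"))   # a Luhn-valid test number -> True
print(luhn_valid("4539578763621487"))   # one digit off -> False
```

A network programmed with such checks could capture any card whose track
data fails them, much as described above.]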

Mr. Post, who repaid $50,000 to Manufacturers Hanover, had expected to get
off with a hand slap.  So far that hasn't happened.  He contrasts himself
favorably with someone who mugs a customer and steals the card.  "I'm a
white collar criminal."  He was dismayed that bank officials didn't offer
him a consulting job.
                                          Chuck


Between Iraq and a Hard Place [Protect Your Phalanx]

William D. Ricker <wdr%faron@mitre-bedford.ARPA>
Mon, 18 May 87 13:20:48 edt
Today's Wall Street Journal (5/18/87) has this front-page item:

US Guided Missile Frigate hit by Iraqi missile, probably Exocet, in Gulf.

[Hearsay report from CBS Newsradio says the Phalanx close-in defense gun,
which the Boston Globe reports the class carries, is (for safety reasons)
turned on only when in free-fire zones--i.e., the fully-automatic computer
controlled weapon is not considered safe enough to tell when the ship is
under surprise attack (probably a good idea), but isn't used to inform the
crew when it needs to be enabled.]
                                           --Bill Ricker


Wozniak Scholarship for Hackers

Martin Minow <minow%thundr.DEC@src.DEC.COM>
Sat, 16 May 87 15:39:58 PDT

From the Boston Globe, May 16, 1987:

  Boulder, Colo. - Computer whiz Stephen Wozniak has donated $100,000
  for a University of Colorado scholarship aimed at developing
  excellence in computer hackers at his alma mater.  "The value of
  cracking security codes and understanding them is that it generates
  incredible knowledge," said Wozniak, one of the original hackers
  and co-founder of Apple Computer Inc.  Wozniak said he actually
  encourages the "mildly social deviants" to break access and security
  codes as a way to learn, The Denver Post reported.  The "Woz"
  scholarship program is two-fold; a tuition grant and a job working
  with the computer science department.

Martin Minow         

P.S.  One of the beauties of the English language is that you don't 
know whether Wozniak is encouraging (mildly (social deviants)) or 
((mildly social) deviants).

     [I was struck by "incredible knowledge."  Woz probably did not mean
     "knowledge of the incredible", but if the knowledge is incredible,
     there is the nice ambiguity between "so extraordinary as to seem 
     impossible" (particularly if true) and "unbelievable" (particularly
     if NOT true).  PGN]


Information Overload and Technology?

David Chess <CHESS@ibm.com>
14 May 1987, 12:37:07 EDT
Long Ago (Risks 4:66), Dave Taylor wrote
>   Overcoming Information Overload with Technology (Why It Can't Work)
> I'm especially interested in horror stories people could tell me about
> relying on information filtering systems and finding that they actually
> weeded out critical information...

This strikes me as not quite the right tack to be taking (although
I haven't read the full paper).   Certainly it's worthwhile to
gather "horror stories", for the purpose of improving information
filters, and making people aware of their limitations, but it
doesn't seem valid to conclude that "It Can't Work".

Everyone uses some information filter; this is pretty much tautological,
since there is much more information available in the world than anyone not
otherwise idle can possibly keep up with.  So we limit our intake with
techniques like

 - Choosing to ignore broad classes of information ("I don't have time to
   follow AI-list anymore" "Please drop me from...")
 - Never reading any article whose title doesn't immediately "grab the eye"
 - >Haphazard< filtering, caused by just reading whatever one happens to have 
   time to read.  This month I get around to reading RISKS, but don't have time
   for NL-KR.  Maybe next month if NL-KR shows up first, it'll be vice-versa.

Now all these filtering techniques (especially the last!) have in common a
relatively large risk of missing important stuff.  None of them is very
sophisticated, or very likely to work very well.  I would be *quite*
surprised if it turned out that computers (much less "technology") could not
make the process work better.  Certainly there will be horror stories about
the use of the technology, but (if we had a way to collect them), I suspect
there'd be even more about filtering *without* the technology...

This is the usual sort of meta-risk.  Certainly using computers to do X
won't work all the time, and we'll be exposed to risks; but doing X without
the computers is at least as risky!
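[A toy sketch of the crude, unaided filtering Chess describes -- keyword
names and messages are invented for illustration -- shows how easily a
critical item slips through a naive keep/discard rule:

```python
# Hypothetical crude filter: keep a message only if its subject line
# contains one of a fixed set of "interesting" keywords.
INTERESTING = {"risks", "security", "failure"}

def keep(subject: str) -> bool:
    words = set(subject.lower().split())
    return bool(words & INTERESTING)

inbox = [
    "RISKS Digest 4.86",
    "Security hole in login",
    "Urgent: payroll run aborted last night",   # critical, but discarded
]
kept = [s for s in inbox if keep(s)]
print(kept)   # the urgent payroll message never reaches the reader
```

The unaided human versions of this rule (skimming titles, dropping whole
lists) carry the same risk with even less consistency, which is Chess's
point.]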

Dave Chess, Watson Research Center

(Any opinions that might have snuck in here are my own, and are not
necessarily shared by my employer)


Passwords, thefts

Andrew Burt <isis!aburt@seismo.CSS.GOV>
18 May 87 20:58:10 GMT
The real attacks to be worried about are not password attacks.  As
administrator of the Unix security mailing list I see all the latest holes
(three easy steps to root, etc.).  Some of the holes are truly frightening.
Once a hacker has access to a system (as guest, whatever) he need not spend
much time trying to work out someone else's password -- might as well go
straight to root.

System administrators are also welcome to join the USML; to cut down on the
number of invalid requests to join, I ask that you send mail to me as root;
further validation is done after that.  (I apologize for the amount of red
tape, but the explicit nature of the information discussed demands some
protection.)

              [Since it is so easy to dig the root, I wonder how many bogus
              requests you will get!  Like a pig rooting for troubles?  PGN]

>From:    Michael Wagner <WAGNER%DBNGMD21.BITNET@wiscvm.wisc.edu>
>Subject: Re: Computer thefts (Jerome H. Saltzer, RISKS-4.83)
>... These terminals had bolts, built into the base, ...

Here at DU we have the terminals bolted to long conference tables -- rather
hard to walk out with.  Far better, though, is that each unit is engraved
and painted with large "DU"s on each component in highly visible locations.
Makes them very hard to fence.  (Sure, you can use the innards, but then again
you could engrave the boards...)

I haven't heard of any terminals or PC's walking out since this was done.

Andrew Burt                             isis!aburt


Passwords, sexual preference and statistical coincidence?

Robert W. Baldwin <BALDWIN@XX.LCS.MIT.EDU>
Wed 13 May 87 07:17:55-EDT
    I've been working part-time on a case study of password usage
on MIT's undergraduate machines.  The fast password transform that is
currently available was developed by me and improved with the help
of several people at other research centers.  It turns out that the
SALTing that prevents the use of DES chips can be implemented by five
instructions in each round of the DES F function.
    The case study should be available by the end of the summer,
but I would like to point out one risk that arises when a person
chooses a first name for a password.  This is an example of the
principle of guilt-by-statistical-coincidence.
    I tried a dictionary of 2000 first names against all 4100
accounts.  The program uncovered the passwords for seven percent of
the accounts.  This took 6 hours on a VAX/8600 and it was helped by
the fact that the 4100 accounts only use 1735 different SALT values.
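[Baldwin attacked the Unix crypt(3) transform; the sketch below -- with a
stand-in hash (SHA-256 of salt plus password) and invented account data --
shows why salt reuse matters: grouping accounts by salt means each
dictionary word is hashed once per distinct salt rather than once per
account, the same savings 1735 salts over 4100 accounts gave him:

```python
import hashlib
from collections import defaultdict

def h(salt: str, pw: str) -> str:
    # Stand-in for the real crypt(3) transform Baldwin attacked.
    return hashlib.sha256((salt + pw).encode()).hexdigest()

def dictionary_attack(accounts, wordlist):
    """accounts: {user: (salt, stored_hash)}.  Group accounts by salt so
    each word is hashed once per distinct salt, not once per account."""
    by_salt = defaultdict(list)
    for user, (salt, stored) in accounts.items():
        by_salt[salt].append((user, stored))
    cracked = {}
    for salt, entries in by_salt.items():
        for word in wordlist:
            guess = h(salt, word)
            for user, stored in entries:
                if guess == stored:
                    cracked[user] = word
    return cracked

accounts = {
    "alice": ("aa", h("aa", "robert")),
    "bob":   ("aa", h("aa", "susan")),   # same salt as alice: shared work
    "carol": ("zz", h("zz", "x9!kq")),   # not a first name; survives
}
print(dictionary_attack(accounts, ["robert", "susan", "mary"]))
```

With a 2000-name dictionary, 1735 salts cost about 3.5 million hash
computations, where 4100 fully distinct salts would have cost over 8
million.]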
    Moving into the domain of sociology, I examined whether people
chose names of the same or opposite sex.  I found that 80% of the
users chose passwords of the opposite sex.  An additional 7% chose a
variant of their own first names.  The remaining 13% had picked names 
of the same sex.
    The coincidence is that the student group, Gays At MIT, claims
that 10-15% of the undergraduates are homosexual.  The conclusion one
could draw is that anyone with a same-sex password is either narcissistic
or gay.  Anyone who uses an opposite-sex password is heterosexual, and
if it is not the name of their current significant other they are having
an affair.  Send the police if they pick their mother's or father's name.  
Perhaps this could persuade people not to use names as passwords.
                                                                  --Bob

       [This message has some interesting background on risks of passwords,
       but the statistical conclusions are almost as accurate as this:
           About 25% of all people are males living in the East.
           About 25% of all people are females living in the West.  Therefore,
           most males living in the East are females living in the West. 
       But not quite.  

       At any rate, I hope the message is getting through that passwords
       can be relatively easy to break.  For a REALLY BEAUTIFUL DESCRIPTION
       of a horrendous implementation flaw in a well-known system (which
       is not named), see an article by Bill Young and John McHugh (Coding
       for a Believable Specification to Implementation Mapping), on pp. 
       141-142 of the Proceedings of the IEEE Symposium on Security and
       Privacy, April 1987.  The bug has presumably been fixed everywhere
       by now, but it permitted an easily constructed overly long password 
       to fake out the encrypt-and-compare algorithm.  I have known about
       this one for years, and am delighted to finally see it in print.  
       PLEASE dig up this article.  It is well worth reading.  PGN]
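       [One classic shape such a flaw can take -- a speculative
       reconstruction for illustration, not the actual code from the
       Young/McHugh article -- is a login routine that holds the typed
       password and the stored encrypted password in adjacent fixed-size
       fields, so an overly long password overruns the first field and
       overwrites the comparison target.  A Python simulation of the two
       adjacent buffers (the "encryption" here is a toy stand-in):

```python
def fake_encrypt(pw: bytes) -> bytes:
    # Toy stand-in one-way transform; a real system would use crypt-style
    # encryption, but the overflow works regardless of the transform.
    return bytes((b * 7 + 3) % 256 for b in pw)[:8].ljust(8, b"\0")

def check_login(typed: bytes, stored_encrypted: bytes) -> bool:
    buf = bytearray(24)            # [0:16] typed pw, [16:24] stored hash
    buf[16:24] = stored_encrypted
    buf[0:len(typed)] = typed      # BUG: no length check on typed input
    return fake_encrypt(bytes(buf[0:16]).rstrip(b"\0")) == bytes(buf[16:24])

real = fake_encrypt(b"secret")
print(check_login(b"secret", real))    # True: honest login works
print(check_login(b"wrongpw", real))   # False: honest wrong guess fails
# Attack: 16 pad bytes to fill the password field, then the "encryption"
# of that padding, which lands on top of the stored value.
evil = b"A" * 16 + fake_encrypt(b"A" * 16)
print(check_login(evil, real))         # True: comparison spoofed
```

       The attacker never needs to know the password at all; a bounds
       check on the copy removes the hole.  PGN's description above is
       the authority on what the published bug actually was.]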
