The RISKS Digest
Volume 5 Issue 61

Wednesday, 18th November 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Risks of increased CATV technology
Allan Pratt
Bank networks
David G. Grubbs
Re: PIN Verification
John Pershing
Re: More on computer security
Info on RISKS (comp.risks)

Risks of increased CATV technology

Allan Pratt <imagen!atari!apratt@ucbvax.Berkeley.EDU>
Mon, 16 Nov 87 13:23:05 pst
I would like to assess the risks of increased CATV technology.  I refer
to addressable converters.  Questions below are asked both for
information and to stimulate thought on this subject:

The cable company serving the community where I live is upgrading their
service to new, addressable converter boxes.  These boxes offer remote
control and time-programmable control to the user, so you can (for
instance) record programs from two different channels at different times
on your VCR.  (Formerly, only one channel could be programmed, because
the channel selector was strictly manual.) The box even reads the time
and date off the air! The box also offers parental lockout of
user-selected channels.  Those are the user-visible benefits. 

The more insidious side of this box is that it can be individually
addressed from any cable company office in the area.  I assume that the
box monitors a frequency band (like the vertical blanking interval of the
cable-information channel) and watches for a command tagged with its ID. 
The only commands I know of are "enable/disable channel X." The box also
reads the time and date off the air (!). 
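The author's guess about the command channel can be sketched as a dispatch
loop: every box on the cable sees every command, but acts only on commands
tagged with its own ID.  All field names, sizes, and constants below are
invented for illustration; the real protocol was not public.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical downstream command format.  Every converter sees every
 * command broadcast on the data band, but acts only on those carrying
 * its own box ID.  Field names and widths are invented for this sketch. */
struct catv_cmd {
    uint32_t box_id;    /* which converter is being addressed */
    uint8_t  opcode;    /* CMD_ENABLE or CMD_DISABLE */
    uint8_t  channel;   /* channel number to act on */
};

enum { CMD_ENABLE = 1, CMD_DISABLE = 2 };

#define MY_BOX_ID    0x00A51234u    /* burned into this box at the factory */
#define NUM_CHANNELS 128

static uint8_t channel_enabled[NUM_CHANNELS];

/* Called for every command seen on the data band; silently ignores
 * commands addressed to other boxes. */
void handle_cmd(const struct catv_cmd *cmd)
{
    if (cmd->box_id != MY_BOX_ID || cmd->channel >= NUM_CHANNELS)
        return;
    channel_enabled[cmd->channel] = (cmd->opcode == CMD_ENABLE);
}
```

The risk the article raises is the mirror image of this loop: the same
addressing scheme that lets the head end enable a channel for one box could
carry richer commands, or solicit data back, with no change visible to the
subscriber.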

What other commands are available?

How much information can the box transmit back to the host? Can it say
what channel it is tuned to? (Instant ratings.) Can it report how many
times it was tuned to The Playboy Channel? I don't like the idea that a
person or persons unknown will have the opportunity to pass judgement on
my lifestyle like that.  (No, I don't really get the Playboy Channel,
but Showtime has its share of skin flicks.)

The box also has a mute feature.  Advertisers could interrogate this
to see what commercials I was actually listening to.

The Big Picture? This technology threatens to provide instant access to
advertisers and other interested parties to my television viewing
habits, and can even suggest whether or not I am home.  And it can do
all this *remotely* and for *everybody*.  Couple it with laser-scanned
shopping lists and computer-verified checks, and you can match person
with product with commercial-viewing.  The technology already exists
to target one small area with one commercial, and another with another,
and compare the results.

(Of course, I could always go back to broadcast-only TV.  I am not
paranoid enough to do that yet.  I speak of capability (present and
future), not intent.)  Still, it does give one pause...

Opinions expressed above do not necessarily reflect those of Atari Corp.
or anyone else.             Allan Pratt, Atari Corp.    ...ames!atari!apratt


Bank networks

David G. Grubbs <dgg@dandelion.CI.COM>
Tue, 17 Nov 87 14:57:54 est
About four years ago I worked on the custom installation of a point-of-sale
(POS) system.  Point-of-Sale terminals are those little slotted boxes
retailers slide your credit card through when you make a credit purchase.

The software we started with was written years before (in assembly language).
Its main component was a scheduler whose job was to "switch" transactions from
the bank's customers (a cadre of retailers to whom the bank sold POS services)
to the credit card networks and back again with authorization numbers.  Our
job as installers was to write the drivers for the specific credit-company
network protocols and define some routing algorithms.  Timing, logging, report
generation and other services were already in the base system.

I learned all sorts of interesting things about the use of credit cards from
the bank's standpoint.  From simple items like checksum algorithms or number
assignments (VISA card numbers all start with 4, Master Card with 5, AMEX with
37, etc.) to oddities of the business, like the loud emergency alarms which go
off in the card company machine rooms when the synchronous network signals
failed.  Thick books detailed the protocols.
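The checksum on the card number itself is the Luhn mod-10 check, which can
be stated in a few lines (a sketch; the interbank protocols in those thick
books were far larger):

```c
#include <assert.h>
#include <string.h>

/* Luhn mod-10 check, the checksum carried in the last digit of a credit
 * card number: working right to left, double every second digit
 * (subtracting 9 when the result exceeds 9) and require the total to be
 * divisible by 10.  Catches any single-digit typo and most transpositions. */
int luhn_valid(const char *num)
{
    int sum = 0, dbl = 0;
    for (int i = (int)strlen(num) - 1; i >= 0; i--) {
        int d = num[i] - '0';
        if (dbl) {
            d *= 2;
            if (d > 9)
                d -= 9;
        }
        sum += d;
        dbl = !dbl;
    }
    return sum % 10 == 0;
}
```

The prefix carries the issuer (4 for VISA, 5 for Master Card, 37 for AMEX),
so a POS terminal can both route a transaction and reject a mistyped number
before anything goes on the wire.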

I also learned something I had not known to that point: Banks are not one
entity.  Each bank is really a collection of competing fiefdoms, whose arena
of competition is both physical and financial and whose competitors are bank
Vice Presidents.  Bank Vice Presidents may adequately be characterized by two
of their criteria for new computer systems: 1. It must fit in an area which
the responsible VP "owns" and 2. It must be as incompatible as possible with
other systems currently in use within the entire bank.  I considered this
revelation an answer to the question of why so many small, unusual computer
companies can keep making a profit.  To the bank VP mentality standardization
is just a way of losing control of the barony, the land and the serfs.

As part of the POS package, and since I was responsible for the credit card
company protocols, I had to deal with Telecredit.  Telecredit assumes the
financial responsibility in cashing a customer's check, for a price.

The retailer types in your driver's license number, the issuing state, your
name, and the amount of the check.  A packet containing the typed data is
massaged by the POS terminal driver into "standard internal" format and is
sent to the Telecredit driver for transmission.  The driver formats the data
into a packet and ships it to Telecredit.  Telecredit sends back an OK or
failure, which is reformatted and displayed on the originating POS terminal.
(I am simplifying.  There were also error conditions and some subtle failures.
A card company may return codes like "Keep the card."  We were asked to filter
this response, since it was originally intended for ATM's, not some poor clerk
at a POS asked to take Arnold Schwarzenegger's card away from him.  A friend
who works with these POS terminals tells me:

    "Keep the card" is definitely meant for retail personnel, who are
    often instructed to retain a credit card and return it to the card
    company, who promptly issues a check made out in the name of the clerk
    (for hazardous duty :-).  This happened to me a couple of times.  I
    made $25 each time.  In each case, the customer shrugged and handed me
    another piece of plastic.)
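The check-authorization flow above can be sketched as a record the POS
driver builds before handing it to the Telecredit driver.  All field names
and sizes here are invented; the real "standard internal" format was
proprietary.  Note that the amount is carried as a single count of cents,
assembled from the dollars and cents the clerk keys in separately.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical "standard internal" check-authorization record.  The POS
 * terminal driver fills one of these from the clerk's keyed-in fields;
 * the Telecredit driver then reformats it into the wire protocol. */
struct check_auth {
    char license[20];   /* driver's license number, as typed */
    char state[3];      /* issuing state */
    char name[40];      /* customer name */
    long amount_cents;  /* check amount, in cents */
};

/* Build the record.  Dollars and cents arrive separately, the way a
 * keypad with a decimal key delivers them. */
void build_auth(struct check_auth *rec, const char *lic,
                const char *state, const char *name,
                long dollars, long cents)
{
    memset(rec, 0, sizeof *rec);
    snprintf(rec->license, sizeof rec->license, "%s", lic);
    snprintf(rec->state, sizeof rec->state, "%s", state);
    snprintf(rec->name, sizeof rec->name, "%s", name);
    rec->amount_cents = dollars * 100 + cents;
}
```

The single point where dollars and cents are combined into one figure is
exactly the kind of spot where a small mistake silently destroys every
transaction that passes through it.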

I tried to find out how Telecredit works, but the surliness of the Telecredit
people was amazing.  ("We're not about to tell you our business!")  It was not
difficult to envision how it works, though nothing was ever verified.  An OK
from Telecredit means that Telecredit will assume the financial risk in
cashing the check.  It is a service purchased by the bank.  My guess is that
unlike credit card companies, which expect transaction data to come from cards
provided at your request, Telecredit maintains a large database of data
collected only by interaction with the public.

Some time ago, they started with an empty database.  Using statistical data
collected from friendly banks (who don't usually publish such things) on the
number of bad checks passed, they gamble on license numbers for which they
have no track record.  After the first gamble, they have a track record for
that individual, positive or negative.  From then on they make better and
better guesses based both on individual and statistical analyses of their own
database. (Which is probably more accurate than one thinks, considering that
the data it collects is biased for "those who will put up with a Telecredit
check".)  One thing I could not determine was whether they absorbed other
databases containing data supposedly indicative of financial reliability, like
the standard credit references one can obtain on request.

More from my retailing friend:
    When we (at my store) were deciding whether to become a Telecredit
    customer, they were willing to tell us quite a lot about how they
    worked.  Generally your guess is correct.  They do work as you
    describe, except that what I was told is simpler than your reverse
    engineering.  They told us that any customer was considered innocent
    until proven guilty once.  A bad check (they covered these if they
    approved them; for us, it was a form of insurance) was good for 90
    days on their "bad-check passer" list.  They also told us they were
    independent of anyone else's data.

One day, I was approached by my manager (not a bank employee) and asked to
look at the Telecredit transaction log, not normally part of my job.  The
printout was five feet tall.  The manager smiled, thumbed through one stack
and said, "You don't have to look hard to see what I'd like you to see."  As
he thumbed through the reports, labeled "-- Telecredit Transactions --
<date> --", the column containing the check amount twinkled slightly in the
last two digits, but to the left of the decimal point there was nothing, a
blank.  We
kept looking.

Every transaction was for less than a dollar, all 160,000 of them.  The vast
majority were exactly zero.  And every transaction was OK'd by Telecredit.

You'd think that something in the software at Telecredit might catch this.  I
thought so too, until it was explained to me that it was common for retailers
to run a "$0.00" transaction simply to see if the driver's license was valid.
A certain misuse of the system.  An OK didn't mean the driver's license was
OK, it meant Telecredit had no current negative record for the license.

Many of the transactions were for large sums, as the bank found out when the
retailers deposited them.  Since the "bank's" software had taken valid check
amounts varying from zero to tens of thousands of dollars and had truncated
the dollar figure from every transaction, it was not Telecredit's fault.  I
believe (though I could not verify) that the bank lost hundreds of thousands
of dollars.

The reaction of the bank officers was surreal, at least to someone like me,
who uses scientific notation for anything over a thousand.  They decided that
it was worth more to the bank not to withdraw the service, even temporarily,
than it cost them in bad authorizations.  So we were ordered to fix it live.

I found the bug in the POS terminal driver.  Some programmer I'd never hire
had tried to make a two pass loop out of an "ascii-to-integer" function and
zeroed the register within the loop.  It set a register to zero, parsed the
dollars, set the same register to zero and parsed the cents.  The register
then held binary cents.  I figured out a two instruction patch to the running
system and patched it live, made the corresponding change to the source,
compiled and installed it so the next restart would get the new code.
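Transcribed from assembly into C, the bug and the fix look roughly like this
(a sketch of the logic, not the original code):

```c
#include <assert.h>
#include <ctype.h>

/* The bug.  The programmer reused one digit loop as a "two-pass" parser
 * for an amount like "12345.67", but the accumulator (the "register")
 * was zeroed at the top of EACH pass: the dollars parsed in pass one
 * were thrown away, and only the cents survived. */
long parse_amount_buggy(const char *s)
{
    long acc = 0;
    for (int pass = 0; pass < 2; pass++) {
        acc = 0;                          /* BUG: wipes out the dollars */
        while (isdigit((unsigned char)*s))
            acc = acc * 10 + (*s++ - '0');
        if (*s == '.')
            s++;                          /* step over the decimal point */
    }
    return acc;                           /* holds only the cents */
}

/* The two-instruction fix: zero the register once, before both passes,
 * so the dollars keep accumulating straight through the cents and the
 * result is dollars * 100 + cents. */
long parse_amount_fixed(const char *s)
{
    long acc = 0;                         /* zeroed exactly once */
    for (int pass = 0; pass < 2; pass++) {
        while (isdigit((unsigned char)*s))
            acc = acc * 10 + (*s++ - '0');
        if (*s == '.')
            s++;
    }
    return acc;                           /* binary cents, dollars intact */
}
```

Because the cents field is always two digits, letting the dollars continue
to accumulate through it yields dollars * 100 + cents with no extra
multiply, which is why the original fix needed only two instructions.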

I think this illustrates many different computer risks:

1. One of my favorite computer risks involves the assignment of special
   human-oriented meanings to "zero" which are not fully accessible to the
   computer's understanding. (If you'll allow this loose usage of
   "understanding" as a synonym for "programming".)

   In different instances, Zero may mean:

    Uninitialized    False       Error      OK       Empty
    End-of-Line      Balanced    Neutral    Test     Waiting
    etc.

   Here it meant "test", but it was also an "error".

2. Telecredit is another group collecting cross-reference data on us.  Like
   monitoring mail systems to get "known associates" lists.

3. The bank already had POS software, at least two different versions.
   The policy of internal competition and the maintenance of separate
   fiefdoms inside the bank creates a chaotic environment which knowledgeable
   persons may exploit.  And have.

4. "What about testing?" you may ask.  We did test.  We tested a lot.  The
   software at fault was unchanged from the base system, one of the few things
   we took for granted, since it had been installed in at least 10 other sites
   across the country.  The teams working on the other sites had never noticed
   this.  I was not told why.  It is possible that we were guinea pigs, or
   the software was sabotaged.  (I know it sounds paranoid, but in that
   environment it was plausible.  I left the consultant/contractor world from
   this experience, miserable in many respects.)

5. Our money is managed by people who care nothing for the details of
   a single transaction.  They sell financial services like the grocer
   sells apples.  So what if a few are dropped on the floor?  It's just a
   "cost of doing business."  As the throughput of transactions increases over
   time, each detail gets commensurately smaller.  It can only get worse.

David G. Grubbs, Cognition Inc., 900 Tech Park Drive, Billerica, MA 01821
UUCP: ...!{mit-eddie,talcott,necntc}!dandelion!dgg  
Internet: dgg@dandelion.ci.com                             (617) 667-4800


Re: PIN Verification

John Pershing <PERSHNG@ibm.com>
17 November 1987, 10:35:23 EST
I cannot speak for *all* ATM and POS systems, but the major banks
generally know what they are doing with respect to PIN security.  The PIN
number is *not* stored on your ATM card — it is stored in your bank's
database and, possibly, in one or more interbank clearinghouses.  This
makes it possible to have your PIN changed without getting the card
re-magnetized (assuming your bank has its act together).  Note that your
account number probably isn't even written on the card — only a number
that identifies that particular card.

When you enter your PIN and transaction request, this data is put into a
message along with a serial number, encrypted, and sent up to the
computer center.  The proffered PIN is checked against the one in the
database, and the response to the transaction request (yea or nay) along
with another serial number is encrypted and sent back to the ATM.  There
are systems in operation (e.g., the major interbank clearinghouses) that
can consistently give near-instantaneous responses, so it may seem that
PIN verification is being done locally at the ATM.
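The round trip can be sketched as follows.  The "cipher" here is a trivial
repeating-XOR stand-in, NOT DES, used only to keep the sketch
self-contained, and all structure and field names are invented; only the
shape of the exchange (encrypt with a serial number, send upstream, compare
the proffered PIN against the database copy) follows the description above.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical ATM-to-host message.  The serial number makes every
 * message unique, so a recorded message cannot simply be replayed. */
struct atm_msg {
    uint32_t serial;        /* per-message serial number */
    char     card_id[16];   /* card identifier, NOT the account number */
    char     pin[5];        /* proffered PIN, as keyed in */
    long     amount_cents;  /* requested transaction amount */
};

/* Toy stand-in for DES: XOR with a one-byte key.  XOR is its own
 * inverse, so the same call encrypts and decrypts.  Not a real cipher. */
static void toy_crypt(void *buf, size_t len, uint8_t key)
{
    uint8_t *p = buf;
    for (size_t i = 0; i < len; i++)
        p[i] ^= key;
}

/* Host side: decrypt in place, reject stale serials (replays), then
 * compare the proffered PIN against the one in the bank's database --
 * the PIN never lives on the card. */
int host_verify(struct atm_msg *enc, uint8_t key,
                uint32_t last_serial, const char *db_pin)
{
    toy_crypt(enc, sizeof *enc, key);
    if (enc->serial <= last_serial)
        return 0;                   /* stale serial: possible replay */
    return strcmp(enc->pin, db_pin) == 0;
}
```

In a real system the key is a DES key shared between the ATM and the host
(or a clearinghouse), so a wiretapper who lacks it can neither read PINs
nor forge a fresh-looking message.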

If one were to go out and buy an ATM or a Point-Of-Sale cash register, it
wouldn't help much in "breaking" PINs unless you could get yourself wired
into a bank's network — something which is *quite* hard to do.  Even if
you could tap in, you would probably have a hard time guessing the (DES)
encryption keys needed for upstream and downstream traffic without
raising an alert on the network operator's console.

An aside:  The length of a PIN number, as we all know, is a compromise.
It needs to be short enough so that it is readily memorized, to
discourage the practice of writing it down (usually on the ATM card
itself!).  In my meager experience, 4 digits is even too long to satisfy
this constraint for the bulk of the population.  Keep in mind, too, that
one must have the physical "key" (the ATM card) in addition to the PIN;
so it is not quite as big an exposure as, say, a short logon password.

John A. Pershing Jr., IBM, Yorktown Heights


Re: More on computer security

<[..., at contributor's request]>
17 Nov 87 11:27:33 PST (Tue)
Some recent security developments at my site (and others) are starting
to cause complaints.  Where military and some civilian sites have these
requirements pro forma, people have a greater feeling of doing work
in the public interest.  Recent fears of system crackers entering, reading
private data, and potentially modifying any data put the Government
in an awkward position between police state and protector of the public
interest.

This makes working in networked environments even more interesting.
A sample login session now appears as follows:

Welcome To [. . . that single word is part of the problem . . .]

[. . . this part of session removed . . .]

You are connected to a U.S. Government computer system.  Any unauthorized
ATTEMPT to gain access to this system may subject you to fine and/or
imprisonment.

Username: [. . .this part of session removed . . .]
Password:

        NOTICE:  This system does not provide access control
        protection sufficient for the storage, processing,
        or communication of classified, highly sensitive, or
        proprietary information.  Protection of other sensitive
        information is a USER RESPONSIBILITY.
<End-of-session>

We are just now getting complaints because of words like "imprisonment."
The justification for this, coming from DOJ, was: "If you have a `fence'
that delimits property, don't come to us if you don't have warning signs
[spaced at such and such a distance and of such and such a size].  We can't
prosecute."  Some of our users are using words similar to a recent net
posting mentioning Draconian police states, and hence, are thinking of going
to the ACLU.  These warnings are currently restricted to certain types of
systems, but technology changes potentially worsen the problem: networks
(from which they are threatening to disconnect totally), workstations and
personal computers, and so forth.

The text comes from Washington DC.  Middle management does not feel it can
explain all of the justification, since the material comes from a document
under the new Sensitive classification.  They feel it was done with a bad PR
job.  But they also realize that Congressmen control overall spending: a site
seen as not being responsible with Government property (computing) can have
it taken away.

I am curious what other scientific sites are doing along these lines.
Please skip the obvious answers (secure computers aren't on networks;
passwords; encryption).  We are talking about civil liberties as part of
this discussion.
