The RISKS Digest
Volume 16 Issue 46

Wednesday, 19th October 1994

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Risks of putting out RISKS
Software Bug Cripples Singapore Phone Lines
Lee Lup Yuen
Cellular Phone Scam
PGN
Barclays Bank Banks Big-Bang Bump-up (a success story)
Brian Randell
Data security in Iceland
Haukur Hreinsson
Memory Chip Theft
R. Szczesniak
Risks of not thinking about what you're stealing
Mark Brader
Calling Number ID debate
Phil Agre
Creating kidspaces on the net
Prentiss Riddle
And you thought one-letter passwords were RISKy ...
Dan Astoorian
Info on RISKS (comp.risks)

Risks of putting out RISKS

"Peter G. Neumann" <neumann@chiron.csl.sri.com>
Tue, 18 Oct 94 15:20:44 PDT
* I have been travelling extensively for the past few weeks (National
  Computer Security Conference, AAAS/ABA Conference on Computers,
  Ethics and the Law, National Research Council crypto study, etc.) and
  have had neither much net access nor much free time.

* I returned to find my monitor fried as a result of the FOURTH recorded
  squirrelcide at SRI --- which brought down the entire institute last
  Wednesday (for something like eight hours) and created all sorts of internal
  power surges, despite the isolation supposedly provided by our cogeneration
  plant hookup.  [I presume this squirrel got turned into rodental floss.]

By the way, the RISKS book (Computer-Related Risks, ACM Press and
Addison-Wesley) is finally available.  I thank all of you whose contributions
to RISKS are noted therein.

PGN


Software Bug Cripples Singapore Phone Lines

Lee Lup Yuen <lupyuen@singnet.com.sg>
Thu, 13 Oct 1994 21:11:36 +0800 (SST)
From The Straits Times, 13 Oct 94:

A bug in newly installed computer software knocked out two-thirds of
Singapore's telephone lines late yesterday morning.  Handphones, fax
machines, pagers and credit cards were all hit by the disruption which
began at 11.31am in the City Exchange.  It took Singapore Telecom's
engineers about five hours to get services back to normal again.

At a press conference last night, Brig-Gen Lee Hsien Yang, Singapore Telecom's
deputy president and executive vice-president for local services, said the
disruption hit 65 per cent of the lines.  Subscribers on the remaining 35
per cent of the lines had trouble putting calls through.

The problem started when a software bug corrupted one of the two common
channel signalling systems.  These systems link the different exchanges.
The problem soon spread to all but two of the 28 exchanges.  Only the Ama
Keng and Pasir Ris exchanges were not affected.  When Telecom engineers
shut down the affected channel, this caused the other to overload.

Brig-Gen Lee said the new and the older systems run side by side and serve
as mutual back-ups.  Had there been only one system, the whole island's
telecommunications network would have been crippled.


Cellular Phone Scam

"Peter G. Neumann" <neumann@chiron.csl.sri.com>
Wed, 19 Oct 94 12:01:31 PDT
Clinton L. Watson, 44, was arrested on 18 Oct 1994, along with his son and a
family friend, and charged in San Jose, California, with three counts of wire
fraud and grand theft, with a possible prison sentence of 30 to 45 years.
Watson allegedly altered and sold more than 1000 cellular phones with
illegally acquired identifiers, whose use resulted in millions of dollars of
phone calls being billed to unsuspecting persons.  Legitimate cellular-phone
identification numbers were allegedly captured using scanners and entered
into clone phones fabricated from new programmable chips --- chips that let
the original identity numbers be overwritten, and the purloined identifiers
be swapped out easily whenever they were disabled because of detected misuse.
(The Secret Service
noted that Watson is currently on probation from a 1988 conviction on 14
counts of wire and mail fraud in Missouri.)  [Source: Article by Maria Alicia
Gaura, San Francisco Chronicle, 19 Oct 1994, p. A11.]


Barclays Bank Banks Big-Bang Bump-up (a success story)

Brian Randell <Brian.Randell@newcastle.ac.uk>
Thu, 13 Oct 1994 15:50:56 +0100
Attached below is the full text of the front page summary of a longer
inside-page article in (UK) Computer Weekly for Oct 13. Nice to report on a
risk that apparently paid off! :-)

Client-Server Gamble Pays Off for Barclays, by Tony Collins

Barclays Bank this week went live with the UK's largest client-server system
in spite of internal documents warning of the "high-risks" of its Big Bang
approach.  Without any announcements to the public about the project, the bank
stopped up to 10,000 end-users accessing the bank's main customer systems for
the whole of Friday and Saturday morning.  And while end-users reverted to
manual procedures, the IT staff commandeered the bank's mainframes to build a
new database holding 25 million customer accounts ready to go live on Monday
morning.  More than 1,200 IBM RS/6000 servers had already been installed in
branches ready to link up with the new database.

Barclays refuses to disclose the cost of the project, but it is believed to
have invested £110m in customer-based systems.  Andersen Consulting, which has
had an average of 50 consultants managing and helping to develop the new
system, was warned by the bank that it would not be paid - and may never work
for Barclays again - if the systems did not go live.  But by noon on Monday
about 800 of the 1,000 branches planned for that time had gone live. The
remainder had local problems configuring their RS/6000 systems, running the
Ingres database, to interface with the new IBM DB2 mainframe database.  Later
on Monday, nearly 1,100 branches had gone live, leaving only about 12 branches
with residual hardware and configuration problems.

The pioneering project's success vindicates the "Big Bang" approach for major
client-server systems, and may lead to other major users being more
forthcoming with similarly large implementations.  But Barclays briefing
documents issued before Monday had warned of the risks of opting for a Big
Bang rather than region-by-region implementation.  "It is a massive
undertaking but we have to use this [Big Bang] approach . . . since to create
one master database we cannot have existing systems running concurrently,"
said one briefing document.

The new customer system, the biggest IT project in the bank's history,
replaces three incompatible databases - which previously covered customer
accounts, Barclaycard and financial products - with a single database.  Bill
Gordon, managing director of the banking division, said the new database
replaces "current systems and practices that have hampered our ability to
serve the customer".  The new system will allow bank staff to see customer
transfer files and approve or reject a new account within 30 minutes and will
use computerised credit scoring to give an almost instant "Yes" or "No" on
loan and overdraft requests. Customer records will no longer be "owned" by a
particular branch or manager but by the bank as a whole.

Dept. of Computing Science, University of Newcastle, Newcastle upon Tyne,
NE1 7RU, UK  Brian.Randell@newcastle.ac.uk   PHONE = +44 91 222 7923


Data security in Iceland

Haukur Hreinsson <hauh@ismennt.is>
18 Oct 1994 19:23:48 -0000
I laughed when I heard this on the news about a month ago:

There's this relatively well-known writer here in Iceland named Thrainn
Bertelsson. He was working on a script for a movie that is supposed to be shot
sometime soon. Then somebody goes ahead and breaks into his office, taking
care not to forget the computer on the way out; nothing else was stolen. Now
the writer realizes that, not only does someone have his script, but that
someone has the *only* copy of it in the world. The man hadn't made a backup
from day one!

It's sad, I guess, especially when you consider that the guy was desperate
enough to immediately offer hard cash for the data and send out a plea to the
perpetrator.  It looks like the perpetrator was listening, because he gave the
computer back ... after wiping the entire hard disk.  And that was a
government-grade wipe, not a single overwrite!

I haven't heard any more; I guess Thrainn is busy rewriting his little
script now. I don't know if I should laugh or cry.

Haukur Hreinsson  Hagamel 20  IS-107 REYKJAVIK  Iceland hauh@ismennt.is


Memory Chip Theft [Losing your memory is contagious?]

R. Szczesniak <u9236635@sys.uea.ac.uk>
Sun, 16 Oct 94 13:20:21 BST
In Manga Mania (Nov 94):

Thieves broke into a London finance firm recently and stole all of the memory
chips from the firm's computers - over UK#5000's worth! That night, three
other outfits also lost their memory to the same gang. The week before,
another London company was hit, and replaced its chips only to be hit again
three days later.  Unlike banknotes, chips have no serial numbers and are
almost impossible to identify.


Risks of not thinking about what you're stealing

Mark Brader <msb@sq.com>
Jack Decker (ao944@yfn.ysu.edu) posted this to alt.dcom.telecom
and comp.dcom.telecom.tech:

  So tonight I get a call from my friend who sells beepers.  He just
  opened up a store on Thursday, and let's just say that there were
  still a few holes in his security.  Anyway, a couple of kids decided
  to each grab a beeper and make off with it.

  Now, my friend's beepers are all pre-activated, which means that if you
  buy one, you walk away from the store with a working beeper that you
  can start using immediately.  And these beepers were indeed activated.

  So when my friend gets back to his office, he starts to wonder if there
  isn't some way to get his beepers back.  And then he remembers... he has
  an 800 number that's provided by a company called Arch Telecom.  And
  Arch captures the ANI (Automatic Number Identification) of the calling
  party on all 800 number calls.  So he dials up the "missing" beepers and
  punches in his 800 number as the callback number.

  And believe it or not, the thieves CALLED HIM BACK!  Both of them!  And
  of course when he got the calls, he informed the kids that he now had
  their home phone number and it just might be a good idea if they stopped
  by his store tomorrow and either returned the pagers or paid for them!

  And THEN, he called the Arch Telecom customer service office (this was
  on a Sunday, mind you) and they looked up the ANI of the calls he'd just
  received.  He called the numbers back and got the PARENTS on the phone,
  and had a little chat with them as well.

  He told me that if by some odd chance the kids don't return his pagers
  first thing tomorrow, he'll take further action, but he really expects
  to see his pagers come back to him tomorrow.

  I figure that anyone who steals a beeper is pretty stupid anyway, since
  they are pretty useless if you're not paying for the paging service (and
  you can bet that if my friend hadn't received those calls, those beepers
  would have been deactivated tomorrow!) but to steal a beeper and then
  start calling numbers that appear on it... well, I'll bet those boys
  learned a little lesson about the capabilities of modern telephone
  technology (then again, maybe they still haven't figured it out... thieves
  generally aren't all that bright to start with!).

  I just thought it was funny that those kids would be dumb enough to call
  an unknown 800 number that appeared on a pager they had just stolen.  I'm
  sure that there have been dumber thieves, but these two sound like they
  might have half a brain between them!  :-)

  If you, too, find this amusing, feel free to cross-post it to any
  appropriate newsgroup if you like (e.g. rec.humor.* - I only posted this
  to the telecom groups) but please keep the whole thing including my .sig
  file intact.

  Jack Decker   aa931@detroit.freenet.org =or= ao944@yfn.ysu.edu


There were several followups, including one by Stephan Piel
(spiel@unix.cc.emory.edu) who told of a murder victim whose pager
had also been stolen.  The police called the pager, and sure enough...

And Dale Farmer (dalef@bu.edu) wrote about an acquaintance whose job
involves lending out pagers for things like trade shows.  These go
missing often enough that he has developed a repertoire of messages
to send when it happens.  Dale quoted three:

  "This pager has been deactivated, please return it to [address]"
  "Stolen pager tracing activated"
  "Pager tracing successful, located in [city]"

Mark Brader, msb@sq.com  SoftQuad Inc., Toronto


Calling Number ID debate

Phil Agre <pagre@weber.ucsd.edu>
Thu, 13 Oct 1994 14:13:13 -0700
Calling-Number ID (abbreviated CNID [and sometimes misnamed Caller ID]) is a
technology that enables your telephone to digitally send its phone number to
the telephone of anybody you call.  Controversy about privacy issues in CNID
has swirled for years.  The NYT has an article on the subject:

  Matthew L. Wald, A privacy debate over Caller ID plan, *The New York Times*,
  13 October 1994.

The United States Federal Communications Commission recently proposed rules,
due to go into effect in April, to create uniform CNID protocols across state
lines.  While the FCC plan does protect privacy in some ways, e.g., preventing
a business that captures your phone number from selling it to others without
your permission, it does not mandate per-line blocking, which is necessary if
you never want to send out your phone number, or if you only want to send it
out when you enter a special code.

The article states clearly that the real reason for CNID is commercial.
Privacy advocates have been saying this for years, and for a long time they
have gotten patronizing lectures about how CNID is for residential use in
catching harassing phone callers.  But CNID is a poor way to catch harassing
phone callers.  Moreover, that single application wouldn't nearly make CNID
profitable.  The point is that CNID is a good way to let companies collect
marketing information and automate service interactions.

Which is fine.  Hardly anybody opposes CNID outright.  But in order for CNID
to avoid inadvertently giving away the phone number of someone who is being
stalked, or who otherwise needs to keep their number a secret, it needs a few
simple features:

 * per-line blocking — a simple, no-cost way to declare that this telephone
   should not send out its number when dialling

 * per-line unblocking — a simple, no-cost way to declare that this telephone
   now *should* send out its number when dialling

 * per-call blocking — a simple, no-cost way to declare that, regardless of
   whether this line is blocked, this particular call should not include the
   calling number

 * per-call unblocking — a simple, no-cost way to declare that, regardless
   of whether this line is blocked, this particular call *should* include the
   calling number

In order for people to get the benefit of these commands, some further rules
are needed:

 * All four of these commands should be entered with *different* codes.

 * Most especially, the blocking and unblocking commands should not be
   implemented with toggle commands (for example, *67 blocks the line and
   then another *67 unblocks it — or, wait!, did the first *67 unblock
   the line so that the next *67 blocked it?).  The short sketch after this
   list illustrates the ambiguity.

 * All of these commands (or at least the per-call ones) should take effect
   instantly, without requiring a pause before dialling a number, so that
   phone numbers stored in modems can include the codes.

 * All of the commands should be standardized everywhere.

 * All of the commands should be clearly and concisely explained in some
   convenient place in the phone book.  If at all possible, the commands
   should be listed on a simple cue card that can be attached to the
   telephone alongside the emergency numbers.  (Of course, if a telephone
   had a real user interface then cue cards would not be necessary.)
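
A minimal sketch, with placeholder names rather than anything taken from the
FCC proposal, of why four explicit commands behave differently from a toggle:
each command states the desired outcome, so the result never depends on
remembering the line's current state.

  # Sketch only: this models the rules above; nothing here corresponds to
  # actual tariff codes or switch software.

  class LinePrivacy:
      """Tracks whether a line sends its number, with per-call overrides."""

      def __init__(self, blocked=False):
          self.blocked = blocked            # per-line default

      # Explicit, idempotent commands: issuing either one twice is harmless.
      def block_line(self):
          self.blocked = True

      def unblock_line(self):
          self.blocked = False

      def send_number(self, per_call=None):
          """True if this call discloses the calling number.
          per_call is None (use the line default), 'block', or 'unblock'."""
          if per_call == 'block':
              return False
          if per_call == 'unblock':
              return True
          return not self.blocked

  # Contrast: a toggle.  After a couple of presses of the same code, the
  # caller cannot know the current state without testing it.
  def toggle(line):
      line.blocked = not line.blocked

  line = LinePrivacy(blocked=True)              # e.g., a shelter's line
  assert line.send_number() is False            # default: number withheld
  assert line.send_number(per_call='unblock')   # explicit per-call release

Issuing block_line twice leaves the line blocked; issuing toggle twice
silently undoes the first press, which is exactly the "wait, did the first
*67 unblock the line?" problem.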

Don't all of these rules sound like common sense?  Of course they do.  They
allow everyone complete freedom of choice.  If you like CNID then you can turn
it on and forget about it.  If you want to refuse calls that do not include
caller numbers then you're free to do that.  If you don't care to call anyone
who requires a caller number then you're free to adopt that policy as well.
If you never want to send out your number because you're being stalked or are
running a shelter then you can do that.  Free choice.

So why do proponents of CNID go to extraordinary lengths to defeat these
simple, ordinary protections?  Because they're afraid that large numbers of
people would use per-line blocking, thus making the system less attractive to
the businesses who want to capture lots of phone numbers.  Like many schemes
for using personal information, then, CNID is founded on trickery — that is,
on the gathering and use of information without free choice, full informed
consent, and convenient, easily understood mechanisms for opting out.

You might ask, "doesn't per-call blocking alone provide the necessary choice?"
No, it doesn't.  Per-call blocking is like saying, "every single time you
drive your car into a gas station, your car instantly becomes the property
of the gas station unless you remember to say abracadabra before you start
pumping your gas."  In each case, the cards are stacked against your ability
to maintain control over something of yours, whether your car or your
information.

What can you do?  Write a letter to the FCC, with a copy to your state
attorney general and public utilities commission and to your local newspaper.
Send them the list of CNID commands I provided above.  Spell it out for them,
and provide answers for the obvious pro-CNID arguments.  Your state regulators
might even agree with you already, in which case they need your support.

For more information, send a message that looks like this:

  To: rre-request@weber.ucsd.edu
  Subject: archive send cnid

Or contact the organizations that are working on this issue:

  * Computer Professionals for Social Responsibility, cpsr@cpsr.org
  * Electronic Privacy Information Center, epic@epic.org
  * Electronic Frontier Foundation, eff@eff.org

Or start something of your own.  The best way to predict the future, after
all, is to create it yourself.

Phil Agre, UCSD


Creating kidspaces on the net

Prentiss Riddle <riddle@is.rice.edu>
Wed, 28 Sep 1994 12:38:13 -0500 (CDT)
Pardon the lengthy cascade, but I believe it illustrates some of the
RISKy thinking on this topic.  This is from a thread in
alt.internet.media-coverage and elsewhere:

Michael Dillon (mpdillon@halcyon.com) wrote:
: Bruce Robertson <broberts@sam.neosoft.com> wrote:
: > tappd@sam.neosoft.com writes:
: > > I mentioned to the folks at Neosoft that they should package a
: > > "parent-lock" account...just like their standard stuff, but the
: > > account owner could set the acceptable newsgroups at the server
: > > with a special password so that the client reader never saw
: > > them.  They expressed some minor interest
: >
: > There's no simple way for a service provider to do this.  Nor anyone
: > else, that I can see.
:
: I'm surprised that a Senior TECHNICAL Editor at such a magazine
: can't think of half a dozen ways to do this. Indeed it is
: technically feasible to make a "safe space" on the Internet
: for kids and in some ways it is almost a trivial thing to do.
:
: As for justification, do we hold school classes on the streets
: in a red light district or in a bar? No, we create a "safe space"
: for youngsters and hold classes there, in the school.
:
: The moderated newsgroup mechanism combined with controlled
: feeds (somewhat like clari.*) could be easily adapted
: to create newsgroups in which only registered people or
: registered sites could post. This would merely involve
: a bit of work with shell scripts or PERL scripts.
:
: Securing the Web or Gopherspace would be a little more challenging
: but a start would be a modified Gopher or WWW client that
: can only access authorised sites or sites registered with
: a central authority.

I think that this approach is naive, but not *entirely* unfeasible.
Some general points:

(1) There is no way to do this solely at the client side.  Existing
services on the Internet are not reliably marked as to their
suitability for children.  Only by creating centralized authorities on
the *server* side could one add the markers needed to limit children's
access to objectionable materials.  So in the context of what a service
provider like Neosoft can do, Bruce Robertson was quite right.
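
To see why, consider what a "modified Gopher or WWW client that can only
access authorised sites" would amount to: an allowlist check inside the
client.  A rough sketch (the registry contents here are invented for
illustration):

  # Sketch of a client-side allowlist filter; the "registry" is invented.
  from urllib.parse import urlparse

  APPROVED_SITES = {                 # hypothetical central registry
      "gopher.example.edu",
      "www.example-school.org",
  }

  def allowed(url):
      """Permit a fetch only if the URL's host is on the approved list."""
      host = urlparse(url).hostname or ""
      return host.lower() in APPROVED_SITES

  assert allowed("http://www.example-school.org/lessons.html")
  assert not allowed("gopher://unregistered.example.com/")

Because the check runs in the client, anyone who obtains an ordinary,
unmodified client bypasses it entirely (see point 7 below); an enforceable
restriction has to be applied where the documents are served, not where they
are read.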

(2) If the net establishes widely used protocols to allow central
authorities to control what children can and can't see, then those same
protocols could be used to control what adults can and can't see.  We
should be aware of this consequence before we adopt such technology.

(3) There are fundamental problems of scale in any such system which
would necessarily turn a many-to-many interactive medium into a
few-to-many broadcast medium, thus eroding the principal benefit of
computer networks.  You and I would be unlikely to have this
conversation under such a scheme.

(4) The major attraction of distributed document retrieval systems like
the World Wide Web and gopher is that they are *distributed*.  Many
individuals and institutions working in parallel come up with a greater
variety of useful resources than could ever be planned and executed by a
single, centralized organization.  This is fundamentally at odds with
the concept of a central registry of "approved sites".  Furthermore, a
central approval authority would have to be not merely a registration
service for sites which educators had deemed harmless, but would have
to exercise *control* (through a contract or other means) over the
content at those sites.  Otherwise, one would continually find that
yesterday's kid-safe site had today been blemished by objectionable
material.

(5) One fallacy common to proponents of isolated child-friendly
networks is that problems only occur when children can communicate
freely with adults other than their teachers.  In fact, children are
very good at communicating objectionable material to other children.
(I'm sure we can all think of examples from our own childhood.) What's
more, I suspect that the average adolescent (male, at least) is far
more likely to publish objectionable material than the average adult,
which casts doubt on the advantages of isolated K-12 networks.  Only by
limiting the children's ability to publish information — denying
children access to E-mail, for instance — could one be sure that an
isolated educational network would be free of objectionable material.

(6) The existing Usenet moderation mechanism is notoriously easy to
spoof.  Any bright 12-year-old with access to the manuals could figure
out how to forge an "Approved:" line.  Any rating or registration
mechanism would have to have complex authentication mechanisms built
into it or face the likelihood of being foiled (probably by the kids
themselves).

(7) Any system based on adding a backwards-compatible restriction
mechanism to existing protocols runs the risk of being foiled by users
surreptitiously gaining access to unrestricted versions of the same
client software.  Thus a kid-safe network would either have to
guarantee that children couldn't download an unhobbled version of
Mosaic, say, or it would have to isolate itself further by using
purposely incompatible protocols.

For all of these reasons, I have my doubts about the usefulness of
attempts to build an isolated "kidspace" on the net.  If you can't deal
with the risk that some children at some time will run into some
objectionable material, keep them off the net and stick to CD-ROMs.
(Or, the cynical might say: keep them locked in a closet and don't
teach them to read.)

Prentiss Riddle Systems Programmer and RiceInfo Administrator, Rice University
2002-A Guadalupe St. #285, Austin, TX 78705 / 512-323-0708 riddle@rice.edu


And you thought one-letter passwords were RISKy ...

Dan Astoorian <djast@utopia.druid.com>
Tue, 4 Oct 1994 23:31:04 -0400
Another anecdote illustrating the RISKS of ill-designed user interfaces....

Our office E-mail system is Microsoft Mail.  A couple of weeks ago I was
chatting with a co-worker, and asked her whether she had read a certain
mail message that morning.  She told me she hadn't, because she was
having problems with her password in Mail.

She told me that in the middle of changing her password, the window just
"disappeared," and that now her old password wasn't working anymore.

After thinking a bit about how the "change password" dialog works, I
figured out what had happened.  The form has three fields:

             Old Password: ********
             New Password: ********
    Re-Enter New Password: ********

along with the OK and Cancel buttons.  The OK button was only active
when the New Password and Re-Enter New Password fields matched.

What she had done, of course, was type her old password and press Enter,
thinking this would move her to the New Password field (she should have
pressed Tab); unfortunately, Enter meant "OK" for the change password
dialog.  Since "New Password" and "Re-Enter New Password" were both
empty (and thus matched each other), pressing OK set her password to the
null string without her realizing what had happened.
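
The failure is easy to model.  The following is a reconstruction of the
dialog's apparent behaviour, not Microsoft's actual code: OK is enabled
whenever the two new-password fields match, Enter activates OK, and two
untouched fields match each other.

  # Reconstruction of the apparent dialog logic (a sketch, not MS Mail code).

  class Account:
      def __init__(self, password):
          self.password = password

  def ok_enabled(new_pw, confirm_pw):
      # The dialog apparently enabled OK whenever the two new-password
      # fields matched, including when both were still empty.
      return new_pw == confirm_pw

  def change_password(account, old_pw, new_pw, confirm_pw):
      if account.password != old_pw:
          raise ValueError("old password incorrect")
      if not ok_enabled(new_pw, confirm_pw):
          raise ValueError("new passwords do not match")
      account.password = new_pw          # happily accepts the empty string

  acct = Account("hunter2")              # example password, invented
  # Pressing Enter after typing only the old password submits the form with
  # both new-password fields empty; they match, so the password silently
  # becomes the null string.
  change_password(acct, "hunter2", "", "")
  assert acct.password == ""

  # A safer enabling rule also insists on a non-empty new password:
  def ok_enabled_safe(new_pw, confirm_pw):
      return bool(new_pw) and new_pw == confirm_pw

Binding Enter to OK is common enough; the hole is that the "fields match"
test treats two untouched fields as a valid answer.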

Perhaps it's reasonable for some systems to allow their password systems
to be disabled by setting a null password, but being able to do it by
accident is somewhat scary; it's almost farcical that my co-worker,
having deleted her password, was now unable to log on (since she was
presented with a password challenge, the *only* correct answer to which
was "nothing", yet she had no reasonable way of knowing this).

Dan Astoorian, Mississauga, Ontario, Canada  djast@utopia.druid.com
