The Risks Digest

The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 22 Issue 21

Tuesday 27 August 2002


VeriSign error teaches lawyer a lesson
Automation increases anxiety — with cause
Fuzzy Gorilla
Big Brother hiding inside cars' airbags
Monty Solomon
Keystone SpamCop summary and response
Edward W. Felten
SpamAssassin killed off RISKS-22.20
Danny Burstein
Re: "Homeland Insecurity"
Stephen Fairfax
Re: Your packets know the way to San Jose
Barry Margolin
Steve Wildstrom
Gene Wirchenko
R.G. Newbury
Re: YASST: Yet Another Silly Spam Trick
Re: Klez: The Virus That Won't Die
Scott Peterson
REVIEW: "Access Denied", Cathy Cronkhite/Jack McCullough
Rob Slade
Info on RISKS (comp.risks)

VeriSign error teaches lawyer a lesson

<Max <>>
Sun, 25 Aug 2002 08:16:20 -0700

Fremont, California, attorney Anu Gupta's Web site
was mistakenly transferred to a company in India, as the result of an error
by VeriSign.  (She helps people get visas, green cards and other documents.)
After five days of haggling with VeriSign, Gupta eventually regained control
of the site, but only after she threatened to sue.  E-mail sent during that
time disappeared, and could have included credit card and tax information.
[Source: Lawyer learns hard lesson on wild, wild Web, Peter Delevett, *San
Jose Mercury News*, 25 Aug 2002; PGN-ed]

From the article: VeriSign has garnered a reputation for shoddy customer
service and questionable marketing.  A federal court ruled in June [2002]
that the company had poached competitors' customers by sending them bogus
renewal letters, and several related lawsuits are pending. The Federal Trade
Commission is also investigating VeriSign's marketing. ...  The real pity
for Gupta and other disgruntled Internet users is that there's no
enforcement body standing up for them.

Automation increases anxiety — with cause

<"Fuzzy Gorilla" <>>
Sat, 24 Aug 2002 12:19:00 -0400

People are often worried about computerization for a good reason.  Even
though it is the same information, the potential risks from abuse are
increased.  Black lists, blackmail, and sending private information to the
wrong parties were all reported.  [FG]

There is considerable controversy in Japan at the moment over an attempt to
put personal information on-line in a family-registry database, along with
an 11-digit identifying number for everyone.  In addition to fears relating
to hackers and criminals, "one of their chief concerns is misuse of the data
by their own government."  Polls show huge majorities are against the
system.  [Source: Plans to Computerize Personal Data Ignite Firestorm in
Japan; Citing Privacy, Municipalities Defy Effort By Doug Struck, *The
Washington Post*, 23 Aug 2002, A18; PGN-ed]

From the article:

There is plenty of grist for public suspicion of bureaucrats. In May, the
Defense Agency admitted it had drawn up a list with names, backgrounds and
political views of citizens who had asked for public information from the
agency. Twenty-nine agency officials were punished. Last month, defense
contractor Fujitsu said it had gotten a blackmail demand from men who had
obtained personal information on military officers leaked from the company's
computers.  And just as Juki Net started up, embarrassed officials in the
city of Moriguchi in Osaka acknowledged they had sent personal information
about 2,584 individuals to the wrong people.

Big Brother hiding inside cars' airbags

<Monty Solomon <>>
Thu, 22 Aug 2002 19:09:12 -0400

On 11 Feb 2002 on Union Road in Trotwood, Ohio, a 1999 Pontiac Trans Am
skidded sideways off the road, went airborne for 110 feet, and eventually
hit a utility pole.  An estimate of the car's speed was upgraded after
examining an onboard electronic monitoring device in the airbag control
mechanism, which pegged the speed at 124 mph (in a 40-mph zone).  [Source:
*Dayton Daily News*, Cathy Mong; PGN-ed]

Keystone SpamCop summary and response (Re: RISKS-22.19)

<"Edward W. Felten" <felten@CS.Princeton.EDU>>
Mon, 26 Aug 2002 11:45:27 -0400

[HTML version (same text, but somewhat easier to read) available at]

I received 59 responses to my SpamCop narrative. Because there are so
many, I cannot respond individually to each one. Instead, I summarize
below the major arguments raised by the messages. I give sample text
from messages that asserted each argument, and I respond.

This posting is rather long, and some readers may not be interested in
the whole thing, but I think the people who sent me constructive
messages about the SpamCop incident deserved a response.

Argument 1: Blame the ISP, not SpamCop.


"The problem is not with Spamcop, but rather with your ISP. The ISP is
required to assert that they have dealt with the issue, not that they
have shut the website down. They can mark the issue 'resolved' with
spamcop and then work with you to discover the true nature of the
problem. The choice to shut the web site down now, and investigate
later, was your ISP's, not Spamcop's."

Another sample:

"However, in this case, your ISP is responsible for bouncing your
domain, not SpamCop. All SpamCop e-mails come with a link to the
original report, so it was _your_ ISP who failed to research this and
_your_ ISP who is to blame for suspending your site."

My response: Certainly my ISP is the party who actually pulled the plug
on my site. The ISP was intimidated by SpamCop and seemed to be trying
to show that it was responsive to SpamCop complaints. Hence the quick
shutoff of my account.

Yet even after I convinced my ISP that I was not a spammer, they still
refused to reinstate my site, saying that to do so before SpamCop
removed the complaint against me from its site would put the ISP's other
customers at risk. This refusal to reinstate my account is what
convinced me that the ISP was afraid of SpamCop.

Whether the ISP was right to fear SpamCop, I cannot say. What I know is
that the ISP chose to anger a paying customer, rather than risking what
they perceived as the wrath of SpamCop. The fact that SpamCop engenders
such fear is a big part of the problem. For me, the bottom line is this:
if SpamCop didn't exist, my site would not have been shut off.

Argument 2: SpamCop doesn't block sites, ISPs block sites.


"SpamCop ( blocks nothing. SpamCop does have a
DNS-based blackhole list that ISPs have the option of using---for
example, I use it for all my domains as a backup to my own block list."

Another sample:

"The spamcop blocklist is supposed to be used in order to tag certain
email as possible spam. It is not to be used to block email (although
some ISP's do use it that way)."

Another sample:

"ISPs also use Spamcop, but it is the ISP, not Spamcop, that makes the
determination whether something listed by Spamcop is deleted, flagged,
or passed through. I happen to delete."

My response: Nearly everybody who made this argument followed it by
saying that they themselves do automatically block sites on the block
list, or that many others do. This is hardly surprising. Even a perfect
block list would do little good unless people used it to block. The
alternative use of shunting aside email from sites on the list, and
reading it later, doesn't do much to address the spam problem. As far as
I can see, there are only two sensible things to do with a block list:
you can ignore the list, or you can use it to block sites.

That's why they call it a "block list." That's why SpamCop's site gives
instructions for configuring common mail servers to block addresses on
the list. SpamCop can hardly be surprised to see ISPs following these
instructions and blocking the addresses on what SpamCop itself calls a
"block list."

Argument 3: SpamCop is just a clearinghouse for spam complaints and
simply routes complaints that could have been sent even in the absence
of SpamCop.


"SpamCop is a machine. It summarizes and reports what human individuals
feed it.

Another sample:

"SpamCop is primarily a _reporting_ service which allows a user to
easily report email abuse to the appropriate authorities. It has a
parser which cracks email header information and figures out the true
source of the email (as much as possible) despite forged header information.

This is just the same as manually email[ing] a complaint, but automates
the header analysis (which can save a lot of time when the headers are
intentionally obfuscated).

A user does not 'send an accusation to SpamCop' but uses SpamCop to
email a complaint to abuse or postmaster addresses."

My response: SpamCop does more than just forward complaints.  It anonymizes
the complainant's address, thereby making it harder for the ISP receiving
the complaint to judge the complaint's credibility.  SpamCop puts the
complaint on the Web for others to see. And SpamCop tries to find patterns
among its complaints, and adds addresses to its block list based on these
patterns. All of these factors contributed to my dilemma.

If SpamCop were merely a complaint router, then SpamCop would be
ineffectual. It is SpamCop's "value added" that caused me trouble.

Argument 4: Blame the person who erroneously reported the "spam," don't
blame SpamCop.


"The SpamCop user, not SpamCop itself, is ultimately responsible for
what is sent. Each report has been individually submitted by a user,
then individually selected by the user before sending."

Another sample:

"The 'mistaken' reporter of spam violated SpamCop's terms of service,
period. It doesn't matter if you call 911 to report a fire or a
burglary: at the end of the day, individuals are responsible for their
reporting; the telephone company is not to be blamed for prank calls to
911."

My response: This is really just a variant of Argument 3, and fails for
the same reasons.

SpamCop is ultimately responsible for its reporting, too. The 911
analogy doesn't apply, since the phone company merely receives the
report but SpamCop repeats reports and amplifies them. SpamCop took what
would otherwise have been a private report, to be dealt with between the
reporter and my ISP, and posted it for the whole world to see. And it
gave the report increased credibility and force.

Argument 5: The attributes of SpamCop that Felten complained about are
necessary to prevent spam, or to prevent retaliation by spammers.


"In the absence of anti-spam laws with teeth, technical[ly] shunning
ISPs who deliberately harbor spammers is the only alternative to control
spam."

Another sample:

"[SpamCop anonymizes the complainant's address because] real spammers
might take action against a spam reporter, such as using their address
as the 'From' on a spam run."

My response: Yes, SpamCop's designers had good intentions. Yes,
effective spam-fighting was their goal. My point is that in their zeal
to fight spam, they built a system that overreacts to erroneous or
malicious spam reports. I for one would not be willing to accept that
kind of collateral damage, even if doing so would completely prevent
spam (which it cannot).

Several people said that SpamCop is slower to act against accused
parties than some other anti-spam services are. That may well be true.
If it is, then the other services are presumably causing even more
collateral damage.

Argument 6: Felten really is a spammer.

"[Quoting Felten: ]'Never mind that I had never sent a single e-mail
message from the site.'

Reply: Someone did:

Mail for is handled by (0)

If you look here, you will see two different headers that came from this
IP address, both of which are dated July 31:

Those are only examples; there could have been many more spams reported
through that address."

My response: I did not send those messages. The writer apparently believes
that if the messages came from "my" IP address, then I must be responsible
for them. But it's not my IP address — it's shared by many of my ISP's
customers. Perhaps the cited messages came from one of them.

This argument nicely illustrates the problem with SpamCop. By collecting
complaints in one place and indexing them, SpamCop facilitates the making of
this kind of accusation. And by repeating allegations made by others,
SpamCop gives them more credibility than they deserve.

SpamAssassin killed off RISKS-22.20

<danny burstein <>>
Thu, 22 Aug 2002 21:44:29 -0400 (EDT)

I run SpamAssassin (RISKS-22.08-10) using the default settings. (I push
tagged mail aside into a spam box for leisurely review so I'm not too
worried about false positives.)

It didn't like the latest issue of RISKS.  That means, alas, that if people
are using SpamAssassin to reroute (suspected) spam to the trash pile, or,
worse, if their ISP applies it before delivery, many copies never reached
the intended recipients.
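
The "push aside rather than delete" policy is easy to implement once
SpamAssassin has tagged a message. A minimal delivery-time sketch in
Python, assuming the standard X-Spam-Flag header that SpamAssassin adds to
mail it considers spam (folder names here are hypothetical):

```python
import email

def route(raw_message):
    # SpamAssassin marks suspect mail with "X-Spam-Flag: YES".
    # Filing tagged mail in a review folder, instead of discarding it,
    # keeps false positives (like a RISKS issue) recoverable.
    msg = email.message_from_string(raw_message)
    if msg.get("X-Spam-Flag", "").strip().upper() == "YES":
        return "spam-review"
    return "inbox"
```

The design point is that the filter's verdict only chooses a folder; no
mail is destroyed on the strength of a heuristic score.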

Re: "Homeland Insecurity" (RISKS-22.20)

<Stephen Fairfax <>>
Thu, 22 Aug 2002 20:05:10 -0400

Mann's article (RISKS-22.20) is indeed timely and well-written, but with all
due respect to both Mann and Bruce Schneier, I believe they miss some
important points.

It's fine to suggest that systems fail smartly, or well, or not be brittle,
but often designers are limited in choosing how systems fail.  Complex
systems have an annoying habit of exhibiting new and unforeseen failure
mechanisms.  Ultimately the failure mode is determined by the laws of physics
(for machines) or by human behavior, and is not easily controlled.  That isn't
to say that one can't select the most robust method of mitigating the
consequences of failure, but practically speaking, the options are often
quite limited.

What is not so severely limited, and what I feel is largely absent from the
present approaches to security, is formal, quantitative analysis of what
happens AFTER the first failure.  My company applies the techniques of
Probabilistic Risk Assessment (PRA) to high-reliability power systems for
data centers, banks, hospitals, etc.  There are many lessons to be learned,
but one of the most important is that of layers.  Once you understand that
a particular failure can occur, you examine its consequences and make an
informed choice about whether the system should be designed to continue
functioning after that failure.  If so, you generally need to add either
redundant components, or some new system to handle the failure.  In both
cases, you need to take care that the cause of the initial failure is
unlikely to compromise the response.

One can take advantage of knowledge of the system state after the failure
in designing the next layer of protection.  For example, if utility power
fails, you can use the fact that most outages are brief, less than a few
minutes, to rely on battery back-up rather than immediately starting
standby engine-generators.  This saves wear and tear on the engines, and
helps one to select the appropriate discharge time rating for the
batteries.  If the outage lasts more than 2 minutes, the engines are
started, and now the operators know that the outage is likely to last at
least 30 minutes or longer, and can plan their actions accordingly.
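
The battery-versus-generator trade described above is easy to quantify once
you have, or assume, a distribution of outage durations. The figures below
are hypothetical, purely to show the shape of the calculation:

```python
def battery_coverage(outage_minutes, battery_rating_minutes=2.0):
    # Fraction of outages ridden through on battery alone, i.e.
    # without starting the standby engine-generators at all.
    covered = sum(1 for d in outage_minutes if d <= battery_rating_minutes)
    return covered / len(outage_minutes)

# Hypothetical outage history (minutes): most outages are brief.
history = [0.2, 0.5, 1.0, 1.5, 45.0, 120.0]
```

With this made-up history, a 2-minute battery rating rides through four of
the six outages, so the engines start only a third of the time; a longer
rating buys further engine wear-and-tear savings that can be weighed against
battery cost. This is the kind of layered, quantified choice PRA formalizes.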

PRA formalizes and quantifies this kind of thinking.  Applied to the
problem of airport security, it offers a way to evaluate the effectiveness
of various proposals.  It doesn't take much analysis to show that
successive  "random" screenings, using the same tools, techniques, and
personnel as the original, 100% screening of passengers, adds essentially
zero value.  (Aside: I always ask to see the dice, and never have.  RISKS
readers know full well the process is not random, but merely a concealed
method of selection.) On the other hand, a targeted screen, applied after
the 100% initial screen, by specially trained individuals, and using
different methods (such as pointed, face-to-face questioning, as practiced
by some non-US airlines) can yield large improvements.  You can trade off
the training and tools applied to the initial screen and the secondary
screen to get the best result for a given level of investment.
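
The point about successive identical screenings can be made quantitative
under simple assumptions: if the second screen uses the same tools,
techniques, and personnel, a threat that slipped past the first will, to a
first approximation, slip past the second for the same reason, whereas an
independent screen with a different method multiplies the miss
probabilities. A sketch, with illustrative detection rates:

```python
def detect_independent(p1, p2):
    # Two screens using different methods, assumed independent:
    # combined detection = 1 - P(both screens miss).
    return 1 - (1 - p1) * (1 - p2)

def detect_repeated(p1):
    # Re-running the same screen on the same threat: whatever caused
    # the first miss causes the second, so detection is unchanged.
    return p1
```

For example, a 90% screen followed by an independent 50% targeted screen
yields 95% combined detection, while repeating the 90% screen yields 90%:
the second identical pass adds essentially zero value.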

Nearly all security analysis seems to ignore or completely discount the
actions of lawful passengers after security failures, but the examples of
Flight 93 and the apprehension of the would-be shoe-bomber suggests that
this layer of defense is very robust and surprisingly capable.  The "good
guys" vastly outnumber the "bad guys," our thinking should take advantage
of that fact!

Guns in the cockpit represent an independent layer that does not
automatically fail when screens fail.  While there is heated debate about
the possibilities of negative consequences, a dispassionate analysis of the
probabilities of both success and failure offers rather overwhelming
evidence that on balance, armed pilots will reduce both the likelihood and
consequences of hijacking attempts.

In summary, while it is certainly important to have systems fail gracefully
when possible, it is not always possible.  That does not excuse the
architects of security systems from performing careful, quantitative,
reviewable analysis of their designs.  Like cryptography, public review and
discussion of the algorithms used in truly well-designed security systems
will not compromise their integrity.

Stephen Fairfax, President, MTechnology, Inc., 2 Central Street
Saxonville, MA 01701 1-508.788.6260

Re: Your packets know the way to San Jose (Purvis, RISKS-22.20)

<Barry Margolin <>>
Thu, 22 Aug 2002 23:08:41 GMT

I think they may be overestimating how much traffic goes through MAE-West.

All Tier-1 ISPs have private peering interconnects; we don't use any of the
public peering points to exchange data with each other.  I don't have any
statistics to back me up, but I expect that most Internet traffic goes
through these private interconnects, not the public ones, which are used for
connections to and between smaller ISPs.

Also, MAE-West is just one of several public peering points in the
continental US, and nationwide backbones usually connect to each other
using at least two (we make that a requirement of all our peering partners
-- an ISP that can't meet our criteria has to purchase normal ISP service
from us, rather than being a peer).

Destroying that building would certainly have an impact on the Internet, as
all its traffic would have to be rerouted, and would cause congestion at
the other interconnects.  For the most part, this would happen
automatically (I qualified this, because some ISPs have misconfigured
routers, so they don't advertise all their routes at all the exchange
points), although it would probably take several minutes to stabilize.

To deal with the congestion at the other exchanges, I expect that most of
the Tier-1's would relax their transit rules, so that some of it would be
shunted to those private interconnects I mentioned earlier.  We did similar
things last year in the wake of 9/11.

Barry Margolin,, Genuity, Woburn, MA

Re: Your packets know the way to San Jose (Purvis, RISKS-22.20)

<Steve Wildstrom <>>
Thu, 22 Aug 2002 20:44:11 -0400

MAE West is only the beginning. There are also MAE East, MAE Central,
MAE Chicago, MAE Los Angeles, MAE Paris [MAE OUI?], and MAE Frankfurt
-- all owned and operated by WorldCom.

I've been surprised by how little public discussion there has been about the
amount of critical infrastructure controlled by WorldCom. Should we be very
afraid? I know that WorldCom is operating more or less normally under
bankruptcy protection and it is in the interest of the creditors that the
Internet business remain alive as a going concern, but still, it is a
dangerous and potentially very unstable situation. At a minimum, there isn't
going to be any investment in these facilities at least until the future of
WorldCom is decided.  Given the fact that potential buyers can't perform due
diligence until the auditors get to the bottom of the accounting mess, the
uncertainty could last a long time.

Steve Wildstrom   Technology & You Editor Business Week
1200 G St. NW Suite 1100  Washington DC  20005   1-202-383-2203

Re: Your packets know the way to San Jose (Purvis, RISKS-22.20)

< (Gene Wirchenko)>
Fri, 23 Aug 2002 03:31:33 GMT

> also see that MAE West is owned by WorldCom.
I think you left out "partly", Mr. Purvis.
At the bottom of it is "Southern Cross is owned by Telecom New
Zealand (50%), Optus (40%) and Worldcom (10%)."
That is just a bit different, no?

Re: Your packets know the way to San Jose (Purvis, RISKS-22.20)

<"R.G. Newbury" <>>
Thu, 22 Aug 02 21:26:04 -0500

IIRC, MAE East is part of a parking structure... you can drive up and park
next to it.  I suspect it would not take more than a Volkswagen-Beetle-sized
car b*mb to inflict major disruption.

Do you think that *anyone* in the "intelligence business" (yes, I *know*
that that is an oxymoron) is worrying about the security of this portion of
the Internet???

Re: YASST: Yet Another Silly Spam Trick (Slade, RISKS-22.20)

<Tai <>>
Fri, 23 Aug 2002 09:26:25 +0000

My wife is convinced that hotmail is a spammer.  She created an account that
was never given out, and received spam all the time.  Six months later, she
forgot the password, and created another account.  This account does not
receive spam at all.

The difference?  The first account belonged to a .usian with .us zip codes,
etc.  The second account had an address in a third-world country, i.e., not
.us-based.

Re: Klez: The Virus That Won't Die (RISKS-22.20)

Fri, 23 Aug 2002 13:15:30 -0400

Viruses are becoming more sophisticated; we know that.  We also know that
they will get worse as they become more and more advanced.  Here's a
thought: Imagine a Klez descendant with a small distributed-computing
payload.  Each infected system becomes a node in a neural net.  This net
would be slow, and the nodes would come and go, but it would be immense and
uncontrollable.  The possible implications are scary.  Science fiction
becomes science fact.

Re: Klez: The Virus That Won't Die (RISKS-22.20)

<Scott Peterson <>>
Thu, 22 Aug 2002 22:00:12 -0700

Maybe the even bigger irony is that in March 2001 Microsoft released a patch
for Internet Explorer that stops Klez dead in its tracks.  The fix is also
included in current Internet Explorer service packs.

REVIEW: "Access Denied", Cathy Cronkhite/Jack McCullough

<Rob Slade <>>
Thu, 22 Aug 2002 10:06:22 -0800

BKACCDEN.RVW   20020604

"Access Denied", Cathy Cronkhite/Jack McCullough, 2002, 0-07-213368-6,
%A   Cathy Cronkhite
%A   Jack McCullough
%C   300 Water Street, Whitby, Ontario   L1N 9B6
%D   2001
%G   0-07-213368-6
%I   McGraw-Hill Ryerson/Osborne
%O   U$24.99 905-430-5000 800-565-5758 fax: 905-430-5020
%P   283 p.
%T   "Access Denied: The Complete Guide to Protecting Your Business

The introduction states that business leaders often lack the background to
deal with technical security issues, and that the book seeks to fill the
technical gap.  Ordinarily I am wary of such claims, particularly in such
slim volumes, but, after a poor start, this one works surprisingly well.

Chapter one concentrates on "hackers."  There is sensationalism, and there
are errors, such as confusing Clifford Stoll's "wily hacker" with members of
the Chaos Computer Club, but the text does at least divide security breakers
into various camps, rather than lumping them all together.  The discussion
of viruses and malware, in chapter two, is the all-too-common unreliable mix
of errors (the "Cokegift" prank is stated to be a virus) and reasonable
material.  A random collection of email dangers and netiquette makes up
chapter three.  Another miscellaneous list of Internet attacks and some
misinformation (a discussion of "poisoned" cookies) is given in chapter
four, but no means of protection.

After this, however, the book improves.  The review of encryption, in
chapter five, is a clear presentation for the non-specialist.  Chapter six
is a reasonable guide to backup.  Network security loopholes, and means of
protecting them, are in chapter seven.  Physical security is covered in
chapter eight.  Chapter nine looks at remote, wireless, and cellular
security.  Intrusion detection and documentation (suitable for presentation
to law enforcement) is in chapter ten.  The material on risk analysis, in
chapter eleven, is slightly facile, but is a good accompaniment to policy
development.

The subtitle slightly overstates the case in terms of completeness, but this
work certainly is worthy of review by any manager without a technical
background, who nevertheless needs to make decisions about security.

copyright Robert M. Slade, 2002   BKACCDEN.RVW   20020604

Please report problems with the web pages to the maintainer