The RISKS Digest
Volume 21 Issue 16

Tuesday, 26th December 2000

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Power cut blocks emergency calls
Stuart Lamble
Important message from egghead.com CEO
Egghead.com
Security advisories becoming less open?
Chris Adams
Another tidbit about the new Microsoft advisory format
Richard M. Smith via Brian
Making something look hacked when it isn't
Richard J. Barbalace
The risk of a seldom-used URL syntax
Rob Warnock
Intelligence risks of e-mail auto-responses
Dan Birchall
Re: Voting by machine
Tony Finch
Re: ATM network for voting: a non-starter
Jeremy Epstein
Barry Margolin
Bill Stewart
Re: High Reliability
Matt Jaffe
Re: Another DMV Break-in, in Oregon
Simson L. Garfinkel
Re: Seattle Hospital Hacked
Todd Wallack
Kevin L. Poulsen
Jonathan Thornburg
Info on RISKS (comp.risks)

Power cut blocks emergency calls

<Stuart Lamble <Stuart.Lamble@lodbroker.com>>
Thu, 21 Dec 2000 09:49:46 +1100

Excerpted from http://www.theage.com.au/news/2000/12/21/FFX0NEBVXGC.html

"Emergency lines went dead for six minutes shortly after 1am [on Wednesday,
20th December] until an emergency generator restored services. ... While no
callers rang during the critical period, no one could have sought assistance
had there been an emergency. ... In case of power failure, the call centres
have a diesel generator. But Mr Bahr [technical representative for the
Bureau of Emergency Services Telecommunications] said he was aware of an
incident in the past when the back-up had failed."


Important message from egghead.com CEO

<"Egghead.com Special Update" <specialdeals@PROMO1.EGGHEADLIST.COM>>
Sat, 23 Dec 2000 09:43:41 -0800

Dear Customer,

Egghead.com has discovered that a hacker has accessed our computer systems,
potentially including our customer databases. While there is no indication
that any customer information has been compromised, as a precautionary
measure, we have taken immediate steps to protect you by contacting the
credit card companies with whom we work.  They are in the process of
alerting card issuers and banks so that they can take the necessary steps to
ensure the security of cardholders who may be affected.

We wish to underscore that we have taken these steps as precautions.  We
have no information at this time to suggest that any credit card information
has been compromised. We are investigating this possibility, and we are
doing everything we can to proactively protect you. If you would like
further information, you may wish to contact the issuer of your credit card
to determine what steps they are taking. We regret any inconvenience this
may cause you.

We issued a press release on this matter earlier today.  [...]  If you have
additional questions, please call our customer service team at 1-800-EGGHEAD
(344-4323).

Respectfully,

Jeff Sheahan, President & CEO, Egghead.com, Inc.

  [The Press Release notes that Egghead has "retained the world's leading
  computer security experts to conduct a thorough investigation of our
  security procedures and an analysis of this breach."  PGN]


Security advisories becoming less open?

<"Chris Adams" <chris@improbable.org>>
Tue, 12 Dec 2000 22:45:18 -0800

Recently there has been some discussion on BUGTRAQ regarding some companies'
attempts to change the way they publish security advisories.

Some background:
http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0036.html
http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0042.html
http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0056.html
http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0054.html
http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0076.html
http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0101.html

Basically, it started when Microsoft abruptly switched to a new advisory
format where the notification e-mail included only a cursory description
of the problem and a [malformed] URL for the actual report. Today, Elias
Levy announced that @Stake wanted to switch to a format with more
information in the message but still requiring a visit to their website
for the full advisory.

Some interesting RISKS:

  - Access for people with marginal Internet connections or browsers other
    than IE/Netscape is less convenient.

  - Information is unavailable if the webserver is down or overloaded, as
    might happen with an important advisory. It seems counterproductive to
    put important, time-sensitive material behind a single point of failure,
    particularly when the decision is to deliberately avoid using a free,
    distributed, fault-tolerant distribution channel.

  - It makes it much easier for a vendor to change an advisory without
    notifying anyone, especially since changed or removed advisories won't
    be archived in anywhere near as many places as a mailing list such as
    BUGTRAQ. In addition to covering up bad work, this would also make it
    easier to remove or tone down past advisories about companies the author
    is now aligned with.

  - It opens the prospect of tailoring content to the reader. This could be
    as simple (and annoying) as charging for access to some content or as
    complex as determining what to show based on where the request came from
    (e.g. competitors, vendors or journalists).  While this would probably
    be caught for something major, particularly at first, it would not
    surprise me to find at least subtle tampering happening regularly if
    this becomes commonplace.

I find it hard to ignore the above concerns given that the switch
provides no benefits of any sort to the reader, let alone enough benefit
to outweigh them. The only legitimate advantage from such a policy is
that it makes it easier for the author to change the contents of a
released advisory. For legitimate purposes, this is unnecessary:

  - Updates to current advisories can be published in the same fashion as
    the originals, ensuring that anyone who received the original will
    receive any updates.

  - It's extremely easy to add a link to the advisory which people could use
    to check for updates.

  - It discourages the use of proper change control, since it's easier to
    update an existing advisory than to release an update.

  - It will cause old links to break if the vendor ever moves content
    around and neglects to install a redirect for the old URL.
    (Microsoft is notorious for this, especially with links that break
    only for visitors using something other than Internet Explorer.)
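
As for the concern that a web-only advisory can be changed silently, readers
can at least detect such edits by keeping local snapshots and comparing
hashes on later fetches.  A minimal sketch in Python (an illustration only,
not part of the original posting; the advisory URL below is a placeholder,
and dynamic page furniture may cause false alarms):

  import hashlib
  import json
  import urllib.request
  from pathlib import Path

  SNAPSHOT_FILE = Path("advisory_hashes.json")
  ADVISORIES = [
      "http://www.example.com/security/bulletin/ms00-099.asp",  # placeholder
  ]

  def digest_of(url):
      # Hash the raw bytes of the advisory as currently published.
      with urllib.request.urlopen(url) as resp:
          return hashlib.sha1(resp.read()).hexdigest()

  known = {}
  if SNAPSHOT_FILE.exists():
      known = json.loads(SNAPSHOT_FILE.read_text())

  for url in ADVISORIES:
      d = digest_of(url)
      if url in known and known[url] != d:
          print("CHANGED since last fetch:", url)
      known[url] = d

  SNAPSHOT_FILE.write_text(json.dumps(known, indent=2))

Any later edit to the published text then shows up as a hash mismatch, even
if the vendor never announces it.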


[BUGTRAQ] Another tidbit about the new Microsoft advisory format

<Brian <ang_mor@CAM.ORG>>
Fri, 08 Dec 2000 00:38:47 -0500

Now we are 'Bugged by Microsoft'.  A nuisance.  Brian

Date: [accidentally deleted]
From: "Richard M. Smith" <rms@PRIVACYFOUNDATION.ORG>
Subject: [BUGTRAQ] Another tidbit about the new Microsoft advisory format
To: BUGTRAQ@SECURITYFOCUS.COM

One thing that I noticed about the new Microsoft security bulletins is that
they now contain Web bugs.  The bugs look like they are used to count the
number of people coming to read the bulletins.  Here is the URL for one of
these bugs:

http://c.microsoft.com/trans_pixel.asp?
source=www&TYPE=PV&p=technet_security_bulletin

I didn't see an <IMG> tag for the bug, so I'm assuming it is generated by one
of the JavaScript files included on the page.
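
That hunch is easy to check: fetch the bulletin page plus the JavaScript
files it includes and search them for references to the pixel-serving host.
A rough sketch in Python (an illustration only, not from the original
message; the bulletin URL below is a placeholder):

  import re
  import urllib.request
  from urllib.parse import urljoin

  # Placeholder -- substitute the bulletin page you want to inspect.
  BULLETIN_URL = "http://www.example.com/technet/security/bulletin.asp"

  def fetch(url):
      with urllib.request.urlopen(url) as resp:
          return resp.read().decode("latin-1")

  page = fetch(BULLETIN_URL)
  sources = [(BULLETIN_URL, page)]

  # Also pull in any external scripts referenced by the page.
  for src in re.findall(r'<script[^>]+src="([^"]+)"', page, re.IGNORECASE):
      sources.append((src, fetch(urljoin(BULLETIN_URL, src))))

  for name, text in sources:
      if re.search(r"trans_pixel|c\.microsoft\.com", text):
          print("possible Web-bug reference in", name)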

One thing that Microsoft is learning here is what bulletins people consider
important.  With the older format, where all the info was in an e-mail
message, they did not get this feedback.

I don't see the use of Web bugs here as a big deal, but still interesting.

Richard


Making something look hacked when it isn't

<"Richard J. Barbalace" <rjbarbal@MIT.EDU>>
Sat, 16 Dec 2000 15:03:27 -0500

A brief e-mail has been getting forwarded around our campus which reads:
  Check out breaking news at CNN:
  http://www.cnn.com&story=breaking_news@18.69.0.44/evarady/www/top_story.htm

At first glance, this appears to be a genuine article on CNN, but a quick
read reveals that it's a cute joke.  Most people who have seen the fake article
have immediately assumed that www.cnn.com has been hacked in some manner.

Those more familiar with the HTTP specification, however, will notice that the
URL is completely valid, and does not lead to or redirect from any cnn.com
computers.  No machines have been hacked.  Instead, the e-mail just plays
with your expectations of what a URL should look like.  The risk here is not
a computer one at all, but a social risk that even (or perhaps especially)
knowledgeable people will assume something has been hacked when it hasn't
been.

An even sneakier URL might be:
  http://www.cnn.com&story=breaking_news@306511916/evarady/www/top_story.htm

For those of you still pondering why that URL works, read the HTTP
spec and try the equivalent:
  http://username@18.69.0.44/evarady/www/top_story.htm
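
As for the "sneakier" form, 306511916 is simply 18.69.0.44 packed into a
single 32-bit integer, a spelling many browsers accept as a hostname.  A
quick check (an illustration, not part of the original message):

  # 18.69.0.44 written as one 32-bit integer:
  octets = [18, 69, 0, 44]
  n = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
  print(n)   # 306511916 -- the "host" in the sneakier URL above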

Richard J. Barbalace <rjbarbal@mit.edu>


The risk of a seldom-used URL syntax

<rpw3@rigden.engr.sgi.com (Rob Warnock)>
Mon, 18 Dec 2000 21:09:19 -0800 (PST)

Recently, a mailing list I'm on forwarded a report of a "hack" of the
CNN.com site.  Upon looking closely, I found that the CNN site hadn't
been hacked at all — it was the *minds* of readers of this hoax "report"
that were being hacked! Rather cute, actually, but it exposes what is
perhaps a larger RISK, so please bear with me while I set up the story...

An MIT student named Eric Varady took a parody news article from
The Onion <URL:http://www.theonion.com/onion3637/bush_horrified.html>,
edited the layout to resemble CNN's format, and copied it to his own site
<URL:http://salticus-peckhamae.mit.edu/evarady/www/top_story.htm>.
(Note that multiple threatened legal actions have since forced him
to remove the original content, but an explanation page is still there.)

He then passed around a "report of a hack of the CNN site" with a URL
[which I *do* hope makes it through the mail-to-HTML scripts at Catless!] of
<URL:http://www.cnn.com&story=breaking_news@18.69.0.44/evarady/www/top_story.htm>.

If you look very closely, you'll see that the actual host named by this URL
is not "www.cnn.com", but "18.69.0.44" (a.k.a. salticus-peckhamae.mit.edu).
That is, for IP-based/Internet URL "schemes" such as HTTP or FTP, the
general format defined in RFC 1738 is:

    <scheme>://[<user>[:<password>]@]<host>[:<port>]/<url-path>

The "user" field is very rarely used, and even then is more often seen with
FTP than HTTP. But since it contained an at-sign before the first slash,
the hoax URL was really <URL:http://18.69.0.44/evarady/www/top_story.htm>
with the (ignored) user field of "www.cnn.com&story=breaking_news". Cute, eh?
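
A standards-following URL parser makes the split explicit.  A small sketch
in Python (an illustration only, not part of the original posting):

  from urllib.parse import urlsplit

  u = urlsplit("http://www.cnn.com&story=breaking_news"
               "@18.69.0.44/evarady/www/top_story.htm")
  print(u.username)   # www.cnn.com&story=breaking_news  (the ignored <user>)
  print(u.hostname)   # 18.69.0.44                       (the real <host>)
  print(u.path)       # /evarady/www/top_story.htm       (the <url-path>)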

More serious scams of this sort are possible, given the number of users
who (1) have *no* idea what the formal syntax of a URL is, and (2) routinely
access the Web through "portals" which often create complicated indirection
URLs to aid with logging or tracking to support advertising revenue, e.g.:
<URL:http://www.foo.bar.com/logger.cgi?http://www.other.place.com/some_article>

The RISK is that users are being bombarded with these monstrosities so
often that they've grown used to it, and that they'll fail to recognize
when they're being sent someplace they might not really want to go!!
(Perhaps when it's not a joke, such as being sent to a porn site while
working at a company with a "no tolerance" policy.)


Intelligence risks of e-mail auto-responses

<djb0x77376989@scream.org (Dan Birchall)>
20 Dec 2000 01:54:13 GMT

For some time, I have been associated with organizations that maintained
e-mail lists for communication with customers.  Each customer mailing
generates some quantity of e-mail responses to the mailing address or a
specified reply-to address.  Heuristic filters handle the most frequent
types of responses, generating automatic replies or redirecting mail to
appropriate addresses.  There are, though, always some messages which the
filters can't adequately handle, so my involvement mostly consists of
eyeballing them.

The workload is by no means immense - for every 6,000 outbound messages
sent, I manually handle one response.  Some are questions the filters didn't
catch, which I pipe to various scripts.  Some are bounce messages.  Some are
chain letters - I grep those for From: headers and bounce them to the
appropriate administrators; nothing to spread holiday cheer like a corporate
policy smackdown.  A good many are auto-responses.

Within the set of auto-responses, a significant minority pose non-technical
risks.  Users who are going to be "away from mail" or "out of the office"
for even a single day frequently leave instructions on who should be
contacted in their absence, and their responses often include other
information that could be considered sensitive.

Their expectation is, of course, that they will receive mail from co-workers
and colleagues who already know where they work, what they do, and have some
need for the information.  However, if they are subscribed to mailing lists,
it is quite possible that the information they provide will be seen by
completely unrelated users at other organizations.

  "I will be away from [government laboratory] from [departure date]
  and will return on [return date].  If you need to reach someone
  from the IT Security staff, Please contact [coworker] at [number]
  or e-mail to [address]."

Congratulations.  You've just told me what department you work in and where
you work (the combination of which might not be the sort of thing you want
to go blabbing around), and given me a co-worker's name and direct contact
information.  The potential for a social engineering hack is giddying.

This is, of course, a somewhat extreme example.  But for each one like this,
there are hundreds of others from people in business, academia and
government.  People who're perfectly willing to send total strangers
information about their personal schedules - who they are, what they do,
where they do it, when they're leaving, when they're coming back, where
they're going, how they can be reached while they're gone, or who to contact
instead.

Perfectly normal information to give to a co-worker, colleague or neighbor.
Somewhat risky information to give to strangers, in an era of competitive
intelligence, corporate and other espionage, etc.

Workarounds?  You could hack your MTA/MUA/MDA to only send responses to
certain domains, or omit all personal information from your auto-response.
A more balanced approach would involve not re-stating information authorized
users already know, and delivering necessary information in a minimal form.
Ergo, instead of:

  "I will be away from GovLab from December 22 and will return on
  December 26.  If you need to reach someone from the IT Security
  staff, Please contact John Smith at 809-555-1212 or e-mail to
  jsmith@govlab.gov."

send something like this:

  "I am currently away from work.  If you need to reach someone,
  please contact John <jsmith> at 555-1212."

The logic, of course, is that an authorized person already knows
where you work, what you do, your e-mail domain, and your area code.
Nobody needs to know how long you'll be gone, if there's someone
else who can help them.
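
Both ideas combine naturally: decide per sender whether to reply at all, and
keep the reply itself minimal.  A sketch along those lines (an illustration
only; the domain list is hypothetical and the contact details echo the
example above):

  # Only auto-respond to senders whose domains already know who you are,
  # and tell even them no more than necessary.
  TRUSTED_DOMAINS = {"govlab.gov"}          # hypothetical list

  MINIMAL_REPLY = (
      "I am currently away from work.  If you need to reach someone,\n"
      "please contact John <jsmith> at 555-1212.\n"
  )

  def auto_reply_for(sender):
      """Return reply text for trusted senders, or None to stay silent."""
      domain = sender.rpartition("@")[2].lower()
      if domain in TRUSTED_DOMAINS:
          return MINIMAL_REPLY
      return None   # mailing lists and strangers learn nothing

  print(auto_reply_for("colleague@govlab.gov"))              # minimal reply
  print(auto_reply_for("specialdeals@promo1.example.com"))   # None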

Dan Birchall - Palolo Valley - Honolulu HI - http://dan.scream.org/
Corporate Holidays 2001 - http://208.184.171.20/articles/262573.htm


Re: Voting by machine (Cohen, RISKS-21.15)

<Tony Finch <dot@dotat.at>>
Thu, 21 Dec 2000 21:12:22 +0000

One of Fred Cohen's requirements for an election process to be deemed
trustworthy was this:

> 9) Each voting location must be able to have unique vote layouts and
> candidates to accommodate the wide range of elections that run both
> simultaneously and sequentially.

One of the aspects of the American election that is very strange to someone
used to the British election process is that so many different votes are
made on the same ballot paper. If one ballot paper is used for each vote
then the counting process can be made much simpler, faster, and more
reliable. The ballots can be quickly split into piles per candidate, and
then those piles can be counted in parallel.  Votes for different issues can
also be counted in parallel. Separate "ambiguous" piles can be examined more
closely if necessary.

This also means that statewide elections can use the same ballot paper
across the state without affecting the need for counties or cities to have
their own ballots for their own elections.

Tony  f.a.n.finch    fanf@covalent.net    dot@dotat.at


Re: ATM network for voting: a non-starter (Jefferson, RISKS-21.15)

<Jeremy Epstein <jepstein@monumental.com>>
Wed, 20 Dec 2000 17:50:23 -0500

David Jefferson's well-thought-out critique of ATM-based voting misses
one small but important point.  Depending on where you live, it is not
necessary to provide any authentication, or even to sign, in order to
vote.  Until this year, in Virginia all I had to do was state my name
and address to the election official, and that was sufficient.  Given
the number of voters in each precinct (thousands in a presidential
election), it's likely that I could have voted several times using a
different (valid) name each time.  If I knew the names and addresses of
people in other precincts, I could therefore vote for them as well.

The law changed this year, and now you either have to present some form
of picture ID, or you must sign an affidavit.  But they don't have a
signature to compare to at the voting booth, so in the best case they
find out after the fact that *someone* voted illegally (but they can't
tell which vote it was).

I mention this because all of the discussions about electronic voting
(ATM-based, Internet-based, or otherwise) presuppose a requirement for
strong authentication.  If we're trying to model the paper world, that's not
necessarily so.  [Recognizing, of course, that there's not enough time to
vote 1000 times in the same day in the paper world, under the assumption
that I'd have to rotate between precincts to escape detection, but I can
certainly vote electronically 1000 times in the same day.]

Jeremy


Re: ATM network for voting: a non-starter (Jefferson, RISKS-21.15)

<Barry Margolin <barmar@genuity.net>>
Wed, 20 Dec 2000 23:13:05 GMT

>A related issue is voter authentication.

We've heard this mentioned repeatedly in discussions about online voting.
I don't know how voting works in your community, but there's virtually no
authentication in my town.  I go to the polling place, they ask me my name
and address, and they cross it off the voter list; another person does the
same thing when I turn in the completed ballot.  None of the people there
know me, yet they never ask for any proof of identity.  A teenager trying
to get into a bar at least has to have a fake ID; he wouldn't need that to
vote as someone else — all the information he needs is in the phone book.

However, if I tried to vote multiple times in the same precinct I suspect
they would recognize me.  So in order to stuff the ballot, I would have to
go to different precincts, which would be quite tedious.  With an
electronic system, the door is opened to massive fraud by a single
individual.

Barry Margolin, barmar@genuity.net, Genuity, Burlington, MA


Re: ATM network for voting: a non-starter (Jefferson, RISKS-21.15)

<Bill Stewart <bill.stewart@pobox.com>>
Thu, 21 Dec 2000 00:43:08 -0800

There's only one positive feature to using the ATM network for voting, which
is that you can pay bribes to voters right on the spot.  Other than that,
it's totally impractical - ATMs use common physical formats for cards and
for currency, but they don't have common processors, programming
environments, network protocols, or user interfaces, and the real
interoperation is done by the host processors that feed the networks and by
common banking standards, not because there's any interoperation out at the
edges.

Most of them use some variant on SNA, which was relatively appropriate
technology at the time ATM networks started evolving, and most are some
variant on a PC - I've seen dead MS-DOS messages on out-of-order ATMs, some
have run OS/2, some Unix, while others probably have custom or other
production operating systems.

Bill Stewart <bill.stewart@pobox.com>


Re: High Reliability (Shostack, RISKS-21.15)

<Matt Jaffe <jaffem@pr.erau.edu>>
Thu, 21 Dec 2000 01:02:29 -0700

 >> — at least in comparison to the software industry, which typically
 >> has from 6 to 30 errors per line of code.

His (appropriately sardonic) comment was:
 > Not to mention the journalism industry, with an error rate of 1 error
 > per 6 to 30 bits when reporting technical information... :)

My (equally sardonic) comment is about the original (hidden) assumption
that one can somehow compare chip error rates with software error rates.  I
was not aware that there was a published conversion factor converting
transistors to lines of code.  Perhaps I'm using the wrong metric and it's
errors per pound of silicon that can be converted somehow to an equivalent
in errors per line of source code.  In any case, I'll bet if we first
converted lines-of-code to pixels-to-display-a-line-of-code, software and
hardware error rates would start to look a lot more similar.

...Matt			http://backoff.pr.erau.edu/jaffem


Re: Another DMV Break-in, in Oregon (PGN, RISKS-21.15)

<"Simson L. Garfinkel" <slg@walden.cambridge.ma.us>>
Wed, 20 Dec 2000 22:07:24 -0500

In the mid 1990s, Pitney-Bowes developed and demonstrated a system for
digitally signed driver's licenses. I believe that the system was called
VERITAS, but I could be wrong. The system provided for a 2D barcode on the
back of each driver's license. The barcode contained a digitized copy of the
driver's photograph, name, address, height, age, etc. The 2D barcode was
signed with the digital key of the day, which itself was signed with the
system key. I believe that the system key was changed every year.

The company's business plan, I believe, was to basically give away the
identity systems to state governments and then to sell verifiers to stores,
restaurants, bars, etc. You would slap a person's driver's license down onto
the verifier and it would display their photograph and tell you if they were
old enough to drink, etc. It would also verify the signature.

The Pitney-Bowes system was specifically designed to prevent the
break-in-and-steal-it problem. Each morning the systems in the field would
call up and get their key-of-the-day, signed by the system key. If a system
was stolen, it wouldn't get a signed key. If a stolen system had actually
issued fraudulent cards, you could blacklist those cards and distribute the
blacklist to the verifiers. You could even use caller ID as a check: if the
caller ID of the incoming call didn't match the phone number on record for
that system, no key would be issued.
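
The two-level signing chain is easy to sketch.  The toy example below (an
illustration only, not Pitney-Bowes' actual design) uses the third-party
Python 'cryptography' package and Ed25519, primitives that postdate the
system described here:

  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric import ed25519

  def raw(pub):
      # Raw public-key bytes; used here as the "certificate" payload.
      return pub.public_bytes(serialization.Encoding.Raw,
                              serialization.PublicFormat.Raw)

  # The long-lived system key (changed yearly) signs each key-of-the-day.
  system_key = ed25519.Ed25519PrivateKey.generate()
  day_key = ed25519.Ed25519PrivateKey.generate()
  day_cert = system_key.sign(raw(day_key.public_key()))

  # The issuing station signs the license data with the key-of-the-day.
  license_data = b"photo-bytes|JANE DOE|1975-03-14"   # made-up payload
  license_sig = day_key.sign(license_data)

  # A verifier trusts only the system public key: it checks that today's key
  # was signed by the system key, then that the license was signed with
  # today's key.  verify() raises InvalidSignature if either check fails.
  system_pub = system_key.public_key()
  day_pub = day_key.public_key()
  system_pub.verify(day_cert, raw(day_pub))
  day_pub.verify(license_sig, license_data)
  print("license verifies")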

I saw this system at the RSA conference in 1993 or 1994. I was quite
impressed. But Pitney-Bowes never sold it. I believe that there was a patent
infringement problem.


Re: Seattle Hospital Hacked (RISKS-21.14,15)

<Todd Wallack <twallack@sfchronicle.com>>
Wed, 20 Dec 2000 15:36:12 -0800

I just spoke to Walter Neary at the University of Washington. He confirmed a
9 Dec 2000 report in *The Washington Post* that hackers gained access to
confidential medical files. He said it was a good summary of the
incident. (Other newspapers and television stations reported on the
incident as well.)

But the statement you distributed was issued two days earlier. At that time,
Neary said the college didn't know whether to believe the hackers' claims
that they had accessed confidential data. He said *The Washington Post* and
other reporters later obtained proof — the records themselves — showing
that the hackers did indeed break into the computer.

But he still disputes an Internet report, referenced in the statement, which
claims that hackers "took control" of the university's computers.

Todd R. Wallack, Business Reporter, San Francisco Chronicle
(415) 764-2815


Re: Seattle Hospital Hacked (Wallack, RISKS-21.16)

<"Kevin L. Poulsen" <klp@securityfocus.com>>
Thu, 21 Dec 2000 17:43:04 -0800 (PST)

*The Washington Post*, and a local TV station, obtained the "proof" from me,
after the medical center sought to dismiss the incident as a rumor.  Though
I should hardly have to say it, I confirmed every aspect of this story
before breaking it. (Even we "Internet reporters" do that sort of thing.)
The hacker took command of large portions of the medical center's internal
network.

The University of Washington Medical Center later reluctantly acknowledged
the accuracy of my report.

http://www.washingtonpost.com/wp-dyn/articles/A46320-2000Dec8.html
http://www.nytimes.com/2000/12/08/technology/08HACK.html
http://www.msnbc.com/news/499856.asp
http://dailynews.yahoo.com/h/ap/20001208/us/med_center_hacker_3.html
http://www.komotv.com/news/qtmovie.asp?ID=8157

Kevin L. Poulsen, Editorial Director, SecurityFocus.com, Washington D.C.
(202)232-5200


Re: Seattle Hospital Hacked (RISKS-21.14, 21-15)

<Jonathan Thornburg <jthorn@galileo.thp.univie.ac.at>>
Thu, 21 Dec 2000 18:24:12 +0100

In Risks 21.15, "Lynda Ellis (LabMed)" <lynda@mail.ahc.umn.edu> wrote
> The following statement is for attribution to Tom Martin, director and chief
> information officer for University of Washington Medical Centers Information
> Systems:
>
[[...]]
>   we do know for certain that
>   no one has ever gained unauthorized entry into our separate and highly
    ======     ====
>   confidential patient-care computer systems.

I find this highly implausible.  I could perhaps accept a claim that
these systems are very secure, but to say that _no one_ has _ever_
gained unauthorized entry strains credibility.  How does Mr. Martin
know this?  More to the point, how could _anyone_ know this?  In the
real world, all software has bugs in it, and all systems have loopholes.
If nothing else, authorized users can be bribed, coerced, or fooled by
"social engineering".

>  we take extraordinary measures to protect our clinical-based systems that
>  go well beyond the high security employed, for example, by most community
>  hospitals. These measures include the latest hardware and software,
                                     ================================
>  encryption technologies, and strong host-based security.

Using the "latest" hardware and software does _not_ boost my confidence
in "extraordinary" security.

Some of the RISKs here:
* Wildly over-optimistic claims like these suggest that (as usual)
  management is pretty clueless when it comes to computer security.
* Managers who actually believe the systems are _perfect_ don't have much
  incentive to improve them, or even examine/audit them too closely...
* Line employees who might actually be very competent are more likely
  to quit when confronted with pointy-headed bosses.
* And oh yes, the UW statement was right: claiming "our systems have
  never had unauthorized access" probably _does_ boost the chances
  of further attacks.

Jonathan Thornburg <jthorn@thp.univie.ac.at> Universitaet Wien / Institut
fuer Theoretische Physik http://www.thp.univie.ac.at/~jthorn/home.html
