The RISKS Digest
Volume 24 Issue 21

Thursday, 23rd March 2006

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



More SAT errors
Jeremy Epstein
Texas voting recount halted
David Lesher
Baby dies after untrained doctor presses wrong button
Adam Hupp
Tax Data for Sale?
Chris Hoofnagle
Fidelity laptop with customer data stolen
Bob Heuman
Fidelity loses laptop, recovery effort looks like phish
Larry Stewart
Risks: adoption vs. abortion?
Harry Hochheiser
How risky are preapproved credit card applications?
Steve Summit
Mark Brader
Re: Crime to Delete Files
Sidney Markowitz
Re: Excel garbles microarray experiment data
Fernando Pereira
Dimitri Maziuk
Tim Duncan
Nick Malcolm
Olaf Seibert
Risks of frequent publication
Rob Slade
OSDI '06 CfP
Geoff Voelker
Call for Participation - Team Software Process Symposium
Carol Biesecker
REVIEW: "Network Security Tools", Nitesh Dhanjani/Justin Clarke
Rob Slade
Info on RISKS (comp.risks)

More SAT errors

<Jeremy Epstein <>>
Thu, 23 Mar 2006 06:59:42 -0800

In RISKS-24.19, there were three reports about the College Board reporting
problems with SAT scoring.  First, the College Board said that about 4000
tests were misgraded, with results off by no more than 100 points.  Then
the College Board admitted some were off as much as 200, or maybe even 400
points (out of 2400 total).

Today, the College Board admits that there were an additional 27,000 score
sheets that weren't rechecked, and they found 375 more students who received
incorrect scores.  There was no disclosure of how far off these results
were.  The article notes "The College Board said that from now on all answer
sheets would be scanned twice, among other new precautions, and that it
would retain consulting firm Booz Allen Hamilton to perform a comprehensive
review within 90 days."

Two things struck me about this sequence of revelations:

(1) Does the College Board even have a *legal* obligation to disclose this
information?  Could it be that this has happened in the past, and without
the increased scrutiny caused by the disclosures of personal information
leakage, they might never have told the students or the public?

(2) On the positive side, it's a good thing there's paper to double-check.
If these were votes on paperless DREs instead of SAT scores, there would be
no way of knowing that they had been miscounted.

As a parent whose oldest child went through the process last year, I'm
relieved that she's not having to deal with this headache - and I feel sorry
for any student who made decisions on where to apply based on SAT scores.
(I know we used my daughter's scores to help find target schools - if they
had been off by a few hundred points, she might not even have applied to the
school she selected.)  While colleges can reexamine the applications in
light of corrected SAT scores, there's nothing that can be done for
applications that weren't submitted based on incorrect results.

Karen W. Arenson in *The New York Times* today is reporting that the College
Board has now admitted that the maximum error was 450 points (out of 2400).
The College Board had previously claimed 100, then 200, then 400.  Her
article included this wonderful quote:

  "Everybody appears to be telling half-truths, and that erodes confidence
  in the College Board," said Bruce J.  Poch, vice president and dean of
  admissions at Pomona College in Claremont, Calif. "It looks like they
  hired the people who used to do the books for Enron. My next question is
  what other surprise we're going to hear about next."

  [Lauren Weinstein noted that in its statement, the Board said Pearson
  would ensure that all answer sheets were "acclimatized before scanning"
  and would scan each answer sheet twice.  Pearson will also improve its
  software to detect whether answer sheets have expanded because of
  humidity.  PGN]

    [Jeremy's point about the paperless DREs is apt, but this case reminds
    us once again that even paperfull media such as optical scanning can
    have serious problems that require oversight and the willingness to
    perform meaningful recounts — which are of course impossible with
    the current breed of paperless DREs.  PGN]

Texas voting recount halted

<David Lesher <>>
Wed, 22 Mar 2006 18:16:21 -0500

Court-at-law recount suspended; Electronic machines not providing all info
Paul A. Anthony, 21 Mar 2006

On orders from the Texas Secretary of State's office, the recount for the
Tom Green County Court-at-Law No. 2 race has been suspended midway through
its second day.  About 1:30 p.m. today, county Republican Chairman Dennis
McKerley stopped the recount after workers found discrepancies of as much as
20 percent between what was counted Monday and what was reported Election
Night.  "We're having some trouble with the electronic equipment," McKerley
said.  Apparently, McKerley said, new electronic voting machines provided by
vendor Hart InterCivic are not printing ballots for every vote cast on the
machines.  During recounts, which must be done by hand, the machines are
designed to print out separate ballots for every vote.
,1897,SAST_4956_4559073,00.html

Baby dies after untrained doctor presses wrong button

<Adam Hupp <>>
Tue, 21 Mar 2006 09:17:49 -0600

"A baby boy died after an untrained doctor pressed the wrong button on his
bypass machine because it was a less `horrid' colour than the other, an
inquest heard yesterday.  ... [The doctor] was unaware how to use the
machinery, as were most of the team."

Tax Data for Sale?

<Chris Hoofnagle <>>
Wed, 22 Mar 2006 08:39:21 -0800

  "The *Philadelphia Inquirer* reports that the IRS has proposed rule changes
  allowing tax-return preparers, like H&R Block, to sell an individual's
  return information to marketers and data brokers.  The proposed rule,
  which does contain some substantive protections for the processing of
  electronic returns, was published in the Federal Register on December 8,
  2005.  The official comment period has passed, but hearings will be held
  this month."  []

Well, kind of.  Under the new rule, disclosure of tax return information is
broadened if the customer gives her affirmative consent.  If consent is
given, the FULL RETURN can be given to other entities for marketing
purposes, and the tax preparer does not even have to ensure that these other
entities are legit or following the preparer's privacy policy.

I'm basically telling people not to use storefront tax preparation at all,
because if they don't trick you into selling your data, they'll use it
themselves to market bogus refund anticipation loans.  Unfortunately, even
the tax preparation software tries to market that to you now.

  [Added note: The current rule allows sharing with affiliates if the
  consumer gives opt-in consent.  The new rule expands sharing to any third
  party, but requires a more explicit opt-in.  However, once the data are
  shared, the preparer has no responsibilities for how it is used.]

Fidelity laptop with customer data stolen

<Bob Heuman <>>
Thu, 23 Mar 2006 14:00:43 -0500

This one seems to impact Hewlett Packard employees in the U.S. - I do not
know if those in Canada and elsewhere in the world are impacted. No word on
use of encryption to protect the data, so I suspect it was NOT protected at
all. Will they ever learn?

  A laptop computer belonging to Fidelity Investments and containing
  sensitive data on about 196,000 retirement-account customers was stolen
  last week, the company said.

Fidelity loses laptop, recovery effort looks like phish

<Larry Stewart <>>
Wed, 22 Mar 2006 19:27:01 -0500

Evidently Fidelity lost a laptop containing the HP retirement records.
No explanation of why the data was on said laptop in the first place.

To their credit, they sent UPS letters to everyone, but:

a) The letters contain an 800 number to call
b) The 800 number wants you to key in your social security number before
   talking to a person.

Well, that is not a very good design!

At least the folks at the main Fidelity number knew how to confirm the
special number.

I was calling to tell them I got someone else's letter at my address in
addition to my own, but I was seriously surprised by the "enter ssn".

I would note:

- Poor security practices (data on laptops)
- Inability to learn from other companies' previous misfortunes
+ An apparently serious response
- A poorly designed response
- Bad database data  - will I get this fellow's pension too?

Risks: adoption vs. abortion?

<Harry Hochheiser <>>
Tue, 21 Mar 2006 07:33:11 -0500

Here's another example of problems with automated language processing.
,70453-0.html?

Amazon Changes 'Abortion' Queries

Amazon.com said Monday it had modified the way its search engine handles
queries for the term "abortion" after receiving an e-mail complaint that the
results appeared biased.  Until the recent change, a user who visited the
Seattle Internet retailer and typed in the word "abortion" received a prompt
asking, "Did you mean adoption?" followed by search results for "abortion."

Spokeswoman Patty Smith said the automated prompt was purely based on
technology, and that no human had made the decision to show the question.
"Adoption and abortion are the same except for two keystrokes," Smith
said. "They also, in this case, happen to be somewhat related terms."
Still, Smith said she and other company officials decided to remove the
question after receiving an e-mail complaint and deciding that it raised a
valid concern.

People who type in the term "adoption" do not see a prompt asking "Do you
mean abortion?"
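  [The "two keystrokes" remark presumably refers to edit distance, which a
  spell-correction layer might compute along these lines.  This is only a
  sketch of the standard algorithm, not Amazon's actual code:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("abortion", "adoption"))  # -> 2
```

  Two substitutions (b->d, r->p) turn one word into the other, which is a
  very small distance for eight-letter words -- small enough that a naive
  suggester would flag them as near-misses.]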

How risky are preapproved credit card applications?

<Steve Summit <>>
Thu, 16 Mar 2006 16:44:46 -0500

If you're concerned about privacy, you may be worried about merely throwing
away those preapproved credit card applications that come in the mail,
especially when they're pre-filled with your name and other information.
Indeed, the Federal Trade Commission and many banks recommend tearing up
those applications before discarding them.  But Rob Cockerham, my favorite
empiricist, decided to test how well that strategy actually works.  He tore
up an application, taped it back together, and mailed it in.  Did the bank
process the application and issue him a card anyway?  One guess.

Re: How risky are preapproved credit card applications?

< (Mark Brader)>
Thu, 16 Mar 2006 17:05:00 -0500 (EST)

> He tore up an application, taped it back together, and mailed it in.

Ahem.  He tore up an application, taped it back together, filled it out
*with a change of address requested*, and mailed it in.

Re: Crime to Delete Files (RISKS-24.20)

<Sidney Markowitz <>>
Sat, 18 Mar 2006 14:08:45 +1300

The spin on the story in RISKS-24.20 was "how awful that a judge says it's
illegal to use a secure delete program." But how is this different from a
disgruntled employee shredding the only copy of paper files of valuable
customer information before quitting to start his own business in
competition with his former employer?

It should make a difference, of course, whether the deleted files were
valuable to the company, and if they were the only copy of the information.
The ex-employee made the additional argument that his employment contract
specified that he was to return or destroy data upon leaving the
company. The company asserted that he had broken the contract, and so the
authorization implied by those terms was no longer in force.

But the story reports that this was an appeals case. Based on the story, it
appears that the judge did not say that the files were deleted illegally,
but only ruled that there could be facts in the case that would cause the
deletions to be considered damage and unauthorized. The case was sent back
to the lower court so that these facts could be determined.

Sidney Markowitz

Re: Excel garbles microarray experiment data (Deltuvia, RISKS-24.19)

<Fernando Pereira <>>
Sat, 18 Mar 2006 01:43:32 -0500

Here's Microsoft's own description of Excel, from the online book that
came with my copy of the software:

> Microsoft® Excel 2004 for Mac®
> Use this analysis and spreadsheet program to evaluate, calculate, and
> analyze data. Make use of the improved charting and page layout
> capabilities to illustrate your data and make it look good in print.

An "analysis" program, designed to "analyze data". No mention of
accounting. For a scientist, to "analyze data" involves computing
statistical summaries and plotting, not silent conversions of data
labels. Furthermore, I don't think the work in question used Excel as a
database program, but rather as a program to analyze the results of
microarray experiments. This task is entirely within the job description for
Excel quoted above.

Re: Excel garbles microarray experiment data (Risks-24.20)

< (Dimitri Maziuk)>
Sat, 18 Mar 2006 13:26:15 -0600

Re: Deltuvia
Actually, if you follow the references in the original article, both
bioinformatics programs are written in Java with SQL back-ends.
"Tab-delimited file suitable for loading into spreadsheet programs" is one
of their listed output options.

So the problem was introduced by the authors of the original report when
they decided to load those output files into Excel for viewing.

Re: McCormick:
> ... it will often ignore the double-quotes that are intended to
  distinguish character from numeric fields.

Yes, it does that. Note, however, that there is no standard for the CSV
format. Some applications allow special characters (such as newlines, the
record separator) inside double-quoted values, some don't.  Some
applications escape a double quote inside double-quoted values with a
backslash (C-style), some use a second quote (SQL-style), some simply can't
handle it. There is no way to disambiguate non-text values, such as
20060318. MySQL outputs null fields as ",\N," whereas most others do just
",,". And so on.

Which is not as bad as tab-delimited files (the output of the two
bioinformatics programs in question), where on top of all of the above, a
single tab may be replaced by 8 (or some other number of) consecutive
spaces, and there is an option to "not treat consecutive spaces as one"
(i.e., to treat "\t\t" as a null field). Of course, to most parsers
whitespace is just whitespace, be it "\t" or " ", so the end result is that
you get 8 extra null columns because you previously looked at the file in
some helpful text editor that quietly replaced tabs with spaces for your
viewing pleasure.
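  [The quote-escaping divergence is easy to demonstrate with Python's csv
  module; the sample strings below are made up for illustration:

```python
import csv
import io

# Two common ways of escaping a double quote inside a quoted field:
sql_style = 'id,comment\n1,"he said ""hi"""\n'    # doubled quote (SQL-style)
c_style   = 'id,comment\n1,"he said \\"hi\\""\n'  # backslash escape (C-style)

# The default dialect understands doubled quotes ...
rows_sql = list(csv.reader(io.StringIO(sql_style)))

# ... but the C-style file needs different parser settings entirely.
rows_c = list(csv.reader(io.StringIO(c_style),
                         doublequote=False, escapechar="\\"))

print(rows_sql[1][1])  # he said "hi"
print(rows_c[1][1])    # he said "hi"
```

  Same logical field, two incompatible encodings -- feed either file to a
  parser configured for the other and you get mangled output, which is
  exactly the point being made above.]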

Re: Excel garbles microarray experiment data (RISKS-24.19)

Sat, 18 Mar 2006 19:32:36 -0800

My company often gives clients data in a CSV-formatted file that doesn't end
in .CSV.  This data is usually imported into an accounting system, but
sometimes users want to look it over in Excel (if it isn't in Excel it isn't
data to some people), so they open Excel and then open the file, bringing up
the "Text Import Wizard".  The wizard is pretty straightforward: you select
delimited, then select comma as your delimiter, and click Finish.  Here is
the catch: Excel brings all the columns in using the "General" format, not
the "Text" format, unless you specify this on the last screen (3 of 3) of
the wizard, which is often skipped.  Thus data that starts with a zero or
has a lone 'E' with numbers is often misrepresented.  You would think that
data brought in via a TEXT Import Wizard would be treated as text, but
unfortunately this is not the case.
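  [The effect of the "General" format can be mimicked with a toy converter.
  This is a rough sketch of the numeric part of the inference, not Excel's
  actual logic (which also converts date-like strings); general_convert is a
  hypothetical helper:

```python
def general_convert(cell: str):
    """Mimic 'General' import: silently convert anything that parses as a number."""
    try:
        return float(cell)
    except ValueError:
        return cell  # left as text only if numeric parsing fails

print(general_convert("00123"))       # 123.0 -- leading zeros lost
print(general_convert("2310009E13"))  # 2.310009e+19 -- a gene ID becomes a float
print(general_convert("MARCH1"))      # 'MARCH1' here, but real Excel makes it a date
```

  Importing the same columns with the "Text" format on screen 3 of the
  wizard skips this conversion and keeps every cell verbatim.]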

Re: Excel garbles microarray experiment data

Mon, 20 Mar 2006 15:44:17 +0000

While working on a joint UK/German product development, we discovered that
the 'standard' separator employed in many German CSV files is the semicolon
';' - I do not know why.  This property is defined in the Regional and
Language Options of the machine, as described in the Microsoft Excel Help
(in case anyone should need it):

Change the separator in a CSV text file
1. Click the Windows Start menu.
2. Click Control Panel.
3. Open the Regional and Language Options dialog box.
4. Click the Regional Options Tab.
5. Click Customize.
6. Type a new separator in the List separator box.
7. Click OK twice.

Note  After you change the list separator character for your machine, all
applications will use the new character. You can change the character back
to the original character by using the same procedure.

Naturally, on my machine (Windows 2000) the above 'Help' was found like this:

Change the separator in a CSV text file
1. Click the Windows Start menu.
2. Click Control Panel.
3. Open the Regional Options dialog box.
4. Click the Numbers Tab.
5. Click Customize.
6. Type a new separator in the List separator box.
7. Click OK once (not twice).
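  [One likely reason for the semicolon: German locales use the comma as the
  decimal separator, so it cannot double as a field delimiter.  Reading such
  a file programmatically just means changing the delimiter; a sketch with
  made-up sample data:

```python
import csv
import io

# German-locale CSV: comma is the decimal separator ("3,50" means 3.50),
# so fields are delimited by semicolons instead (hypothetical data).
data = "Name;Preis\nWidget;3,50\nGadget;12,99\n"

rows = list(csv.reader(io.StringIO(data), delimiter=";"))
print(rows[1])  # ['Widget', '3,50'] -- price survives intact as one field
```

  Parse the same file with the default comma delimiter and "3,50" splits
  into two fields, which is the interoperability headache described above.]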

Re: Excel garbles microarray experiment data (RISKS-24.19)

<Rhialto <>>
Thu, 23 Mar 2006 14:30:06 +0100

  For example, the RIKEN identifier "2310009E13" was converted
  irreversibly to the floating-point number "2.31E+13."

That should have been 2.31E+19. Error of the original author, or even
further error of Excel?

(the original page doesn't seem to offer access to the e-mail addresses;
I had wanted to copy the authors too)
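  [The correction is easy to check in any language that parses scientific
  notation; for example, in Python:

```python
# "2310009E13" read as scientific notation means 2310009 x 10^13:
val = float("2310009E13")
print(f"{val:.2E}")  # -> 2.31E+19, not 2.31E+13
```

  So Rhialto's figure is the one a spreadsheet's numeric conversion would
  actually produce; the exponent in the original article appears to be a
  transcription slip.]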

Olaf 'Rhialto' Seibert  rhialto/at/

Risks of frequent publication

<Rob Slade <>>
Wed, 22 Mar 2006 20:37:01 -0800

Copyright Gone Mad (copyright Robert M. Slade, 2006)
(with that little (c) symbol thrown in for good measure)

I got asked to do a 20-year retrospective on computer viruses for a tech
magazine.  There were a few oddities about the request, such as a demand for
graphics.  I normally don't do graphics, but I had such a fun time doing the
article that I gave in, and finally put together quite a piece, I thought.
It was a gas going back over all the stuff I've seen over the years.

You may never see it.

See, I got this phone call from the magazine today.  It seems that some of
the wording in my article bears a striking resemblance to a site on the
Internet: "Robert Slade's Computer Virus History" at

This is surprising?

I've been writing articles, series, and books about viruses since the darn
things started.  As a matter of fact, it's a bit surprising that they didn't
find more sites with my stuff on it, especially since there have been dozens
of examples that I've seen myself, over the years, where people have used my
material and passed it off as their own.

But it seems that this outfit has a policy where they won't publish anything
that has already appeared on the net.

I suppose that's fair enough.  Everybody is getting really antsy about
copyright violations these days, and, as somebody who does an awful lot of
writing, I suppose I should approve.

Except I don't.  The crackdown (and crankdown) on copyright and copying is
making it hard for a lot of us who are relying on our own research and
writing.  After all, who else am I going to use for material on virus
history?  Oh, lots of people were there, but who else wrote it down?  I do
go back (and did go back, for this article) and check on specifics, and even
made corrections on items we've found out more about.  But, by and large, if
I want to generate a decent timeline of what happened, I have to rely very
heavily on my own stuff.

Except, now I can't.

Well, like I said, you may not get to see the history article.  Or, if they
are willing to bend their policy a bit, you might.  But I'm willing to bet
that their policy is more important to them.  After all, they can always get
another writer to do it for them.

Of course, in all probability he won't know anything about the history of
viruses.
Or, he can read my stuff.  And reuse it.

copyright Robert M. Slade, 2006
(with that little (c) symbol thrown in for good measure)    or

  [Ironic.  I keep Robert's copyright line in his reviews, despite the RISKS
  info file that once upon a time said that by default everything that
  appears in RISKS is fair game if used with appropriate credits.  I just
  discovered that the relevant wording in the risksinfo file somehow got
  deleted somewhen along the way, and I suppose I'd better fix that.  Or
  perhaps it is better to leave it unspecified so that others can quote
  Robert without his permission!  PGN]

OSDI '06 CfP

<Geoff Voelker <voelker@CS.UCSD.EDU>>
Wed, 22 Mar 2006 22:18:37 -0800

         OSDI '06 Call for Papers [Adapted for RISKS by PGN]
  7th Symposium on Operating Systems Design and Implementation (OSDI '06)
                Seattle, WA, USA,  November 6-8, 2006,
	 Sponsored by USENIX, in cooperation with ACM SIGOPS

The seventh OSDI seeks to present innovative, exciting work in the systems
area ... on the design, implementation, and implications of systems
software.  The OSDI Symposium emphasizes both innovative research and
quantified or illuminating experience.  OSDI takes a broad view of the
systems area and solicits contributions from many fields of systems
practice, including, but not limited to, operating systems, file and storage
systems, distributed systems, mobile systems, secure systems, embedded
systems, networking as it relates to operating systems, and the interaction
of hardware and software development.  We particularly encourage
contributions containing highly original ideas, new approaches, and/or
groundbreaking results.  [Full papers are due by 24 Apr 2006.]

Call for Participation - Team Software Process Symposium

< (Carol Biesecker)>
Thu, 23 Mar 2006 19:29:26 +0000 (UTC)

Team Software Process Symposium
18-20 Sep 2006, Omni Hotel, San Diego, California
Web: http://www.sei.cmu.edu/tsp/symposium.html
Theme: Measurable Improvements in Team Performance
Deadline for abstracts 28 Apr 2006

The first Team Software Process (TSP) Symposium will include all yearly TSP
activities.  The conference will bring together users, adopters, and
developers of the TSP, those involved in its development and transition, and
those who are new to the technology and eager to learn more.  Attendees will
have the opportunity to exchange ideas, concepts, and lessons learned
concerning the experiences, best practices, and suggested introduction
strategy for the TSP methods and practices.

**** All inquiries to ****
  Jodie Spielvogle, TSP Team
  Software Engineering Institute, 4500 Fifth Avenue, Pittsburgh, PA 15213
  Phone: 412 / 268-6504  FAX: 412 / 268-5758   E-mail:

REVIEW: "Network Security Tools", Nitesh Dhanjani/Justin Clarke

<Rob Slade <>>
Tue, 21 Mar 2006 10:48:31 -0800

BKNTSCTL.RVW   20051204

"Network Security Tools", Nitesh Dhanjani/Justin Clarke, 2005,
0-596-00794-9, U$34.95/C$48.95
%A   Nitesh Dhanjani
%A   Justin Clarke
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2005
%G   0-596-00794-9
%I   O'Reilly & Associates, Inc.
%O   U$34.95/C$48.95 800-998-9938 fax: 707-829-0104
%O   Audience a- Tech 2 Writing 1 (see revfaq.htm for explanation)
%P   324 p.
%T   "Network Security Tools"

The preface states that the audience for the book comprises anyone
who wants to program their own vulnerability scanners, or extend
those already available.  It assumes familiarity with six of the
major tools in that class, as well as Perl.

Chapter one deals with writing plug-ins for Nessus.  It covers the
installation and quick use of the program, and then outlines the
Nessus Attack Scripting Language, including a few sample scripts.  The
Ettercap network analyzer and its plug-ins (in the C language) are in
chapter two.  (An overview of authentication for the ftp protocol is
provided in order to discuss looking for ftp passwords.)  The Hydra
password sniffer (and SMTP authentication) is described in chapter
three, as well as the Nmap port scanner.  Chapter four looks at
plug-ins (in Perl) for the Nikto Web scanner.  The Metasploit
Framework generic exploit development platform is examined in chapter
five, which also has a brief explanation of stack overflows.  Chapter
six discusses analysis of (mostly source) code for Web applications in
a search for vulnerabilities, reviewing the PMD Java analysis tool,
and reprinting pages of Java source code.

Part two turns to writing network security tools.  Chapter seven is
primarily a tutorial on Linux kernel modules.  Using Perl to write a Web
application scanner is in chapter eight.  SQL injection, and testing for
error message responses, is examined in chapter nine.  Chapter ten covers
the use of the libpcap library for producing network sniffing utilities.
Packet injection, using the libnet library and AirJack device driver, is in
chapter eleven.

While a lot of sample code is given in this text, ultimately it is
about using a bunch of tools.  The examples and exploits are
interesting, and do provide an indication of limited types of testing
utilities that could be developed.

copyright Robert M. Slade, 2005   BKNTSCTL.RVW   20051204    or
