The RISKS Digest
Volume 23 Issue 40

Thursday, 3rd June 2004

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Problems due to misfiled fingerprints
PGN
Building the A380: Just Like Software
Rex Black
eVoting standards and testing
Rebecca Mercuri
Re: Risks of believing in testing
Ken Knowlton
Spencer Cheng
Users, learning from history, social engineering, planning
Gadi Evron
Detectives follow the money trail to tackle spam
NewsScan
Are passwords passé?
NewsScan
Re: Boa triggers blackout in Honduras
Ralph Barone
The lighter side of electronic voting
Marcus L. Rowland
Re: New GAO Report on Government Data Mining
Robert I. Eachus
Data Mining: Federal Efforts Cover a Wide Range of Uses
Monty Solomon
Daft security questions
Ian Chard
Info on RISKS (comp.risks)

Problems due to misfiled fingerprints

<"Peter G. Neumann" <neumann@CSL.sri.com>>
Wed, 2 Jun 2004 00:12:10 GMT

If you Google "Sanchez Rosario fingerprints", you get a remarkable case of a
man named Rene Ramon Sanchez who bore absolutely no resemblance to Leo
Rosario, a drug dealer and deportation candidate, but whose fingerprints
seemingly matched those of Rosario.  To make a long story very short, the
problem was that Rosario's fingerprints had been substituted for Sanchez's.
A judge insisted that fingerprints never lie, causing Sanchez an enormous
amount of grief.  In fact, Sanchez was arrested three times for Rosario's
crimes, spent two months in custody, and was threatened with deportation.
Thus, a single error propagated and the justice system failed to correct it.
[Source: Benjamin Weiser, Can Prints Lie? Yes, Man Finds to His Dismay, *The
New York Times*, 31 May 2004; PGN-ed]

  [Please dig up the full text.  This is a grueling parable for our times,
  as was the Brandon Mayfield case noted in RISKS-23.38 and 39.  PGN]


Building the A380: Just Like Software

<Rex Black <rexblack@ix.netcom.com>>
Sun, 30 May 2004 18:10:49 +1200

I don't have the references right now, but I read an article in the *Asian
Wall Street Journal* on 28 May 2004 that talked about the A380 development
project.  (The A380 is Airbus' superjumbo, double-deck jet.)  Apparently,
lots of new, previously unused technologies and materials are involved.  The
project management methodology: commit to fixed delivery deadlines for major
clients like Singapore Airlines before the composite wing struts and
supports are fully tested.

Hey, if it's good enough for software that ends up in critical information
infrastructure, it's good enough for a 500+ passenger airplane, right?

Rex Black Consulting Services, Inc., 31520 Beck Road, Bulverde, TX 78163 USA
www.rexblackconsulting.com  +1-830-438-4830 rex_black@rexblackconsulting.com


eVoting standards and testing

<"R. Mercuri" <notable@mindspring.com>>
Mon, 31 May 2004 07:54:35 -0400

On eVoting Standards and Testing, see two articles from 30 May 2004:

Elise Ackerman of the *San Jose Mercury News* reported on CA Secretary of
State Kevin Shelley's difficulties in obtaining information about the voting
systems' so-called "federal testing" by the so-called "independent" testing
authorities:
  http://www.mercurynews.com/mld/mercurynews/news/politics/8797832.htm

*The New York Times* also ran an editorial pointing out the lack of
transparency in the election system testing process.
  http://www.nytimes.com/2004/05/30/opinion/30SUN1.html

It is important to note that the U.S. Federal government does NOT oversee
the testing of election equipment; this is a privatized process paid for by
the voting system vendors.  Nor are there any true Federal "standards" for
election systems — the FEC's guidelines are not Federally required and have
not been mandated by all of the states.  Opponents of voter-verified paper
ballots (VVPB) claim that the ballot printers are not implementable because
there are no "Federal standards" — yet there is NOTHING in the FEC
guidelines that PREVENTS a state (or a vendor) from ADDING security features
(such as VVPB) — so this argument is just a smokescreen!  In fact, the FEC
declined to include such guidelines in their 2002 VSS document, despite
having been alerted to the necessity of independent auditability for
electronic voting systems by numerous scientists who had been requested to
provide formal remarks (see my comment at
www.notablesoftware.com/evote.html). California, thankfully, is creating its
own VVPB standards — other states should be encouraged to do likewise.
(Note that there are FEC guidelines for optically scanned ballots, which are
VVPB.)

The new Election Assistance Commission has yet to establish its technical
commission — the group that was supposed to create a new set of standards,
presumably BEFORE states were to rush out and spend their HAVA money on
voting equipment — but now the $3B in HAVA funding is a true
cart-before-the-horse porkbelly (to mix metaphors!) and the $30M that NIST
was supposed to receive for their participation in the formation of the HAVA
standards has yet to materialize.  The IEEE is working on a voting system
standard, but it is currently limited to vote casting equipment and the
committee is heavily vendor controlled.
  http://grouper.ieee.org/groups/scc38/index.htm

Voting system standards are of increasing concern, because vendors and
election officials claim that products are safe because they have been
"Federally certified" yet investigations have revealed uncertified software
used in equipment in California, Indiana, and Georgia. The standards lack
security controls that would enable tracking of software changes, but even
so, they are being ignored in blatant violation of state election laws.
This situation will bear monitoring, and it is helpful that some major press
agencies have now begun to give it a closer look.


Re: Risks of believing in testing (Jewell, RISKS-23.39)

<Ken Knowlton <KCKnowlton@aol.com>>
Sat, 29 May 2004 21:30:29 EDT

Well, I've turned "store" into a macro that redundantly puts data in the
intended location and those before and after it, just to be super-safe --
because some other part of the program may contain an off-by-one error when
trying to fetch it.  It has tested perfectly.  Could I still be in trouble?

  [Certainly.  The data you are storing could be already corrupted.
  This is a problem with voting machines whose vendors claim that there
  are THREE INDEPENDENT VERSIONS OF YOUR VOTE, when all three come from
  the same possibly erroneous program.  PGN]
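
  For concreteness, here is a minimal C sketch of the kind of redundant
  "store" macro Ken describes; the names and the array layout are
  hypothetical, not his actual code.  Note how faithfully it replicates
  a wrong value three times over, which is exactly PGN's point:

    #include <stdio.h>

    /* Hypothetical sketch: write the value to the intended slot and to
       its in-bounds neighbours, so that an off-by-one fetch elsewhere
       still finds the right value.  It "tests perfectly" -- yet it
       cannot help if the value being stored is already corrupted. */
    #define STORE(arr, i, n, val)                          \
        do {                                               \
            if ((i) > 0)        (arr)[(i) - 1] = (val);    \
            (arr)[(i)] = (val);                            \
            if ((i) < (n) - 1)  (arr)[(i) + 1] = (val);    \
        } while (0)

    int main(void)
    {
        int votes[5] = {0};
        int miscounted = 41;                  /* garbage in...        */
        STORE(votes, 2, 5, miscounted);
        printf("%d %d %d\n", votes[1], votes[2], votes[3]);
        return 0;                             /* 41 41 41, three ways */
    }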


Re: Risks of believing in testing (Jewell, RISKS-23.39)

<Spencer Cheng <spencer@morphbius.com>>
Fri, 28 May 2004 22:36:02 -0400

While Chris Jewell is quite correct, the state of the industry in S/W
testing today is very inadequate. I would be very happy to have any S/W
that has actually been tested properly for functionality, even if there
are latent errors in it.

There is at least one testing technology available that can do proper
functional testing by generating optimally minimal test suites from
formal interface specifications, but industry acceptance has been very
slow.

I looked long and hard at proof of correctness in a previous job, but
outside of safety-critical systems the cost is difficult to justify. In
addition, a proof of correctness is not very useful if the
implementation changes even slightly.

I have become a firm believer over the years in proper black-box
testing, since:

1) It is the only way I know to establish that mythical S/W contract
that people have been talking about for the last 20 years.

2) Properly done, it can shield applications from changes in the
underlying system. A complete and optimal set of black-box test cases
*IS* the S/W contract, since it defines the semantics. Any
implementation change that alters interface behaviour will fail the
black-box tests (see the sketch after this list).

3) Requirements traceability can be satisfied by tracing each
requirement to the interface specification and onward to the test
cases.
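
As a concrete illustration of point 2, here is a minimal black-box suite
in C for a hypothetical clamp() interface (the function and its spec are
invented for this example, not taken from Spencer's work).  The cases
are derived only from the interface specification, never from the
implementation, so any reimplementation that changes interface behaviour
fails them:

    #include <assert.h>

    /* Spec (assumed): clamp(x, lo, hi) returns x pinned into [lo, hi]. */
    static int clamp(int x, int lo, int hi)
    {
        return x < lo ? lo : (x > hi ? hi : x);
    }

    int main(void)
    {
        assert(clamp( 5, 0, 10) ==  5);   /* in range: identity      */
        assert(clamp(-3, 0, 10) ==  0);   /* below range: pin to lo  */
        assert(clamp(99, 0, 10) == 10);   /* above range: pin to hi  */
        assert(clamp( 0, 0, 10) ==  0);   /* boundaries included     */
        assert(clamp(10, 0, 10) == 10);
        return 0;
    }

In Spencer's terms the assert lines, not clamp()'s body, are the
contract: swap in any other implementation and the same suite still
adjudicates it.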


Users, learning from history, social engineering, planning

<Gadi Evron <ge@linuxbox.org>>
Sun, 30 May 2004 20:29:40 +0200

Apparently, although the security industry has come a long way (proper
working procedures and risk analysis do exist, and billions of USD are
spent on products every year), we still haven't found a way to attack
the simplest of problems effectively.

Any problem can be addressed, but usually running a firewall or chasing
a virus outbreak takes precedence over methodology and planning.
Security professionals often ignore the simplest of issues, being
overwhelmed by work and believing the basics are covered.

Here are a few examples.

Although many organizations implement various policies and do extensive
security planning, users still find ways to get past any restriction and
go with the easiest solution, the one that lets them circumvent security
and work with ease.

I truly believe that if we provide solutions that users don't find easy
to use and integrated into their day-to-day jobs, they will give us
hell and do anything and everything to compromise the security that is
put in place.

Why should users circumvent our security? Not because we have malicious
users (or do we?) but rather because the average user does not
understand the risks. Nor should he/she. Finding solutions to problems
is our job (see comment on user education below). The average user wants
to work, and if security gets in the way... users will find a way around
our solutions.

Security should be implemented in such a way that users can do their
work as they always have; if that is not possible, and security *must*
take precedence, then the easiest solution for the user to work with
must be found.

Social engineering is another serious risk that usually doesn't get
addressed very well, if only because of the difficulty of addressing it.
You'd think we would have learned something since the 1960s; apparently
they had the very same problems then, only with different systems (see
below).

Educating users to understand the risks would, in the long run, save
serious investment in damage control and extra security measures. As I
mentioned, users want to do their jobs, but I doubt that users would
choose to become a security risk if things were explained to them.

I am partially contradicting myself. Should or shouldn't user education
be implemented? Absolutely, yes. Should we rely solely on education? No!
But education or not, if we work against the user we will lose every time.

A third issue I'd like to discuss is learning lessons and implementing
the conclusions. Nowadays we face security issues on our cell phones and
printers; 5 years ago it was the beginning of the real problems with the
Internet, and 10 years ago it was the personal computer and the BBSs
(I can't vouch for the exact years, but you get my drift).

Security issues follow us around to whatever systems we use, and it
always seems as if we run around putting out fires rather than building
systems that are secure from the get-go (naturally, I'm generalizing;
not everybody "lives in the moment" where security is concerned.
Generalizing is wrong, but I think you see where I am going with this).

Not everybody implements security planning and procedures. As a matter
of fact, most don't. Those who try often face immense bureaucratic
barriers, as well as budget and time-frame issues, that have to be dealt
with and taken into consideration.

The very basics of security [such as choosing a good password] are still
the biggest issues we have to deal with as security professionals.
Although this was discussed before, here is a paper which recently hit
/.: http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/tr500.pdf.

Side note: how many of us use the same passwords for many different
applications?

Here is a very recent /. story about a widely known "password" used in
nuclear silos in the 1960's (00000000):
http://www.cdi.org/blair/permissive-action-links.cfm
("1, 2, 3, 4, 5? That's the exact same combination on my luggage!")

To make a long e-mail short, I believe the very basics, while not being
ignored, are not given enough attention, leading to very serious
security issues and incidents.

Also, to state the obvious, if only because it keeps being overlooked:
as long as we do not plan our future services and projects with security
in mind (not to mention learning the lessons of history), we are doomed
to keep repeating our mistakes.

Oh. And educate users and try to find solutions they'd find easy to use.
Nearly forgot that one after my overzealousness.


Detectives follow the money trail to tackle spam

<"NewsScan" <newsscan@newsscan.com>>
Tue, 01 Jun 2004 11:10:11 -0700

It seems that spammers have been working overtime since the federal antispam
legislation took effect on 1 Jan 2004, and the government is now turning away
from technical fixes offered by software engineers in favor of private
investigators' expertise to boost its efforts to stem the deluge of
unsolicited e-mail. In an unusual arrangement, the Direct Marketing
Association has paid $500,000 to hire 15 investigators to work alongside
FBI agents and other government officials in a program known as Project
Slam-Spam. The project has built a case against 50 spammers, mostly by
following the money trail and relying on informants. "Spammers are more than
willing to rat each other out," says Microsoft investigator Sterling
McBride. "The most useful information is who pays for various aspects of the
spam operation," says attorney David Bateman, who represents Microsoft in
spam cases. "To spam, you need four or five things — a hosting service, a
domain name, mailing software, mailing lists and so on. Each one you have to
purchase from someone." Microsoft has filed 53 civil cases against spammers
in the last 15 months, based on the work of its investigation team. "The
real key is trying to figure out how to connect the virtual world" with
"someone you can hold responsible for this," says McBride. Once you've
nailed that down, "you can use all the tools of a normal investigation."
[*The New York Times*, 31 May 2004; NewsScan Daily, 1 June 2004]
  http://www.nytimes.com/2004/05/31/technology/31spam.html


Are passwords passé?

<"NewsScan" <newsscan@newsscan.com>>
Tue, 01 Jun 2004 11:10:11 -0700

Scandinavian countries are at the forefront of a movement to ditch
conventional passwords in favor of so-called two-factor authentication.
These "password-plus" systems use things like disposable cards with
scratch-off codes in conjunction with the usual four-digit PIN for online
banking and other secure transactions. Each code is used once, and the bank
replenishes the supply by sending a new card when the customer is running
low. "A password is a construct of the past that has run out of steam," says
Identix CEO Joseph Atick. "The human mind-set is not used to dealing with so
many different passwords and so many different PINs." Other "password-plus"
options include Vasco Data Security International's pocket-sized device that
issues a random second code each time you type your regular password in. Or
MasterCard International's system, which requires swiping your "smart"
credit card through a special reader and entering your PIN to obtain a
single-use password good at Office Max, British Airways and a dozen other
merchants. And while U.S. banks are well aware of the perils of password
theft, they're "all afraid of making the first step," says a Gartner
analyst. "They don't want consumers going to other banks because it's too
hard."   [AP/*The Washington Post*. 1 Jun 2004; NewsScan Daily, 1 June 2004]
  http://www.washingtonpost.com/wp-dyn/articles/A5693-2004Jun1.html
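
A rough sketch of the scratch-card mechanism described above, with
made-up codes (a real bank would store salted hashes server-side rather
than plaintext, and in a database rather than an array): each code is
honoured at most once, as a second factor alongside the usual PIN.

    #include <stdio.h>
    #include <string.h>

    #define NCODES 3
    static const char *codes[NCODES] = { "4812", "9033", "2764" };
    static int used[NCODES];                /* 1 = already scratched  */

    /* Accept a code only if it is on the card and not yet used. */
    static int redeem(const char *attempt)
    {
        for (int i = 0; i < NCODES; i++)
            if (!used[i] && strcmp(codes[i], attempt) == 0)
                return used[i] = 1;         /* burn it: single use    */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", redeem("9033"));     /* 1: accepted            */
        printf("%d\n", redeem("9033"));     /* 0: replay rejected     */
        return 0;
    }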


Re: Boa triggers blackout in Honduras (Luntzel, RISKS-23.39)

<"Barone, Ralph" <Ralph.Barone@bchydro.bc.ca>>
Mon, 31 May 2004 09:33:30 -0700

Of course, the humourous aspect of this is that the "Ads by Google" section
below the article contains three advertisements for feather boas.  Ad
selection by algorithm has a way to go yet...

  [So have automatic pun detectors and pun generators!  PGN]


The lighter side of electronic voting (Miller, RISKS-23.39)

<"Marcus L. Rowland" <forgottenfutures@ntlworld.com>>
Fri, 28 May 2004 20:58:59 +0100

> forgetting to check the capacity of the electrical infrastructure is
> exactly the kind of bird-brained planning failure that would surprise no
> one on RISKS.

The number of outlet sockets isn't necessarily an indication of capacity.

I work as a technician in a London school. Two weeks ago we ran our AS level
physics practical exam; it's an important qualification, one of the last
exams before university.

One of the questions called for the use of boiling water, and suggested
providing one kettle between two students doing the experiment. We had
twelve students and it was one of two experiments, so at any given time six
students would need hot water. So we followed their advice and provided
three kettles.

We then found, as the exam started, that if all three were switched on
simultaneously the main breaker for the lab blew. Repeatedly. Somehow a lab
with 60+ 13A mains sockets had all of its electricity connected to one 16A
circuit breaker, and couldn't take three 10A kettles. In fact, three out of
the four labs on that floor are like that; only the smallest of the four has
a 32A breaker. Amazingly, none of these labs had ever tripped a main breaker
before.
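
The arithmetic deserves spelling out, since apparently nobody did it
beforehand; even a trivial check like the following (ratings taken from
the account above) would have flagged the problem before exam day:

    #include <stdio.h>

    int main(void)
    {
        int kettles = 3, kettle_amps = 10, breaker_amps = 16;
        int load = kettles * kettle_amps;              /* 30 A */
        printf("load %d A on a %d A breaker: %s\n",
               load, breaker_amps,
               load > breaker_amps ? "trips" : "holds");
        return 0;
    }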

In the end we ran extension cables in from the corridor and the adjoining
lab and gave the students a few extra minutes. Our report to the examiners
will make interesting reading.

Marcus L. Rowland             http://www.forgottenfutures.com/
LJ:ffutures     http://homepage.ntlworld.com/forgottenfutures/


Re: New GAO Report on Government Data Mining (Steinhardt, R-23.39)

<"Robert I. Eachus" <rieachus@comcast.net>>
Sat, 29 May 2004 14:09:06 -0400

If these are worrisome to the ACLU, my reaction is to say go away and stop
bothering me.  Let's turn the scare quotes around, and see what the purpose
of these programs is.

Verity K2 Enterprise seems to be reorganizing information on foreign
terrorism activities so that it can be retrieved by looking up the name
of one of the individuals involved.  Yes, there is a potential for abuse
where individuals
share the same name, but at the same time it can also be used to exculpate
such individuals.  If your namesake was in Botswana in February, and you
were in Cedar Rapids, then either the investigator or your lawyer should
twig to the fact that there are two individuals involved.

Incidentally, in today's environment, it is a very good idea to "Google"
yourself occasionally to find out what your namesakes are up to.  As a
case in point, one of my friends found a parole violation by a murderer
in Alabama of the same first and last name, but much younger...

Analyst Notebook I2, "Correlates events and people to specific information."
This seems to be the reverse of the above.  If a suspect was in Botswana in
February, were there events of interest, such as terrorist meetings, that he
might have been involved in?  Again, yes, this is subject to misuse, but the
purpose is clearly not prosecutorial; it is to alert analysts to POTENTIAL
correlations when an individual is a suspect for other reasons.

PATHFINDER: This is clearly a metadata tool.  Its purpose is to help
understand the structure and data dictionary of existing databases.  Here I
probably need to preach a bit as well.  There is an issue in some people's
minds that government access to sophisticated computer resources could have
serious adverse consequences.  However, there are lots of public resources
available to me as a person, such as on-line databases, and data searching
tools such as Google.  To say that citizens can use these resources, but to
try to prevent the government from using the same resources is ridiculous.

What does make sense is to limit or restrict the government's ability to use
its powers to collect data which individuals would prefer to keep private.
Or when such data is needed by a government, to restrict sharing of the data
to its original purpose.

If this tool were being used to, say, integrate state driver's license
databases, that would be one thing.  But given the agency involved (the
Defense Intelligence Agency) I have to believe that the intent is to
integrate public (and possibly non-public) data on weapon systems, and who
currently owns them.

Case Management Data Mart - DHS. "Assists in managing law enforcement cases."
Sounds scary taken out of context, but legal proceedings today can get very
complex, in part due to privacy laws.  If you do have privacy concerns, you
do want the government to do a good job of tracking data involved in a case,
especially otherwise private data that was revealed in response to a
subpoena.  Again, the risk is not in having tools to manage the data, but
in the government or others doing a bad job of managing it.

Realize that part of the job of managing data is to ensure that all copies
are destroyed when that is appropriate.  And no, I am not talking about
white collar crimes here.  For example, if the government subpoenas your
bank or phone records, wouldn't you rather that all of the copies they made
were destroyed when the case was closed?  If really necessary, the originals
can be subpoenaed again.


Data Mining: Federal Efforts Cover a Wide Range of Uses

<Monty Solomon <monty@roscom.com>>
Tue, 1 Jun 2004 11:11:51 -0400

Excerpt from Recent GAO Reports and Testimony

Data Mining: Federal Efforts Cover a Wide Range of Uses. GAO-04-548, 4 May.
http://www.gao.gov/cgi-bin/getrpt?GAO-04-548
Highlights - http://www.gao.gov/highlights/d04548high.pdf

Both the government and the private sector are increasingly using "data
mining" — that is, the application of database technology and techniques
(such as statistical analysis and modeling) to uncover hidden patterns and
subtle relationships in data and to infer rules that allow for the
prediction of future results. As has been widely reported, many federal data
mining efforts involve the use of personal information that is mined from
databases maintained by public as well as private sector organizations. GAO
was asked to survey data mining systems and activities in federal agencies.
Specifically, GAO was asked to identify planned and operational federal data
mining efforts and describe their characteristics.

Federal agencies are using data mining for a variety of purposes, ranging
from improving service or performance to analyzing and detecting terrorist
patterns and activities. Our survey of 128 federal departments and agencies
on their use of data mining shows that 52 agencies are using or are planning
to use data mining. These departments and agencies reported 199 data mining
efforts, of which 68 are planned and 131 are operational. Of the most common
uses, the Department of Defense reported the largest number of efforts aimed
at improving service or performance, managing human resources, and analyzing
intelligence and detecting terrorist activities. The Department of Education
reported the largest number of efforts aimed at detecting fraud, waste, and
abuse. The National Aeronautics and Space Administration reported the
largest number of efforts aimed at analyzing scientific and research
information. For detecting criminal activities or patterns, however, efforts
are spread relatively evenly among the agencies that reported having such
efforts. In addition, out of all 199 data mining efforts identified, 122
used personal information. For these efforts, the primary purposes were
improving service or performance; detecting fraud, waste, and abuse;
analyzing scientific and research information; managing human resources;
detecting criminal activities or patterns; and analyzing intelligence and
detecting terrorist activities. Agencies also identified efforts to mine
data from the private sector and data from other federal agencies, both of
which could include personal information. Of 54 efforts to mine data from
the private sector (such as credit reports or credit card transactions), 36
involve personal information. Of 77 efforts to mine data from other federal
agencies, 46 involve personal information (including student loan
application data, bank account numbers, credit card information, and
taxpayer identification numbers).

http://www.gao.gov/new.items/d04548.pdf


Daft security questions

<Ian Chard <ian@chard.org>>
Wed, 02 Jun 2004 16:18:10 +0100

I recently registered with (yet another) online banking service, and it
seems that the quest for 'secure' authentication challenges is generating
some very odd decisions.

As part of the registration process, as is commonplace these days, I had to
think of an answer to four security questions, and was instructed only to
give answers that would be known to me and 'not, for example, to a member of
your family'.  The questions were:

- Memorable place?
- Memorable date?
- Memorable name?

These three 'memorables' are difficult to answer with things that aren't
memorable to anyone else I might know.  The first thing that occurred to me
is "birthplace, mother's date of birth, mother's maiden name", i.e.
questions that other institutions commonly ask.  I'm sure that plenty of
people would assume that was what was wanted.

- Favourite singer?

This is fatally flawed as (a) there are lots of singers I like, (b) everyone
knows my tastes in music, and worst of all (c) they change over time.  You
might as well ask 'what did you have for breakfast this morning?'

- Secret question / answer? (for password resets)

I'm not a secretive kind of guy, and with the 20-character limit on the
question I just had to give an arbitrary answer, hoping that I'd never
forget any of this other stuff.

The risk here is that, in an attempt to make the sign-in process more
secure, this institution has made it all but impossible to provide secure
responses.  The only other forms of authentication used at sign-in are the
username and password, making all of this messing about with 'memorable'
responses a bit pointless.  Ironically, the only opportunity the user has to
set something *really* secure (the secret question and answer) is wasted, as
this isn't used by the sign-in process.  Asking the user to set several
question/answer pairs would need more thought on his part, but ultimately it
would be far more secure.
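
To put rough numbers on why the 'memorables' are so weak (the pool sizes
below are assumptions for illustration, not measurements): the guessing
space of such answers is tiny next to even a modest random password.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double singers = 1000.0;         /* plausible 'favourite singer' pool */
        double dates   = 36525.0;        /* any date within ~100 years        */
        double passwd  = pow(62.0, 8.0); /* 8 random chars of [A-Za-z0-9]     */
        printf("favourite singer: ~%.1f bits\n", log2(singers));
        printf("memorable date:   ~%.1f bits\n", log2(dates));
        printf("random password:  ~%.1f bits\n", log2(passwd));
        return 0;
    }

An attacker who can make even a few thousand guesses recovers the first
two; the password, never.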
