The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 24 Issue 52

Thursday 21 December 2006


Report blames Denver election woes on flawed software
Digital cameras converted to weapons
Mark Brader
Secure Passports and IT Problems
Diomidis Spinellis
RFIDs in Malaysian license plates
An Ominous Milestone: 100 Million Data Leaks
ACM TechNews
Risks of using spelunker's tools inside the genome
Denise Caruso
Re: Yet another canceled public sector IT project
Steve Taylor
Richard Karpinski
Re: Flat train wheels
Olivier MJ Crepin-Leblond
Re: Trig error checking
Mike Martin
Richard A. O'Keefe
Dik Winter
USENIX Annual Tech '07 Call for Papers
Lionel Garth Jones
Info on RISKS (comp.risks)

Report blames Denver election woes on flawed software

<"Peter G. Neumann" <>>
Fri, 15 Dec 2006 10:30:17 PST

  [Thanks to Gene Spafford for spotting this one.]

November 2006 Election Day problems in Denver were attributed to flawed
ePollBook software from Sequoia Voting Systems ("decidedly subprofessional
architecture and construction").  A consultants' report said "The ePollBook
is a poorly designed and fundamentally flawed application that demonstrates
little familiarity with basic tenets of Web development."  Local election
officials were also slammed for their "casual approach" to important
technology.  Source: Todd Weiss, *ComputerWorld*, 13 Dec 2006

Digital cameras converted to weapons

< (Mark Brader)>
Tue, 12 Dec 2006 17:29:25 -0500 (EST)

One of the quotes in my signature collection reads: "Every new technology
carries with it an opportunity to invent a new crime."  That was Laurence
Urgenson (an assistant chief US attorney), speaking in 1987 about the first
arrests for what was later called cellphone cloning.

Well, here's another example of criminal technological improvisation:
electric shock weapons, like a Taser, produced by teenagers from disposable
digital cameras!

Mark Brader, Toronto,

Secure Passports and IT Problems

<Diomidis Spinellis <>>
Wed, 13 Dec 2006 13:12:38 +0200

In 2003 Greece, in response to new international requirements for secure
travel documents, revised the application process and contents of its
passports.  Since January 1st, 2006, passports have been issued not by the
prefectures but by the police, and since August 26th they have included an
RFID chip.  The new process has been fraught with problems; many of these
difficulties stem from the IT system used for issuing the passports.  On
December 12th, the Greek Ombudsman (human rights section) issued a special
22-page report on the problems of the new passport issuing process.  The
report is based on 43 official citizen complaints.

In the report's introduction the Ombudsman stresses the sinister symbolism
of transferring the authority for issuing passports to the police -- a body
organized under quasi-military principles: international travel has nowadays
become mainly a security issue.  The Ombudsman details many procedural
problems of the new process. At least three of them appear to be related to
the new IT system handling the passport application.

1. The system can't handle the correct entry of some names, apparently
because it doesn't support certain characters or symbols, such as the
hyphen.  (A sketch of this failure mode follows the list.)

2. If a passport application is rejected, and the citizen subsequently
appeals successfully against that decision, the IT system doesn't offer a
way to resubmit the original application; a new application has to be
completed and submitted.

3. The passport IT system appears to have been linked to databases
containing the details of wanted persons, such as fugitives and those with
pending penalties.  Thus persons appearing in the wanted-persons database
get arrested when they go to a police station to apply for a passport.
According to the Ombudsman, this is problematic for two reasons.  First, the
data in the wanted-persons file may be wrong.  Second, through this
procedure the police perform a blanket screening of all citizens who wish to
exercise their right to travel outside the country.  Paradoxically, one
other database, the one listing persons actually prohibited from leaving the
country, is not consulted when the application is filed.
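
The likely shape of problem 1 is a familiar one.  The sketch below (Python;
the rules and names are hypothetical, since the Greek system's actual
validation logic is not public) shows how a character whitelist that omits
the hyphen silently rejects perfectly legal names:

  import re

  # Hypothetical whitelist validation: the character class has no hyphen.
  NAME_RE = re.compile(r"[A-Za-z ]+")

  def accepts(name: str) -> bool:
      return NAME_RE.fullmatch(name) is not None

  print(accepts("Papadopoulos"))   # True
  print(accepts("Garcia-Lopez"))   # False -- the hyphen is refused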

In sum the Ombudsman finds that the new system of issuing passports
emphasizes the security of the travel documents at the expense of citizens'
rights, decent governance, and efficiency.

The report also contains recommendations for minimizing the effects of the
current seasonal rush, which has resulted in queues forming at 3:30 in the
morning.  The Ombudsman recommends a system for setting up appointments by
phone and the addition of seasonal staff.  However, an obvious way of
streamlining the process is overlooked.  Currently, citizens fill in paper
application forms, and police officers then enter the details into the IT
system -- typically at a snail's pace, because most of them can't
touch-type.  The whole process can easily take 15-20 minutes for a single
application.  Allowing citizens to complete the forms online would let
police officers print a form from a reference number supplied by the
applicant and have it signed in person.  This would speed up many of the
applications and would also eliminate transcription errors; the sketch
below illustrates the idea.
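
To make the suggestion concrete, here is a minimal sketch of the
pre-registration flow (Python; all names are hypothetical, and nothing like
this exists in the actual system).  The citizen submits the form online and
receives a reference number; the officer later retrieves the form by that
number, prints it, and has it signed:

  import uuid

  SUBMITTED = {}   # reference number -> form fields

  def submit_application(form: dict) -> str:
      """Citizen-facing step: store the form, return a reference number."""
      ref = uuid.uuid4().hex[:8].upper()
      SUBMITTED[ref] = form
      return ref

  def form_for_printing(ref: str) -> str:
      """Officer-facing step: fetch the stored form to print and sign."""
      form = SUBMITTED[ref]
      return "\n".join(f"{field}: {value}" for field, value in form.items())

  ref = submit_application({"surname": "Papadopoulou", "given name": "Maria"})
  print(ref)                      # the applicant brings this number along
  print(form_for_printing(ref))   # no retyping, hence no transcription errors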

Diomidis Spinellis - Athens University of Economics and Business

RFIDs in Malaysian license plates

<Peter G Neumann <>>
Sat, 9 Dec 2006 12:55:02 -0500

  [Thanks to Marc Rotenberg for this one.]

Malaysia to embed car license plates with microchips to combat theft
The Associated Press, 8 Dec 2006

Malaysia's government, hoping to thwart car thieves, will embed license
plates with microchips containing information about the vehicle and its
owner, a news report said Saturday.  With the chips in use, officials can
scan cars at roadblocks and identify stolen vehicles, the *New Straits
Times* reported.  The "e-plate" chip system is the latest strategy to
prevent car thieves from getting away with their crimes by merely changing
the plates, the report said.  (Nearly 30 cars -- mostly luxury vehicles --
are stolen every day in Malaysia.)  ...  The microchips, using radio
frequency identification technology, will be fixed into the number plates
and can transmit data at a range of up to 100 meters (about 110 yards), and
will have
a battery life of 10 years.

An Ominous Milestone: 100 Million Data Leaks

<TechNews <technews@ACM.ORG>>
Mon, 18 Dec 2006 14:05:25 -0500

*Wired News* senior editor Kevin Poulsen announced on his blog last Thursday
that with announcements from UCLA (800,000 records stolen), Aetna (130,000
records stolen) and Boeing (320,000 records stolen), over 100 million
records had been stolen since the ChoicePoint breach almost two years ago.
While perpetrators of the Aetna and Boeing laptop thefts were probably not
after personal records, the same cannot be said for the UCLA data theft,
where a hacker had been accessing the university's database of personal
information for over a year before being discovered.  A Public Policy
Institute study, using data from the Identity Theft Resource Center, showed
that of the 90 million records stolen between 1 Jan 2005 and 26 Mar 2006,
43 percent were at educational institutions. ...

Risks of using spelunker's tools inside the genome

<Denise Caruso <>>
Wed, 20 Dec 2006 10:47:40 -0800

I just published a book called 'Intervention: Confronting the Real Risks of
Genetic Engineering and Life on a Biotech Planet.'  (Details at  It focuses on the flaws in risk
assessment methods for innovations in science and technology, specifically
the scientific uncertainties that biotech risk evaluations dismiss or
ignore.
While a lot of the issues are pretty much straight-up biology and
public-policy atrocities, there are several technical foibles in the brave
new world of industrial genomics that are in serious need of some
attention.  I ended up cutting most of them out of the book because of
excessive nerditude from the layperson's perspective, but I thought RISKS
folk might find them interesting.

1. 95 percent of the gene-disease links that make headlines every time
  they're reported (e.g., the gene for diabetes, Alzheimer's, obesity,
  schizophrenia, depression, and many others) are false positives,
  attributable to the speed and efficiency with which new equipment can
  automatically sequence and analyze genes.  Since "reading" genes can now
  take about a day instead of several months, thousands of them at a time
  can be scanned quickly.  But because the sequences are analyzed in bulk
  and quickly, some of them by chance alone seem linked to a disease in a
  statistically significant way even though they aren't.  (A simulation
  follows this list.)

2. There are no standards for PCR equipment, the machines that can
  synthetically "amplify" or reproduce a single DNA sequence into a few
  bazillion identical sequences.  It's so key to research that it's been
  called the "duct tape" of genomics.  Virtually every genetics experiment
uses PCR.  But PCR is ultrahypersensitive, a situation that's exacerbated
by the way the equipment itself performs.  It's not just that results of DNA
  measurements from experiments performed on different PCR platforms are not
  necessarily comparable. One NIST staffer says that results may not be
  repeatable *even with the same equipment.*

3. For another lab workhorse, the "gene chip," the problem seems simply to
  be that it isn't sensitive enough.  Gene chips are based on a different,
  far less sensitive technology than PCR called hybridization.
  Hybridization is like having 20-20 tunnel vision -- it does great within
  its limited range, but it can detect absolutely nothing outside it.  What
  gene chips can produce is a false negative result.  False negatives in
  other kinds of tests would indicate that test subjects don't have HIV,
  when they do.  Or that anthrax DNA isn't present on the envelope in the
  Senate mailroom, when in fact it is.  In the context of risk and in the
  most obvious example -- genetic contamination -- the ability to detect a
  specific DNA or RNA sequence, or to be able to notice that a certain gene
  is not being expressed, would be a key element in determining whether or
  not there's cause for alarm.
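
A back-of-the-envelope simulation (Python; all numbers are made up for
illustration) shows how the false positives of point 1 arise.  If no gene is
truly linked to the disease, each test's p-value is uniformly distributed,
so a conventional p < 0.05 threshold still flags about 5 percent of
everything scanned:

  import random

  random.seed(1)
  genes_scanned = 10_000   # one bulk run of an automated pipeline
  # Under the null hypothesis (no real link anywhere), p-values are
  # uniform on [0, 1]; we simulate them directly.
  p_values = [random.random() for _ in range(genes_scanned)]
  hits = sum(p < 0.05 for p in p_values)
  print(hits, "'significant' gene-disease links found in pure noise")  # ~500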

With so many different points in the scientific process where the tools
themselves can introduce fundamental errors in the data, it doesn't seem out
of the question to ask what research results might be overlooking, mistaking
for something else, or simply not seeing at all.

I'd like to see this whole area broken wide open.  In my opinion, we are
messing around with the fundamental building blocks of living organisms
using tools that look very sophisticated, but seem to me more like the
equivalent of a spelunker's lamp and a pick and shovel.

Denise Caruso, Executive Director, The Hybrid Vigor Institute   Blog:

  [We have long been concerned here with the risks of overendowing risk
  assessment techniques -- and especially quantitative approaches -- along
  with the risks of misusing the results of such analyses.  Although
  Denise's book might seem to be less computer related than many other
  topics discussed in RISKS, I think there are many problems and lessons to
  be learned from what we have in common.  It is important for everyone to
  see that these problems are generic and relevant to essentially all
  technologies, not just computer systems.  PGN]

Re: Yet another canceled public sector IT project (R-24.49)

<"Steve Taylor" <>>
Mon, 18 Dec 2006 15:45:21 -0000

The commentary on this item so far has been quite interesting, and I believe
it does address a key reason why so many public projects fail.
Unfortunately, here in the UK at least, and I suspect elsewhere as well,
there are serious problems with public projects using incremental
development.  The UK OGC (Office of Government Commerce) has been promoting
"stronger" contracts between government and suppliers.  These have now
become quite onerous, and they create a situation where the whole project is
managed in a legalistic way.  This results in both sides focusing very
strongly on the original requirements as specified at the time of contract.
The incorporation of changes is seen by both sides as an opportunity for the
supplier to make some real money, and as such it is subject to a rigorous
and expensive change-management procedure.  The overall effect is to act as
a brake, preventing all but the most important changes from taking place.
The end result is all too easy to predict, and the trend is in completely
the opposite direction to the one being suggested.

The only way out of this mess is for government and its suppliers to find a
more cooperative model for operating these projects.  I strongly believe
incremental development is the way to go, and it would be sensible for
suppliers to use it, but in all too many cases there is insufficient
flexibility in the requirements.  The supplier's best interests are served
by doing what they agreed to do rather than something that will work.  Even
where a supplier is willing to be helpful, the purchasing body will often
make the administrative cost of supplier-suggested changes so high that none
are suggested.

Re: Yet another canceled public sector IT project (Ganssle, R-24.49)

<Richard Karpinski <>>
Sat, 16 Dec 2006 15:42:13 -0800

Management needs to learn more about projects so they don't fall into that
trap. They can't get and don't need a price, and if they believe in one that
they are given, then they should immediately resign as they have already
demonstrated their incompetence. All they need to know is that the initial
budget is affordable and that the initial steps of the project will probably
deliver results whose value is likely to exceed the costs.

There is certainly some attraction to the notion that the whole project can
be understood and guaranteed before any budget is allocated, but it is
completely obvious that such notions are unrealistic and misleading.
Management needs to manage, not only at the start of the project, but at
many other times before the project completion date is reached.

> Management needs a date, up front, to know if the product will hit the
> market window. We can waffle and promise to deliver a range of dates and
> costs, or we can protest mightily that such expectations are unreasonable.

I protest mightily. Even if they get a date, they cannot realistically
believe it. Even if they could believe it, they cannot be assured that the
market window will appear at the scheduled time.  Fixing the product
definition, cost, and timing all in advance of the beginning of the project
is manifest foolishness.

Give engineering a chance to work, a chance to trade off one detail against
another to maximize the value delivered to stakeholders. This does make the
resulting product less clear at the start of the project, but it IS less
clear than the advocates would have you believe. This is to say, the reality
is more unknowable than the advocates aver.

Skepticism is a required feature of good management practices. When a
proposal claims that a two or three year project can be accurately and
adequately defined and implemented with a knowable budget and that the
result will have a knowable value when it is delivered, years in the future,
the only proper management response is to laugh that proposal out of the
room.

> Yet if the project is late, so there's no revenue being generated, and as
> a result our paycheck is a dollar short or a day late, we go ballistic.

If that were the only reason for no revenue being generated, this world
would be very much easier to understand and deal with. In fact the major
reason that no revenue is generated is that the project designers did not
plan for any revenue to be generated until the project is completed and all
the money spent.

Many projects are not intended to produce a product to be marketed and then
ignored. In these cases, usable deliveries of improvements can generate
revenue, or value, long, long before the completion of the entire
project. Neglecting this fact is failing to accomplish the due diligence
aspect of management.

I suspect that many customers who recognize the need for a product to
accomplish X will be only too happy to PAY for a product which does only a
portion of the whole task, if it will be improved monthly or quarterly as
guided by customer feedback. This notion needs some testing and validation,
but several recent works stress the value of carrying on a two way
conversation with your customers. See "The Cluetrain Manifesto" for details.

> Alas, I fear this conundrum will never be resolved.

Probably true, but that does not mean it cannot be addressed and managed to
accomplish substantial improvements in project management and substantial
reduction of the risks involved. Such gains require non-traditional
approaches but they do not require silver bullets or magic or slavish
adherence to some particular method. They only require open minds, brave
hearts, skepticism, and common sense. Would we had more of those.

Richard Karpinski, 148 Sequoia Circle, Santa Rosa, CA 95401
+1 707-546-6760   "nitpicker" in subject line gets past my spam filters.

Re: Flat train wheels (Ladkin, RISKS-24.51)

<"Olivier MJ Crepin-Leblond" <>>
Sun, 17 Dec 2006 00:27:41 +0100

> In Fall, in areas with deciduous trees, a slippery film deriving
> from mulched fallen leaves can build up on rails.

More than this, it is a mulch that develops on the wheels themselves that
makes them slip.  The problem appeared in the UK on most new trains, which
had "new" (at the time) braking systems consisting of disc brakes, or pads
rubbing the *side* of the wheel.  British Rail engineers found that the
problem was less likely on older trains where the brake pads would be
applied to the rolling surface of the wheel itself (the circumference), thus
scrubbing the rolling surface clean of all the mulch every time the brakes
were applied.

Definitely a case where new technology is introducing new problems.

Olivier MJ Crepin-Leblond, Ph.D. <>  Tel:+44 (0)7956 84 1113

  [This leaves mulch to be desired.  PGN]

Re: Trig error checking (Ewing, RISKS-24.51)

<"mike martin" <>>
Mon, 18 Dec 2006 21:27:26 +1100

Martin Ewing wrote of a VAX floating point unit that exhibited intermittent
faults (RISKS-24.51).  Computers' arithmetic-logic units (ALUs) don't seem
well protected against intermittent faults.  In the early 1970s I worked
with a Burroughs B6700 computer that occasionally, when compiling a copy of
its operating system, failed the compilation with a spurious fault.  (The
core operating system was written in an Algol dialect and took about 20
minutes to compile.)

After much investigation we found that the cause was an occasional bit
dropped by the RDIV operator (which calculated the remainder from an integer
division). This rarely used operator was used by the operating system in
calculating disk segment addresses in the computer's virtual memory paging
file. When a bit was dropped in the segment address, the compiler would be
fed a chunk of garbage from the wrong page file segment and flag a syntax
error. After some detective work I wrote a program that did repeated RDIVs
and checked the results, highlighting the problem.  The fault was rare,
showing up less than once in 1000 operations.
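
For the flavor of that detective work, here is a modern analogue of such a
soak test (Python; the B6700's RDIV was a hardware operator, so this is an
illustration, not a reconstruction).  It hammers the remainder operation and
cross-checks every result against the division identity a == q*b + r:

  import random

  random.seed(0)
  for trial in range(1_000_000):
      a = random.randrange(1, 1 << 30)
      b = random.randrange(1, 1 << 15)
      q, r = divmod(a, b)          # quotient and remainder in one step
      # A dropped bit in the remainder would violate one of these checks.
      if q * b + r != a or not (0 <= r < b):
          print(f"inconsistent remainder: {a} % {b} gave {r}")
          break
  else:
      print("no inconsistencies observed")   # expected on healthy hardware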

Had the problem occurred with a more commonly used operator the result could
have been nasty.

Perhaps ironically this was one of the first B6700s that was delivered with
main memory that included Hamming code single bit error correction and
double bit error detection. But nothing detected a faulty ALU.

Mike Martin

Re: Trig error checking (Ewing, RISKS-24.51)

<"Richard A. O'Keefe" <>>
Tue, 19 Dec 2006 14:01:56 +1300 (NZDT)

I used to work at Quintus, who then made a Prolog compiler.  We were keen
to get our product in the catalogues of various computer vendors.  One
such vendor had a policy of thoroughly testing programs before accepting
them into their catalogue.  So one fine day a bug report from them landed
on my desk:  such-and-such a trig function was delivering answers that
were slightly off.  That's odd, I thought:  I wrote that code and it just
calls their code through the foreign interface, I wonder what happens if
I call that from C?  You guessed it:  the bug was in their code.  They
were testing other people's code much more thoroughly than their own.
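
The cross-check O'Keefe describes is easy to mimic.  Below is a sketch
(Python with ctypes on a Unix-like system; Quintus's actual foreign
interface was Prolog-to-C) that calls the C library's sin() directly and
compares it against an independently computed reference:

  import ctypes, ctypes.util

  libm = ctypes.CDLL(ctypes.util.find_library("m"))   # the vendor's code
  libm.sin.restype = ctypes.c_double
  libm.sin.argtypes = [ctypes.c_double]

  def sin_series(x, terms=12):
      """Independent reference: Taylor series, adequate for small |x|."""
      total, term = 0.0, x
      for n in range(terms):
          total += term
          term *= -x * x / ((2 * n + 2) * (2 * n + 3))
      return total

  for x in (0.1, 0.5, 1.0):
      print(x, abs(libm.sin(x) - sin_series(x)))   # tiny on a healthy libm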

Re: Trig error checking (Ewing, RISKS-24.51)

<Dik Winter <>>
Sun, 17 Dec 2006 03:33:54 +0100 (MET)

Martin Ewing:
 >                                              Eventually we found that our
 > VAX floating point unit (a very large circuit board) was malfunctioning.  It
 > gave slightly wrong results, but quietly - there were no system error
 > reports.  The diagnostic was that sin**2 + cos**2 was intermittently not
 > quite equal to 1 for various arguments.  [NOTE: This is a positive example
 > of circular reasoning!  PGN]

Indeed.  Even with an optimal implementation of sin and cos it is not
necessarily the case that sin**2 + cos**2 equals 1.  It is likely, but
not necessary.  Consider an angle of 45 degrees.  In IEEE single
precision the best approximation to sqrt(2)/2 is (in binary):
    0.101101010000010011110011
and the square of that is (with proper rounding):
    0.0111111111111111111111111
which is not really equal to 1/2, and I think that similar examples can be
created using radian arguments.  This particular example does not work in
IEEE double precision.  (And if you want to check with a program, be sure
that after each individual operation the result is rounded to the precision
you operate with and that you do use that rounded result.  Otherwise you
will run into another floating-point problem, one I have discussed at
length long ago.)
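
That parenthetical advice is easy to follow with NumPy's float32 type,
which rounds after every individual operation.  A minimal check (assuming
NumPy is available):

  import numpy as np

  x = np.sqrt(np.float32(0.5))   # correctly rounded single-precision sqrt(2)/2
  sq = x * x                     # single-precision multiply, product rounded
  print(x)                       # 0.70710677 (binary 0.101101010000010011110011)
  print(sq)                      # 0.49999997 -- one ulp below 1/2
  print(sq == np.float32(0.5))   # False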

There is a lot of relevance here.  Assuming that mathematical relations
also hold when doing floating-point arithmetic on computers can lead to
errors.  There is a whole field of mathematics devoted to just this
(numerical mathematics).

 > Field service got us new boards, but how could we have confidence this bug
 > was not recurring?  In the end we ran a background routine that checked
 > sin**2 + cos**2 forever.  (Today, we would make it a screensaver program.)

I do not think you checked for the whole range, otherwise you would have
found errors forever.

 > There is a RISKS issue -- how do you know your CPU is giving good results?
 > There aren't any check bits for trig functions.

Trust.  Quite some time ago (1980) Cody and Waite wrote a book containing
programs that check the basic elementary functions.  There also exists a
program that checks the basic arithmetic of computers (from memory,
"paranoia" by Kahan).  But even these did not help with the Pentium bug.
And, of course, some basic knowledge of numerical mathematics helps.

dik t. winter, cwi, kruislaan 413, 1098 sj  amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn  amsterdam, nederland;

Re: Trig error checking (Ewing, RISKS-24.51)

<RISKS List Owner <>>
Thu, 21 Dec 2006 12:01:27 PST

Of course, an incorrect cos(x) could have been computed from an incorrect
sin(x) as
    sqrt(1 - [sin(x)]**2)
in which case the sum of the squares would be IDENTICALLY 1, modulo
roundoff errors.  So that check is NOT ENOUGH.
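
A tiny sketch (Python, with a deliberately broken sine standing in for a
faulty floating-point unit) makes the point: deriving cos from sin lets the
identity hold no matter how wrong both values are:

  import math

  def bad_sin(x):
      return math.sin(x) * 1.001   # deliberately off by 0.1 percent

  def derived_cos(x):
      # cos computed FROM the bad sine, as in the scenario above
      return math.sqrt(1.0 - bad_sin(x) ** 2)

  x = 0.3
  print(bad_sin(x) ** 2 + derived_cos(x) ** 2)   # 1.0 (to roundoff), yet both
                                                 # values are wrong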

  [Incidentally, I put a correction to Martin's note in RISKS-24.51,
  changing "1990s" to "1980s" on the date of the VAX episode.  PGN]

USENIX Annual Tech '07 Call for Papers

<Lionel Garth Jones <>>
Thu, 21 Dec 2006 10:58:20 -0800

Call for Papers
2007 USENIX Annual Technical Conference
June 17-22, 2007, Santa Clara, CA
Paper Submissions Deadline: January 9, 2007

On behalf of the 2007 USENIX Annual Technical Conference program committee,
we request your ideas, proposals, and papers for tutorials, refereed papers,
and a poster session.

The program committee invites you to submit original and innovative papers
to the Refereed Papers Track of the 2007 USENIX Annual Technical
Conference. Authors are required to submit full papers by 11:59 p.m.  PST,
Tuesday, January 9, 2007.

We seek high-quality submissions that further the knowledge and
understanding of modern computing systems, with an emphasis on practical
implementations and experimental results. We encourage papers that break new
ground or present insightful results based on experience with computer
systems. The USENIX conference has a broad scope.

Specific topics of interest include but are not limited to:

* Architectural interaction
* Benchmarking
* Deployment experience
* Distributed and parallel systems
* Embedded systems
* Energy/power management
* File and storage systems
* Networking and network services
* Operating systems
* Reliability, availability, and scalability
* Security, privacy, and trust
* System and network management
* Usage studies and workload characterization
* Virtualization
* Web technology
* Wireless and mobile systems

More information on these and other submission guidelines is available
on our Web site:

Paper submissions due: Tuesday, January 9, 2007, 11:59 p.m. PST
Notification to authors: Monday, March 19, 2007
Final papers due: Tuesday, April 24, 2007

Please note that January 9 is a hard deadline; no extensions will be given.

We look forward to your submissions.

On behalf of the Annual Tech '07 Conference Organizers,

Jeff Chase, Duke University
Srinivasan Seshan, Carnegie Mellon University
2007 USENIX Annual Technical Conference Program Co-Chairs
