The RISKS Digest
Volume 24 Issue 73

Tuesday, 17th July 2007

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



CCTV biometric surveillance software fails German reliability test
Martin Virtel
Military files left unprotected online
Randall via Dewayne Hendricks
Face recognition flop
Christian Kuhtz via Dave Farber
Microsoft protects me against ... Microsoft
David de Leeuw
Jogger with iPod Struck by Lightning
Gene Wirchenko
Phone switch rootkit in Greek surveillance
Jeremy Kirk
Space Shuttle uses 2-version programming
Andrew Morton
Re: N-version programming — the errors are in ourselves
Peter Mellor
Re: Gripen: Risks of safety measures in military jet ...
Henry Baker
Peter Mellor
Re: BSoD in standardized tests
Martyn Thomas
Re: Wikipedia and Responsibility
Joe Bednorz
Re: Risk with the Mac OS X 10.4.10 version number
Dirk Fieldhouse
Search Engine Dispute Notification
Jurek Kirakowski
Exploiting Online Games, Hoglund/McGraw
Info on RISKS (comp.risks)

CCTV biometric surveillance software fails German reliability test

<"Martin Virtel" <>>
Thu, 12 Jul 2007 10:26:43 +0200

German federal police enrolled 200 commuters to test if they could use face
recognition software to pick out suspects from a CCTV feed at a train
station under real-world circumstances. The three systems tested (produced
by Cognitec, Bosch and Cross Match) failed to recognize 8 out of 10 people
they should have, even when they were fed images of people standing still on
an escalator, one of the favourite settings for this kind of biometrics.
The key factor was the poor lighting in the morning and afternoon,
when most of the test suspects passed the cameras. (The test suspects were
also fitted with RFID tags so that they could be reliably identified by the
test setup.) Even under the right conditions, the systems failed to recognize
4 out of 10 people, with a false-alarm rate of 0.1 per cent, which the
researchers considered acceptable for practical police work.

The final report [German, link below] recommends against using the systems
for identification purposes. They would only be useful under constant
lighting conditions, and either openly seeking cooperation of the persons
being checked by the biometrics software, or making them cooperate
involuntarily, by using what the report calls "eye-catchers", like changing
billboards or marquees. The report states that three-dimensional face
recognition, currently being developed, could probably do better.

Although the report points out that the systems tested are basically not yet
usable, there is still a major flaw in the design: the researchers
thought 23 false alarms per day would be acceptable. With 23 false
alarms a day, and only one or two real suspects (probably hiding their faces
behind a newspaper) crossing the cameras per week, I think you would soon
stop trusting the system.
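The arithmetic behind that skepticism is easy to check. The sketch below uses the report's 0.1 per cent false-alarm rate and best-case 60% hit rate, the roughly 23,000 daily travelers reported for the Mainz trial, and the guess of two genuine suspects per week; the suspect count is an assumption, not a reported figure:

```python
# Base-rate arithmetic for the trial's reported figures.
travelers_per_day = 23_000       # daily traffic reported for the Mainz station
false_alarm_rate = 0.001         # 0.1 per cent, as in the report
hit_rate = 0.6                   # best-case recognition rate (6 out of 10)
true_suspects_per_week = 2       # assumed, per the guess above

false_alarms_per_week = travelers_per_day * false_alarm_rate * 7
true_alerts_per_week = true_suspects_per_week * hit_rate
precision = true_alerts_per_week / (true_alerts_per_week + false_alarms_per_week)

print(f"{false_alarms_per_week:.0f} false alarms per week")   # 161
print(f"{precision:.1%} of alerts point at a real suspect")   # under 1%
```

Fewer than one alert in a hundred would point at a real suspect, which is why operators would quickly learn to ignore the system.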

The final report (28 pages, German) is available here:

Martin Virtel, Redakteur Forschen & Entwickeln Fon: +49/40/319 90 469
Financial Times Deutschland GmbH & Co KG, Stubbenhuk 3, 20459 Hamburg;
Amtsgericht Hamburg HRA 92810

Military files left unprotected online (Randall via Dave Farber's IP)

< (Dewayne Hendricks)>
July 11, 2007 5:27:10 PM EDT

[Note:  This item comes from reader Randall.  DLH]

From: Randall <>
Date: July 11, 2007 2:02:15 PM PDT
To: David Farber <>, Dewayne Hendricks <>
Subject: Oops.

Detailed schematics of a military detainee holding facility in southern
Iraq. Geographical surveys and aerial photographs of two military airfields
outside Baghdad. Plans for a new fuel farm at Bagram Air Base in Afghanistan.

The military calls it "need-to-know" information that would pose a direct
threat to U.S. troops if it were to fall into the hands of terrorists. It's
material so sensitive that officials refused to release the documents when
asked.  But it's already out there, posted carelessly to file servers by
government agencies and contractors, accessible to anyone with an Internet
connection.

In a survey of servers run by agencies or companies involved with the
military and the wars in Iraq and Afghanistan, The Associated Press found
dozens of documents that officials refused to release when asked directly,
citing troop security.  [Source: Mike Baker, Military files left unprotected
online, AP item, 11 Jul 2007; PGN-truncated good long item, not surprising]

Face recognition flop (via Dave Farber's IP)

<Christian Kuhtz <>>
July 11, 2007 2:32:40 PM EDT

Apparently the BKA (German equivalent of the FBI) tested face recognition,
spent 200K euros to test the system in a rail terminal in the city of Mainz
and basically declared it worthless in terms of being an investigative tool.
Apparently (per the article) this is the first public trial under normal,
everyday conditions (rather than having the conditions manipulated for a
good showing), and the system matched only 30% of its targets.  Even when
the lighting was modified to be ideal, it reached only 60%. The BKA
considers the system useful only if the success rate is very near 100%.

The sample size was approximately 23,000 travelers per day over a period of
roughly 3 months.  The targets were 200 commuters who had volunteered for
the trial and travel through this rail terminal at least once per day.

The BKA concluded that this system is not suitable for surveillance and
facial-recognition tasks such as matching suspects in a manhunt.

The article is in German; try your favorite mechanized translator.  If
there's enough demand, I happen to be bilingual and might be convinced to do
a translation in my spare time. ;-)


Microsoft protects me against ... Microsoft

<David de Leeuw <>>
Tue, 17 Jul 2007 08:07:19 +0200

After the latest monthly automatic updates for Windows XP,
I got the following message on my screen:

  Data Execution Prevention: Microsoft Windows

  To help protect your computer, Windows has closed this program.
  Name: Windows Explorer
  Publisher: Microsoft Corporation

  Data Execution Prevention helps protect against damage from viruses
  and other security threats.   What should I do?

Here is the screen picture:

I will leave it to the Risks readers to find a creative explanation.

David de Leeuw, Medical Computing Unit, Ben Gurion University of the Negev
Beer Sheva, Israel

  [Actually, a Beer sounds like a good idea,
  after which you could Sit Shiva for your PC.  PGN]

Jogger with iPod Struck by Lightning

<Gene Wirchenko <>>
Thu, 12 Jul 2007 21:09:00 -0700

A Canadian jogger happened to be carrying an iPod at the wrong place at the
wrong time. Lightning struck his body during a thunderstorm, and the current
ran along the path of the earphones and into his head, causing injuries to
his jaw and eardrums. The patient's physicians say the combination of
sweat and the metal earphones directed the current to his head.

Phone switch rootkit in Greek surveillance (Re: RISKS-24.72)

<"Peter G. Neumann" <>>
Thu, 12 Jul 2007 11:10:18 PDT

Jeremy Kirk, Greek spying case uncovers first phone switch rootkit, 12 Jul 2007

A highly sophisticated spying operation that tapped into the mobile phones
of Greece's prime minister and other top government officials has
highlighted weaknesses in telecommunications systems that still use
decades-old computer code, according to a report by two computer scientists.

The spying case, where the calls of around 100 people were secretly tapped,
remains unsolved and is still being investigated. Also complicating the case
is the questionable suicide in March 2005 of a top engineer at Vodafone
Group in Greece in charge of network planning.

A look into how the hack was accomplished has revealed an operation of
breathtaking depth and success, according to an analysis on IEEE Spectrum
Online, the Web site of the Institute of Electrical and Electronics Engineers.

The case includes the "first known rootkit that has been installed in an
[phone] exchange," said Diomidis Spinellis, an associate professor at the
Athens University of Economics and Business, who authored the report with
Vassilis Prevelakis, an assistant professor of computer science at Drexel
University in Philadelphia.

A rootkit is a special program that buries itself deep in an OS for
malicious activity and is extremely difficult to detect. This rootkit
disabled a transaction log and enabled call monitoring on four switches
made by Telefonaktiebolaget LM Ericsson within Vodafone's equipment. The
software enabled the hackers to monitor phone calls in the same way law
enforcement would, minus the required court order. The software allowed for
a second, parallel voice stream to be sent to another phone for monitoring.

The intruders covered their tracks by installing patches on the system to
route around logging mechanisms that would alert administrators that calls
were being monitored. "It took guile and some serious programming chops to
manipulate the lawful call-intercept functions in Vodafone's mobile
switching centers," the authors wrote.

The secret operation was finally discovered around January 2005 when the
hackers tried to update their software and interfered with how text messages
were forwarded, which generated an alert. Investigators found hackers had
installed 6,500 lines of code, an extremely complex coding feat.

"The size of the code is not something that somebody could hack in a
weekend," Spinellis said. "It takes a lot of expertise and time to do that."

The investigation, which included a Greek parliamentary inquiry, netted no
suspects, due in part to key data that was lost or destroyed by Vodafone,
the authors wrote. It's not known if the hack was an inside job.

Vodafone may have been able to discover the scheme sooner through
statistical call analysis that could have linked the calls of those being
monitored to calls to phones used to monitor the conversations, they wrote.
Carriers already do that sort of analysis, but more for marketing than security.
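The kind of statistical call analysis the authors describe can be sketched in a few lines. The record format, the time window, and all data below are invented for illustration; the idea is only that a parallel "shadow" voice stream leaves behind a phone whose activity correlates suspiciously with the monitored numbers:

```python
from collections import Counter

def correlated_phones(monitored_calls, all_calls, window=5.0):
    """Count, for each phone, how often one of its calls starts within
    `window` seconds of a call by a monitored number.  A phone receiving
    a parallel voice stream would score far above the background rate."""
    counts = Counter()
    for _, t_mon in monitored_calls:
        for phone, t in all_calls:
            if abs(t - t_mon) <= window:
                counts[phone] += 1
    return counts.most_common()

# Toy data: (phone, call start time in seconds)
monitored = [("target", 100.0), ("target", 500.0)]
others = [("shadow", 101.0), ("shadow", 502.0), ("innocent", 300.0)]
print(correlated_phones(monitored, others))  # [('shadow', 2)]
```

A real analysis would of course have to control for coincidental overlaps across millions of call records, but even this crude co-occurrence count singles out the shadow phone.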

But the defense against rogue code, viruses and rootkits is complicated due
to how telecom infrastructure has developed. "Complex interactions between
subsystems and baroque coding styles (some of them remnants of programs
written 20 or 30 years ago) confound developers and auditors alike," the
report said.

Space Shuttle uses 2-version programming

<"andrew morton" <>>
Fri, 13 Jul 2007 12:22:05 -0700

> The Space Shuttle does *not* use N-version programming - it uses identical
> instances of the same software, and uses redundancy to account for hardware
> failures.  Again, a good explanation of the methodology used is at

I wonder if Jeremy read the Wikipedia article he linked to...  currently it says:

  "The Backup Flight System (BFS) is separately developed software running
  on the fifth computer, used only if the entire four-computer primary
  system fails. The BFS was created because although the four primary
  computers are hardware redundant, they all run the same software, so a
  generic software problem could crash all of them."

Space Shuttle uses 2-version programming

<Peter G Neumann <>>
Mon, 16 Jul 2007 13:38:40 PDT

As I understand it, the following is true: the FIFTH computer is not fully
functional — it is intended to have just enough programming to land the
shuttle in the event that the four main computers all fail.  Testing it
safely under live conditions where the first four computers are inoperable
is essentially undesirable, if not practically impossible.  The fifth system
has never been invoked.  Worse yet, it has most likely not been maintained
for compatibility with the other four.  That is not what is generally
thought of as N-version programming for N=2 in the realistic sense of the
word, although it might be considered so for the stark subset of the
functionality.  It is more like a hot standby fail-safe mechanism.

Re: N-version programming — the errors are in ourselves

Sat, 14 Jul 2007 12:08:08 EDT

Regarding the thread in RISKS-24.71 and 72, the results of Knight and
Leveson's famous N-version experiment show that, if any three of the
replicates from among those they had written were combined in a
two-out-of-three voting configuration, the resulting fault-tolerant system
would have a probability of failure 19 times smaller than one of the
replicates on its own.

This is not as large an improvement as fully independent failures would
yield, but it is still significant.
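For comparison, the improvement that *fully* independent failures would predict is easy to compute: a two-out-of-three voter fails only when at least two versions fail on the same input. The single-version failure probability below is an illustrative assumption, not a figure from the experiment:

```python
def voting_failure_prob(p):
    """Failure probability of a 2-out-of-3 voter, assuming the three
    versions fail independently, each with probability p."""
    return 3 * p**2 * (1 - p) + p**3

p = 1e-3                      # illustrative single-version failure probability
improvement = p / voting_failure_prob(p)
print(f"{improvement:.0f}x improvement under independence")  # roughly 334x
```

The gap between the roughly 334x predicted under independence and the 19x actually observed is a measure of how correlated the versions' failures were.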

Peter Mellor +44 (0)20 8459 7669   Mobile: 07914 045072

Re: Gripen: Risks of safety measures in military jet ... (R-24.72)

<Henry Baker <>>
Thu, 12 Jul 2007 11:19:16 -0700

This delay-caused pilot-induced-oscillation reminds me of trying to drive
some of the simulated vehicles in current video-game environments.  The
video (and other) effects are stunning, but the experience is marred by the
delays between the controls and the perceived video.  Unlike driving a real
car at >100mph, for example, where the effects of control inputs are felt
immediately, the control inputs in videogame-simulated vehicles have a
noticeable delay.  These delays can cause uncontrollable oscillations if not
consciously damped by the gamer.

Analogously, a gamer who gets into a real car and attempts to go >100mph
will find the opposite situation — he is expecting a delay, but instead
gets instant (and potentially disastrous) results, compounded by the real
inertia of his arms & legs interfering with any recovery effort.
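The effect is easy to reproduce in a toy discrete-time control loop. Every constant below is invented, and the model is far simpler than any real flight-control or game-physics loop, but it shows how a feedback delay turns a well-damped correction into a growing oscillation:

```python
def max_overshoot(delay_steps, gain=0.8, steps=40):
    """Proportional controller chasing a fixed target, but acting on an
    observation that is `delay_steps` samples stale.  Returns the largest
    distance from the target seen during the run."""
    history = [0.0] * (delay_steps + 1)   # observed positions, oldest first
    pos, target, worst = 0.0, 1.0, 0.0
    for _ in range(steps):
        observed = history[0]             # stale feedback
        pos += gain * (target - observed) # correct toward the target
        history = history[1:] + [pos]
        worst = max(worst, abs(pos - target))
    return worst

print(max_overshoot(0))   # no delay: converges smoothly
print(max_overshoot(3))   # same gain, 3-sample delay: oscillation grows
```

With zero delay the controller settles; with the same gain and a three-sample delay it keeps correcting against positions it has already left, exactly the pilot-induced-oscillation pattern described above.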

Re: Gripen: Risks of safety measures in military jet aircraft

Sat, 14 Jul 2007 11:52:49 EDT

In RISKS-24.71 Paul E. Black quotes "maddogone" as saying,

  "The tests show it was the G-suit which activated the ejection. ... when
  it filled with air it pressed against the release handle"

In  RISKS-24.72 Matt Jaffe <> quotes him and writes:

> I am unfamiliar with the Gripen but back in my day, more decades ago than
> I care to think about, US ejections seats were activated by handles of one
> sort or another and none of the handles in the aircraft I am familiar with
> could be activated by simple pressure (of an inflating G-suit).

In the early 1990's I spoke to a manufacturer of ejector-seats.  Ejection
was initiated by an upward pull on a handle positioned between the pilot's
legs.  The procedure was for the pilot to pull on the handle with the right
hand, with the left hand gripping the right wrist.  My contact explained
that this was not because the handle was particularly stiff to operate
(although it was not "hair-trigger") but in order to ensure that the pilot
took his left arm with him when he left.

Little chance of the inflation of a G-suit, or G-force alone, causing
unintentional operation in that case.  (I don't know whether this applied
specifically to the Gripen.)

With aerodynamically unstable aircraft, the situation is different.  If the
FCS goes down, the aircraft might break up within half a second or so,
depending on the airspeed and attitude, and I was given to understand that
ejection would be automatic, i.e., initiated without manual input from the
pilot.

Perhaps someone familiar with the Eurofighter could supply some
authoritative information.

Peter Mellor +44 (0)20 8459 7669   Mobile: 07914 045072

Re: BSoD in standardized tests (Epstein, RISKS-24.67)

<Martyn Thomas <>>
Sun, 20 May 2007 09:00:42 +0100

Jeremy Epstein wrote " ...the RISKS of relying on systems that may not have
been fully tested are pretty obvious."

This comes up far too often.

How would you know a system had been fully tested?
How long would it take?
Can you think of a better way to avoid system failures than test-and-fix for a period of decades or more?

Testing is important for two main reasons:

* to try to validate the assumptions you have made about the system's
  environment;

* to detect systems that are egregiously bad, so that you can scrap them
  and start again.

Computer scientists and programmers were saying all this 25 years ago. We
won't improve much on the current failure rates of projects until we accept
it, and act on it.

Re: Wikipedia and Responsibility (Weinstein, RISKS-24.72)

<Joe Bednorz <>>
Sat, 14 Jul 2007 17:44:05 GMT

* Immediate irresponsible editing, hugely magnified by Google, drives

* Wikipedia survives through advanced blame-shifting.  (Credit Seth
  Finkelstein for that insight.)

Changing either would destroy Wikipedia.

Why that won't happen:

* "One character who's laughing all the way to the bank, though, is Wales

* "Almost all of Wikipedia's 1,000-odd "administrators" receive no pay for
  their hard work other than the pleasure of power tripping - seeing nothing
  of the $14m of VC money Wikipedia co-founder Jimmy Wales has banked."

1. "Wikipedia defends Reality", from The Register.

2. "Farewell, Wikipedia?", The Register

Re: Risk with the Mac OS X 10.4.10 version number (Yip, RISKS-24.72)

<"Dirk Fieldhouse" <>>
Thu, 12 Jul 2007 13:15:32 +0100

I have several counter-risks here:

* writing applications that ignore the known (perhaps sometimes non-trivial)
  best practice, which is to detect the capabilities required by the
  application (and which, as I discover, has been supported by the Gestalt()
  API since Classic OS 6.0.4)

* if not using the best practice, writing applications that depend on the
  third component of the OS version number;

* if detecting a minor OS version, writing applications that refuse to run
  instead of displaying a warning dialogue.

Having said which, the definition of MAC_OS_X_VERSION_ACTUAL does seem
incredibly short-sighted.
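Leaving Apple's specific constants aside, the underlying trap, that "10" sorts before "9" when version components are compared as text, is easy to demonstrate. A minimal Python sketch (the helper function is mine, not any platform API):

```python
def version_tuple(v):
    """Split a dotted version string into a tuple of ints so that
    components compare numerically rather than lexicographically."""
    return tuple(int(part) for part in v.split("."))

# As plain strings, 10.4.10 wrongly sorts before 10.4.9:
print("10.4.10" < "10.4.9")                                # True (wrong order)
print(version_tuple("10.4.10") > version_tuple("10.4.9"))  # True (correct)
```

The same trap bites any fixed-digit numeric encoding that reserves a single digit per component: a bug-fix number of 10 simply doesn't fit.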

Search Engine Dispute Notification (Re: Weinstein, RISKS-24.72)

<"Kirakowski, Jurek" <>>
Wed, 20 Jun 2007 13:24:22 +0100

To see if I understand Lauren Weinstein's premise correctly, let me give an
example: my company has a web site [A] on which we advertise a particular
product that we have created and sell. A competitor sets up a web site [B]
hosted in some odd place which gets either more, or round about the same
number of, hits on a search engine when someone seeks information on
distinctive keywords to do with my site [A].  This competitor's web site [B]
contains derogatory, possibly misleading, and certainly unflattering
information about my company and its product. It may even pretend to be my
company and might sell a similar product or a clone of mine.

Search engines will do as search engines do, and [A] and [B] are likely to
come close in keyword searches, because the skill of [B] lies precisely in
second-guessing the algorithms.

Getting court orders against [B] will take ages, and may not be effective.

Lauren proposes that a 'dispute register' be set up in which [A] can
register that [B] and [A] are in dispute about content. The entry in the
register can't afford to make veracity claims or to take sides. It can only
note that there is a dispute between [A] and [B], which dispute has been
notified to the register by the owners of [A], [B], or both. If there is
an attempt by the register to make veracity claims, then a clever faker of
site [B] could tie up a process indefinitely with specious arguments (and oh
boy, have we all heard some lulus!)

The best way for the register to work is that if a searcher finds either [A]
or [B] they will also be given a link to the entry in the register.

However, if the searcher has to go to a special list at, say,
then if I were [B] I would certainly want to draw the searcher's attention
to the entry in this list and rely on my ability to scam. If I were [A] it
would not matter: someone has already found my site [A] and I would either
warn the searcher of counterfeit sites, or present my information in such a
way that it would be convincing.

Either way, this makes the job of [B] even easier. All [B] has to do now is
to set up a bogus site, never mind the keywords or any expertise in getting
noticed by a search engine. Having set up his misleading site, he then
notifies the register that [A] and [B] are in dispute as if he were the
aggrieved party.

And so even if it works for a small percentage of searchers, [B] has made
his hit.

The only real cure is web savvy and siting oneself within web communities.
It may take a while for this to sink in (how many people STILL get caught in
'Lotto' scams?), but on the web where there is a lot of free information the
seeker should understand that the rule CAVEAT EMPTOR applies. Let the buyer
beware.

My best defence as [A] is as follows:

I contact sites which reference mine [C], [D]...  and ask them to put a note
next to their listing of [A] saying something to the effect that the reader
should be aware that bogus sites have appeared (not giving their URLs!) A
person browsing for the distinctive keywords of my site will likely find
mention of my site on other sites indexed by the same keywords [C], [D]...
and will find this information. This is not a route available to the bogus
site owner [B] who does not have the same peer network as I do. It will be
in the best interests of [C], [D]... to assist me in this as they themselves
may one day come under attack in this way.

If someone browses using the distinctive keywords they will get [A], [B],
[C], [D]... and will see that there is a problem between [A] and [B].

I offer this more in the spirit of a 'straw man' since there must be an
obvious rejoinder which unfortunately this morning I just can't see.

Exploiting Online Games, Hoglund/McGraw

<"Peter G. Neumann" <>>
Sun, 15 Jul 2007 13:32:35 PDT

Greg Hoglund and Gary McGraw
Exploiting Online Games:
Cheating Massively Distributed Systems
(with a foreword by Ed Felten),
provides some background on the book.

Gary McGraw wrote:

  The most interesting thing to me about EOG is that I believe the kinds of
  time and state errors found in MMORPGs [massively multiplayer online
  role-playing games] like World of Warcraft are indicators of what we can
  expect over the next decade as SOA actually catches on.  You see, moving
  around state between gazillions of clients and a central server in real
  time is a huge security challenge.  Most software people screw it up.
  Darkreading wrote a little story about this:

  The book is packed with real code, hard-core examples, and things you can
  try yourself.  Give it a spin!

For multiplayer game developers, the book is a goldmine on virtual-world
security — particularly what needs to be learned from the RISKS Experience.
For RISKS readers not really interested in games per se, there is still much
grist for the mill in this book.  The subtitle of the book is perhaps the
real hook, exploring what developers of large complex distributed systems
need to learn and mistakes not to make.  A quote from Avi Rubin is pithy:
"Every White Hat should read it.  It's their only hope of staying only one
step behind the bad guys."  PGN
