The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 23 Issue 83

Wednesday 6 April 2005

Contents

Cancer patients exposed to high radiation
Monty Solomon
Carjackers swipe biometric Mercedes, plus owner's finger
John Lettice via Alpha Lau
Air disasters: A crisis of confidence?
Michael Bacon
Secret Service DNA - "Distributed Networking Attack"
Brian Krebs via Monty Solomon
Yet another phishing scam
Michael Bacon
Times change ... problems don't
Louise Pryor
Re: Why IE is insecure ...
Steve Taylor
Simon Zuckerbraun
Craig DeForest
Re: Remote physical device fingerprinting
Jerry Leichter
Re: Cruise Control failures
Jay R. Ashworth
John Sawyer
Neil Maller
Markus Peuhkuri
David G. Bell
Amos Shapir
David R Brooks
New Security Paradigms Workshop submission deadline approaching
George Robert Blakley III
Info on RISKS (comp.risks)

Cancer patients exposed to high radiation

<Monty Solomon <monty@roscom.com>>
Sun, 3 Apr 2005 22:37:38 -0400

77 patients at the H. Lee Moffitt Cancer Center and Research Institute were
exposed to radiation levels 50% stronger than they were supposed to receive
because a radiation machine was improperly installed.  Physicists from the
federal Radiological Physics Center detected
the error on 7 Mar, but it was not acknowledged until 1 Apr.  According to a
report by the Florida Bureau of Radiation Control, a physicist calibrating
the machine used an incorrect formula.  Certain side-effects (headaches and
speech and memory loss) reportedly can take from 3 to 12 months to develop.
Twelve patients subsequently died (although the article did not indicate
whether it was as an iatrogenic result of the overdosing or just progressed
cancer).  [Source: AP item in *The Boston Globe*, 2 Apr 2005; PGN-ed]

http://www.boston.com/yourlife/health/diseases/articles/2005/04/02/cancer_patients_exposed_to_high_radiation/


Carjackers swipe biometric Mercedes, plus owner's finger

<Alpha Lau <avlxyz@yahoo.com>>
Mon, 4 Apr 2005 23:26:01 -0700 (PDT)

 Carjackers swipe biometric Merc, plus owner's finger
 By John Lettice - 4 Apr 2005

 A Malaysian businessman has lost a finger to car thieves impatient to get
 around his Mercedes' fingerprint security system.  Accountant K Kumaran,
 the BBC reports, had at first been forced to start the S-class Merc, but
 when the carjackers wanted to start it again without having him along, they
 chopped off the end of his index finger with a machete.

 The fingerprint readers themselves will, like similar devices aimed at the
 computer or electronic device markets, have a fairly broad tolerance, on
 the basis that products that stop people using their own cars, computers or
 whatever because their fingers are a bit sweaty won't turn out to be very
 popular.

 They slow thieves up a tad, many people will find them more convenient than
 passwords or pin numbers, and as they're apparently `cutting edge' and
 biometric technology is allegedly `foolproof', they allow their owners to
 swank around in a false aura of high tech.
 http://www.theregister.co.uk/2005/04/04/fingerprint_merc_chop/

And that is exactly where the risks lie: high-tech does not necessarily mean
high security!

At least in sci-fi, fingerprint systems check for a heartbeat or pulse!!!

  [`Cutting edge', eh?  Wow!  Incidentally, for many years I've been citing
  the concept of an amputated finger as a hypothetical way of defeating a
  poorly designed fingerprint analyzer.  It's no longer hypothetical.  PGN]


Air disasters: A crisis of confidence?

<"Michael \(Streaky\) Bacon" <himself@streaky-bacon.co.uk>>
Tue, 5 Apr 2005 10:42:47 +0100

Air disasters receive widespread press coverage.  Crashes often cause people
to cancel bookings with the affected airline.  The share price often dips,
sometimes severely, in the aftermath of an air accident.

This is also true for many other major incidents involving corporations
(i.e., not 'natural' causes).

One thing often stands between a 'crisis of confidence' and 'business as
usual', and that is the credibility of the organisation's spokespeople.

On 3 April, a Phuket Air 747 was twice forced by passenger action to abort a
take-off from the UAE when fuel was seen flowing from the wing over an
engine as the plane accelerated down the runway.  A UK-based spokesman for
the airline told the media that no-one had been in any danger and claimed
that passengers had "panicked".  He is also reported to have said that
passengers were not qualified to judge what was safe or not.  He said that
the wing tanks had been "over-filled".

Whilst I do not comment upon the accuracy or otherwise of the spokesman's
comments, I will comment on their advisability and I do suggest that this is
not a good way to manage risk.

It is reported that many passengers have now refused to fly any further with
the airline.

A contrast in risk management is provided by one British airline that
suffered two 'incidents' with the same type of aircraft some nine years
apart.  In the first, the aircraft crashed with tragic loss of life
following the (erroneous) shutdown of one engine and loss of power on the
other (faulty) engine during an emergency landing.  The Chairman of the
airline was interviewed at the scene and with tears in his eyes promised to
find out what had happened and to take every possible step to prevent its
recurrence.  The share price was not much affected, neither were bookings.
The second incident concerned the loss of oil pressure in both engines
shortly after take-off - leading to the shut-down of both engines and a
successful 'dead-stick' landing.  The loss of oil was caused by a
maintenance failure.  The airline put the 'Director of Engineering' (or
similar title) in front of the media, and he attempted to explain away the
incident as a problem with their maintenance company.  It was reported at
the time that passengers subsequently canceled bookings and the stock price
fell.

The 'what', the 'way' and the 'how' of the Chairman were believable,
those of the Director were not.

The RISK is in getting the wrong person to say the wrong thing.  Effective
crisis management involves the right thing by the right person at the right
time in the right way to the right people.

  [The first case is that of a British Midland 737-400 (RISKS-11.42).  PGN]


Secret Service DNA - "Distributed Networking Attack"

<Monty Solomon <monty@roscom.com>>
Wed, 30 Mar 2005 09:07:19 -0500

DNA Key to Decoding Human Factor: Secret Service's Distributed Computing
Project Aimed at Decoding Encrypted Evidence
Brian Krebs, *The Washington Post*, 28 Mar 2005 [PGN-ed]

For law enforcement officials charged with busting sophisticated financial
crime and hacker rings, making arrests and seizing computers used in the
criminal activity is often the easy part.

More difficult can be making the case in court, where getting a conviction
often hinges on whether investigators can glean evidence off of the seized
computer equipment and connect that information to specific crimes.

The wide availability of powerful encryption software has made evidence
gathering a significant challenge for investigators.  Criminals can use the
software to scramble evidence of their activities so thoroughly that even
the most powerful supercomputers in the world would never be able to break
into their codes. But the U.S. Secret Service believes that combining
computing power with gumshoe detective skills can help crack criminals'
encrypted data caches.

Taking a cue from scientists searching for signs of extraterrestrial life
and mathematicians trying to identify very large prime numbers, the agency
best known for protecting presidents and other high officials is tying
together its employees' desktop computers in a network designed to crack
passwords that alleged criminals have used to scramble evidence of their
crimes -- everything from lists of stolen credit card numbers and Social
Security numbers to records of bank transfers and e-mail communications with
victims and accomplices.

To date, the Secret Service has linked 4,000 of its employees' computers
into the "Distributed Networking Attack" program. The effort started nearly
three years ago to battle a surge in the number of cases in which savvy
computer criminals have used commercial or free encryption software to
safeguard stolen financial information, according to DNA program manager Al
Lewis.  ...

http://www.washingtonpost.com/wp-dyn/articles/A6098-2005Mar28.html
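The core idea -- partitioning a candidate-password list across many workers,
each hashing its share -- fits in a few lines.  The following is an
illustrative toy, not the Secret Service's actual DNA software; the hash
choice, wordlist and function names are invented for the example, and threads
stand in for the agency's networked desktop machines.

```python
# Toy sketch of distributed password cracking: deal a candidate list
# out to workers, each hashing its share until one finds a match.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def check_chunk(target_hash, words):
    # Hash each candidate and compare against the target digest.
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == target_hash:
            return w
    return None

def crack(target_hash, wordlist, workers=4):
    # Round-robin the candidates into one chunk per worker.
    chunks = [wordlist[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        futures = [ex.submit(check_chunk, target_hash, c) for c in chunks]
        for f in futures:
            if f.result() is not None:
                return f.result()
    return None

target = hashlib.sha256(b"hunter2").hexdigest()
print(crack(target, ["letmein", "password", "hunter2", "qwerty"]))  # hunter2
```

The "gumshoe" part of the article is what makes this tractable: a dictionary
built from words found on the suspect's own disk shrinks the search space far
below brute force.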


Yet another phishing scam

<"Michael \(Streaky\) Bacon" <himself@streaky-bacon.co.uk>>
Mon, 4 Apr 2005 07:09:26 +0100

The Internet payments company PayPal is a natural target for phishing scams.
The latest has both amusing and serious issues.

Received on 3 April, it refers to "8 April" as the date on which "unusual"
activity" was identified ... clearly the phishermen (I do hope that's not
non-PC) have conquered time travel (but one therefore queries why they need
to phish).

The fonts change throughout the e-mail, in one instance within a sentence.
The formatting is poor too.

There is the usual link to click.  This points to an IP address that appears
to be hosted in India (I am in UK).

It also refers to (but does not provide a clickable link to)
"https://www.paypal.com/us/" - an authentic PayPal website and indicates
that you should type this into your browser ... which is good practice.

When the false link is clicked, a page loads from the IP address.  This page
then reports an error and loads another page that shows
"https://www.paypal.com/cgi-bin/webscr?cmd=3D_login-run" in the Address box
and status line.  It does not, however, show a 'locked' icon on the status
line.  This is, of course, a 'false flag' page ... but it is good enough to
fool more people than many other phishing scams.

I'm not a techie, so do not purport to understand how this works.  For the
real experts out there, the clickable phishing address is
http://61.95.206.3/.paypal.com/cmdr_login/error.html .
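For what it's worth, the tell in that address -- a raw IP host with the brand
name buried in the path -- is mechanically detectable.  A minimal sketch using
Python's standard library (the function name and `brand` parameter are my own,
for illustration; real phishing filters use many more signals):

```python
import ipaddress
from urllib.parse import urlparse

def looks_like_phish(url, brand="paypal.com"):
    """Flag URLs whose host is a raw IP address but whose path
    name-drops a well-known brand -- the pattern in the scam above."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    try:
        ipaddress.ip_address(host)   # raises ValueError unless host is an IP
        host_is_ip = True
    except ValueError:
        host_is_ip = False
    return host_is_ip and brand in parsed.path.lower()

print(looks_like_phish("http://61.95.206.3/.paypal.com/cmdr_login/error.html"))  # True
print(looks_like_phish("https://www.paypal.com/us/"))  # False
```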

The RISKS?  As we get more sophisticated ... so do the crooks.

The 'saving grace'?  Most crooks are not that clever.


Times change ... problems don't (RISKS-23.82)

<Louise Pryor <pryor@pobox.com>>
Thu, 31 Mar 2005 16:24:12 +0100

The clocks changed in the UK at the weekend, as they do twice a year. So
you'd think that computer systems would be able to cope, and that there
would be no major disruption. And, on the whole, you'd be right, though you
wouldn't necessarily know it from the press coverage.

About 1,500 Barclays ATMs (out of a total of about 4,000) were out of action
for over 12 hours on Sunday. We were told that a manager put the clocks back
rather than forward, and that this mistake had caused the problems. The
Daily Telegraph carried a leader opining on the lessons that Barclays could
learn from its employee's blunder.  http://makeashorterlink.com/?M170229CA

But hang on a minute: A real live person, changing the clocks in the data
centre at 01:00 on Sunday morning? It just doesn't make sense. Why on earth
wouldn't the time change be automated? After all, it is in just about every
other computer in the world. Did you have to change the time on your PC this
weekend?
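Indeed, the transition is just timezone data: any system that stores times in
UTC and converts on display crosses the boundary automatically.  A small
sketch using Python's zoneinfo database, around the actual UK change of
27 Mar 2005 (01:00 GMT jumping to 02:00 BST):

```python
# UTC-to-local conversion across the UK spring-forward boundary.
# No human touches any clock; the tz database supplies the rule.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

london = ZoneInfo("Europe/London")
before = datetime(2005, 3, 27, 0, 59, tzinfo=timezone.utc).astimezone(london)
after = datetime(2005, 3, 27, 1, 0, tzinfo=timezone.utc).astimezone(london)
print(before.strftime("%H:%M %Z"))  # 00:59 GMT
print(after.strftime("%H:%M %Z"))   # 02:00 BST
```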

And in fact, Barclays say that it was a hardware fault, and not related to
the time change at all. This is much more plausible, and is what I heard a
Barclays person say on the radio. But if it's true, where did the story of
the error-prone manager come from? The Telegraph said that they had it from
customer services staff.

I imagine it happened something like this: The ATMs go down. (And, it
appears, the online banking too). Calls pile into the call centre. Nobody at
the call centre knows what the problem is. (And why should they know?  They
are not omniscient, and these things often take time to track down.)  They
are talking to each other about what is going on. Someone says that it must
be something to do with the clocks changing, as that's something that
doesn't happen every day. And someone else says "Yeah, I bet that's it. Some
stupid person changed them in the wrong direction!" And before you know
where you are, an off the cuff remark (probably made in jest) has spread
around the call centre and becomes the official version.

People are very unwilling to believe in coincidences. They also have mental
models of how things work. And surprisingly often, those mental models boil
down to a little man in the box (or, in this case, in the data centre). So
when the journalists were told that the problem arose because a person made
a mistake, they didn't stop to think about whether the story really made
sense.

Louise Pryor <pryor@pobox.com>  www.louisepryor.com


Re: Why IE is insecure: flawed logical thinking... (DeForest, R 23 81)

<"Taylor, Steve" <Steve.Taylor@assetco.com>>
Wed, 30 Mar 2005 10:04:26 +0100

Craig DeForest has quite correctly raised the issue of logical flaws in the
argument presented by Dave Massy (head developer of Internet Explorer).
However, the key thing that I read in the argument is that Dave Massy is not
interested in whether IE or Mozilla is more secure; he is simply presenting
`rhetoric' in an effort to win the argument. This is a classic situation for
not getting at the truth. It is common in this sort of situation that both
sides are so preoccupied in winning the argument that the truth becomes
irrelevant, after all, rising higher in any organisation is often more about
winning arguments than getting at the truth.

The sorriest aspect of this is the clear implication that Dave Massy is not
interested in whether IE is secure; he is only interested in its reputation.
This matches Microsoft's traditional behaviour of addressing perception
rather than reality.

This is one of the most serious human risk factors on any project.

Steve Taylor, Technical Director, AssetCo Data Solutions


Re: Why IE is insecure: flawed logical thinking... (DeForest, R 23 81)

<Simon Zuckerbraun <szucker@sst-pr-1.com>>
Fri, 01 Apr 2005 13:24:46 -0600

Dave Massy never made the colossal mistakes you think he made. All Dave
Massy was saying is that IE accesses the Windows operating system through the
same interface that Mozilla does. Therefore a misbehavior of Mozilla has the
potential to cause the same amount of damage as a misbehavior of IE has the
potential to cause. This would not be the case if, for example, IE were
embedded in the Windows kernel, or otherwise had special access to
privileged APIs. In that case, IE could cause *far more* damage than a
third-party browser could, and this would indeed be a poor security
configuration.

People may be led to believe that the latter situation is actually the case,
due to the fact that IE is called "part of the Windows OS". Dave Massy wrote
to clarify this matter. The truth is that all that the statement "IE is part
of the Windows OS" is meant to imply is that IE is installed automatically
on every Windows system, and developers writing for the Windows platform may
rely on IE's presence if they so choose.


Re: Why IE is insecure: flawed logical thinking... (DeForest, R 23 81)

<Craig DeForest <zowie@euterpe.boulder.swri.edu>>
Fri, 01 Apr 2005 12:55:20 -0700

Simon Zuckerbraun wrote:
> All Dave Massy was saying is that IE accesses the Windows operating
> system through the same interface that Mozilla does. Therefore a
> misbehavior of Mozilla has the potential to cause the same amount of
> damage as a misbehavior of IE has the potential to cause.

Hmmm... I agree that he made that point among others, but he appears to be
saying much more than that.  It is worth excerpting Dave's blog here, to see
exactly how he responds to Mitchell's claims about why Firefox might be more
secure than IE.

> [...]

We could spend a long time deconstructing exactly what each of the authors
believes and/or says about IE and Firefox; but I find it hard to understand
Massy's meaning without including the fallacious argument I mentioned
earlier, or (perhaps worse) assuming that he is being disingenuous.  Not
being an OS facility is a significant advantage to Firefox, even if only
because the Firefox code does not need to have as many entry points.


Re: Remote physical device fingerprinting (Ross, RISKS-23.82)

<Jerry Leichter <jerroldleichter@mac.com>>
Wed, 30 Mar 2005 10:06:12 -0500

David Ross responds to the article by Roth in RISKS-23.80 referring to
Broido and Claffy's work on identifying physical computers by their clock
skew (www.cse.ucsd.edu/users/tkohno/papers/PDF/KoBrCl05PDF-lowres.pdf).
In the grand Internet tradition of attacking work without reading it (well,
I suppose the tradition is much older than the Internet...) he claims this
is easily defeated by synchronizing with multiple NTP servers, perhaps more
frequently than usual.

Quoting from the abstract of the paper:

> Further, one can apply our passive and semi-passive techniques when the
> fingerprinted device is behind a NAT or firewall, and also when the
> device's system time is maintained via NTP or SNTP.

The details are discussed in the paper.  (Basically, one measures the skew
over multiple short intervals - intervals in the sub-second range.  I won't
go into details because this is a good paper and worth reading.)
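For intuition: skew estimation reduces to fitting the slope of a remote
timestamp source against local time, and (as the paper discusses) it can
survive NTP because on many systems NTP disciplines the system clock, not the
TCP timestamp clock being measured.  A toy sketch with synthetic data -- the
100 Hz tick rate and 50 ppm skew here are invented for illustration, not
taken from the paper:

```python
# Illustrative sketch (not the paper's exact method): estimate a remote
# clock's skew by least-squares slope fitting of timestamp pairs.
def estimate_skew_ppm(samples, nominal_hz=100.0):
    """samples: list of (local_time_s, remote_ticks) pairs.
    Returns estimated skew in parts per million vs the nominal tick rate."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_r = sum(r for _, r in samples) / n
    num = sum((t - mean_t) * (r - mean_r) for t, r in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den                      # remote ticks per local second
    return (slope / nominal_hz - 1.0) * 1e6

# Synthetic remote clock running 50 ppm fast, sampled every 10 s for 10 min:
data = [(t, 100.0 * t * (1 + 50e-6)) for t in range(0, 600, 10)]
print(round(estimate_skew_ppm(data), 1))  # 50.0
```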


Re: Cruise Control failures (Brown, RISKS-23.82)

<"Jay R. Ashworth" <jra@baylink.com>>
Mon, 4 Apr 2005 21:33:07 -0400

> ... anyone who wants to get on TV can just call and say their Renault's
> cruise control blocked; it's "another claimed incident", and why should
> anyone check if it really happened, if it makes a good story ?

Exactly.  This is the same reason, you'll recall, that the Audi 5000 was
taken off the US market: driver error that the driver didn't want to take
responsibility for.  The assertion that the car suddenly took off by itself
was later discredited by the NHTSA, as reported in the book _Galileo's
Revenge: Junk Science in the Courtroom_, but that didn't stop the incident
from costing Audi and the remains of the car industry in the US about $150M,
installing accelerator interlocks.

House (MD) has it right: everybody lies.

Jay R. Ashworth <jra@baylink.com>, Ashworth & Associates, St Petersburg FL USA
http://baylink.pitas.com  +1 727 647 1274


Re: Cruise-control failures? (Scheidt, RISKS-23.81)

<John Sawyer <jpgsawyer@btopenworld.com>>
Wed, 30 Mar 2005 08:58:57 +0100 (BST)

In response to the article about Cruise-control failures in RISKS-23.81, my
father (a braking system engineer for over 30 years) wrote the following.
Dr John Sawyer

Well, all the ABS systems I know have their own microprocessor.  Of course
that does not mean a Renault has!

Also, ABS systems do nothing unless a wheel is detected locking, i.e., no
fluid flow to the brakes is closed off.  Generally, when they do activate,
they do not shut off the brakes but dump fluid, which would tend to make the
pedal sink.  That is why the brake pedal tends to pulse during an ABS stop.
This is not always the case, as there are systems that isolate the apply
system to stop the brake pedal pulsing.  Not sure what is on a Renault.  But
anyway, in this case the ABS would not be active, so it should have no
effect.

However if the cruise control for some reason does not disengage, the brakes
could feel ineffective as the brakes fight the engine as the cruise control
tries to maintain speed!  The brakes would win but it would give you a
fright!

On microprocessors: generally the systems are designed with multiple check
systems, and any fault results in a shut-down, reverting the vehicle to a
limp-home mode or a complete shut-down.  ABS systems become inoperative such
that the brakes operate normally but have no way of stopping them locking
up.  Brakes are still hydraulic and do not use microprocessors to make them
work (yet, anyway!), only to stop them locking!  This is what is making
people nervous about going to electrically operated brakes!  I am not aware
of a complete electrically operated brake system going into production as yet.

Patrick Sawyer
(Former Chief Engineer - Braking Systems for a Major Brake Manufacturer)


Re: Cruise-control failures?

<Neil Maller <neil.maller@gte.net>>
Wed, 30 Mar 2005 14:47:26 -0500

Nick Brown points out (RISKS-23.82) that typical brake designs provide
substantially more stopping force than the engine can provide propulsive
force.  This is invariably so: in the case of my own car the brakes are
roughly equivalent to 1000 hp, more than four times the power of the engine.

However Mark Brader suggests possible loss of power braking due to the
ignition being off. That's not how it works: brake power assist is provided
from engine vacuum, or rarely by a hydraulic pump. In either case a vacuum
or high pressure reservoir provides more than enough power assist to stop
the vehicle, even from high speed, without the engine running. Ray Todd
Stevens suggests that the braking system's thermal capacity could be
exceeded, causing brake fluid to boil and braking effectiveness to be lost.

It's possible to imagine a simultaneous failure condition which would result
in a driver's inability to stop the vehicle. First a failure in the cruise
control itself or the drive-by-wire throttle results in a WOT
(wide-open-throttle) condition. Then the driver brakes, but insufficiently
to overcome the engine, resulting in excessive brake heating, boiled brake
fluid and resultant complete loss of braking power. And because little
engine vacuum is developed at WOT it's also possible that prolonged brake
application might exhaust the vacuum reservoir and cause total failure of
brake power assist.

Ray Todd Stevens also said that "This [overheating] is a problem in race
cars and they use special brake bads because of this." Speaking as one who
does drive cars on race tracks I must point out that we use special brake
*pads* in order to avoid those brake *bads.*

However I'm not volunteering to put either of the theories to the test!


Re: Cruise-control failures?

<Markus Peuhkuri <puhuri@iki.fi>>
Thu, 31 Mar 2005 09:04:38 +0300

I think it is time to put some real figures into the discussion.  As Nick
Brown stated, the force exerted by the braking system exceeds that delivered
by the motor.  A simple calculation:

Mass of car: 1500 kg.  Time to stop from 100 km/h: 3 s.  Power dissipated by
the brakes: P = 1/2 m v^2 / t = 1/2 * 1500 * 27.8^2 / 3 = 193 kW.  Power
output from a 2.0-litre engine at 3000 r/min: less than 100 kW.

Somebody better versed in mechanical engineering may correct me, but based
on the figures above, I would say that it takes less than 6 seconds to stop
a runaway car using the brakes, without yet causing serious heat problems.
Even if the motor gives no support for braking, one can apply a force of
more than one's own weight to the brake pedal.  Also, the braking power is
underestimated, because the 3-second time-to-stop is limited by the tyres,
not by the brakes, on modern cars.  Also, there should be at least two
independent braking circuits.  I was not able to find the current car
approval rules, but as far as I know, at least the steering MUST have a
mechanical connection from the steering wheel to the wheels.

This leaves us with two possibilities: either something interfered with the
braking system (ABS, ESP), or it was plain user error or action.
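  [The arithmetic above is easy to check mechanically.  A small sketch, using
  the same assumed figures as the text:

```python
# Average power dissipated bringing a 1500 kg car from 100 km/h
# to rest in 3 seconds -- the calculation quoted above.
mass_kg = 1500.0
v_ms = 100 / 3.6          # 100 km/h is roughly 27.8 m/s
t_stop_s = 3.0

kinetic_j = 0.5 * mass_kg * v_ms ** 2    # kinetic energy to shed
power_w = kinetic_j / t_stop_s           # average braking power
print(round(power_w / 1000))  # 193 (kW) -- roughly double a 2.0-litre engine
```
  PGN-ed]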


Re: Cruise Control Failures (Stevens, RISKS-23.8x)

<dbell@zhochaka.demon.co.uk ("David G. Bell")>
Wed, 30 Mar 2005 08:57:37 +0100 (BST)

My guess is that the indirect control of power to the engine ignition and
fuel systems is a side effect of anti-theft systems.

But some effective emergency-stop override of the engine control systems
ought to be there.

Trouble is, another anti-theft feature is that removing the vehicle key from
the main switch will mechanically lock the steering, even though it does cut
all the electrical power.

Race-prepared vehicles do have battery isolators, placed for easy operation
by the marshals when a vehicle goes off the track.  Unfortunately, some
early engine control computer systems on cars lost key data when they lost
power, even if only for a few seconds.

Unintended consequences strike again.


Re: Cruise-control failures

<"Amos Shapir" <amos083@hotmail.com>>
Sat, 02 Apr 2005 13:48:47 +0300

Back in 1991, I used to own a Renault Clio.  One day, the cabin ventilation
fan got stuck in the "on" state, not turning off even when the ignition key
was out.  In the garage, a mechanic checked it, then went off to the store
room and came back with a HUGE box: there is no fan any more, only a
"climate control system", which includes a bellows, a fan, its motor,
dashboard switches and an electronics card, and costs about $300 to replace.

The mechanics liked the idea of just replacing the unit by unscrewing 5
screws in less than two minutes, instead of searching for crossed wires
somewhere in the system; Renault liked selling it; I certainly did not like
it and never owned a Renault since.  It seems that now there is no way to
escape this forced computerization at any price (which we the buyers must
pay).


Re: Cruise Control failures

<David R Brooks <davebXXX@iinet.net.au>>
Sat, 02 Apr 2005 21:47:29 +0800

I work on engine-control computers for buses. We are required to have power
for the fuel injectors & for the ignition (these are natural-gas fueled
engines) run through the ignition switch. That way, the driver can turn off
the switch (not, of course, far enough to lock the steering), and the engine
is twice dead: no fuel, no spark.  The brakes on these are not computerised.
I am surprised they aren't required to build cars similarly. Methinks I
shall try to buy used cars rather than new ones, now.


New Security Paradigms Workshop submission deadline approaching

<George Robert Blakley III <blakley@us.ibm.com>>
Wed, 6 Apr 2005 11:41:11 -0500

We're accepting papers for this year's ACSA New Security Paradigms Workshop
for another two weeks.

The CFP and a link to the mail alias for submissions can be found here:
  http://www.nspw.org/current/cfp.shtml

Bob Blakley, Chief Scientist, Security and Privacy, IBM
blakley@us.ibm.com  +1 512 286-2240  fax: +1 512 286-2057

  [This is a rather small but important security workshop.  PGN]
