The RISKS Digest
Volume 26 Issue 31

Friday, 21st January 2011

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



UK Health Service IT
Martyn Thomas
Android Trojan horses
Robert Schaefer
Israel Tests on Worm Called Crucial in Iran Nuclear Delay
NYT via Monty Solomon
Cyberwar countermeasures a waste of money, says report
New Scientist via Lauren Weinstein
Can Your Camera Phone Turn You Into a Pirate?
Nick Bilton via Monty Solomon
Windows phone 7 phantom data blamed on an unnamed third party service
Robert Schaefer
Carbon Trading Halted After Hack Of Exchange
Robert Schaefer
More misadventures on Facebook
Gene Wirchenko
IBM Computer Gets a Buzz on for Charity Jeopardy!
Jim Fitzgerald
Re: Jackpot: Bug or Feature?
Steven Bellovin
Ed Mirmak
Re: Caveman: Using the cloud to break passwords
Amos Shapir
Re: The dangers of GPS/GNSS
Erling Kristiansen
No need for privacy when nobody's interested
Re: Hard Drive woes
Dimitri Maziuk
Re: Health information technology risks
Robert L Wears
What Risks really represents: robustness vs. brittleness
Paul Robinson
Info on RISKS (comp.risks)

UK Health Service IT

Martyn Thomas <>
Wed, 19 Jan 2011 17:37:51 +0000

Journalist Tony Collins has published a letter from UK Member of Parliament
Richard Bacon to the Government Minister responsible for the UK National
Heath Service. It catalogues a series of issues that illustrate some of the
risks of implementing IT systems to support patient care across a
significant number and variety of hospitals.

Background to the UK National Programme can be found at

Android Trojan horses

Robert Schaefer <>
Thu, 20 Jan 2011 11:55:40 -0500

For the "just when you thought it couldn't get worse" department:

"A team of security researchers has created a proof-of-concept Trojan for
Android handsets that is capable of listening out for credit card numbers -
typed or spoken - and relaying them back to the application's creator."

Next thing, I wouldn't be surprised to hear of exploits for iRobot's

Robert Schaefer, Atmospheric Sciences Group, MIT Haystack Observatory
Westford MA 01886 781-981-5767

  [Notes from Webster:
    ANDROID: late Greek, manlike; a mobile robot usually with a human form.
    TROJAN HORSE: from the large hollow wooden horse filled with Greek
      soldiers and introduced within the walls of Troy by a stratagem.
  Beware of Greek-Bearing Handheld Metaphors.
  They may engage in Spartanogenesis. PGN]

Israel Tests on Worm Called Crucial in Iran Nuclear Delay (NYT)

Monty Solomon <>
Sat, 15 Jan 2011 22:54:10 -0500

William J. Broad, John Markoff and David E. Sanger, *The New York Times*,
15 Jan 2011

The Dimona complex in the Negev desert is famous as the heavily guarded
heart of Israel's never-acknowledged nuclear arms program, where neat rows
of factories make atomic fuel for the arsenal.

Over the past two years, according to intelligence and military experts
familiar with its operations, Dimona has taken on a new, equally secret role
- as a critical testing ground in a joint American and Israeli effort to
undermine Iran's efforts to make a bomb of its own.

Behind Dimona's barbed wire, the experts say, Israel has spun nuclear
centrifuges virtually identical to Iran's at Natanz, where Iranian
scientists are struggling to enrich uranium. They say Dimona tested the
effectiveness of the Stuxnet computer worm, a destructive program that
appears to have wiped out roughly a fifth of Iran's nuclear centrifuges and
helped delay, though not destroy, Tehran's ability to make its first
nuclear arms.

"To check out the worm, you have to know the machines," said an American
expert on nuclear intelligence. "The reason the worm has been effective is
that the Israelis tried it out."

Though American and Israeli officials refuse to talk publicly about what
goes on at Dimona, the operations there, as well as related efforts in the
United States, are among the newest and strongest clues suggesting that the
virus was designed as an American-Israeli project to sabotage the Iranian
nuclear program.

In recent days, the retiring chief of Israel's Mossad intelligence agency,
Meir Dagan, and Secretary of State Hillary Rodham Clinton separately
announced that they believed Iran's efforts had been set back by several
years. Mrs. Clinton cited American-led sanctions, which have hurt Iran's
ability to buy components and do business around the world.

The gruff Mr. Dagan, whose organization has been accused by Iran of being
behind the deaths of several Iranian scientists, told the Israeli Knesset in
recent days that Iran had run into technological difficulties that could
delay a bomb until 2015. That represented a sharp reversal from Israel's
long-held argument that Iran was on the cusp of success.

The biggest single factor in putting time on the nuclear clock appears to be
Stuxnet, the most sophisticated cyberweapon ever deployed. ...

New Scientist: Cyberwar countermeasures a waste of money, says report

Lauren Weinstein <>
Mon, 17 Jan 2011 12:02:28 -0800

 "Controversially, the OECD advises nations against adopting the Pentagon's
  idea of setting up a military division - as it has under the auspices of
  the US air force's Space Command - to fight cyber-security threats. While
  vested interests may want to see taxpayers' money spent on such ventures,
  says Sommer, the military can only defend its own networks, not the
  private-sector critical networks we all depend on for gas, water,
  electricity and banking."  (New Scientist)

Can Your Camera Phone Turn You Into a Pirate? (Nick Bilton)

Monty Solomon <>
Sat, 15 Jan 2011 22:58:03 -0500

Nick Bilton, *The New York Times*, 15 Jan 2011

My wife and I sat cross-legged on the floor of a local Barnes & Noble store
recently, surrounded by several large piles of books. We were searching for
interior design ideas for a new home that we are planning to buy.
As we lobbed the books back and forth, sharing kitchen layouts and hardwood
floor textures, we snapped a dozen pictures of book pages with our
iPhones. We wanted to share them later with our contractor.
After a couple of hours of this, we placed the books back on the shelf and
went home, without buying a thing. But the digital images came home with us
in our smartphones.

Later that evening, I felt a few pangs of guilt. I asked my wife: Did we do
anything wrong? And, I wondered, had we broken any laws by photographing
those pages?  It's not as if we had destroyed anything: We didn't rip out
any pages. But if we had wheeled a copier machine into the store, you can be
sure the management would have soon wheeled us and the machine out of there.

But our smartphones really functioned as hand-held copiers. Did we indeed go
too far?

Windows phone 7 phantom data blamed on an unnamed third party service

Robert Schaefer <>
Thu, 20 Jan 2011 12:01:06 -0500

Microsoft investigates 'phantom' Windows Phone 7 data

Microsoft has told BBC News that it is investigating why some handsets
running its Windows Phone 7 software are sending and receiving "phantom
data".
    followup to:

Robert Schaefer, Atmospheric Sciences Group, MIT Haystack Observatory
Westford MA 01886 781-981-5767

Carbon Trading Halted After Hack Of Exchange

Robert Schaefer <>
Thu, 20 Jan 2011 15:55:41 -0500

"The European Commission (EC) suspended trading in carbon credits on
Wednesday after unknown hackers compromised the accounts of Czech traders
and siphoned off around $38 million, according to published reports."

Robert Schaefer, Atmospheric Sciences Group, MIT Haystack Observatory
Westford MA 01886 781-981-5767

More misadventures on Facebook

Gene Wirchenko <>
Wed, 19 Jan 2011 16:41:07 -0800

Social networking provides a growing market for sleazy business practices --
and the antidote: exposing scams as they happen.  *Infoworld*, 19 Jan 2011

IBM Computer Gets a Buzz on for Charity Jeopardy! (Jim Fitzgerald)

ACM TechNews Early Alert Service <>
Fri, 14 Jan 2011 11:30:12 -0500

Jim Fitzgerald, Associated Press (13 Jan 2011)

IBM's Watson computer beat former Jeopardy! champions Ken Jennings and Brad
Rutter in a 15-question practice round in which the hardware and software
system answered about half of the questions and got none of them wrong.
Watson, which will compete in a charity event on Jeopardy! against Jennings
and Rutter on Feb. 14-16, recently received a buzzer, the finishing touch to
a system that represents a huge step in computing power.  "Jeopardy! felt
that in order for the game to be as fair as possible, just as a human has to
physically hit a buzzer, the system also would have to do that," says IBM's
Jennifer McTighe.  Watson consists of 10 racks of IBM servers running the
Linux operating system and has 15 terabytes of random-access memory.  The
system has access to the equivalent of 200 million pages of content, and can
mimic the human ability to understand the nuances of human language, such as
puns and riddles, and answer questions in natural language.  The practice
round was the first public demonstration of the computer system.  IBM says
Watson's technology could lead to systems that can quickly diagnose medical
conditions and research legal cases, among other applications.

  [Who's on First?  Wats'on Second?
  Jeopardy Leopardy, Docked!  The Mouse ran off half-cocked.  PGN]

Re: Jackpot: Bug or Feature? (Weinstock, RISKS-26.30)

Steven Bellovin <>
Mon, 17 Jan 2011 15:00:47 -0500

> I wonder if a good defense here is that the machine was doing exactly what
  it was programmed to do and all the defendant was doing was using expert
  play to increase his chances of winning.

There are two things wrong here.

First, the defendants are not charged with hacking (18 USC 1030), they're
charged with wire fraud (18 USC 1341, I believe).  But even if they were
charged with a computer crime, I do not accept the notion that "what it was
programmed to do" is a good standard, unless and until we have computers or
compilers with a "do what I mean" instruction.  After all, by this standard
someone perpetrating a buffer overflow attack could say "the computer was
programmed to accept as machine code all bytes starting after 18235 non-zero
bytes in the input field"—and it was.  (For a long discussion of the
issue, see
-- and see my long disagreements with Orin Kerr...)

Steve Bellovin,

Re: Jackpot: Bug or Feature? (Weinstock, RISKS-26.29)

Ed Mirmak <>
Fri, 14 Jan 2011 13:28:25 -0800 (PST)

There was a strong element of social engineering in this activity.  Before
the button pushing could work, a technician had to activate the "Double Up"
feature, which was normally inactive.  The grand jury indictment explains
how the accused accomplished that.

" ... Mr. Nestor ... flaunted large amounts of cash to casino employees and
cultivated an image as a so-called 'high roller' during 14 visits to the
... casino ... Mr. Nestor played at only the most expensive slot machines in
the casino, placing wagers of between $1 and $25 per credit in an area
reserved for high-limit gamblers... . Mr. Laverde, a former Swissvale police
officer, acted as Mr. Nestor's bodyguard, flashing his police badge to
casino employees and hinting that he carried a weapon... . The men persuaded
casino technicians to alter 'soft' options on the machines, such as volume
and screen brightness controls. Such perks aren't unusual for high-rollers,
who can wager anywhere from a few hundred to thousands of dollars in one
sitting."

From the Pennsylvania grand jury indictment:

"Shortly after arrival, NESTOR inquired about the slot machine's 'Double Up'
feature to slot technician Daniel Joseph DOWNING, a Meadows Racetrack and
Casino employee. Specifically, NESTOR asked DOWNING to activate the
machine's 'Double Up' feature, as it was deactivated at the time.  ... Upon
accessing the device's programming menu, DOWNING was unable to locate the
'Double Up' feature in question. NESTOR then offered to show DOWNING the
location of the feature on the machine's programming menu.  DOWNING refused
NESTOR direct contact with the machine, but allowed NESTOR to guide him
through the menu screens. DOWNING then located the appropriate game specific
menu which enabled the 'Double Up' feature and activated it, per NESTOR's
instructions.

... his supervisor, RJ FUNKHOUSER ... then told DOWNING to disable the
'Double Up' feature, explaining to NESTOR that the Pennsylvania Gaming Board
prohibited such a change.

... DOWNING then appeared to disable the 'Double Up' feature, but neglected
to save his programming change, resulting in the activation of the 'Double
Up' feature on the slot machine. DOWNING then closed the machine and set it
up for play. The casino employees on duty at the time did not notice the
oversight and the machine was available and open for public play."

The details of the actual code and button sequence have not been published,
for obvious reasons. The machine's manufacturer, International Game
Technology, issued a product warning in July 2009.

A picture and description of the 'high roller' is here:

Re: Caveman: Using the cloud to break passwords (RISKS-26.29)

Amos Shapir <>
Tue, 18 Jan 2011 15:31:21 +0200

The idea behind most encryption algorithms is that they are "hard", which
usually means that brute force attacks require time and space resources
which are an exponential function of the key length.  But according to
Moore's Law, such resources also increase exponentially over time.
Conclusion: as long as average key length increases linearly with time, the
effort required for successful brute force attacks remains constant.
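Shapir's arithmetic can be sketched numerically.  In the toy model below,
every number is an illustrative assumption, not from the original: attacker
capacity doubles every 1.5 years (Moore's law), and each extra key bit
doubles the search space.

```python
# Toy model of the key-length vs. Moore's-law argument.  All figures
# here are illustrative assumptions, not measurements.

def brute_force_years(key_bits, years_from_now, base_keys_per_year=2**56):
    """Years needed to exhaust a key space of 2**key_bits, assuming the
    attacker's keys-tested-per-year doubles every 1.5 years."""
    capacity = base_keys_per_year * 2 ** (years_from_now / 1.5)
    return 2 ** key_bits / capacity

# Growing the key by 2 bits every 3 years (linear in time) exactly
# cancels a capacity doubling every 1.5 years, so the brute-force
# effort stays constant.
for years in (0, 15, 30):
    key_bits = 128 + 2 * (years // 3)
    print(years, key_bits, brute_force_years(key_bits, years))
```

The asymmetry is what makes this work: each added bit costs the defender
only one more bit of key to store, but doubles the attacker's search space.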

Re: The dangers of GPS/GNSS (RISKS-26.30)

Erling Kristiansen <>
Sat, 15 Jan 2011 10:27:11 +0100

I believe intentional jamming or spoofing of GNSS is relatively rare
today. Few people (possibly with the exception of some military operations)
would have a reason to do so.

I am speculating that this could change the moment somebody would have a
financial advantage. One example is the now-canceled (at least for the time
being) Dutch plans for GNSS-based road pricing.  I contend that any
application where there is an incentive to interfere with GNSS is a danger
to the GNSS service as a whole since jamming/spoofing would affect everybody
in the vicinity, not only the driver who wants to reduce his bill.

Since a jammer or spoofer would interfere not only with its owner's
receiver but also with those in cars nearby, an innocent driver could be
accused of fraud merely for having driven close to a jammer/spoofer.  The
legal system would have a hard time distinguishing the guilty from those
who just happened to be within range of the perpetrator's device.

No need for privacy when nobody's interested

Mon, 17 Jan 2011 08:07:56 +0800

All this talk here on RISKS about guarding one's privacy. I have taken the
opposite approach, ensuring the most lurid details are mere clicks away to
anybody. Lo and behold, I discovered nobody was interested in the first
place. Come and stalk me at the old folks home.

  [But don't forget the risks of identity fraud.  PGN]

Re: Hard Drive woes (Robinson, RISKS-26.30)

Dimitri Maziuk <>
Fri, 14 Jan 2011 16:56:36 -0600

A quick google would tell you that it's most likely a mechanical problem,
with videos and a suggestion to freeze the drive.

Typical failures for a hard drive that was knocked over while spinning are
  - heads crashing into the platters, and
  - main spindle/motor/bearing failure thanks to angular momentum
    (think of the spinning platters as a gyroscope).

It takes a lot to actually break the platters; the worst you can expect is
the heads scratching the surface during the crash and making a file or
three unreadable.

Data recovery involves pulling out the actual platters, putting them on a
working spindle, with working heads, and reading the data off them. It
requires some serious gear and expertise, that's what $1100 buys you.

E.g., the heads of a working drive fly on a layer of air only a few
molecules thick that rotates together with the platter—obviously, a dust
mote getting in there would be a serious problem. So when they say "clean
room" they aren't kidding: they mean really very clean.

Freezing the drive sometimes works, but I've never seen it work long enough
to recover 4,000 files, nor 14,000. I've recovered the directory structure
and maybe a handful of files once or twice—but then it warms up and
seizes again.

Dimitri Maziuk, BioMagResBank, UW-Madison—

Re: Health information technology risks (Kenzo, RISKS-26.30)

"Robert L Wears, MD, MS" <>
Sat, 15 Jan 2011 13:46:36 -0500

> I consider this an example of one of the primary technology ... risks

This point highlights one of the contradictory notions in the current rush
to insert IT into healthcare—that IT will somehow catalyze the process of
organizational performance improvement.  But embedding work processes in the
concrete (or, more optimistically, the molasses) of rigid IT systems can
only impede the rapid cycle change efforts that its advocates envision.

Robert L Wears, MD, MS, University of Florida 1-904-244-4405
Imperial College London +44 (0)791 015 2219

What Risks really represents: robustness vs. brittleness

Paul Robinson <>
Sat, 15 Jan 2011 08:23:16 -0800 (PST)

My recent incident with inadequately backed-up data and a dropped hard drive
made me realize something.  I wasn't really stupid as much as I was
careless, the normal state of affairs of fallible human beings who maybe do
most things okay but occasionally make mistakes.  (I almost took the bait of
writing "misteaks" for grins but I rose above the temptation.)

I came to realize that we can divide the failure potential along two points
- there may be others but I'll use two for simplicity - "robustness" and
"brittleness".  A robust piece of technology takes into account human error,
and either resists failing as a result of error or degrades gracefully to
avoid or reduce injury or damage.  A brittle piece of technology fails
horribly in the event of error, crashes or causes damage as a result, and
sometimes makes the results worse than if we weren't using technology at
all.

Consider the following scenario: You're coming up on an all-way stop at a
not-very-busy corner; in fact, the intersection has only just been
converted from a stop for the side street only to an all-way stop.  You
aren't paying attention and you blow the stop, going through it at, say,
25 or 30 miles per hour, the speed limit.  What happens?

Usually, nothing.  That's the point.  In the ordinary operation of the real
world, I would guess most mistakes are harmless.  The world in general
tends to be robust, and many, if not most, mistakes - by "most" I mean 51%
or higher - probably happen without consequence.  (A few years ago I
mentioned exactly this scenario, which occurred regularly at an
intersection where I lived, to an associate of mine: even though warnings
had been posted for weeks before the stop signs were added and the
intersection had permanent advance-notice signs installed, people in front
of me were routinely not even slowing down at the new stop signs.  I had
to love his response: Call the county and tell them the stop signs are not
working - they must be broken!)

What we as RISKS readers see, and what the participants in and victims of
technology are learning, is that technology is often not robust, and that
mistakes falling on brittle technology can be deadly.  Technology is too
often brittle, requires high competence and high attention, and its
failures are often more severe.

When we see incidents where technology failure results in good outcomes,
it's almost certainly because someone added in or designed in methods to
increase robustness and reduce brittleness.  Which is more expensive, an
application that crashes on bad input or one that validates input?  Well,
the one that crashes is simpler to make, highly brittle, cheap and not
robust.  An application that catches errors is more robust, less brittle,
takes longer to do and is more costly. (The "more costly" might be in the
negligible cost range but it is still more.)

Sometimes brittle and fast is okay; if you're writing a quick-and-dirty
converter for calculating date differences, sanitizing and validating input
isn't important, you can just restart the program if it crashes.  Sometimes
it isn't; if you're writing the converter for calculating the shield and
radiation levels on a cancer treatment machine, failure results in serious
injury or death.
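The contrast can be made concrete with a small sketch (the function names
and behavior below are my own illustration, not from the original): the
brittle version crashes on malformed input, while the robust version
validates and degrades gracefully.

```python
from datetime import date

def days_between_brittle(a, b):
    # Brittle: assumes well-formed ISO 'YYYY-MM-DD' strings; any typo
    # raises an uncaught exception and the program crashes.
    return abs((date.fromisoformat(b) - date.fromisoformat(a)).days)

def days_between_robust(a, b):
    # Robust: catches bad input and returns a defined failure value
    # instead of crashing.
    try:
        return abs((date.fromisoformat(b) - date.fromisoformat(a)).days)
    except (TypeError, ValueError):
        return None

print(days_between_brittle("2011-01-01", "2011-01-21"))  # 20
print(days_between_robust("2011-01-01", "not a date"))   # None
```

The robust version costs a few extra lines - exactly the point above:
catching errors takes slightly longer to write and is marginally more
costly, but the failure mode is benign.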

And what I think we are seeing is that the "infusion" of technology into so
many aspects of our lives, with inadequate robustness, is causing a lot of
brittleness-induced disasters as the brittle technology causes worse results
when it fails.  And this may be the real point of the Risks Digest even if
we didn't realize it: I suspect nearly every disastrous failure is because
the technology lacks adequate robustness to allow it to reduce the effects
of failure, the way most real-world non-technological failures usually
either cause no significant problem or only minor problems.

Spill hot coffee on your arm in a kitchen, if you immediately get it off and
apply ice you might only get a minor burn or you might get a serious one.
But eventually your cells will heal themselves.  Spill even cold coffee on
an airliner control panel and 300 people might die.  Spill coffee on a regular
keyboard and you have to throw it away.  Spill coffee on a spill-resistant
keyboard, you wipe it off and it's no big deal.  Increased robustness
reduces risk; increased brittleness increases risk.

Cars are a good example.  We don't expect them to work after high-speed
collisions, e.g. where the combined impact speeds exceed 50, 60 or 100 or
more miles an hour.  But we do provide increased robustness for occupant
survivability through seat belts, laminated window glass, airbags, etc.  And
of course, every one of those safety features is an increased cost.  But
that cost is acceptable because the alternative - the increased brittleness
of the technology, the reduced probability of survival, and the higher
probability of injury without them - is not.

Personal example: I've been involved in four serious automobile accidents in
the last thirty-five years.  One was my fault, the other three were someone
else's.  For some not-very-strange reason, every time I was in an accident
I was wearing a seat belt and suffered little to no injury.  I'm not psychic, of
course; when I first got my license at 17, I put on my seat belt.  For the
rest of my life, unless the belt won't fit because I can't buckle it and I
can't get a seat-belt extender, I've always worn a seat belt.  Most of the
time I don't need it, most trips end uneventfully.  But I'm not willing to
presume nothing will happen, instead I simply protect myself and go about my
business.  If, as in most cases, nothing happens, I "wasted" five seconds;
if something does happen, I've saved myself from potential injury or death.

In short, I have not presumed that the trip will be free of failures due to
error-enhanced brittleness, I use pre-planning to avoid the catastrophic
failures caused by brittleness and error and thus increase robustness.  But
the use of a seat belt does add minor costs; I have to take a few seconds to
use it; if a vehicle I regularly use has a seat belt that's not long enough,
I have to take time to stop at a dealer, order an extender, then go back in
a couple days and pick it up (seat belt extenders are free, by the way, but
the dealer may have to order it for you if their parts department doesn't
have it on hand).  But I pay that minor cost of always using a seat belt
because it's much less expensive than the alternative even though I only
really need it maybe 0.0001% of the time.  I just never know which of the
times will be the ones I need it, so I just pay that tiny little cost every
time and then reap the benefits the few times when I do.

The next time you hear about some disaster, in a news report or in RISKS,
think about this: what is the likelihood that the incident occurred because
something lacked adequate real-world robustness to allow operation under
the usual and customary conditions it could reasonably be expected to
experience?  I could be wrong - I've been wrong many times; that's how
anyone learns, by trial and error - but I suspect nearly all disasters are
the result of inadequate robustness to withstand the normal and expected
regular or routine uses of that technology in real-world conditions.