The RISKS Digest
Volume 24 Issue 40

Tuesday, 29th August 2006

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Russian ATM software error
Morten Krog
Still more over-reliance on satellite navigation
Antonomasia
Silliness in Action: California Poised for Cell Phone Ban
Lauren Weinstein
The risks of your ISP putting ads in your signature
Neil Youngman
Re: LA power outages
Stephen Fairfax
Be careful WHAT you test
Name withheld
Re: LA power outages
Rex Black
Kent Borg
Re: Dell Battery Recall — Checksums & Balances
S Miller
Chris D.
Re: Pull the Plug on Touch Screens
Sharon Mech
Re: Ambiguous Characters
David Bliss
Re: The SAFEE Project
Stewart Fist
Security Engineering
Ross Anderson
Info on RISKS (comp.risks)

Russian ATM software error

<morten.krog@no.ey.com>
Mon, 28 Aug 2006 12:17:41 +0200

A few days ago in Ekaterinburg, a city in the Ural region of Russia, a man
deposited 2000 rubles ($74 USD) in an ATM.  That sounds ordinary so far;
however, the ATM credited his account with 2 billion rubles (yes, *billion*,
with a B).  When he informs the bank of the error, the clerk responds that he
doesn't care; he has other things to do!

He then proceeds to withdraw cash from his account, occasionally depositing
more, until he ends up with 20 billion rubles (about $740 million USD).  The
cash he has withdrawn is packed into shoe boxes, and he returns to the bank
to show them the result of the ATM problem.  The clerks are now shocked into
action by what has happened, and all of the bank's ATMs are turned off.  No
word yet on when they will be back up.

(News item from http://englishrussia.com/?p=249 )

Poorly tested software and uninterested employees can make for a potent
risk.

Morten Krog, Senior IT accountant, Ernst & Young AS, Oslo Atrium, Christian
Fredriks plass 6, NO-0154 OSLO PO Box 20, NO-0051 OSLO NORWAY +47 24 00 20 55


Still more over-reliance on satellite navigation

<ant@notatla.org.uk (Antonomasia)>
Fri, 25 Aug 2006 22:15:31 +0100

  A coachload of pensioners was stranded for four hours after following
  satellite navigation that led them down Rosemary Lane, trying to get from
  Coleford to the A48 in Lydney, Gloucestershire.  The coach became totally
  wedged in the surrounding and overhanging brush [hedged in!], and had to
  be towed out [as opposed to "toed in"].  Expecting a nice lunch in a
  country pub, they wound up having tea with a local family.  [Source:
  *Daily Mail*, 25 Aug 2006; PGN-ed.  PGN notes that Shakespeare might have
  called this "the primrose path of <d>alliance treads", with so many
  drivers having faith in systems whose sat-nav purveyors all seem to be in
  cahoots: this particular error has caused previous episodes over the past
  two years, and has yet to be fixed.  And it is not the only such error.
  The *Daily Mail* article notes, "Hapless drivers with blind faith in the
  gadget's ability to get them from A to B have also been directed straight
  into the river Avon in the Wiltshire village of Luckington."]
http://www.dailymail.co.uk/pages/live/articles/news/news.html?in_article_id=402282&in_page_id=1770

The phrase "sent on an expected tour" may be a mistake, or the cynicism of a
risks-reading journo (see RISKS-23.51 and 24.29).


Silliness in Action: California Poised for Cell Phone Ban

<Lauren Weinstein <lauren@vortex.com>>
Sat, 26 Aug 2006 20:27:15 -0700

As you know, I frequently speak out against what I view as silly laws that
fly in the face of logic, science, or just plainly observable facts.

In yet another proof that reality and politics often don't mix, lawmakers
here in California are poised (after many years of refusing to go along with
the bill's main sponsor) to approve a ban on handheld cell phones when
driving.  This may happen as soon as next week.  You can count on Arnold,
desperate for popular actions he can take so close to election day, to sign
the bill.

All of us have been annoyed by the gabbing cell phone user who seems to be
driving oblivious to everything around them.  So without a doubt this law
will have wide appeal.  And if experience in other states holds, the law
will have little or no long-term positive safety effects, and handheld cell
phone use will quickly rise back to pre-law levels after a brief initial
reduction.

The reasons are obvious.  Study after study shows that distracted driving of
*any kind* is a key factor in accidents.  While someone holding a cell phone
clamped to their ear is easy to spot, we're less aware of the radio
manipulators, people screaming at their children in the back seat, makeup
applicators, food eaters, and myriad other distracted
drivers.  In fact, studies have shown that the most common distractions
leading to accidents when driving are other people inside the vehicle or
things seen outside the vehicle.

Even worse, research shows quite clearly that talking on a hands-free cell
phone (still permitted under the bill) is just as distracting as using a
handheld device.  It's the remote conversation itself that is the real
distraction, not the act of holding the phone — plus there are all the
situations where people fumble around to answer or dial a call even on a
hands-free cell phone.

When proponents of this legislation are presented with these inconvenient
facts, they tend to reply with, "Oh well, at least we're doing something..."

"Something" isn't good enough when it's based on bad science.  If you really
want to remove cell phones as a distraction, you need to ban them totally
when driving — handheld or hands-free, as has been done in some other
countries.  I'm not advocating this, nor do I think that politicians here
have the guts for such actions anyway.  In fact, banning children from cars
might be far more effective in terms of reducing accidents, however unlikely
the prospect.

To a certain extent this law will be a paper tiger.  Major California cities
don't have enough police to deal with serious crime, much less to pull over
people for illegal cell phone use.  And the bill's penalties — $20 for a
first offense, $50 for subsequent ones — will hardly be seen as an onerous
burden
by most drivers in an era of $3+ gasoline.

But this law itself is still primarily pandering to voters in a manner that
flies in the face of science.  Perhaps laws officially recognizing astrology
will be next here in the Golden State.

Lauren Weinstein +1 (818) 225-2800 http://www.pfir.org/lauren
http://www.pfir.org http://www.ioic.net DayThink: http://daythink.vortex.com


The risks of your ISP putting ads in your signature

<Neil Youngman <ny@youngman.org.uk>>
Wed, 16 Aug 2006 14:10:26 +0100

I was trying to figure out why a perfectly legitimate e-mail had got a really
high SpamAssassin score, triggering a lot of rules relating to erectile
drugs.  Then I realised that the problem was in a signature line, probably
added by Yahoo UK:

  All New Yahoo! Mail:  Tired of Vi@gr@! come-ons?
  Let our SpamGuard protect you.

I wonder how many potentially important e-mails are silently being dropped
into the bit bucket because of this ad?
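
For readers who have not looked inside a content filter, here is a minimal
sketch of keyword-style scoring in Python.  The rule names, patterns, and
scores below are invented for illustration and are not SpamAssassin's actual
rules; the point is simply that a single appended ad line can push an
otherwise innocuous message over a threshold.

  import re

  # Toy rules in the spirit of keyword-based spam scoring.  These rule names,
  # patterns, and weights are invented; real SpamAssassin rules differ.
  TOY_RULES = {
      "OBFUSCATED_DRUG_NAME": (re.compile(r"v.?[i1].?[a@].?g.?r.?[a@]", re.I), 2.5),
      "EXCITED_PUNCTUATION":  (re.compile(r"!"), 0.5),
      "SPAMGUARD_PITCH":      (re.compile(r"protect you", re.I), 0.3),
  }

  def score(message: str) -> float:
      """Sum the weights of all toy rules that match the message."""
      return sum(w for pattern, w in TOY_RULES.values() if pattern.search(message))

  body = "Meeting moved to 3pm; agenda attached."
  ad = ("All New Yahoo! Mail: Tired of Vi@gr@! come-ons? "
        "Let our SpamGuard protect you.")

  print(score(body))                  # 0.0 -- the message itself is clean
  print(score(body + "\n--\n" + ad))  # the appended ad alone adds ~3.3 points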


Re: LA power outages (RISKS-24.37,38,39)

<Stephen Fairfax <fairfax@mtechnology.net>>
Thu, 24 Aug 2006 17:38:07 -0400

Providing massive redundancy to corporate and co-location data centers isn't
hard, and is common.  Redundancy typically ranges from "N+1" (the required
number of components plus an installed spare) to "2N+2" where two entirely
separate systems are installed, each with installed spares.

Redundancy is easy; it just involves spraying lots of money at the problem.
What is hard is achieving high reliability.  The sad fact is that the
correlation between massive redundancy and reliability is poor at best and
sometimes negative when real-world issues of complexity, operator errors,
and common cause failures are considered.
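
As a back-of-the-envelope illustration of that point (a sketch only, with
invented failure probabilities, not a substitute for the detailed analysis
argued for below), consider a generator array in which each unit
independently fails to start on demand with some probability, plus a small
common-cause failure (an unpowered fuel transfer pump, say) that defeats
every unit at once:

  from math import comb

  def p_array_fails(n_required: int, n_installed: int,
                    p_unit_fail: float, p_common_cause: float) -> float:
      """Probability the array cannot carry the load on demand.

      The array fails if a common-cause event occurs, or if (independently)
      fewer than n_required of the n_installed units start.
      """
      p_enough = sum(
          comb(n_installed, k)
          * (1 - p_unit_fail) ** k
          * p_unit_fail ** (n_installed - k)
          for k in range(n_required, n_installed + 1)
      )
      return p_common_cause + (1 - p_common_cause) * (1 - p_enough)

  # Invented numbers: each generator fails to start 2% of the time.
  print(p_array_fails(2, 2, 0.02, 0.00))  # no spare, no common cause: ~4.0e-2
  print(p_array_fails(2, 3, 0.02, 0.00))  # "N+1" spare, no common cause: ~1.2e-3
  print(p_array_fails(2, 3, 0.02, 0.01))  # "N+1" spare, 1% common cause: ~1.1e-2

The installed spare cuts the independent-failure probability by more than an
order of magnitude, but a 1% common-cause defect of the kind described below
wipes out most of that gain.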

The case described by Kent Borg, where a standby diesel was tested monthly
but failed after the utility power actually failed, is an all too common
experience in the data center world.  The problem here was not redundancy or
lack of it, but the absence of well-engineered design, surveillance, and
testing programs.

In the case described by Mr. Borg the failure was caused by lack of power to
the transfer pumps that move fuel between the day tanks and a distant main
storage tank.  This was clearly a design problem, and the fact that it was
not revealed until an actual utility outage tested the system demonstrates
that the facility design was never properly reviewed, the facility
commissioning plan was deficient, and the testing program did not
interrogate at least one, and probably many, likely failure modes.

The testing program described clearly did not involve actually removing
utility power from any significant part of the system, or the design defect
would have been revealed long ago.  This suggests many other defects that
could (and may still) cause the system to fail when called upon:

 - failure of the sensors that signal loss of utility power

 - failure of the circuits between the sensors and the generator array
   starting equipment

 - failure of the cooling and ventilation equipment to transfer to generator
   power, or to maintain acceptable generator temperatures during outages
   lasting longer than the typical 30- to 60-minute test interval

 - failure to maintain adequate fuel inventories in the main tank(s).  Many
   data center operators boast of 4-hour service contracts for fuel
   deliveries, but these prove to be unenforceable precisely when they are
   most needed, such as after a hurricane, earthquake, or flood causes
   widespread, prolonged power outages

 - failure of the UPS and other equipment supporting critical loads to
   coordinate properly with the generator array.  Particularly when one or
   more machines fail to start or run in a redundant array, operators often
   discover to their sorrow that starting current surges are far larger than
   operating currents.  It may be possible to restart large chillers when
   all generators start, but impossible to start chillers with one generator
   failed, even if the chillers would run with one generator failed.

The list could go on, but there is another point.  It may or may not be "so
hard" to test a critical system, but that is no excuse.  It is hard to
design and build a skyscraper, a commercial jet, a nuclear power plant, a
fossil power plant, or a bridge.  It is hard to test or inspect those
designs.  But our society does not tolerate frequent or even infrequent
failures in those systems.  Failures must be very rare, and in practice
almost always involve multiple elements whose sequential or
near-simultaneous failures conspire to cause a disaster.  This level of
performance is achieved with detailed engineering analysis, modeling,
testing, inspections, reporting, and continuous improvement.

Not so with data centers.  Redundancy is too often the beginning and end of
engineering for reliability.  If Boeing made airplanes the way some "2N+2"
data centers are designed, there would be 8 or 10 engines on a 747, and the
aircraft would be considerably less reliable and capable than the 4-engine
reality.

It is rare to find even an attempt at a reliability calculation supporting a
particular data center design.  Rarer still is a test plan that discusses
what faults and failures are interrogated by the test, and which ones are
neglected.  It is almost unheard of to acknowledge that testing carries
risks as well as benefits, as new defects can be introduced, such as not
restoring equipment to normal operating condition after it has been tested.
I have never encountered a data center generator test program that accounted
for the effects of wear and tear caused by the testing.

Redundancy isn't hard.  Engineering is hard.  Reliability is achieved with
good engineering. Redundancy is a small part of engineering for reliability.


Be careful WHAT you test (Re: Borg, RISKS-24.39)

<[name withheld by request]>
Thu, 24 Aug 2006

Kent Borg (RISKS-24.39) wrote of transfer pumps that were not themselves
emergency loads.

I once inspected a secure USG facility with emergency power that, amidst
other things, supposedly allowed enough time to destroy all the classified
material.  The total volume of the material was calculated, the shredders
and other devices counted, and a time to destroy it all calculated.

Destruct drills were run regularly, with a volume of scrap paper, etc...

I staged a different test by having the building main breakers pulled.

A) A key generator did not start.
B) The main transfer pump was not on the emergency grid.
C) Fully half of the shredders were not on the emergency grid, either.

RISK: Simulations are nice, but...

BUT: Note the two-edged sword.  7 WTC burned down with the help of all
the fuel stored for the NYC Emergency Operations Center within.


Re: LA power outages (RISKS-24.39)

<Rex Black <rexblack@ix.netcom.com>>
Thu, 24 Aug 2006 16:01:33 -0500

A well-known principle of good test design in software and system testing is
the concept of test fidelity (see, e.g., my book *Managing the Testing
Process*, Beizer's *Software System Testing and Quality Assurance*, etc.)
The fidelity of a test is determined by how well the test, including the
test environment, truthfully replicates the experience under real-world
conditions.  In this case, the test was low-fidelity, specifically in the
test environment: a competent tester would have designed and executed a test
procedure that replicated the failure of utility power to *everything*
powered by utility power.

To put this in terms that might be easier for some on this list to
understand, the analogy, in the security world, would be someone forgetting
to test for unnecessary services running on a Web server as part of a
penetration test.

Rex Black Consulting Services, Inc.; Pure Testing; American Software Testing
Qualifications Board; International Software Testing Qualifications Board
31520 Beck Road, Bulverde, TX 78163  1-830-438-4830 www.rexblackconsulting.com


Re: LA power outages (RISKS-24.40)

<Kent Borg <kentborg@borg.org>>
Fri, 25 Aug 2006 02:13:08 -0400

At what cost?  We can't afford to have the cure be as bad as the disease.

Unlike software (where dealing with the zillion**umpteenth permutations is
the nastiest part of the task), testing the physical world is worse because
an extensive, expensive, and not completely known physical facility cannot
be cloned for free.  Buildings et al. cost money and time.

Consider a big facility that has primary power, backup power, and a JOB TO
DO.  How do you test it?  You are in a variation of catch-22: Yes, you can
test the backup power by axing the line from the primary power (and don't
miss that secondary line you forgot existed).  But you can only afford to do
the "real world test" of axing the primary power if you already know the
backup power works--in which case you don't need to do the test.  If you
don't trust your backup system you can't afford to do the full-fledged test
because it might fail.  Remember the "job to do" stipulation.

Yes, in a small installation, one with natural down time on weekends, etc.,
testing is much easier, but big is harder than one might appreciate.  You
are stuck with testing components, checking the design over and over again,
checking the installation to see if it matches the design (the flaw in my
example), and generally going for simple, simple, simple in your system so
fewer things can go wrong.  (And, going for flexibility and smaller chunks
in your design so chunks can fail without bringing down the whole.  In my
example, having electricians on staff who can quickly rewire a pump would be
good.)

Getting backup power right for a big system (where it doesn't make your
system less reliable than no backup at all) is hard.  Once you get to
industrial scales, it is really hard.  Search the Risks archives for
evidence.


Re: Dell Battery Recall — Checksums & Balances (Blake, RISKS-24.39)

<SMiller@unimin.com>
Fri, 25 Aug 2006 09:41:03 -0400

Factoring risks is an inherent part of the impact analysis process.  I'd
like to see a little less heat and more light (pun not intended; not
withdrawn:) in discussion of the Sony battery recall.  News articles on the
Apple-Sony recall indicate reported problems (not necessarily fires) with 9
of 1.8MM batteries, for an incidence of 1 in 200,000 units.  The Dell recall
stories seem to put the incidence lower than that (apples to oranges
problem, however).  I happen to have a Latitude D410 with one of the
recalled batteries in my office.  My 30 month old granddaughter sleeps down
the hall.  Even if the actual incidence of problems is 10 times what the
Apple figures indicate, there are a whole host of risks to her safety
(household and automobile accidental injuries, choking, accidental
poisoning) that can and do concern me much more than the risk of fire from
that unit (yes, I have sent for the replacement, but I'm not forgoing
portability while I wait).  Would have been preferable for Sony to have
caught this problem in Q/C?  Certainly, but the vast majority of the 6
million odd units affected to date have been in service for months or longer
with no problems.  According the US Consumer Product Safety Commission, 339
cases of overheating lithium batteries in laptops and cellphones were
reported to that agency between 2003 and 2005.  I don't have a reliable
source for the total number of such batteries in active use, but I suspect
it to be no more than 67,800,000 (9:1.8MM=339:67.8MM) in the US.  I also
suspect that your iPod or your Razr phone is as much of an actual risk as
your Apple or Dell laptop...

  [BTW, My Mac laptop battery is one that had to be replaced, but
  the laptop runs just fine without it when plugged in.  PGN]
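
To make the arithmetic in the contribution above explicit, here is a quick
sketch; the figures are the ones quoted in the text, not independently
verified.

  # Figures as quoted above (not independently verified).
  apple_incidents = 9
  apple_batteries = 1_800_000
  reported_rate = apple_incidents / apple_batteries      # 1 in 200,000 units
  print(f"1 in {apple_batteries // apple_incidents:,}")  # -> 1 in 200,000

  # If the CPSC's 339 reported overheating cases (2003-2005) occurred at the
  # same rate, the implied number of batteries in active use would be:
  cpsc_cases = 339
  print(f"{cpsc_cases / reported_rate:,.0f}")            # -> 67,800,000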


Re: Dell Battery Recall — Checksums & Balances (RISKS-24.39)

<"Chris D." <e767pmk@yahoo.co.uk>>
Sat, 26 Aug 2006 21:06:06 +0100

Wouldn't including some sort of checksum digit(s) be simpler and more
effective?

You want an example?  In the UK, motor vehicles have to display a paper "tax
disc" sticker in the windshield to show that the appropriate Vehicle Excise
Duty (ownership tax) has been paid for 6 or 12 months; a vehicle cannot
normally be legally driven or even parked on public roads without this.  In
April 2004 the local newspaper for Blackpool featured a young guy
who tried to buy a new VED disc for his car, and was told that he couldn't
as it had been scrapped!  To cut a long story short, it turned out that
someone at the national vehicle registry had typed E instead of F for the
registration number (license plate) of a car that WAS being scrapped, which
happened to be the same make, model, and colour.  The guy was really
inconvenienced as he couldn't drive his car on the road and had to find
off-street storage for it, while using buses and taxis for transport, and he
couldn't do an out-of-use declaration either as the car officially didn't
exist!  At least the ISO 17-digit VIN does include a check digit, but it's
not much help if the vehicle records offices don't use it.
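
For reference, here is a sketch of the North American VIN check-digit
calculation (position 9 of the 17 characters).  The transliteration and
weight tables are recalled from the published algorithm, so verify them
against ISO 3779 / 49 CFR 565 before relying on this for anything real.

  # Sketch of the VIN check-digit calculation (check digit is character 9).
  TRANSLIT = {c: v for c, v in zip("ABCDEFGH", range(1, 9))}
  TRANSLIT.update({c: v for c, v in zip("JKLMN", range(1, 6))})
  TRANSLIT.update({"P": 7, "R": 9})
  TRANSLIT.update({c: v for c, v in zip("STUVWXYZ", range(2, 10))})
  TRANSLIT.update({str(d): d for d in range(10)})  # digits map to themselves

  WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

  def vin_check_digit(vin: str) -> str:
      """Return the expected check digit ('0'-'9' or 'X') for a 17-char VIN."""
      total = sum(TRANSLIT[ch] * w for ch, w in zip(vin.upper(), WEIGHTS))
      remainder = total % 11
      return "X" if remainder == 10 else str(remainder)

  def vin_is_consistent(vin: str) -> bool:
      """True if the VIN's 9th character matches its computed check digit."""
      return len(vin) == 17 and vin[8].upper() == vin_check_digit(vin)

  print(vin_is_consistent("11111111111111111"))  # True -- a standard test vector

Validating an identifier with a check digit like this catches most
single-character slips; E and F, for example, transliterate to different
values (5 and 6), so swapping them changes the expected check digit.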

Re: Dave Blake <dave.blake@tiscali.co.uk>

So you don't have smoke alarms in your family home?   Sounds rather
irresponsible, considering that there must be many other potential
sources of fire there, apart from Sony batteries.

According to today's newspaper (Aug 26th), Dell "knew of 6 cases since
December" and Apple "unearthed 9 incidents", which is serious, but is it
really attempted mass murder?   Heck, my office desktop PC blew up last
year, although there wasn't any flame, just a nasty smell and it stopped
working.   (Didn't find out what happened, the repair guy took it away
and gave me a new computer.)

Well, look at it from the companies' point-of-view: either they admit that
the batteries are dangerous, and be branded as irresponsible, or claim that
they're safe, and be branded as irresponsible, and liars too?  If ALL
batteries are replaced, how do you (or anyone) know that the new ones are
safer than the old ones?  What are the risks of making new batteries and
disposing of the old ones?  Not trying to defend anyone here, but as RISKS
readers will know, nobody has managed to repeal Murphy's Law yet.
Disclaimer: I use an ancient Sony laptop, but just as a portable computer
round the house, without a battery.

Chris Drewe, Essex County, UK.


Re: Pull the Plug on Touch Screens (RISKS-24.39)

<"Sharon Mech" <smech@ntst.com>>
Mon, 28 Aug 2006 17:02:00 -0400

I live in Franklin County (Columbus) Ohio, a locale the current issue of
Mother Jones Magazine was kind enough to accurately identify as one of the
worst places in America to try to vote in the last election. (Lines lasted
for HOURS in many demographically Democratic areas.)

My polling place is in a working class neighborhood elementary school.  We
used touch screens for the first time in the last election.

Poll workers in Ohio tend to be elderly. Some of them have been faithfully
working the polls here for as long as I have lived in the neighborhood
(almost 15 years) and they weren't young even then. They are paid a pittance
for a 13-hour day. (There's a RISK - how will we hold an election in five or
ten or fifteen years, with fewer and fewer poll workers able or willing to
make the sacrifice so we all can vote?)

The workers at my polling place had received some training, but they said
people were having trouble using the machines, and they couldn't figure out
why. It was a cool day. The heat was on, and the air was very dry in the
building. The machine didn't respond when I first tried to vote
either. Calling on my vast store of knowledge acquired by using
self-check-out registers at an increasing number of stores, I exhaled on my
finger, tried again, and it worked fine.

I explained this high-tech solution to the poll workers, who were grateful.

There has been a lot of learned discussion about why these new machines may
not be all that they are cracked up to be technically (security by
obscurity, rampant politics, etc.), and I find it all deeply disturbing.  I
haven't heard much discussion about whether the training poll workers are
receiving on these machines is adequate - what's the risk if machines
malfunction, and nobody knows how to troubleshoot? Has the training itself
been tested and normed on an audience of 70-year-olds?  What discussion I've
read about the adequacy of "help desk" availability on Election Day has been
pretty depressing too.

I'm also concerned about the risks involved in a process that requires me to
touch a screen that's been touched by dozens of other people who may also
have had to breathe on or lick their finger before they touched it. There
aren't any handwashing requirements built into the process, and Election Day
is smack in the middle of cold and flu season. I'm not sure how often those
machines get wiped down with anything capable of killing germs...

Pack your waterless sanitizers when you go vote, folks! Me, I'm mailing my
ballot in this fall.

Sharon Mech <bear@bitwolf.com>


Re: Ambiguous Characters (Kimberley, RISKS-24.39)

<David Bliss <david@dbsi.org>>
Thu, 24 Aug 2006 12:49:11 -0700

You may be interested to know that Washington State already considers "O"
and "0" and "1" and "I" identical for purposes of vehicle license plate
issuance (and, presumably, lookup).  From
http://www.dol.wa.gov/forms/420077.pdf :

  When requesting either the number "1" or the letter "I", please specify
  which you desire. NOTE: Because of the similarity of letters and numbers,
  1's and I's and O's and 0's are considered the same. EXAMPLE: COOL (with
  letters) and C00L (with numbers) are both considered the same word. We
  cannot issue single character personalized plates using the letters "I" or
  "O" or numbers "1" or "0".

David Bliss, University of Washington


Re: The SAFEE Project

<Stewart Fist <stewart_fist@optusnet.com.au>>
Fri, 25 Aug 2006 15:56:15 +1000

Nickee Sanders reports that:

> A joint European effort is working on software that would enable remote
> control of an aircraft that could override any attempts by hijackers to
> control the plane, and force a safe landing ...  The project is
> budgeted for 36m Euros.

It seems to me that such a device would simply swap the terrorists' targets
from the aircraft themselves to the major control towers.

The prospect of a hundred aircraft in the skies over a city, all subject to
ground-control override and takeover, might seem more attractive to them
than hijacking a single aircraft with a knife or gun.

Stewart Fist, 70 Middle Harbour Road, LINDFIELD, 2070, NSW, Australia
+61 (2) 9416 7458


Security Engineering

<Ross Anderson <Ross.Anderson@cl.cam.ac.uk>>
Sat, 26 Aug 2006 09:36:46 +0100

After several years of argument, I've persuaded my publisher to let me put
my book "Security Engineering" online for free download:

   http://www.cl.cam.ac.uk/~rja14/book.html

My book draws on a lot of the experience shared in this list, and has become
a standard textbook in the field.

The publishers thought for years that it was too risky to let authors put
books online, but they are gradually learning that this isn't so.  Putting a
book online often increases its sales; more people read it and those who
find it useful often go buy a copy.

Enjoy!  Ross Anderson, Cambridge University
