The RISKS Digest
Volume 24 Issue 25

Tuesday, 18th April 2006

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

IE Changes Due: What You Can Expect
Gregg Keizer via Monty Solomon
New Microsoft Patch Breaks Web Pages — On Purpose!
Lauren Weinstein
How to lose 10,000,000 pounds
Mike Williams via Mark Brader
Norwegian bank has problems moving customers to new platform
Vetle Roeim
Hong Kong: Former police complainants exposed on the Internet
John Kane
Embedded Bug Detection
Al Mac
Oxygen and autopilots
Andrew Koenig
Another near-disaster due to vehicle automation
Pete Mellor
Re: Another near-disaster due to vehicle automation
Don Norman
Re: IT Corruption in the UK
Lem Bingley
DNS Amplification Attacks
Gadi Evron
Re: "routine" system failure
Ken Knowlton
Info on RISKS (comp.risks)

IE Changes Due: What You Can Expect

<Monty Solomon <monty@roscom.com>>
Fri, 14 Apr 2006 15:37:54 -0400

By Gregg Keizer,  TechWeb.com, 11 Apr 2006

Microsoft Corp. will release Tuesday a security update for Internet Explorer
that will also change how users interact with Web sites.

Some sites that rely on popular ActiveX controls, such as Apple's QuickTime,
RealNetworks' RealPlayer, and Adobe's Flash and Acrobat, are likely to give
users fits.

The change, which Microsoft has been warning Web site developers about since
December 2005, was made to abide by a ruling in a patent infringement
lawsuit Microsoft lost in 2003 to the University of California and its
startup, Eolas Technologies Inc.

With the changes rolled out in a mandatory security fix, any IE user who
downloads and installs Tuesday's security patches — either manually or via
an automated system such as Microsoft Update — will likely need to modify
how they use those sites which haven't been rewritten.

What should users expect?  ...

http://www.informationweek.com/story/showArticle.jhtml?articleID=185300378


New Microsoft Patch Breaks Web Pages — On Purpose!

<Lauren Weinstein <lauren@vortex.com>>
Sat, 15 Apr 2006 09:42:31 -0700 (PDT)

OK, let's be fair about this: the underlying purpose of the Microsoft patch
isn't to break Web pages, though this result was understood and expected.

I haven't seen a detailed discussion of the implications of this situation
in RISKS (some venues are calling the issue a "mini-Y2K" — which is a bit
overdramatic), but it *is* important. As of a few days ago, vast numbers of
Internet Explorer (IE) users are experiencing Web pages all over the Net
which simply don't work as expected any more.

Simplified backstory first.  A couple of years ago, Microsoft lost a patent
fight over commonly used techniques to embed "active" content into Web
pages.  While "ActiveX" operations are usually cited in this regard, in
reality all manner of embedded active player objects are apparently
involved, including Flash, QuickTime, RealPlayer, Java, etc.

We can argue about whether or not such techniques should be patentable in
the first place.  A lot of us believe that such patents have gone way
overboard and that the USPTO is far out of its depth.

In any case, MS decided that they didn't want to pay the associated license
fees for the patented techniques (so far, the holders of the patent have
seemingly not gone after open source browsers in non-commercial contexts --
such as Firefox — which is why Firefox is not currently affected by this
issue).

Several months ago, MS issued a patch to change IE behavior to what they
believe is a non-infringing operation.  This requires that users explicitly
click embedded objects first (theoretically guided by a small hint message
that appears if they happen to mouse over the objects, which will supposedly
be visually boxed as a cue), before the objects will become active.  In the
case of active objects that already require a click to start, this means
that *two* clicks will now be needed.

There are variations on this theme.  For example, in some cases, playback of
video may commence automatically, but the video control buttons reportedly
won't be active unless the user clicks them first.  Confusing?  Yep.

There are ways to redesign Web pages to restore the original behaviors, more
or less.  But these typically require the use of embedded JavaScript, which
introduces its own complexity and security issues, especially on large
sites.
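
As an illustration of the kind of rewrite involved: the commonly cited
workaround is to emit the object markup from an *external* script file
rather than placing it inline in the page.  Here is a minimal sketch; the
file name and movie are hypothetical, and a real page would need to carry
over all of the original object's parameters.

  // embed-player.js, included via <script src="embed-player.js"></script>
  // Markup written out by an external script file, rather than appearing
  // inline in the HTML, is not subject to the new click-to-activate rule.
  document.write(
    '<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"'
    + ' width="320" height="240">'
    + '<param name="movie" value="video.swf">'
    + '<embed src="video.swf" width="320" height="240"'
    + ' type="application/x-shockwave-flash"></embed>'
    + '</object>'
  );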

If MS originally issued the patch that changed IE behavior months ago, why
is this a big deal today?  Because only now is Microsoft pushing out that
patch as part of the standard automatic "Windows Update" mechanisms.
Previously, you would have had to manually download the patch yourself.
Millions of people are currently receiving the patch, and seeing the
associated effects.

Now for an even more bizarre twist.  Microsoft, realizing the sudden
negative impact that this patch could have on many users, has just issued
yet *another* patch (which as far as I know must be downloaded manually)
that specifically *disables* the "offending" patch until the next planned IE
update in a couple of months or so, restoring the original IE behavior until
then on a temporary basis.  Got that?  You can't make this stuff up.

Perhaps the biggest problem with this situation is that many Web sites don't
realize that they can be affected even if they don't use ActiveX.  In fact,
I wasn't aware of this until a few days ago, when I started having problems
with a relatively simple embedded Flash video on one of my sites.  You can
see the effects and side-effects, plus the explanations I've now placed on
the page, at:
  http://lauren.vortex.com/archive/000173.html

Since the embedded video area is itself black, the new IE behavior of
"boxing" the object as a cue to an additional click turned out to be
essentially invisible.  Surprise!

Note that the underlying display code is unchanged.  I have not at this time
added the JavaScript "container" code that would be necessary to fully work
around this silly situation.

Are we all bozos on this bus, or what?

Lauren Weinstein +1 (818) 225-2800 http://www.pfir.org http://www.ioic.net
Blog: http://lauren.vortex.com  DayThink: http://daythink.vortex.com


How to lose 10,000,000 pounds

<msb@vex.net (Mark Brader)>
Mon, 3 Apr 2006 03:41:59 -0400 (EDT)

The following story was posted by Mike Williams, of the UK, a few days ago
in rec.puzzles (without a usable email address where I could ask permission
to forward it here).  This copy was edited for typos.

  I used to work on the S.W.I.F.T. payments system, and even that wasn't
  100% perfect at eliminating duplicates and spotting omissions.

  In the many years that I worked with the system, we had one situation
  where everybody followed the rules, and yet a payment for ten million
  pounds got lost.

  It all started when an operator at Bank A mistyped 10,000,000 instead of
  20,000,000 on the initial payment. The error was spotted pretty quickly -
  banks have systems in place for double checking the total amount that gets
  paid in and out.

  The operator could have sent a cancellation for 10,000,000 and a new
  payment for 20,000,000 and all would have been well, but cancellations can
  take days to process and someone would have to pay the overnight
  interest. What actually happened was that they sent a second payment for
  the remaining 10,000,000.

  Now Bank A happened to use a system whereby the SWIFT Transaction
  Sequence Number is taken from the initial paperwork that caused the
  payment to be made, so the two payment messages were sent with the same
  TSN, the same amount, date, payer and payee. In fact the two payment
  messages were identical. (My bank didn't work like that; my programs
  always used a unique TSN, but that's partly because I wanted to use our
  TSN as a unique index on our outgoing files to make the coding simpler.)

  Unfortunately, at some point in its journey round the world, the initial
  payment hit a comms glitch. These were the days when electronic data
  communications were far less reliable than they are now. The relay station
  didn't get a confirmation ("ACK") so it sent a copy of the message with a
  "PDS" marker (Possibly duplicated by SWIFT network).

  When the payments arrived at Bank B, they got passed to one of my
  programs that checks for possible duplicates. Because the payments were
  100% identical, and one of them was flagged "PDS", that payment was
  dumped into the "real duplicate" file.

  Mike Williams, Gentleman of Leisure [Forwarded to RISKS by Mark Brader]
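
For illustration, the duplicate check described above amounts to something
like the following sketch (the field names are hypothetical, not actual
SWIFT message fields).  Note that nothing in it can distinguish a network
retransmission from a genuine second payment once the two messages are
identical in every field:

  // Minimal sketch of a PDS duplicate check; hypothetical field names.
  interface Payment {
    tsn: string;      // Transaction Sequence Number
    amount: number;
    date: string;
    payer: string;
    payee: string;
    pds: boolean;     // "Possibly Duplicated by SWIFT network" flag
  }

  function sameFields(a: Payment, b: Payment): boolean {
    return a.tsn === b.tsn && a.amount === b.amount && a.date === b.date
        && a.payer === b.payer && a.payee === b.payee;
  }

  // A PDS-flagged message matching one already received is filed as a
  // real duplicate, even if, as at Bank A, two distinct payments
  // happened to be identical in every field.
  function isNetworkDuplicate(incoming: Payment, seen: Payment[]): boolean {
    return incoming.pds && seen.some(prev => sameFields(prev, incoming));
  }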


Norwegian bank has problems moving customers to new platform

<"Vetle Roeim" <vetler@gmail.com>>
Thu, 13 Apr 2006 13:26:39 +0200

After merging in 2003, the Norwegian banks DnB and Gjensidige NOR (now DnB
NOR[1]) finally finished moving their customers onto a common platform
earlier this year.

This did not go as smoothly as planned, though; in some cases company
accounts and private accounts now require the same login, which has forced
some users to disclose their private accounts to others wanting to access
company accounts[2].

In other cases, old access rights have been reactivated. In one case a man
had his account emptied by his ex-wife[3], and in another a mother was
granted access to her 37-year-old son's account.

Both The Financial Supervisory Authority of Norway[4] and The Data
Inspectorate[5] are a little cross with DnB NOR, and have asked them for
more information about the problem. The bank, on the other hand, is trying
to put a positive spin on the whole thing, claiming that all this is good
for the customers and gives them a better overview of their accounts[6].
Somehow I don't think their customers agree.

[1]: <URL:http://www.dnbnor.com/>
[2]: <URL:http://www.vg.no/pub/vgart.hbs?artid=142282> (Norwegian)
[3]: <URL:http://www.dn.no/forsiden/naringsliv/article756284.ece> (Norwegian)
[4]: <URL:http://www.kredittilsynet.no/>
[5]: <URL:http://www.datatilsynet.no/>
[6]: <URL:http://www.dn.no/privatokonomi/article753237.ece> (Norwegian)


Hong Kong: Former police complainants exposed on the Internet

<"John_Kane@tricolour.queensu.ca" <John_Kane@tricolour.queensu.ca>>
13 Mar 2006 11:45:18 -0800

The identities of 20,000 former police complainants in Hong Kong have been
made public on the Internet.  The database, which contained highly
confidential information, was discovered a few days ago on the website of a
private company.  The Independent Police Complaints Council has apologised
for the security lapse and is currently conducting an investigation into the
matter.  Critics are now asking how the sensitive details were leaked in the
first place.  [Reported by Huey Fern Tay on Radio Australia's Asia Pacific
web-page http://www.abc.net.au/ra/asiapac/]


Embedded Bug Detection

<Al Mac <macwheel99@sigecom.net>>
Wed, 12 Apr 2006 01:33:21 -0500

  [Much of this item will be familiar to old-time RISKS readers, but is
  included to remind us that many old risks are still present.  PGN]

Embedded bugs can kill people.  Many bugs can be detected by thorough
testing.  The alternative is to release the product without spending money
on good testing and wait until it kills people; then you know you have
bugs.  Guess which approach is more popular?

Software to analyse other software to detect bugs is much in demand.  How
effective and economical is that state of the art, as compared to doing
proper testing, for example?  Traditional (non-embedded) software has
testing tools that can capture scripts of normal operations so as to test
what happens after minor software changes.  It sounds like this kind of
comprehensive automated testing is not in use where it is most needed.

At the Embedded Systems Conference in San Jose, California, attendees
discussed how software practices can mean the difference between life and
death.  http://www.embedded.com/esc/sv/

* The Therac-25, designed to treat tumors with carefully targeted radiation,
  killed three patients with radiation overdoses due to programming errors.

* Inspections of software after the crash of a U.S. Army Chinook helicopter
  revealed 500 errors, including 50 critical ones, in just the first 17
  percent of code tested.  One wonders if the testing after the crash was
  better than the testing before implementation, and if litigation will lead
  to a better budget for testing before the next disaster strikes.

* "Electronic smog" occurs when instruments are inadequately shielded from
  interference from other electronic instruments.  Engineers have known of
  this risk for decades, but new technology producers perpetually
  rediscover the phenomenon.

* A classic example of this is Japanese bullet-train doors opening when
  passing apartment complexes, due to lots of kids playing with electronic
  toys.  This can kill passengers sucked out of the trains in the
  decompression.

* Pacemaker patients have had their devices inadvertently reprogrammed when
  walking through metal detectors. In 2003, the pacemaker of a woman in
  Japan was accidentally reprogrammed by her rice cooker.

* There has been a spate of problems with software in autos.

One report suggests that large software systems of more than a million lines
of code may have as many as 20,000 errors, 1,800 of them still unresolved
after a year.  In my experience, such bugs are not evenly distributed; they
are related to the quality of programmers, programmer tools, testing, tech
support (when alleged bugs are reported), and project oversight, leading to
clusters of systemic bugs in some places and an almost total absence of bugs
in others.
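
For scale, the figures quoted work out to about 20 defects per thousand
lines of code, with roughly nine percent still open after a year; a
trivial back-of-the-envelope check:

  // Back-of-the-envelope arithmetic on the figures quoted above.
  const linesOfCode = 1_000_000;
  const totalErrors = 20_000;
  const unresolved = 1_800;
  const defectsPerKloc = totalErrors / (linesOfCode / 1_000);  // 20
  const unresolvedShare = unresolved / totalErrors;            // 0.09
  console.log(defectsPerKloc, unresolvedShare);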

http://www.informationweek.com/news/showArticle.jhtml?articleID=185300011
http://dso.com/news/showArticle.jhtml?articleID=185300246
http://biz.yahoo.com/prnews/060411/sftu082.html?.v=54


Oxygen and autopilots (Re: Ferguson, RISKS-24.23)

<"Andrew Koenig" <ark@acm.org>>
Tue, 4 Apr 2006 22:58:09 -0400

> I would propose this Basic safety concept: before the system will allow
> itself to move into dangerous situations, the pilot must confirm that he
> is aware of the specific danger involved.  [...]

As an (admittedly inactive) private pilot, I have to respond to this
suggestion by saying "No way!!"

One reason is an even more basic safety concept: No autopilot or other
device should ever be permitted to move the flight controls in ways that the
crew cannot override.  The reason is that it is impossible to predict all
the circumstances in which a malfunctioning "safety device" might itself
cause a hazard that is impossible to prevent without overriding it.

More generally, automation tends to carry hazards in practice that do not
exist in theory.  For example, one light airplane manufacturer once came up
with what looked on the surface like a wonderful safety innovation: Whenever
the airplane is flying more slowly than a given limit, the wheels come down
by themselves.  No more gear-up landings, right?

Wrong.  In practice, it turned out that the automatic gear-extension
mechanism failed more often than pilots forgot to lower the gear manually,
so there were *more* gear-up landings rather than fewer.  Eventually, the
FAA required the auto-extension system to be deactivated.

I can also imagine an altitude-based "safety system" forcing an airplane to
fly into terrain if there is a sudden cabin depressurization that might have
otherwise been survivable, or — even worse — if the sensor that measures
cabin pressure malfunctions, indicating a depressurization when in fact none
exists.

I see nothing intrinsically wrong with a cabin depressurization warning (and
I imagine that pressurized airplanes already have them, though I've never
flown one myself).  I wouldn't even mind if an emergency depressurization
instructed the autopilot to descend in case the crew were incapacitated,
especially if the autopilot is coupled to a navigation system
that knows enough to avoid terrain.  But unless incapacitated, the pilot in
command should be in command.


Another near-disaster due to vehicle automation

<Pete Mellor <pm@csr.city.ac.uk>>
Sat, 15 Apr 2006 00:09:24 +0100 (BST)

Don Norman contributed an item: "Motorist trapped in traffic circle for 14
hours" to RISKS-24.22 on 1 Apr 2006.  This reminded me of the following.  I
checked, and I was surprised to find that no one seemed to have reported it
to RISKS, although it smells to me very much like an engine control system
failure, and possibly a software failure.

It was widely reported in the UK press at the time.  The following is one
account by Nick Britten, which I found on-line, originally printed in the
*Daily Telegraph*.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2006/03/11/ntrapped11.xml&sSheet=/portal/2006/03/11/ixportaltop.html

The comment by the driver regarding the power steering, and the reaction
of BMW, are particularly interesting.

 - - - -

A motorist was trapped in his car driving at almost 130mph for 60 miles
after the accelerator jammed.  Kevin Nicolle, 25, was unable to stop the
automatic BMW going at top speed after the malfunction on the A1: "I
couldn't get the pedal off the floor."  His terrifying journey, which was
followed by four police cars and a helicopter, ended when he smashed the
car into a roundabout, flipping it on its roof.  ...  Mr Nicolle was
driving back from friends in Newcastle to his home in Southsea, Hants,
last Sunday, when the accelerator on his automatic BMW 318 jammed at
Catterick, near junction 53 of the A1.  [PGN-ed]

Peter Mellor, Centre for Software Reliability, City University,
Northampton Square, London EC1V 0HB   +44 (0)20 7040 8422


Re: Another near-disaster due to vehicle automation

<"Don Norman" <norman@nngroup.com>>
Fri, 14 Apr 2006 18:08:25 -0700

The BMW incident reported upon by Peter Mellor is eerily similar to the fake
incident that I contributed to the April Fools' edition of RISKS-24.22, 1
Apr 2006.  My incident, which I carefully created to be as realistic as I
could make it, fooled a few people. Moreover, I believe it could actually
happen.

I was just in Cambridge (UK) and at a talk, I showed a slide of my fake news
story. The audience responded by describing the BMW incident.  There was
some discussion that this particular auto has a "drive-by-wire" control:
that is, the throttle pedal no longer has a mechanical link, but instead
signals the car's electronic control modules. In automobile language, this
is called "Electronic throttle control": ETC. BMW calls it EDR, or possibly
EMR. At least one aviation safety specialist at the Cambridge meeting said
that his car has this, and he prefers it, because now throttle position
controls speed, so that as long as the foot is held constant, the car
maintains constant speed, even up and down hills.  (And, if the newspaper
story is correct, even if you take your foot off the pedal and attempt to
apply the brakes.)
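
Purely as an illustration of the failure mode under discussion (a toy
loop, not BMW's or anyone's actual design): in a drive-by-wire scheme the
throttle actuator tracks a sensed pedal value, so a sensor or software
fault that latches the value at maximum holds the throttle open no matter
what the physical pedal does.  This is also why some designs add a
brake-override rule.

  // Illustrative toy electronic-throttle logic; all names hypothetical.
  function throttleCommand(pedalSensor: number, brakePressed: boolean): number {
    // pedalSensor is normally in [0, 1]. If the sensor, or the code
    // reading it, latches at 1.0, the commanded throttle stays wide
    // open regardless of the physical pedal position.
    let demand = Math.min(Math.max(pedalSensor, 0), 1);

    // A brake-override rule, cutting the throttle whenever the brake
    // is applied, would have bounded the incident described above.
    if (brakePressed) {
      demand = 0;
    }
    return demand;
  }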

See RISKS for 1988: "Drive by wire" autos in development (RISKS-6.48). Yes,
to answer a question asked in that RISKS submission, BMW did introduce
electronic throttle control in their 7 series autos in 1988.

Caution: in the accident business, it is unwise to rely upon the initial
newspaper reports. The official accident investigations, which can take a
year or more to prepare, often have a very different slant on the
incident. Perhaps Peter Mellor can follow up on this story when the official
incident report is released.

Don Norman.  www.jnd.org

  [Later note from Don, Sun, 16 Apr 2006 18:41:40 -0700]

By the way, I did some more research on the topic.  It seems that stuck
throttles were a recurring problem with old, mechanical throttles. The
electronic throttles have received numerous complaints, but all of the ones
I could find were about "unintended acceleration".  Doing a web search for
"electronic throttle accident" (without the quotes) is quite revealing.

I still don't know enough about this class of potential accidents to offer
definitive comment. But from what I can tell, automobile incidents will
replace aircraft ones for the RISKS community. The more things change, ...

Example:

The National Highway Traffic Safety Administration is investigating
complaints that some Toyota Motor Corp. cars may suddenly accelerate or
surge, causing one car to strike a pedestrian.  The 2002 and 2003 Toyota
Camry, Camry Solara and Lexus ES300 vehicles all come equipped with an
electronic throttle control system, which the NHTSA said uses sensors to
determine how much throttle is being applied.

The NHTSA said 30 crashes have been attributed to the problem, with four
accidents resulting in five injuries. The crashes "varied from minor to
significant and may have involved other vehicles and/or building
structures."  The preliminary investigation is the first step in the
investigative process.  The NHTSA will contact Toyota to ask for documents
pertaining to the issue, and could upgrade the investigation to an
engineering analysis. More than 1 million Toyotas are covered by this
investigation, according to the agency.

Toyota officials could not immediately be reached for comment.

[Source: NHTSA Investigating Toyota Cars For Sudden Acceleration,
Sharon Silke Carty, Accident Reconstruction]
  http://www.accidentreconstruction.com/news/mar04/030804d.asp


Re: IT Corruption in the UK (Ravetz, RISKS-24.23)

<"Lem Bingley" <Lem_Bingley@vnu.co.uk>>
Fri, 7 Apr 2006 14:51:39 +0100

Jerry Ravetz's item (http://catless.ncl.ac.uk/Risks/24.23.html#subj3) on
Mapeley's involvement in Passport Agency biometric testing blurs the
distinction between the current UK passport system, which uses a crude
facial biometric process, and the upcoming biometric UK identity card
system, which will undoubtedly be more complicated.

Mapeley may or may not become involved in the UK ID card project - the
procurement phase has only just begun (see
http://www.computing.co.uk/computing/news/2153478/id-card-scheme-moves).

At present the UK Home Office suggests that the eventual ID card will likely
use a combination of facial biometrics and iris-recognition, which will
obviously be much less prone to error than a facial biometric alone (the
passport system compiles its biometric template from the photograph supplied
on application, so is presumably fairly likely to spit out false
positives). Obviously when I say there is less opportunity for error, I'm
talking about the richness of information on which authentication decisions
can be based, not the implementation of the system. Clearly you can
implement a system to create as many wrong decisions as you like.

I have wondered whether it might be possible to apply for two biometric ID
cards, under different names, and escape detection.

According to one expert I've spoken to, iris-recognition applied to both
eyes should be good enough to detect 99.99 percent of duplicate registration
attempts - assuming there is a central register of templates where a new
applicant can be compared with existing records. This is not yet certain for
the UK system, but it is very likely. Again, the above level of confidence
assumes the unlikely circumstance that there are no errors in
implementation.  (See
http://lembingley.itweek.co.uk/2006/03/biometric_card__1.html for more).
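
That figure is consistent with simple independence arithmetic: if a
single-eye comparison misses, say, one percent of true matches, and the
two eyes fail independently (both idealised assumptions of mine, not the
expert's), a duplicate slips through only when both comparisons miss:

  // Idealised arithmetic; the 1% single-eye miss rate is an assumption.
  const singleEyeMissRate = 0.01;
  const bothEyesMiss = singleEyeMissRate ** 2;  // 0.0001
  const detectionRate = 1 - bothEyesMiss;       // 0.9999, i.e. 99.99%
  console.log(detectionRate);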

Of course you may get away with applications for two ID cards if you turn up
for each test wearing an eye patch - on alternate eyes.

Lem Bingley, Editor IT Week VNU BUSINESS PUBLICATIONS LIMITED, 32-34 Broadwick
Street, London, W1A 2HG +44 (0) 20 7316 9000 http://www.itweek.co.uk


DNS Amplification Attacks

<Gadi Evron <ge@linuxbox.org>>
Sat, 18 Mar 2006 03:50:44 +0200

In this paper we address in detail how the recent DNS DDoS attacks work:
how they abuse name servers, EDNS, the recursive feature and UDP packet
spoofing, as well as how the amplification effect works.
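
The amplification arithmetic itself is straightforward: a small spoofed
UDP query elicits a much larger EDNS response aimed at the victim, so the
attacker's outbound bandwidth is multiplied many times over.  A sketch
with illustrative sizes (my assumptions, not measurements from the paper):

  // Illustrative amplification arithmetic; sizes are assumptions.
  const querySize = 60;       // bytes: small spoofed UDP query
  const responseSize = 4096;  // bytes: large EDNS0 response
  const amplification = responseSize / querySize;   // ~68x
  // Attacker bandwidth needed to land a 2.8 Gbps flood on the victim:
  const attackerGbps = 2.8 / amplification;         // ~0.04 Gbps
  console.log(amplification, attackerGbps);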

Our study is based on packet captures (we provide samples) and logs
from attacks on different networks reported to have a volume of 2.8Gbps.
One of these networks indicated some attacks have reached as high as 10Gbps
and used as many as 140,000 exploited name servers.

In the conclusions we also discuss some remediation suggestions.

Given recent events, we have been encouraged to make this text available at
this time.

  http://www.isotf.org/news/DNS-Amplification-Attacks.pdf

Please note that this version of this paper is prior to submission for
publication and that the final version may see significant revisions.

Randy Vaughn and Gadi Evron


Re: "routine" system failure (Magda, RISKS-24.24)

<Ken Knowlton <KCKnowlton@aol.com>>
Thu, 13 Apr 2006 20:59:55 EDT

I reacted, much as David Magda did, to the very odd notion of a "routine"
system failure [in RISKS-24.24]. On further thought, an "ordinary" system
failure (from buffer overrun, mishandled leap year, etc.) can be
meaningfully distinguished from a maliciously intended failure.

If the document had been translated into English (though it presumably
wasn't in the cited case), a translator might not have understood the
delicate difference between 'ordinary' and 'routine'. This thought makes me
wonder whether troubles might not, many times, be compounded by insufficient
vetting of translations of the technical reports of various misfortunes.
