The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 24 Issue 89

Friday 2 November 2007

Contents

Computer glitch stops TransAdelaide trains
Andrew Pam
Predicting fatigue failure
Ken Knowlton
Satanic car key traps 12 motorists in car park of horror
Chris Leeson
Car park denial-of-service attack
Peter Houppermans
Risk of Unanticipated Countermeasures -- Congestion Pricing
David Lesher
License plate scanners in police cars
Jonathan de Boyne Pollard
A second look at the Mac OS X Leopard firewall
Monty Solomon
CAPTCHA trojan
Scott Nicol
Mac trojan in-the-wild
Gadi Evron
Double Dipping and Double Charging
Paul Robinson
Re: Fighting traffic citations
Doug McIlroy
Plagiarism & technology
Jeremy Epstein
End of Leap Seconds?
Rob Seaman
Info on RISKS (comp.risks)

Computer glitch stops TransAdelaide trains

<Andrew Pam <xanni@glasswings.com.au>>
Fri, 02 Nov 2007 13:29:56 +1030

THE supplier of the problem-plagued $9.5 million computerised Central Train
System will be forced to fix it, as commuters were again delayed yesterday.

TransAdelaide general manager Bill Watson yesterday revealed an audit of the
system was already in progress after it caused disruptions to morning
services.  "There has been a whole series of different problems," Mr Watson
said.  "They have diminished quite substantially, but there are still
incidents once or twice a month which is unacceptable."  The latest problem
caused delays of up to 15 minutes for morning commuters from 6.30am
yesterday, Mr Watson said.  "The server became unstable and caused delays
right across the network," he said. "By 10am everything was back to
normal. The system had stabilised itself."
http://www.news.com.au/adelaidenow/story/0,22606,21913481-5006301,00.html

THE computer problem that threw the travel plans of about 15,000 rail
commuters into chaos yesterday had been known about for almost five months.

An audit of the $9.5 million computerised Central Train System was
ordered by TransAdelaide in May and completed at the end of June. The
same problem that created yesterday's chaos also caused disruptions to
morning train services in June.

Thousands of passengers were stranded or delayed yesterday morning
because of the ongoing technical problem with the computerised train
control system.  [...]
http://www.news.com.au/adelaidenow/story/0,22606,22684078-5006301,00.html

Andrew Pam  http://www.sericyb.com.au/


Predicting fatigue failure

<Ken Knowlton <KCKnowlton@aol.com>>
Thu, 1 Nov 2007 11:22:34 EDT

Speaking from ignorance, I'll make this short. Many disasters (recently the
Minneapolis bridge, in 2001 the Airbus 300 in Queens) are presumed to have
resulted from fatigue failure. Without analyzing/guessing about possible
modes of failure, couldn't one start with this low-knowledge, high-tech
method: One at a time, tug together several (arbitrarily selected?) pairs of
points of a structure, recording compliance curves.  If the loading and
unloading curves don't match, or if they are different from last month's
curves, something can be presumed to be happening.  With the Queens Airbus:
pulling tips of vertical and horizontal stabilizers together (presumably
elastically) might have demonstrated changes over previous months. Is
anything like this done? Even after-the-crash, such prior data would be
valuable evidence -- exculpatory or otherwise.  I've never heard of it.
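Knowlton's low-knowledge check amounts to two curve comparisons: loading
versus unloading (hysteresis), and this month versus last month (drift).  A
toy sketch, with entirely invented data, names, and tolerance:

```python
def max_gap(curve_a, curve_b):
    """Largest pointwise difference between two displacement curves
    sampled at the same sequence of applied loads."""
    return max(abs(a - b) for a, b in zip(curve_a, curve_b))

def suspicious(loading, unloading, last_month, tol=0.05):
    """Flag the structure if the loading and unloading curves diverge
    (hysteresis), or if this month's loading curve has drifted from
    last month's baseline."""
    return max_gap(loading, unloading) > tol or max_gap(loading, last_month) > tol

# Toy data: displacement (mm) recorded at five load steps.
loading    = [0.0, 1.00, 2.10, 3.00, 4.00]
unloading  = [0.0, 1.02, 2.12, 3.03, 4.00]  # close to loading: little hysteresis
last_month = [0.0, 1.00, 2.00, 3.00, 3.90]  # this month has drifted by 0.1 mm
print(suspicious(loading, unloading, last_month))  # True -- drift exceeds tol
```

The real engineering difficulty, of course, is in choosing the load points
and tolerance, which is exactly the knowledge this method hopes to do without.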


Satanic car key traps 12 motorists in car park of horror

<Chris Leeson <Chris.Leeson@atosorigin.com>>
Fri, 2 Nov 2007 12:01:46 -0000

The RISKS Archives are full of interference-related problems.  Here's
another one for the mix, related in *The Register*'s usual style.

http://www.theregister.co.uk/2007/11/02/kent_car_key/

12 cars in a Gravesend, Kent, car park failed to start or had alarms
triggered by a faulty transmitter in another car.  There had been problems
in the car park for some time.

Not computer related? Well, the initial suspects were "a rogue transmitter
or wireless broadband". With virtually everything wi-fi/bluetooth-enabled
these days, we can only expect more of the same.


Car park denial-of-service attack

<Peter Houppermans <phobos@pobox.com>>
Fri, 02 Nov 2007 10:36:27 +0100

[...  After quite a long search, the problems were found to emerge from a
small family car which was alleged to send out signals blocking keyfobs in a
50m radius.

I must admit I have trouble believing that a CAR does this.  Maybe something
IN the car, but why would a mechanism in a car transmit?  For what purpose?
Main RISK: if someone works out how, it would be a major worry for any
executive driver.
http://news.bbc.co.uk/2/hi/uk_news/england/kent/7073935.stm


Risk of Unanticipated Countermeasures -- Congestion Pricing

<David Lesher <wb8foz@panix.com>>
Fri, 2 Nov 2007 01:25:21 -0400 (EDT)

Niraj Sheth, London's Congestion Fee Begets Pinched Plates,
*Wall Street Journal, 2 Nov 2007, B1
http://online.wsj.com/article/SB119396467957679995.html?mod=fpa_editors_picks

London's congestion pricing for drivers is heralded around the world for
reducing traffic and pollution. It's also causing an unintended effect: a
sharp jump in thieves stealing or counterfeiting license plates.

Thieves are pinching plates by the dozens every day to fool the city's
traffic cameras, which enforce the £8 ($16) daily charge to drive in central
London as well as other traffic infractions. A computer system matches the
plate numbers caught on camera with a register of vehicles; if owners don't
pay a congestion fee (which they can do online, by phone or at gas stations)
by the following day, they get a photo of their car along with a fine in the
mail. With someone else's license plate on their car, scofflaws can drive
around free, and any fines are billed to the plate's rightful owners.

Before the congestion charge took effect in February 2003, police didn't
bother to track stolen number plates, as they're called in Britain, because
so few incidents were reported. In 2004, nearly 6,000 plates were stolen,
according to London's Metropolitan Police. Reports of stolen plates in the
city spiked to 9,777 last year. Up to 300 cars with illegal license plates
enter London's congestion charge area every day, according to the country's
Automotive Association.
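The matching step the article describes can be sketched in a few lines (all
names, plates, and data below are invented; the real system is doubtless far
more elaborate).  The sketch also exposes the failure mode: a cloned plate is
billed to its rightful keeper.

```python
def unpaid_plates(captured, register, payments):
    """Plates caught on camera that are registered but have no payment on file.

    captured: iterable of plate strings read by the cameras
    register: dict mapping plate -> registered keeper
    payments: set of plates whose daily charge was paid
    Returns (keeper, plate) pairs to fine.  A scofflaw running a cloned
    plate never appears here -- the rightful keeper does.
    """
    return [(register[p], p) for p in set(captured)
            if p in register and p not in payments]

register = {"AB07XYZ": "Alice", "CD07QRS": "Bob"}
payments = {"CD07QRS"}
print(unpaid_plates(["AB07XYZ", "CD07QRS", "AB07XYZ"], register, payments))
# [('Alice', 'AB07XYZ')]
```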

Where IS James Bond's Aston Martin DB5 when you need it?
Caught in traffic, no doubt...


License plate scanners in police cars (McCool, RISKS 24.86)

<Jonathan de Boyne Pollard <J.deBoynePollard@Tesco.NET>>
Fri, 02 Nov 2007 10:28:36 +0000

> The article briefly mentions that such systems are common in London and in
> casinos, with little discussion of any problems that may have come up.

In fact, ANPR (Automatic Number Plate Recognition) has quietly become
all-pervasive in the U.K. in recent years.  (Fitch pointed out the
construction of a national ANPR network two years ago in RISKS 24.09.
ANPR-equipped vehicles are almost permanent fixtures in some places, also.)
M. McCool's observation that "the enthusiasm for the systems in this article
is tangible" can be repeated for much news coverage of the subject, where
there is great emphasis on the "security" and "safety" of having automatic
cameras and picture-recognition software linked to various databases of the
country's population.

In part that enthusiasm can be traced back to the news sources themselves,
whose interest in downplaying any potential for abuse, accident, or error in
these systems is understandable.  A quick Google News
search turns up many articles, such as
<URL:http://news.bbc.co.uk/2/hi/uk_news/england/bristol/somerset/7037938.stm>,
<URL:http://news.bbc.co.uk/2/hi/uk_news/magazine/7048645.stm>,
<URL:http://www.wbtimes.co.uk/content/brent/willesdenchronicle/news/story.aspx?brand=WBCOnline&category=news&tBrand=northlondon24&tCategory=newswbc&itemid=WeED04%20Oct%202007%2017%3A51%3A27%3A037>,
<URL:http://www.thisislancashire.co.uk/news/headlines/display.var.1745860.0.caught_on_camera.php>,
<URL:http://manchestereveningnews.co.uk/news/s/1017764_cops_crush_10000_cars>,

many of which are quick to tout the numbers and categories of arrests made,
and how many vehicles were impounded, and gloss over or ignore questions of
whether any errors were made.  Such coverage has all the trappings of
journalists simply regurgitating press handouts.  (Compare the aforelinked
BBC News coverage with that of another news organization at
<URL:http://gazetteseries.co.uk/mostpopular.var.1760672.mostviewed.arrests_at_operation_on_bridge.php>,
for example.)

The cited statistics also require some scrutiny.  The Manchester Evening
News article, for example, repeats police claims that "uninsured drivers are
six times more likely to have convictions for driving un-roadworthy vehicles
and nine times more likely to have convictions for drink-driving".  But the
thought that immediately comes to mind is how much that disparity might
simply be an artifact of the way that the statistics are gathered.  Whether
a driver has insurance is only checked after xe has already been stopped for
another reason.  There is, as yet, no automatic roadside system for scanning
drivers as they pass and checking them against the central MIB (Motor
Insurers' Bureau) database to see whether they have insurance.  The measured
ratio of uninsured to insured drunk drivers may be 9:1 (which seems to be
the datum that the claim is derived from).  But that may simply be because
there are many uninsured drivers who are not stopped for drunk-driving.
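The base-rate worry can be made concrete with a toy calculation (every figure
below is invented for illustration; none of it comes from the police data):

```python
# Entirely made-up numbers, illustrating the selection effect: both groups
# drink-drive at the SAME underlying rate, but uninsured drunk drivers are
# stopped nine times as often, so conviction data shows a 9:1 ratio anyway.
drivers = 1_000_000
uninsured_frac = 0.05      # 50,000 uninsured drivers
drunk_rate = 0.01          # identical drunk-driving rate for both groups

p_stop_uninsured = 0.18    # chance a drunk uninsured driver is ever stopped
p_stop_insured = 0.02      # chance a drunk insured driver is ever stopped

uninsured_convicted = drivers * uninsured_frac * drunk_rate * p_stop_uninsured
insured_convicted = drivers * (1 - uninsured_frac) * drunk_rate * p_stop_insured

# Per-driver conviction rates -- the basis of a "nine times more likely" claim.
rate_uninsured = uninsured_convicted / (drivers * uninsured_frac)
rate_insured = insured_convicted / (drivers * (1 - uninsured_frac))
print(round(rate_uninsured / rate_insured, 1))  # 9.0 despite identical behaviour
```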

There is an interesting 2005 editorial piece in The Register at
<URL:http://www.theregister.co.uk/2005/03/24/anpr_national_system/> on this
subject, which discusses the problems of directly checking whether drivers
are insured.  But perhaps the most interesting article related to this is
Neil Mackay's 2007-10-06 article in The Sunday Herald at
<URL:http://sundayherald.com/news/heraldnews/display.var.1741454.0.0.php>.
Two quotes stand out from it.  The first is the first line of the
report being discussed:  "We live in a surveillance society."  The
second is from the information commissioner, Richard Thomas:  "Today, I
fear that we are, in fact, waking up to a surveillance society that is
already all around us."

The report discussed by Mackay depicts a dystopian vision of the U.K. in
2017.  Some may dismiss such visions.  Science fiction is littered with
disturbing visions of the future that have never come to pass, after all;
and the regularity of that may lead some to erroneously think that _all_
such predictions are, similarly, unlikely to be realized.  However, science
fiction is also littered with occasions where fiction became fact.  One
relevant example: The police hoverdrones of the television series _Dark
Angel_, set in the U.S. in 2019, are to become a reality in the U.K. in
2007/2008 according to
<URL:http://www.scenta.co.uk/Gadgets/1707394/silent-witness.htm>.


A second look at the Mac OS X Leopard firewall (Jürgen Schmidt)

<Monty Solomon <monty@roscom.com>>
Tue, 30 Oct 2007 22:36:30 -0400

Jürgen Schmidt, Leopard with chinks in its armour, 29 Oct 2007

Apple is using security in general and the new firewall in particular to
promote Leopard, the latest version of Mac OS X. However, initial functional
testing has already uncovered cause for concern.

The most important task for any firewall is to keep out uninvited guests. In
particular, this means sealing off local services to prevent access from
potentially hostile networks, such as the Internet or wireless networks.

But a quick look at the firewall configuration in the Mac OS X Leopard shows
that it is unable to do this. By default it is set to "Allow all incoming
connections," i.e. it is deactivated. Worse still, a user who, for security
purposes, has previously activated the firewall on his or her Mac will find
that, after upgrading to Leopard, the system restarts with the firewall
deactivated.

In contrast to, for example, Windows Vista, the Leopard firewall settings
fail to distinguish between trusted networks, such as a protected company
network, and potentially dangerous wireless networks in airports or even
direct Internet connections. Leopard initially takes the magnanimous
position of trusting all networks equally. ...
  http://www.heise-security.co.uk/articles/98120


CAPTCHA trojan

<Scott Nicol <scott.nicol@gmail.com>>
Fri, 02 Nov 2007 15:08:53 -0400

Interesting blog entry at Trend Micro on a new "striptease" trojan,
that's simply a ploy to get users of the trojan to solve CAPTCHAs:

http://blog.trendmicro.com/captcha-wish-your-girlfriend-was-hot-like-me/

Nice to see that we've progressed from the thin-client model of a few years
ago (RISKS-23.17) to today's more robust client implementation.


Mac trojan in-the-wild

<Gadi Evron <ge@linuxbox.org>>
Wed, 31 Oct 2007 18:23:23 -0500 (CDT)

For whoever didn't hear, there is a Macintosh trojan in-the-wild being
dropped, infecting Mac users.  Yes, it is being done by a regular online
gang--itw--it is not yet another proof of concept.  The same gang infects
Windows machines as well; it's just that now they also target Macs.

http://sunbeltblog.blogspot.com/2007/10/screenshot-of-new-mac-trojan.html
http://sunbeltblog.blogspot.com/2007/10/mackanapes-can-now-can-feel-pain-of.html

This means one thing: Apple's day has finally come and Apple users are going
to get hit hard. All those unpatched vulnerabilities from years past are
going to bite them in the behind.

I can sum it up in one sentence: OS X is the new Windows 98.  Investing in
security ONLY as a last resort loses money, but everyone has to learn it
for themselves.

  [Mike Hogsett's reaction to this: "Sure, it is a vulnerability, but the
  user has to confirm the download, then run the installer, then enter their
  admin name and password during the installation of the trojan."  PGN]


Double Dipping and Double Charging

<Paul Robinson <paul@paul-robinson.us>>
Sat, 27 Oct 2007 03:35:35 -0400

In RISKS-24.86, Arthur Flatau mentions how the Austin, Texas tollway system
is double-billing some customers.  And that it seems odd they couldn't have
designed the system to ignore duplicate transponders occurring very close to
each other.

On this point, I agree.  Even if someone was able to make a duplicate of a
transponder, I think it would be extremely unlikely that they would use it
on two vehicles traveling together.  Now, two people, on the other hand,
might be a different story.  So I have a different story.

Back, oh, about twenty years ago when I lived in Long Beach, California, Long
Beach Transit, the local bus company, went from the old "dump" style
fareboxes to the fully automatic ones that count the money and even have a
magstripe reader, so they changed from a regular paper-type bus pass to one
with a mag strip. You would swipe your monthly pass through the reader and
it would beep.  If something was wrong, it would beep twice and the display
would tell the driver what it was.  (I was a cash payer because a pass
didn't work for me; I had to use two different bus companies to get to work,
and they didn't accept each other's passes.)

So I got thinking about it, and I was talking to a driver, and I asked him
what would keep someone traveling with someone else from sneaking their pass
back, say, out the window to someone else (I have seen it done by kids on
the bus sometimes, if they're slick about it the driver will never know!)
He said that it doesn't allow it. He asked me to wait until the next person
came on with a pass and he'd show me.

So, a few stops later someone came on and swiped their pass through, and it
beeped once.  He asked the woman if she would do it again, and she did.  It
beeped twice, and on the LCD display I could see it said "PASSBACK".  (I
was, at the time, sitting in the seat directly behind the driver.) The
driver explained to me that it won't let you use the same bus pass on that
bus for about ten minutes.
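The farebox logic the driver demonstrated amounts to remembering, per bus,
when each pass was last accepted.  A minimal sketch (the class and field
names are invented; the ten-minute window is the driver's figure):

```python
import time

PASSBACK_WINDOW = 10 * 60  # seconds; "about ten minutes" per the driver

class Farebox:
    """Per-bus passback detection: reject a pass reused too soon."""
    def __init__(self):
        self._last_seen = {}  # pass ID -> time of last ACCEPTED swipe

    def swipe(self, pass_id, now=None):
        now = time.time() if now is None else now
        last = self._last_seen.get(pass_id)
        if last is not None and now - last < PASSBACK_WINDOW:
            return "PASSBACK"   # double beep: same pass reused too soon
        self._last_seen[pass_id] = now
        return "OK"             # single beep: fare accepted

box = Farebox()
print(box.swipe("A123", now=0))     # OK
print(box.swipe("A123", now=120))   # PASSBACK -- two minutes later
print(box.swipe("A123", now=700))   # OK -- the window has elapsed
```

Note that the rejected swipe does not reset the window, so a pass handed out
the window keeps getting refused until ten minutes after the accepted swipe.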

So, twenty years ago the technology on a bus farebox was capable of knowing
when an access token was being used twice, but even with the advances in
technology we can't do it today.  On the other hand, it could be argued that
there's no percentage in keeping you from cheating the customers but a lot
of incentive in preventing the customers from cheating you, so maybe that's
part of the reason.

Paul Robinson http://paul-robinson.us - My Blog


Re: Fighting traffic citations (RISKS-24.88)

<Doug McIlroy <doug@cs.dartmouth.edu>>
Wed, 31 Oct 2007 20:22:54 -0400

"Fighting traffic citations", 26 October 2007, brought to mind an old Joe
Condon story.  Seems his neighbor was hauled in for speeding in his Porsche
and asked if Joe might be able to check the accuracy of the radar.  Joe
relished the challenge and agreed to serve as an expert witness.  He
borrowed the very radar from the police and set it up at the very spot of
the ticket, where the cop lurked just where cars came into view around a
wooded curve.  The radar worked fine on several trials at the speed limit
and then gave a startlingly high reading.  A truck appeared out of the woods
behind the Porsche; its big cross-section had been detected through the
trees beyond the little stealth car.  Joe testified to this at the trial,
but his neighbor was found guilty anyway.  After the trial the judge took
them aside and told them what technicalities to appeal on.  He had been
willing to accept Joe's evidence that the radar might have detected a
following vehicle, but was unwilling to get that fact recorded as a
precedent.


Plagiarism & technology

<Jeremy Epstein <Jeremy.Epstein@softwareag.com>>
Mon, 29 Oct 2007 12:25:23 -0400

The interaction of plagiarism and technology seems to crop up periodically
in the news, and at PGN's invitation I'm writing this brief note in hopes of
soliciting a discussion.  A recent discussion on the USACM (Public Policy
Committee of the Association for Computing Machinery) mailing list triggered
these thoughts.  I'm also posting this on my blog in case anyone feels like
adding comments there.  (http://abqordia.blogspot.com)

It's obvious that the availability of so much information online makes
plagiarism easier - it's impossible for a reader to know everything that
could have been used without permission or attribution.  On the flip side,
things like Google make it easier to find suspected instances - as an
example, when I'm reviewing an article for a journal or conference, I
frequently put phrases I suspect are stolen into Google, and have on
numerous instances found that they were in fact taken verbatim without
attribution.  [Hint to the plagiarist: if you're going to use someone else's
words without attribution, make sure they fit with your writing style.  This
is particularly notable when choosing text written by someone with a
different native language than your own - if your native language is English
and you copy something written by a native Chinese speaker, it will be
fairly obvious; the converse is also obviously true.]

For high school and college students, technology like TurnItIn
(www.turnitin.com) is one way of finding plagiarism without teachers having
to do extensive searching.  Although I haven't personally seen the output,
my understanding is that the student submits text which is automatically
analyzed, and potential instances of plagiarism are noted in a message to
the teacher.  (If someone could provide a better explanation, I'd certainly
appreciate it!  I noticed that TurnItIn now puts emphasis on improving
students' writing style, perhaps as a way to give students a feeling that
they're getting something out of the deal.)
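I don't know what TurnItIn actually does internally, but the core of any such
matcher is plausibly something like word n-gram ("shingle") overlap, which a
few lines can sketch (the shingle size and sample texts below are invented):

```python
def shingles(text, n=5):
    """Set of word n-grams ('shingles') in lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, source, n=5):
    """Fraction of the submission's shingles that also appear in the source."""
    a, b = shingles(submission, n), shingles(source, n)
    return len(a & b) / len(a) if a else 0.0

src = "the quick brown fox jumps over the lazy dog near the river bank"
sub = "my essay notes that the quick brown fox jumps over the lazy dog today"
print(overlap(sub, src))  # 0.5 -- half the submission's shingles match
```

Note that such a score is blind to quotation marks and citations, which is
exactly the false-positive problem discussed below.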

There are several problems with products of this sort:

(1) False positives.  When my daughter was in high school, she noted several
times that TurnItIn considered her a plagiarist because it was unable to
distinguish between properly quoted/referenced text, and unauthorized
copying.  Teachers who simply look at the overall "score" without reading
the individual comments will tend to penalize those students who do the best
job of citing background work!  (I'm reasonably sure that TurnItIn is
careful not to deny that there are false positives, and that it strongly
encourages teachers and students to examine the results rather than simply
believing them verbatim.)

(2) Copyright infringement.  TurnItIn keeps copies of student papers in
their database, for matching against future papers.  This seems reasonable
at first blush - after all, selling term papers is an old tradition, dating
back well before the Web (although today's students may not believe that)!
However, by keeping submissions for matching, TurnItIn may be violating
copyright, as a recent lawsuit claims (see "McLean Students Sue
Anti-Cheating Service", Washington Post, March 29 2007,
http://www.washingtonpost.com/wp-dyn/content/article/2007/03/28/AR2007032802038.html).
Additionally, students have effectively no option to refuse
adding their papers to the database, and are not compensated for their
submissions.

So to bring this to RISKS, the issue is that we have competing risks: the
risk of plagiarism being combated by TurnItIn and similar products vs. the
risk of unfair accusations of plagiarism and copyright infringement - all of
which is enabled by technology.


End of Leap Seconds? (Re: RISKS-24.79)

<Rob Seaman <seaman@noao.edu>>
Thu, 25 Oct 2007 13:43:30 -0700

An earlier thread, "U.S. legal time changing to UTC" discussed a possible
future for UTC without leap seconds.  We are now just one step away from
that future.  Rob Seaman, National Optical Astronomy Observatory

  ---------- Forwarded message ----------
  From: Richard B. Langley
  To: Canadian Space Geodesy Forum
  Subject: End of Leap Seconds?

  At the Civil GPS Service Interface Committee meeting in Fort Worth last
  month, Dr. Wlodzimierz Lewandowski from the Bureau International des Poids
  et Mesures (BIPM) summarized the outcome of the International
  Telecommunication Union (ITU) meeting on the redefinition of Coordinated
  Universal Time (UTC), which was held in Geneva, 11-14 September 2007:

  o April 2008: ITU Working Party 7A will submit to ITU Study Group 7
  project recommendation on stopping leap seconds

  o During 2008, Study Group 7 will conduct a vote through mail among member
  states

  o 2011: If 70% of member states agree, World Radio Conference will approve
  the recommendation

  o 2013: Application of leap seconds will stop and UTC will become a
  continuous time scale.

The risk here is in attempting to resolve a technological issue with complex
implications by voting.  One would submit that any solution that generates a
negative opinion from 30% of a pool of experts is a bad solution.  Worse yet
is if the voters are not themselves experts...

Rather, a coherent plan should be developed in an open, collaborative
environment and a consensus should be sought not only to the acceptability
of the plan, but to its necessity.  Participation should be sought from all
affected communities - that list is quite extensive for timekeeping.  For
instance, one might expect a UTC conference to be organized, not just an
internal meeting of the ITU.

In this case, no plan whatsoever exists for addressing the inevitable
discontinuity that will occur as the missing leap seconds accumulate.  The
previous thread described why civil time is a flavor of mean solar time in
the first place.  What happens when this assumption is challenged?

Earlier suggestions for embargoing leap seconds relied on the flabby idea of
leap hours.  The leap hour concept appears to rest on the notion that many
localities manage to handle one hour Daylight Saving Time shifts twice a
year.  Perhaps the thought is simply that a year will come when one of the
DST jumps is skipped...unfortunately, it doesn't work like that.  (And not
only because not all localities observe DST, and not all at the same time.)
The precise reason that DST is an acceptable timekeeping policy is that any
civil or legal entities or systems that need to know an unambiguous time can
fall back on a common worldwide UTC.  It would be completely inappropriate
to institute a leap in UTC by resetting the clocks to run through the same
hour twice.  How could one disambiguate that hour of world history ever
after?

Rather, a leap second is an intercalary event like a leap day - that
particular minute, hour, and day is one second longer.  There is no
ambiguity during a leap second.  A leap hour would simply be 3600 embargoed
leap seconds released one after another.  That particular red-letter day
would have 25 hours.  Any software that has trouble handling the time
23:59:60 would be faced with 3600 such time values in a row: 24:00:01, ...,
24:59:59, 25:00:00.
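Most software cannot represent 23:59:60 at all; a common workaround is to
special-case the label before conversion.  A minimal sketch in Python (whose
datetime type rejects second=60), mapping the leap second to the following
instant and thereby losing exactly the distinction at issue:

```python
from datetime import datetime, timedelta, timezone

def parse_utc(label):
    """Parse 'YYYY-MM-DDTHH:MM:SS', tolerating a trailing leap second ':60'.

    A minimal sketch: a :60 label is mapped to the following :00 instant,
    because datetime cannot represent the leap second itself.
    """
    date_part, time_part = label.split("T")
    h, m, s = time_part.split(":")
    leap = (s == "60")
    if leap:
        s = "59"
    dt = datetime.strptime(f"{date_part}T{h}:{m}:{s}", "%Y-%m-%dT%H:%M:%S")
    dt = dt.replace(tzinfo=timezone.utc)
    return dt + timedelta(seconds=1) if leap else dt

print(parse_utc("2005-12-31T23:59:60"))  # 2006-01-01 00:00:00+00:00
```

A leap hour would hand this kind of workaround 3600 unrepresentable labels in
a row instead of one.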

But that's not all, since the leap hour would occur all over the world at
the same time.  The leap second 2005-12-31T23:59:60 corresponded to 18:59:60
EST in New York City and 15:59:60 PST in Los Angeles.  A leap hour, say
2600-12-31T24:00:00-24:59:59, would be interposed between the successive
clock ticks 18:59:59 and 19:00:00 in New York, between 15:59:59 and 16:00:00
in LA.

How would this work logistically?  For instance, would the NYC clock count
from 18:59:60 to 18:59:3659?  This is the sort of detail that should be
worked out before voting a fundamental change to UTC.

Rob Seaman, National Optical Astronomy Observatory, Tucson, AZ
