The RISKS Digest
Volume 24 Issue 72

Wednesday, 11th July 2007

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



Remote physical security for air traffic control center
Rob Slade
Beware of the fine print
Peter Mellor
The risk with the Mac OS X 10.4.10 version number
T Yip
The Athens Affair: Greek Cellphone Caper
Roy Stehle
Lightning bolt blamed for NYC power outage
Voltr Risks, Glitch - Fire Alarm - International Space Station
Robert J Perillo
Wikipedia, It's Time to Grow Up! The Benoit Murder/Suicide Case
Lauren Weinstein
Wikipedia and Responsibility
Lauren Weinstein
Re: Transport system complexity presents insurmountable risk?
Mark Brader
Re: Gripen: Risks of safety measures in military jet aircraft
Matt Jaffe
Peter Mellor
N-version programming — the errors are in ourselves
Fred Cohen
Secure Programming with Static Analysis
Brian Chess
Info on RISKS (comp.risks)

Remote physical security for air traffic control center

<Rob Slade <>>
Mon, 09 Jul 2007 14:31:18 -0800

Because watching a monitor from 4,600 kilometres away is more secure ...

  "The air traffic control center in Surrey, B.C. will have its security
  guards replaced with automated entry systems and officials watching
  monitors in Ottawa."  CBC News, 9 Jul 2007

Beware of the fine print

<Peter Mellor <>>
Wed, 11 Jul 2007 08:27:52 EDT

On BBC Radio 4 "You and Yours" last week with a follow-up at lunchtime today
(11th July 2007):

Some naughty web users have got more than they bargained for when casually
browsing for adult material.  At least two porn sites (mysexworld and
sexpassport) feature a novel way of enforcing payment.  A page on the site
(p7 of 13 in one case) contains a warning well buried in the small print
that, by visiting that page, the reader agrees to a 3 day free trial.  If
they do not cancel the arrangement, then a 3 month contract at 39.99 pounds
payable in advance comes into force.  It is stated that, if payment is not
received, then the "subscriber" agrees to inconvenience up to and including
the complete disruption of their use of their computer.

Having inadvertently walked into this "agreement", the hapless victims then
found that they had downloaded software which flashed pop-up windows onto
their screens, demanding payment.  The pop-ups cannot be disabled, moved,
closed or sent to background, and persist for increasingly long periods of
up to 10 minutes.  Since they appear every few seconds, they render the
computer unusable.

This charming "business model" is the brainchild of a certain MBS, who lease
the software to the porn site operators.  The CEO of MBS quite brazenly
stated that this is fair practice since the victims had knowingly agreed in
advance to the disruption in the event of them not paying for their
subscriptions.  He denied having anything to do with the porn "industry",
saying that his software was available for hire by any outfit wanting to
sell any type of web services.

The UK Trading Standards Authority has received 200 complaints so far and
is apparently "in discussion" with MBS to modify its practices.  The
equivalent authority in the US has adopted a less limp-wristed attitude: it
has enforced on a similar firm a maximum duration of 40 seconds for the
pop-ups, and is considering slapping an injunction on the firm to ban the
practice.

To listen again to the programmes, go to
and follow the links.

Peter Mellor;   Mobile: 07914 045072;   email:

The risk with the Mac OS X 10.4.10 version number

<T Yip <>>
Thu, 28 Jun 2007 12:59:59 -0700 (PDT)

Mac OS X 10.4.10 is the first iterative release of Mac OS X to have 5 digits
in its version string (1, 0, 4, 1, 0). It is also the first iterative
release of Mac OS X to use the ".10" extension. This is causing some
significant issues.

The initial three [sic] digits for "10.4.10" are the same as "10.4.1," an
earlier release of Mac OS X 10.4 (Tiger). Since the
"MAC_OS_X_VERSION_ACTUAL" string (used by Cocoa applications to determine
the current OS version) can carry a maximum of four digits, Mac OS X 10.4.10
and 10.4.1 are both labeled "1041."

This means that some applications recognize Mac OS X 10.4.10's version
string as Mac OS X 10.4.1 and refuse to properly run, erroneously thinking
that the system version is too old. For instance, the application UNO
requires Mac OS X 10.4.4. When running under Mac OS X 10.4.10, it recognizes
the Mac OS X version number as 10.4.1 and refuses to operate.

Essentially, the built-in Cocoa method for forbidding an app to run on too
low a system breaks against Mac OS X 10.4.10.

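The collision is easy to demonstrate.  The sketch below is illustrative
only (the real MAC_OS_X_VERSION_ACTUAL value is computed at build time, and
pack_version is an invented name); it mimics a scheme that packs a version
string into at most four digits:

```python
def pack_version(version: str) -> int:
    """Pack 'major.minor.patch' into at most four digits, mimicking a
    four-digit version constant: concatenate the digits and truncate
    to four."""
    digits = version.replace(".", "")
    return int(digits[:4])

# 10.4.10 and 10.4.1 become indistinguishable once truncated:
assert pack_version("10.4.10") == pack_version("10.4.1") == 1041

# So an app requiring 10.4.4 wrongly judges 10.4.10 "too old":
assert pack_version("10.4.10") < pack_version("10.4.4")  # 1041 < 1044
```
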
We're still searching for a viable method for tricking applications into
thinking that the system version is 10.4.9, which would largely obviate this
problem.

RISKS: This sounds almost like a repeat of the Y2K scenarios, with all its
attendant risks.

The Athens Affair: Greek Cellphone Caper (IEEE Spectrum)

<Roy Stehle <>>
Mon, 02 Jul 2007 14:33:49 -0700

This is an interesting article.  One wonders what might have been gained if
the high-level parties had been trained to understand the insecurity of the
cellular network.  However, there's real life, and people will do what's
convenient.

The Athens Affair
Vassilis Prevelakis and Diomidis Spinellis, IEEE Spectrum, July 2007

A case involving hackers deploying sophisticated eavesdropping technology
within Greece's largest cellphone network provides a rare glimpse into one
of the most elusive of cybercrimes. Major network penetrations of any kind
are exceedingly uncommon. They are hard to pull off and equally hard to
investigate. This one proved to be legendary.

  [See the blogs of Matt Blaze and Steve Bellovin for excellent commentary.]

Lightning bolt blamed for NYC power outage

<"Peter G. Neumann" <>>
Fri, 29 Jun 2007 13:10:15 PDT

On 27 Jun 2007, lightning hit a component of New York City's power
distribution network, resulting in a 49-minute power outage that affected
385,000 people in Manhattan's Upper East Side and the Bronx — all supplied
by two power stations in the southwest Bronx that were knocked out.  The
initial guess is that the system misdiagnosed the power surge resulting from
the lightning strike, and overreacted — protectively shutting down those two
stations.

Following last summer's 9-day outage in Queens (RISKS-24.36), Con Ed has
spent $90 million to upgrade the aging equipment.  [Source: Patrick
McGeehan, *The New York Times*, National Edition, A25, 29 Jun 2007, PGN-ed]

Voltr Risks, Glitch - Fire Alarm - International Space Station

<Robert J Perillo <>>
Thu, 5 Jul 2007 16:22:23 -0700 (PDT)

The so-called "software glitch" that caused the false Fire Alarm to go off
in the Russian portion of the International Space Station (ISS) during the
major computer and solar panel position repairs in early June, was probably
not a glitch but a fail safe programmed response to the power failures being
experienced. (Since no one seems to know the detailed design of the Russian
systems, there is a slight possibility that it was a pre-programmed
response, or caused by, the computers going down, but the timing of the
alarm does not support this.)

We (U.S. Industry) used to program facilities monitoring systems, security,
fire, heating-cooling, like that in the '70s and '80s, that is, the alarm
would go off if power failed, dipped, or was irregular in a section, just to
be on the safe side, i.e. if the fire alarm is not working, turn on the fire
alarm. Now we are more sophisticated, and with battery or backup power on
the main monitoring section, and more sophisticated software to detect a
specific problem, to work around, and fault isolation software, this
procedure is not in place any more.  We have accurate Fire Alarms and an
alarm to say the fire alarm is not functional, and power or voltage
reduction (Voltr) alarms, and have stopped using this indiscriminate
"shotgun" approach of turning on the fire alarm to be on the safe side.

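The old "shotgun" rule and its modern replacement can each be stated in a
couple of lines.  A minimal sketch with invented names, not any real
facility-monitoring system:

```python
def legacy_alarm(power_ok: bool, fire_detected: bool) -> bool:
    """'70s/'80s fail-safe rule: if the fire sensor may not be working
    (power failed, dipped, or is irregular), sound the fire alarm
    anyway, just to be on the safe side."""
    return fire_detected or not power_ok

def modern_alarms(power_ok: bool, fire_detected: bool) -> dict:
    """Modern split: a specific fire alarm, plus a separate power /
    voltage-reduction (Voltr) alarm when the monitoring is suspect."""
    return {"fire": power_ok and fire_detected,
            "voltr": not power_ok}

# A power failure alone triggers the legacy fire alarm (a false alarm),
# but only the Voltr indication under the modern scheme.
assert legacy_alarm(power_ok=False, fire_detected=False) is True
assert modern_alarms(power_ok=False, fire_detected=False) == {"fire": False, "voltr": True}
```
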
In cryptographic devices, because of the problems with "gate arrays" when
voltage is irregular, and the fact that a clear text can never be permitted
to go out on the cipher text output, if we detect a power loss, voltage dip
or irregularity on the device or components around the device, a Voltr
Crypto-Alarm is issued, and the cipher text output is immediately
disconnected, and not reconnected until the alarm is checked (Cleared). When
things stop working, like data links or radios, everyone suspects the
encryption device and does an alarm check to clear the crypto; sometimes
that works, and sometimes it doesn't, because the problem is not with the
crypto at all.

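The crypto-alarm behaviour described above amounts to a small latch.  A
sketch (an invented class, not any real device's interface):

```python
class CryptoUnit:
    """On any power irregularity, raise a Voltr alarm and cut the
    cipher-text output; stay cut until the alarm is checked and
    cleared, so clear text can never leak onto the line."""

    def __init__(self):
        self.voltr_alarm = False

    def power_event(self, voltage_ok: bool):
        if not voltage_ok:
            self.voltr_alarm = True      # latch: no auto-recovery

    def can_transmit(self) -> bool:
        return not self.voltr_alarm

    def alarm_check_and_clear(self, voltage_ok_now: bool):
        if voltage_ok_now:               # reconnect only after a check
            self.voltr_alarm = False

unit = CryptoUnit()
unit.power_event(voltage_ok=False)
assert not unit.can_transmit()           # output disconnected
unit.power_event(voltage_ok=True)        # power restored...
assert not unit.can_transmit()           # ...but the alarm latches
unit.alarm_check_and_clear(voltage_ok_now=True)
assert unit.can_transmit()
```
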
Built-in-test (BIT), fault detection, calibration, remote maintenance,
and fault isolation are no longer the stepchild of embedded-systems
software, left to the intern.  In the U.S., this is mature, complicated,
specialized software written by experts.

As for the use of '70s and '80s technology in the Russian portion of the
ISS: while this might be a good thing for mechanical systems, it is
worrisome in terms of computers and software.

Robert J. Perillo, Principal Software Engineer

Wikipedia, It's Time to Grow Up! The Benoit Murder/Suicide Case

<Lauren Weinstein <>>
Fri, 29 Jun 2007 08:56:46 -0700 (PDT)

June 29, 2007

Greetings.  After causing law enforcement and the news media to spin their
wheels uselessly, a Wikipedia user has confessed to planting a rumor as fact
on the Wikipedia page for wrestler Chris Benoit, claiming his wife was dead
hours before the bodies of Benoit and his family were found.

The ease with which this was done by a still anonymous party, triggering
investigations and consternation at a time that was already intensely
emotional for everyone involved with the Benoit case, demonstrates once
again a fundamental flaw in Wikipedia's usually anonymous, non-moderated
editing framework for most Wikipedia pages.

The fact that such editing can usually be undone (and redone later for that
matter) doesn't change the fact that Wikipedia can never be an authoritative
source while it is subject to this kind of anonymous abuse — whether by
jokesters out to get their kicks or well-meaning contributors simply
unwilling to check their facts.  Such events can easily turn Wikipedia pages
into rumor and defacement billboards rather than encyclopedia-quality
content. The damage is already done.

If Wikipedia expects to really be taken seriously in the long run, it needs
to rethink its standards for item creation, modification, and attributions.

Wikipedia, it's time to grow up.

Wikipedia and Responsibility

<Lauren Weinstein <>>
Sat, 30 Jun 2007 12:13:02 -0700

                      Wikipedia and Responsibility

Greetings.  In the wake of my recent posting regarding Wikipedia and the
Benoit murder/suicide case ( ),
I've received a number of responses that boil down to: "Why are you blaming
Wikipedia for anything relating to this situation?  Wikipedia isn't supposed
to be authoritative."

I definitely agree that in a perfect world everyone would understand that
Wikipedia is not authoritative — and cannot be under its current structure.

But in the real world, Google searches on a vast array of topics will return
Wikipedia articles as the top or near top results (and/or in other
contexts), and a vast number of sites use Wikipedia entries as convenient
explanatory text or links — despite most Wikipedia entries' lack of
attribution, lack of documented fact checking, and being subject to mutation
and alteration at any time.  But Wikipedia entries are free, they're easy to
link to, and hell, if any particular Wikipedia page is wrong at any
particular moment, people can always say "it's not my problem."

Unfortunately, it is not necessarily obvious to many Web users following
such links — or reading related excerpted texts — that Wikipedia articles
"aren't supposed to be authoritative."  Many people who find their way to
Wikipedia items or texts don't know what Wikipedia really is about, and many
persons understandably assume it's like any other "real" encyclopedia (that
is, authors attributed somewhere, facts get a modicum of checking at least
most of the time, entries aren't subject to random editing on a whim, etc.)

The Wikipedia folks created the system under which they operate.  They need
to take some responsibility when that structure causes damage.  This isn't
the first example of Wikipedia abuse screwing around with people's lives.

I am frankly very tired of hearing some people use the Internet as an excuse
for anonymous attacks and abuses, with, it seems, relatively few persons
having enough guts to take responsibility for the impacts that then result.

We want to let people post anonymously, at least the pseudo-anonymity
(subject to tracing in many cases) offered by the Internet?  Fine.
Anonymous speech definitely has its role.  But the buck has to stop
somewhere, and these systems should not be an excuse for a hit and run.

In most such cases a significant amount of the responsibility when damage
occurs must rest on the publisher of the unattributed information, if they
have voluntarily chosen to operate in that manner.  I'm not talking about
common carriers and ISPs.  I'm referring to sites that set themselves up in
a way that serves to isolate posters/editors of material in public forums
from attribution.

Again, if you want to operate this way, that's a perfectly valid choice.
But realize that you're transferring part of the responsibility onto
yourself.  I do not believe that as a society we can accept the premise that
anonymous systems erase all aspects of responsibility from all involved.

In the current Benoit situation, I likely wouldn't throw the book at that
hoax poster.  It's easy to be suckered in by the "devil-may-care" attitude
that Wikipedia tends to foster.  The hoaxer didn't realize that, in this
case, they were falling into a serious and painful trap.

Lauren Weinstein  +1 (818) 225-2800
Lauren's Blog: or

Re: Transport system complexity presents insurmountable risk?

< (Mark Brader)>
Tue, 26 Jun 2007 12:40:54 -0400 (EDT)
         (Martin, RISKS-24.71)

In contrast to Sydney with its ticketing system that tries to do everything
and fails, we have this story from England about ticketing machines that try
NOT to do everything, and succeed... in cheating the passengers.

Three key paragraphs:

  The companies have chosen secretly not to programme their ticket machines
  to sell the GroupSave fare, which is meant to be available to any group of
  three or four people traveling after the morning peak.  Under GroupSave,
  when two adults buy tickets another two can travel free.  Staff at ticket
  offices are obliged to sell the cheapest fare, including GroupSave, even
  if passengers do not specifically request it.  But the law does not extend
  to machines.

  Passengers travelling alone are also unable to obtain the cheapest fares
  from machines for some morning trains on which those fares are valid.  The
  fares can be obtained only from ticket offices.  Only 44 of SWT's 177
  stations have offices open for at least 12 hours a day.  Another 105 have
  part-time ticket offices and 28 have no offices.

  After being presented with the evidence gathered by The Times, SWT said
  that it would consider reprogramming its machines to offer the GroupSave
  discount.  A spokeswoman added: "We are looking at adding more options,
  but then we get advised that the machines are really complicated and
  people can't use them."

(At this point something might be said about overly complex fare structures,
but note that according to the description in this article, the group fare
is not an "option" in any case.)

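The obligation described above — always quote the cheapest applicable fare —
is trivial to program, which is what makes the machines' omission notable.
A sketch with made-up prices and a simplified eligibility rule:

```python
def cheapest_fare(group_size: int, off_peak: bool, single_price: float) -> float:
    """Cheapest total fare for a group, as ticket-office staff are
    obliged to compute.  GroupSave (simplified): a group of three or
    four travelling after the morning peak pays for two adults and
    the rest travel free.  Prices are illustrative only."""
    options = [group_size * single_price]        # everyone pays full fare
    if off_peak and 3 <= group_size <= 4:
        options.append(2 * single_price)         # GroupSave: two pay
    return min(options)

# Four people off-peak: a machine charges four fares, the office two.
assert cheapest_fare(4, off_peak=True, single_price=10.0) == 20.0
assert cheapest_fare(4, off_peak=False, single_price=10.0) == 40.0
```
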
Mark Brader, Toronto,

Re: Gripen: Risks of safety measures in military jet aircraft

<Matt Jaffe <>>
Sun, 01 Jul 2007 20:12:00 -0700

In RISKS-24.71 Paul E. Black quotes "maddogone" as saying,

  "The tests show it was the G-suit which activated the ejection. ... when
  it filled with air it pressed against the release handle"

I was unable to find the original source of the maddogone quote (perhaps
Mr. Black can provide a reference) but I am doubtful of the explanation in
the maddogone quote.  I am unfamiliar with the Gripen but back in my day,
more decades ago than I care to think about, US ejection seats were
activated by handles of one sort or another, and none of the handles in the
aircraft I am familiar with could be activated by simple pressure (of an
inflating G-suit).  I could be wrong (it's rare, but it's been known to
happen ;-) but I doubt that the Gripen ejection system would have been
designed with that obvious a hazard, given that ejection seat technology has
been fairly mature for quite some time now.  Be nice to know more,
particularly if I am correct and the ejection was not caused by a simple
mechanical stupidity but by a more complex systems problem which we, the
readers of this forum, would want to know more about.  So, as noted, perhaps
Mr. Black can provide the source of the maddogone quote or other pointer to
further information.  (Or just tell me that I'm wrong and the Gripen
ejection system *can* be activated by simple pressure, in which case shame
on Gripen — or, more specifically, their ejection seat manufacturer).

Re: Gripen: Risks of safety measures in military jet aircraft

<Peter Mellor <>>
Mon, 9 Jul 2007 18:40:14 EDT

The item by Tony Lima <> in RISKS-24.70 was
interesting.  The following is an excerpt from my paper "CAD: Computer-Aided
Disaster", High Integrity Systems Journal, Vol. 1, No. 2, 1994, pp 101-156.
(It was based on press reports at the time.  Statements in double quotes
below are from people who were quoted in the press articles.  I have omitted
the references, but will send the whole paper to anyone who wants it.)

  The SAAB JAS 39 Gripen is one of the new generation of aerodynamically
  unstable fighters. It has no ailerons on the main wings, but uses a pair
  of smaller wings mounted forward to control its attitude. The FCS actively
  controls these and other surfaces to maintain stability. The FCS employs
  three digital computers, presumably in some fault-tolerant architecture.
  (Precisely what this architecture is, is not clear from the reports.)

  "It has to respond to signals within 200 milliseconds in order to maintain
  stability.  If the digital system is disconnected, an analogue backup
  system ensures that the plane flies level but it is not then possible to
  manoeuvre.  Since the centre of gravity lies behind the centre of lift,
  there is a tendency to lift the nose when control is lost."

  On 2nd February 1989, the first prototype was coming in to land after its
  sixth test flight.  On its previous five flights it had shown a tendency
  to lateral instability.  This time, it showed longitudinal instability,
  pitching down, then sharply up, then down again to the extent that the
  pilot could not recover control.  The aircraft hit the runway, shearing
  off the left main gear, bounced, skidded off the runway, turned through
  180 degrees, struck the ground with its right wingtip, flipped over, and
  came to rest on its back.  Amazingly, the test pilot, Lars Radestrom,
  walked away from the wreck.

  The investigating committee concluded that the crash was due to a software
  fault.  The chairman, Olaf Forsberg, stated:

  "The accident was caused by the aircraft experiencing increasing pitch
  oscillations (divergent dynamic instability) in the final stage of
  landing, the oscillations becoming uncontrollable.  This was because
  movement of the stick in the pitch axis exceeded the values predicted when
  designing the flight control system, whereby the stability margins were
  exceeded at the critical frequency."

  Note that the software fault in question seems to be a requirements fault,
  since a separate investigation by the JAS consortium concluded:

  "The control laws implemented in the flight control system's computer had
  deficiencies with respect to pitch axis at low speed.  In this case, the
  pilot's control commands were subjected to such a delay that he was out of
  phase with the aircraft's motion. ... JAS is now introducing the necessary
  modifications to the control laws."

  Just how effective these changes were is shown by the events of 8th August
  1993, when the second production aircraft (out of 140 ordered by the
  Swedish Air Force) was engaged in a display flight over the Water Festival
  in Stockholm.  Entering a turn, the pilot found that "... the computer
  overcompensated by roughly 10 degrees.  When I then straightened out the
  aircraft, I got an undemanded pitch oscillation and, when I tried to
  compensate for that one, the aircraft kind of sat down and became
  impossible to control."  He added that it felt "... like being on top of a
  slippery sphere" or "... like butter on a hot potato".

  At the point when the aircraft reached a nose-up angle of around 70
  degrees to the horizontal, the pilot decided to get out and walk.  He
  ejected safely, and the aircraft then leveled off and flew on for a while
  before crashing on an island.  The only casualties were one tree, three
  spectators who suffered minor burns, and one who sprained an ankle running.

  The Crash Investigative Commission stated in their preliminary report:
  "The JAS crash was caused by the control system's high amplification of
  joystick deflections in combination with the pilot's large and rapid
  joystick movements.  This caused margins of stability to be exceeded."
  They also concluded that the aircraft had no technical faults at the time
  of the accident.  The engine continued to function normally until the
  plane hit the ground.  From loss of control until ejection took 6.2 seconds.

  The cause of the crash was therefore "partly the pilot and partly the
  control computer" (i.e., the FCS).  The phenomenon is referred to as
  "Pilot Induced Oscillation" (PIO).  The cycle time of the Gripen FCS is
  200 milliseconds, similar to the human sensory/motor system reaction time.
  According to the report: "The designers knew that during certain
  circumstances the aircraft could be forced into an unstable state due to
  the steering-gear actions of the pilot, which would cause the aircraft to
  leave its envelope.  However, they had estimated the risk of this
  happening to be very low.  As it turned out, after some 7-8G manoeuvre,
  when the pilot tried to re-stabilize the plane, his actions happened to
  coincide with that of the computer and so the aircraft over-staggered."

  There are several interesting points about these crashes.  In both cases,
  it appears that the FCS was doing what was specified but not what was
  required.  This is often the case with such accidents.  Also the tendency
  is to blame (at least partly) the pilot, although a better design of human
  interface would seem to be necessary.  The investigators concluded that
  "When [measures have been taken to prevent any future similar occurrence],
  the Commission expect there to be no reason for continued grounding of the
  JAS 39 Gripen."  (Following a $3.2 billion development investment, this
  should come as no surprise!)

  On a final note, the pilot in the 1993 Stockholm crash was ...  Lars
  Radestrom!  (His personal private comments on the subject of active FBW
  would make interesting reading!)

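The PIO mechanism described above is easy to demonstrate numerically.  The
toy model below is not the Gripen's control laws, merely an illustration of
delay-induced instability: a corrective input proportional to pitch error is
applied one control cycle late, and a gain that converges with no delay
diverges with a single cycle of lag.

```python
def simulate(gain, delay, steps=60):
    """Toy pitch-error model: each step, the pilot/FCS subtracts
    gain * (the error as measured `delay` steps ago)."""
    x = [1.0] * (delay + 1)          # initial disturbance
    for _ in range(steps):
        stale = x[-1 - delay]        # correction based on a stale reading
        x.append(x[-1] - gain * stale)
    return x

no_delay = simulate(gain=1.2, delay=0)
one_cycle = simulate(gain=1.2, delay=1)

# Without delay the disturbance dies out; with one cycle of lag the
# same gain drives a growing oscillation (divergent instability).
assert max(abs(v) for v in no_delay[-10:]) < 1e-6
assert max(abs(v) for v in one_cycle[-10:]) > 10
```
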
Peter Mellor;   Mobile: 07914 045072;   email:
Telephone and Fax: +44 (0)20 8459 7669

N-version programming — the errors are in ourselves

<Fred Cohen <>>
Tue, 26 Jun 2007 10:35:21 -0700

The problem with N-version programming for redundancy is not that the idea
is flawed — but rather that those implementing it are not sufficiently well
educated in the subject matter to do the job right.  There is a related
sub-problem — our educational system produces programmers that are too
uniform — something like the problem in a lack of diversity in our
programming languages and hardware and operating systems.

In most of the examples cited to show that N-version programming fails, the
programs are all written in the same language by people with similar
expertise and background using the same development platforms and operating
systems. The common errors they make are not examined as part of quality
control, and it is foolish to act as if N-version programming will eliminate
common-mode failures that can be readily detected by automated tools — such
as failure to check bounds and off-by-one errors in array references.

The assumption of independence is indeed one that is commonly violated --
not by the programmers as much as by those who decide to use N-version
programming but only go half-way. It is not "appealing" in the sense that it
is expensive — more than N-times as expensive — to write an N-version
program as a single-version one. It is only worth it in cases where the risks
justify the costs.

As an example, try writing the 5-version program using the following
combinations of platform and team:

* Lisp on a LispMachine — programmers from a trusted systems
  development group
* Shell script on an AT&T Unix box (3B2 or so) — political science
  students from a Chinese university
* C on a 68000 microprocessor — no OS — doctors from an Indian medical
  school
* Java on OS-X (68K processor) — electrical engineers from the power
  industry
* Pascal on a Windows Intel box — historians from a museum in Cairo

Put each through formal code review and proof processes to generate
mathematical demonstrations that each is correct in the relevant senses from
the specification — which of course has to be developed redundantly as well.

You will probably tell me that this is ludicrous, and I will probably agree
-- that if you really wanted a trusted program and the fate of the World
depended on it, you would want more versions with even more diversity. But
that's exactly the point of N-version programming. The more assurance you
want, the more diversity you need.

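Whatever the diversity of the versions, the harness around them is simple:
run all N on the same inputs and vote, treating lack of a majority as a
detected failure.  A minimal sketch (the "versions" here are stand-ins, of
course):

```python
from collections import Counter

def n_version_vote(versions, inputs):
    """Run every independently developed version and return the
    majority answer; anything short of a majority is itself a
    detected failure."""
    results = [v(*inputs) for v in versions]
    answer, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority among results: %r" % results)
    return answer

# Three stand-in 'versions' of absolute value, one of them buggy:
v1 = lambda x: abs(x)
v2 = lambda x: x if x >= 0 else -x
v3 = lambda x: x                     # wrong for x < 0

assert n_version_vote([v1, v2, v3], (-5,)) == 5   # buggy version outvoted
```
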
Fred Cohen & Associates  572 Leona Drive    Livermore, CA 94550

Secure Programming with Static Analysis

<Brian Chess <>>
Wed, 04 Jul 2007 20:30:14 -0700

Jacob West and I are proud to announce that our book, Secure Programming
with Static Analysis, is now available.

The book covers a lot of ground.
* It explains why static source code analysis is a critical part of a secure
  development process.
* It shows how static analysis tools work, what makes one tool better than
  another, and how to integrate static analysis into the SDLC.
* It details a tremendous number of vulnerability categories, using
  real-world examples from programs such as Sendmail, Tomcat, Adobe Acrobat,
  Mac OSX, and dozens of others.

  [This is an extremely useful and timely book.  PGN]
