The RISKS Digest
Volume 23 Issue 23

Tuesday, 2nd March 2004

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

July 2002 air collision revisited
Paul Cox
FBI employee snoops through confidential police databases
Declan McCullagh
Data Protection and an increasingly paranoid world
Matthew Byng-Maddick
When entries aren't screened
Gillian M Brent
Re: Malicious IT design in support of the cold war
Henry Baker
Diomidis Spinellis
MS self-inflicted DDoS
Doug Sojourner
Re: MS Java Virtual Machine issue
Jonathan de Boyne Pollard
Re: SPF and its critics
Greg Bacon
SPF is harmful. Adopt it.
Jonathan de Boyne Pollard
Info on RISKS (comp.risks)

July 2002 air collision revisited

<"Paul Cox" <pcox@eskimo.com>>
Sun, 29 Feb 2004 02:23:39 -0800

http://www.avweb.com/eletter/archives/avflash/203-full.html#186793

An air-traffic controller was stabbed to death at his home near Zurich.  The
controller was working the sector in July 2002 when two airplanes had a
mid-air collision over Lake Constance, a crash that killed 71 people on the
two planes (one was a cargo carrier, which limited the death toll).

This submission isn't about the RISKS of being an air-traffic controller,
but rather serves to remind those of us in the business of the RISKS we have
not yet rooted out of the ATC system.

The investigation into the crash has not yet been finalized, but
investigators have released some interesting points.

1) Only one controller was on duty at the time.  While he might not have
been busy and could handle the traffic, the fact is that two sets of eyes
are generally accepted as being better than one.

2) A collision-alert system in the control center was down for maintenance.

3) A phone warning from the previous controllers in Germany was not received
by the Swiss controller.

4) The biggest problem was that the Russian pilots followed the instructions
of the controllers (to descend) instead of the on-board "TCAS" collision
avoidance system (which said to climb); the pilots in the DHL aircraft (the
other plane) were following the TCAS system and were also descending.

To be frank, having a single controller staffing the position is a RISK,
but one that is taken quite frequently in ATC.  Usually it is not a problem,
but there is significant emphasis right now in the US ATC system to reduce
staffing and "do more with less" in terms of numbers of controllers.

Nearly all aviation crashes these days are a result of a "chain" of events,
and frequently changing a single factor would quite possibly break the
chain.

In this case, we can see a few things that need changing.  Europe's ATC
system is a mess compared to the neat, orderly system in the United States,
because in Europe each nation has its own ATC system.

This is inefficient, to say the least, because the countries are so
(relatively) small and airspace sector (and control center) design has to be
built around the boundaries of the particular nations, rather than around
the airports and natural flows of the aircraft.

As we have seen, it's not only inefficient, but also possibly unsafe.  The
good news is that Europe has been working on modernizing the ATC system
design and improving it, allowing ATC to be a truly European system instead
of a hodgepodge of systems (French, German, Swiss, Spanish, Belgian, Dutch,
etc).

The conflict alert system being down for maintenance is a classic risk --
nearly all ATC providers do not have completely functional redundant systems
for backup while the primary one is off-line.  In the USA, the primary
system (NAS-HOST) has a conflict alert function; the backup system (DARC)
does not.  The main factor driving this decision is cost; it would be much
more expensive for ATC providers to have two full-capability systems.

While NAS-HOST functions are down for maintenance, controllers do not have
the computerized conflict alert.  In yet another classic good/bad kind of
tradeoff, the maintenance is usually performed on the "mid" shift (graveyard
overnight shift).  This is good because air traffic is much lower; it's bad
because the controllers are probably less alert and able to adapt due to
plain old circadian rhythms.  (Controllers' schedules are horrible for this,
but that's another RISKS post.)

Finally, we turn our attention to the TCAS.  Potential problems with the
TCAS system have been in RISKS before, and this exact scenario was also
predicted by many in the ATC and aviation industry.

The specific scenario that was predicted, and which apparently happened in
the Switzerland crash, was this: Two aircraft are in conflict.  Their TCAS
systems activate and prescribe a course of action for each aircraft which,
if followed, should separate them.

(It should be noted that each plane has a TCAS system on board, and while
they are capable of operating independently they typically operate in
conjunction, so the resolution suggested works well for both aircraft.)

At the same time, the controller sees the situation developing and issues
instructions to one or both aircraft; these instructions might be what the
TCAS is saying to do, or might not.

The pilots on the aircraft then have to figure out what to do.

In the worst-case scenario, the TCAS tells one plane to climb and another to
descend; ATC tells the first plane to descend and the pilots follow THAT
instruction, which only makes matters worse as both planes are now trying to
descend below the other.  Planes smack, everyone in the air dies.

The problem is that the controller, whose job is to sit and issue these
sorts of instructions at all times, has NO WAY of knowing that the TCAS unit
is suggesting a course of action, and can therefore be giving a perfectly
good instruction to one aircraft while the TCAS unit is giving a horrible
instruction to the other aircraft.

(Or vice-versa; the point isn't that the commands themselves are good/bad,
but that TCAS and ATC are trying to resolve the situation with conflicting
instructions that only make things worse.)
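
To make the coordination point concrete, here is a toy sketch in Python.
It is emphatically NOT real TCAS logic; the advisories, the aircraft labels,
and the controller's instruction are illustrative assumptions.  It models
only the point above: complementary advisories separate the aircraft, while
one crew following ATC instead collapses both onto the same manoeuvre.

  # Toy model of coordinated TCAS resolution advisories (RAs) -- NOT
  # real TCAS logic.  The two units coordinate so that the advisories
  # are complementary: one aircraft climbs, the other descends.
  def coordinated_ras():
      return {"A": "CLIMB", "B": "DESCEND"}

  def outcome(manoeuvre_a, manoeuvre_b):
      return "separated" if manoeuvre_a != manoeuvre_b else "converging"

  ras = coordinated_ras()
  atc_to_a = "DESCEND"  # controller, unaware of the RA, resolves it his way

  # Both crews follow TCAS: complementary manoeuvres, conflict resolved.
  print(outcome(ras["A"], ras["B"]))   # -> separated

  # Crew A follows ATC, crew B follows its RA: both aircraft descend.
  print(outcome(atc_to_a, ras["B"]))   # -> converging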

Now, in theory this problem has been "solved".  Controllers are told and
pilots are trained that when a TCAS conflict resolution happens, the pilots
are to follow the TCAS instructions -- even if they are in conflict with
instructions from ATC.

Unfortunately, the possibility for confusion still exists.  In this case, it
was probably exacerbated by the fact that the Russian plane's pilots came
out of an aviation system which heavily emphasized ground control and
discouraged pilots making "their own call" in the air; the Russian pilots
were (apparently) predisposed to follow the ATC instructions instead of the
TCAS instructions.

Perhaps their training was not sufficient; perhaps the TCAS unit wasn't
operating properly.  However, the biggest problem is that different people
will sometimes make different choices no matter how much training you give
them; one aircrew follows the TCAS, one follows ATC.

In any case, this crash perfectly illustrated the TCAS RISK which has
existed for over a decade, and which is still not solved in any manner --
namely, that ATC has no way of knowing if/when TCAS resolutions are being
broadcast in the cockpits unless the pilots inform ATC, and usually in this
last-minute situation there isn't time to do so.

What I found interesting in skimming back through the RISKS archives is that
this RISK, which has been borne out, was tossed back and forth like a
political football -- and decidedly non-scientific means were used to dismiss
the RISK.

The US ATC union, NATCA, pointed out repeatedly that this RISK existed, but
its warnings were frequently dismissed as merely hyping the issue because
"they don't like giving up control" to the pilots in the cockpit.

*** From RISKS-13.78, by Nancy Leveson, September 1992:

  "In the report (which was surprisingly good) they also mentioned that the
  controllers hate TCAS because they lose control and that the pilots love
  it because they gain control."

  "If you read about TCAS, you need to be aware that it is in the midst of a
  big political struggle.  The pilots love it (there was a representative
  from ALPA on the CNN report).  The controllers hate it....So watch who is
  speaking when you hear about TCAS and its problems or advantages."

  "In case there is anyone who doesn't know, my Ph.D. students (Mats
  Heimdahl, Holly Hildreth, Jon Reese, Ruben Ortega, and Clark Turner) and I
  are working on a formal system requirements specification of TCAS II.
  This will serve as the official FAA specification of TCAS and also as a
  testbed application for their dissertations on safety analysis and risk
  assessment."

The irony here is profound.  The controllers' union was raising the issue
and providing data, anecdotal evidence, and proposing scenarios which were
both entirely possible and possibly occurring; the scientist dismissed those
concerns by labeling them as politically motivated... but failed to give any
evidence of where the union had gone wrong.

In March 1993, RISKS-14.44, John Dill, an ATC in Cleveland, Ohio, laid out
nearly exactly the same scenario that I did above -- ATC giving one set of
instructions, TCAS giving another.

In RISKS-15.51 and 15.66, an incident in Portland, Oregon, again illustrated
the issue with TCAS.

RISKS-16.25 had a submission that was highly critical of an NBC "Dateline"
report on TCAS.  Andres Zellweger claimed "The fact is that TCAS
substantially reduces the risk of midair collisions."  The report had
presented the controllers' view fairly positively, but many in the aviation
world made claims like Mr Zellweger's.

However, this raises the simple question -- how many midair collisions were
occurring PRIOR to TCAS's implementation?  Real answer: nearly zero.  How
many happened AFTER the implementation?  Still nearly zero.  This is not
evidence of improvement, of course.

In RISKS-17.18, 18.83, 19.55, and 20.11, we have more reports of incidents
that mirror the warnings of controllers.

In RISKS-20.12, we had a report of a "good" TCAS move and a "bad" TCAS move,
and the submitter, David Wittenberg, asked the $64,000 question at the end
of his submission: "So the controllers got one pair of planes into a bad
situation, and TCAS bailed them out; and TCAS got another set of planes into
a bad situation and the controllers bailed them out.  How are the pilots to
know which one to trust when they must make decisions quickly?"

RISKS-20.60 and 20.62 had more reports of TCAS-induced mishaps and
near-collisions.

In RISKS-10.16, dated 18 July 1990, I found this question from Henry
Spencer: "This brings to mind an interesting thought: who gets the blame if
(when) a TCAS warning *causes* a collision, through either electronic or
human confusion?"

Well, unfortunately for the controller in Switzerland, it appears that at
least one angry family member held ATC responsible for a RISK that
controllers had been pointing out for years.

The (ongoing) history of the TCAS system is nearly a perfect example of how
implementing technology can be good, bad, or somewhere in the middle.  It'll
be a great subject for dissertations in the future, I believe.

Aviation is still not perfected.  We might like to think that it is, but
it's not; errors still do occur.

What this whole story points out, to me, is that we must continue to ensure
that the humans are given enough training and support to be in a position to
succeed; that the automation we implement must be shown, through hard
evidence and not mere assertion, to IMPROVE safety; and that we
should not dismiss the naysayers simply because we think they have an axe to
grind.

Sometimes there really are people out to get the paranoid guy.

Paul Cox, ZSE


FBI employee snoops through confidential police databases

<Declan McCullagh <declan@well.com>>
Thu, 26 Feb 2004 22:54:28 -0600

Assistant Attorney General Christopher A. Wray of the Department of Justice
Criminal Division and U.S. Attorney Roscoe C. Howard of the District of
Columbia announced that Narissa Smalls, a legal technician in FBI
Headquarters, was sentenced to 12 months in prison on charges stemming from
her unlawful access to the FBI's Automated Case Support (ACS) computer
system.  [Source: FBI legal technician pleads guilty to unlawfully accessing
the FBI's computer system, WWW.USDOJ.GOV, 26 Feb 2004; PGN-ed from Declan
McCullagh's Politech mailing list, archived at http://www.politechbot.com/]


Data Protection and an increasingly paranoid world

<Matthew Byng-Maddick <mbm@colondot.net>>
Fri, 27 Feb 2004 11:52:45 +0000

As most here will know, the UK has had Data Protection legislation for some
time, requiring companies to register their data processes where personally
identifiable data is involved.  *The Metro*, London's free newspaper (seen
all over the Underground) printed by the right-wing *Daily Mail* group, has
printed a lot recently about the Soham murders and Data Protection.  The
essence is that several police records on this convicted murderer were
binned, records that might have helped police find him earlier.

Of course, they don't say what has been mentioned here and in other places,
that the school where he was caretaker was not the one where the murdered
children were, and that even if he'd been a plumber or a builder, he'd
probably still have been in a position to murder the two girls. The Data
Protection Act is being blamed for the fact that people with no understanding
of the Act's requirements binned the records.

So, here we have the newspaper hinting that it thinks the Data Protection
Act is rubbish and should be scrapped.

The contrast came to me this morning; the London Underground Oystercards
have been mentioned here previously.  I have one of these devices.
Apparently, I shouldn't be putting it in my back jeans pocket and sitting on
it, because it has now started to work <50% of the time.  According to the
man behind the counter at the ticket office at 11:45 last night, it is not
the "massive design flaw" that I seem to think it is that such a system does
not cope with moderately rough handling; and paper (well, printed) tickets
are "obsolete", being, as they are, based on an old technology.  (So where
have we heard that one before, then?)

So, having asked for a replacement card, I was instead handed a form to fill
in (London Underground adore bureaucracy), and told to bring this form back
with a proof of address/identity. Now, when I got the oystercard, I had to
fill in a form with address details on it (no proof needed). The number
printed on the Oystercard is enough of a key to be able to get all of the
information that they request on the form (including name, address,
photocard number, what tickets are loaded), and I pointed out that it seemed
crazy to be getting this information again. Typically, for London
Underground, the staff member looked at me as if I was mad. As well as this,
the form asked for my signature on the following text:

(TfL - Transport for London, government body overseeing all transport issues
       within London,
 LUL - London Underground Limited, the private company that owns and
       operates the Underground system)

| I declare that the information I have provided on this form is true to
| the best of my knowledge. I consent to TfL and/or LUL checking the
| information I have given. I accept that if the information I have given
| is inaccurate one or more of the following may happen: my use of the
| replacement card may be stopped; I will be liable for any balance due to
| TfL/LUL (and I will pay it); legal action may be taken against me.

This is just one example of where the Data Protection Act is not effective
enough, not that it would help here, either.

RISKS:
  + Spurious sensationalistic reporting dulling the senses of those who are
    most in need of the awareness
  + Private company, in virtual monopoly position, imposing ever more
    onerous data requirements on its customers
  + People being required to sign unpleasant and open-ended liability in
    order to be able to buy season tickets to use a busy city's transport
    system
  + People taking it as read that they have no right to privacy any longer,
    in this world of terrorism
  + And of course, the obvious and standard: replacing "obsolete"
    technologies that work well with technologies that haven't been properly
    tested for real world conditions (though he claimed that they had to
    reprint paper tickets reasonably often, I never had (user-fixable)
    problems where they failed to go through the machine)
  + And, lastly, quite a good RISK of the Oyster itself: having an
    expensive ticket which stores its data only in electronic form, with no
    (tied) user/staff-verifiable print of what the data should contain
    (i.e., if the readers don't work, they have to take your word for it)

Matthew Byng-Maddick <mbm@colondot.net> http://colondot.net/


When entries aren't screened

<Gillian M Brent <Gillian.M.Brent@optus.com.au>>
Sun, 29 Feb 2004 02:58:08 +1100

Apparently News 14 (cable channel 14) in Raleigh, NC, has a Web interface
for people to submit school and business closing announcements during bad
weather.  All well and good, except that it seems no one was checking the
entries.

A number of the more dubious entries are included on the discussion board at
http://mb3.theinsiders.com/fcarolinacanesfrm1.showMessageRange?topicID=3254.
topic&start=1&stop=20 (complete with screenshots), but I have to concur with
Mr Steele, whose LiveJournal included the screenshot for "Tutone Inc, Closed
Thursday and Friday, Call Jenny at 867-5309".
http://www.livejournal.com/users/mdsteele47/28884.html

The risk: leave a public-broadcast message service unmoderated and it won't
be long before the humorous, the annoying, and the simply obscene get
through.


Re: Malicious IT design in support of the cold war (Garst, R 23 21)

<Henry Baker <hbaker1@pipeline.com>>
Thu, 26 Feb 2004 19:52:11 -0800

It seems to me that Microsoft "Windows" is payback for these CIA misdeeds.
It passes the tests, then fails catastrophically...  Or perhaps the CIA
assumed that the USSR would slavishly copy Windows the way they had copied
OS/360, and never imagined that the US DoD would itself have to use Windows.
Maybe the CIA is now being hoist with its own petard??


Re: Malicious IT design in support of the cold war (Garst, R 23 21)

<Diomidis Spinellis <dds@aueb.gr>>
Mon, 01 Mar 2004 23:42:22 +0200

Although the issue of maliciously designed components is interesting, I find
the specific story regarding the malicious IT design of the gas pipeline
control software difficult to believe.  According to the article [1]:

  "When we turned down their overt purchase order, the K.G.B. sent a covert
  agent into a Canadian company to steal the software; tipped off by
  Farewell, we added what geeks call a "Trojan Horse" to the pirated
  product.  [...]  The pipeline software that was to run the pumps, turbines
  and valves was programmed to go haywire," writes Reed, "to reset pump
  speeds and valve settings to produce pressures far beyond those acceptable
  to the pipeline joints and welds. The result was the most monumental
  non-nuclear explosion and fire ever seen from space."

The above excerpts make the pipeline software appear to be an off-the-shelf
application.  However, software of this complexity is not sold
shrink-wrapped on CDs with a glossy manual that a covert KGB agent can
steal.  Even if we are not talking about a bespoke application, the software
would have to be adapted, tuned, and configured by its developers for the
specific environment.  A typical ERP system has to be expensively configured
for the needs of each company by trained specialist consultants; I cannot
imagine the pipeline control software having the "Tools - Options - Joints -
Weld quality" setting implied by the article.

[1] http://seclists.org/lists/isn/2004/Feb/0011.html

Diomidis Spinellis - http://www.dmst.aueb.gr/dds


MS self-inflicted DDoS

<Doug Sojourner <dsojourner@matrixsemi.com>>
Fri, 27 Feb 2004 09:19:03 -0800

The other day my children called me to the computer ... they had a prompt to
download software, and (thanks to careful training) were suspicious.

It turned out to be Microsoft's Critical Patch warning -- they wanted us to
download a patch right away.  Given their concern that most or all attacks
come from black hats reverse-engineering the patches
<http://news.bbc.co.uk/1/hi/technology/3485972.stm>, it is understandable
that they would want us to patch quickly.  However, if you ask every Windows
user on the net to upgrade NOW, you probably haven't got the bandwidth for
the response.  Microsoft didn't.

We ended up patching later.


Re: MS Java Virtual Machine issue (Reinke, RISKS-23.20)

<Jonathan de Boyne Pollard <J.deBoynePollard@Tesco.NET>>
Sat, 28 Feb 2004 18:22:53 +0000

In RISKS-23.20, Ferdinand John Reinke states that Microsoft will no longer
"be able" to support its own JVM from 2004-10-01 onwards, and, as a result,

	FJR> customers who have the MSJVM installed after this
	FJR> date will be vulnerable to potential attacks that
	FJR> will attempt to exploit this technology.

I have to ask how he thinks that the risks of potential vulnerability are any
different to those in the situation _before_ 2004-10-01.  They aren't, of
course.  The software is not, after all, magically changing somehow on that
date to become more vulnerable.  (Is he aware of some Microsoft-implanted time
bomb in its JVM that he hasn't told us about?)  If it's vulnerable to attacks,
it is vulnerable _now_.  Conversely, if it isn't vulnerable to attacks, it
won't suddenly become vulnerable.  The date on which support stops is
irrelevant to concerns about its vulnerability.

Additionally, he says

	FJR> More alarming, many organizations aren't even aware
	FJR> that they have MSJVM installed.

Again, I have to ask how he thinks that Microsoft's JVM is any different in
this respect from the numerous other bits and bobs that many organisations,
who just take whatever comes on the discs and in the software updates, will
be unaware that they have installed on their machines.  The risk of not
knowing what software one's machine has on it applies to all software, isn't
specific to Microsoft's JVM, and isn't related to whether that software is
supported by its manufacturer.


Re: SPF and its critics (Maziuk, RISKS-23.21)

<Greg Bacon <gbacon@hiwaay.net>>
Fri, 27 Feb 2004 15:43:36 -0600

Dimitri Maziuk was right on when he wrote

> To get back where I started: we know that technical solutions for
> non-technical problems don't work.  We are clearly dealing with
> non-technical problem here.

Spam is an instance of the economic problem known as the tragedy of the
commons.  The fix is ditching the bad framework, but we refuse to learn from
history and instead allow our hubris to goad us into codifying more
expectations for people to ignore.


SPF is harmful. Adopt it. (Re: Kestenbaum, RISKS 23.18)

<Jonathan de Boyne Pollard <J.deBoynePollard@Tesco.NET>>
Sat, 28 Feb 2004 20:57:05 +0000

Lawrence Kestenbaum says, with respect to his mail, that

	LK> the junk has suddenly reached a new level.

My response to that is the same one that I gave to Jon Seymour in
response to his submission to RISKS 22.95 last year:

Welcome to the club.  You're late.

You can find a condensed history of my experience at
<URL:http://homepages.tesco.net./~J.deBoynePollard/deluge-of-microsoft-worms.html>.
People now have more success contacting me via Fidonet than they do
contacting me via Internet, a situation that I find amusingly ironic.

Lawrence Kestenbaum also says

	LK> The critics of SPF suggest that spammers would
	LK> simply find or invent other addresses to use.

But this is missing the point.  The fact that, as with many other
anti-UBM measures for SMTP-based Internet mail, SPF creates yet another
arms race is but one of the many things wrong with it.  There has been
a lot of discussion of the glaring faults of SPF in several discussion
fora (which I encourage RISKS readers to read, by the way).  When the
the turn came for the "qmail" mailing list to have the discussion, I
listed 12 of the problem areas to consider for SPF in
<URL:news://news.gmane.org./402321C2.7825F3FA@Tesco.NET>, but that is
by no means an exhaustive list.
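
For readers who have not followed those discussions, here is a toy sketch
in Python of the mechanism being criticised; the record and the addresses
are illustrative placeholders, and real SPF evaluation is considerably more
involved (it also has "include:", "a", "mx", "ptr", macros, and so on).  A
domain publishes the hosts permitted to send its mail in a DNS TXT record,
and a receiving server compares the connecting IP address against it.  Note
that a perfectly legitimate forwarding host fails the check -- forwarding
breakage is one of the faults on that list.

  # Toy SPF check -- NOT a compliant implementation, just the core idea.
  import ipaddress

  def check_spf(spf_record, sender_ip):
      ip = ipaddress.ip_address(sender_ip)
      for term in spf_record.split()[1:]:        # skip the "v=spf1" tag
          if (term.startswith("ip4:")
                  and ip in ipaddress.ip_network(term[4:], strict=False)):
              return "pass"                      # a listed sending host
          if term == "-all":
              return "fail"                      # everyone else: reject
      return "neutral"

  record = "v=spf1 ip4:192.0.2.0/24 -all"   # published as a DNS TXT record
  print(check_spf(record, "192.0.2.25"))    # pass: designated outbound relay
  print(check_spf(record, "203.0.113.7"))   # fail: e.g. an innocent forwarder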

Lawrence Kestenbaum then says that he doesn't care about the arms
race that SPF initiates (a lamentably short-term and short-sighted
attitude, especially in light of the history and consequences of the
"Bayesian filters" arms race)

	LK> so long as [spammers stop] plastering my personal
	LK> address on hundreds of thousands of fraudulent and
	LK> disreputable spam messages and viruses, [...]

But, of course, SPF won't actually stop that at all.  The tool for
proving that one did or didn't write something -- signed message
bodies -- has long since been invented anyway.

On the gripping hand, perhaps the fact that widespread adoption of
SPF will do serious damage to the SMTP mail architecture is a good
thing.  In the battle against unsolicited bulk mail, we've
concentrated upon the wrong problem time after time, with mechanisms
that address the wrong thing and that don't address the actual
"unsolicited" and "bulk" qualities of undesirable mail.  SMTP has
become less usable, more patchy, and more balkanised with each new
bodge, yet continues to bend and not quite break completely.  Perhaps
the adoption of SPF will turn out to be the straw that finally breaks
the camel's back, and that thus finally forcibly weans us off this
bad habit of addressing the wrong problem.
