The RISKS Digest
Volume 23 Issue 54

Saturday, 25th September 2004

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Stupidsecurity
Richard Forno via Dave Farber
Tests show cell phones don't disrupt navigation systems
NewsScan
Railroad signal failure
Chuck Weinstock
Breach Security, Inc. offers just that
Olin Sibert
E-Vote Fears Soar in Swing States
wired.com via Monty Solomon
Sometimes, new ideas are not good ideas
David Lesher
Internet attacks jump significantly this year
NewsScan
Don't worry about security holes ...
George Michaelson
Re: LA ATC Failure
Paul Cox
Re: 49.7 day "overloaded with data" in Los Angeles
John Dallman
Nose-steered mouse
James Garrison
Java programs at risk from decompilers
Fiachra O'Marcaigh
REVIEW: "Systems Reliability and Failure Prevention", Herbert Hecht
Rob Slade
Info on RISKS (comp.risks)

Stupidsecurity (via Dave Farber's IP)

<Richard Forno <rforno@infowarrior.org>>
Tue, 21 Sep 2004 15:43:47 -0400

Finally, someone is chronicling the stupidity that passes for "stronger
security" post-September 11:
  http://www.stupidsecurity.com

Topics include:

Teacher Arrested After Bookmark Called Concealed Weapon
Big Trouble For Mentioning a Plastic Explosive in the Airport
You Can't Hide In Chicago
Washington Post: Freedom's Light Hidden Under A Security Blanket
Government Asks Court to Keep ID Arguments Secret
Putting a Price Tag on US Visa Stupidity
TSA Cynicism
New York Convention
Barefoot toddlers delay Air NZ flight
Police Delay Departure From Plane to Catch A Dangerous Criminal
Cleveland Air Show In Danger....
It's Fun To See The Power Of Stupidity Turned On Its Sources...
Delete PIN When It Has Become Invalid
Nuclear Power Plants Security Gaps to be Withheld From Public
Let's Intimidate our Innocents — That'll Scare the Terrorists!
High-Tech Wallpaper Keeps Wireless Wardrivers Out
HM department of vague paranoia

Dave's IP archives:
  http://www.interesting-people.org/archives/interesting-people/


Tests show cell phones don't disrupt navigation systems

<"NewsScan" <newsscan@newsscan.com>>
Thu, 23 Sep 2004 08:05:51 -0700

Recent tests by Airbus and American Airlines/Qualcomm indicate that,
contrary to popular lore, cellular signals do not disrupt airplanes'
navigational systems.  Results of both tests were similar for the CDMA and GSM
cellular technologies, but the Federal Aviation Administration and the
Federal Communications Commission say the tests can't officially be
considered in their review of the rules because they were conducted without
government oversight. The agencies say they are moving ahead with their own
tests.  [*Wall Street Journal*, 23 Sep 2004; NewsScan Daily, 23 Sep 2004]
  http://online.wsj.com/article/0,,SB109589672706925579,00.html (sub req'd)


Railroad signal failure

<Chuck Weinstock <weinstock@sei.cmu.edu>>
Fri, 17 Sep 2004 17:17:42 -0400

[Taken from an e-mail circulating on the net. Edited slightly for
clarity. Chuck]

Hello my name is ... I'm an engineer in Sparks, Nevada. On August 28 about
5:25 am, my conductor and I heard bits of conversation on the radio; we were
about milepost 533.  We heard Amtrak tell the dispatcher that they had a
clear signal into a BNSF train and they did not think they were protected
behind them.  The engineer and conductor both said they were going to walk
back there and see if the signal they had passed was still clear.  At that
point we were about 8 minutes from them.

Well, I heard Amtrak tell dispatcher 76 that they were looking at a clear
signal at milepost 524.1.  At that point I was going by milepost 525.5 and had a
clear signal there, so we took it upon ourselves to slow down and take
caution for we were not quite sure what we were going to see.  When we
reached milepost 524.1 the engineer from Amtrak and the conductor were both
standing below the signal, and of course we had a clear signal even though
Amtrak was only 25 cars on the other side of it.

Up until the point of us stopping at the clear signal, no one warned us
about anything.  When I told the dispatcher what we had come across, it
seemed to finally sink in.  That's when everything came to a halt: signal
management came over the radio and told everyone not to move.  The whole
reason the first train was stopped was a broken rail.


Breach Security, Inc. offers just that

<Olin Sibert <u9313@siliconkeep.com>>
Fri, 24 Sep 2004 11:08:55 -0400

A delightfully named company called Breach Security, Inc. (www.breach.com)
has introduced a product called BreachView SSL that "is a unique add-on
module for existing network IDS products that performs SSL traffic
decryption without terminating the SSL session, or affecting non-repudiation
in any way".

Judging from a ZDNet report (blogs.zdnet.com/index.php?p=531), the product
operates by placing a piece of client software into every user's browser
that informs Breach's IDS add-on module of every SSL session key.  This
allows the Breach module to decrypt the SSL session and provide plaintext to
the IDS for analysis.
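
To make the reported architecture concrete, here is a minimal sketch of the
general key-escrow idea: an endpoint encrypts traffic under a session key
and also hands that key to a passive monitor, which can then decrypt
captured ciphertext without being a party to the connection.  This is an
illustration of the concept only (Python, using the pyca/cryptography
package); the names and the simplified AES-GCM framing are hypothetical,
not Breach's actual protocol.

  # Hypothetical sketch of session-key escrow for passive inspection.
  # Requires the pyca/cryptography package (pip install cryptography).
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  # Client side: create a session key and "escrow" it to the monitor.
  session_key = AESGCM.generate_key(bit_length=128)
  escrow_store = {}                          # stand-in for the IDS add-on
  escrow_store["session-42"] = session_key   # the browser gives up its key

  # Client encrypts application data (stand-in for an SSL/TLS record).
  nonce = os.urandom(12)
  ciphertext = AESGCM(session_key).encrypt(nonce, b"GET /account", None)

  # Monitor side: decrypt the captured traffic without being an endpoint.
  key = escrow_store["session-42"]
  plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
  print(plaintext)  # the IDS now sees cleartext -- and so could any other
                    # software able to read the escrow store

The last comment is the crux of the objection below: the escrow store is
exactly as trustworthy as every piece of software that can reach it.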

Aside from being reminiscent of the Clipper Chip, with its Faustian "give us
your keys and we'll let you use encryption, and we promise to use our
knowledge and power wisely" bargain, the RISKS of this technology should be
clear.

Sure, if you can make a browser give up its session keys, you can read its
SSL sessions--but if Breach's software can do that, then what's to stop some
other software from doing the same thing?  I'd also expect browser designers
to consider this sort of thing a violation of sound security design
principles and take steps to prevent it--making it ever more difficult for
Breach's product to work at all.

The flip side is that there IS a genuine problem here: if you rely on
network-based intrusion detection as your network security mechanism, you
can't inspect encrypted traffic to look for intrusions.  However, "solving"
this problem by preventing that traffic from being securely encrypted seems
likely to introduce more fundamental risks.  This approach should be yet
another hint that network-based intrusion detection might not be the ideal
answer for maintaining client system security.

This type of product seems more plausible on the inbound side of a system.
A company's server might well want to give up access to its traffic for
intrusion analysis before processing it, but as a filter for outbound
traffic, it seems rather less desirable.


E-Vote Fears Soar in Swing States (wired.com)

<Monty Solomon <monty@roscom.com>>
Fri, 24 Sep 2004 08:00:56 -0400

Jacob Ogles, E-Vote Fears Soar in Swing States, *WiReD.com*, 23 Sep 2004

Roughly a third of the votes cast in the November presidential election will
be made on controversial paperless electronic voting machines, but as any
political analyst can tell you, the only votes that will matter a great deal
will be cast in a handful of swing states.  And just as the Kerry and Bush
campaigns are spending most of their efforts in those states where neither
holds a heavy margin in the polls, voting advocacy groups concerned with the
integrity of voting technology are devoting their resources toward the
states which matter most.  ...

http://www.wired.com/news/evote/0,2645,65044,00.html


Sometimes, new ideas are not good ideas (Re: RISKS-22.34)

<David Lesher <wb8foz@nrk.com>>
Mon, 20 Sep 2004 21:11:37 -0400 (EDT)

RISKS-22.34 included my submission about Georgetown's adoption of this
software. Looks like it was a mixed blessing after all:
  http://www.law.com/jsp/article.jsp?id=1095207119649

Software Snafus Disrupt Law School Tests
Georgetown Law drops its exam software,
leading some to question the future of testing (excerpt)
Beth Hanson, *Legal Times*, 20 Sep 2004

  After the Student Bar Association adopted a resolution to stop using the
  software and the Georgetown faculty agreed, the school dropped the
  software and asked students to sign an honor pledge during exams. The
  school also added extra proctors to each exam classroom. Now students are
  allowed to take their exams using an ordinary word processor.

[kudos to Sean Donelan for spotting the article...]

Also, there was no provision for Linux and Mac laptop owners.  I knew one in
the LLM program; she was forced to borrow a WinME machine, and yep, it
crashed during the exam.


Internet attacks jump significantly this year

<"NewsScan" <newsscan@newsscan.com>>
Mon, 20 Sep 2004 10:49:40 -0700

The semiannual Internet Security Threat Report, which is based on monitoring
by computer security firm Symantec, indicates that in the first six months
of 2004 there were at least 1,237 newly discovered software vulnerabilities
and almost 5,000 new Windows viruses and worms capable of compromising
computer security. The numbers represent a dramatic increase over the same
period in 2003. Even more troubling was the sharp rise in the number of
"bot," or robot, networks, which comprise a large number of infected PCs
that can then be used to distribute viruses, worms, spyware and spam to
other computers. The survey notes that in the first half of 2004, the number
of monitored botnets rose from fewer than 2,000 to more than 30,000. The
botnets, which range in size from 2,000 to 400,000 "zombie" machines, are
often "rented out" to commercial spammers who use them to distribute junk
e-mail while concealing their identities.  E-commerce was the industry most
frequently targeted for attacks, accounting for 16% of the total, and report
authors note that phishing scams are responsible for pushing up the numbers
in that category. "We're seeing a professional hand in development that was
pretty startling in terms of malicious code," says Alfred Huger, senior
director of engineering for security response at Symantec. The report's
findings mirror those of recent government-supported research.  [*The New
York Times*, 20 Sep 2004; NewsScan Daily, 20 Sep 2004]
  http://www.nytimes.com/2004/09/20/technology/20secure.html


Don't worry about security holes ...

<George Michaelson <ggm@apnic.net>>
Thu, 23 Sep 2004 10:11:40 +1000

A quote from a *WiReD* article on the Diebold hack in VBS:

  "...But speaking generally on the vulnerabilities Harris mentions, Diebold
  spokesman David Bear said by phone that no one would risk manipulating
  votes in an election because it's against the law and carries a heavy
  penalty. ..."

http://www.wired.com/news/evote/0,2645,65031-3,00.html?tw=wn_story_page_next2

I'm glad that's clear. We can all sleep better now knowing our money is safe
in the banks, because nobody will risk stealing it since it's against the
law, etc., etc.

George Michaelson, APNIC, PO Box 2131 Milton, QLD 4064 Australia
+61 7 3858 3150  http://www.apnic.net  ggm@apnic.net


Re: LA ATC Failure (RISKS-23.53)

<Paul Cox <pcox@eskimo.com>>
Thu, 23 Sep 2004 13:03:40 -0700

I'm an air traffic controller in Seattle Center, which is a facility just
like the one in LA that had the system crash.

To do their job, air traffic controllers need one thing above all else:
They need the ability to communicate with the aircraft they're controlling.

We can control planes even without radar, because we can get position
reports from the airplanes and provide safe separation via altitude,
spacing, and so forth.  But without comm, we're completely and utterly
hosed.

(Some of the FAA spokesflacks had the audacity to suggest that the system
was still safe, because the radar system continued working just fine.  Sure,
the controllers could still *see* the airplanes; they just couldn't do
anything about it as they watched them get closer, and closer, and
closer... they'd have had a wonderful view of the targets merging as the
passengers were converted instantly to a thin pink mist had the planes
collided.  But hey, the system was safe.)

The VSCS (Voice Switching Communications System) puts all of our
communications into one place: ground-to-ground calls to other facilities,
calls within our own facility to other controllers, and air-to-ground comm.

It's a purely digital system; all the incoming feeds are converted to bits
and bytes and switched through a series of servers and such until they're
turned back into analog and put into the controller's ear through his
headset.

Of course, this means that power to the system is absolutely critical, and
we've had power failures in the past (see past RISKS for that info).

The VSCS system was designed and built by Harris Corporation, but their
contract ran out some time ago.  The FAA, coming to the end of the contract,
decided to go a much less expensive route: replacing all the servers with
Dell boxes running its own programming.

In theory, there's nothing wrong with this; do the required maintenance, and
there's no problem.  But the system does have the design flaws referred to
in the RISKS articles.

Basically, the system needs to be reset about once a month, or more
specifically, once every 30 days or so.  I heard a rumor that part of the
problem in LA was that they'd done the reset at the beginning of August, but
had put it off for September... and were planning to do it at the end of the
month.

There's a RISK right there; "once a month" probably means "once every 30 or
so days", not "once in a calendar month" which could leave an interval as
long as nearly 60 days in between resets.
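
A quick worked example of that worst case (the dates are hypothetical,
chosen to match the rumored LA schedule):

  # Worst case for reading "once a month" as "once per calendar month":
  # reset at the start of one month, then at the end of the next.
  from datetime import date

  last_reset = date(2004, 8, 1)    # reset at the beginning of August
  next_reset = date(2004, 9, 30)   # planned for the end of September
  interval = (next_reset - last_reset).days

  print(interval)         # 60 days between resets
  print(interval > 49.7)  # True: past the 49.7-day limit discussed in
                          # the next item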

(On a side note, the voice recordings are only kept for the past 15 days,
and it's done by an entirely separate system.  The main reason for the reset
has to do with file and memory buffers overloading.)

Now, there's a backup system for VSCS.  It's called VTABS, and is basically
a reduced-capability server that normally runs the VSCS system on the ATC
simulator that's used to train students.

The VTABS system, with much less server power, cannot run the entire control
room and all of the frequencies that the control center has, so it's a
hassle to go to VTABS.

When the reset on VSCS is done, you have to run on VTABS for a while, which
usually means it's done on graveyard shifts to reduce the impact on live
traffic.  The downside to this is that the VTABS system also doesn't get a
full workout.

So the next RISK pops up: The backup system isn't really fully checked out,
and if/when ATC needs it... it might not work.

Sure enough, that happened.  When VSCS died, LA Center switched to
VTABS... which also didn't work right.  Big trouble, now.

Finally, the FAA (in its infinite wisdom) a while back decided to remove a
last-ditch backup system called EARS.

EARS was basically a hard-wired, all-analog system that only provided the
most crucial thing: air-to-ground communications.

EARS required power to run, but its big advantage over VSCS and VTABS was
that if the power died for, say, 20 seconds, EARS would be working again
the moment power was restored, with no spool-up time.  VSCS takes up to 45
minutes to start completely, and VTABS has a significant startup delay as
well.

Seattle Center (where I work) is the only facility of its type that still
has EARS (our variant is called VEARS).  We have it because a fairly wise
manager asked our technicians to keep the system when it was slated for
removal.  The tech side agreed, and they have kept VEARS going by moving a
little money around in their budget (since the FAA nationally cut VEARS, it
no longer provides the facilities any money to maintain the system).

Fortunately (and perhaps a bit unbelievably) VEARS costs very, very little to
maintain, because it's just a set of switches that sit there unused the huge
majority of the time.  We test them for functionality about once a week.

The LA failure was both ridiculous and scary.  It's ridiculous on several
levels; the fact that the system is designed to shut itself down is silly in
a way, because from the user's perspective the system basically crashes to
protect itself from crashing.

Well, when suddenly you can't talk to the airplanes, you don't much give a
damn whether it's an intentional shutdown or an accidental/buggy shutdown.
Therefore, they might as well remove this intentional design.

It's ridiculous that the technicians weren't doing the reset.  This issue is
NOT NEW, and has been known for some time... and had any of the 10 airplanes
(with 200 passengers each) managed to smack into another plane, you can bet
that the FAA would have been paying the families for a long, long, long
time.

It's ridiculous that the first backup system didn't work right simply
because people were too lazy/unmotivated to test it properly.  VTABS is an
acceptable backup; it's not perfect, but for the money it cost (essentially
nothing for hardware, some reprogramming costs for the servers) it's nearly
ideal.

It's ridiculous that a perfectly good SECOND backup, one that cost even
less, was thrown away by the FAA.  The technology in EARS has been around
since, oh,
about as long as there's been radio; it's tried and true, and it's pathetic
that there's only one facility in the nation (out of 21) that still has
EARS.

And it's scary to think that this could've happened in an even busier
facility than LA: imagine the morning crush of traffic in New York or
Boston or Indy or Cleveland Centers, where there's even more traffic
packed into even less airspace than out west in LA.

The RISKS here are many and silly, because nearly all of them could have
been easily avoided with some diligence and forethought.

RISK 1) programming the system to shut down in order to prevent a shutdown.
If you aren't expecting it, it doesn't matter which kind it is.

RISK 2) being lazy, or not really understanding that "once a month" actually
means "once every 30 days", and failing to ensure that a critical job is
done on time and correctly.

RISK 3) having a backup system that isn't checked to see whether it can
actually do the job.  If you rely upon it, it had better work; when it
doesn't, you're screwed.

RISK 4) throwing out a perfectly good second backup system because you think
it's "old fashioned" and that the primary/secondary system you have now is
so much better.  Hey, the new stuff is all digital, it's gotta be better,
right?

Finally, on a personal note, the manager at Seattle Center who managed to
talk the technical guys into keeping our VEARS system should be considered a
hero and an example for the rest of the FAA.  He's already a hero to me:
he's my father.  :)

Paul Cox, Seattle Center


Re: 49.7 day "overloaded with data" in Los Angeles

<jgd@cix.co.uk (John Dallman)>
Fri, 17 Sep 2004 22:46 +0100 (BST)

In RISKS-23.53, Keith Price quoted the *Los Angeles Times* thus:

> When the system was upgraded about a year ago, the original computers
> were replaced by Dell computers using Microsoft software. Baggett
> said the Microsoft software contained an internal clock designed to
> shut the system down after 49.7 days to prevent it from becoming
> overloaded with data.

I really hope that this quote is a mistake. If it's true, it sounds very
much like the issue in Windows 95 and Windows 98 that causes them to shut
down every 49.7 days, described in Microsoft Knowledge Base article
Q216641. It's caused by integer overflow of a 32-bit count of milliseconds,
and "overflow" is something that could readily get misunderstood as
"overloaded with data".

The idea that anything remotely connected with air traffic control was
running on unattended Windows 9x causes me goosepimples of fear.  Especially
if it's Windows 9x without the long-available patches that fix the 49.7 day
problem.

There are lesser 49.7 day issues in Windows NT 4.0 and Windows 2000 due to
32-bit counts of milliseconds wrapping around, but the Windows 9x one is the
one where you have to reboot. The quote that Ben Moore had from the same
paper:

> In the meantime, they required manual resetting of the communications
> system — a process they described as similar to rebooting a personal
> computer.

makes me fear that the process is very similar indeed...

John Dallman, jgd@cix.co.uk, HTML mail is treated as probable spam.


Nose-steered mouse

<James Garrison <jhg@athensgroup.com>>
Fri, 17 Sep 2004 14:52:01 -0500

*New Scientist* (16 Sep 2004) contains an article describing a "Nose-Steered
Mouse" or "nouse" that uses a webcam to track nose movement, and eye- blinks
for clicks.  While the potential benefits to disabled users are obvious,
there might be some interesting risks:

Files "Gone with the Wind"

Early technology adopter Joe Bloggs accidentally deleted a file representing
several hours of work today while using the new "nouse" or nose-activated
mouse. "I was moving a file to my desktop when I sneezed and blinked at the
same time, accidentally opening the context menu, scrolling to "delete" and
confirming the request. Mr. Bloggs' colleagues point out that his sneezes
are particularly spectacular....

This new technology is obviously nothing to sneeze at :-)

James Garrison, Athens Group, Inc., 5608 Parkcrest Dr, Austin, TX 78731
http://www.athensgroup.com  (512) 345-0600 x150


Java programs at risk from decompilers

<"Fiachra O'Marcaigh" <fiachraomarcaigh@amas.ie>>
Fri, 17 Sep 2004 17:30:26 +0100

A new book shows that practically all Java programs are vulnerable to being
decompiled back into the original source code. Author Godfrey Nolan says: "I
know I could recover the source code from almost any Java application... and
I'm pretty sure there are other people out there who could do the same."

There are several risks here.  The programmer's work and intellectual
property are vulnerable if the source code can be recovered relatively
easily.  There is also the danger that a cracker could decompile a popular
piece of Java code, insert malicious functionality, and then recompile it.
The new version would look just like the original program, but would carry
a malicious payload.
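
For readers without a Java toolchain handy, the underlying reason is easy
to demonstrate by analogy: bytecode for a high-level VM retains names,
types, and control structure, which is exactly what a decompiler feeds on.
A Python illustration using the standard-library disassembler (an analogy
only; Java decompilers of the kind Nolan describes work on JVM bytecode,
not this):

  # Python analogy: high-level VM bytecode keeps names and structure
  # intact, which is why decompiling Java (or Python) back to source is
  # tractable.
  import dis

  def check_password(supplied, stored):
      # a "secret" check that ships inside the compiled artifact
      return supplied == stored

  dis.dis(check_password)
  # The output shows (roughly) LOAD_FAST supplied, LOAD_FAST stored,
  # COMPARE_OP ==, RETURN_VALUE: the variable names and the logic are all
  # right there, ready to be turned back into source.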

An experienced programmer himself, Godfrey Nolan says he wrote this book
(Decompiling Java, Apress, August 2004) to explain exactly what
decompilation means and what options programmers have to protect their
work. The book includes building an obfuscator (to attempt to protect source
code) and a decompiler (to expose source code).

There is also a detailed description of the options open to programmers to
protect their code.

http://www.amazon.com/exec/obidos/tg/detail/-/1590592654/qid=1091398671/
sr=8-2/ref=sr_8_2/002-7737498-5564005?v=glance&n=507846

Fiachra Ó Marcaigh, Killiney View House, Newtownpark Avenue,
Blackrock, Co Dublin, Ireland.  +353 86 083 1880  fuom@online.ie


REVIEW: "Systems Reliability and Failure Prevention", Herbert Hecht

<Rob Slade <rslade@sprint.ca>>
Fri, 17 Sep 2004 11:57:53 -0800

BKSYRLFP.RVW   20040531

"Systems Reliability and Failure Prevention", Herbert Hecht, 2004,
1-58053-372-8, U$79.00
%A   Herbert Hecht
%C   685 Canton St., Norwood, MA   02062
%D   2004
%G   1-58053-372-8
%I   Artech House/Horizon
%O   U$79.00 800-225-9977 fax: +1-617-769-6334 artech@artech-house.com
%O  http://www.amazon.com/exec/obidos/ASIN/1580533728/robsladesinterne
  http://www.amazon.co.uk/exec/obidos/ASIN/1580533728/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1580533728/robsladesin03-20
%P   230 p.
%T   "Systems Reliability and Failure Prevention"

Chapter one is a very brief introduction: almost a preface.  Basic
statistical measures of failure and service are described in chapter
two.  "Organizational Causes of Failures," in chapter three, tells
stories of some major disasters, but provides no structural
recommendations.  Chapter four looks at analytical approaches to
failure prevention, covering the failure modes and effects analysis
(FMEA) and fault tree analysis (FTA) methods that should be more
widely used in general risk assessment.  The discussion of testing
types, purposes, and analysis, in chapter five, raises some very
interesting questions: if a thousand versions of a part are tested for
a thousand hours and only one fails, does this *really* support the
vendor's assertion that the mean time between failures (MTBF) is a
million hours--or is it equally possible that all of them start
failing shortly after a thousand hours, and one failed early?  Factors
such as partitioning, involved in implementing redundancy in a system,
are reviewed in chapter six.  The material on software reliability, in
chapter seven, is rather disappointing: there is still an evident
hardware bias, little deliberation regarding the nature of software,
and the techniques for stability are limited to UML (Unified
Modeling Language) analysis, which is, itself, only suitable for
object-oriented tasks.  Chapter eight looks at the project life cycle,
the preferred development models, reliability activities in various
phases, testing, and reviews.  In chapter nine Hecht addresses
economic considerations in preventing versus accepting failures with a
good deal of math: a more practical illustration is provided in
chapter ten.  Chapter eleven uses the techniques explained in the book
in three example cases.
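
The chapter-five question above is worth working through.  Under the usual
exponential (constant-hazard) assumption, the MTBF point estimate pools
accumulated device-hours over observed failures, which is exactly how the
vendor's million-hour figure arises; a sketch of the calculation (Python,
with the review's illustrative numbers):

  # MTBF point estimate under an exponential (constant-hazard) assumption:
  # total accumulated device-hours divided by failures observed.
  units, hours_on_test, failures = 1000, 1000, 1

  mtbf_estimate = units * hours_on_test / failures
  print(mtbf_estimate)   # 1,000,000 hours -- the vendor's claim

  # The catch: the 999 survivors were censored at 1,000 hours.  The same
  # data are equally consistent with a wear-out distribution in which all
  # units fail shortly after 1,000 hours and one simply failed early.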

For those involved in risk analysis and operation continuity work,
this text is a tutorial for a number of engineering principles that
are not widely discussed in the available literature.  However, there
are a multitude of topics that sound interesting, but are not presented
in sufficient detail to be of use to the non-engineering
professional.  For those in the field, the book will definitely be
worth reading, but it probably could have provided much more
assistance to those in the safety and security field.

copyright Robert M. Slade, 2004   BKSYRLFP.RVW   20040531
rslade@vcn.bc.ca      slade@victoria.tc.ca      rslade@sun.soci.niu.edu
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade
