The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 24 Issue 53

Friday 29 December 2006


Glitches postpone launch at Wallops
Walter Schilling
Cybercrooks Deliver Trouble ...
Brian Krebs via Monty Solomon
Typo takes tourist 13,000 km out
Monty Solomon
2007 Preview: Newt's Muzzle, Google's Data, Microsoft Over the Line
Lauren Weinstein
Vista DRM The 'Longest Suicide Note in History'?
Peter Gutmann via Gunnar Helliesen and Dave Farber
Drop zones and an intelligence war
Gadi Evron
Re: Trig error checking
Ted Lee
Ken Knowlton
Gene Spafford
Re: Flat train wheels
Peter B. Ladkin
E-mail me at xx at yy dot zz
Dan Jacobson
Info on RISKS (comp.risks)

Glitches postpone launch at Wallops Island

<Walter Schilling <>>
Fri, 22 Dec 2006 21:24:37 -0500

It appears that software coupled with a fast cycle development time for
spacecraft has again resulted in a launch problem.  While the details of
this article are sketchy at best, it harkens back to the Milstar 3, ICO
Global Communications F-1 Satellite, GeoSat, and Clementine failures.

Walter W. Schilling, Jr., 2004-2007 Ohio Space Grant Consortium,
Doctoral Candidate, University of Toledo, Department of EECS

Cybercrooks Deliver Trouble ...

<Monty Solomon <>>
Wed, 27 Dec 2006 20:53:15 -0500

With Spam Filters Working Overtime, Security Experts See No Letup in '07,
Brian Krebs, *The Washington Post*, 27 Dec 2006

It was the year of computing dangerously, and next year could be worse.
That is the assessment of computer security experts, who said 2006 was
marked by an unprecedented spike in junk e-mail and more sophisticated
Internet attacks by cybercrooks.

Few believe 2007 will be any brighter for consumers, who already are
struggling to avoid the clever scams they encounter while banking, shopping
or just surfing online. Experts say online criminals are growing smarter
about hiding personal data they have stolen on the Internet and are using
new methods for attacking computers that are harder to detect.

"Criminals have gone from trying to hit as many machines as possible to
focusing on techniques that allow them to remain undetected on infected
machines longer," said Vincent Weafer, director of security response at
Symantec ...

One of the best measures of the rise in cybercrime is junk e-mail, or spam,
because much of it is relayed by computers controlled by Internet criminals,
experts said. More than 90 percent of all e-mail sent online in October was
unsolicited junk mail, according to Postini, an e-mail security firm in San
Carlos, Calif. Spam volumes monitored by Postini rose 73 percent in the past
two months as spammers began embedding their messages in images to evade
junk e-mail filters that search for particular words and phrases. In
November, Postini's spam filters, used by many large companies, blocked 22
billion junk-mail messages, up from about 12 billion in September.
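
The image-spam evasion Krebs describes is easy to see in miniature. Below is a
sketch in Python (the trigger-word list and sample messages are invented, and
this is not Postini's actual filter): a filter that scans only the text body
never sees the pitch rendered inside an attached image.

```python
# Sketch (not any vendor's real filter): why word-matching filters miss
# image-based spam. A message whose payload is an attached image contains
# none of the trigger words the filter scans for.

SPAM_WORDS = {"viagra", "lottery", "refinance"}  # hypothetical trigger list

def flags_as_spam(body_text: str) -> bool:
    """Flag a message if any trigger word appears in its text body."""
    words = body_text.lower().split()
    return any(w in SPAM_WORDS for w in words)

text_spam = "Cheap VIAGRA and lottery winnings inside!"
image_spam = "See attached."   # pitch rendered inside spam.gif, not in text

assert flags_as_spam(text_spam) is True
assert flags_as_spam(image_spam) is False  # slips past the word filter
```

Hence the arms race Krebs reports: once the pitch lives in pixels rather than
words, the filter must grow OCR or image heuristics, at three times the
storage and bandwidth cost noted above.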

The result is putting pressure on network administrators and corporate
technology departments, because junk mail laden with images typically
requires three times as much storage space and Internet bandwidth as a text
message, said Daniel Druker, Postini's vice president for marketing. ...

Typo takes tourist 13,000 km out

<Monty Solomon <>>
Fri, 29 Dec 2006 12:42:40 -0500

Typo takes tourist 13,000 km out, Reuters, 29 Dec 2006

A 21-year-old German tourist who wanted to visit his girlfriend in the
Australian metropolis Sydney landed 13,000 kilometers (8,077 miles) away
near Sidney, Montana, after mistyping his destination on a flight booking
Web site. ...
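
A one-letter slip is exactly what a cheap edit-distance check can catch. A
sketch of the kind of guard a booking site could add (hypothetical; nothing
here implies any named site does this): flag destination pairs within a small
edit distance so "Sidney" typed for "Sydney" triggers a confirmation step.

```python
# Sketch of a typo guard for destination entry (illustrative only).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# "Sydney" and "Sidney": one substitution apart, 13,000 km apart.
assert edit_distance("sydney", "sidney") == 1
```

A booking site that asked "did you mean Sydney, Australia?" whenever two
airports sit within edit distance 1 of the typed name would have caught this
one.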

2007 Preview: Newt's Muzzle, Google's Data, Microsoft Over the Line

<Lauren Weinstein <>>
Thu, 21 Dec 2006 15:24:40 -0800

  2007 Preview: Newt's Muzzle, Google's Data, and Microsoft Over the Line

Greetings.  As 2006 draws to a close, I wanted to review three issues from
this year that are likely to be of considerable note in 2007.  One is a
bizarre blast from left field (or more precisely "right field"), the next is
a pressure cooker data problem that we must resolve soon, and the last
demonstrates how anti-piracy efforts can cross the line from reasonable to
arrogant and potentially dangerous.

The latter two of these topics may cry out for legislative attention if
voluntary approaches continue to be impotent — and with the new Congress
coming into power we may have our best shot of accomplishing something
positive on the federal level if legislation indeed becomes necessary.

I realize that many people shudder at the prospect of legislation, fearing
that it may make matters worse, that lobbyists will warp beneficial efforts
into twisted mutations of intent, and similar concerns.  These are indeed
real risks, but we're also seeing the increasing risks of allowing important
technology issues that affect society at large to be determined solely by
corporate entities who — quite naturally and understandably — have their
own agendas and priorities.  Again, I'd prefer to see things done on a
voluntary basis, but we may have to bite the bullet and give legislation the
old college try.

But onward to the issues ...

OK, what the blazes is Newt's Muzzle?  A couple of weeks ago, former Speaker
of the House Newt Gingrich started spouting off (first in a speech and just
a few days ago on NBC's "Meet the Press") about how useful it would be to
censor the Internet.  The example he's using (for now) is "jihadist" Web
sites, and he'd like a panel of federal judges to decide which sites would
be "closed down."

Outside of showing his true colors when it comes to freedom of speech
issues, Newt is also displaying a woeful lack of understanding of the
Internet and how essentially impossible (and counterproductive) attempts at
censorship really are in this environment.

The UK Guardian asked me for an op-ed on this topic, and it went up on their
Web site a few days ago as "Can Newt Nix the Net".  Rather than my taking much more
space discussing the matter here, if you're interested in Newt's thinking
(and my views on the Internet censorship topic in this context), please
visit that link.

Even though Internet censorship (despite the help of U.S. technology
companies that provide systems to foster its deployment) is ineffective, it
is still a tremendously counterproductive waste of time, resources, and
human creativity, and distorts communications in ways that are both
unnecessary and potentially result in dangerous backlashes.  This is an
issue that will only become more important in 2007 and beyond.

Onward ...

The data retention controversy — the battle to determine how much data is
reasonable for search engines and other entities to maintain on their users
— is becoming ever more of a red-flag issue.  In 2006 alone we saw the specter
of the feds going after Google data in DOJ vs. Google, AOL releasing
privacy-invasive search keyword lists, and issues of Chinese use of
U.S. company Internet records to track dissidents, among other similarly
distressing activities.

The concerns in this area go way beyond Google, but as the most powerful
player in the Internet search industry, Google has a special responsibility
to be a leader, not only by fulfilling their "don't be evil" slogan (and I
do believe Google's motives are benign) but also by not creating
infrastructures that allow others to do evil.  It is in this latter respect
that it appears Google "talks the talk" when it comes to concern about how
their data could be abused by outsiders, but hasn't "walked the walk" by
taking sufficient definitive steps to make such abuse impossible.

Again, I'd prefer that this entire area (industry-wide, not just Google) be
dealt with on a voluntary basis.  But as I've discussed in detail over at
the California Initiative For Internet Privacy and
links referenced there, if voluntary approaches don't work we may have to
take the next step, either at the California initiative level or — given
the upcoming changes in Congress — perhaps at the federal legislative level
(an option that did not appear reasonably to be on the horizon when I wrote
the existing CIFIP essay).  While some of my reservations about the
California state legislature might apply to Congress as well, it is
undeniable that a federal approach to these issues could be far more
effective, that is if — and only if — we need to choose the legislative route.

This is a complex area, with the competing goals of mandated data
destruction to protect users' privacy, and the desires of governments to
mandate data retention, continuously at odds.  We have a tremendous amount
of work to do to reach a reasonable outcome.

Finally ...

There's been a lot of discussion about the anti-piracy features in
Microsoft's new "Vista" Windows operating system.  I've had a number of
very friendly conversations with MS executives regarding the issues
surrounding their anti-piracy implementations, and in particular their new
ability to functionally "hobble" Vista systems that they believe are pirated.

The more I've considered this, the more unreasonable and hazardous this
functionality appears to be.  It turns the assumption of
innocence on its head — you have to take affirmative steps to prove to
Microsoft that you're not a pirate if your system appears on their suspect
hit list.  As we know from Windows XP, there are all sorts of ways that
honest consumers can end up with systems that have cloned copies of the OS
(often installed by repair depots to replace trashed copies of the original
system after disk failures, for example).

Many consumers don't even realize the difference between the hardware and
operating system of their computers.  Many will ignore the warning messages
that MS will send before triggering a system hobble, assuming that the
messages don't apply in their cases, or that they're phishing or virus
come-ons.  The mere existence of the mechanisms to initiate the hobbling may
represent an attractive attack vector for destructive hackers, who might
well get their jollies by shutting down a few thousand (million?) PCs at a time.

Vast numbers of these computers will be in highly important applications in
business, health care, government, and the military.  Yes, Microsoft says
you're not supposed to use them for critical applications.  But we know what
the real world looks like, and even the definition of "critical" can be fuzzy.

Even more to the point (and this also relates to the data retention issues
above) it is extremely problematic to assume that it is even reasonable for
individual corporate entities to have total ad hoc, carte blanche authority
to make these decisions on their own, decisions that technologically have an
enormous and ever increasing impact on individuals and society at large.

I might add that while the new Microsoft anti-piracy systems are of
particular concern, there are other anti-piracy technologies being
deployed that carry similar risks, including but not limited to a range of
upcoming Digital Rights Management (DRM) systems.

I keep saying "voluntary is best" and I mean it.  In all of these topic
areas I've discussed, voluntary approaches are always to be preferred.  But
in our society, a key role of legislation is to help provide mechanisms for
"power-sharing" in situations like these, if voluntary and cooperative
approaches prove to be failures.

We are all part of this.  We can sit on our hands and watch as mute
spectators — or we can get our hands dirty by reaching directly into the
innards of the machines — figuratively speaking — and helping to make sure
that these systems serve not only their immediate masters, but also
society's requirements as well.

None of this will be trivial, of course.  But to quote the great animated
philosopher "Super Chicken" — "You knew the job was dangerous when you took it."

Have a great holiday season, and all the best for 2007.  Take care, all.

Lauren Weinstein +1(818)225-2800
Blog:  DayThink:

Vista DRM The 'Longest Suicide Note in History'? (via Dave Farber IP)

<Gunnar Helliesen <>>
December 26, 2006 3:50:46 PM EST

Highly recommended piece by security researcher Peter Gutmann. It details
how Vista is intentionally crippled, to protect "premium content". Also
possible effects on OSS, drivers and such. For IP, if you wish.

            A Cost Analysis of Windows Vista Content Protection
                 Peter Gutmann,
                      Last updated 27 December 2006

Executive Summary

Windows Vista includes an extensive reworking of core OS elements in order
to provide content protection for so-called "premium content", typically HD
data from Blu-Ray and HD-DVD sources.  Providing this protection incurs
considerable costs in terms of system performance, system stability,
technical support overhead, and hardware and software cost.  These issues
affect not only users of Vista but the entire PC industry, since the effects
of the protection measures extend to cover all hardware and software that
will ever come into contact with Vista, even if it's not used directly with
Vista (for example hardware in a Macintosh computer or on a Linux server).
This document analyses the cost involved in Vista's content protection, and
the collateral damage that this incurs throughout the computer industry.

Executive Executive Summary

The Vista Content Protection specification could very well constitute the
longest suicide note in history. [...]

Disabling of Functionality

Vista's content protection mechanism only allows protected content to be
sent over interfaces that also have content-protection facilities built in.
Currently the most common high-end audio output interface is S/PDIF
(Sony/Philips Digital Interface Format).  Most newer audio cards, for
example, feature TOSlink digital optical output for high-quality sound
reproduction, and even the latest crop of motherboards with integrated audio
provide at least coax (and often optical) digital output.  Since S/PDIF
doesn't provide any content protection, Vista requires that it be disabled
when playing protected content.  In other words if you've invested a pile of
money into a high-end audio setup fed from a digital output, you won't be
able to use it with protected content.  Similarly, component (YPbPr) video
will be disabled by Vista's content protection, so the same applies to a
high-end video setup fed from component video. [...]
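
The gating rule Gutmann describes reduces to a simple policy: protected
content may only flow to outputs that themselves implement content
protection. The sketch below expresses that rule (the rule as he describes
it, not Microsoft's code; the table of outputs is illustrative):

```python
# Sketch of the Vista output-gating rule as described in the article.
# Hypothetical table: which output paths carry their own content protection.
OUTPUT_PROTECTED = {
    "hdmi_hdcp": True,
    "spdif": False,            # S/PDIF has no content-protection channel
    "component_ypbpr": False,  # analog component video, likewise unprotected
}

def output_enabled(output: str, content_is_protected: bool) -> bool:
    """An output stays usable unless premium content hits an unprotected path."""
    return OUTPUT_PROTECTED[output] or not content_is_protected

# Your high-end digital-audio rig works fine, until premium content plays.
assert output_enabled("spdif", content_is_protected=False) is True
assert output_enabled("spdif", content_is_protected=True) is False
assert output_enabled("hdmi_hdcp", content_is_protected=True) is True
```

The RISK is visible in the table itself: the policy disables exactly the
interfaces that owners of existing high-end equipment depend on.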

  [Note: *The New York Times* had a Christmas-Day article on Vista flaws.]

Drop zones and an intelligence war (fwd)

<Gadi Evron <>>
Sat, 23 Dec 2006 12:32:49 -0600 (CST)

In this post, FX
describes a drop zone for a phishing/banking trojan horse, and how he
got to it.

Go FX. I will refrain from commenting on the report he describes from Secure
Science, which I guess is a comment on its own.

We had the same thing happen twice before in 2006 (that is worth mentioning
or can be, in public).

Once with a very large "security intelligence" company giving drop zone data
in a marketing attempt to get more bank clients ("hey buddy, why are 400
banks surfing to our drop zone?!?!)

Twice with a guy at DEFCON showing a live drop zone, and the data analysis
for it, asking for it to be taken down (it wasn't until a week later during
the same lecture at the first ISOI workshop hosted by Cisco). For this guy's
defense though, he was sharing information. In a time where nearly no one
was aware of drop zones even though they have been happening for years, he
shared data which was valuable commercially, openly, and allowed others to
clue up on the threats.

Did anyone ever consider that this is an intelligence source, and that
taking it down is not exactly the smartest move?

It's enough that the good guys all fight over the same information, and even
the most experienced security professionals make mistakes that cost millions
of USD daily, but publishing drop zone IPs publicly? That can only result in
a lost intelligence source and the next one being, say, not so public.

I believe in public information and the harm of over-secrecy; I am, however,
a very strong believer that some things are secrets for a reason. What can we
expect, though, when the security industry is 3 years behind and we in the
industry are all a bunch of self-taught amateurs having fun with our latest
toys?

At least we have responsible folks like FX around to take care of things
when others screw up.

I got tired of being the bad guy calling "the king is naked"[*], at least in
this case we can blame FX. :)
  [* Especially when "the Emperor has no clothes."  PGN]

It's an intelligence war, people, and it is high time we got our act together.

I will raise this subject at the next ISOI workshop hosted by Microsoft
and see what bright ideas we come up with.

Re: Trig error checking (RISKS-24.51/52)

<"Ted Lee" <>>
Fri, 22 Dec 2006 10:13:36 -0600

Speaking of spurious faults, which "mike martin" <> did
in RISKS-24.52, I am reminded of an amusingly insidious fault I ended up
tracking down at the PDP-1 at the Cambridge Electron Accelerator ca. 1968.
The machine was used primarily to run experiments, but one of the professors
had the idea of also using it as a teaching aid.  The machine had been
retrofitted with memory protection hardware so several experimenters could
run their software at once without stepping on each other's toes.  (As I
recall, it didn't have any address translation, just protection.)  I ran a
program (n-body simulator for elementary physics classes) I'd written that
had been working fine — and it came up with a memory fault, repeatedly.  I
tracked the fault down to happening in a display subroutine, in particular,
a subroutine to draw a circle.  I vaguely remember simplifying everything so
all I was doing was drawing a single large circle (like a foot in diameter
-- the screen was huge) — and the machine and display were slow enough I
could see that the fault happened exactly at something like the top of the
screen.  The only "interesting" thing about that is that it was at a point
where the value in the accumulator would have been all 1's and on the next
iteration overflowed to all 0's.  For any of you old enough to know what a
real computer was like, the buses in this machine were bundles of wires or
flat cables with something like 18 wires in them.  It turns out that the
single wire (and it really was a single wire that just sort of hung across
the electronic racks) that carried the signal indicating a protection
violation had been routed close to the accumulator: the sudden energy of all
the bits turning from 1 to 0 got coupled into that wire and caused the fault.
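
The "interesting" point in this story is the overflow step, where every bit
of the 18-bit accumulator switches at once: the worst case for coupling
energy into a nearby wire. A small sketch of that arithmetic:

```python
# Sketch of why the all-1s -> all-0s transition was the worst case for
# crosstalk: incrementing an 18-bit accumulator past its maximum flips
# every bit simultaneously, the most switching activity a single step can
# produce. (The PDP-1 word size really was 18 bits.)

WIDTH = 18
MASK = (1 << WIDTH) - 1

def bits_flipped(before: int, after: int) -> int:
    """Count bit positions that change between two accumulator states."""
    return bin((before ^ after) & MASK).count("1")

top = MASK                      # 0b111111111111111111, all ones
wrapped = (top + 1) & MASK      # overflows to all zeros

assert wrapped == 0
assert bits_flipped(top, wrapped) == WIDTH  # all 18 lines switch at once
assert bits_flipped(0, 1) == 1              # an ordinary step flips few bits
```

Eighteen simultaneous 1-to-0 transitions dump the maximum switching energy
into whatever happens to be routed alongside, in this case the
protection-violation wire.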

Re: Trig error checking (RISKS-24.51/52)

<Ken Knowlton <>>
Fri, 22 Dec 2006 20:41:30 EST

The recall of lurking, obscure errors from olden times (ca 1960) brings to
mind one struggle I had during the development of a system on the IBM 7090
(7094?) at MIT — when my main debugging tool was a massive core dump of
spaghettified list structure. After a week of bashing my head, I had a
hunch: I asked the machine operators, between runs, manually to store a
particular number in a certain register, fetch it, and see whether the nth
bit got dropped.  Three hours later I stopped by for the news: two out of
five tries the 1 turned to 0.

Yes, stuff like that was happening with hardware as well as software; during
my week of puzzlement (and earlier), who knows how much trash was
strewn into others' results?
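
Knowlton's manual store-and-fetch probe can be sketched in a few lines; the
fault model below (a register that intermittently drops one bit on readback)
and all names are invented for illustration.

```python
# Sketch of a store/fetch check for a suspect bit: write a pattern with
# that bit set, read it back, and see whether the bit survives.
import random

def flaky_store_fetch(value: int, bad_bit: int, fail_prob: float) -> int:
    """Simulate a register that sometimes drops one bit on readback."""
    if random.random() < fail_prob:
        return value & ~(1 << bad_bit)   # the 1 silently becomes a 0
    return value

random.seed(1)
BAD_BIT = 7
pattern = 1 << BAD_BIT                   # the probe word: only bit 7 set

failures = sum(
    flaky_store_fetch(pattern, BAD_BIT, fail_prob=0.4) != pattern
    for _ in range(5)
)
# With this fixed seed, two of the five tries drop the bit, echoing
# Knowlton's "two out of five tries the 1 turned to 0".
assert failures == 2
```

The point of the probe is its simplicity: no list structures, no program
logic, just the raw question of whether the hardware holds a bit.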

Re: Trig error checking (RISKS-24.51/52)

<Gene Spafford <>>
Fri, 22 Dec 2006 14:35:05 -0500

Nearly 25 years ago, some of my grad school buddies were working on a
compiler and support language as part of the Georgia Tech Software
Tools project.  This was a full set of the standard software tools,
only for PR1MOS  (the operating system of Prime computers --
actually, quite an interesting architecture, based on segments and
rings ala Multics).

I was asked to write up the basic math library — they didn't want to
call the underlying Prime library for copyright reasons.   I was
asked because I was really, really good with the assembly language on
the systems (having written a Pascal compiler and OS in the assembly
language in the previous couple of years).   So, I checked out some
texts and wrote up some fast libraries and the test routines that
were in the books.  All looked good.

However, being the cautious type, I wanted to check that my code was
indeed correct.  I wanted an independent check.  So, I asked around,
and found the Cody & Waite book.   I coded all the tests, ran them
against my library, and found one or two spots where I had not quite
reduced arguments correctly. I fixed them until they passed both my
original tests and the Cody & Waite tests.  In the succeeding years,
I never heard about any problems.
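
The Cody & Waite approach rests on checking a library against identities the
true function must satisfy. The sketch below is an illustration in that
spirit, not their actual test suite: it scores a sin() implementation against
the triple-angle identity, and shows how sloppy argument reduction (the very
bug described above) fails it.

```python
# Sketch of an identity-based accuracy check: a correct sin() must satisfy
# sin(3x) = 3*sin(x) - 4*sin(x)**3, so large relative error in the
# identity exposes a broken implementation.
import math

def identity_error(sin_fn, x: float) -> float:
    """Relative error of the triple-angle identity at x."""
    lhs = sin_fn(3 * x)
    rhs = 3 * sin_fn(x) - 4 * sin_fn(x) ** 3
    return abs(lhs - rhs) / max(abs(lhs), 1e-300)

def broken_sin(x: float) -> float:
    """A deliberately sloppy sin with bad argument reduction."""
    return math.sin(x % 7.0)   # wrong modulus: 7.0 is not 2*pi

good = max(identity_error(math.sin, x) for x in (0.3, 1.1, 100.5))
bad = identity_error(broken_sin, 100.5)

assert good < 1e-10       # the library sin honors the identity closely
assert bad > 1e-3         # the sloppy version fails it badly
```

Near zero the broken version looks fine, which is exactly why naive
textbook test points miss argument-reduction bugs and identity-based suites
catch them.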

As a matter of curiosity, I ran the tests against the native OS
library shipped with the Fortran compiler.   I was aghast at the
results!  In some cases, the results were of the wrong sign and
magnitude, didn't return errors for input out of range, and often
lost about 60 out of 64 bits of precision!   I wrote this up as a
tech report (GT/ICS 83/09), and it was distributed to Prime and the
Prime User's group, as well as included with the GT-WT
distribution.   I got mail from dozens of chagrined users of Prime
systems who discovered errors in their systems because they had
accepted the output of the math libraries — including some
astrophysicists who had to withdraw a paper claiming a better
approximation of some constant, and a team of engineers who had been
designing a nuclear reactor containment vessel using one of those flawed libraries.

A few years later, as a post doc, I pulled out the routines and got a
grad student to help me rerun the experiments on several other
systems we had around the lab at Georgia Tech.  The result was issued
as a tech report, Spafford, E.H.; Flaspohler, J.C.: A Report on the
Accuracy of Some Floating-Point Math Functions on Selected
Computers.   Georgia Institute of Technology, Technical Report GIT-
SERC-86/02, GIT-ICS-85/06, and then later published in ;Login: (the
Usenix newsletter).  The 14 systems we tested for our report included
Vaxen running 4.2 BSD, a Pyramid 90x, an AT&T 3B20S, an AT&T 7300, a
Sun 2, a Ridge, a Cyber and a Masscomp — each with its own OS and
support system.  I can't find a copy of the report still on the WWW
anywhere, but in short, the results were that NONE of the systems
tested passed all the tests, and several produced results that were
as far wrong as on the Prime system tested a few years earlier.
These were systems used regularly by engineering firms, scientists,
NASA, the NRC, and more.  Very scary results.

Today, we have people downloading code from the net and running it,
integrating it into their mission-critical systems.  The code is
produced without design, without formal testing, and by people
without adequate training to even understand there might be
problems.   The focus everyone seems to have is on buffer overflows,
but those are merely one symptom of sloppy software production.
There are lots of places where assumptions about the underlying
correctness of the system can be proven horribly wrong, in ways
long-time RISKS readers understand.

Last time I checked, Cody & Waite was out of print, and an online
auction site had copies for over $200 apiece.

I wonder how current-day systems would fare against these tests?
Given Bart Miller's experience with his "fuzz" testing over the last
two decades, I wouldn't want to bet that current math libraries work correctly.

Re: Flat train wheels (Ladkin, RISKS-24.51, Crepin-Leblond, R-24.52)

<"Peter B. Ladkin" <>>
Fri, 22 Dec 2006 07:58:27 +0100

I asked my railway-engineer colleague, Oliver Lemke, whether this phenomenon
was known in Germany. Oliver noted that it has been known for thirty-plus
years, ever since the introduction of the ET 420 EMUs for the Munich
Olympics in 1972. The 420 series was the first with only disk brakes.

Some locomotive series, for example the 101 series which has been used to
haul intercity passenger trains since 1996, are outfitted with "cleaning
brakes" (German: "Putzbremsen"), which don't have any braking effect but
clean any film from the wheels. The cleaning brakes operate automatically
every couple of kilometers or so. They were installed, not because of
braking problems, but because of problems starting and accelerating from a
stop with heavy loads under conditions of poor adhesion.

Peter B. Ladkin, Causalis Limited and University of Bielefeld

E-mail me at xx at yy dot zz

<Dan Jacobson <>>
Thu, 28 Dec 2006 23:48:54 +0800

Why do I, the non-spam fearing, find myself
needing to say "jidanni at jidanni dot org" more and more these days?

No, not to protect from spam, but to protect from the spam protectors!

More and more well meaning news and mailing list software "protects"
the addresses of spam fearers and non-fearers alike.

So if I want to ensure my e-mail address gets through unscathed, I must add
the aforementioned hiccups, lest potential respondents be forced to enter
some "click to reply" sign-up nightmare.
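
The round trip Jacobson describes is trivial to mechanize. A sketch (the
helper name is invented): publish the address as "user at host dot tld" so
list software doesn't mangle it, and let the recipient reverse the hiccups.

```python
# Sketch: undo the "at"/"dot" hiccups to recover a usable address.
import re

def deobfuscate(spoken: str) -> str:
    """Turn 'xx at yy dot zz' back into 'xx@yy.zz'."""
    addr = re.sub(r"\s+at\s+", "@", spoken.strip(), count=1)
    addr = re.sub(r"\s+dot\s+", ".", addr)
    return addr

assert deobfuscate("jidanni at jidanni dot org") == "jidanni@jidanni.org"
```

The RISK, of course, is that spammers can run the same three lines, which is
why the hiccups protect only against over-eager address-munging software,
not against harvesters.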
