The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 26 Issue 17

Monday 20 September 2010


Technical Engineering Risks
Peter Bernard Ladkin
JP Morgan Chase online service outage
Steven J Klein
Accidental triggering of Emergency Alert Systems
Danny Burstein
Ed Felten on Haystack
Jeremy Epstein
Malicious e-mail with executable pdf
Similar to Snail Mail: Insert <Slug> vs Insert "The Slug"
Mark Brader
The populist approach to computer security?
Steve Schafer
Spoiler Alert: Whodunit? Wikipedia Will Tell You
Noam Cohen via Monty Solomon
Private Paypal payments on the Web
Chris J Brady
Follow-up on Blu-ray HDCP Master Key crack
Re: Software glitches, systemic failure, airplane crashes
Dick Mills
Re: Scary e-mail—invite from Facebook
Merlyn Kline
Re: A Strong Password Isn't the Strongest Security
Don Norman
Dick Mills
Nick Brown
Info on RISKS (comp.risks)

Technical Engineering Risks

Peter Bernard Ladkin <>
Mon, 13 Sep 2010 09:04:48 +0200

It seems to me (and I may be wrong) that recent editions of Risks have
contained much more about the everyday organisational effects of our
societies becoming more dependent on fallible digital systems, and less of
what Risks used to carry about the engineering anomalies so engendered.

I would like to note that this is not because there are fewer such anomalies!

15 years ago, there were occasional reports in Aviation Week and Space
Technology of anomalies attributable to SW or programmable-electronic HW
design. Now, there are many more. I have just read p14 (of 58), the first
page of the technical "news briefs" in the September 6, 2010 edition. There
are two such items. This seems to me to be typical, and this time I am
motivated to note the situation.

The date for the launch of the Space-Based Infrared System (Sbirs) early
missile warning satellite has slipped a little to Spring 2011. "Flight
software for Lockheed Martin-built spacecraft is still being tested
following an anomaly that cropped up in a classified satellite with a
similar architecture."

An unmanned, remotely piloted rotary-wing aircraft called the Fire Scout,
flying out of NAS Patuxent River, apparently lost communications and headed
out of its planned airspace into restricted airspace used primarily for
military helicopters transporting government officials around Washington and
to Andrews AFB. The Fire Scout detected the lost link and "squawked" the
appropriate code on its transponder, but "due to a software flaw, ignored
preprogrammed commands that directed it to return to base under those
conditions".
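
The reported behaviour - the aircraft detected the lost link and squawked,
yet ignored its return-to-base commands - suggests the contingency steps
were not coupled into a single atomic response. A minimal sketch of
lost-link handling in which they cannot diverge (all class, method and code
names here are hypothetical, not taken from the actual Fire Scout software):

```python
LOST_LINK_SQUAWK = "7600"  # radio-failure code; actual UAV procedures may differ

class LostLinkHandler:
    """Toy contingency logic: on lost link, squawk AND return to base.

    The reported flaw behaved as if these two steps could diverge; coupling
    them in one state transition makes that divergence impossible by
    construction.
    """
    def __init__(self, autopilot, transponder):
        self.autopilot = autopilot
        self.transponder = transponder
        self.link_ok = True

    def on_heartbeat(self, received: bool) -> None:
        if received:
            self.link_ok = True
            return
        if self.link_ok:  # first missed heartbeat: enter contingency, once
            self.link_ok = False
            self.transponder.squawk(LOST_LINK_SQUAWK)
            self.autopilot.engage_return_to_base()
```

The point of the sketch is only that the squawk and the return-to-base
command share one guard, so no software path can perform one without the
other.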

This is not the first time that such things have happened with
remotely-piloted vehicles. There was a well-known incident near Nogales with
one of the Predator vehicles used by the Border Patrol.  (Only a brief
report in Risks, but still a report, in Risks-24.29 in 2006 by Mark Newton;
a full report is available through the synopsis.)

My own experience is that such incidents in aviation, put down to anomalies
with programmable-electronic systems, are increasing. I regret that there
aren't enough Risks contributors to keep up with the news!

Peter Bernard Ladkin, Causalis Limited and University of Bielefeld

JP Morgan Chase online service outage

Steven J Klein <>
Fri, 17 Sep 2010 01:53:05 -0400

On the evening of Monday 13 Sep 2010, the Chase Bank online banking service
(which serves almost 17 million customers—including me) stopped working;
service wasn't restored until two days later.

From a report in *Computerworld*:

  The bank, the nation's second largest, said in a separate statement that a
  "third party database company's software caused a corruption of systems
  information disabling our ability to process customer log-ins to"  It
  added that the problem "resulted in a long recovery ..."

Despite that statement, being unable to log in was far from the only
problem. Chase was also unable to process any online payments during the
outage; payments scheduled for Monday and Tuesday weren't made until
Wednesday, likely resulting in lots of customers seeing late fees from their
creditors. (Chase has promised to reimburse customers for any late fees.)

One thing that struck me as very poorly worded in the *Computerworld*
article was this sentence: "During the outage, customers couldn't use the
online banking site, but the bank's ATMs, branches and call centers were
unaffected."

Call centers most definitely were affected—they were overwhelmed by the
huge number of online users who reached for their phones when they couldn't
log in to the Chase website.

I'm not a risks expert, and I admit I'm finding it difficult to assess the
risks given how little information I know about the failure.  But I wouldn't
be surprised to learn the "third party database" had been upgraded just
before the outage.

There is at least one risk they anticipated and mitigated: A student at my
university tried to register the domain name f*, but found out it
was already registered by Chase!

Accidental triggering of Emergency Alert Systems

danny burstein <>
Wed, 15 Sep 2010 21:44:59 -0400 (EDT)

"The Society of Broadcast Engineers has alerted members that a radio ad for
an oil company [ARCO, a BP subsidiary] is tripping EAS units."  (EAS is the
modern incarnation of what used to be the "Emergency Broadcast System".)
[EAS header tones at the beginning of the ad, reproduced at the proper
frequencies and at approximately the correct data rate, apparently sometimes
trigger the EAS.]
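
An EAS decoder is simply listening for the SAME data burst: an AFSK stream
(roughly 520.8 bit/s, mark and space tones near 2083 Hz and 1562 Hz) whose
decoded bytes begin with a long run of 0xAB preamble bytes followed by the
ASCII marker "ZCZC". Any audio that reproduces those tones faithfully
enough - including a recorded alert inside a radio ad - decodes to the same
bytes, and the decoder has no way to tell the difference. A toy sketch of
the byte-level trigger check, with the demodulation itself omitted (the
sample header fields below are invented):

```python
PREAMBLE = b"\xab" * 16   # SAME preamble: 16 bytes of 0xAB
START = b"ZCZC"           # start-of-header marker

def looks_like_same_header(demodulated: bytes) -> bool:
    """Return True if the demodulated AFSK byte stream contains a SAME
    header start.  A real decoder would go on to parse the originator,
    event code, location codes and duration that follow."""
    i = demodulated.find(PREAMBLE)
    j = i + len(PREAMBLE)
    return i >= 0 and demodulated[j:j + len(START)] == START

# An ad that replays a recorded alert produces exactly the same bytes:
ad_audio_bytes = b"...music..." + PREAMBLE + b"ZCZC-CIV-RWT-..." + b"...ad..."
```

Nothing in the check distinguishes an advertisement from a real activation,
which is why stations are told never to air recorded alert tones.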

Ed Felten on Haystack

Jeremy Epstein <>
Tue, 14 Sep 2010 09:32:49 -0400

Haystack, a hyped technology that claimed to help political dissidents hide
their Internet traffic from their governments, has been pulled by its
promoters after independent researchers got a chance to study it and found
severe problems.

This should come as a surprise to nobody. Haystack exhibited the warning
signs of security snake oil: the flamboyant, self-promoting front man; the
extravagant security claims; the super-sophisticated secret formula that
cannot be disclosed; the avoidance of independent evaluation.  What's most
interesting to me is that many in the media, and some in Washington,
believed the Haystack hype, despite the apparent lack of evidence that
Haystack would actually protect dissidents. [...]

Jeremy Epstein, Senior Computer Scientist, SRI International, 1100 Wilson
Blvd, Suite 2800, Arlington VA  22209 703-247-8708,

Malicious e-mail with executable pdf

"Peter G. Neumann" <>
Sun, 12 Sep 2010 20:27:37 PDT

At 1300 EDT, 9 Sep 10, the US Computer Emergency Readiness Team (US CERT)
received a report from open source media (via the Department of Homeland
Security Office of Public Affairs) regarding fast-spreading malicious e-mail
activity, resulting in distributed denial-of-service type impacts against
e-mail enterprise servers.  US CERT has since received multiple reports from
both Federal agencies and private sector entities experiencing similar
impacts and is now in the process of collecting and analyzing samples of the
malware and developing mitigation strategies.  These attacks have the
potential to prevent, at a minimum, the efficient operations of US
Government e-mail systems.  The e-mail titled "Here you have" or "Just for
you" has a PDF attachment which, when clicked/opened, e-mails the malware
(from the worldwide web domain name "") to the
addresses contained in the user's local/Global Address List.  At 1830 EDT 9
Sep 10, the Department of Homeland Security (DHS) National Cyber Security
Center (NCSC) and the National Cybersecurity and Communications Integration
Center (NCCIC) conducted a conference call to report 6 Federal agencies and
an unknown number of private sector companies have been affected by a
computer virus that exploits e-mail address books/Global Address Lists.  The
NCSC reports a major media outlet is reporting on this story, and Public
Affairs guidance is being put together by the DHS National Protection and
Programs Directorate (NPPD).  The DHS NOC conducted a component and
interagency blast call at 1858 EDT 9 Sep 10 with no amplifnue [sic] to
monitor this event and will publish further reports as warranted.

Similar to Snail Mail: Insert <Slug> vs Insert "The Slug"

Mark Brader
Fri, 13 Aug 2010 08:31:57 -0400 (EDT)

This is a lovely tale of confusion in junkmail software.  A parametric
salutation (such as "Resident" or "to our neighbor at ...") was intended to
be defined by the junkmail sender, but its absence defaulted to "The Slug",
resulting in mass mailings going out "To THE SLUG at" the designated
address.  [PGN-ed]
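
The underlying failure pattern is an internal placeholder doubling as the
default for an undefined merge field. A toy sketch of the bug and of the
safer alternative - refusing to render rather than silently falling back
(function and field names invented):

```python
def render_salutation(template: str, fields: dict) -> str:
    # The buggy behaviour: the internal placeholder leaks out whenever the
    # sender never defines the salutation field.
    return template.format(salutation=fields.get("salutation", "The Slug"))

def render_salutation_safe(template: str, fields: dict) -> str:
    # Safer: a missing field is an error, caught before a mass mailing goes
    # out, instead of a silent fallback to an internal marker.
    if "salutation" not in fields:
        raise KeyError("salutation not defined; refusing to render")
    return template.format(salutation=fields["salutation"])
```

With an empty field dictionary, the first version happily produces
"To The Slug at ..." for every envelope; the second stops the run cold.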

The populist approach to computer security?

Steve Schafer <>
Mon, 20 Sep 2010 11:29:47 -0400

Recently, I was trying to diagnose a problem that prevented my Windows XP
computer from running Java applications. (It turned out that the problem was
an interaction between the Java runtime and the ClearType gamma setting, but
that's a story for another time...) Anyway, I was searching for clues, and
tried a Google search on "javaw.exe" (the Java runtime executable, which in
my case was starting up and then immediately exiting without any indication
of why).

The search turned up a number of websites where it appears that people
"vote" to rate the safety and security of various system files. For example:

I found it intriguing that much like the news these days, computer security
was coming down to little more than popular opinion. On this site, in
particular, we find about javaw.exe that:

 156 users rated it as not dangerous.
  17 users rated it as not so dangerous.
  63 users rated it as neutral.
  33 users rated it as little bit dangerous.
  25 users rated it as dangerous.
  21 users didn't rate it ("don't know").

Along with the votes were comments like, "It's a memory hog," "It will eat
memory like crazy on a powerful computer if left alone" and "It is a gateway
for pop-up ads," in addition to ones like, "Part of the SUN Java
runtime. Needed for Java programs."

Another more amusing example is dwwin.exe. This is the executable for
Dr. Watson, Microsoft's post-mortem bug logger. Since Dr. Watson launches
automatically after an application crashes with an unhandled exception,
people conclude that it's Dr. Watson itself that is crashing or is causing
the crash.

Steve Schafer

Spoiler Alert: Whodunit? Wikipedia Will Tell You

Monty Solomon <>
Sat, 18 Sep 2010 23:29:36 -0400

Noam Cohen, *The New York Times*, 17 Sep 2010

At the end of each performance of the Agatha Christie play "The Mousetrap,"
the person revealed to be the murderer steps forward and tells the audience
to "keep the secret of whodunit locked in your heart."
Even after 58 continuous years of performances in the West End of London,
the play's twist ending has been largely preserved by reviewers, guidebook
writers and the great bulk of the estimated 10 million people who have seen
the play.

There is one notable exception: Wikipedia. The encyclopedia's article about
the play succinctly summarizes its two acts and then, in a single sentence,
even more succinctly explains who the killer is.  That shocked Christie's
grandson, Matthew Prichard, who told the British newspapers late last month
that he was dismayed to learn that Wikipedia could not keep a secret. ...

Private Paypal payments on the Web

Chris J Brady <>
Thu, 16 Sep 2010 04:19:21 -0700 (PDT)

I guess that when everything published on the Web is visible to all, there
is a risk of someone obtaining your financial details if they are released
into such a widespread public domain.

Such is this file:

I found this when searching Yahoo for someone with an unusual Dutch name.

Yahoo Search listed this link in the results. I then downloaded the CSV,
imported it into Excel, did a Select All and parsed the data records. Then I
did a Find for the person's name. He/she was there, listed quite a few
times.
But against the Yahoo listing was the link: Cached - which when clicked upon
listed the CSV file's contents again, all nicely parsed.

Even less privacy was discovered when I went to the Internet Archive to
search the Wayback Machine database.

And there was the file again, actually two versions:

When private financial details and transactions are uploaded to the
Internet, it doesn't take much for them to be captured and archived for all
to access - for all time. This places a HUGE burden of trust on companies
such as PayPal NOT to publish such details. Sadly that trust is betrayed all
too frequently.

Follow-up on Blu-ray HDCP Master Key crack (RISKS-26.16)

"Peter G. Neumann" <>
Mon, 20 Sep 2010 10:33:43 PDT

"Intel Threatens to Sue Anyone Who Uses HDCP Crack"

  [If your technology is flawed or has been compromised, try legal remedies,
  e.g., under the DMCA.]

  Paul Kocher, chief scientist at Cryptography Research in San Francisco,
  said in a recent interview that somebody in the business of making
  HDCP-compatible devices, who had access to at least 50 individual device
  keys, would have been able to reconstruct the master key by analyzing
  "mathematical similarities" in the individual device keys. He said that
  was a vulnerability in the technology that was bound to be exploited.
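
Kocher's observation is, at bottom, linear algebra. In HDCP, each device's
secret key vector is (in effect) the secret master matrix applied to that
device's public KSV, with arithmetic done modulo 2^56; gather enough devices
whose KSVs are linearly independent and the master matrix is the solution of
U = A.V. The sketch below shrinks the real 40x40, 56-bit system to a 4x4 toy
purely to illustrate the point; the real key-assignment scheme (KSVs with
exactly 20 set bits, and so on) differs in its particulars:

```python
import random

M = 2 ** 56   # HDCP device-key arithmetic is addition mod 2^56
N = 4         # toy size; the real master matrix is 40x40

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) % M for row in A]

def mat_inv(V):
    """Invert V mod 2^56 by Gaussian elimination.  A usable pivot must be
    odd (the units mod 2^56); raises StopIteration if V is singular mod 2."""
    n = len(V)
    aug = [list(V[i]) + [int(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] % 2 == 1)
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], -1, M)        # modular inverse (Python 3.8+)
        aug[col] = [x * inv % M for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(x - f * y) % M for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def recover_master(ksvs, device_keys):
    """Columns of V are KSV bit-vectors; columns of U are the issued key
    vectors, U = A.V.  The master matrix is then A = U.V^-1 (mod 2^56)."""
    n = len(ksvs)
    Vinv = mat_inv([[ksvs[j][i] for j in range(n)] for i in range(n)])
    U = [[device_keys[j][i] for j in range(n)] for i in range(n)]
    return [[sum(U[i][k] * Vinv[k][j] for k in range(n)) % M
             for j in range(n)] for i in range(n)]

rng = random.Random(1)
A = [[rng.randrange(M) for _ in range(N)] for _ in range(N)]   # the secret
while True:           # random KSVs, retried until linearly independent mod 2
    ksvs = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
    try:
        mat_inv([[ksvs[j][i] for j in range(N)] for i in range(N)])
        break
    except StopIteration:
        pass
keys = [mat_vec(A, v) for v in ksvs]      # what each "device" is issued
assert recover_master(ksvs, keys) == A    # the master key falls out
```

With the real parameters the same computation needs only a few dozen
devices' keys, consistent with Kocher's estimate of "at least 50 individual
device keys".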

See also,11311.html

Re: Software glitches, systemic failure, airplane crashes (R-26.15)

Dick Mills <>
Fri, 10 Sep 2010 11:27:26 -0400

So, we say that pilots are mis-trained if they are allowed to use overly
vigorous rudder actions that could lead to structural failure in real life.

Should we require the simulators to include finite element models of each
screw and strut to predict structural failure?  I think that's unrealistic.

Should we include a disclaimer at the start of every simulator training
session?  "Warning - simulations may be inaccurate."  That would have as
little positive effect as my GPS does when it warns me not to use it for
navigation every time I turn it on.

I think that simulators could include simple and practical tests to detect
when the operating state goes outside the realm of proven validity.  When
the tests fail, I think that the simulation should halt.  Figuratively
speaking, a deliberately placed blue screen of death is the appropriate
response.  Under the current culture, it is considered unacceptable that the
simulator should stop or crash in any circumstances.  That policy should be
reversed.  An abrupt and unexpected blue screen of death would provide a
memorable lesson that "this is not reality."
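
Such a validity test can be very simple: record the envelope over which the
model was validated against flight-test data, and halt the instant the
simulated state leaves it. A minimal sketch (the state variables and limits
below are invented placeholders, not real certification data):

```python
class SimulationOutsideValidatedEnvelope(Exception):
    """The deliberate 'blue screen': training must stop, visibly."""

# Hypothetical validated ranges per state variable (placeholder numbers).
VALIDATED_ENVELOPE = {
    "angle_of_attack_deg": (-5.0, 15.0),
    "sideslip_deg": (-10.0, 10.0),
    "airspeed_kt": (60.0, 350.0),
}

def check_envelope(state: dict) -> None:
    """Halt the simulation rather than extrapolate beyond proven validity."""
    for var, (lo, hi) in VALIDATED_ENVELOPE.items():
        x = state[var]
        if not (lo <= x <= hi):
            raise SimulationOutsideValidatedEnvelope(
                f"{var}={x} outside validated range [{lo}, {hi}]: "
                "this is not reality")
```

Calling such a check every simulation frame costs almost nothing, and
raising an unhandled exception is precisely the abrupt, memorable halt
argued for above.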

Re: Scary e-mail—invite from Facebook (Kuenning, RISKS-26.16)

Merlyn Kline <>
Mon, 20 Sep 2010 09:58:30 +0100

> The obvious RISK of this design is that your privacy is dependent on
> John Smith's decisions, not yours.

If you send an e-mail to a third party, they generally acquire your e-mail
address. After that you have no control over what they do with it. If you
have privacy concerns in this respect you should not send such third parties
e-mail without obscuring your address. That is your decision, not
theirs. Your privacy is always dependent on your own decisions (except where
legislation applies). Understanding complex systems well enough to make
those decisions correctly in daily life is another matter.  Undoubtedly
there are Risks there.

Re: A Strong Password Isn't the Strongest Security (Sampson, R-26.16)

Don Norman <>
Fri, 17 Sep 2010 17:29:45 -0700

Curt Sampson's comment on the Randy Stross column in *The New York Times*
(Sep. 4, 2010) contains several dangerous statements, in my not-so-humble
opinion.
He states "clearly I'm preaching to the converted in this forum."  Well,
sorry. I am not one of his converted. In fact, his views are based on false
premises, namely, that complex passwords are essential for security and that
the security problem is due to a lack of proper education: maybe, he
suggests, we should have a "Passwords for Dummies" book.

Sampson claims to be a security expert.  It is security experts who need to
change, not the general public. Would anyone be surprised to learn that I
resent being called a "dummy"? Security administrators need to understand
human behavior, not engineering technology. Worse, Sampson relies on the
article alone without doing the simple homework of reading the articles by
the people Randy Stross was quoting and relying upon. Rule: Don't criticize
unless you do your homework.

Disclaimer: I am one of the sources. The *NY Times* column was a necessary
simplification, but I think it was well done. As part of the disclaimer, I
also had lunch with Stross (about a mile from our esteemed moderator's place
of work) where we discussed these issues among others (after the article had
been written, after he had done a phone interview with me for the article,
but before publication). The other major contributor to Stross's opinion
piece was Cormac Herley from Microsoft Research. Both of us also cited many
other research studies that buttress our points. Both of us have papers that
are really easy to find.

Bruce Schneier: Only amateurs attack machines; professionals target people.

Sampson seems to think that faulty education is why we have bad security. So
he himself seems to use a unique password for every system he uses,
intermixing case, digits and letters, etc. How does he remember them? He
writes them down. In the safety business, this is called a "single point of
failure."

In my articles and books, I show that overly secure passwords (and locked
doors) get in the way of the work, so people bypass them. But many other
people (e.g., Herley and Schneier) show that password weakness is NOT the
major cause of break-ins. Who cares how secure the password is if there is a
key logger in place? Or if the password is written down and pasted on the
front of the monitor?  Or if people will help you log in when you have
"forgotten" your password? (The solution, by the way, is NOT to make people
less helpful.) Most security breaches are inside jobs, USB drives (See RISKS
26.16), key loggers, etc. Although RISKS digest is not a reviewed
publication and its coverage is clearly biased toward the spectacular, it
still is a useful guide: I don't recall many instances of guessed passwords
or brute-force attacks. (Brute-force attacks are the easiest to circumvent
through simple technology, which is why you don't see them.  The onerous
password requirements are mainly directed against brute-force attacks. Duh.)

The correct solution requires several things.

1. Do not think of people who use computer systems as "dummies." Some of the
biggest breaches have happened to the senior management who, trying to get
their work done, circumvented the onerous security rules (that they
themselves probably authorized).

2. Recognize that extra-strong security gets in the way of legitimate users
of the system, so it is apt to be compromised by the most dedicated workers
(as well as by many not so dedicated).

3. Use secondary and tertiary checks. Two independent weak methods are
likely to be stronger than any single strong method, after people's natural
behavior is taken into account.

4. Recognize that this is a difficult problem and that no simple solution
will be adequate.

5. Read the National Research Council's report "Toward Better Usability,
Security, and Privacy of Information Technology" (URL below). (Yes, I was on
that committee.) And note the three topics: Usability, Security,
Privacy. They are tightly linked. And attend (and listen to) the yearly
SOUPS Symposium on Usable Privacy and Security (URL below).
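
Point 3 can be made concrete with a toy calculation: if two factors must
each be defeated and their failure modes are independent, the combined
defeat probability is the product of the individual ones. The numbers below
are purely illustrative, and real factors are rarely fully independent:

```python
p_pin = 0.10      # hypothetical chance of defeating a weak PIN alone
p_device = 0.05   # hypothetical chance of defeating a device check alone

# An attacker must defeat BOTH; for independent factors the probabilities
# multiply, so two weak checks beat either one used alone.
p_both = p_pin * p_device
```

Here p_both is 0.5%, an order of magnitude better than the weaker factor by
itself, which is the sense in which two independent weak methods can beat
one strong one.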

Each field alone has its problems. Security experts think primarily about
more onerous security requirements. Usability experts want everything to be
really easy. Privacy experts want privacy and visibility of how personal
information is used, while some (e.g., medical researchers) need access to
everyone's records (anonymity is OK, as long as individuals can be
tracked). These views seem mutually contradictory.  So any solution requires
members of all three communities to work in tandem.

You can't solve the problem in isolation. It is not a technical problem. It
is technical, social, psychological, political.

Don Norman. Nielsen Norman Group

Re: A Strong Password Isn't the Strongest Security (RISKS-26.15,16)

Dick Mills <>
Fri, 10 Sep 2010 10:25:08 -0400

The discussion about overly complex password rules reminds me of sage advice
that Digital once published in a VAX security manual.  I'll paraphrase: The
definition of security must be broad.  Security aims to see that authorized
users, and only authorized users, succeed in doing their jobs.

The modern definition of computer security seems much narrower.  It focuses
on preventing unauthorized uses, and malware.  If security procedures hinder
authorized users from doing their jobs, security still succeeds under the
narrow definition, but fails under Digital's broader definition.

An onerous password policy is a form of denial of service attack.

Might things improve if we made security people responsible for productivity
of the good guys as well as denial of the bad guys?

Re: A Strong Password Isn't the Strongest Security (RISKS-26.15,16)

"BROWN Nick" <>
Thu, 16 Sep 2010 17:02:06 +0200

An additional irony of keyloggers is that the bad guys can typically see
your password better than you can, since they don't have every character
replaced by a black blob.  Only a very few programs (7-Zip, when asking for
a password on a protected archive, springs to mind) allow you to check a
box to say "I do not fear Tempest scanning, and there is nobody else in the
room.  Please let me see this password as I type it."

To impose passwords like fH%JK43-oe9 and then prevent people from seeing
what they're typing is just sadism.  It must cost millions per year in
password reset costs, even with automated delivery of new passwords to
e-mail addresses.

I've added this functionality to the Web applications which I maintain.  I
suggested its addition to a site which I use frequently, where I have
contact with the development team, and which has no major, banking-style
security issues.  Their reply was, "We've decided not to do this, because
it's not an industry-standard practice".
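
The feature described above is a small amount of code: the stored password
never changes, only whether the field echoes it or a row of blobs. A toy
sketch of the display logic (the names and masking character are
illustrative):

```python
BLOB = "\u25cf"  # the typical masking character shown per keystroke

def display_password(password: str, reveal: bool) -> str:
    """What the password field shows: the text itself when the user has
    ticked 'let me see this password as I type it', blobs otherwise.
    The actual password handled by the application is identical either way."""
    return password if reveal else BLOB * len(password)
```

In a browser form the same switch is just flipping the input element's type
between "password" and "text"; nothing the server receives changes.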
