The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 27 Issue 46

Wednesday 4 September 2013


Our Newfound Fear of Risk
Bruce Schneier
'Walkie-Talkie' skyscraper melts Jaguar car parts
Martyn Thomas
How the "Internet of Things" May Change the World
Matthew Kruk
"Video: PostgreSQL succeeds where MySQL fails"
Pete Babb via Gene Wirchenko
"Developers hack Dropbox and show how to access user data"
Lucas Mearian via Gene Wirchenko
No password is safe from new breed of cracking software via David Farber
Windows 8 Picture Passwords Easily Cracked
ACM TechNews
Password must be 10 characters and begin and end with a number
Test 'reveals Facebook, Twitter and Google snoop on e-mails'
Martin Delgado via Henry Baker
"IBM starts restricting hardware patches to paying customers"
Joab Jackson via Gene Wirchenko
The Ghost Messages of Yahoo's Recycled IDs
Lauren Weinstein
"Report: NSA pays millions for US telecom access"
Joab Jackson via Gene Wirchenko
Re: HuffPo Edward Snowden Impersonated NSA Officials
Dimitri Maziuk
Paul Schreiber
Re: In ACLU lawsuit, scientist demolishes NSA's `It's just metadata'
Amos Shapir
Re: Sensitive data left on hard drives
David Alexander
Re: Text a driver in New Jersey, and you could see your day in court
B.J. Herbison
Larry Sheldon
Paul Robinson
Re: DC, Maryland: Speed Camera Firms Move To Hide Evidence
Paul Robinson
Info on RISKS (comp.risks)

Our Newfound Fear of Risk

Bruce Schneier <>
Wed, 04 Sep 2013 12:37:09 -0500
Bruce Schneier, Our Newfound Fear of Risk

We're afraid of risk. It's a normal part of life, but we're increasingly
unwilling to accept it at any level. So we turn to technology to protect
us. The problem is that technological security measures aren't free. They
cost money, of course, but they cost other things as well. They often don't
provide the security they advertise, and—paradoxically—they often
increase risk somewhere else. This problem is particularly stark when the
risk involves another person: crime, terrorism, and so on. While technology
has made us much safer against natural risks like accidents and disease, it
works less well against man-made risks.

Three examples:

* We have allowed the police to turn themselves into a paramilitary
organization. They deploy SWAT teams multiple times a day, almost always in
nondangerous situations. They tase people at minimal provocation, often when
it's not warranted. Unprovoked shootings are on the rise. One result of
these measures is that honest mistakes—a wrong address on a warrant, a
misunderstanding—result in the terrorizing of innocent people, and more
death in what were once nonviolent confrontations with police.

* We accept zero-tolerance policies in schools. This results in ridiculous
situations, where young children are suspended for pointing gun-shaped
fingers at other students or drawing pictures of guns with crayons, and
high-school students are disciplined for giving each other over-the-counter
pain relievers. The cost of these policies is enormous, both in dollars to
implement and in their long-lasting effects on students.

* We have spent over one trillion dollars and thousands of lives fighting
terrorism in the past decade—including the wars in Iraq and
Afghanistan—money that could have been better used in all sorts of ways. We now know
that the NSA has turned into a massive domestic surveillance organization,
and that its data is also used by other government organizations, which then
lie about it. Our foreign policy has changed for the worse: we spy on
everyone, we trample human rights abroad, our drones kill indiscriminately,
and our diplomatic outposts have either closed down or become fortresses. In
the months after 9/11, so many people chose to drive instead of fly that the
resulting deaths dwarfed the deaths from the terrorist attack itself,
because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to
fixate on a particular risk and then do everything we can to mitigate it,
including giving up our freedoms and liberties.

There's a subtle psychological explanation. Risk tolerance is both cultural
and dependent on the environment around us. As we have advanced
technologically as a society, we have reduced many of the risks that have
been with us for millennia. Fatal childhood diseases are things of the past,
many adult diseases are curable, accidents are rarer and more survivable,
buildings collapse less often, death by violence has declined considerably,
and so on. All over the world—among the wealthier of us who live in
peaceful Western countries—our lives have become safer.

Our notions of risk are not absolute; they're based more on how far they are
from whatever we think of as "normal." So as our perception of what is
normal gets safer, the remaining risks stand out more. When your population
is dying of the plague, protecting yourself from the occasional thief or
murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We're bad at
accurately assessing risk; we tend to exaggerate spectacular, strange, and
rare events, and downplay ordinary, familiar, and common ones. This leads us
to believe that violence against police, school shootings, and terrorist
attacks are more common and more deadly than they actually are—and that
the costs, dangers, and risks of a militarized police, a school system
without flexibility, and a surveillance state without privacy are less than
they really are.

Some of this fear stems from the fact that we put people in charge of just
one aspect of the risk equation. No one wants to be the senior officer who
didn't approve the SWAT team for the one subpoena delivery that resulted in
an officer being shot. No one wants to be the school principal who didn't
discipline—no matter how benign the infraction—the one student who
became a shooter. No one wants to be the president who rolled back
counterterrorism measures, just in time to have a plot succeed. Those in
charge will be naturally risk averse, since they personally shoulder so much
of the burden.

We also expect that science and technology should be able to mitigate these
risks, as they mitigate so many others. There's a fundamental problem at the
intersection of these security measures with science and technology; it has
to do with the types of risk they're arrayed against. Most of the risks we
face in life are against nature: disease, accident, weather, random
chance. As our science has improved—medicine is the big one, but other
sciences as well—we become better at mitigating and recovering from those
sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from
another person. People are intelligent, and they can adapt to new security
measures in ways nature cannot. An earthquake isn't able to figure out how
to topple structures constructed under some new and safer building code, and
an automobile won't invent a new form of accident that undermines medical
advances that have made existing accidents more survivable. But a terrorist
will change his tactics and targets in response to new security measures. An
otherwise innocent person will change his behavior in response to a police
force that compels compliance at the threat of a Taser. We will all change,
living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of
the world, you're safer as a result. When you implement measures to reduce
the risks from your fellow human beings, the human beings adapt and you get
less risk reduction than you'd expect—and you also get more side effects,
because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk
management, especially risk from our fellow human beings. We need to relearn
how to accept risk, and even embrace it, as essential to human progress and
our free society. The more we expect technology to protect us from people in
the same way it protects us from nature, the more we will sacrifice the very
values of our society in futile attempts to achieve this security.

This essay previously appeared on

'Walkie-Talkie' skyscraper melts Jaguar car parts

Martyn Thomas <>
Mon, 02 Sep 2013 19:29:55 +0100
A risk overlooked in the CAD program?

  [This is strange.  A London skyscraper under construction is apparently
  being blamed for intensifying the sun's rays and reflecting light on a
  nearby automobile in which various parts melted.  Martyn suggests that the
  possibility of such an occurrence might have been ignored by the
  architectural CAD program used to design and spec the building.
  Waggin' the tale of the Jaguar?   PGN]

How the "Internet of Things" May Change the World

"Matthew Kruk" <>
Mon, 2 Sep 2013 23:50:25 -0600

"Video: PostgreSQL succeeds where MySQL fails" (Pete Babb)

Gene Wirchenko <>
Wed, 04 Sep 2013 10:30:48 -0700
Pete Babb, InfoWorld, 03 Sep 2013
Head-to-head comparison shows MySQL failing to report major data
errors, which would lead to big headaches for developers

selected text:

In the above video, Conery sets up a basic MySQL query, including a
directive that nulls should not be allowed. He then intentionally tries to
add data with nulls, hoping that MySQL will catch the error.  It
doesn't. Conery notes, "MySQL decided, 'You tried to insert null, but what
you really meant was zero.'"
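The failure mode is easy to reproduce locally. Below is a minimal sketch in
Python using the standard-library sqlite3 module, which, like PostgreSQL,
rejects the bad insert; MySQL's traditional non-strict mode instead silently
coerces the value to 0. The table and column names are illustrative, not
Conery's actual schema:

```python
import sqlite3

# An in-memory table with a NOT NULL column, mirroring Conery's setup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (points INTEGER NOT NULL)")

try:
    conn.execute("INSERT INTO scores (points) VALUES (NULL)")
    print("insert accepted")      # MySQL non-strict behavior: NULL becomes 0
except sqlite3.IntegrityError as e:
    print("insert rejected:", e)  # what PostgreSQL (and SQLite) do
```

The point of the article is precisely this difference: an engine that raises
an error surfaces the bug immediately, while one that coerces hides it until
the bad data causes trouble downstream.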

"Developers hack Dropbox and show how to access user data" (Lucas Mearian)

Gene Wirchenko <>
Fri, 30 Aug 2013 13:35:17 -0700
Lucas Mearian, Computerworld, 28 Aug 2013
The cloud storage provider's two-factor authentication was bypassed
to gain access to user data

No password is safe from new breed of cracking software

David Farber <>
Sun, 1 Sep 2013 15:40:16 -0400

No password is safe from new breed of cracking software.
Chances are you need to change your password. No matter how long it is.
  [This article originally appeared on The Daily Dot.]

Over the weekend, the free password cracking and recovery tool
oclHashcat-plus released a new version, 0.15, that can handle passwords up
to 55 characters. It works by guessing a lot of common letter
combinations. A lot. Really really fast.

Other long-string password-crackers exist, such as Hashcat and
oclHashcat-lite, though they take a great deal more time to cycle
through. This improvement runs at 8 million guesses per second while also
allowing users to cut down the number of guesses required by shaping their
attacks based on the password-construction protocol followed by a company or

A combination of increasing awareness of official scrutiny, such as the NSA
leaks, growing instances of hacking of all kinds and leaked password lists,
has inspired users to radically lengthen their passwords and use passphrases

As Dan Goodin noted in Ars Technica, “Crackers have responded by expanding
the dictionaries they maintain to include phrases and word combinations
found in the Bible, common literature, and in online discussions.”

One security researcher cracked the passphrase
  Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn1
-- a phrase from an H.P. Lovecraft horror story. It was less impossible than
it was super easy, crackable in minutes, because it was in an easily
available hacker word list.

The release notes state that the ability to target longer passwords was the
most requested change. The development took the team six months, during
which they modified 618,473 lines of source code, more than half the code in
the product.
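The attack class the article describes is, at heart, a fast lookup of hashed
candidates against a wordlist. A toy sketch in Python (the three-entry
wordlist and the use of unsalted MD5 are illustrative assumptions; real
crackers run GPU-optimized kernels over lists with millions of entries):

```python
import hashlib

def md5(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Hypothetical leaked hash of a "long" passphrase.
target = md5("Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn1")

# Tiny stand-in for a cracker's wordlist; real lists include phrases
# harvested from books, forums, and prior breaches.
wordlist = [
    "correct horse battery staple",
    "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn1",
    "to be or not to be",
]

# Hash each candidate and compare; the 51-character phrase falls instantly.
cracked = next((w for w in wordlist if md5(w) == target), None)
print("cracked:", cracked)
```

Length alone buys nothing once the phrase is in the dictionary: the
passphrase falls in a single comparison, which is why the Lovecraft quote
was "crackable in minutes."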

Windows 8 Picture Passwords Easily Cracked

ACM TechNews <technews@HQ.ACM.ORG>
Wed, 4 Sep 2013 11:59:27 -0400
   [From ACM TechNews; 4 Sep 2013]
 Read the TechNews Online at:

[Source: *InformationWeek*, 30 Aug 2013, Thomas Claburn]

Microsoft Windows 8's picture gesture authentication (PGA) system is not
difficult to crack, according to security researchers from Arizona State and
Delaware State universities.  The researchers say their experimental model
and attack framework enabled them to crack 48 percent of passwords for
previously unseen pictures in one dataset and 24 percent in another, in a
paper presented at a Usenix conference in August.  The researchers
also believe their results could be improved with a larger training set and
stronger picture-categorization and computer-vision techniques.  Windows 8
offers gesture-based passwords and traditional text-based passwords.
Setting up a gesture-based password involves choosing a photo from the
Picture Library folder and drawing three points on the image to be stored as
grid coordinates.  However, users tend to pick common points of interest,
such as eyes, faces, or discrete objects, and the passwords derived from
this constrained set have much less variability than randomly generated
passwords.  The researchers suggest Microsoft could implement a
picture-password-strength meter, and integrate its PGA attack framework to
inform users of the potential number of guesses it would take to access the

Password must be 10 characters and begin and end with a number

Sun, 01 Sep 2013 11:41:21 +0800
Signing up at
Password __________ (Must be 10 characters and begin and end with a number)

Gee, doesn't pinning it down so firmly merely help the crackers?
PCRE /^\d.{8}\d$/

   [Yes.  Old topic in RISKS, still lives.  PGN.]
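Publishing the construction rule hands attackers a smaller search space. A
back-of-the-envelope sketch in Python (the 95-character printable-ASCII
alphabet is an assumption):

```python
PRINTABLE = 95   # printable ASCII characters an attacker must consider
DIGITS = 10

# Any 10 printable characters, no rule advertised:
unconstrained = PRINTABLE ** 10

# The advertised rule /^\d.{8}\d$/ : digit, 8 anything, digit.
constrained = DIGITS * (PRINTABLE ** 8) * DIGITS

# The rule shrinks the keyspace by roughly 95*95/100, about 90x:
print(unconstrained // constrained)
```

A 90-fold reduction is modest in absolute terms, but it is a reduction the
site volunteered to every attacker for free, and stricter rules (mandatory
character classes in fixed positions) shrink the space further still.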

Test 'reveals Facebook, Twitter and Google snoop on e-mails' (Martin Delgado)

Henry Baker <>
Sun, 01 Sep 2013 13:41:33 -0700

Test 'reveals Facebook, Twitter and Google snoop on e-mails': Study of net
giants spurs new privacy concerns

* Study set out to test confidentiality of 50 of the biggest Internet companies
* Researchers sent unique web address in private messages through firms
* They found six of the companies opened the link from the message

Martin Delgado, *Daily Mail*, 31 Aug 2013

Facebook, Twitter and Google have been caught snooping on messages sent
across their networks, new research claims, prompting campaigners to express
concerns over privacy.

The findings emerged from an experiment conducted following revelations by
US security contractor Edward Snowden about government snooping on Internet

Cyber-security company High-Tech Bridge set out to test the confidentiality
of 50 of the biggest Internet companies by using their systems to send a
unique web address in private messages.

Experts at its Geneva HQ then waited to see which companies clicked on the
links.

During the ten-day operation, six of the 50 companies tested were found to
have opened the link.

Among the six were Facebook, Twitter, Google and discussion forum Formspring.

High-Tech Bridge chief executive Ilia Kolochenko said: “We found they
were clicking on links that should be known only to the sender and
recipient.
If the links are being opened, we cannot be sure that the contents of
messages are not also being read.

All the social network sites would like to know as much as possible about
our hobbies and shopping habits because the information has a commercial
value.

“The fact that only a few companies were trapped does not mean others are
not monitoring their customers. They may simply be using different
techniques which are more difficult to detect.''

Earlier this year scientists in Germany claimed another big computer
company, Microsoft, was spying on customers using its Skype instant
messaging service.

Facebook declined to comment on the latest research but said it had complex
automated systems in place to combat phishing (Internet identity fraud) and
reduce malicious material.

Twitter also declined to comment directly but said it used robotic systems
to bar spam messages from customer accounts.

A source at Google said: “There is nothing new here. It simply isn't an
issue.''

An independent expert explained: “In principle these companies should not
be opening the links, but in practice they are giving a service to
customers.  The protection provided outweighs any potential commercial
harm.''

But campaigners called for stricter safeguards.

Nick Pickles, director of pressure group Big Brother Watch, said: “This is
yet another reminder that profit comes before privacy every day for some
businesses.  Companies such as Google and Facebook rely on capturing as much
data as possible to enhance their advertising targeting.  They intrude on
our privacy to build an ever more detailed picture of our lives.''

"IBM starts restricting hardware patches to paying customers" (Joab Jackson)

Gene Wirchenko <>
Fri, 30 Aug 2013 14:16:40 -0700
Joab Jackson, InfoWorld, 28 Aug 2013
Following an Oracle practice, IBM starts to restrict hardware patches
to holders of maintenance contracts

The Ghost Messages of Yahoo's Recycled IDs

Lauren Weinstein <>
Tue, 3 Sep 2013 19:40:04 -0700
  Eva Chan knows the value of a good username. She's had @EC on Twitter
  "longer than Twitter has had vowels." So when Yahoo started offering
  recycled user IDs, she put a few names on her wishlist. A little later,
  Yahoo gave her one of those names.  Then she started getting e-mails about
  a stranger's cancer.  (Medium via NNSquad)

"Report: NSA pays millions for US telecom access" (Joab Jackson)

Gene Wirchenko <>
Wed, 04 Sep 2013 10:33:34 -0700
Joab Jackson, InfoWorld, 30 Aug 2013
The Washington Post reports the NSA paid telecom companies $278 million
this fiscal year to intercept phone calls, e-mail, and instant messages

Re: HuffPo Edward Snowden Impersonated NSA Officials (RISKS-27.45)

Dimitri Maziuk <>
Fri, 30 Aug 2013 15:26:59 -0500
> `Every day, they are learning how brilliant [Snowden] was,' an anonymous
> former intelligence official told NBC. `This is why you don't hire
> brilliant people for jobs like this. You hire smart people. Brilliant
> people get you in trouble.'

As a Unix systems administrator, I have access to files owned by, or can
assume the identity of, any user of this system. Including my superiors --
there's nothing brilliant about that, it's how Unix works.

(I haven't done Windows since last century, so I'm not sure what security
knobs are available in the recent versions.  I expect the above is also true
of MS Windows—and OSX of course has a Unix inside.)


- Is it that the NSA is using its own highly secure OS where the
administrator's access is limited, and Snowden brilliantly hacked through
its security layers? If so, I'm curious: how do the subcontractors'
computers interoperate with it, what kind of security clearance do you need
to see the API, what does the EULA look like, and so on?

- Or is it that the NSA and its subcontractors are using a COTS OS and have
little to no understanding of the levels of security and access actually
afforded by the system? And if they do understand, how do they subcontract
sysadminning to someone without the highest NSAnet security clearance?

Dimitri Maziuk, Programmer/sysadmin, BioMagResBank, UW-Madison

Re: HuffPo Edward Snowden Impersonated NSA Officials (Kramer, R-27 45)

Paul Schreiber <>
Fri, 30 Aug 2013 20:57:02 -0400
> 'Every day, they are learning how brilliant [Snowden] was, ...

To me, this sounds like a nontechnical user trying to explain how sudo
su [1] works.  `Impersonating' is too attention-grabbing.

[1] Or its GUI equivalent for their Intranet (View page as ...)

Re: In ACLU lawsuit, scientist demolishes NSA's `It's just metadata' (RISKS-27.44)

Amos Shapir <>
Sun, 1 Sep 2013 18:32:20 +0300
If current laws and technology had been in effect 40 years ago, Nixon
wouldn't have needed clumsy "plumbers"—the NSA could have bugged the
Watergate offices legally (collecting only "metadata" of course), Deep
Throat would have been sent to jail, and the Washington Post would have been
prohibited from reporting anything about the whole affair!

Re: Sensitive data left on hard drives

David Alexander <>
Sat, 31 Aug 2013 07:34:53 +0100 (BST)
The only newsworthy aspect of this article is that people are still doing it.

Andy Jones and Andrew Blyth at the University of Glamorgan were doing
surveys like this and publishing the results at least 10 years ago, with the
same findings.  I watched the movie "Grosse Pointe Blank" again recently and
was amused to see Joan Cusack 'destroying' a PC by hitting the casing with a
club hammer. The really funny thing is that some people actually think it
works --
  <irony>I presume the blows must knock the data bits off of the surface of
  the hard drive </irony>

Re: Text a driver in New Jersey, and you could see your day in court

"B.J. Herbison" <>
Sun, 01 Sep 2013 19:08:32 -0400
> Even the theoretical concept of holding the person at the other end of an
> electronic communication (hell, even another person just talking in the same
> vehicle) responsible for a driver's stupidity is beyond ludicrous.

I disagree. If a passenger intentionally distracts a driver and a crash
occurs the passenger has liability for the crash. Moving the distractor
outside of the vehicle electronically shouldn't reduce the liability.

The key though is "knows that the recipient is driving and texting".
That is often unknowable and usually hard to prove.

Re: Text a driver in New Jersey, and you could see your day in court (RISKS-27.45)

Larry Sheldon <>
Fri, 30 Aug 2013 16:45:36 -0500
There is no word in my vocabulary for how wrong this is.

One of my typical uses of electronic messaging is and has long been sending
messages to people I know will be, at the time, asleep, eating a meal, in a
meeting, or in some other way indisposed to real-time conversation.

Under this insanity, the only safe thing for me is to never ever
originate a message that might conceivably be delivered to a mobile device.

Re: Text a driver in New Jersey, and you could see your day in court (RISKS-27.45)

Paul Robinson <>
Mon, 2 Sep 2013 18:44:13 -0700 (PDT)
... holding responsible for a driver's stupidity is beyond ludicrous.

And unconstitutional. This violates a number of United States Supreme Court
(and other court) decisions on a court's jurisdiction to hale a distant
defendant into court to defend a lawsuit.

Desktop Techs., Inc. v. Colorworks Reprod. & Design, 1999 U.S. Dist. Lexis
1034 (1999) is pretty much on point.  A Canadian company merely running a
website was not subject to jurisdiction in Pennsylvania.  A mere usenet
posting is of less quality for holding jurisdiction than a website.  Griffis
v. Luban, 646 N.W.2d 527 (Minn. 2002) found that a Usenet posting does not
cause the poster to be subject to the jurisdiction of a foreign state.

A text message doesn't even rise to the level of a Usenet posting, let alone
a website. Absent some criminal behavior such as threats or stalking, or
other activity unprotected by the First Amendment, there should therefore be
no grounds to hold a person sending texts or e-mails liable for the
transmission, or to give the courts jurisdiction to bring the sender in as a
party to a case.

There must be at least minimum contact with a state for the courts there to
have jurisdiction.  Hanson v. Denckla, 357 U.S. 235, 78 S. Ct. 1228, 2
L. Ed. 2d 1283 (1958); Helicopteros Nacionales de Colombia, S.A. v. Hall,
466 U.S. 408, 104 S.Ct. 1868, 80 L.Ed.2d 404 (1984); International Shoe
Co. v. Washington, 326 U.S. 310, 66 S. Ct. 154, 90 L. Ed. 95 (1945); Shaffer
v. Heitner, 433 U.S. 186, 97 S. Ct. 2569, 53 L. Ed. 2d 683 (1977);
World-Wide Volkswagen Corp. v. Woodson, 444 U.S. 286, 100 S. Ct. 559, 62
L. Ed. 2d 490 (1980).

Paul Robinson <> (My blog)

Re: DC, Maryland: Speed Camera Firms Move To Hide Evidence (Burstein, RISKS-27.41)

Paul Robinson <>
Mon, 2 Sep 2013 16:42:04 -0700 (PDT)
It doesn't say whether it was a British patent, which would mean it was
unpatented in the United States and no royalties would be due, or whether it
was (or was also) patented in the U.S. In the latter case, under the rules
then in force, the patent expired 17 years after issuance, and then only if
the intervening maintenance fees were paid. Those fees are due 3, 7, and 11
years after issuance for all patents issued after December 12, 1980;
otherwise the patent automatically and irrevocably expires 6 months after a
maintenance fee goes unpaid, and paying it late will not reinstate the
patent.

So the patent would have expired, at best, about 13 years ago. The rules on
expiration are now even stricter. You could once tie a patent up by
constantly refiling with amendments; some inventors did that to try to
capture current practices that would unknowingly infringe an applied-for
patent once the original filing was revised to cover new practices the
inventor discovered after filing the application. To stop that "practice"
(pun unintentional; the filing of patents is called a "practice"), U.S.
patents now expire 17 years after issuance, 20 years after the first filing,
or six months after non-payment of maintenance fees, whichever comes first.

Paul Robinson <> (My blog)
