The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 27 Issue 12

Monday 24 December 2012


"Kempsey flood defence failure due to waterlogged sensor"
David J Taylor
Zeno proven correct, after all: motionless car speeding!
Henry Baker
Wells Fargo's website buckles under flood of traffic
Monty Solomon
Facebook and Gmail Have Outages
Jonathan B Spira
What Instagram's New Terms of Service Mean for You
Wortham/Bilton via Monty Solomon
Instagram Does an About-Face
Perlroth/Wortham via Monty Solomon
Instagram: 'Wait, Wait! That's Not What We Meant!'
Mike Masnick via Monty Solomon
Stabuniq malware found on servers at U.S. financial institutions
Monty Solomon
"Burdens of Proof: Cryptographic Culture and Evidence Law in the Age of Electronic Documents"
J-F Blanchette via Lauren Weinstein
NSA document on iOS security
Gabe Goldberg
NSA targeting domestic computer systems in secret test
Declan McCullagh
How To Pirate Windows 8 Metro Apps, Bypass In-app Purchases
Slashdot via Lauren Weinstein
"You're not anonymous. I know your name, email, and company."
Darren Nix via Lauren Weinstein
3D-Printing Firm Makerbot Cracks Down On Printable Guns
Henry Baker
Morgan Freeman Viral Newtown Quote Was Fake
Lauren Weinstein
Customer Service Social Engineering Scam on Amazon
Chris Cardinal via Lauren Weinstein
Iranian data-wiper
Feudal Security
Bruce Schneier
Book Review: Harvey Molotch, "Against Security"
Bruce Schneier
Info on RISKS (comp.risks)

"Kempsey flood defence failure due to waterlogged sensor"

"David J Taylor" <>
Fri, 14 Dec 2012 17:36:34 -0000

Admittedly, the system did work twice, but it sounds to me as if there was
inadequate backup....

Zeno proven correct, after all: motionless car speeding!

Henry Baker <>
Thu, 13 Dec 2012 11:16:22 -0800
Baltimore issues speed camera ticket to motionless car
By Scott Calvert, The Baltimore Sun, 12 Dec 2012

Owner calls it "shockingly obvious" his car was not moving

An automatic speed camera citation was issued to a car owned by Daniel Doty
for going 38 in a 25. But there was a problem, as his car was standing
still. (Baltimore Sun video)

The Baltimore City speed camera ticket alleged that the four-door Mazda
wagon was going 38 miles per hour in a 25-mph zone — and that owner
Daniel Doty owed $40 for the infraction.

But the Mazda wasn't speeding.

It wasn't even moving.

The two photos printed on the citation as evidence of speeding show the car
was idling at a red light with its brake lights illuminated. A three-second
video clip also offered as evidence shows the car motionless, as traffic
flows by on a cross street.

The camera that wrongly ticketed Doty on April 24 is in Northeast Baltimore
in the 1700 block of E. Cold Spring Lane, at the intersection with Hillen
Road. It is the seventh city speed camera that The Baltimore Sun has shown
to have produced inaccurate citations bearing erroneous speed readings.

Doty's is the first case in which the vehicle was clearly stationary. City
officials gave no explanation for how it happened.

Doty, a lawyer who lives in Lauraville, said he and his wife were amazed
that the ticket was issued, calling it "shockingly obvious" from the images
that the car was stopped. He has challenged the ticket and is scheduled to
appear in District Court on Friday.

"It was like someone was so obviously asleep at the switch," he said
Wednesday. "I thought that was not supposed to happen."

The city's speed camera contractor, Xerox State and Local Solutions, says
each potential citation goes through two layers of review to weed out any
that have a deficiency, such as an illegible license plate.

Then a Baltimore police officer must review the citation before approving it
for issuance to the vehicle owner. Each citation says the officer swears or
affirms that the car was going at least 12 mph over the speed limit "based
on inspection of the recorded images." The officer's signature is also
printed on the citation.

The Sun asked city officials why Doty's ticket was issued. Transportation
Department spokeswoman Adrienne Barnes offered no explanation but said the
agency would have more to say at Friday's meeting of a task force set up by
Mayor Stephanie Rawlings-Blake to study the city's entire speed and red
light camera program. The city has 83 speed cameras and 81 red light cameras.

It isn't clear from the signature on the citation which police officer
reviewed Doty's ticket, and police spokesman Anthony Guglielmi didn't say
when asked, but added, "The department finds any error unacceptable." The
department has said that a single officer can review up to 1,200 citations
in a given day.

Xerox spokesman Chris Gilligan did not address Doty's citation. He noted in
a statement that a "system-wide audit of the Baltimore photo enforcement
program is ongoing and has resulted in implementing an additional manual
review of citations at all camera locations."

The Sun recently published an investigation focusing on the city's speed
camera program, which has generated more than $48 million since it began
three years ago. The investigation found that citations can be inaccurate
and that judges routinely throw out tickets for a range of problems.

The Sun has also shown that it is impossible for motorists to verify the
alleged speeds with the information printed on tickets issued by Baltimore
County, Howard County and the State Highway Administration.

Since the articles' publication, several lawmakers have called for changes
to the state law that governs the way the city and other jurisdictions
operate speed camera programs. Gov. Martin O'Malley said Tuesday that state
law bars contractors from being paid based on the number of citations issued
or paid—an approach used by Baltimore City, Baltimore County, Howard
County and elsewhere.

"The law says you're not supposed to charge by volume. I don't think we
should charge by volume," O'Malley said. "If any county is, they need to
change their program."

  [Also noted by Jeremy Epstein:
    Of course, one could argue that it was hurtling through space with the
    rest of the earth at hundreds to millions of MPH (depending on what you
    include or exclude in the measurement), but earthbound traffic laws are
    generally written in terms relative to the speed of the earth.  PGN]

Wells Fargo's website buckles under flood of traffic

Monty Solomon <>
Sat, 22 Dec 2012 14:17:10 -0500

Facebook and Gmail Have Outages (From Dave Farber's IP)

"Jonathan B Spira" <>
Dec 10, 2012 6:47 PM
Another day in the big city....

Tech Outages: Facebook Offline, Gmail and Google Services Down

Gmail, which has gone down multiple times in the past several years, was
down earlier today.  The outage included other Google services such as
Google Play, Google Drive, Google Calendar, and Chrome Sync.  Some Chrome
browser users reported on Twitter that loading Gmail would crash their
browsers during the outage.  It was not clear as to how many users were
impacted.  Some did report that they were able to use Gmail.  [...]

What Instagram's New Terms of Service Mean for You (Wortham/Bilton)

Monty Solomon <>
Tue, 18 Dec 2012 10:39:40 -0500
Jenna Wortham and Nick Bilton, *The New York Times*, 17 Dec 2012

Instagram released an updated version of its privacy policy and terms of
service on Monday, and they include lengthy stipulations on how photographs
uploaded by users may be used by Instagram and its parent company, Facebook.

The changes, which will go into effect 16 Jan 2013, will not apply to
pictures shared before that date.

Facebook and Instagram have both hinted at plans to incorporate
advertisements into Instagram's application, although they have declined to
provide details about how and when ads would be deployed.  These freshly
drafted terms give the first glimpse of what the companies might have
planned. Here's a quick rundown of what the new terms, the most significant
changes in Instagram's short history, could mean for users. ...

Instagram Does an About-Face (Perlroth/Wortham)

Monty Solomon <>
Fri, 21 Dec 2012 01:35:44 -0500
By Nicole Perlroth and Jenna Wortham, *The New York Times*, 20 Dec 2012

San Francisco - In the aftermath of the uproar over changes to Instagram's
privacy policy and terms of service earlier this week, the company did an
about-face late Thursday.  In a blog post on the company's site, Kevin
Systrom, Instagram's co-founder, said that where advertising was concerned,
the company would revert to its previous terms of service, which have been
in effect since October 2010. ...

  [Lauren Weinstein commented in NNSquad: “Egads.  This whole saga has
  been incredibly embarrassing for Facebook/Instagram.  I'd sure like to
  know who the blazes vetted the original terrible changes in the TOS!  And
  what does this *really* mean going forward?''  PGN]

Instagram: 'Wait, Wait! That's Not What We Meant!'

Monty Solomon <>
Tue, 18 Dec 2012 21:12:12 -0500
Mike Masnick, *Techdirt*, 18 Dec 2012

So, as the deluge of hate towards Instagram got louder and louder concerning
its terms of service change, the company has now come out and said that it
will change the terms and, of course, that it never meant them to be read
the way people were interpreting them, and that it plans to adjust the terms
so that people aren't so damn angry at them. On the question of "advertising
on Instagram" they note. ...

Stabuniq malware found on servers at U.S. financial institutions

Monty Solomon <>
Sat, 22 Dec 2012 14:16:18 -0500

"Burdens of Proof: Cryptographic Culture and Evidence Law in the Age of Electronic Documents"

Lauren Weinstein <>
Mon, 24 Dec 2012 12:59:25 -0800  (Slashdot via NNSquad)

  "In Burdens of Proof: Cryptographic Culture and Evidence Law in the Age of
  Electronic Documents, author Jean-Francois Blanchette observes that the
  move to a paperless society means that paper-based evidence needs to be
  recreated in the digital world. It also requires an underlying security
  functionality to flow seamlessly across organizations, government agencies
  and the like. While the computing power is there, the ability to create a
  seamless cryptographic culture is much slower in coming."

NSA document on iOS security

Gabe Goldberg <>
Tue, 11 Dec 2012 21:22:10 -0500
This document provides security-related usage and configuration
recommendations for Apple iOS devices such as the iPhone, iPad, and iPod
touch. This document does not constitute Department of Defense (DoD) or
United States Government (USG) policy, nor is it an endorsement of any
particular platform; its purpose is solely to provide security
recommendations.  This guide may outline procedures required to implement or
secure certain features, but it is also not a general-purpose
configuration manual.

In case you have an i<anything>—but 37 pages!

NSA targeting domestic computer systems in secret test (Declan McCullagh)

"Peter G. Neumann" <>
Mon, 24 Dec 2012 12:15:29 PST
Declan McCullagh, CNET news, 23 Dec 2012
Revealed: NSA targeting domestic computer systems in secret test

The National Security Agency's Perfect Citizen program hunts for
vulnerabilities in "large-scale" utilities, including power grid and gas
pipeline controllers, new documents from EPIC show.

Newly released files show a secret National Security Agency program is
targeting the computerized systems that control utilities to discover
security vulnerabilities, which can be used to defend the United States or
disrupt the infrastructure of other nations.

The NSA's so-called Perfect Citizen program conducts "vulnerability
exploration and research" against the computerized controllers that control
"large-scale" utilities including power grids and natural gas pipelines, the
documents show. The program is scheduled to continue through at least
September 2014.

The Perfect Citizen files obtained by the Electronic Privacy Information
Center and provided to CNET shed more light on how the agency aims to
defend—and attack—embedded controllers. The NSA is reported to have developed
Stuxnet, which President Obama secretly ordered to be used against Iran's
nuclear program, with the help of Israel.

U.S. officials have warned for years, privately and publicly, about the
vulnerability of the electrical grid to cyberattacks. Gen. Martin Dempsey,
the chairman of the Joint Chiefs of Staff, told a congressional committee in
February: "I know what we [the U.S.] can do and therefore I am
extraordinarily concerned about the cyber capabilities of other nations." If
a nation gave such software to a fringe group, Dempsey said, "the next thing
you know could be into our electrical grid."

Discussions about offensive weapons in the U.S. government's electronic
arsenal have gradually become more public. One NSA employment posting for a
Control System Network Vulnerability Analyst says the job involves "building
proof-of-concept exploits," and an Air Force announcement in August called
for papers discussing "Cyberspace Warfare Attack" capabilities. The
Washington Post reported last month that Obama secretly signed a directive
in October outlining the rules for offensive "cyber-operations."

"Sabotage or disruption of these industries can have wide-ranging negative
effects including loss of life, economic damage, property destruction, or
environmental pollution," the NSA concluded in a public report (PDF)
discussing industrial control systems and their vulnerabilities.

The 190 pages of the NSA's Perfect Citizen files, which EPIC obtained
through the Freedom of Information Act last week, are heavily redacted.  At
least 98 pages were completely deleted for a number of reasons, including
that portions are "classified top secret," and could "cause exceptionally
grave damage to the national security" if released, according to an
accompanying letter from Pamela Phillips, chief of the NSA's FOIA office.

How To Pirate Windows 8 Metro Apps, Bypass In-app Purchases

Lauren Weinstein <>
Tue, 11 Dec 2012 17:10:07 -0800  (Slashdot via NNSquad)

  "The principal engineer for Nokia's WP7 and WP8 devices, Justin Angel, has
  demonstrated, in rather frank detail, how to pirate Windows 8 Metro apps,
  how to bypass in-app purchases, and how to remove in-game ads."

"You're not anonymous. I know your name, email, and company." (Darren Nix)

Lauren Weinstein <>
Wed, 12 Dec 2012 11:20:09 -0800
  "Sumit Suman recently visited a site, did not sign up for anything, did
  not connect via social media, but got a personal email from the site the
  next day.  Here's how they did it.  I've learned that there is a "website
  intelligence" network that tracks form submissions across their customer
  network.  So, if a visitor fills out a form on Site A with their name and
  email, Site B knows their name and email too as soon as they land on the
  site." - Darren Nix  (42 Floors via NNSquad)

3D-Printing Firm Makerbot Cracks Down On Printable Guns

Henry Baker <>
Wed, 19 Dec 2012 16:26:51 -0800
[FYI—Just as higher temperatures "unify" different forces & particles in
physics, the ongoing march of information technology "unifies" different
human rights: the 3D printer makes the 2nd Amendment a part of the 1st
Amendment.  I would imagine that newly proposed gun legislation will require
background checks prior to the purchase of a 3D printer and the
registration of all 3D printers with the ATF.  DMCA redux?]

Andy Greenberg, Forbes Staff, 19 Dec 2012

3D-Printing Firm Makerbot Cracks Down On Printable Gun Designs

You have the right to bear arms. But you don't necessarily have the right to
upload them.

In the wake of one of the worst shooting incidents in American history, the
3D-printing firm Makerbot has deleted a collection of blueprints for gun
components from Thingiverse, its popular user-generated content website that
hosts 3D-printable files. Though Thingiverse has long banned designs for
weapons and their components in its terms of service, it rarely enforced the
practice until the last few days, when the company's lawyer sent notices to
users that their software models for gun parts were being purged from the site.

One letter forwarded to me by Thingiverse user Michael Guslick, for
instance, explained that a design for an AR-15 trigger guard he uploaded to
the site violated its rule that users not “collect, upload, transmit,
display or distribute any User Content [that] promotes illegal activities or
contributes to the creation of weapons. ...  In exercising our policy
enforcement discretion, we have decided to remove the content as of today.''

When I checked Thingiverse earlier this month for gun components, it was
easy enough to find firearm parts such as the `lower receivers' for several
models of semiautomatic rifles and handguns. Those designs had sparked
controversy by potentially circumventing gun laws: The lower receiver is the
`body' of a gun, and its most regulated component. So 3D-printing that piece
at home and attaching other parts ordered by mail might allow a lethal
weapon to be obtained without any legal barriers or identification.

Guslick, a Wisconsin IT administrator whose experiments with a 3D-printed
AR-15 lower receiver drew attention to the issue of 3D-printable weapons
earlier this year, speculated that the removal of the files was linked with
the Newtown, Connecticut gun massacre that killed 20 children and seven
adults in an elementary school last week.  “Correlation is not causation,
but it seems pretty clear that the tragic shooting in [Connecticut] last
week is the impetus for removal of some designs on Thingiverse,'' he wrote
to me in an email. But Guslick pointed out that several gun-related items
remained on the site, including a Glock magazine and Ruger pistol grip.
“I'm not sure if those are targeted for takedown as well, or if only AR-15
compatible designs are being removed (given that the popular rifle has been
utterly demonized in the media over the past few days, I suppose that may be
the case).''

Makerbot, for its part, included no mention of the Newtown shootings in a
statement sent to me about the gun takedowns.  “Makerbot's focus is to
empower the creative process and make things for good,'' writes Makerbot
spokesperson Jenifer Howard.  “Thingiverse has been going through an
evolution recently and has had numerous changes and updates. Reviewing some
of the content that violates Thingiverse's Terms of Service is part of this
evolution.''

In the past, Makerbot chief executive and founder Bre Pettis has remained
ambivalent about guns on Thingiverse, which has become the world's most
popular sharing platform for 3D-printing files. When I asked him about the
issue last month, Pettis pointed to the terms of service ban on weapons, but
added that the site goes largely unpoliced. He was more explicit in a blog
post last year: “The cat is out of the bag.  And that cat can be armed with
guns made with printed parts.''

That freewheeling outlook contrasted with other 3D printing services like
Shapeways, which bans the uploading of even gun-like toys more than 10
centimeters in length.

Makerbot's move to follow suit may have also been inspired in part by a
group calling itself Defense Distributed, which announced its intention to
create an entirely 3D-printable gun in August and planned to potentially
upload it to Thingiverse. In early December the group posted a YouTube video
of its first experiment with an AR-15 built from a 3D-printed lower
receiver. (The 3D printed piece broke after six shots.)

In response to Makerbot's crackdown, Defense Distributed founder Cody Wilson
wrote to me in an e-mail, saying that the group plans to create its own site
for hosting `fugitive' 3D printable gun files in the next few hours.

Morgan Freeman Viral Newtown Quote Was Fake [+freedom of information]

Lauren Weinstein <>
Mon, 17 Dec 2012 09:56:56 -0800  (*Atlantic* via NNSquad)

  "Speaking through his publicist, Freeman denied making any statement
  regarding the shootings. "He said the actor's camp was trying to determine
  the origin of the hoax statement," reported The Wrap's Todd Cunningham. If
  you were on Facebook or Twitter over the past two days, you probably saw
  some permutation of this meme being shared ..."

 - - -

Not only was the viral quote attributed to Morgan false, but I will add that
in the 21st century, attempting to suppress information in such a manner can
only attract more attention, and is technically impossible as well.

Customer Service Social Engineering Scam on Amazon (Chris Cardinal)

Lauren Weinstein <>
Mon, 17 Dec 2012 22:58:16 -0800  ( via NNSquad)

  "Someone has devised a relatively simple way of defrauding Amazon, and
  they require very little hard information to pull it off. While this story
  is still developing, I'm writing this up in an effort to make Amazon aware
  of the problem and hopefully help them tighten their call center and live
  chat security." - Chris Cardinal

Iranian data-wiper

"Peter G. Neumann" <>
Tue, 18 Dec 2012 18:55:54 PST

Feudal Security

Bruce Schneier <schneier@SCHNEIER.COM>
Sat, 15 Dec 2012 01:41:53 -0600
CRYPTO-GRAM, December 15, 2012, by Bruce Schneier, Chief Security
Technology Officer, BT

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.  For back issues, or to
subscribe, visit <>.

It's a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we
use Google Calendar and Google Docs, and we have Android phones. Others have
pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads;
and we let iCloud automatically synchronize and back up everything. Still
others of us let Microsoft do it all. Or we buy our music and e-books from
Amazon, which keeps records of what we own and allows downloading to a
Kindle, computer, or phone. Some of us have pretty much abandoned e-mail
altogether... for Facebook.

These vendors are becoming our feudal lords, and we are becoming their
vassals. We might refuse to pledge allegiance to all of them—or to a
particular one we don't like. Or we can spread our allegiance around.  But
either way, it's becoming increasingly difficult to not pledge allegiance to
at least one of them.

Feudalism provides security. Classical medieval feudalism depended on
overlapping, complex, hierarchical relationships. There were oaths and
obligations: a series of rights and privileges. A critical aspect of this
system was protection: vassals would pledge their allegiance to a lord, and
in return, that lord would protect them from harm.

Of course, I'm romanticizing here; European history was never this simple,
and the description is based on stories of that time, but that's the general
idea.

And it's this model that's starting to permeate computer security today.

Traditional computer security centered around users. Users had to purchase
and install anti-virus software and firewalls, ensure their operating system
and network were configured properly, update their software, and generally
manage their own security.

This model is breaking, largely due to two developments:

1. New Internet-enabled devices where the vendor maintains more control over
the hardware and software than we do—like the iPhone and Kindle; and

2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers,
software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and
sharability. We like it when we can access our e-mail anywhere, from any
computer. We like it when we can restore our contact lists after we've lost
our phones. We want our calendar entries to automatically appear on all of
our devices. These cloud storage sites do a better job of backing up our
photos and files than we would manage by ourselves; Apple does a great job
keeping malware out of its iPhone apps store.

In this new world of computing, we give up a certain amount of control, and
in exchange we trust that our lords will both treat us well and protect us
from harm. Not only will our software be continually updated with the newest
and coolest functionality, but we trust it will happen without our being
overtaxed by fees and required upgrades. We trust that our data and devices
won't be exposed to hackers, criminals, and malware. We trust that
governments won't be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the
security provided by our feudal lords. We don't know what sort of security
methods they're using, or how they're configured. We mostly can't install
our own security products on iPhones or Android phones; we certainly can't
install them on Facebook, Gmail, or Twitter. Sometimes we have control over
whether or not to accept the automatically flagged updates—iPhone, for
example—but we rarely know what they're about or whether they'll break
anything else. (On the Kindle, we don't even have that freedom.)

I'm not saying that feudal security is all bad. For the average user, giving
up control is largely a good thing. These software vendors and cloud
providers do a lot better job of security than the average computer user
would. Automatic cloud backup saves a lot of data; automatic updates prevent
a lot of malware. The network security at any of these providers is better
than that of most home users.

Feudalism is good for the individual, for small startups, and for
medium-sized businesses that can't afford to hire their own in-house or
specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it's more of a mixed bag. These
organizations are used to trusting other companies with critical corporate
functions: They've been outsourcing their payroll, tax preparation, and
legal services for decades. But IT regulations often require audits. Our
lords don't allow vassals to audit them, even if those vassals are
themselves large and powerful.

Yet feudal security isn't without its risks.

Our lords can make mistakes with security, as recently happened with Apple,
Facebook, and Photobucket. They can act arbitrarily and capriciously, as
Amazon did when it cut off a Kindle user for living in the wrong
country. They tether us like serfs; just try to take data from one digital
lord to another.

Ultimately, they will always act in their own self-interest, as companies do
when they mine our data in order to sell more advertising and make more
money. These companies own us, so they can sell us off—again, like
serfs—to rival lords...or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful
party would often simply renege on his part of the bargain.  Eventually, the
arrangements were formalized and standardized: both parties had rights and
privileges (things they could do) as well as protections (things they
couldn't do to each other).

Today's Internet feudalism, however, is ad hoc and one-sided. We give
companies our data and trust them with our security, but we receive very few
assurances of protection in return, and those companies have very few
restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can
do with our data; rights, like the requirement that they delete our data
when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it's a trade-off. We need to balance that
trade-off. In Europe, it was the rise of the centralized state and the rule
of law that undermined the ad hoc feudal system; it provided more security
and stability for both lords and vassals. But these days, government has
largely abdicated its role in cyberspace, and the result is a return to the
feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently
clever and benevolent—or putting our faith in the Robin Hoods who block
phone surveillance and circumvent DRM systems—it's time we step in in our
role as governments (both national and international) to create the
regulatory environments that protect us vassals (and the lords as
well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Book Review: Harvey Molotch, "Against Security" (from CRYPTOGRAM)

Bruce Schneier <schneier@SCHNEIER.COM>
Sat, 15 Dec 2012 01:41:53 -0600
Against Security: How We Go Wrong at Airports, Subways, and Other Sites of
Ambiguous Danger, by Harvey Molotch, Princeton University Press, 278 pages.

Security is both a feeling and a reality, and the two are different
things. People can feel secure when they're actually not, and they can be
secure even when they believe otherwise.

This discord explains much of what passes for our national discourse on
security policy. Security measures often are nothing more than security
theater, making people feel safer without actually increasing their
protection.

A lot of psychological research has tried to make sense out of security,
fear, risk, and safety. But however fascinating the academic literature is,
it often misses the broader social dynamics. New York University's Harvey
Molotch helpfully brings a sociologist's perspective to the subject in his
new book "Against Security."

Molotch delves deeply into a few examples and uses them to derive general
principles. He starts "Against Security" with a mundane topic: the security
of public restrooms. It's a setting he knows better than most, having
authored "Toilet: The Public Restroom and the Politics of Sharing" (New York
University Press) in 2010. It turns out the toilet is not a bad place to
begin a discussion of the sociology of security.

People fear various things in public restrooms: crime, disease,
embarrassment. Different cultures either ignore those fears or address them
in culture-specific ways. Many public lavatories, for example, have no-touch
flushing mechanisms, no-touch sinks, no-touch towel dispensers, and even
no-touch doors, while some Japanese commodes play prerecorded sounds of
water running, to better disguise the embarrassing tinkle.

Restrooms have also been places where, historically and in some locations,
people could do drugs or engage in gay sex. Sen. Larry Craig (R-Idaho) was
arrested in 2007 for soliciting sex in the bathroom at the
Minneapolis-St. Paul International Airport, suggesting that such behavior is
not a thing of the past. To combat these risks, the managers of some
bathrooms—men's rooms in American bus stations, in particular—have
taken to removing the doors from the toilet stalls, forcing everyone to
defecate in public to ensure that no one does anything untoward (or unsafe)
behind closed doors.

Subsequent chapters discuss security in subways, at airports, and on
airplanes; at Ground Zero in lower Manhattan; and after Hurricane Katrina in
New Orleans. Each of these chapters is an interesting sociological
discussion of both the feeling and reality of security, and all of them make
for fascinating reading. Molotch has clearly done his homework, conducting
interviews on the ground, asking questions designed to elicit surprising
insights.

Molotch demonstrates how complex and interdependent the factors that
comprise security are. Sometimes we implement security measures against one
threat, only to magnify another. He points out that more people have died in
car crashes since 9/11 because they were afraid to fly—or because they
didn't want to deal with airport security—than died during the terrorist
attacks. Or to take a more prosaic example, special "high-entry" subway
turnstiles make it much harder for people to sneak in for a free ride but
also make platform evacuations much slower in the case of an emergency.

The common thread in "Against Security" is that effective security comes
less from the top down and more from the bottom up. Molotch's subtitle
telegraphs this conclusion: "How We Go Wrong at Airports, Subways, and Other
Sites of Ambiguous Danger." It's the word *ambiguous* that's important
here. When we don't know what sort of threats we want to defend against, it
makes sense to give the people closest to whatever is happening the
authority and the flexibility to do what is necessary. In many of Molotch's
anecdotes and examples, the authority figure—a subway train driver, a
policeman—has to break existing rules to provide the security needed in a
particular situation. Many security failures are exacerbated by a reflexive
adherence to regulations.

Molotch is absolutely right to home in on this kind of individual initiative
and resilience as a critical source of true security. Current U.S. security
policy is overly focused on specific threats. We defend individual buildings
and monuments. We defend airplanes against certain terrorist tactics: shoe
bombs, liquid bombs, underwear bombs. These measures have limited value
because the number of potential terrorist tactics and targets is much
greater than the ones we have recently observed. Does it really make sense
to spend a gazillion dollars just to force terrorists to switch tactics? Or
drive to a different target? In the face of modern society's ambiguous
dangers, it is flexibility that makes security effective.

We get much more bang for our security dollar by not trying to guess what
terrorists are going to do next. Investigation, intelligence, and emergency
response are where we should be spending our money. That doesn't mean mass
surveillance of everyone or the entrapment of incompetent terrorist
wannabes; it means tracking down leads—the sort of thing that caught the
2006 U.K. liquid bombers. They chose their tactic specifically to evade
established airport security at the time, but they were arrested in their
London apartments well before they got to the airport on the strength of
other kinds of intelligence.

In his review of "Against Security" in "Times Higher Education," aviation
security expert Omar Malik takes issue with the book's seeming
trivialization of the airplane threat and Molotch's failure to discuss
terrorist tactics. "Nor does he touch on the multitude of objects and
materials that can be turned into weapons," Malik laments. But this is
precisely the point. Our fears of terrorism are wildly out of proportion to
the actual threat, and an analysis of various movie-plot threats does
nothing to make us safer.

In addition to urging people to be more reasonable about potential threats,
Molotch makes a strong case for optimism and kindness. Treating every air
traveler as a potential terrorist and every Hurricane Katrina refugee as a
potential looter is dehumanizing. Molotch argues that we do better as a
society when we trust and respect people more. Yes, the occasional bad thing
will happen, but 1) it happens less often, and is less damaging, than you
probably think, and 2) individuals naturally organize to defend each
other. This is what happened during the evacuation of the Twin Towers and in
the aftermath of Katrina before official security took over. Those in charge
often do a worse job than the common people on the ground.

While that message will please skeptics of authority, Molotch sees a role
for government as well. In fact, many of his lessons are primarily aimed at
government agencies, to help them design and implement more effective
security systems. His final chapter is invaluable on that score, discussing
how we should focus on nurturing the good in most people—by giving them
the ability and freedom to self-organize in the event of a security
disaster, for example—rather than focusing solely on the evil of the very
few. It is a hopeful yet realistic message for an irrationally anxious
time. Whether those government agencies will listen is another question.

Amazon link to book:

This review was originally published at
