The RISKS Digest
Volume 27 Issue 38

Friday, 26th July 2013

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Star Wars Redux
Peter G. Neumann
Hackers Reveal Nasty New Car Attacks--With Me Behind The Wheel
Andy Greenberg via Steve Goldstein via Dewayne Hendricks
The risks of DNA "Certainty"
Bob Frankston
Cybersecurity hacking estimates exaggerated for profit
Lauren Weinstein
PIN-Punching Robot Cracks Phone's Security Code In 24 Hours
Andy Greenberg via Henry Baker
"Researchers spot new breed of infected Android apps in the wild"
Ted Samson via Gene Wirchenko
"SIM cards vulnerable to hacking, says researcher"
Jeremy Kirk via Gene Wirchenko
Citi Bike Accidentally Exposes Customer Credit Card Information
Ted Mann via Jim Reisert
Re: PayPal 'credits' US man $92 quadrillion in error
Mark Brader
Bill Stewart
Chris Drewe
Fool proofs? (Re: UBS fined $30,000 for a typing error)
Bertrand Meyer
Re: Government Destroys $170k of Hardware ...
Rob Slade
Hardware destruction in perspective
Steve Lamont
Re: "How the Pentagon's payroll quagmire traps soldiers
Gene Wirchenko
Re: "How to Build Versatile and Reusable Software"
Gene Wirchenko
REVIEW: "Intelligent Internal Control and Risk Management", Leitch
Rob Slade
Info on RISKS (comp.risks)

Star Wars Redux

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 25 Jul 2013 13:47:06 PDT
In the very first RISKS issue, RISKS-1.01 (1 Aug 1985), I posited the need
for open discussion of the risks involved in the Strategic Defense
Initiative, and suggested that past experiences with very large projects
requiring extensive new technology and system engineering were generally
less successful than anticipated:

  ... the problems of developing software for critical environments are very
  pervasive—and not just limited to strategic defense.  But what we learn
  in discussing the feasibility of the strategic defense initiative could
  have great impact on the uses that computers find in other critical
  environments.  In general, we may find that the risks are far too high in
  many of the critical computing environments on which we depend.  We may
  also be led to techniques for developing better systems that can
  adequately satisfy all of their critical requirements—and continue to
  do so.  But perhaps most important of all is the increased awareness that
  can come from intelligent discussion.  Thus, an open forum on this subject
  is very important.

An editorial in *The New York Times* today revisits this thorny subject,
and suggests that we are still far away from what might be needed:

  A Failure to Intercept:
  Will America ever have effective ground-based missile defense?

  After 30 years of research and an estimated $250 billion investment, the
  Pentagon's defense program against intercontinental ballistic missiles
  ... had another failed test this month... the third consecutive dud.  The
  military has tested the ground-based midcourse defense system 16 times;
  only eight were successful, the last in 2008.  One might expect the record
  to be near perfect since the tests are rigged ...  “controlled scripted
  environment.''

  Two studies ... have expressed new doubts ...

  But it doesn't make sense to keep throwing money at a flawed system
  without correcting the problems first.

The entire editorial is worth reading carefully, although it may not be new
news to many long-time RISKS readers.


Hackers Reveal Nasty New Car Attacks--With Me Behind The Wheel

Dewayne Hendricks <dewayne@warpspeed.com>
July 25, 2013 10:51:07 AM EDT
  [Note: This item comes from friend Steve Goldstein.  DLH][via Dave Farber]

From: Steve Goldstein <steve.goldstein@cox.net>
Subject: Hackers Reveal Nasty New Car Attacks--With Me Behind The Wheel
  (Video) - Forbes
Date: July 25, 2013 7:21:34 AM PDT

Andy Greenberg, *Forbes*, 24 Jul 2013
<http://www.forbes.com/sites/andygreenberg/2013/07/24/hackers-reveal-nasty-new-car-attacks-with-me-behind-the-wheel-video/>

Stomping on the brakes of a 3,500-pound Ford Escape that refuses to stop --
or even slow down—produces a unique feeling of anxiety. In this case it
also produces a deep groaning sound, like an angry water buffalo bellowing
somewhere under the SUV's chassis. The more I pound the pedal, the louder
the groan gets -- along with the delighted cackling of the two hackers
sitting behind me in the backseat.

Luckily, all of this is happening at less than 5mph. So the Escape merely
plows into a stand of 6-foot-high weeds growing in the abandoned parking lot
of a South Bend, Ind. strip mall that Charlie Miller and Chris Valasek have
chosen as the testing grounds for the day's experiments, a few of which are
shown in the video below. (When Miller discovered the brake-disabling trick,
he wasn't so lucky: The soccer-mom mobile barreled through his garage,
crushing his lawn mower and inflicting $150 worth of damage to the rear
wall.)

“Okay, now your brakes work again,'' Miller says, tapping on a beat-up
MacBook connected by a cable to an inconspicuous data port near the parking
brake. I reverse out of the weeds and warily bring the car to a stop. “When
you lose faith that a car will do what you tell it to do,'' he adds after we
jump out of the SUV, “it really changes your whole view of how the thing
works.''

This fact, that a car is not a simple machine of glass and steel but a
hackable network of computers, is what Miller and Valasek have spent the
last year trying to demonstrate. Miller, a 40-year-old security engineer at
Twitter, and Valasek, the 31-year-old director of security intelligence at
the Seattle consultancy IOActive, received an $80,000-plus grant last fall
from the mad-scientist research arm of the Pentagon known as the Defense
Advanced Research Projects Agency to root out security vulnerabilities in
automobiles.

The duo plans to release their findings and the attack software they
developed at the hacker conference Defcon in Las Vegas next month—the
better, they say, to help other researchers find and fix the auto industry's
security problems before malicious hackers get under the hoods of
unsuspecting drivers. The need for scrutiny is growing as cars are
increasingly automated and connected to the Internet, and the problem goes
well beyond Toyota and Ford. Practically every American car maker now offers
a cellular service or Wi-Fi network like General Motors' OnStar, Toyota's
Safety Connect and Ford's SYNC. Mobile-industry trade group the GSMA
estimates revenue from wireless devices in cars at $2.5 billion today and
projects that number will grow tenfold by 2025. Without better security it's
all potentially vulnerable, and automakers are remaining mum or downplaying
the issue.

As I drove their vehicles for more than an hour, Miller and Valasek showed
that they've reverse-engineered enough of the software of the Escape and the
Toyota Prius (both the 2010 model) to demonstrate a range of nasty
surprises: everything from annoyances like uncontrollably blasting the horn
to serious hazards like slamming on the Prius' brakes at high speeds. They
sent commands from their laptops that killed power steering, spoofed the GPS
and made pathological liars out of speedometers and odometers. Finally they
directed me out to a country road, where Valasek showed that he could
violently jerk the Prius' steering at any speed, threatening to send us into
a cornfield or a head-on collision. “Imagine you're driving down a highway
at 80,'' Valasek says. “You're going into the car next to you or into
oncoming traffic. That's going to be bad times.''
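
The attacks described here travel over the car's internal CAN bus, reached
through the diagnostic (OBD-II) port mentioned above.  As a purely
illustrative sketch -- not the researchers' actual tool, and with an invented
arbitration ID and payload, since the real ones are model-specific and were
reverse-engineered by Miller and Valasek -- sending a single raw CAN frame
from a laptop might look like this using the open-source python-can library:

  import can

  # Open the CAN interface exposed by a USB-to-OBD-II adapter (Linux SocketCAN).
  bus = can.interface.Bus(channel="can0", bustype="socketcan")

  # Hypothetical frame: the ID and data bytes are placeholders, not real
  # Ford or Toyota commands.
  msg = can.Message(arbitration_id=0x123,
                    data=[0x01, 0x02, 0x03, 0x04],
                    is_extended_id=False)
  bus.send(msg)

The point is how little stands between physical access to the port and
arbitrary traffic on the bus: classic CAN frames carry no authentication, so
whichever module recognizes the ID simply acts on it.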

A Ford spokesman says the company takes hackers “very seriously,'' but
Toyota, for its part, says it isn't impressed by Miller and Valasek's
stunts: Real car hacking, the company's safety manager John Hanson argues,
wouldn't require physically jacking into the target car. “Our focus, and
that of the entire auto industry, is to prevent hacking from a remote
wireless device outside of the vehicle,'' he writes in an e-mail, adding
that Toyota engineers test its vehicles against wireless attacks. “We
believe our systems are robust and secure. ...

Dewayne-Net RSS Feed: <http://www.warpspeed.com/wordpress>


The risks of DNA "Certainty"

"Bob Frankston" <Bob19-0501@bobf.frankston.com>
Thu, 25 Jul 2013 10:23:36 -0400
The seeming certainty of DNA evidence leads to false confidence in the
results and insufficient scrutiny. And, once again, we are at risk of bad
math. A one-in-a-million match seems to leave no doubt, but if you search a
database of 10 million people you can expect ten matches by chance alone.
Yet when a jury is told only that there is a one-in-a-million chance of a
mismatch, they will feel compelled to convict.
http://www.nytimes.com/2013/07/25/opinion/high-tech-high-risk-forensics.html?ref=opinion
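
For anyone who wants the base-rate effect spelled out, here is a minimal
sketch; the 1-in-a-million figure and the 10-million-person database are the
illustrative values from the note above, not from any actual case:

  # Expected coincidental matches when a "one in a million" profile is run
  # against a large database.
  p_match = 1e-6          # probability an unrelated person matches by chance
  db_size = 10_000_000    # number of people searched

  print(p_match * db_size)              # 10.0 expected innocent matches
  print(1 - (1 - p_match) ** db_size)   # ~0.99995: near-certain that at least one matches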


Cybersecurity hacking estimates exaggerated for profit

Lauren Weinstein <lauren@vortex.com>
July 24, 2013 1:50:55 AM EDT
  "A $1 trillion estimate of the global cost of hacking cited by President
  Barack Obama and other top officials is a gross exaggeration, according to
  a new study commissioned by the company responsible for the earlier
  approximation.  A preliminary report being released Monday by the Center
  for Strategic and International Studies and underwritten by Intel Corp's
  (INTC.O) security software arm McAfee implicitly acknowledges that
  McAfee's previous figure could be triple the real number."
  http://j.mp/167eQG4  (Reuters via NNSquad)

As we've been saying all along. Follow the money!


"Researchers spot new breed of infected Android apps in the wild" (Ted Samson)

Gene Wirchenko <genew@telus.net>
Thu, 25 Jul 2013 10:30:09 -0700
Ted Samson, InfoWorld, 24 Jul 2013
Cyber criminals have successfully exploited a recently discovered
vulnerability to infect legit apps without invalidating their digital
signatures
http://www.infoworld.com/t/android/researchers-spot-new-breed-of-infected-android-apps-in-the-wild-223400


"SIM cards vulnerable to hacking, says researcher" (Jeremy Kirk)

Gene Wirchenko <genew@telus.net>
Tue, 23 Jul 2013 12:08:22 -0700
Jeremy Kirk, InfoWorld, 22 Jul 2013
Millions of phones could be at risk due to the use of a 1970s-era
encryption standard
http://www.infoworld.com/d/mobile-technology/sim-cards-vulnerable-hacking-says-researcher-223141


PIN-Punching Robot Cracks Phone's Security Code In 24 Hours

Henry Baker <hbaker1@pipeline.com>
Mon, 22 Jul 2013 07:56:21 -0700
Andy Greenberg, *Forbes*, 22 Jul 2013
Covering the worlds of data security, privacy and hacker culture.
http://www.forbes.com/sites/andygreenberg/2013/07/22/pin-punching-robot-can-crack-your-phones-security-code-in-less-than-24-hours/

There's nothing particularly difficult about cracking a smartphone's
four-digit PIN code.  All it takes is a pair of thumbs and enough
persistence to try all 10,000 combinations. But hackers hoping to save time
and avoid arthritis now have a more efficient option: Let a cheap,
3D-printable robot take care of the manual labor.

At the Defcon hacker conference in Las Vegas early next month, security
researchers Justin Engler and Paul Vines plan to show off the R2B2, or
Robotic Reconfigurable Button Basher, a piece of hardware they built for
around $200 that can automatically punch PIN numbers at a rate of about one
four-digit guess per second, fast enough to crack a typical Android phone's
lock screen in 20 hours or less.

“There's nothing to stop someone from guessing all the possible PINs,''
says Engler, a security engineer at San Francisco-based security consultancy
iSec Partners. “We often hear `no one would ever do that.' We wanted to
eliminate that argument. This was already easy, it had just never been done
before.''

Engler and Vines built their bot, shown briefly in the video above, from
three $10 servomotors, a plastic stylus, an open-source Arduino
microcontroller, a collection of plastic parts 3D-printed on their local
hackerspace's Makerbot 3D printer, and a five dollar webcam that watches the
phone's screen to detect if it's successfully guessed the password. The
device can be controlled via USB, connecting to a Mac or Windows PC that
runs a simple code-cracking program. The researchers plan to release both
the free software and the blueprints for their 3D-printable parts at the
time of their Defcon talk.

In addition to their finger-like R2B2, Engler and Vines are also working on
another version of their invention that will instead use electrodes attached
to a phone's touchscreen, simulating capacitative screen taps with faster
electrical signals. That bot, which they're calling the Capacitative
Cartesian Coordinate Brute-force Overlay, remains a work in progress, Engler
says, though he plans to have it ready for Defcon.

Not all PIN-protected devices are susceptible to the R2B2's brute force
attack, Engler admits. Apple's iOS, for instance, makes the user wait
increasing lengths of time after each incorrect PIN guess. After just a
handful of wrong answers, the phone can lock out a would-be hacker for hours
before granting access to the PIN pad again.

But every Android phone that Engler and Vines tested was set by default to
use a much less stringent safeguard, delaying the user just 30 seconds after
every five guesses. At that rate, the robot can still guess five PINs every
35 seconds, or all 10,000 possibilities in 19 hours and 24 minutes.

Given that the robot's software can be programmed to guess PINs in any order
the user chooses, it may be able to crack phones far faster than that 20
hour benchmark. One analysis of common PINs showed that more than 26% of
users choose one of twenty common PINs. If R2B2 is set to try easily-guessed
PINs first, it could crack one in four Android users' phones in less than
five minutes, and half of those phones in less than an hour.
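
A rough consistency check on those timing claims, taking the article's stated
rate of one guess per second and a 30-second delay after every five wrong
guesses; the exact totals depend on where the delays fall, so treat these as
approximations:

  GUESS_TIME = 1   # seconds per PIN attempt
  LOCKOUT = 30     # seconds of delay after every five attempts

  def time_to_try(n_pins):
      pauses = (n_pins - 1) // 5          # no delay needed after the final batch
      return n_pins * GUESS_TIME + pauses * LOCKOUT

  print(time_to_try(10_000) / 3600)   # ~19.4 hours, close to the quoted 19h24m
  print(time_to_try(20))              # 110 seconds for the 20 most common PINs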

Physically typing thousands of PIN codes, even with a clever robot's help,
isn't necessarily the easiest way to gain access to a phone's
data. Forensics software firm Micro Systemation released a video last year
-- since removed from YouTube—showing that it can digitally brute-force
an iPhone's PIN by using the same “jailbreak'' hacks that many iPhone
owners use to remove installation restrictions on their devices. Google has
been known to cooperate with law enforcement to bypass the lockscreens of
criminal suspects' phones, and Apple will in some cases crack a phone's
security and give the user's data to police if officers mail the phone to
the company.

But Engler argues that the R2B2 helps to raise attention to the insecurity
of crackable four-digit PINs in ways that software tools don't. Even a
six-digit PIN, an option on many phones, would take R2B2 as much as 80 days
longer to crack than the default four-digit passcode. “When you see a robot
working like this, you think, `maybe I should have a longer PIN','' says
Engler. “If I'm a CEO, a four-digit PIN is a problem, because it's worth
20 hours to break in and get my confidential emails.''

Engler and Vines aren't the first to create an automated, physical
PIN-cracking tool. Another hacker who calls himself JJ showed off a similar
robot earlier in the year that could crack the four-digit PIN of a Garmin
Nuvi GPS device, shown in the video below.

But Engler and Vines' invention is meant to be far more versatile.  In
addition to cracking phones' lockscreens, Engler says he and Vines plan to
keep improving the robot so that it can be adapted to crack the PIN codes
used in specific smartphone apps, or even to press the mechanical buttons on
non-touchscreen devices like ATMs, hotel safes and combination locks. And in
his daily work of auditing clients' security, breaking into a corporate
smartphone represents a far more serious threat than accessing the data of
any GPS device.

“We used to joke that we'd have to hire an intern to press all these
buttons,'' says Engler. “It turns out it's much better to get the
intern to help make the robot. Then he also has time to get coffee.''



Citi Bike Accidentally Exposes Customer Credit Card Information (Ted Mann)

Jim Reisert AD1C <jjreisert@alum.mit.edu>
Wed, 24 Jul 2013 11:40:23 -0600
Ted Mann, *Wall Street Journal*, 23 Jul 2013

A Citi Bike software glitch accidentally exposed sensitive personal and
financial information—including credit card numbers—of more than 1,000
of its account holders, the bike sharing program's operators wrote in a
letter last week to the affected customers.

The data breach occurred on April 15, according to a letter sent to a Citi
Bike member reviewed by The Wall Street Journal. The letter was dated July
19.

The security breach was discovered and corrected at the end of May.  It
affected 1,174 customers who signed up for $95 annual memberships to the
program, said Seth Solomonow, a spokesman for the city Department of
Transportation, which launched Citi Bike and controls all of the system's
communications to the public.

He did not explain the delay between the identification of the security flaw
and notification of affected users.

http://blogs.wsj.com/metropolis/2013/07/23/citi-bike-accidentally-exposes-customer-credit-card-information/

Tune into another Risks Digest issue to discover the next risk resulting
from the Citi Bikes in NYC!


Re: PayPal 'credits' US man $92 quadrillion in error (RISKS-27.37)

Mark Brader
Mon, 22 Jul 2013 16:43:52 -0400 (EDT)
Bob Gezelter giveth and taketh away $91,908 trillion!

> From: Amos Shapir <amos083@gmail.com>
> The article claims the erroneous sum was $92,233,720,368,547,800...
> (Being a computer nerd I had to check—this number is almost exactly
> 2^63 * 0.01, so the credit was probably meant to be 1 cent).

In fact 2^63 * $0.01 is $92,233,720,368,547,758.08, so if the indicated
amount was not rounded, it is larger than that by $41.92.  It does seem
likely that PayPal is storing its monetary amounts in 64-bit words using
integers in cents, but I don't think the deduction that the intended
amount was 1 cent holds up.
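
For anyone who wants to reproduce the arithmetic, a quick check with exact
decimal arithmetic; the "reported" figure below is the one quoted from the
article above:

  from decimal import Decimal

  max_int64 = 2**63                        # 9223372036854775808
  value = Decimal(max_int64) / 100         # reading that count as cents
  print(value)                             # 92233720368547758.08

  reported = Decimal("92233720368547800")  # amount quoted in the article
  print(reported - value)                  # 41.92, the discrepancy noted above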


Re: PayPal 'credits' US man $92 quadrillion in error (RISKS-27.37)

Bill Stewart <bill.stewart@pobox.com>
Tue, 23 Jul 2013 14:33:46 -0700
That's an advantage of 64-bit arithmetic; errors like this used to be ~$21M
or $2.1B.  Nobody's bank or financial institution would be able to handle a
US$92 quadrillion transfer, even if they didn't have good sanity checking, but
some might let $2B slip by accidentally, and almost any of them could afford
$21M and might not notice the error for a while.  And while most of them
probably couldn't handle the interest payments on a day's float of $92 quadrillion, the
interest on $21M would probably have gotten paid.


Re: PayPal 'credits' US man $92 quadrillion in error (RISKS-27.37)

Chris Drewe <e767pmk@yahoo.co.uk>
Wed, 24 Jul 2013 22:19:10 +0100
My question: we all had a good laugh, but could it have worked out like
this?  Suppose your bank erroneously deposits a large sum into your current
account.  The mistake is quickly spotted and corrected, so no harm done.
However, this unusual activity sets off the bank's automated
anti-money-laundering alerts.  Therefore, you suddenly have a bunch of
Government heavies on your doorstep, who want to know where that money came
from, and, more importantly, just where is it now?  A mistake by your bank,
you say..?  Better find yourself a good lawyer, quickly.


Fool proofs? (Re: UBS fined $30,000 for a typing error, Kimmeringer)

Bertrand Meyer <Bertrand.Meyer@inf.ethz.ch>
Tue, 23 Jul 2013 23:57:36 +0200
In RISKS-27.37: "The system will now be changed to be fool-proved (again)."

Are you sure this phrasing is what is intended? To me a "fool-proved" system
is one in which the Coq/Isabelle/Spark verification was assigned to the team
member who perhaps wasn't the brightest in the lot. This might happen for
example to NAS if Martyn Thomas's suggestion in the same issue is followed.

There may be something very subtle here that eludes me, but if not I think
you mean fool-proof, or possibly fool-proofed. (Two occurrences in the
entry.)

  [lothar agrees.  PGN]
    [A fool and his proofs are soon parted...  Lindsay Marshall]


Re: Government Destroys $170k of Hardware ... (RISKS-27.37)

Rob Slade <rmslade@shaw.ca>
Tue, 23 Jul 2013 00:39:10 -0700
Once upon a time, many years ago, a school refused to take my advice
(mediated through my brother) as to what to do about a very simple computer
virus infection.  The infection in question was Stoned, which was a boot
sector infector.  BSIs generally do not affect data, and (and this is the
important point) are not eliminated by deleting files on the computer, and
often not even by reformatting the hard disk.  (At the time there were at
least a dozen simple utilities for removing Stoned, most of them free.)

The school decided to cleanse its entire computer network by boxing it up,
shipping it back to the store, and having the store reformat everything.
Which the store did.  The school lost its entire database of student
records, and all databases for the library.  Everything had to be
re-entered.  By hand.

I've always thought this was the height of computer virus stupidity, and
that the days when anyone would be so foolish were long gone.

I was wrong.  On both counts.

Malware is my field, and so I often sound like a bit of a nut, pointing out
issues that most people consider minor.  However, malware, while now
recognized as a threat, is a field that extremely few people, even in the
information security field, study in any depth.  Most general security texts
(and, believe me, I know almost all of them) touch on it only tangentially,
and often provide advice that is long out of date.

With that sort of background, I can, unfortunately, see this sort of thing
happening again.

victoria.tc.ca/techrev/rms.htm http://www.infosecbc.org/links
http://blogs.securiteam.com/index.php/archives/author/p1/


Hardware destruction in perspective (Re: RISKS-27.37)

Steve Lamont
Wed, 24 Jul 2013 08:15:12 -0700
> This is a story about government incompetence on the grossest, most
> unforgivable scale. Here's how the Economic Development Administration
> unnecessarily spent $2.75 million to fight a common case of malware.

While this is indeed a waste of time and money, a bit of perspective:

At current funding levels, $170,000 is approximately what the government
spends in 54 seconds in Afghanistan and Iraq.

	http://costofwar.com/about/counters/

The entire enterprise ($2.75 million) comes down to a bit under 15 minutes
of war.

According to my calculations, we're spending $3,125 a *second* for our two
wars and I'd be hard put to show how much economic development we've gotten
from either.   spl
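
Taking the $3,125-per-second figure as given, the arithmetic is consistent:

  RATE = 3125                    # dollars per second, per the costofwar.com counter
  print(170_000 / RATE)          # ~54.4 seconds
  print(2_750_000 / RATE / 60)   # ~14.7 minutes, "a bit under 15 minutes"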


Re: "How the Pentagon's payroll quagmire traps soldiers (RISKS-27.37)

Gene Wirchenko <genew@telus.net>
Mon, 22 Jul 2013 20:07:09 -0700
http://preview.reuters.com/2013/7/9/wounded-in-battle-stiffed-by-the-pentagon
is a great article, but I suppose that it is ironic that Reuters has screwed
up in the implementation.  I got a message that I needed to have JavaScript
enabled to read the article.  The whole article was sent to my browser
though.  I went through the source, removed the <noscript></noscript> block,
saved the file locally, and then was able to read it.
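
Gene did that surgery by hand, but the same workaround can be scripted; a
minimal sketch, where the URL is passed on the command line and the output
filename is arbitrary:

  import re, sys, urllib.request

  # Fetch the page, drop any <noscript>...</noscript> blocks, and save a
  # local copy that can be read without JavaScript.
  html = urllib.request.urlopen(sys.argv[1]).read().decode("utf-8", errors="replace")
  cleaned = re.sub(r"<noscript>.*?</noscript>", "", html,
                   flags=re.DOTALL | re.IGNORECASE)
  with open("article.html", "w", encoding="utf-8") as f:
      f.write(cleaned)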


Re: "How to Build Versatile and Reusable Software" (RISKS-27.37)

Gene Wirchenko <genew@telus.net>
Mon, 22 Jul 2013 20:27:49 -0700
Another amusing link:

http://www.law.com/jsp/lawtechnologynews/PubArticleLTN.jsp?id02611396558&kw=How_to_Build_Versatile_and_Reusable_Software

I called it up, read the first page, then clicked for the second, and only
then noted the message on the page of "A browser or device that allows
javascript is required to view this content."  The second page loaded just
fine.


REVIEW: "Intelligent Internal Control and Risk Management", Leitch

Rob Slade <rmslade@shaw.ca>
Mon, 22 Jul 2013 12:27:57 -0700
BKIICARM.RVW   20121210

"Intelligent Internal Control and Risk Management", Matthew Leitch,
2008, 978-0-566-08799-8, U$144.95
%A   Matthew Leitch
%C   Gower House, Croft Rd, Aldershot, Hampshire, GU11 3HR, England
%D   2008
%G   978-0-566-08799-8 0-566-08799-5
%I   Gower Publishing Limited
%O   U$114.95 www.gowerpub.com
%O  http://www.amazon.com/exec/obidos/ASIN/0566087995/robsladesinterne
  http://www.amazon.co.uk/exec/obidos/ASIN/0566087995/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0566087995/robsladesin03-20
%O   Audience i- Tech 1 Writing 1 (see revfaq.htm for explanation)
%P   253 p.
%T   "Intelligent Internal Control and Risk Management"

The introduction indicates that this book is written from the risk
management perspective of the financial services industry, with a
concentration on Sarbanes-Oxley, COSO, and related frameworks.  There
is an implication that the emphasis is on designing new controls.

Part one, "The Bigger Picture," provides a history of risk management
and internal controls.  Chapter one asks how much improvement is
possible through additional controls.  The author's statement that
"[w]hen an auditor, especially an external auditor, recommends an
improvement control it is usually with little concern for the cost of
implementing or operating that control [or improved value].  The
auditor wants to feel `covered' by having recommended something in the
face of a risk that exists, at least in theory" is one that is
familiar to anyone in the security field.  Leitch goes on to note that
there is a disparity between providing real value and revenue
assurance, and the intent of this work is increasing the value of
business risk controls.  The benefits of trying quality management
techniques, as well as those of quantitative risk management, are
promoted in chapter two.  Chapter three appears to be a collection of
somewhat random thoughts on risk.  Psychological factors in assessing
risk, and the fact that controls have to be stark enough to make
people aware of upcoming dangers, are discussed in chapter four.

Part two turns to a large set of controls, and examines when to use, and not
to use, them.  Chapter five introduces the list, arrangement, and structure.
Controls that generate other controls (frequently management processes) are
reviewed in chapter six.  For each control there is a title, example,
statement of need, opening thesis, discussion, closing recommendation, and
summary relating to other controls.  Most are one to three pages in length.
Audit and monitoring controls are dealt with in chapter seven.  Adaptation
is the topic of chapter eight.  (There is a longer lead-in discussion to
these controls, since, inherently, they deal with change, to which people,
business, and control processes are highly resistant.)  Chapter nine notes
issues of protection and reliability.  The corrective controls in chapter
ten are conceptually related to those in chapter seven.

Part three looks at change for improvement, rather than just for the
sake of change.  Chapter eleven suggests means of promoting good
behaviours.  A Risk and Uncertainty Management Assessment (RUMA) tool
is presented in chapter twelve, but, frankly, I can't see that it goes
beyond thinking out alternative courses of action.  Barriers to
improvement are noted in chapter thirteen.  Roles in the organization,
and their relation to risk management, are outlined in chapter
fourteen.  Chapter fifteen examines the special needs for innovative
projects.  Ways to address restrictive ideology are mentioned in
chapter sixteen.  Seven areas that Leitch advises should be explored
conclude the book in chapter seventeen.

A number of interesting ideas are presented for consideration in
regard to the choice and design of controls.  However, the text is not
a guidebook for producing actual control systems.

copyright, Robert M. Slade   2013   BKIICARM.RVW   20121210
rslade@vcn.bc.ca     slade@victoria.tc.ca     rslade@computercrime.org
victoria.tc.ca/techrev/rms.htm http://www.infosecbc.org/links
http://blogs.securiteam.com/index.php/archives/author/p1/
http://twitter.com/rslade
