The RISKS Digest
Volume 29 Issue 60

Thursday, 14th July 2016

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether or not you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…


Tesla driver dies in crash while operating on Autopilot
Self-driving car fatal accident
US Regulators Investigating Tesla Over Use of 'Autopilot' Mode...
The Moral Dilemma of Driverless Cars: Save The Driver or Save The Crowd?
"Federal agency probing Tesla's Autopilot feature after fatal crash"
Stephanie Condon
People Want Driverless Cars with Utilitarian Ethics, Unless They're a Passenger
Gabe Goldberg
Risks of AI too complex to make sense of
Motherboard via Werner
Stanford Mall robot runs over small child
Jen Nowell, PGN-ed
Dallas Shooter Killed By Bomb Robot In Policing First
Allee Manning
Move over, sapient pearwood
Gizmag via paul wallich
"Volkswagen to pay up to $14.7 billion in US emissions scandal probe"
Charlie Osborne
Swiss trains fail on curious corner case
Faulty image analysis software may invalidate 40,000 fMRI studies
Bruce Horrocks
Web-Impac's would-be voting software deeply flawed
Multitasking Drains Your Brain's Energy Reserves
Quartz via SlashDot
Truth is in danger as new techniques used to stop journalists covering the news
"How technology disrupted the truth"
The Guardian
Adventures in SRE-land: Welcome to Google Mission Control
Your Car's Studying You Closely and Everyone Wants the Data
Bloom via Gabe Goldberg
Uber Plans To Start Monitoring Their Drivers' Behavior
Info on RISKS (comp.risks)

Tesla driver dies in crash while operating on Autopilot

"Peter G. Neumann" <>
Fri, 1 Jul 2016 7:23:33 PDT
  This incident occurred on 7 May 2016, but is reported on the front page of
  *The New York Times* in an article by Bill Vlasic and Neal Boudette on 1
  Jul 2016.  In short, it is the first known fatal accident involving a
  vehicle under automated control.  Joshua Brown (a Navy veteran who had
  founded his own technology consulting firm) was the "driver".  "Neither
  the autopilot nor the driver noticed the white side of a tractor-trailer
  [which made a left turn in front of the Tesla] against a brightly lit sky,
  so the brake was not applied.  The crash casts doubts on whether
  autonomous vehicles in general can consistently make split-second,
  life-or-death driving decisions on the highway."

  Karl Brauer (a Kelley Blue Book analyst): "This is a bit of a wake-up
  call.  People were maybe too aggressive in taking the position that we're
  almost there, this technology is going to be in the market very soon,
  maybe need to reassess that."

  Although Elon Musk has praised the Model S as "probably better than a
  person right now", Tesla noted on 30 Jun that its use "requires explicit
  acknowledgment that the system is new technology."  [PGN-ed]

Perfection is obviously unrealizable for any computer-based system.  But a
death is a death, and we should do what we can to reduce every one.  Drunk
driving is clearly a problem.  Drive-by shootings are rare events, but gun
controls for loony-tune folks might help.  The problems are holistic (as
usual), and there are many relevant factors.  But I would think the
expectations on the operational risks of automated vehicles and automated
highways will be much higher than those for conventional vehicles and their
fallible drivers.  It will be interesting to see how the insurance industry
assesses the difference.

For example, who is liable for accidents involving self-driving or
computer-assisted vehicles?  Lawsuits tend to go for deep pockets.  There
are many issues here.  Perhaps when you buy an automated vehicle, the
contract says the car is "experimental" and the maker explicitly disclaims
all liability and responsibility, and requires the person in the driver's
seat to be awake and aware.  Perhaps their lawyers would claim that the
driver was negligent to have faith in the software/hardware system.  Even
more intriguing might be accidents involving multiple self-driving vehicles.
And what happens when the police insist on backdoors to be able to redirect
or stop the vehicle for inspection or arrest?  And then there is the fantasy
of the automated highway.

Lots of issues remain to be resolved, and I suspect this will all happen --
but hopefully very slowly and carefully.  Let's hope that the snake-oil
salesmen peddling supposedly secure point solutions don't do the same for
the automated highway—as they are already doing for the Internet of Things.

For the record, Monty Solomon noted a whole string of NYT articles on
Tesla and related topics in early July:

and others as well:

  [I'm working on a blog on this subject, which hopefully will appear
  shortly in ACM's Ubiquity.  PGN]

Self-driving car fatal accident

"Alister Wm Macintyre \(Wow\)" <>
Sun, 10 Jul 2016 23:49:39 -0500
Human-driven cars get into accidents all the time, sometimes with fatal results.

Worldwide, over a million people are killed each year in auto crashes in
which self-driving vehicles were not involved.

In the USA, tens of thousands of people lose their lives every year in
highway crashes where self-driving was not a factor.

Now we have a death in an auto accident where the human in the driver seat
was not driving.  He was using the Autopilot of a Tesla model S, while he
watched a Harry Potter movie.

He is now dead, because the car's camera failed to distinguish a tractor
trailer against a bright sky, and the human was not paying attention.

There are times of day I don't like to be driving in certain directions,
because I cannot make out what a traffic light is communicating, when the
sun is right beside it.  I hold up my hand to try to block the sun, but
still see the traffic light.  By the time I figure it out, the lights have
changed.

According to witnesses of the Florida accident, the Tesla car went
underneath the tractor trailer, shearing off the top half of the car, and
continued at highway speeds, as if nothing had happened.

This one story will probably get more news media attention than the tens of
thousands of accident victims where a human was driving.

Why is the news media only now covering this story, two months after it
happened?

This Florida accident is being investigated by

* Florida Highway Patrol
* NHTSA = US National Highway Traffic Safety Administration
* NTSB = US National Transportation Safety Board

There has been more than one crash involving semi-autonomous driving,
though the others have not been fatal.  There's also a great deal of
interest in an
accident in Pennsylvania, where a Tesla X SUV rolled over, while the car was
in Auto-Pilot, according to the driver.  Tesla disagrees.

Some people claim this is not a case of a self-driving car, only a partially
self-driving one.  Tesla's self-driving is pretty limited compared to Google and
other competitors.  There are also ethical questions about the notion of
making auto drivers the testers for beta systems, which may not yet be ready
for all driving conditions.

US Regulators Investigating Tesla Over Use of 'Autopilot' Mode... (SlashDot)

Werner <>
Fri, 1 Jul 2016 16:07:15 +0200
  (Posted by BeauHD on Thursday June 30, 2016)

quoting a report from CNBC:

The U.S. National Highway Traffic Safety Administration said on Thursday it
is opening a preliminary investigation into 25,000 Tesla Motors Model S cars
after a fatal crash involving a vehicle using the "Autopilot" mode.  The
agency said the crash came in a 2015 Model S operating with automated
driving systems engaged, and "calls for an examination of the design and
performance of any driving aids in use at the time of the crash." It is the
first step before the agency could seek to order a recall if it believed the
vehicles were unsafe. Tesla said Thursday the death was "the first known
fatality in just over 130 million miles where Autopilot was activated,"
while a fatality happens once every 60 million miles worldwide. The electric
automaker said it "informed NHTSA about the incident immediately after it
occurred." The May crash occurred when a tractor trailer drove across a
divided highway, where a Tesla in autopilot mode was driving. The Model S
passed under the tractor trailer, and the bottom of the trailer hit the
Tesla vehicle's windshield.  Tesla quietly settled a lawsuit with a Model X
owner who claims his car's doors would open and close unpredictably,
smashing into his wife and other cars, and that the Model X's Auto-Pilot
feature poses a danger in the rain.


The Moral Dilemma of Driverless Cars: Save The Driver or Save The Crowd? (SlashDot)

Werner <>
Wed, 29 Jun 2016 21:10:25 +0200
(Posted by BeauHD on Tuesday June 28, 2016)
<> writes:

What should a driverless car with one rider do if it is faced with the
choice of swerving off the road into a tree or hitting a crowd of 10

The answer depends on whether you are the rider in the car or someone else
is, writes Peter Dizikes at MIT News. According to recent research most
people prefer autonomous vehicles to minimize casualties in situations of
extreme danger—except for the vehicles they would be riding in. "Most
people want to live in a world where cars will minimize casualties," says
Iyad Rahwan. "But everybody wants their own car to protect them at all
costs." The result is what the researchers call a "social dilemma," in which
people could end up making conditions less safe for everyone by acting in
their own self-interest. "If everybody does that, then we would end up in a
tragedy whereby the cars will not minimize casualties," says
Rahwan. Researchers conducted six surveys, using the online Mechanical Turk
public-opinion tool, <>
between June 2015 and November 2015. The results consistently showed that
people will take a utilitarian approach to the ethics of autonomous
vehicles, one emphasizing the sheer number of lives that could be saved. For
instance, 76 percent of respondents believe it is more moral for an
autonomous vehicle, should such a circumstance arise, to sacrifice one
passenger rather than 10 pedestrians. But the surveys also revealed a lack
of enthusiasm for buying or using a driverless car programmed to avoid
pedestrians at the expense of its own passengers. "This is a challenge that
should be on the mind of carmakers and regulators alike," the researchers
write. "For the time being, there seems to be no easy way to design
algorithms that would reconcile moral values and personal self-interest."

"Federal agency probing Tesla's Autopilot feature after fatal crash"

Gene Wirchenko <>
Fri, 01 Jul 2016 14:37:03 -0700
Stephanie Condon for Between the Lines, ZDNet, 30 Jun 2016
The National Highway Traffic Safety Administration has opened a preliminary
investigation into the advanced autonomous driving technology following a
May 7 accident.

People Want Driverless Cars with Utilitarian Ethics, Unless They're a Passenger

Gabe Goldberg <>
Wed, 6 Jul 2016 23:03:11 -0400
At some point in the nearer-than-might-be-comfortable future, an autonomous
vehicle (AV) will find itself in a situation where something has gone wrong,
and it has two options: either it can make a maneuver that will keep its
passenger safe while putting a pedestrian at risk, or it can make a
different maneuver that will keep the pedestrian safe while putting its
passenger at risk. What an AV does in situations like these will depend on
how it's been programmed: in other words, what ethical choice its software
tells it to make.

If there were clear ethical rules that society could agree on about how AVs
should behave when confronted with such decisions, we could just program
those in and be done with it. However, there are a near infinite number of
possible ethical problems, and within each one, the most ethical course of
action can vary from person to person. Furthermore, it's not just the
passengers who have a say in how AVs behave, but also the manufacturers, and
more likely than not, government regulators.

Gabriel Goldberg, Computers and Publishing, Inc.
3401 Silver Maple Place, Falls Church, VA 22042           (703) 204-0433

Risks of AI too complex to make sense of

Werner <>
Thu, 7 Jul 2016 00:02:16 +0200
(Motherboard/Vice, 6 Jul 2016)

"Sufficiently Advanced Technology is Indistinguishable from Magic"

Did you, too, feel like nodding knowingly with a smile when hearing,
reading, or thinking about that meme (Arthur C. Clarke's Third Law)?!?

but was that followed by  "White or Black Magic?!?"  thoughts?

..or "Any technology distinguishable from magic is insufficiently
advanced" (Gehm's corollary)?

...don't miss the knowing smiles when reading this article:

When AI Goes Wrong, We Won't Be Able to Ask It Why
(Written by Jordan Pearson, July 6, 2016)

Stanford Mall robot runs over small child (Jen Nowell)

"Peter G. Neumann" <>
Wed, 13 Jul 2016 9:12:00 PDT
Jen Nowell, *Palo Alto Daily Post*, front page story, 13 Jul 2016

Mall robot runs over tot; After two incidents, units are shut down

A 16-month-old boy was knocked over by a security robot at Stanford Shopping
Center in Palo Alto, which then ran over him, leaving him bruised and
shaken.

The 5-foot, 300-pound Knightscope K5 robot failed to stop as it approached
Harwin Cheng, hit him in the head, knocked him to the ground, and then ran
over his right foot.  His mother pulled him away just as the robot was about
to run over his left foot.

The robot uses a combination of cameras and sensors ...

The same robot had previously run over another child, so *all* K5
robots have been taken out of service!!!!

A logical guess for repetitive accidents of this type might be that the
sensors are positioned so that they cannot detect small children standing
alone!

Dallas Shooter Killed By Bomb Robot In Policing First (Allee Manning)

Hendricks Dewayne <>
July 9, 2016 at 6:32:28 AM EDT
  [Note:  This item comes from friend Jen Snow.  Jen's comment:
  'It is going to be an interesting next several years as technology starts
  to change society in radical ways.'  DLH]

Allee Manning, Vocativ, 8 Jul 2016
The robot used, however, is not uncommon—more than 350 U.S. police
departments have them

After hours of negotiations and an exchange of gunfire, the Dallas shooting
ended when police used a *bomb robot* to kill one of the shooting suspects
on Thursday night. While Dallas Police Chief David Brown did not
specifically describe the device, his language at a press conference
indicated that it was a bomb disposal robot that ultimately killed Micah
Xavier Johnson.

"We saw no other option but to use our bomb robot and place a device on its
extension for it to detonate where the suspect was.  Other options would
have exposed our officers to grave danger," Johnson had told the hostage
negotiator that police would eventually find the IEDs that he planted in the
downtown Dallas area.

The usage of this type of robotics technology to kill a civilian as a
policing mechanism is the first of its kind in the U.S., as bomb disposal
robots are typically used for the opposite purpose: to remove explosives
from an area in order to protect those in its immediate vicinity from the
loss of life. Sometimes they will do so by triggering a controlled
explosion. Normally, however, a human is not the target of those
explosions.

As Fusion reports, the use of robots weaponized with bombs for the purpose
of killing is a practice typically reserved for the U.S. military. In *The
Changing Character of War*, military historians outlined how MARCBOTs,
created for the purpose of detecting the enemy's presence and/or explosives,
were first repurposed by U.S. soldiers to kill during the war in Iraq.

Bomb disposal robots (properly termed Explosive Ordnance Disposal robots)
have been in use since 1972, when the U.S. military pioneered the
technology. But since then, these robots, which can now be operated
remotely, have become increasingly advanced. They've also become an
increasingly common tool used in U.S. policing since the Department of
Defense created a program for transferring surplus military equipment to
these departments in 1990. The Center for the Study of the Drone discovered
that this program has led to the procurement of these types of devices by
over 350 police departments across the country. [...]

  [See also 'Bomb Robot' Takes Down Dallas Gunman, but Raises Enforcement
  Questions, noted by Monty Solomon:

Move over, sapient pearwood (Gizmag)

paul wallich <>
Sat, 9 Jul 2016 15:30:01 -0400
Fantasy writer Terry Pratchett probably didn't consider his "Luggage", a
slavishly devoted mobile trunk that sometimes ate interlopers, as something
designers should aspire to.

> Olive is the brainchild of Iran-based Ikap Robotics, and although it
> may look like a standard piece of luggage, it has a Segway-like,
> self-balancing auto-locomotion system that maintains stability while
> riding on two wheels by using 3D accelerometers and gyroscopes. With
> an in-built stereoscopic camera, it can build up a visual map of its
> surroundings and follow its owner using skeleton tracker algorithms
> that is claimed to allow Olive to distinguish individuals even in
> crowded environments.

I'm having enough trouble trying to figure out all the potential risks of
something like this operating mostly as intended (consider the recent
"hoverboard" recall). Let alone what could be done if someone hacked
"intelligent" suitcases or—perish the thought—produced versions with
malevolent firmware.

"Volkswagen to pay up to $14.7 billion in US emissions scandal probe" (Charlie Osborne)

Gene Wirchenko <>
Thu, 30 Jun 2016 10:53:14 -0700
Charlie Osborne for Between the Lines, ZDNet, 29 Jun 2016
Customer deceit and circumventing software have cost the automaker
dearly—and the story isn't over.

Swiss trains fail on curious corner case

"Peter G. Neumann" <>
Mon, 11 Jul 2016 10:14:13 PDT
If the axle count of trains in Switzerland is a multiple of 2^8 (i.e., 256),
their control system does not detect the existence of that train!

  [Thanks to Steve Bellovin for spotting this one.  PGN]
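The item does not say why 256 is the magic number, but the obvious suspect
is an axle counter held in a single unsigned byte, so that the count is
effectively taken modulo 2^8.  A minimal sketch of that hypothesis (the
function name and register width are assumptions, not details from the
report):

```python
REGISTER_BITS = 8  # assumed: the axle counter is one unsigned byte

def axles_detected(true_axle_count):
    """Axle count as seen by a counter that wraps at 2**REGISTER_BITS."""
    return true_axle_count % (2 ** REGISTER_BITS)

# A 255-axle train is counted correctly...
print(axles_detected(255))  # 255
# ...but a 256-axle train registers zero axles: no train detected.
print(axles_detected(256))  # 0
print(axles_detected(512))  # 0 -- any multiple of 256 "vanishes"
```

The same wraparound pattern underlies many classic RISKS items, from
odometers to sequence numbers.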

Faulty image analysis software may invalidate 40,000 fMRI studies

Bruce Horrocks <>
Thu, 7 Jul 2016 21:14:15 +0100
  [Please read this to the end.  PGN]

A new paper [1] suggests that as many as 40,000 scientific studies that used
Functional Magnetic Resonance Imaging (fMRI) to analyse human brain activity
may be invalid because of a software fault common to all three of the most
popular image analysis packages.

... From the paper's significance statement:

"Functional MRI (fMRI) is 25 years old, yet surprisingly its most common
statistical methods have not been validated using real data. Here, we used
resting-state fMRI data from 499 healthy controls to conduct 3 million task
group analyses. Using this null data with different experimental designs, we
estimate the incidence of significant results. In theory, we should find 5%
false positives (for a significance threshold of 5%), but instead we found
that the most common software packages for fMRI analysis (SPM, FSL, AFNI)
can result in false-positive rates of up to 70%. These results question the
validity of some 40,000 fMRI studies and may have a large impact on the
interpretation of neuroimaging results."
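The paper's point is that the nominal 5% threshold and the empirical
false-positive rate can diverge wildly when the methods' spatial
assumptions fail.  As a baseline, here is a stdlib-only sketch (not the
paper's cluster analysis) of estimating an empirical false-positive rate
from repeated null "group comparisons"; when the test's assumptions do
hold, the estimate comes out near the nominal 5%:

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(2016)
trials, n, t_crit = 2000, 20, 2.02  # t_crit ~ two-sided 5% for ~38 df
false_positives = sum(
    abs(welch_t([random.gauss(0, 1) for _ in range(n)],
                [random.gauss(0, 1) for _ in range(n)])) > t_crit
    for _ in range(trials))
rate = false_positives / trials
print(rate)  # close to the nominal 0.05 when assumptions hold
```

What Eklund et al. found, in effect, is that the fMRI packages' analogue
of this check can come out at 70% rather than 5% on real null data.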

Two of the software related risks:

a) It is common to assume that software that is widely used must be
   reliable, yet 40,000 teams did not spot these flaws[2]. The authors
   identified a bug in one package that had been present for 15 years.

b) Quoting from the paper: "It is not feasible to redo 40,000 fMRI studies,
   and lamentable archiving and data-sharing practices mean most could not
   be reanalyzed either."

[1] "Cluster failure: Why fMRI inferences for spatial extent have inflated
false-positive rates" by Anders Eklund, Thomas E. Nichols and Hans
Knutsson. <>

[2] That's so many you begin to wonder if this paper might itself be wrong?
Expect to see a retraction in a future RISKS. ;-)

  [Also noted by Lauren Weinstein in *The Register*:]

  [And then there is this counter-argument, noted by Mark Thorson:

  The author (Neuroskeptic) notes that Eklund et al. have discovered a
  different kind of bug in AFNI, but it does not apply to FSL and SPM, and
  does not "invalidate 15 years of brain research."   PGN]

Web-Impac's would-be voting software deeply flawed

"Peter G. Neumann" <>
Tue, 12 Jul 2016 10:59:12 PDT

> Web-Impac's voter software could potentially change the way Americans vote
> and propel the United States election process into the 21st Century, and
> Web-Impac is featuring The World Votes, which is a virtual election, live
> and open to anyone with access to the Internet.

This system reportedly has a remarkable feature that renders it ridiculous
for any serious election.  With almost no effort, it is possible to vote as
often as you like, and have all of your votes count.  You can delete
cookies, or pop up a new tab, or probably other hacks, after which it
forgets you have already voted.

That would *REALLY* change the way we vote!

Multitasking Drains Your Brain's Energy Reserves (Quartz via SlashDot)

Werner <>
Mon, 4 Jul 2016 18:53:53 +0200
[ TANSTAAFL - but the Researchers report a not-obvious RISK-angle ]

Multitasking Drains Your Brain's Energy Reserves, Researchers Say
(Posted by EditorDavid on Sunday July 03, 2016)

quoting from an article in Quartz:

"When we attempt to multitask, we don't actually do more than one
activity at once, but quickly switch between them. And this switching is
exhausting. It uses up oxygenated glucose in the brain, running down the
same fuel that's needed to focus on a task...

"That switching comes with a biological cost that ends up making us feel
...much more quickly than if we sustain attention on one thing," says
Daniel Levitin, professor of behavioral neuroscience at McGill
University. "People eat more, they take more caffeine. Often what you
really need in that moment isn't caffeine, but just a break. If you
aren't taking regular breaks every couple of hours, your brain won't
benefit from that extra cup of coffee."

EditorDavid asks: Anyone have any anecdotal experiences that back this up?

Truth is in danger as new techniques used to stop journalists covering the news

Lauren Weinstein <>
Sun, 10 Jul 2016 19:21:35 -0700

  The truth is being suppressed across the world using a variety of methods,
  according to a special report in the 250th issue of Index on Censorship
  magazine.  Physical violence is not the only method being used to stop
  news being published, says editor Rachael Jolley in the Danger in Truth:
  Truth in Danger report. As well as kidnapping and murders, financial
  pressure and defamation legislation is being used, the report reveals.
  "In many countries around the world, journalists have lost their status as
  observers and now come under direct attack."

"How technology disrupted the truth"

Lauren Weinstein <>
Wed, 13 Jul 2016 13:06:29 -0700
  Now, we are caught in a series of confusing battles between opposing
  forces: between truth and falsehood, fact and rumour, kindness and
  cruelty; between the few and the many, the connected and the alienated;
  between the open platform of the web as its architects envisioned it and
  the gated enclosures of Facebook and other social networks; between an
  informed public and a misguided mob.  What is common to these struggles --
  and what makes their resolution an urgent matter—is that they all
  involve the diminishing status of truth.  This does not mean that there
  are no truths. It simply means, as this year has made very clear, that we
  cannot agree on what those truths are, and when there is no consensus
  about the truth and no way to achieve it, chaos soon follows.
  Increasingly, what counts as a fact is merely a view that someone feels to
  be true—and technology has made it very easy for these "facts" to
  circulate with a speed and reach that was unimaginable in the Gutenberg
  era (or even a decade ago). A dubious story about Cameron and a pig
  appears in a tabloid one morning, and by noon, it has flown around the
  world on social media and turned up in trusted news sources everywhere.
  This may seem like a small matter, but its consequences are enormous.

Adventures in SRE-land: Welcome to Google Mission Control

Lauren Weinstein <>
Mon, 11 Jul 2016 12:45:43 -0700
  [NOTE: SRE refers to Site Reliability Engineering.  PGN]

  But what is an SRE? According to Google Vice President of Engineering Ben
  Treynor Sloss, who coined the term SRE, "SRE is what happens when you ask
  a software engineer to design an operations function." In 2003, Ben was
  asked to lead Google's existing "Production Team" which at the time
  consisted of seven software engineers. The team started as a software
  engineering team, and since Ben is also a software engineer, he continued
  to grow a team that he, as a software engineer, would still want to work
  on. Thirteen years later, Ben leads a team of roughly 2,000 SREs, and it
  is still a team that software engineers want to work on. About half of the
  engineers who do a Mission Control rotation choose to remain an SRE after
  their rotation is complete.

Your Car's Studying You Closely and Everyone Wants the Data

Gabe Goldberg <>
Tue, 12 Jul 2016 08:33:57 -0400
As you may have suspected, your car is spying on you. Fire up a new model
and it updates more than 100,000 data points, including rather personal
details like the front-seat passenger's weight. The navigation system tracks
every mile and remembers your route to work. The vehicular brain is smart
enough to help avoid traffic jams or score parking spaces, and soon will be
able to log not only your itineraries but your Internet shopping patterns.

To read the entire article, go to

Uber Plans To Start Monitoring Their Drivers' Behavior (SlashDot)

Werner <>
Mon, 4 Jul 2016 19:31:19 +0200
[ Everyone (and their dogs) want to Monitor EveryOne and EveryThing...]

Uber Plans To Start Monitoring Their Drivers' Behavior
(Posted by EditorDavid on Sunday July 03, 2016)

An anonymous SlashDot reader writes:

Uber "has developed a new technology that it plans on using to track
driver behavior,
...specifically if drivers are traveling too fast or braking too
harshly..." according to the San Francisco Chronicle, which writes that
"Information about how a driver is performing will be shared with Uber,
but will also be shared with the driver, along with safety tips on how
they can improve their performance." Uber will roll this out as an
update to their app, using existing smartphone functionality, and "in
some cities Uber will also monitor whether or not Uber drivers are
picking up their phones (either to text or even just to look at maps)
during a ride using the phone's gyroscope."
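For illustration only: harsh-braking detection of the kind described can be
as simple as thresholding longitudinal accelerometer samples.  The
threshold and sampling interval below are invented values, not anything
Uber has published:

```python
HARSH_DECEL = -3.5  # m/s^2 along direction of travel; assumed threshold

def harsh_braking_events(accel_samples):
    """Indices of samples where deceleration exceeds the threshold."""
    return [i for i, a in enumerate(accel_samples) if a < HARSH_DECEL]

trip = [0.2, -1.0, -4.2, -0.5, -3.8]  # say, one sample per second
print(harsh_braking_events(trip))  # [2, 4]
```

The RISKS angle is less the arithmetic than who sees the resulting event
stream, and for how long it is retained.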

Ride-sharing companies seem to be growing more and more powerful. One
Florida county actually received a grant to offer free Uber rides to
low-income workers, and to allow the county transit authority to arrange
rides for those residents without a smartphone. Uber recently even
became the "official designated driving app" for Mothers Against Drunk
Driving, and published a graph suggesting Uber pickups correlate to a
drop in drunk-driving arrests. And in other news, Uber rides have
apparently even been used by a group of human traffickers to smuggle
migrants from Central America into the United States.

Please report problems with the web pages to the maintainer