The RISKS Digest
Volume 30 Issue 61

Tuesday, 27th March 2018

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Self-Driving Car Had a Fatal Accident: Now What?
Don Norman
Re: Uber car in autonomous mode kills pedestrian
WashPo
Re: The Unstoppable Momentum of Self-Driving Cars
Shapir, R 30.53
"Why Big Tech Needs Big Ethics—Right Now!"
Lauren's Blog
Even Without Cambridge Analytica, the Trump Campaign Already Had Everyone's Data
Emily Taylor via Diego Latella
Yet another security vulnerability afflicts India's citizen database
Prashanth Mundkur
Schools Are Using AI to Check Students' Social Media for Warning Signs of Violence
Gizmodo
Bad science puts innocent people in jail—and keeps them there
WashPo
GrayKey iPhone unlocker poses serious security concerns
Malwarebytes Labs
History Shows DDoS Volumes to Keep Rising Despite Mitigation Efforts
EWeek
"The new social media imperative: Distance yourself"
Mike Elgan
$1 million worth of Iron Dome missiles fired at nothing due to 'oversensitivity'
The Times of Israel
"Cryptocurrency mining malware uses five-year old vulnerability to mine Monero on Linux servers"
Danny Palmer
Tamper-proof currency wallet backdoored by a 15-year-old
Ars Technica
Cybersecurity key to S'pore's survival: CSA chief
Straits Times
Electronic footrest traps customer, who later dies
Jennifer Hassan
Theranos fraud duped billionaires, but Silicon Valley culture blamed
Tom Foremski
"Google Assistant now lets you send and request money from your contacts"
Stephanie Condon
Electric chairs in England?
Yahoo!
Sex Trafficking Bill Heads to Trump, Over Silicon Valley Concerns
NYTimes
Re: Look-Alike Domains and Visual Confusion
Kurt Seifried
Re: Lessons for RISKS from the Florida bridge collapse
Dick Mills
Info on RISKS (comp.risks)

Self-Driving Car Had a Fatal Accident: Now What?

Don Norman <dnorman@ucsd.edu>
Fri, 23 Mar 2018 10:08:46 -0700
(Submitted as an Op-Ed piece to the San Diego Union-Tribune)

Imperfect automation, continually getting better? Or distracted drivers,
continually getting worse? Choose.

Recently, one of Uber's autonomous automobiles was involved in an accident
where a pedestrian was killed. What lesson should we learn from this
incident? During the three years that my colleagues and I have been doing
research on self-driving cars, this is the first death. Compare this single
death with the 120,000 people who have been killed in automobile accidents
in the United States in that same period: roughly 100 people each day.

Fully autonomous cars have driven around four million miles, compared with
the nearly nine trillion miles driven by American drivers in that same
period. The accident record is impressively low: in four million miles of
driving, one death, compared to 40 deaths in regular driving.

Automobile manufacturers are rushing to add more and more automation to
their existing cars, promising to have fully automated vehicles within a few
years. They need to slow down.

Why should we have fully automated cars? Because they have many benefits:
fewer deaths, injuries, and accidents; no more drunk or distracted driving;
more efficient commuting; and increased mobility for those who cannot or do
not wish to drive.

However, we need caution. New technology is always problematical. It can
take years—decades—to make technology safe and reliable in difficult
environmental conditions, unexpected situations, and the ever-unpredictable
behavior of pedestrians, bicycles, motorcycles, and skateboarders (to name a
few). At the Design Lab at the University of California, San Diego, our
researchers have observed skateboarders and bicyclists zooming down
sidewalks into the streets and people crossing city streets with eyes firmly
fixed on phones or tablets. Driving on major highways is easy compared to
urban and city streets.

Today, tests are performed with safety drivers, people inside the vehicle
ready to take over if something goes wrong. This is a false hope. Almost 50
years of research shows that people are not good at monitoring for long
hours and then suddenly leaping into action when difficulties arise.

In the Uber accident, the video from the car's camera of the safety driver
shows him looking down for roughly five seconds and looking up just before
the accident: he only had time to register horror. The car was traveling at
40 mph, which means that in those five seconds it had traveled almost 300
feet. But even if the driver had kept his eyes on the road the whole time,
he might not have been able to react quickly enough. Studies have shown
that it can take up to 20 seconds for safety drivers to respond. Safety
drivers do not ensure safety.
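
  [A quick back-of-the-envelope check of the distance figure, as a minimal
  Python sketch; the 40-mph speed and the 5-second glance are the figures
  reported above:

    # Distance covered while the safety driver was looking down.
    MPH_TO_FPS = 5280 / 3600    # one mile per hour, in feet per second
    speed_mph = 40              # reported speed of the Uber vehicle
    look_away_s = 5             # seconds the driver looked away
    distance_ft = speed_mph * MPH_TO_FPS * look_away_s
    print(f"{distance_ft:.0f} feet")   # prints "293 feet", i.e., almost 300

  ]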

The Food and Drug Administration (FDA) requires the medical industry to
behave cautiously in its introduction of new devices and medications: new
treatments or devices can sometimes do patients more harm than good, so each
must undergo clinical trials before it can be released. We need something
similar for self-driving autos: a neutral, trusted agency to certify safety
before we let them on the roads. This could be a government agency or a
high-quality private company such as UL. Insisting on a safety certificate
might impose a salutary slowdown on today's mad race.

The potential for autonomous vehicles to produce tremendous savings in lives
and injuries while increasing our quality of life provides strong support
for their eventual introduction. Nonetheless, just as new medicines and
medical devices enhance lives yet are introduced cautiously, with carefully
controlled tests, we must do the same with our autonomous vehicles. I look
forward to the day when my self-driving car will free me from the tedium and
danger of driving. But that day is not yet here.

Don Norman

Prof. and Director, DesignLab, UC San Diego
dnorman@ucsd.edu designlab.ucsd.edu/  http://www.jnd.org


Re: Uber car in autonomous mode kills pedestrian (WashPo)

Rob Bailey <rob@wm8s.com>
Wed, 21 Mar 2018 05:42:10 -0500
From what I've read, Arizona has few or no regulations governing autonomous
vehicles.  Does anyone know if the conditions of this test drive were
governed by special rules, i.e., rules other than chapter 3, title 28 of
the Arizona Code? That chapter only requires drivers to yield the
right-of-way to pedestrians *crossing the roadway within a crosswalk*.  Ariz.
Rev. Stat. Ann. § 28-792.  The following section completes the rule:

  "A pedestrian crossing a roadway at any point other than within a marked
  crosswalk or within an unmarked crosswalk at an intersection shall yield
  the right-of-way to all vehicles on the roadway."  Ariz. Rev. Stat.
  Ann. 28-793(A).  Otherwise, drivers need only "exercise due care."
  Ariz. Rev. Stat. Ann. 28-794.  I've seen an aerial photograph of the
  accident scene, and there's not a crosswalk in sight.  Without knowing
  more, it seems at least plausible that Uber's robot had the right-of-way.

These rules govern "the driver of a vehicle."  As others have asked, who
[what?] is that?


Re: The Unstoppable Momentum of Self-Driving Cars (Shapir, R 30.53)

<kaufmann@winning.com>
Wed, 21 Mar 2018 21:36:04 +0100
  [Note: this message was drafted well before the March 19 accident in which
  a pedestrian was killed by a self-driving car in Arizona.  Subsequent
  edits have not appreciably changed my comments.]

Amos Shapir's comments in RISKS 30.53 (Re: The Unstoppable Momentum of
Self-Driving Cars) crystallized some thoughts for me, which I submit for
your consideration and possible inclusion in the Digest. I claim no special
expertise, apart from being both a licensed driver and a RISKS reader for
over thirty years.

I am very skeptical about a supposed inevitable future utopia of widespread
autonomous vehicles and dismayed at its portrayal across the media as a fait
accompli, apart from "just a small matter of programming." This future seems
to be a technological Rorschach, wherein people envision an outcome that
solves their pet transportation gripe while discounting or disregarding
problem areas and difficult questions. I have seen little critical
examination or reporting of many key issues raised by autonomous
vehicles. Apparently in this miraculous future, nothing ever breaks,
malfunctions, or is misused, economics don't matter, and there are no bad
actors.

As a long-time RISKS reader, I see many problems in both technological
implementation and policy issues. A summary of my list would still be a long
post, and the material could easily be expanded into a book-length
treatise. I wait to hear when self-driving cars successfully complete a
million miles without human intervention in Boston and its suburbs during
winter snowstorms.

The most fundamental issue is one raised by Mr. Shapir, and which I would
express thus: driving is not only a technical exercise, it is also a social
exercise. And not only with other drivers—there is an implied social
contract and possible interaction with bicyclists, pedestrians, etc. Notice
how many times while driving one makes a judgment about the intent of
other drivers or pedestrians and then acts on that assessment.

Consider one example: how does an autonomous vehicle respond to a police
officer directing traffic at a broken signal? What if the signal is working
normally, but an officer is directing traffic to disregard the signal? What
if it's not an officer but a person wearing a Halloween costume and a
Crackerjack badge? What if it's a civilian who has taken it upon themselves
to direct traffic, as has happened during widespread blackouts or other
emergencies?

Our current society is so oriented around humans driving that I foresee
major changes will be needed to bring about a future of predominantly
autonomous vehicles. Already there are signs of how this will play out. In
response to reports of accidents involving self-driving cars, it is being
suggested that people will have to be "educated" to learn how to share the
road with them. In other words, it is the humans who will have to adapt to
machines; the machines will not have to adapt to human drivers.

In closing I would mention briefly the common misperception that
self-driving cars will somehow eliminate the possibility of human error in
driving.  Until autonomous vehicles are designed and built by
extraterrestrials, the possibility for human error cannot be eliminated,
only moved somewhere else. In my view, what is being proposed is no less
than extending the Internet of Things to include millions of 2000-pound
autonomous wheeled robots set loose onto public streets. The overwhelming
evidence to date is that we are incapable of doing so safely and securely.


"Why Big Tech Needs Big Ethics—Right Now!" (Lauren's Blog)

Lauren Weinstein <lauren@vortex.com>
Sat, 24 Mar 2018 10:10:35 -0700
https://lauren.vortex.com/2018/03/24/why-big-tech-needs-big-ethics-right-now

The Cambridge Analytica user trust debacle currently enveloping Facebook has
once again brought into sharp focus a foundational issue that permeates Big
Tech—the complex interrelationships between engineering, marketing, and
ethics.

I've spent many years pounding on this problem, often to be told by my
technologist colleagues that "Our job is just to build the stuff—let the
politicians figure out the ethics!"

That attitude has always chilled me to the bone—let the *politicians*
handle the ethics relating to complicated technologies?  (Or anything else
for that matter?) Excuse me, are we living on the same planet? On the same
timeline? Hello???

So I almost choked on my coffee when I saw articles saying that Facebook was
now suggesting the need for government regulation of their operations, aka
"Stop us before we screw our users yet again!"

The last thing we need is the politicians involved. They by and large don't
understand what we're doing, and they generally operate on the basis of image
and political expediency. Politicians touching tech is typically poison.

But the status quo of Big Tech is untenable also. Google is a wonderful firm
with great ideals, but with continuing user support and accessibility
problems. Facebook strikes me, frankly, as having a basically evil business
model. Apple is handing user data and crypto keys over to the censoring
Chinese dictatorship. Microsoft, and the rest—who the hell knows from day
to day?

One aspect that they've all shared is the "move fast and break things"
mantra of Silicon Valley, and a tendency to operate on the basis that "you
never want to ask permission, just apologize later if things go wrong."

These attitudes just aren't going to work going forward. These firms (and
their users!) are now in the crosshairs of the politicians, who see rigorous
regulation of these firms as key to their political futures, and they intend
to accomplish this by making Big Tech "the fall guy" for a range of
perceived evils—smoothing the way for various forms of micromanaged,
government-imposed information control and censorship.

As we've already seen in Russia, China, and even increasingly in Europe,
this is indeed the path to tyranny. Assuming that the USA is invulnerable to
these forces would be stupidity to the max.

For too long, user support and ethical questions have had second-class
status at most tech firms. It's not that these concerns don't exist at all,
it's that they're often very low in the product priority hierarchies.

This must change.

Ethics, user trust, and user support issues must proactively rise to the top
of these hierarchies, lest opportunistic politicians leverage the existing
situation for the imposition of knee-jerk "solutions" that will not only
seriously damage these firms, but will ultimately be devastating to their
users and broader communities as well.

There have long existed corporate roles in various "traditional"
industries—industries that long ago learned how to avoid being easily
steamrolled by the politicians—to help avoid these dilemmas.

Full-time ethicists and ombudsmen, for example, can play crucial roles in
these respects, by helping firms to understand the cross-product, cross-team
implications of their projects in relation to internal needs, user
requirements, and overall effects on the world at large.

Many Internet-related firms have resisted the idea of accepting these roles
within their corporate ranks, believing that their other management and
public relations employees can fulfill those functions.

But in reality—and the continuing Facebook privacy disasters are but one
set of examples—it takes a specific kind of longitudinal, cross-team
approach to seriously, adequately, and successfully address these escalating
issues.

Another argument heard against ombudsman and ethicist roles is the concern
that they would have "veto" power over product decisions. This is a
fallacious argument. These roles need not necessarily imply any sort of
launch or other veto abilities, and can be purely advisory in terms of
internal policy decisions. But having the input of persons with these skill
sets in the ongoing decision-making process is still crucial—and lacking
at many of these major firms.

The time is short for firms to grasp the nettle in these regards.
Politicians around the world—not just in traditional tyrannies—are
taking advantage of the publicly perceived ethical and user support problems
at these firms.

All through human history, governments have naturally gravitated toward
controlling the information available to citizens—sometimes with laudable
motives, always with horrific results.

Internet technologies provide governments with a veritable and irresistible
"candy store" of possibilities for government-imposed censorship and other
information control.

A key step that these firms must take to help stave off such dark outcomes
is to move immediately to make Big Ethics a key part of their corporate DNA.

To do otherwise, or even to hesitate in making such changes, could
easily be tantamount to total surrender.


Even Without Cambridge Analytica, the Trump Campaign Already Had Everyone's Data (Emily Taylor)

Diego Latella <Diego.Latella@isti.cnr.it>
Fri, 23 Mar 2018 14:43:03 +0100
  Although this is nothing really new, repetita juvant (repetition helps),
  especially when it comes from an authoritative source:

Emily Taylor - Chatham House
https://email-chathamhouse.org/1S3M-5JDSC-NUSXMS-322YLV-1/c.aspx

The quest for a war-free world has a basic purpose: survival.  But if in the
process we learn how to achieve it by love rather than by fear, by kindness
rather than compulsion; if in the process we learn how to combine the
essential with the enjoyable, the expedient with the benevolent, the
practical with the beautiful, this will be an extra incentive to embark on
this great task.

  [Above all, remember your humanity. —Sir Joseph Rotblat]

Dott. Diego Latella, CNR-ISTI, Via Moruzzi 1, 56124 Pisa, Italy
(http://www.isti.cnr.it)


Yet another security vulnerability afflicts India's citizen database

Prashanth Mundkur <prashanth.mundkur@sri.com>
Sat, 24 Mar 2018 19:14:10 -0700
Sadly, this is only to be expected of the current Indian government.  The
delicious irony is that it is making angry noises about Facebook's and
Cambridge Analytica's activities compromising the data of Indian citizens:
https://theprint.in/politics/bhagwat-ravi-shankar-prasad-india-against-facebook/43722/


Schools Are Using AI to Check Students' Social Media for Warning Signs of Violence (Gizmodo)

Lauren Weinstein <lauren@vortex.com>
Thu, 22 Mar 2018 16:11:56 -0700
https://gizmodo.com/schools-are-using-ai-to-check-students-social-media-for-1824002976

  Margulis admits there are false positives, where someone is flagged when
  they don't pose a risk, but critically, there can also be false
  negatives—students deemed unremarkable by the AI who go on to do
  violence. Experts are worried that unleashing this technology in schools
  will only replicate the imbalances we see when these tools are used in
  public policing.  "This is an expansion of the schools' ability to police
  what students are doing inside of school or on campus to their
  outside-of-school conduct," says Kade Crockford, who directs the
  Technology for Liberty Program at the ACLU of Massachusetts. "In many
  cases across the country, schools have been using social media
  surveillance tools in ways that have harmed, specifically, students of
  color. So we certainly have concerns about technologies like this being
  used to expand what we call the school-to-prison pipeline."

Hmm. Now what would a smart but mentally ill kid do in this instance?  How
about creating a benign social media presence to throw authorities off the
track? You think kids aren't smart enough to do that? You're fooling
yourself! What's the government gonna do as the violent folks in our midst
learn not to post photos of guns—and to turn off their cellphones long
before committing crimes? Don't assume they're all stupid people. They're
not!


Bad science puts innocent people in jail—and keeps them there (WashPo)

Lauren Weinstein <lauren@vortex.com>
Sat, 24 Mar 2018 17:01:39 -0700
NNSquad
https://www.washingtonpost.com/outlook/bad-science-puts-innocent-people-in-jail--and-keeps-them-there/2018/03/20/f1fffd08-263e-11e8-b79d-f3d931db7f68_story.html

  Since the onset in the 1990s of DNA testing—which, unlike most fields
  of forensics, was born in the scientific community—we've learned that
  many forensic specialities aren't nearly as accurate as their
  practitioners have claimed.  Studies from the National Academy of Sciences
  and the President's Council of Advisors on Science and Technology have
  concluded that there's insufficient research to support the claims of the
  broad field of "pattern matching" forensics, which includes analyses of
  such things as hair fiber, bite marks, "tool marks" and tire tread.  These
  forensic specialties were never subjected to the rigors of scientific
  inquiry—double-blind testing, peer review—before they were accepted
  in courtrooms. Most are entirely subjective: An analyst will look at two
  marks or patterns and determine whether they're a "match." Most of these
  disciplines can't even calculate a margin of error.


GrayKey iPhone unlocker poses serious security concerns (Malwarebytes Labs)

Gabe Goldberg <gabe@gabegold.com>
Sun, 25 Mar 2018 21:54:08 -0400
Ever since the case of the San Bernardino shooter pitted Apple against the
FBI over the unlocking of an iPhone, opinions have been split on providing
backdoor access to the iPhone for law enforcement. Some felt that Apple was
aiding and abetting a felony by refusing to create a special version of iOS
with a backdoor for accessing the phone's data.  Others believed that it's
impossible to give backdoor access to law enforcement without threatening
the security of law-abiding citizens.

In an interesting twist, the battle ended with the FBI dropping the case
after finding a third party who could help. At the time, it was theorized
that the third party was Cellebrite. Since then it has become known that
Cellebrite—an Israeli company—does provide iPhone unlocking services
to law enforcement agencies.

Cellebrite, through means currently unknown, provides these services at
$5,000 per device, and for the most part this involves sending the phones to
a Cellebrite facility. (Recently, Cellebrite has begun providing in-house
unlocking services, but those services are protected heavily by
non-disclosure agreements, so little is known about them.) It is theorized,
and highly likely, that Cellebrite knows of one or more iOS vulnerabilities
that allow them to access the devices.

In late 2017, word of a new iPhone unlocker device started to circulate: a
device called GrayKey, made by a company named Grayshift. Based in Atlanta,
Georgia, Grayshift was founded in 2016, and is a privately-held company with
fewer than 50 employees. Little was known publicly about this device—or
even whether it was a device or a service—until recently, as the GrayKey
website is protected by a portal that screens for law enforcement
affiliation.

According to Forbes, the GrayKey iPhone unlocker device is marketed for
in-house use at law enforcement offices or labs. This is drastically
different from Cellebrite's overall business model, in that it puts complete
control of the process in the hands of law enforcement.

Thanks to an anonymous source, we now know what this mysterious device looks
like, and how it works. And while the technology is a good thing for law
enforcement, it presents some significant security risks.

https://blog.malwarebytes.com/security-world/2018/03/graykey-iphone-unlocker-poses-serious-security-concerns/


History Shows DDoS Volumes to Keep Rising Despite Mitigation Efforts (EWeek)

Gabe Goldberg <gabe@gabegold.com>
Sun, 25 Mar 2018 21:56:34 -0400
http://www.eweek.com/security/how-ddos-attacks-techniques-have-evolved-over-past-20-years


"The new social media imperative: Distance yourself" (Mike Elgan)

Gene Wirchenko <genew@telus.net>
Sun, 25 Mar 2018 20:41:45 -0700
Mike Elgan,  Computerworld, 24 Mar 2018
As Facebook just learned, a social network's reputation can sour in an
instant. Here's how to save your own.
https://www.computerworld.com/article/3265729/social-media/the-new-social-media-imperative-distance-yourself.html

selected text:

Elon Musk deleted the Facebook pages of both Tesla and SpaceX on Friday.

We learned something new this week about social networks that we didn't know
before: Their reputations can change in an instant.

It takes years to establish a personal, professional or corporate presence
on social sites. Millions of man-hours spent on crafting posts, engaging
with followers and mastering site-specific techniques and practices can be
suddenly wasted when that social site starts "breaking bad" in the public
imagination.

Just look at what happened to Facebook.


$1 million worth of Iron Dome missiles fired at nothing due to 'oversensitivity' (The Times of Israel)

Gabe Goldberg <gabe@gabegold.com>
Mon, 26 Mar 2018 16:16:30 -0400
According to Brig. Gen. Tzvika Haimovitch, the system misidentified
automatic gunfire from the Gaza Strip as incoming rockets heading toward the
southern Israeli community of Zikim.

Haimovitch told reporters that this was not a bug in the system, but the
result of it being programmed to be more sensitive in light of the current
unrest in the area.

"We don't take chances as it relates to threats to Israeli citizens and
property," he said.

http://www.timesofisrael.com/1-million-worth-of-iron-dome-missiles-fired-at-nothing-due-to-oversensitivity/


"Cryptocurrency mining malware uses five-year old vulnerability to mine Monero on Linux servers" (Danny Palmer)

Gene Wirchenko <genew@telus.net>
Thu, 22 Mar 2018 11:13:23 -0700
Danny Palmer | March 22, 2018—16:01 GMT (09:01 PDT) | Topic: Security

Hackers are targeting accessible x86-64 Linux web servers around the world.
http://www.zdnet.com/article/cryptocurrency-mining-malware-uses-five-year-old-vulnerability-to-mine-monero-on-linux-servers/

  So when was the last time you heard that line about the number of eyes on code?


Tamper-proof currency wallet backdoored by a 15-year-old (Ars Technica)

"Peter G. Neumann" <neumann@csl.sri.com>
Wed, 21 Mar 2018 13:55:52 PDT
http://arstechnica.com/information-technology/2018/03/a-tamper-proof-currency-wallet-just-got-trivially-backdoored-by-a-15-year-old/

I would like to add an out-of-band note from my colleague Robert
N. M. Watson, who added a little realism to this item:

  As Joe Bonneau recently pointed out to me, one really great thing about
  cryptocurrencies is that they put a clear financial cost on flaws in
  crypto-protocols, and create an incentive scheme to improve the quality of
  both designs and implementations of those protocols (... and also to break
  them).  Unfortunately, the state of the art in both designs and
  implementations appears less mature than some of their currency holders
  would prefer!

    [By the way, my humblest apologies for the messed-up URLs in the
    previous RISKS-30.60.  I need to build an emacs macro to auto-unscramble
    the miserable munging of URLs that is forced on incoming mail to RISKS,
    which SRI's Office 365 garbles annoyingly with SafeLinks.  I goofed the
    translation of "https%3A%2F%2F" by accidentally changing "%3A" into a
    semicolon instead of a colon.  Sorry!  (However, I did subsequently
    perform a semicolonoscopy on the archive copy.)]
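
    [For the record, the correct translation, as a minimal Python sketch
    (assuming only the standard library; this is not how RISKS actually
    processes incoming mail):

      from urllib.parse import unquote
      # %3A is a colon and %2F is a slash, so the munged prefix
      # decodes back to a well-formed URL scheme.
      print(unquote("https%3A%2F%2F"))   # prints "https://"
    ]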


Cybersecurity key to S'pore's survival: CSA chief (Straits Times)

Richard M Stein <rmstein@ieee.org>
Thu, 22 Mar 2018 11:54:09 +0800
Cybersecurity effectiveness as a tipping point in a country's resilience
and sustained viability as a regional hub of commercial Internet
connectivity: headline hyperbole? Not in Singapore, where this risk drives
government policy debates and funding priorities.
http://www.straitstimes.com/world/united-states/cyber-security-key-to-spores-survival-csa-chief

  "The more digitalised and connected our economy, the more important it
  becomes to secure our systems in cyberspace," Mr Koh said in his keynote
  speech at the 3rd Annual Billington International Cybersecurity Summit in
  Washington.  "The financial cost of cyberattacks can be high, but
  indirect costs, such as the loss of trust from the public, can be even
  higher. This is especially relevant for Singapore, whose brand name is
  often associated with trust, transparency and efficiency."

Brand outrage and trust erosion characterize the Internet of Mistakes.
Interesting to note that Transparency International's *Corruption
Perceptions Index 2017* ranks Singapore #6, with the United States tied for
#16. North Korea is #171 of 180.

See http://www.transparency.org/news/


Electronic footrest traps customer, who later dies (Jennifer Hassan)

George Mannes <gmannes@gmail.com>
Thu, 22 Mar 2018 10:16:33 -0400
  [The specifics of the chair design and electronics are unclear, but the
  possibility that the only way to disable or reverse an "electronic
  footrest" is to break it is alarming.  GW]

Jennifer Hassan. *The Washington Post*. 21 Mar 2018
Man dies after trapping his head in a movie theater seat
http://www.washingtonpost.com/news/worldviews/wp/2018/03/21/man-dies-after-trapping-his-head-in-a-movie-theater-seat/

LONDON—A man has died of a heart attack after reportedly getting his head
trapped in a movie theater seat in Birmingham, England, as he tried to
retrieve a dropped cellphone.  The incident, which took place at the Vue
Cinema in the Star City entertainment complex, was described as a *freak*
accident.

According to *The Birmingham Mail*, the man had dropped his cellphone
between two *Gold Class* seats and was attempting to retrieve it when an
electronic footrest came down on his head, wedging him underneath. Customers
pay more to sit in the reclining Gold Class seats, described as luxury
seating.  "He was stuck and panicking.  His partner and staff tried to free
him but couldn't."  The chair leg-rest was eventually broken free and he
managed to get out.

West Midlands Ambulance Service confirmed it was called to reports of a
patient in cardiac arrest on March 9.  The man was taken to a Birmingham
hospital in a serious condition, but died of his injuries a week later, on
March 16.

In a statement, Vue International confirmed the man's death: "Following an
incident which took place on Friday 9 March at our Birmingham cinema, we can
confirm that a customer was taken to hospital that evening.  We are saddened
to learn that he passed away on 16 March."  Vue said a "full investigation
into the nature of the incident is ongoing."

A health and safety investigation from Birmingham City Council also was
underway.


Theranos fraud duped billionaires, but Silicon Valley culture blamed (Tom Foremski)

Gene Wirchenko <genew@telus.net>
Thu, 22 Mar 2018 10:42:14 -0700
Tom Foremski, 20 Mar 2018

The very public demise of Theranos momentarily satisfies the Schadenfreude
of Silicon Valley's critics, but it's a distraction that protects the
reputations of the rich and powerful men who financed and ran the company for
years.
http://www.zdnet.com/article/theranos-fraud-that-duped-billionaires-but-silicon-valley-culture-blamed/

selected text:

"The Theranos story is an important lesson for Silicon Valley," said Jina
Choi, director of the SEC's San Francisco Regional Office.  "Innovators who
seek to revolutionize and disrupt an industry must tell investors the truth
about what their technology can do today, not just what they hope it might
do someday."

Richard Waters, in the Financial Times, writes, "For start-ups given to
ethically dubious demos or the occasional white lie about the performance of
their technology, it is an object lesson in the danger of getting in over
their heads."

If there is a meaningful lesson for Silicon Valley investors in the Theranos
case it is this: A startup founder's passion does not equate to talent or
ability.  Also, always demand to see basic financial information and proof
to back up technology claims.


"Google Assistant now lets you send and request money from your contacts" (Stephanie Condon)

Gene Wirchenko <genew@telus.net>
Thu, 22 Mar 2018 11:16:36 -0700
  [Does anyone else think this could get misused in any way?  GW]

Stephanie Condon for Between the Lines, ZDNet, 22 Mar 2018

In the coming months, users will be able to send or request money via
voice-activated speakers like Google Home.
http://www.zdnet.com/article/google-assistant-now-sends-and-requests-money-from-your-contacts/

opening text:

Google Pay users can now use the voice-activated Google Assistant on their
smartphones to send or request money from people in their contacts, Google
announced Thursday. The new service is free and currently available on
Android and iOS phones in the US.

A user says something like, "Hey Google, request $20 from Sam for the show
tonight," and funds would be immediately transferred, even if the recipient
doesn't have Google Pay. The recipient would get an email or text message
about the payment, or a notification if they've already installed the Google
Pay app.


Electric chairs in England? (Yahoo!)

JC Cantrell <cantrellengineering@gmail.com>
Thu, 22 Mar 2018 13:57:21 -0700
I read this on Yahoo today:

http://www.yahoo.com/news/man-dies-getting-head-stuck-184447624.html

Now, the article makes no mention of computers, but something made that
chair move.

Also, we are trying to build autonomous cars when we can't even get a chair
right?


Sex Trafficking Bill Heads to Trump, Over Silicon Valley Concerns (NYTimes)

Monty Solomon <monty@roscom.com>
Thu, 22 Mar 2018 09:46:26 -0400
The Senate gave final passage to a bill to combat sex trafficking,
disregarding concerns in Silicon Valley that it could chill Internet content
and harm free speech.
http://www.nytimes.com/2018/03/21/business/sex-trafficking-bill-senate.html


Re: Look-Alike Domains and Visual Confusion (Goldberg, R 30.60)

Kurt Seifried <kurt@seifried.org>
Wed, 21 Mar 2018 09:10:35 -0600
> How good are you at telling the difference between domain names you know
> and trust and impostor or look-alike domains? The answer may depend on how
> familiar you are with the nuances of internationalized domain names
> (IDNs), as well as which browser or Web application you're using.

This is now covered by CWE-1007: Insufficient Visual Distinction of
Homoglyphs Presented to User

http://cwe.mitre.org/data/definitions/1007.html

If you find an instance of this, PLEASE REQUEST A CVE identifier.
Requesting a CVE identifier makes it much more likely that the problem will
get fixed (e.g., adding some visual cues to warn users that they're not
looking at ASCII text, a warning, whatever). To request a CVE identifier,
please use
  http://iwantacve.org/ and http://cveform.mitre.org/
for open source and closed source, respectively.
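
  [As an illustration (a minimal sketch, not part of Seifried's note):
  Python's built-in idna codec and unicodedata module will expose a
  homoglyph domain by showing its punycode wire form and naming any
  non-ASCII characters; the Cyrillic-"a" domain below is hypothetical.

    import unicodedata

    def inspect_domain(domain):
        # The wire (punycode) form makes non-ASCII labels visible via
        # the xn-- prefix (stdlib "idna" codec, IDNA 2003 rules).
        wire = domain.encode("idna").decode("ascii")
        print(f"{domain} -> {wire}")
        for ch in domain:
            if ord(ch) > 127:
                print(f"  non-ASCII: U+{ord(ch):04X} {unicodedata.name(ch)}")

    inspect_domain("example.com")       # pure ASCII: wire form unchanged
    inspect_domain("ex\u0430mple.com")  # Cyrillic 'a': becomes an xn-- label

  ]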


Re: Lessons for RISKS from the Florida bridge collapse (R 30.59)

Dick Mills <dickandlibbymills@gmail.com>
Fri, 23 Mar 2018 10:58:37 -0400
  "So, here are two messages reported by Lauren Weinstein on this subject,
  where problems had been diagnosed but either not considered or considered
  not relevant (respectively)."

The inference that the discovered cracks in this case actually are relevant
is premature.  The inference that officials should have applied the
precautionary principle in this case is also unwarranted at this point.

I object to public speculation and the rush to draw lessons learned before
the actual cause has been determined and released.  Although someone is to
blame (presumably), others are innocent and there is a human cost to
innocent people and their families caused by incorrect premature speculation
as to causes.  The TWA 800 case is a good example of why public speculation
is bad.

I made the same complaint in RISKS-18.42 about premature public speculation
about the causes of airplane crashes.
