The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 30 Issue 75

Saturday 14 July 2018

Contents

The return of Spectre
ZDNet
Grand Pwning Unit: Accelerating microarchitectural attacks with the GPU
Colyer
Now-fixed iOS 11.3 bug reveals how Apple censors the Taiwanese flag on Chinese iPhones
9to5Mac
FAA pushes back on Boeing exemption for 787 safety flaw
FlightGlobal
Regulation of facial-recognition software?
WashPo
FACEPTION
Facial Personality Analytics
How Smart TVs in Millions of Homes Track More Than What's On Tonight
NYTimes
Meet Scrub 50, the robot cleaner
StraitsTimes
Video: Gavin Williamson hilariously interrupted by Siri during statement to Parliament
9to5Mac
How Voice-Activated Assistants Pose Security Threats in Home, Office
EWeek
A Revised View of the IoT Ecosystem
Vinton Cerf: Computing Edge
Plan to use AI to help emergency call operators
The Straits Times
Hamas uses fake Facebook friends to dupe 100 soldiers into downloading spyware
The Times of Israel
Chinese hackers infiltrate systems at Australian National University
John Colville
Data encryption: How to avoid common workarounds
HPE
CRTC levies fines against two companies under Canada's anti-spam law
Kelly Bert Manning
Cameras to be deployed to detect illegal smoking
The Straits Times
PayPal Apologizes for Letter Demanding Payment From Woman Who Died of Cancer
NYTimes
ExxonMobil Bungles Rewards Card Debut
Krebs on Security
This keyboard attack steals passwords by reading heat from your fingers
Charlie Osborne
iOS 11.4 seems to have a battery drain problem
ZDNet
Watch that keyboard!
Web Informant
How the Pentagon Keeps Its App Store Secure
WiReD
Inside China's Dystopian Dreams
NYTimes
Egypt Sentences Lebanese Tourist to 8 Years in Prison for Facebook Video
NYTimes
The Complexity of Simply Searching For Medical Advice
WiReD
According to Apple's digital assistant Siri, Marvel comic book legend Stan Lee had apparently died on Monday
Business Insider Singapore
Risk and cost/benefit ...
Rob Slade
Employees as subjects in clinical trials
Bob Fenichel
Re: Google is training machines to predict when a patient will die
John R. Levine
Richard M Stein
John R. Levine
Info on RISKS (comp.risks)

The return of Spectre (ZDNet)

Gabe Goldberg <gabe@gabegold.com>
Thu, 12 Jul 2018 00:13:36 -0400
Two new ways to assault computers using Spectre-style attacks have been
discovered. These can be used against any operating system running on AMD,
ARM, and Intel processors.

http://www.zdnet.com/article/the-return-of-spectre/


Grand Pwning Unit: Accelerating microarchitectural attacks with the GPU (Colyer)

Monty Solomon <monty@roscom.com>
Wed, 4 Jul 2018 18:23:00 -0400
http://blog.acolyer.org/2018/07/04/grand-pwning-unit-accelerating-microarchitectural-attacks-with-the-gpu/


Now-fixed iOS 11.3 bug reveals how Apple censors the Taiwanese flag on Chinese iPhones (9to5Mac)

Gabe Goldberg <gabe@gabegold.com>
Thu, 12 Jul 2018 00:00:23 -0400
A bug in iOS 11.3 --- fixed in iOS 11.4.1 --- revealed that Apple
censors the Taiwanese flag on iPhones whose region is set to China

The bug came to light when security researcher Patrick Wardle received a
message from a Taiwanese friend, reporting that iMessage, WhatsApp and
Facebook Messenger all crashed when she typed the word `Taiwan' or received
a message containing the emoji for the Taiwanese flag.

He was initially skeptical, but was able to verify the claim and --- by a
somewhat tortuous process --- work out what was causing it.

On an iOS device with CN (China) set as the language/locale, iOS is looking
for the Taiwanese flag emoji and then removing it. That code was buggy,
which was what caused the crash.

http://9to5mac.com/2018/07/11/apple-china-taiwan-flag/


FAA pushes back on Boeing exemption for 787 safety flaw (FlightGlobal)

<richard@hesketh.org.uk>
Fri, 6 Jul 2018 20:44:05 +0100
http://www.flightglobal.com/news/articles/faa-pushes-back-on-boeing-exemption-for-787-safety-f-449263/

Exec summary: In order to meet a delivery schedule, Boeing would like the
FAA to trust that some software which may contain bugs will provide a safety
net in the event that other software containing a known defect causes an
engine shutdown.


Regulation of facial-recognition software? (WashPo)

"Peter G. Neumann" <neumann@csl.sri.com>
Sat, 14 Jul 2018 08:46:31 -0700
Microsoft is calling for government regulation on facial-recognition
software, one of its key technologies, saying such artificial
intelligence is too important and potentially dangerous for tech
giants to police themselves.

https://www.washingtonpost.com/technology/2018/07/13/microsoft-calls-regulation-facial-recognition-saying-its-too-risky-leave-tech-industry-alone/


FACEPTION (Facial Personality Analytics)

Gabe Goldberg <gabe@gabegold.com>
Sun, 8 Jul 2018 13:45:07 -0400
FACEPTION IS A FACIAL PERSONALITY ANALYTICS TECHNOLOGY COMPANY

We reveal personality from facial images at scale to revolutionize how
companies, organizations and even robots understand people and dramatically
improve public safety, communications, decision-making, and experiences.

http://www.faception.com/


How Smart TVs in Millions of Homes Track More Than What's On Tonight (NYTimes)

<>
Fri, 06 Jul 2018 13:15:22 -0400
http://mobile.nytimes.com/2018/07/05/business/media/tv-viewer-tracking.html

The growing concern over online data and user privacy has been focused on
tech giants like Facebook and devices like smartphones. But people's data is
also increasingly being vacuumed right out of their living rooms via their
televisions, sometimes without their knowledge.  [...]

Once enabled, Samba TV can track nearly everything that appears on the TV on
a second-by-second basis, essentially reading pixels to identify network
shows and ads, as well as programs on Netflix and HBO and even video games
played on the TV. Samba TV has even offered advertisers the ability to base
their targeting on whether people watch conservative or liberal media
outlets and which party's presidential debate they watched.


Meet Scrub 50, the robot cleaner (StraitsTimes)

Richard M Stein <rmstein@ieee.org>
Fri, 06 Jul 2018 08:33:48 +0800
http://www.straitstimes.com/singapore/meet-scrub-50-the-robot-cleaner

Visitors to Singapore, a city-state of ~5.6m citizens and expatriates, often
note the gumblob-free sidewalks, garbage-free streets, and spotless trains.

In truth, Singapore is cleaned daily by an army of mop and broom-wielding
custodians estimated to top ~70K in 2016
(http://www.straitstimes.com/singapore/environment/liak-teng-lit-5-million-people-70000-cleanersthats-ridiculous).

Many are senior citizens earning minimum wages to supplement their
retirement. Demographically, custodians are diminishing, and few young
people wish to pursue this career path. 

Enter Scrub 50, which aspires to replace these workers and fill the human
deficit.

  “For example, daily scrubbing of 5,000 sq m over a one-month period would
  require a cleaner to put in 300 hours of work, but the robot takes 130
  hours, its developers claim.''

Advocates of universal income guarantees should take note of any trial
deployment and outcome, including robo-mopping incidents.


Video: Gavin Williamson hilariously interrupted by Siri during statement to Parliament (9to5Mac)

Gabe Goldberg <gabe@gabegold.com>
Thu, 5 Jul 2018 19:37:19 -0400
We've all had it happen before: Siri going off when your iPhone thinks it
heard the *Hey Siri* command when nothing remotely close was said.

Well, today this happened in a public environment and it was
absolutely hilarious. As tweeted by BBC Parliament, Siri made a brief
interruption while Gavin Williamson was making a statement.
http://twitter.com/BBCParliament/status/1014136145989513218

From what we can hear, it sounds like surrounding audio triggered the *Hey
Siri* command, which prompted Siri to respond on the iPhone.

False positives with voice assistants are always fun, especially when the
assistant falsely catches the trigger phrase but then transcribes every word
after it verbatim.  We can only hope Apple keeps improving its machine
learning so things like this won't happen in the future.

Check out the full clip below.

http://9to5mac.com/2018/07/03/siri-hijacks-bbc-parliament-statement/

Only today, I commanded my iPad—which ignored me, but my wife's nearby
iPhone responded.


How Voice-Activated Assistants Pose Security Threats in Home, Office (EWeek)

Gabe Goldberg <gabe@gabegold.com>
Fri, 6 Jul 2018 11:44:49 -0400
http://www.eweek.com/security/five-ways-digital-assistants-pose-security-threats-in-home-office

What a surprise, hmmm?


A Revised View of the IoT Ecosystem (Vinton Cerf, Computing Edge)

George Sherwood <sherwood@transedge.com>
Thu, 5 Jul 2018 09:16:46 -0400
An IoT ensemble must actually be in a kind of continuous configuration
mode, anticipating the arrival and departure of all manner of
Internet-enabled devices.  Among the implications is the notion that
the local IoT management system needs to expect that new devices will
need to be configured into the system and others to depart - it needs
to sense their arrivals and departures and to react accordingly.

Here's a scary thought: what if a device is adopted that's corrupted, and it
has a backdoor allowing remote access to a residential network of devices?

http://www.computer.org/csdl/mags/ic/2017/05/mic2017050072.pdf


Plan to use AI to help emergency call operators (The Straits Times)

Richard M Stein <rmstein@ieee.org>
Thu, 12 Jul 2018 12:18:35 +0800
http://www.straitstimes.com/singapore/plan-to-use-ai-to-help-emergency-call-operators

  “With Singapore's emergency dispatch phone operators receiving almost
  200,000 calls for assistance a year, every minute is vital.  In an effort
  to ease their workload, the Singapore Civil Defence Force (SCDF) and four
  other government agencies are turning to artificial intelligence (AI),
  using a speech recognition system developed to transcribe and log each
  call received in real time - even if it is in Singlish.''

The Straits Times article states that the platform achieves a 90%
speech-to-text recognition accuracy rate based on an 80K-word Mandarin and
English dictionary.
The dictionary was constructed manually from YouTube, SoundCloud and
Singapore radio programs where mixed language (Malay, Hokkien, Mandarin, and
English) conversations are routine among Singaporeans.

A high incidence of emergency operator post-traumatic stress disorder
and critical incident stress syndrome is reported from the field (see 
https://www.factretriever.com/911-emergency-call-facts, retrieved on
12 Jul 2018).

http://www.nena.org/page/911Statistics
estimates ~240M emergency (911) calls per year in the US, with ~15-20%
identified as non-emergencies. ~80% estimated from mobile devices. In
Singapore, mobile devices dominate; this figure is probably much
higher. Landline v. mobile emergency call statistics are not readily
available in Singapore.

Given 15-20% non-emergency usage of 911 (999 in Singapore), ~30-40K
non-emergency calls per year might arise in Singapore.
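The back-of-envelope estimate above can be written out explicitly; this minimal sketch assumes the ~200K calls/year figure cited from the Straits Times and treats the US-derived 15-20% non-emergency share as transferable to Singapore.

```python
# Non-emergency call estimate for Singapore, scaled from the cited figures.
calls_per_year = 200_000        # SCDF calls/year, per the Straits Times
low_frac, high_frac = 0.15, 0.20  # US non-emergency share, per nena.org

low = round(calls_per_year * low_frac)    # lower bound of the estimate
high = round(calls_per_year * high_frac)  # upper bound of the estimate
print(f"~{low:,}-{high:,} non-emergency calls/year")  # ~30,000-40,000
```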

The risk is that automatic speech-to-text transcription does nothing by
itself to suppress false emergency dispatches arising from the logged
content.  It is unclear from the article whether a human inspects the
transcription and arbitrates dispatch.

[1] Jesse Jarnow, Why Our Crazy-Smart AI Still Sucks at Transcribing
Speech, claims ~12% speech-to-text error rate
http://www.wired.com/2016/04/long-form-voice-transcription/

[2] Liam Tung, Microsoft's newest milestone? World's lowest error rate
in speech recognition
http://www.zdnet.com/article/microsofts-newest-milestone-worlds-lowest-error-rate-in-speech-recognition/


Hamas uses fake Facebook friends to dupe 100 soldiers into downloading spyware (The Times of Israel)

Gabe Goldberg <gabe@gabegold.com>
Thu, 5 Jul 2018 15:11:49 -0400
Military intelligence officers say no damage to security after soldiers fall
for terror group cyberplot, sign up for fake World Cup and dating apps

http://www.timesofisrael.com/idf-warns-soldiers-hamas-trying-to-spy-on-them-with-fake-dating-world-cup-apps/


Chinese hackers infiltrate systems at Australian National University

John Colville <John.Colville@uts.edu.au>
Sat, 7 Jul 2018 07:25:17 +0000
Australian National University is one of Australia's top research
universities

http://www.abc.net.au/news/2018-07-06/chinese-hackers-infilitrate-anu-it-systems/9951210?WT.ac=statenews_act

Hackers based in China have infiltrated one of Australia's most prestigious
universities, and the threat is yet to be shut down.  The ABC has been told
the Australian National University (ANU) system was first compromised last
year.  In a statement, the ANU said it had been working with intelligence
agencies for several months to minimise the impact of the threat.


Data encryption: How to avoid common workarounds (HPE)

Gabe Goldberg <gabe@gabegold.com>
Mon, 9 Jul 2018 23:34:52 -0400
Sloppy practice by data security personnel can, and often does, allow clever
hackers to gain access to the data without actually defeating the encryption
algorithms. Learn what measures to take to prevent such security breaches.
http://www.hpe.com/us/en/insights/articles/data-encryption-how-to-avoid-common-workarounds-1807.html


CRTC levies fines against two companies under Canada's anti-spam law

Kelly Bert Manning <bo774@freenet.carleton.ca>
Thu, 12 Jul 2018 11:25:53 -0400
The companies involved did not send spam themselves, they provided ISP
services for malware spreaders and “accepted unverified and anonymous
customers''.

  “Our enforcement actions send a clear message to companies whose business
  models may enable these types of activities,'' said Steven Harroun, the
  CRTC's chief compliance and enforcement officer.  Through their actions
  and omissions, Datablocks and Sunlight Media aided in the commission of
  acts contrary to section 8 of the Act.

http://crtc.gc.ca/eng/archive/2018/vt180711.htm
http://www.timescolonist.com/crtc-levies-fines-against-two-companies-under-canada-s-anti-spam-law-1.23365348


Cameras to be deployed to detect illegal smoking (The Straits Times)

Richard M Stein <rmstein@ieee.org>
Tue, 10 Jul 2018 10:04:29 +0800
http://www.straitstimes.com/singapore/cameras-to-be-deployed-to-detect-illegal-smoking

  “As smoking curbs are extended, the number of offenders has increased. The
  NEA [National Environment Agency] issued about 22,000 tickets last year to
  people smoking at prohibited areas, compared with 19,000 in 2016.''

High-resolution IR cameras are positioned to detect smokers in prohibited
areas, supplemented with facial-recognition matching to identify
offenders.  Another example of surveillance sensor fusion deployed to find
and fine scofflaws.

Singapore's governance model, an example of *benign* authoritarianism,
emphasizes civil order. Suppressing second-hand smoke exposure is a hot
enforcement priority for public health initiatives.

The CDC estimates that ~41K US citizens die annually from secondhand
smoke-related diseases (principally heart and lung diseases).  Assuming a US
population of 340m, and Singapore's of ~5.6m, the arithmetic gives:
5.6m/340m * 41K ~= 675 annual deaths in Singapore attributed to secondhand
smoke-related diseases.
<https://www.cdc.gov/tobacco/data_statistics/fact_sheets/secondhand_smoke/general_facts/index.htm>
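The population-scaling arithmetic can be checked directly; this sketch uses the author's assumed population figures, not official census numbers.

```python
# Scaling the CDC's US secondhand-smoke death estimate to Singapore's
# population, per the arithmetic in the item above.
us_deaths_per_year = 41_000  # CDC estimate for the US
us_population = 340e6        # author's assumed US population
sg_population = 5.6e6        # Singapore citizens and expatriates

sg_deaths_per_year = sg_population / us_population * us_deaths_per_year
print(round(sg_deaths_per_year))  # ~675
```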


PayPal Apologizes for Letter Demanding Payment From Woman Who Died of Cancer (NYTimes)

Monty Solomon <monty@roscom.com>
Thu, 12 Jul 2018 09:44:08 -0400
http://www.nytimes.com/2018/07/11/business/paypal-dead-wife-husband-letter-nyt.html

“We have received notice that you are deceased,'' said the
letter, which threatened legal action over outstanding debt and left the
British woman's husband `incredulous'.


ExxonMobil Bungles Rewards Card Debut (Krebs on Security)

Gabe Goldberg <gabe@gabegold.com>
Mon, 9 Jul 2018 17:00:03 -0400
Energy giant ExxonMobil recently sent snail mail letters to its Plenti
rewards card members stating that the points program was being replaced with
a new one called Exxon Mobil Rewards+. Unfortunately, the letter includes a
confusing toll-free number and directs customers to a parked page that tries
to foist Web browser extensions on visitors.

The mailer (the first page of which is screenshotted below) urges customers
to visit exxonmobilrewardsplus[dot]com, to download its mobile app, and to
call 1-888-REWARD with any questions. It may not be immediately obvious, but
that + sign is actually the same thing as a zero on the telephone keypad
(although I'm ashamed to say I had to look that up online to be sure).

http://krebsonsecurity.com/2018/07/exxonmobil-bungles-rewards-card-debut/
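The letter-to-digit translation that tripped up the mailer follows the standard telephone keypad mapping; this sketch applies it to the vanity number above, treating '+' as sharing the 0 key as the Krebs item describes.

```python
# Standard phone-keypad letter mapping (ITU-T E.161 style).
KEYPAD = {
    '2': 'ABC', '3': 'DEF', '4': 'GHI', '5': 'JKL',
    '6': 'MNO', '7': 'PQRS', '8': 'TUV', '9': 'WXYZ',
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def vanity_to_digits(number: str) -> str:
    """Translate a vanity phone number to digits; '+' shares the 0 key."""
    out = []
    for ch in number.upper():
        if ch in LETTER_TO_DIGIT:
            out.append(LETTER_TO_DIGIT[ch])
        elif ch == '+':
            out.append('0')
        else:
            out.append(ch)  # digits, hyphens, etc. pass through unchanged
    return ''.join(out)

print(vanity_to_digits('1-888-REWARD+'))  # 1-888-7392730
```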


This keyboard attack steals passwords by reading heat from your fingers (Charlie Osborne)

Gene Wirchenko <genew@telus.net>
Thu, 05 Jul 2018 18:30:17 -0700
Charlie Osborne for Zero Day, 5 Jul 2018
Thermanator harvests thermal energy to steal passwords directly from your
fingertips.  A new attack has been presented by researchers which is able to
record thermal residue from keyboards in order to steal credentials.

http://www.zdnet.com/article/this-attack-steals-your-passwords-by-reading-keyboard-heat/


iOS 11.4 seems to have a battery drain problem (ZDNet)

Gabe Goldberg <gabe@gabegold.com>
Mon, 9 Jul 2018 16:45:04 -0400
http://www.zdnet.com/article/ios-11-4-seems-to-have-a-battery-drain-problem/

Every iOS upgrade? I've deferred this one, in spite of advice given to
always upgrade quickly for security patches.


Watch that keyboard! (Web Informant)

Gabe Goldberg <gabe@gabegold.com>
Mon, 9 Jul 2018 16:42:50 -0400
Here is the thing. In order to install one of these keyboard apps, you have
to grant it access to your phone. This seems like common sense, but sadly,
this also grants the app access to pretty much everything you type, every
piece of data on your phone, and every contact of yours too.  Apple calls
this full access, and they require these keyboards to ask explicitly for
this permission after they are installed and before you use them for the
first time. Many of us don't read the fine print and just click yes and go
about our merry way.

http://blog.strom.com/wp/?p=6603


How the Pentagon Keeps Its App Store Secure (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Sun, 8 Jul 2018 23:37:27 -0400
“NGA is kind of a unique combat-support agency,'' Saffel says. “With the
GEOINT App Store we chose to go into a very risky new frontier for DOD and
the government in general, but I think we've demonstrated that we can do
things differently and still be secure and still control access.  We're
supporting a lot of different mission sets, and I expect that the app store
will keep growing.''

http://www.wired.com/story/dod-app-store-does-this-one-crucial-thing-to-stay-secure


Inside China's Dystopian Dreams (NYTimes)

<>
Sun, 8 Jul 2018 18:54:40 -0400
http://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html

In the Chinese city of Zhengzhou, a police officer wearing facial
recognition glasses spotted a heroin smuggler at a train station.

In Qingdao, a city famous for its German colonial heritage, cameras powered
by artificial intelligence helped the police snatch two dozen criminal
suspects in the midst of a big annual beer festival.

In Wuhu, a fugitive murder suspect was identified by a camera as he bought
food from a street vendor.

With millions of cameras and billions of lines of code, China is building a
high-tech authoritarian future. Beijing is embracing technologies like
facial recognition and artificial intelligence to identify and track 1.4
billion people. It wants to assemble a vast and unprecedented national
surveillance system, with crucial help from its thriving technology
industry.

http://rinzewind.org/blog-es

  [Also noted by Richard M Stein.  PGN]


Egypt Sentences Lebanese Tourist to 8 Years in Prison for Facebook Video (NYTimes)

Lauren Weinstein <lauren@vortex.com>
Sun, 8 Jul 2018 08:53:51 -0700
  An Egyptian court sentenced a Lebanese tourist to eight years in prison on
  Saturday after she posted a video tirade on her Facebook page that
  Egyptian authorities claimed had insulted the country and its leader.  The
  news website Ahram reported that Mona el-Mazbouh was initially handed an
  11-year sentence and a fine after she was convicted of “deliberately
  broadcasting false rumors which aim to undermine society and attack
  religions.''


The Complexity of Simply Searching For Medical Advice (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Sun, 8 Jul 2018 23:34:51 -0400
As we increasingly rely on search and on social to answer questions that
have a profound impact on both individuals and society, especially where
health is concerned, this difficulty in discerning, and surfacing, sound
science from pseudo-science has alarming consequences. Will we have to fight
the battle of keyword voids at a grassroots level, wrangling with the
asymmetry of passion by tapping people to find these voids and create
counter-content? Do we need to organize counter-GoFundMe campaigns to pay
for ad campaigns that promote real science? Or will the tech platforms where
this is occurring begin to understand that giving legitimacy to health
misinformation via high search and social rankings is profoundly harmful?
Getting high-quality, fact-based health information shouldn't be dependent
on the outcome of SEO games, or on who has more resources for pay-to-play
content promotion.

Ultimately, the question is: how do we incorporate factual accuracy into
rankings when no one is willing to be the *arbiter of truth*?
Unfortunately, the answer is not easily Googled.

http://www.wired.com/story/the-complexity-of-simply-searching-for-medical-advice

The risk? Energetic advocates of nonsense.


According to Apple's digital assistant Siri, Marvel comic book legend Stan Lee had apparently died on Monday (Business Insider Singapore)

Gabe Goldberg <gabe@gabegold.com>
Fri, 6 Jul 2018 18:11:27 -0400
Comic book fans were in for a shock this week when they were told that
Marvel comic book legend Stan Lee had passed away on Monday (July 2).

The `news' was broken by Apple's digital assistant Siri, as reported first
by CinemaBlend.
http://www.cinemablend.com/news/2444550/siri-is-telling-people-stan-lee-died-yesterday

While Stan Lee is alive and well at the sprightly age of 95, that did not
stop Siri from telling users that he had *died* on July 2, 2018, when asked
how old he was.

Siri has since corrected the information, but it still raises questions as
to how the software got it wrong.

The problem can be traced back to Lee's Wikipedia page
http://en.wikipedia.org/wiki/Stan_Lee
http://io9.gizmodo.com/siri-erroneously-told-people-stan-lee-was-dead-1827322243

In the recent revision history of Lee's page, user `&beer&love' changed
Lee's Wiki data to include a `date of death', pronouncing him dead.

http://www.businessinsider.sg/siri-stan-lee-died-on-monday/

Siri relies for its information on Wikipedia, which can be changed by
anyone, even &beer&love. Sure beats those dusty encyclopedia volumes I grew
up with.


Risk and cost/benefit ...

Rob Slade <rmslade@shaw.ca>
Thu, 5 Jul 2018 11:16:18 -0800
I live in Vancouver, British Columbia, Canada.  We have an abundance of
natural beauty.  Therefore, we also have an abundance of tourists.

I was born here.  (So were my parents.  And 75% of my grandparents.)  Those
of us who are long time residents know that the natural beauty comes with
some natural dangers.

A lot of the tourists don't seem to realize that.  In our social media
intense and almost virtual world, people don't seem to realize that you
can't just press *undo* or *reload* when you do something stupid in the real
world.
http://vancouversun.com/news/local-news/rugged-b-c-locales-are-a-magnet-for-selfie-seekers
or http://is.gd/C1rOty

And we also seem to have a society that idolizes risk-taking.  You've got to
live `on the edge'.  You've got to get closer to the edge than anyone else.

Well, sometimes when you get too close to the edge, you fall off.
http://vancouversun.com/news/local-news/underwater-camera-added-to-search-for-trio-missing-near-squamishs-shannon-falls or
http://is.gd/qolaca

We've got a big tourist industry in BC.  (No, it's not just a business here,
it's an industry.)  We've got lots of companies that spend time and money
taking people out into the wild.  In a (reasonably) safe way.  But, for
some, that isn't enough.  They've got to go beyond the bounds.  And then
they get into trouble.

I live near Lynn Canyon.  I live between the fire station and Lynn Canyon.
We hear the sirens all the time, indicating that some tourist has decided
that he's (it's usually he, or she, when some idiot convinces his girlfriend
to accompany him) smarter than the locals who posted all the “don't jump off
dangerous areas'' signs.  We heard them again last night.  It was late last
night, so I assume that whoever killed himself last night hasn't made the
news sites yet.
http://vancouversun.com/news/local-news/social-media-driving-risky-behaviour-in-lynn-canyon-north-shore-mountains or
http://is.gd/ghM3w2

For the reasons stated above, we have some of the best search and rescue
volunteers in the world in our neck of the woods.  They are, unfortunately,
extremely experienced.  We have, also unfortunately, a bunch of helicopter
pilots who have lots of experience in trying to put a helicopter into deep
canyons, or very close to waterfalls, or rock faces.  It's dangerous work.
Forced upon us by tourists who want the ultimate selfie ...


Employees as subjects in clinical trials (Re: Stein, RISKS-30.74)

"Robert R. Fenichel" <bob@fenichel.net>
Thu, 5 Jul 2018 16:18:16 -0700
Richard M. Stein suggests that when AI-based diagnostic programs are
tested in randomized clinical trials (RCTs), the affected patients
should be the vendor's employees and their families.  This is
problematic.

In evaluating diagnostic methods, several different sorts of RCTs
can be contemplated.  A trial might demonstrate that the new method
(a) provided the same information as old methods, perhaps more
    quickly or at lower cost; or
(b) provided new information that was of interest, but did not alter
    patient or physician behavior; or
(c) provided new information that changed patient or physician behavior; or
(d) changed patient-perceived outcome (feeling better or living longer).

At the upper end of this scale (certainly (d), probably (c)), some of the
patients in a given RCT will be winners, and some will be losers.  Some
people want to play this game, and some don't.

Recruitment into RCTs is generally considered unethical when the recruited
patients are not fully at liberty to decline participation.  This generally
excludes prisoners and employees.  Even when consent can be freely given
(say, by an academic researcher experimenting on himself or herself*),
trials in developed countries are subject to vetting by outside arbiters to
be sure that the investigators are not, perhaps out of honest enthusiasm,
inadvertently exposing subjects (even if the subjects are themselves) to
unnecessary risks.

Independent of the problem of obtaining freely-given consent from employees,
there are potential problems of bias.  As Stein notes, any such trial would
need to be evaluated by non-conflicted reviewers.  Similarly, patients with
conflicts of interest** can lead to doubt about the soundness of a trial's
results, depending on the credibility of the blinding, which is rarely
perfect.

* There is of course a long history of that, notably including the
first cardiac catheterization.
** Wanting to be successfully treated is not a conflict of interest,
but wanting one treatment or diagnostic process to work better than
another might be.


Re: Google is training machines to predict when a patient will die (Stein and LA Times, RISKS-30.74)

"John Levine" <johnl@iecc.com>
5 Jul 2018 22:05:17 -0400
I looked at the article you linked to, and I'm pretty sure that you sent the
wrong link since there is nothing in the article even vaguely like *death
panels*.  It's about diagnosis based on a wider than usual range of patient
data.

The closest thing was a paragraph in which a hospital's system looked at a
very sick patient and estimated she had a 9% chance of dying during her
stay; Google's AI put it at 19%, and she indeed died a few days later.
That tells us she was sicker than she looked, but nothing about whether her
treatment was appropriate for her condition.

On the other hand, we have a lot of work to do with or without machines to
manage treatment of people who are terminally ill.  Americans spend vast
amounts on futile care in the last few weeks or days of life of people who
will die no matter what we do.  I expect that computers can be of some use
figuring out what treatments might help and which are just painful and
pointless.


Re: Google is training machines to predict when a patient will die (Levine, RISKS-30.75)

Richard M Stein <rmstein1961@gmail.com>
Fri, 6 Jul 2018 14:11:49 +0800
John—Agreed about end of life healthcare expenditures; they are often
onerous.

My extrapolation of Medical Brain (MB) AI as a *death panel* proxy is
premature, given state of readiness to deploy. I chose the label based on
former Gov. Palin's campaign hyperbole to emphasize potential adoption and
deployment of MB's predictive diagnostic capability. Clearly, connecting MB
to a patient's IV infusion pump, respirator, or other life support device
would be unwise and inhumane.

When I read the LA Times piece, I imagined a hospital or hospice-bound
patient with a `Do Not Resuscitate' (DNR) order tied to their health records
under continuous MB monitoring near end of life (EOL).

As a hypothetical, suppose MB EOL initiation was an opt-in choice? I asked
myself, “What MB outcome would trigger the live/die threshold: 50.1% or 22%
or 90%?''  In light of MB diagnostic prediction, should DNRs have an extra
field to specify an MB live/die outcome threshold that automates end-of-life
sequence initiation - perhaps a morphine drip?

A dystopian expectation, based on pure economic and business prerogatives,
suggests that delegation of automated live/die choices will emerge. The
nefarious intrusion of technology into life and death decisions promotes
choice acceleration over deliberation; MB deployment demotes human sympathy
to insignificance by pure computation. Some people might prefer a Magic
8-ball to decide, not a stack of software toxic waste.


Re: Google is training machines to predict when a patient will die (Stein, RISKS-30.75)

"John R. Levine" <johnl@iecc.com>
6 Jul 2018 11:45:18 -0400
> My extrapolation of Medical Brain (MB) AI as a *death panel* proxy is
> premature, given state of readiness to deploy.

It's not premature, it's just silly.  There is a great deal of work around
the world looking at what treatment is cost-effective under what conditions.
This is not exactly a new frontier of inquiry.

One of the best-known is NICE, the National Institute for Health and Care
Excellence in the UK.  It is a major reason that even though the NHS spends
less than half per person what we do in the US, and has well known funding
and management problems, people in the UK are nonetheless about as healthy
as in the US.

NICE really is a death panel, and sometimes turns down treatments that might
hypothetically extend someone's life, because the cost is too far out of
line with the potential benefit.  I'd rather a death panel run transparently
with a goal of improving the country's health to ones we have in the US, run
in secret with a goal of maximizing my insurance company's dividends.

http://www.nice.org.uk/

obRisks: shiny new technical things can be very distracting
