The RISKS Digest
Volume 32 Issue 90

Sunday, 17th October 2021

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy-information feature, enabled by clicking the
flashlight icon above. This reveals two icons after each link in the body of
the digest. The shield takes you to a breakdown of the linked site's Terms of
Service, although only a small number of sites are covered at the moment. The
flashlight takes you to an analysis of the various trackers etc. that the
linked site delivers. Please let the website maintainer know whether you find
this useful. As a RISKS reader, you will probably not be surprised by what is
revealed…


Keyword warrants (NY Post)
Security risks of insulin pumps (Healio)
The FDA Should Better Regulate Medical Algorithms (Scientific American)
Apple's App Tracking Transparency circumvented by some apps (LockDownPrivacy)
Special Report: How AT&T helped build far-right One America News (Reuters)
Missouri governor accuses journalist who warned state about cybersecurity flaw of criminal ‘hacking’ (WashPost)
Trans man says confusion caused cervical screening delay (BBC News)
How the WhatsApp Outage Hurt Small Businesses in India (Slate)
Expensive hotel room!!! (Jonathan M. Gitlin)
Comparing Facebook with cigarettes and opioids (Lauren Weinstein)
Google Chat spam? (Rob Slade)
Dubai’s Ruler Hacked Phones of His Ex-Wife and Her Lawyers, UK Court Says (NYTimes)
Bugs in our Pockets: The Risks of Client-Side Scanning (Peter G. Neumann)
Info on RISKS (comp.risks)

Keyword warrants (NYPost)

Peter Neumann <>
Wed, 6 Oct 2021 20:07:17 PDT
The U.S. federal government is secretly ordering Google and other search
engines to track and provide data on anyone who searches certain terms
through *keyword warrants*, according to a new report.

In recent years, only two such warrants have been made public, but
accidentally unsealed court documents obtained by *Forbes* show the
government has been making these requests far more frequently.  [...]

Security risks of insulin pumps (Healio)

"Judith Hemenway" <>
Wed, 6 Oct 2021 19:25:51 +0000

The FDA Should Better Regulate Medical Algorithms (Scientific American)

"Richard Stein" <>
Fri, 8 Oct 2021 11:18:31 +0800

"Medical algorithms are used across the health care spectrum to diagnose
disease, offer prognosis, monitor patients’ health and assist with
administrative tasks such as scheduling patients. But recent news in the
U.S. is filled with stories of these technologies running amok. From sexual
trauma victims being unfairly labeled as “high-risk” by substance-abuse-
scoring algorithms, to diagnostic algorithms failing to detect sepsis cases
in more than 100 health systems nationwide, to clinical decision support
(CDS) software systematically discriminating against millions of Black
patients by discouraging necessary referrals to complex care—this problem
abounds. And it extends to the pandemic as well. In a review of 232
machine-learning algorithms designed to detect COVID-19, none were of
clinical use."

"The kicker: most of these algorithms did not require FDA approval, and the
ones that did often were not required to conduct clinical trials."

The FDA's 510(k) regulatory process promotes medical innovation by
establishing a broadened definition of "device similarity"—if the newest
form of a medical device is not too different from the old, approval for
deployment and use is given without requiring significant trials for
effectiveness or safety.

The 510(k) process has been abused by medical device manufacturers,
especially for devices based on computer technology. Patients who rely on
embedded applications (pacemakers, cardiac defibrillators, drug infusers,
continuous glucose monitors, etc.) and diagnostic systems (X-rays, MRI,
blood chemistry analyzers, etc.) are constantly exposed to adverse product
events documented as malfunctions, injuries, and deaths. Adverse events
also impose costs that consumers and insurers underwrite through lost time
and expense.

Failure to minimize software defect escape exposes patient populations to
unnecessary and avoidable technological risks. Reforming the 510(k) process
by subjecting algorithmic qualification efforts to broad public scrutiny
(e.g., open source inspection) can suppress product defect escape potential.

Apple's App Tracking Transparency circumvented by some apps (LockDownPrivacy)

"Anthony Thorn" <>
Sat, 9 Oct 2021 16:29:26 +0200
Apple’s so-called App Tracking Transparency initiative has not stopped all
tracking.  Testing by Johnny Lin and Sean Halloran of "Lockdown Privacy"
showed that apps are using "Fingerprinting" to track users.

"To find out what happens when you tap “ask app not to track,” Lockdown says
it tested ten popular apps on an iPhone running iOS 14.8 and again with the
newest iOS 15, analyzing what personal information flowed out of them.

As part of a technical change that arrived with iOS 14.5, the apps were
no longer able to access one valuable piece of data: a kind of social
security number for your iPhone, known as the ID for Advertisers, or
IDFA. But there’s other information that can identify your phone beyond
that number. [...]"

For example:

The app Subway Surfers "starts sending an outside ad company called
Chartboost 29 very specific data points about your iPhone, including your
Internet address, your free storage, your current volume level (to 3 decimal
points) and even your battery level (to 15 decimal points).  It’s the kind
of unique data that could be used by advertisers to identify your iPhone,
possibly letting them know what other apps you use or how to target you."
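The mechanism described above can be sketched in a few lines. This is a
hypothetical illustration, not Chartboost's actual method: the field names
and values are invented stand-ins for the kinds of signals the article
lists (IP address, free storage, volume level, battery level).

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    """Combine quasi-identifying device signals into one stable ID.

    Individually each value is innocuous; together, high-precision
    readings (e.g., battery level to 15 decimal places) can single out
    one device among millions -- no IDFA required.
    """
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Invented example values for illustration.
device = {
    "ip_address": "203.0.113.7",
    "free_storage_bytes": 23_118_028_800,
    "volume_level": 0.438,
    "battery_level": 0.612345678901234,
}

# Any app (or ad SDK) reading the same signals derives the same ID,
# which is what makes cross-app tracking possible.
device_id = fingerprint(device)
```

Because the ID is derived rather than stored, tapping "ask app not to
track" does nothing to prevent it; only coarsening or denying access to
the underlying signals would.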

Special Report: How AT&T helped build far-right One America News (Reuters)

"Gabe Goldberg" <>
Sun, 10 Oct 2021 22:25:24 -0400
As it lauded former President Donald Trump and spread his unfounded claims
of election fraud, One America News Network saw its viewership jump. Reuters
has uncovered how America’s telecom giant nurtured the news channel now at
the center of a bitter national divide over politics and truth.

Missouri governor accuses journalist who warned state about cybersecurity flaw of criminal ‘hacking’ (WashPost)

Gabe Goldberg <>
Fri, 15 Oct 2021 16:18:30 -0400
Free press advocates called Gov. Mike Parson's comments against a St. Louis
Post-Dispatch journalist "absurd."

When a St. Louis Post-Dispatch journalist discovered that the Missouri state
teachers website allowed anyone to see the Social Security numbers of some
100,000 school employees, he did what any reporter might do. He published a
story about the security vulnerability — though not before warning the state
and giving it time to remove the affected webpages.

Another official might have thanked the newspaper for spotting the flaw and
giving a heads-up before publicizing it — or at least downplayed what
appears to be an embarrassing government mishap. But Missouri Gov.  Mike
Parson (R) did the opposite: He called the journalist “a hacker” who may
face civil or criminal charges for “decod[ing]” HTML code on the Department
of Elementary and Secondary Education website and viewing three Social
Security numbers.

The journalist was “acting against the state agency to compromise teachers’
personal information in an attempt to embarrass the state and sell headlines
for their news outlet,” Parson announced Thursday. He said that he had
referred the case to the Cole County prosecutor and the Missouri State
Highway Patrol’s Digital Forensic Unit.

The announcement immediately drew appalled reactions from The Post-Dispatch
and other journalistic organizations.

“We stand by our reporting and our reporter who did everything right,” Ian
Caso, president and publisher of The Post-Dispatch, said in a
statement. “It’s regrettable the governor has chosen to deflect blame onto
the journalists who uncovered the website’s problem and brought it to DESE’s
attention.”

Committee to Protect Journalists’ U.S. and Canada program coordinator
Katherine Jacobsen called Parson’s legal threats “absurd.”

“Using journalists as political scapegoats by casting routine research as
‘hacking’ is a poor attempt to divert public attention from the government’s
own security failing,” she told The Washington Post in an email.

Trans man says confusion caused cervical screening delay (BBC News)

Jane Muir <>
Mon, 4 Oct 2021 13:29:24 +0200
A transgender man (i.e., someone who was born female and subsequently
transitioned gender) was registered with his medical practice and the UK
National Health Service as male. Having a vagina and cervix, he arranged a
cervical screening test (US: Pap test).  When the test results came back
suggesting abnormalities, the hospital follow-up checks were significantly
delayed by confusion over why a man needed cervical cancer checks.

In fact the patient had also had to take the initiative to arrange the
original screening. NHS England policy is that those aged 25 to 64 who are
registered with a GP as female will be routinely invited for cervical
screening; those registered as male won't. Transgender men can contact
their GP to book a screening, but because they are not routinely invited
they might not arrange their own.  To be clear about terminology: according
to the World Health Organisation, `gender' is used to describe the
characteristics of women and men that are socially constructed, while
`sex' refers to those that are biologically determined. People are born
female or male, but learn to be girls and boys who grow into women and
men. This learned behaviour makes up gender identity and determines gender
roles.

A data field intended for one purpose, recording biological sex, is
being used to record something else (gender identity) for a small number of
patients while using exactly the same coding. There does not appear to be a
field that would disambiguate the two usages. A person or automated system
reading the record cannot distinguish them immediately without reading
background notes or accompanying letters.
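The disambiguation the paragraph above calls for amounts to two separate
fields. The following is a minimal sketch, not the NHS data model: the
type names and the screening rule's encoding are invented for
illustration, though the eligibility criteria (registered female, aged 25
to 64) come from the policy described above.

```python
from dataclasses import dataclass
from enum import Enum

class Sex(Enum):
    """Biological sex recorded at birth -- drives clinical screening."""
    FEMALE = "female"
    MALE = "male"

@dataclass
class PatientRecord:
    name: str
    sex_at_birth: Sex     # basis for screening invitations
    gender_identity: str  # how the patient lives and is addressed

def eligible_for_cervical_screening(p: PatientRecord, age: int) -> bool:
    # Keying the rule to sex_at_birth rather than a single conflated
    # field means a trans man registered as male is still invited.
    return p.sex_at_birth is Sex.FEMALE and 25 <= age <= 64

patient = PatientRecord("J. Doe", sex_at_birth=Sex.FEMALE,
                        gender_identity="male")
```

With only one conflated field, the same record would read simply "male"
and the invitation logic above would silently exclude the patient.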

The risk: Records that conflate biological sex with gender identity can
result in people having essential health checks compromised or missed.

How the WhatsApp Outage Hurt Small Businesses in India (Slate)

"Lauren Weinstein" <>
Wed, 6 Oct 2021 09:31:24 -0700
When Facebook went down, it took Instagram and WhatsApp with it. -L

Expensive hotel room!!! (Jonathan M. Gitlin)

"Dave Farber" <>
Sun, 3 Oct 2021 17:56:52 +0900
Jonathan M. Gitlin (8 Jun 2019)
NASA will allow private astronauts on the ISS for $11,250-$22,500 a day.
The space agency wants to create a sustainable economy in low Earth orbit.

[Photo caption: The forward end of the International Space Station,
showing portions of five modules. From right to left is a portion of the
U.S. Destiny laboratory module linking forward to the Harmony
module. Attached to the port side of Harmony (left foreground) is the Kibo
laboratory module from the Japan Aerospace Exploration Agency (JAXA) with
its logistics module berthed on top. On Harmony's starboard side (center
background) is the Columbus laboratory module from ESA (European Space
Agency).]


On Thursday morning, NASA held a press conference to announce that the
International Space Station is now open for business. Previously, commercial
organizations have only been able to use the ISS for research purposes; now
NASA is open to letting them make a profit in low Earth orbit (LEO). "We're
marketing these opportunities as we've never done before," said NASA's Chief
Financial Officer Jeff DeWit earlier today.

For starters, the space agency issued a new directive that allows commercial
manufacturing and production to occur on the ISS, as well as marketing
activities. It's not quite "anything goes," though—approved activities have
to have a link to NASA's mission, stimulate the development of a LEO
economy, or actually require a zero-G environment. NASA has published a
price list for the ISS, and it's setting aside five percent of the station's
annual resources (including astronaut time and cargo mass) for commercial
activities.  [...]


Comparing Facebook with cigarettes and opioids (Lauren Weinstein)

"Lauren Weinstein" <>
Tue, 5 Oct 2021 10:06:53 -0700
So now they're comparing Facebook with cigarettes and opioids. For the
record, similar accusations were made against comic books and horror movies
in their day. Here we go again.

Google Chat spam?

Rob Slade <>
Mon, 11 Oct 2021 11:53:14 -0700
Recently I've been getting a whole bunch of requests, from people I don't
know, to join "chats" via Google Chat.  (I don't yet know Google Chat, but
I assume that it is an evolution of Duo?)

I assume this is some kind of fraud or phishing, possibly a version of
419/advance fee fraud.  Anybody have any additional details?  (I don't have
time to explore it by joining the chats, but does anyone know if there are
any malware risks?)

Dubai’s Ruler Hacked Phones of His Ex-Wife and Her Lawyers, UK Court Says (NYTimes)

"Jan Wolitzky" <>
Wed, 6 Oct 2021 17:52:54 -0400
When the hyper-wealthy ruler of the Middle Eastern emirate of Dubai found
himself embroiled in a British court case with the Jordanian princess who
was once his wife, he did more than hire top-shelf lawyers.

He also deployed high-tech software purchased from an Israeli company to
hack the cellphones of his ex-wife, two of her lawyers and three other
associates, according to court documents made public on Wednesday.

Bugs in our Pockets: The Risks of Client-Side Scanning

Peter G Neumann <Neumann@CSL.SRI.COM>
Thu, 14 Oct 2021 20:32:58 -0400
Title: Bugs in our Pockets: The Risks of Client-Side Scanning
Authors: Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt
  Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald
  L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela
  Troncoso
Comments: 46 pages, 3 figures

Our increasing reliance on digital technology for personal, economic, and
government affairs has made it essential to secure the communications and
devices of private citizens, businesses, and governments. This has led to
pervasive use of cryptography across society. Despite its evident
advantages, law enforcement and national security agencies have argued that
the spread of cryptography has hindered access to evidence and
intelligence. Some in industry and government now advocate a new technology
to access targeted data: client-side scanning (CSS). Instead of weakening
encryption or providing law enforcement with backdoor keys to decrypt
communications, CSS would enable on-device analysis of data in the clear. If
targeted information were detected, its existence and, potentially, its
source, would be revealed to the agencies; otherwise, little or no
information would leave the client device. Its proponents claim that CSS is
a solution to the encryption versus public safety debate: it offers privacy
-- in the sense of unimpeded end-to-end encryption -- and the ability to
successfully investigate serious crime. In this report, we argue that CSS
neither guarantees efficacious crime prevention nor prevents
surveillance. Indeed, the effect is the opposite. CSS by its nature creates
serious security and privacy risks for all society while the assistance it
can provide for law enforcement is at best problematic. There are multiple
ways in which client-side scanning can fail, can be evaded, and can be
abused.

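The scan-then-report flow the abstract describes can be reduced to a few
lines. This is a deliberately simplified sketch: real CSS proposals use
perceptual hashes rather than the exact hashing shown here, and the
target list and reporting channel are invented for illustration.

```python
import hashlib

# Hypothetical agency-supplied list of target digests.
TARGET_HASHES = {hashlib.sha256(b"targeted content").hexdigest()}

reports = []  # stands in for the reporting channel to the agencies

def scan_on_device(plaintext: bytes) -> bool:
    # Matching runs on-device, on data in the clear, before encryption.
    return hashlib.sha256(plaintext).hexdigest() in TARGET_HASHES

def send(message: bytes) -> None:
    if scan_on_device(message):
        reports.append(message)  # existence (and source) revealed
    # ...the message is then end-to-end encrypted and sent as usual...

send(b"an ordinary message")
send(b"targeted content")
```

Even this toy version exhibits two of the failure modes the report names:
whoever controls TARGET_HASHES controls what is reported (abuse), and
flipping a single bit of the content defeats exact matching (evasion) --
which is why real systems turn to fuzzier perceptual matching, with its
own false-positive risks.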

From Ross Anderson:
The report is also at

From Susan Landau <>

From Bruce Schneier:

Please report problems with the web pages to the maintainer