The RISKS Digest
Volume 33 Issue 72

Sunday, 4th June 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

How A Dark Fleet Moves Russian Oil
The New York Times
Metro Breach Linked To Computer In Russia, Report Finds
DCIST
Kaspersky Says New Zero-Day Malware Hit iPhones, Including Its Own
WiReD
$528 Billion Nuclear Cleanup Plan at Hanford Site in Jeopardy
NYTimes
Secret industry documents reveal that makers of PFAS 'forever chemicals' covered up their health dangers
phys.org
Japanese Moon Lander Crashed Because of a Software Glitch
NYTimes
Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor
WiReD
Fake students stealing aid from colleges
Nanette Asimov
Tesla leak reportedly shows thousands of Full Self-Driving safety complaints
The Verge
Tesla data leak reportedly details Autopilot complaints
LATimes
Social Media and Youth Mental Health
U.S. Surgeon General
Meta slapped with record $1.3 billion EU fine over data privacy
CNN
Flaws Found in Using Source Reputation for Training Automatic Misinformation Detection Algorithms
Carol Peters
Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
Geoff Huston
AI Poses 'Risk of Extinction,' Industry Leaders Warn
Kevin Roose
What we *should* be worrying about with AI
Lauren Weinstein
Artificial intelligence system predicts consequences of gene modifications
medicalxpress.com
How to fund and launch your AI startup
Meetup
Rise of the Newsbots: AI-Generated News Websites Proliferating Online
NewsGuard
Some thoughts on the current AI storm und drang
Gene Spafford
Massachusetts hospitals, doctors, medical groups pilot ChatGPT technology
The Boston Globe
The benefits and perils of using artificial intelligence to trade stocks and other financial instruments
TheConversation.com
Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers
Rolling Stone
Top French court backs AI-powered surveillance cameras for Paris Olympics
Politico
Meta's Big AI Giveaway
Metz/Isaac
Meta hit with record fine by Irish regulator over U.S. data transfers
CBC
AI scanner used in hundreds of US schools misses knives
BBC
Milton resident's lawsuit against CVS raises questions about the use of AI lie detectors in hiring
The Boston Globe
EPIC on Generative AI
Prashanth Mundkur
Reality check: What will generative AI really do for cybersecurity?
Cyberscoop
Moody's cites credit risk from state-backed cyber intrusions into U.S. critical infrastructure
cybersecuritydive.com
What Happens When Your Lawyer Uses ChatGPT
NYTimes
Anger over airports' passport e-gates not working
BBC News
Longer and longer trains are blocking emergency services and killing people
WashPost
Denials of health-insurance claims are rising and getting weirder
WashPost
Small plane crashes after jet fighter chase in WashDC area
WashPost
Response from American Airlines for delay
Steven J. Greenwald
Microsoft Finds macOS Bug That Lets Hackers Bypass SIP Root Restrictions
Sergiu Gatlan
Apps for Older Adults Contain Security Vulnerabilities
Patrick Lejtenyi
India official drains entire dam to retrieve phone
BBC
Google's Privacy Sandbox
Lauren Weinstein
WebKit Under Attack: Apple Issues Emergency Patches for 3 New Zero-Day Vulnerabilities
Apple
Q&A: Why is there so much hype about the quantum computer?
phys.org
Report Estimates Trillions in Indirect Losses Would Follow Quantum Computer Hack
nextgov.com
Don't Store Your Money on Venmo, U.S. Govt Agency Warns
Gizmodo
Re: An EFF Investigation: Mystery GPS Tracker
Steve Lamont
Re: Three Companies Supplied Fake Comments to FCC (NY AG), but John Oliver didn't
John Levine
Re: Near collision embarrasses Navy, so they order public San Diego
Michael Kohne
Info on RISKS (comp.risks)

How A Dark Fleet Moves Russian Oil (The New York Times)

Peter G Neumann <neumann@csl.sri.com>
Sat, 3 Jun 2023 13:15:46 PDT
This article is by Christiaan Triebert, Blacki Migliozzi, Alexander Cardia,
Muyi Xiao, and David Botti.  It covers pages 6-7 in today's National
Edition, and has a front-page satellite image above the fold showing the
Cathay Phoenix tanker docked at the Russian oil terminal in Kozmino,
although its GPS showed it many miles southeast, near the coast of Japan.
Actually, the ship had left from China for a scheduled stop in South Korea,
and then switched its GPS location to a spoofed fixed FAKE location near
Niigata (Japan) while returning to Kozmino.  According to the article, three
tankers tracked by *The NYTimes* from Kozmino had made 13 trips loading
Russian oil and delivering it to China, each using GPS spoofing to mask
their whereabouts.

  [Just another instance of spoofed GPS locations, which have been
    discussed in earlier RISKS issues, such as these:
  Russia Regularly Spoofs Regional GPS (RISKS-31.15)
  Ghost ships, crop circles, and soft gold: A GPS mystery in Shanghai,
    RISKS-31.48)
  Mysterious GPS outages are wracking the shipping industry (RISKS-31.59)
  High Seas Deception: How Shady Ships Use GPS to Evade International
    Law (RISKS-33.43)
  PGN]
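The spoofing pattern described above (a tanker reporting a fixed position far from where it actually was) can often be flagged by simple plausibility checks on the reported track. A minimal sketch, using hypothetical AIS-style position reports rather than real data, that flags physically impossible jumps and suspiciously frozen positions:

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in nautical miles.
    r = 3440.065  # mean Earth radius in nm
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_track(track, max_knots=30.0, frozen_run=5):
    """track: time-ordered list of (hours, lat, lon) position reports.
    Returns indices of reports that are physically implausible (implied
    speed exceeds max_knots) or suspiciously frozen in one place, the
    way a fixed spoofed location would appear."""
    flags = set()
    run = 1
    for i in range(1, len(track)):
        t0, la0, lo0 = track[i - 1]
        t1, la1, lo1 = track[i]
        dist = haversine_nm(la0, lo0, la1, lo1)
        dt = t1 - t0
        if dt > 0 and dist / dt > max_knots:
            flags.add(i)  # jump faster than any tanker can sail
        run = run + 1 if (la1, lo1) == (la0, lo0) else 1
        if run >= frozen_run:
            flags.add(i)  # position "parked" for many reports in a row
    return sorted(flags)
```

Real tracking efforts (like the NYT's) combine such checks with satellite imagery, since a carefully spoofed track can be made to look kinematically plausible.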


Metro Breach Linked To Computer In Russia, Report Finds (DCIST)

Gabe Goldberg <gabe@gabegold.com>
Wed, 17 May 2023 17:04:22 -0400
A former WMATA contractor using a personal computer in Russia breached
Metro's computer system earlier this year, according to a report from
WMATA's Office of the Inspector General that raises *grave concerns* about
the system's cyber-vulnerabilities.

The investigation by Metro OIG Rene Febles into the hacking revealed several
weaknesses in WMATA operations regarding data protection and cybersecurity,
and a failure by the agency to address its vulnerabilities.

“Evidence has surfaced that WMATA, at all levels, has failed to follow its
own data handling policies and procedures as well as other policies and
procedures establishing minimum levels of protection for handling and
transmitting various types of data collected by WMATA,'' according to the
OIG report, made public Wednesday.

https://dcist.com/story/23/05/17/metro-breach-linked-russian-computer


Kaspersky Says New Zero-Day Malware Hit iPhones, Including Its Own (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Fri, 2 Jun 2023 18:19:28 -0400
On the same day, Russia's FSB intelligence service launched wild claims of
NSA and Apple hacking thousands of Russians.

https://www.wired.com/story/kaspersky-apple-ios-zero-day-intrusion


$528 Billion Nuclear Cleanup Plan at Hanford Site in Jeopardy (The New York Times)

Gabe Goldberg <gabe@gabegold.com>
Thu, 1 Jun 2023 11:08:36 -0400
A $528 billion plan to clean up 54 million gallons of radioactive
bomb-making waste may never be achieved. Government negotiators are looking
for a compromise.

https://www.nytimes.com/2023/05/31/us/nuclear-waste-cleanup.html

  [WOPR in *War Games* strikes again?
    “The only winning move is not to play.''
    A compromise here seems like a lose-lose strategy.
  PGN]


Secret industry documents reveal that makers of PFAS 'forever chemicals' covered up their health dangers (phys.org)

Richard Marlon Stein <rmstein@protonmail.com>
Fri, 02 Jun 2023 02:21:34 +0000
https://phys.org/news/2023-05-secret-industry-documents-reveal-makers.html

... From the department of environmental pollution risks.

Is another master settlement agreement, similar to that imposed on tobacco
companies, for cancer-causing PFAS—forever chemical pollution—in the
works?


Japanese Moon Lander Crashed Because of a Software Glitch (NYTimes)

Jan Wolitzky <jan.wolitzky@gmail.com>
Sat, 27 May 2023 08:36:07 -0400
A software glitch caused a Japanese robotic spacecraft to misjudge its
altitude as it attempted to land on the moon last month, leading to its
crash, an investigation has revealed.

Ispace of Japan said in a news conference on Friday that it had finished
its analysis of what went wrong during the landing attempt on April 25. The
Hakuto-R Mission 1 lander completed its planned landing sequence, slowing
to a speed of about 2 miles per hour. But it was still about three miles
above the surface. After exhausting its fuel, the spacecraft plunged to its
destruction, hitting the Atlas crater at more than 200 miles per hour.

<https://www.nytimes.com/2023/05/26/science/moon-crash-japan-ispace.html>


Millions of Gigabyte Motherboards Were Sold With a Firmware Backdoor (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Wed, 31 May 2023 13:50:00 -0400
Hidden code in hundreds of models of Gigabyte motherboards invisibly and
insecurely downloads programs—a feature ripe for abuse, researchers say.

Hiding malicious programs in a computer's UEFI firmware, the deep-seated
code that tells a PC how to load its operating system, has become an
insidious trick in the toolkit of stealthy hackers. But when a motherboard
manufacturer installs its own hidden backdoor in the firmware of millions of
computers—and doesn't even put a proper lock on that hidden back
entrance—they're practically doing hackers' work for them.

Researchers at firmware-focused cybersecurity company Eclypsium revealed
today that they've discovered a hidden mechanism in the firmware of
motherboards sold by the Taiwanese manufacturer Gigabyte, whose components
are commonly used in gaming PCs and other high-performance computers.
Whenever a computer with the affected Gigabyte motherboard restarts,
Eclypsium found, code within the motherboard's firmware invisibly initiates
an updater program that runs on the computer and in turn downloads and
executes another piece of software.

While Eclypsium says the hidden code is meant to be an innocuous tool to
keep the motherboard's firmware updated, researchers found that it's
implemented insecurely, potentially allowing the mechanism to be hijacked
and used to install malware instead of Gigabyte's intended program. And
because the updater program is triggered from the computer's firmware,
outside its operating system, it's tough for users to remove or even
discover.

https://www.wired.com/story/gigabyte-motherboard-firmware-backdoor/
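The "proper lock" the article says is missing is payload authentication: an updater launched from firmware should refuse to execute anything whose integrity it cannot verify. A minimal sketch of the idea using hash pinning (the payloads and pinned digest here are illustrative; a real updater should verify a cryptographic signature over an authenticated channel, not just pin a hash):

```python
import hashlib
import hmac

def safe_to_execute(payload: bytes, pinned_sha256: str) -> bool:
    """Accept a downloaded update only if its SHA-256 digest matches one
    pinned at build time.  Per Eclypsium's report, the Gigabyte updater
    executed downloaded code without any comparable check, so anyone who
    could tamper with the download could run arbitrary code."""
    digest = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking digest bytes via timing.
    return hmac.compare_digest(digest, pinned_sha256.lower())
```

In practice the pin (or signing key) would ship inside the signed firmware image itself, so that tampering with either the payload or the check requires defeating the platform's firmware verification.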


Fake students stealing aid from colleges (Nanette Asimov)

Peter G Neumann <neumann@csl.sri.com>
Sun, 4 Jun 2023 11:49:18 PDT
Nanette Asimov, *The San Francisco Chronicle* print edition,
4 Jun 2023 front page

  [Based on an earlier online version:]

  Thousands of `ghost students' are applying to California colleges to steal
  financial aid. Here's how.  SFChronicle, 2 Jun 2023:
  Nobody knows how much money the fraudsters have managed to grab by
  impersonating enrollees.

Months after a mysterious check for $1,400 landed in Richard Valicenti's
mailbox last summer, the U.S. Department of Education notified him that the
money was a mistake—an overpayment of the $3,000 Pell grant he had used to
attend Saddleback College in Orange County.

"I told them I never applied for a Pell," said Valicenti, a 64-year-old
radiation oncologist at UC Davis who had never even heard of Saddleback.
[...]

 [Just the tip of the iceberg, evidently.  No surprise... PGN]


Tesla leak reportedly shows thousands of Full Self-Driving safety complaints (The Verge)

Gabe Goldberg <gabe@gabegold.com>
Fri, 26 May 2023 20:01:37 -0400
The data contains reports about over 2,400 self-acceleration issues and more
than 1,500 braking problems.

https://www.theverge.com/2023/5/25/23737972/tesla-whistleblower-leak-fsd-complaints-self-driving


Tesla data leak reportedly details Autopilot complaints (LATimes)

Steve Bacher <sebmb1@verizon.net>
Sun, 28 May 2023 07:05:03 -0700
https://www.latimes.com/business/story/2023-05-26/tesla-autopilot-alleged-data-breach-leak

How bad is Tesla Autopilot's safety problem? According to thousands of
complaints allegedly from Tesla customers in the U.S. and around the world,
pretty bad.
<https://www.latimes.com/business/story/2022-12-08/tesla-lawsuit-full-self-driving-technology-failure-not-fraud>


Social Media and Youth Mental Health (U.S. Surgeon General)

Jim Reisert AD1C <jjreisert@alum.mit.edu>
Wed, 24 May 2023 08:12:24 -0600
This Advisory describes the current evidence on the impacts of social media
on the mental health of children and adolescents. It states that we cannot
conclude social media is sufficiently safe for children and adolescents and
outlines immediate steps we can take to mitigate the risk of harm to
children and adolescents.

https://www.hhs.gov/surgeongeneral/priorities/youth-mental-health/social-media/

https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-summary.pdf

https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf


Meta slapped with record $1.3 billion EU fine over data privacy (CNN)

geoff goodfellow <geoff@iconia.com>
Tue, 30 May 2023 16:21:16 -0700
Meta has been fined a record-breaking 1.2 billion euros ($1.3 billion) by
European Union regulators for violating EU privacy laws by transferring the
personal data of Facebook users to servers in the United States.
https://edition.cnn.com/2022/11/28/tech/meta-irish-fine-privacy-law/index.html
https://edition.cnn.com/2022/04/23/business/eu-tech-regulation/index.html

The European Data Protection Board announced the fine in a statement Monday,
saying it followed an inquiry into Facebook (FB) by the Irish Data
Protection Commission, the chief regulator overseeing Meta's operations in
Europe.
<https://edpb.europa.eu/news/news/2023/12-billion-euro-fine-facebook-result-edpb-binding-decision_en>
<https://money.cnn.com/quote/quote.html?symb=FB&source=story_quote_link>

The move highlights ongoing uncertainty about how global businesses may
legally transfer EU users' data to servers overseas.  [...]
https://www.cnn.com/2023/05/22/tech/meta-facebook-data-privacy-eu-fine

  [Matthew Kruk found
https://www.cbc.ca/news/business/meta-europe-fine-data-transfers-1.6851243
  PGN]


Flaws Found in Using Source Reputation for Training Automatic Misinformation Detection Algorithms (Carol Peters)

ACM TechNews <technews-editor@acm.org>
Wed, 17 May 2023 12:15:57 -0400 (EDT)
Carol Peters, Rutgers Today, 16 May 2023 via ACM TechNews

Rutgers University scientists found algorithms trained to detect `fake news'
may have a flawed approach for assessing the credibility of online news
stories.  The researchers said most of these programs do not evaluate an
article's credibility, but instead rely on a credibility score for the
article's sources. They rated the credibility and political leaning of 1,000
news articles and incorporated the assessment into misinformation-detection
algorithms, then evaluated the labeling methodology's impact on the
algorithms' performance. Article-level source labels matched just 51% of the
time, illustrating the source reputation method's lack of reliability. In
response, the researchers created a new dataset of journalistic-quality,
individually labeled articles and a process for misinformation detection and
fairness audits.
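The 51% figure is straightforward to reproduce in spirit: it is the fraction of articles whose individually assigned label matches the blanket label a reputation-based detector would inherit from the article's source. A toy sketch with hypothetical labels (not the Rutgers data):

```python
def label_agreement(article_labels, article_source, source_labels):
    """Fraction of articles whose own credibility label matches the
    source-level label that a source-reputation detector would use.
    article_labels: {article_id: label}; article_source: {article_id:
    source_id}; source_labels: {source_id: label}."""
    hits = sum(
        1
        for article, label in article_labels.items()
        if source_labels[article_source[article]] == label
    )
    return hits / len(article_labels)
```

A reputable outlet that occasionally runs a misleading piece, and a low-reputation site that occasionally runs an accurate one, both count as mismatches, which is exactly the unreliability the study measured.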


Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution (Geoff Huston)

geoff goodfellow <geoff@iconia.com>
Tue, 30 May 2023 16:18:19 -0700
In a recent workshop I attended, reflecting on the evolution of the
Internet over the past 40 years, one of the takeaways for me is how we've
managed to surprise ourselves both in the unanticipated successes we've
encountered and in the instances of failure when technology has stubbornly
resisted deployment despite our confident expectations to the contrary!
What have we learned from these lessons about our inability to predict
technology outcomes?  Are the issues related to the aspects of the
technology? Are they embedded in the considerations behind the expectations
about how a technology will be adopted? Or do the primary issues reside at a
deeper level relating to economic and even political contexts? Let's look at
this question of failed expectations using several specific examples drawn
from the last 40 years of the Internet's evolution.

*The Public Debut of the Internet (and the demise of O.S.I.)*.  [...]

https://circleid.com/posts/20230524-failed-expectations-a-deep-dive-into-the-internets-40-years-of-evolution


AI Poses 'Risk of Extinction,' Industry Leaders Warn (Kevin Roose)

Peter G Neumann <neumann@csl.sri.com>
Wed, 31 May 2023 14:56:28 PDT
Kevin Roose, *The New York Times*, 30 May 2023, via ACM TechNews

Subtitle: Putting the Necessity of Controls on Par with Nuclear Weapons

Industry leaders warned in an open letter from the nonprofit Center for AI
Safety that artificial intelligence (AI) technology might threaten
humanity's existence. Signatories included more than 350 executives,
scientists, and engineers working on AI, with the CEOs of OpenAI, Google
DeepMind, and Anthropic among them. ACM Turing Award recipients and AI
pioneers Geoffrey Hinton and Yoshua Bengio also signed the letter, which
comes amid growing concern about the potential hazards of AI partly fueled
by innovations in large language models. Such advancements have provoked
fears of AI facilitating mass job takeovers and the spread of
misinformation, while earlier this month OpenAI's Sam Altman said the risks
were sufficiently dire to warrant government intervention and regulation.

https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

  [Jan Wolitzky noted a comment on this in The Onion:
  “It's sad how desperately these nerds want to make their coding jobs sound
  cool.''
https://www.theonion.com/industry-leaders-warn-that-ai-poses-risk-of-extinction-1850497166
  PGN]


What we *should* be worrying about with AI

Lauren Weinstein <lauren@vortex.com>
Wed, 31 May 2023 19:11:27 -0700
We shouldn't be worrying about AI wiping out humanity. That's a smokescreen.
That's sci-fi. We need to worry about the *individuals* now and in the near
future who can be hurt by the premature deployment of generative AI systems
that spew wrong answers and lies, and then when asked for confirmation, lie
about their own lies! And just popping up warnings to users is useless,
because you know and I know that hardly anyone will read those warnings or
pay any attention to them whatsoever.

  [Remember the boy who cried wolf too often—when there was one.  PGN]


Artificial intelligence system predicts consequences of gene modifications (medicalxpress.com)

Richard Marlon Stein <rmstein@protonmail.com>
Thu, 01 Jun 2023 04:00:24 +0000
https://medicalxpress.com/news/2023-05-artificial-intelligence-consequences-gene-modifications.html

"The new model, dubbed Geneformer, learns from massive amounts of data on
gene interactions from a broad range of human tissues and transfers this
knowledge to make predictions about how things might go wrong in disease."

Would commercial or academic life science organizations apply this
capability to reduce human trial expenses for certain genetically engineered
medicines or treatments, like CAR T-cells used to treat leukemia? Would
prescription drug prices decline as a result?


How to fund and launch your AI startup (Meetup)

Gabe Goldberg <gabe@gabegold.com>
Fri, 2 Jun 2023 02:21:02 -0400
You've got a great idea, but how do you build it into a successful company?
Come learn how to assemble the right team, develop your pitch, and raise
venture capital for your new company.

https://www.meetup.com/acm-chicago/events/293851188

What could go wrong?


Rise of the Newsbots: AI-Generated News Websites Proliferating Online (NewsGuard)

Steve Bacher <sebmb1@verizon.net>
Fri, 2 Jun 2023 07:00:29 -0700
NewsGuard has identified 49 news and information sites that appear to be
almost entirely written by artificial intelligence software. A new
generation of content farms is on the way.

https://www.newsguardtech.com/special-reports/newsbots-ai-generated-news-websites-proliferating/


Some thoughts on the current AI storm und drang

Gene Spafford <spaf@purdue.edu>
Fri, 19 May 2023 17:53:41 -0400
There is a massive miasma of hype and misinformation around topics related
to AI, ML, and chat programs and how they might be used—or misused.  I
remember previous hype cycles around 5th-generation systems, robotics, and
automatic language translation (as examples). The enthusiasm each time
resulted in some advancements that weren't as profound as predicted. That
enthusiasm faded as limitations became apparent and new bright, shiny
technologies appeared to be chased.

The current hype seems even more frantic for several reasons, not least of
which is that there are many more potential market opportunities for the
current developments. Perhaps the entities that see new AI systems as a way
to reduce expenses by cutting headcount and replacing people with AI are one
of the biggest drivers causing both enthusiasm and concern (see, for
example,
https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificialintelligence-ai-labor-trends-2023-02?op=1#teachers-5). That
was a driver of the robotics craze some years back, too. The current cycle
has already had an impact on some creative media, including being an issue
of contention in the media writers' strike in the US. It also is raising
serious questions in academia, politics, and the military.

There's also the usual hype cycle FOMO (fear of missing out) and the urge to
be among the early adopters, as well as those speculating about the most
severe forms of misuse.  That has led to all sorts of predictions of
outlandish capabilities and dire doom scenarios—neither of which is
likely wholly accurate. AI, generally, is still a developing field and will
produce some real benefits over time. The limitations of today's systems may
or may not be present in future systems.  However, there are many caveats
about the systems we have now and those that may be available soon that
justify genuine concern.

First, LLMs such as ChatGPT, Bard, et al. are NOT really "intelligent." [...]
Second, these systems are not accountable in current practice and law.  [...]
Third, the inability of much of the general public to understand the
limitations of current systems means that any use may introduce a bias
into how people make their own decisions and choices.  [...]

  [Long item PGN-ed for RISKS.  Check in with Spaf if you want the entire
  piece.]


Massachusetts hospitals, doctors, medical groups pilot ChatGPT technology (The Boston Globe)

Jan Wolitzky <jan.wolitzky@gmail.com>
Wed, 31 May 2023 07:27:32 -0400
Artificial intelligence is already in wide use in health care: medical
workers use it to record patient interactions and add notes to medical
records; some hospitals use it to read radiology images, or to predict how
long a patient may need to be in intensive care.

But some hospitals have begun to contemplate using a new phase of AI that
is much more advanced and could have a profound effect on their operations,
and possibly even clinical care.

Indeed, never one for modesty, ChatGPT, one form of the new AI technology
that can render answers to queries in astonishing depth (if dubious
accuracy), called its own role in the future of medicine a *groundbreaking
development poised to reshape the medical landscape.*

https://www.bostonglobe.com/2023/05/30/metro/massachusetts-hospitals-doctors-medical-groups-pilot-chatgpt-technology/


The benefits and perils of using artificial intelligence to trade stocks and other financial instruments (TheConversation.com)

Richard Marlon Stein <rmstein@protonmail.com>
Sun, 21 May 2023 03:26:49 +0000
https://theconversation.com/chatgpt-powered-wall-street-the-benefits-and-perils-of-using-artificial-intelligence-to-trade-stocks-and-other-financial-instrument-201436

High Frequency Trading (HFT) platforms elevate financial market
volatility. Coupling HFT with ChatGPT will likely exponentiate volatility.

  [Is e^sh*tload >= Spinal Tap's "11"?]


Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers (Rolling Stone)

geoff goodfellow <geoff@iconia.com>
Thu, 18 May 2023 05:29:01 -0700
Texas A&M University-Commerce seniors who have already graduated were
denied their diplomas because of an instructor who incorrectly used AI
software to detect cheating.

https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
https://news.slashdot.org/story/23/05/17/2023212/professor-failed-more-than-half-his-class-after-chatgpt-falsely-claimed-it-wrote-their-final-papers


Top French court backs AI-powered surveillance cameras for Paris Olympics (Politico)

Steve Bacher <sebmb1@verizon.net>
Thu, 18 May 2023 07:42:33 -0700
https://www.politico.eu/article/french-top-court-backs-olympics-ai-powered-surveillance-cameras/


Meta's Big AI Giveaway (Metz/Isaac)

Peter Neumann <neumann@csl.sri.com>
Sat, 20 May 2023 15:36:15 PDT
Cade Metz and Mike Isaac, *The New York Times* business section, front
page continued inside, 20 May 2023

  “Do you want every AI system to be under the control of a couple of
  powerful American companies?'' Yann LeCun, Meta chief scientist.

As tech giant makes its latest innovation open-source, rivals view it
as a dangerous move.


Meta hit with record fine by Irish regulator over U.S. data transfers (CBC)

Matthew Kruk <mkrukg@gmail.com>
Mon, 22 May 2023 14:30:41 -0600
https://www.cbc.ca/news/business/meta-europe-fine-data-transfers-1.6851243

Facebook parent company Meta was hit with a record 1.2 billion euro ($1.75
billion Cdn) fine by its lead European Union privacy regulator over its
handling of user information and given five months to stop transferring
users' data to the United States.

The fine, imposed by Ireland's Data Protection Commissioner (DPC), came
after Meta continued to transfer data beyond a 2020 EU court ruling that
invalidated an EU-U.S. data transfer pact. It tops the previous record EU
privacy fine of 746 million euros ($1.09 billion Cdn) handed by Luxembourg
to Amazon.com Inc in 2021.  [...]


AI scanner used in hundreds of US schools misses knives (BBC)

Matthew Kruk <mkrukg@gmail.com>
Tue, 23 May 2023 06:33:54 -0600
https://www.bbc.com/news/technology-65342798

A security firm that sells AI weapons scanners to schools is facing fresh
questions about its technology after a student was attacked with a knife
that the $3.7m system failed to detect.

On Halloween last year, student Ehni Ler Htoo was walking in the corridor
of his school in Utica, New York, when another student walked up behind him
and stabbed him with a knife.

Speaking exclusively to the BBC, the victim's lawyer said the 18-year-old
suffered multiple stab wounds to his head, neck, face, shoulder, back and
hand.

The knife used in the attack was brought into Proctor High School despite a
multimillion-dollar weapons-detection system installed by a company called Evolv
Technology, a security firm that wants to replace traditional metal
detectors with AI weapons scanners.


Milton resident's lawsuit against CVS raises questions about the use of AI lie detectors in hiring (The Boston Globe)

Steve Bacher <sebmb1@verizon.net>
Tue, 23 May 2023 13:45:48 +0000 (UTC)
https://www.boston.com/news/the-boston-globe/2023/05/22/milton-residents-lawsuit-cvs-ai-lie-detectors

It's illegal for employers in Mass. to use a lie detector to screen job
applicants, but what if they use AI to assess a candidate's honesty?


EPIC on Generative AI

Prashanth Mundkur <prashanth.mundkur@sri.com>
Thu, 25 May 2023 02:39:50 +0000
In case you haven't seen this great report:

https://epic.org/new-epic-report-sheds-light-on-generative-a-i-harms/


Reality check: What will generative AI really do for cybersecurity? (Cyberscoop)

Richard Marlon Stein <rmstein@protonmail.com>
Wed, 24 May 2023 12:20:15 +0000
https://cyberscoop.com/generative-ai-chatbots-cybersecurity/ via
https://www.washingtonpost.com/politics/2023/05/24/food-agriculture-industry-gets-new-center-share-cybersecurity-information/.

"Cleaning data to get it usable for machine learning required time and
resources, and once the agency rolled out the models for analysts to use,
some were resistant and were concerned that they could be displaced. 'It
took a while until it was accepted that such models could triage and to give
them a more effective role,' Neuberger said."

Data cleansing requires skilled eyes and hands, not something an LLM
possesses out-of-the-box. Inculcating these skills into the LLM is
equivalent to outsourcing and off-shoring.

If the data cleansers and infosec engineers were given certain copyright or
patent royalties over the knowledge they transferred into the LLM,
cybersecurity engineering organizational effectiveness would likely
experience less turnover.


Moody's cites credit risk from state-backed cyber intrusions into U.S. critical infrastructure (cybersecuritydive.com)

Richard Marlon Stein <rmstein@protonmail.com>
Thu, 01 Jun 2023 12:00:26 +0000
https://www.cybersecuritydive.com/news/moodys-credit-risk-cyber-critical-infrastructure/651656/

Corporate credit ratings are affected by cybersecurity risk assessments.
This expense should motivate rapid adoption of infrastructure-hardening
measures to elevate ratings based on infosec audits and preparedness,
thereby reducing cyber-insurance costs.

  Where are the skilled hands to competently roll out these core
  capabilities and to sustain vigilant operations? Cyber infrastructure
  engineers and watchdogs are hard to train, recruit, and retain. Not a role
  typically out-sourced or off-shored, unlike the domestic US computer
  manufacturing operations migrated to cut costs.

  It remains to be seen whether AI mitigates hockey-stick cybersecurity
  expenditures while improving protection benchmarks established by
  CISA. Business expenses attributed to cyber-insurance cannot be absorbed by
  consumers indefinitely.


What Happens When Your Lawyer Uses ChatGPT (NYTimes)

Jan Wolitzky <jan.wolitzky@gmail.com>
Sat, 27 May 2023 13:45:02 -0400
The lawsuit began like so many others: A man named Roberto Mata sued the
airline Avianca, saying he was injured when a metal serving cart struck his
knee during a flight to Kennedy International Airport in New York.

When Avianca asked a Manhattan federal judge to toss out the case, Mr.
Mata's lawyers vehemently objected, submitting a 10-page brief that cited
more than half a dozen relevant court decisions. There was Martinez v.
Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v.
China Southern Airlines, with its learned discussion of federal law and
“the tolling effect of the automatic stay on a statute of limitations.''

There was just one hitch: No one—not the airline's lawyers, not even the
judge himself—could find the decisions or the quotations cited and
summarized in the brief.

That was because ChatGPT had invented everything.

https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

  [Gabe Goldberg noted this item, and commented:
    I guess hallucinations aren't admissible...
  Matthew Kruk found another report in the BBC:
    https://www.bbc.com/news/world-us-canada-65735769
  Amos Shapir noted that one as well, with this line:
    The lawyer claimed that it was the first time he had used AI,
    and was “unaware that its content could be false.''
  PGN]


Anger over airports' passport e-gates not working (BBC News)

Gabe Goldberg <gabe@gabegold.com>
Sun, 28 May 2023 16:38:03 -0400
Passengers flying into the UK faced hours of delays at airports across the
country where passport e-gates were not working.

Travelers told of their anger at being stuck in queues at airports including
Heathrow, Manchester and Gatwick.

The Home Office said on Saturday evening that all e-gates were now operating
as normal.

The disruption, which began on Friday night, had been due to an IT issue, a
source told the BBC.

All airports across the country using the technology were affected.

The e-gate system speeds up passport control by allowing some passengers to
scan their own passports. It uses facial recognition to verify identity and
captures the traveler's image.

https://www.bbc.com/news/uk-65731795


Longer and longer trains are blocking emergency services and killing people (WashPost)

Lauren Weinstein <lauren@vortex.com>
Sat, 27 May 2023 16:26:37 -0700
https://www.washingtonpost.com/nation/interactive/2023/long-trains-block-intersections-paramedics/


Denials of health-insurance claims are rising and getting weirder (WashPost)

Richard Marlon Stein <rmstein@protonmail.com>
Wed, 17 May 2023 22:29:47 +0000
https://www.washingtonpost.com/opinions/2023/05/17/health-insurance-denial-claims-reasons/

“ProPublica's investigation, published in March, found that an automated
system, called PXDX, allowed Cigna medical reviewers to sign off on 50
charts in 10 seconds presumably without even examining the patients'
records.''

Another electronic health record advantage.


Small plane crashes after jet fighter chase in WashDC area (WashPost)

Lauren Weinstein <lauren@vortex.com>
Sun, 4 Jun 2023 15:12:01 -0700
A small plane (Cessna Citation)—apparently on autopilot with an
unresponsive pilot—crashed in mountainous Virginia terrain after violating
DC airspace and being chased by jet fighters, whose supersonic pursuit
caused a sonic boom across DC. -L

  [I was just on a zoom call with folks in the DC area.  The boom
  was heard quite widely.  PGN]


Response from American Airlines for delay

"Steven J. Greenwald" <greenwald.steve@gmail.com>
Sat, 20 May 2023 15:18:11 -0400
For my flight out of DFW the other day, American Airlines had a major issue
with a bugged up Airbus 321. They couldn't debug it, so they had to change
the plane/gate.

I found it unusual enough to ask for confirmation from American Airlines
(they gave it, included below).

> Date: Sat, May 20, 2023 at 2:46 PM
> From: <AmericanAirlinesCustomerRelations@aa.com>
> Subject: Your Response From American Airlines

> Thank you for contacting Customer Relations. I am happy to respond to your
> inquiry regarding the reason for the delay of AA2206.

> Our records indicate that your flight was delayed due to an aircraft
> change caused by a moth infestation.  [...]

  [And it was not even Moth-ers' Day.  PGN]


Microsoft Finds macOS Bug That Lets Hackers Bypass SIP Root Restrictions (Sergiu Gatlan)

ACM TechNews <technews-editor@acm.org>
Fri, 2 Jun 2023 11:53:10 -0400 (EDT)
Sergiu Gatlan, *BleepingComputer*, 30 May 2023, via ACM TechNews

Apple has patched a vulnerability discovered by Microsoft security
researchers, dubbed Migraine, that would have allowed attackers with root
privileges to install *undeleteable* malware and access the victim's private
data. The researchers said, “By focusing on system processes that are
signed by Apple and have the com.apple.rootless.install.heritable
entitlement, we found two child processes that could be tampered with to
gain arbitrary code execution in a security context that bypasses SIP
[System Integrity Protection] checks.''  Bypassing SIP would also allow
attackers to circumvent Transparency, Consent, and Control (TCC) policies to
gain access to the victim's private data. The vulnerability was patched in
Apple's May 18 security updates for macOS Ventura 13.4, macOS Monterey
12.6.6, and macOS Big Sur 11.7.7.


Apps for Older Adults Contain Security Vulnerabilities (Patrick Lejtenyi)

ACM TechNews <technews-editor@acm.org>
Wed, 24 May 2023 11:23:31 -0400 (EDT)
Patrick Lejtenyi, Concordia University, Canada, 23 May 2023

Researchers at Canada's Concordia University found security bugs in 95 of
146 popular Android applications designed for older adults. The researchers
discovered that many apps failed to properly authenticate server application
programming interface endpoints, which attackers could exploit to access
sensitive personal data. Other apps had easily penetrable accounts, with
some sending unencrypted information to either client-side servers or
third-party domains. The researchers found multiple other flaws in dozens of
other apps. Only seven of the 35 app developers the team contacted about the
bugs responded. Concordia's Pranay Kapoor said the vulnerabilities could be
remedied by following best practices for basic security.
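The missing endpoint authentication the researchers describe amounts to the
absence of a check like the following minimal sketch (a generic
illustration, not code from any of the studied apps; the user, token store,
and record contents are entirely hypothetical):

```python
import hmac

# Hypothetical server-side check: every API endpoint verifies a per-user
# token before returning personal data. The flawed apps in the study
# reportedly skipped such a check, so any caller could fetch the records.
VALID_TOKENS = {"user42": "s3cr3t-token"}  # stand-in for a real token store

def authorize(user_id: str, presented_token: str) -> bool:
    """Return True only if the presented token matches the user's token."""
    expected = VALID_TOKENS.get(user_id)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, presented_token)

def get_health_record(user_id: str, token: str) -> dict:
    """Endpoint handler that rejects unauthenticated requests."""
    if not authorize(user_id, token):
        raise PermissionError("unauthenticated request rejected")
    return {"user": user_id, "record": "..."}
```

The fix the researchers allude to is exactly this: refuse to serve data
unless the check passes, rather than trusting the client.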


India official drains entire dam to retrieve phone (BBC)

Matthew Kruk <mkrukg@gmail.com>
Fri, 26 May 2023 21:09:12 -0600
https://www.bbc.com/news/world-asia-india-65726193

A government official in India has been suspended after he ordered a
reservoir to be drained to retrieve his phone.

It took three days to pump millions of litres of water out of the dam, after
Rajesh Vishwas dropped the device while taking a selfie.

By the time it was found, the phone was too water-logged to work.

  [Jim Reisert found that here:
https://www.cnn.com/2023/05/28/india/india-reservoir-drained-selfie-photo-intl-hnk/
  PGN]

  [Out, Out, Dammed Drought.  PGN]


Google's Privacy Sandbox

Lauren Weinstein <lauren@vortex.com>
Thu, 18 May 2023 08:31:26 -0700
I believe the big Achilles heel of Google's Privacy Sandbox, their
continuing effort already rolling out in trials, is Ad Topics, which
replaces third-party cookies with an advertising API involving local
on-device modeling of your browsing history into predefined categories
(about 350), with sites able to receive up to three of the most highly
ranked.

Google asserts that this will maintain or increase the value of targeted ads
while increasing individual user privacy by moving away from third-party
cookies and the ad hoc techniques sites use to try to target individual
users.

Google is moving to default the various aspects of Privacy Sandbox to ON,
based on the usual hope that users won't bother to change the defaults.
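The mechanism described above can be sketched as a toy simulation (ours, not
Google's code; the site-to-category mapping below is entirely hypothetical,
and Chrome's real taxonomy, ranking, and per-week rotation are more
involved):

```python
from collections import Counter

# Illustrative only: map visited sites to predefined interest categories
# on the device, then expose just the top few categories to a caller.
SITE_TO_TOPIC = {  # hypothetical mapping; the real ~350-entry taxonomy
    "kayak.example": "Travel",       # lives inside the browser
    "espn.example": "Sports",
    "goal.example": "Sports",
    "allrecipes.example": "Cooking",
}

def top_topics(history: list[str], k: int = 3) -> list[str]:
    """Rank the local browsing history into categories; keep the top k."""
    counts = Counter(SITE_TO_TOPIC[site] for site in history
                     if site in SITE_TO_TOPIC)
    return [topic for topic, _ in counts.most_common(k)]
```

The point of the design is that the raw history never leaves the device;
only the coarse category labels do. Lauren's argument is that even those
labels are derived from browsing history, which is exactly what users
consider private.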

I think the two words that spell the main trouble for this plan are
"browsing history." Most people are quite sensitive about this and assume it
is private. Even if shared with Google for enhanced services, they don't
really want advertisers to know anything about it.

Hell, even I feel an emotional punch when I think about advertisers being
handed information about my browsing, no matter how carefully categorized,
anonymized, and sanitized. And I know how this stuff actually works. I even
agree that in theory it's better than the status quo with third-party
cookies, etc.

Is this really going to fly in the long run? It seems unlikely as currently
defined. Most people aren't going to understand it, just like they don't
understand that Google doesn't sell user data to advertisers—a widely
held false belief that Google has never really been able to dispel. And
Privacy Sandbox is even more complicated to explain to the average
nontechnical person.

Politicians from both parties are going to jump all over this. The fine
points of privacy balance will be lost in the noise.

This is unlikely to end well for anyone.


WebKit Under Attack: Apple Issues Emergency Patches for 3 New Zero-Day Vulnerabilities (Apple)

geoff goodfellow <geoff@iconia.com>
Sat, 20 May 2023 12:59:55 -0700
Apple on Thursday rolled out security updates
<https://support.apple.com/en-us/HT201222> to iOS, iPadOS, macOS, tvOS,
watchOS, and the Safari web browser to address three new zero-day flaws
that it said are being actively exploited in the wild.

The three security shortcomings are listed below:

   - CVE-2023-32409 - A WebKit flaw that could be exploited by a malicious
   actor to break out of the Web Content sandbox. It was addressed with
   improved bounds checks.
   - CVE-2023-28204 - An out-of-bounds read issue in WebKit that could be
   abused to disclose sensitive information when processing web content. It
   was addressed with improved input validation.
   - CVE-2023-32373 - A use-after-free bug in WebKit that could lead to
   arbitrary code execution when processing maliciously crafted web content.
   It was addressed with improved memory management.   [...]


Q&A: Why is there so much hype about the quantum computer? (phys.org)

Richard Marlon Stein <rmstein@protonmail.com>
Tue, 23 May 2023 03:36:59 +0000
https://phys.org/news/2023-05-qa-hype-quantum.html

“Calculations show that it takes a quantum computer of 10^20M quantum bits
[qubits *] to break an RSA encryption. Right now, the largest quantum
computer is in the region of 430 quantum bits. So there is still some way to
go. So, at the risk of becoming a laughing stock for posterity, I would
guess that it will take another 20 years before we have a quantum computer
that meets these expectations.''

Four orders of magnitude of scaling in the qubit space is a mighty tall
order to achieve. Government funding will be essential to back innovation on
this turf.
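As a back-of-envelope check on that "four orders of magnitude" (our
assumption, not the article's exact figure: a code-breaking machine on the
order of 10 million qubits, versus today's roughly 430):

```python
import math

# Rough scaling gap between today's largest quantum computer and one
# assumed capable of breaking RSA. Both figures are order-of-magnitude
# assumptions for illustration.
required_qubits = 10_000_000
current_qubits = 430

gap_orders = math.log10(required_qubits / current_qubits)
print(f"scaling gap: about {gap_orders:.1f} orders of magnitude")
```

With these assumptions the gap comes out to a bit under 4.4 orders of
magnitude, consistent with the "four orders" characterization.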

  [*And that's without the massive error-correction that is required for a
  huge-qubit quantum computer—and let's not forget out-put(t)s on the
  turf.  PGN]


Report Estimates Trillions in Indirect Losses Would Follow Quantum Computer Hack (nextgov.com)

Richard Marlon Stein <rmstein@protonmail.com>
Tue, 23 May 2023 12:12:00 +0000
https://www.nextgov.com/cybersecurity/2023/05/report-estimates-trillions-indirect-losses-would-follow-quantum-computer-hack/386653/

“An analysis projects the hypothetical disruption a cyberattack from a
quantum computer could have on global financial markets.''

The original report from the Hudson Institute is
https://www.hudson.org/events/prosperity-risk-quantum-computer-threat-us-financial-system.

[Same old financial chaos with a quantum twist.]


Don't Store Your Money on Venmo, U.S. Govt Agency Warns (Gizmodo)

Monty Solomon <monty@roscom.com>
Sat, 3 Jun 2023 13:32:00 -0400
https://gizmodo.com/venmo-paypal-digital-payments-cashapp-1850500772


Re: An EFF Investigation: Mystery GPS Tracker

Steve Lamont <spl@tirebiter.org>
Sat, 20 May 2023 10:13:19 -0700
The device in question sounds very similar to the better known LoJack stolen
vehicle recovery service, though generally they're better hidden than just
under the driver's seat.

I had one put in a used car I bought and the installer wouldn't even let me
watch.  (I suspect it was placed under the rear seat but never bothered to
poke around to look.)

I've had one in each in my last three new vehicles.  The dealer installs
them by default and the one time fee is included in the purchase price.

According to the brochure, at least, the device tracking is only activated
if/when the vehicle is reported stolen.


Re: Three Companies Supplied Fake Comments to FCC (NY AG), but John Oliver didn't (RISKS-33.71)

"John Levine" <johnl@iecc.com>
16 May 2023 20:12:51 -0400
> I suppose this is not nearly on the same level as what those companies
> did, however.

It's not even slightly the same. Oliver was encouraging his viewers to send
their own messages to the FCC. Real messages from actual people are fine,
so fine that the last clause of the First Amendment specifically allows it:

  Congress shall make no law respecting an establishment of religion,
  or prohibiting the free exercise thereof; or abridging the freedom of
  speech, or of the press; or the right of the people peaceably to
  assemble, and to petition the Government for a redress of grievances.

Sure, a lot of the messages will say the same thing, but that's
nothing new. Back in the day people set up card tables with postcards
with preprinted messages you could write your name under and mail to
your representatives. So long as each postcard was from the real
person whose name was on it, no problem.

These guys were sending fake comments using names of people who had no
idea that messages were being sent using their names. I hope the
difference is not hard to see.


Re: Near collision embarrasses Navy, so they order public San Diego webcams taken down (Bacher, RISKS-33.71)

Michael Kohne <mhkohne@kohne.org>
Wed, 17 May 2023 10:57:47 -0400
I think you're wrong. Most of what the Navy does is pretty much out in the
open. There's no way to prevent people watching what the Navy is doing
anywhere near shore. It's just the nature of the beast—you can't stop
people taking pictures of the ocean, guys! And anyone competent in the Navy
is aware of that, but frankly it never hurts for them to be publicly
reminded of that fact—which is why I also tend to believe this is more
about embarrassment, rather than any actual purpose.
