The RISKS Digest
Volume 33 Issue 86

Saturday, 23rd September 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Driverless Car Company Using Chatbots to Make Its Vehicles Smarter
MIT Tech Review
ChatGPT Can Now Generate Images
NYTimes
Prominent Authors Sue OpenAI
NYTimes
Google Search first result for "Tank Man" is a fake AI image rather than the actual image from China
404Media + Lauren Weinstein
Misinformation research is buckling under GOP legal attacks
WashPost
Egyptian presidential hopeful targeted by Predator spyware
WashPost
It's 2030, and digital wallets have replaced every card in our purses and pockets
ZDNET
Google accused of directing motorist to drive off collapsed bridge
BBC
Typeface trolls shaking down users of Adobe's font platform
BoingBoing
Bitcoin conspiracy theory
PGN via John Markoff
Re: Pedestrian dies after Cruise cars block ambulance
Amos Shapir, John Levine
Info on RISKS (comp.risks)

Driverless Car Company Using Chatbots to Make Its Vehicles Smarter (MIT Tech Review)

ACM TechNews <technews-editor@acm.org>
Wed, 20 Sep 2023 12:08:18 -0400 (EDT)
Will Douglas Heaven, MIT Technology Review, 14 Sep 2023, via ACM
TechNews, 20 Sep 2023

U.K.-based driverless car company Wayve has tapped chatbot technology to
question its vehicles about their driving decisions. The company combined
its self-driving software with a large language model into the LINGO-1
hybrid model, which synchronizes video and driving data with
natural-language descriptions that record the car's observations and
actions. Wayve aims to know how and why its cars make certain decisions by
quizzing the self-driving software at every step, helping to expose flaws
faster than sifting through video playbacks or scrolling through error
reports. The University of California, Berkeley's Pieter Abbeel said, “With
a system like LINGO-1, I think you get a much better idea of how well it
understands driving in the world.”

  [Smarter than what? A bedpost?  PGN]


ChatGPT Can Now Generate Images (NYTimes)

ACM TechNews <technews-editor@acm.org>
Fri, 22 Sep 2023 11:35:28 -0400 (EDT)
Cade Metz and Tiffany Hsu, *The New York Times*, 20 Sep 2023,
via ACM TechNews, 22 Sep 2023

OpenAI has integrated a new version of its DALL-E image generator into its
ChatGPT online chatbot. DALL-E 3 generates more detailed images than its
predecessors, with notable improvements in images featuring letters,
numbers, and human hands. The new version of the image generator can create
images from multi-paragraph descriptions and follow detailed
instructions. OpenAI's Aditya Ramesh said DALL-E 3 was given a more precise
understanding of the English language. The DALL-E/ChatGPT integration means
ChatGPT can generate digital images based on detailed textual descriptions
provided by users or produced by the chatbot itself. OpenAI has included
tools in DALL-E 3 to prevent the generation of sexually explicit images,
images of public figures, and images that imitate the styles of specific
artists.

  [It's all over now, when you can create a realistic image of almost anyone
  saying almost anything, however faked, ridiculous, and perhaps even
  irrefutable.  The world as we knew it is no longer.  Ground meat was
  always a mystery, but ground truth seems to be irrelevant.  Now even what
  seems to be the real thing may not be real, as in 3-D printing bots
  cranking out not only tender synthetic steaks, spare ribs, and replaceable
  body parts, but also fake binding contracts, `originally' signed
  masterpiece paintings, altered classic movies, and almost anything else.
  PGN]


Prominent Authors Sue OpenAI (NYTimes +)

Jan Wolitzky <jan.wolitzky@gmail.com>
Wed, 20 Sep 2023 10:48:47 -0400
A group of prominent novelists, including John Grisham, Jonathan Franzen
and Elin Hilderbrand, are joining the legal battle against OpenAI over its
chatbot technology, as fears about the encroachment of artificial
intelligence on creative industries continue to grow.

More than a dozen authors filed a lawsuit against OpenAI on Tuesday,
accusing the company, which has been backed with billions of dollars in
investment from Microsoft, of infringing on their copyrights by using their
books to train its popular ChatGPT chatbot. The complaint, which was filed
along with the Authors Guild, said that OpenAI's chatbots can now produce
*derivative works* that can mimic and summarize the authors' books,
potentially harming the market for authors' work, and that the writers were
neither compensated nor notified by the company.

“The success and profitability of OpenAI are predicated on mass copyright
infringement without a word of permission from or a nickel of compensation
to copyright owners,” the complaint said.

https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-copyright.html

  [Ellen Ullman noted (via Dave Farber) another article on
  this subject:
    "Algorithmic destruction" and the deep algorithmic problems of AI and
    copyright,
  https://www.sfchronicle.com/tech/article/ai-artificial-intelligence-copyright-18374295.php
    where Chase DiFeliciantonio asks “Could *algorithmic destruction* solve
    AI's copyright issues?”
  PGN]


Google Search first result for "Tank Man" is a fake AI image rather than the actual image from China (404Media)

Lauren Weinstein <lauren@vortex.com>
Wed, 20 Sep 2023 11:17:25 -0700
https://www.404media.co/first-google-search-result-for-tiananmen-square-tank-man-is-ai-generated-selfie/

  Yes, ranking a fake image like that as the top result IS MISINFORMATION
  from Google. AWFUL. -L


Misinformation research is buckling under GOP legal attacks (WashPost)

Jan Wolitzky <jan.wolitzky@gmail.com>
Sat, 23 Sep 2023 15:53:36 -0400
Academics, universities and government agencies are overhauling or ending
research programs designed to counter the spread of online misinformation
amid a legal campaign from conservative politicians and activists who
accuse them of colluding with tech companies to censor right-wing views.

The escalating campaign -- led by Rep. Jim Jordan (R-Ohio) and other
Republicans in Congress and state government -- has cast a pall over
programs that study not just political falsehoods but also the quality of
medical information online.

Facing litigation, Stanford University officials are discussing how they
can continue tracking election-related misinformation through the Election
Integrity Partnership (EIP), a prominent consortium that flagged social
media conspiracies about voting in 2020 and 2022, several participants told
The Washington Post. The coalition of disinformation researchers may shrink
and also may stop communicating with X and Facebook about their findings.

The National Institutes of Health froze a $150 million program intended to
advance the communication of medical information, citing regulatory and
legal threats. Physicians told The Post that they had planned to use the
grants to fund projects on noncontroversial topics such as nutritional
guidelines and not just politically charged issues such as vaccinations
that have been the focus of the conservative allegations.

NIH officials sent a memo in July to some employees, warning them not to
flag misleading social media posts to tech companies and to limit their
communication with the public to answering medical questions.

https://www.washingtonpost.com/technology/2023/09/23/online-misinformation-jim-jordan/


Egyptian presidential hopeful targeted by Predator spyware (WashPost)

Jan Wolitzky <jan.wolitzky@gmail.com>
Sat, 23 Sep 2023 15:50:44 -0400
A prominent Egyptian opposition politician who plans to challenge President
Abdel Fatah El-Sisi in elections expected early next year was targeted with
a previously unknown zero-day attack in an effort to infect his phone with
Predator spyware, according to new research by Google and the University of
Toronto's Citizen Lab.

The discovery of the valuable zero-day exploit, designed to install
Predator on iPhones running even the most up-to-date operating system,
prompted Apple to push a security update to users on Thursday afternoon.

Citizen Lab said it had *high confidence* that the Egyptian government was
responsible for the failed hacking attempt. The effort targeted journalist
and former member of parliament Ahmed Eltantawy and was first reported by
Mada Masr, an independent Egyptian news organization. Eltantawy had been
living briefly in Lebanon but moved back to Egypt in May.

https://www.washingtonpost.com/investigations/2023/09/23/predator-egypt-hack-spyware-iphone/


It's 2030, and digital wallets have replaced every card in our purses and pockets (ZDNET)

Gabe Goldberg <gabe@gabegold.com>
Fri, 22 Sep 2023 23:40:44 -0400
OpenWallet, now joined by Microsoft, looks to the near future when digital
wallets replace traditional wallets in the same way debit cards replaced
checkbooks.

https://www.zdnet.com/finance/its-2030-and-digital-wallets-have-replaced-every-card-in-our-purses-and-pockets/

  Nice basket you have there. Be a shame if anything happened to all those
  eggs.

    [Even worse, there's another Y2K+10x problem and the date on every
    transaction goes back 10 years and everyone is hosed.  PGN]


Google accused of directing motorist to drive off collapsed bridge (BBC)

Matthew Kruk <mkrukg@gmail.com>
Thu, 21 Sep 2023 07:30:17 -0600
https://www.bbc.com/news/world-us-canada-66873982

*The family of a U.S. man who drowned after driving off a collapsed bridge
are claiming that he died because Google failed to update its maps.*

Philip Paxson's family are suing the company over his death, alleging that
Google negligently failed to show the bridge had fallen nine years earlier.

Mr Paxson died in September 2022 after attempting to drive over the damaged
bridge in Hickory, North Carolina.

A spokesperson for Google said the company was reviewing the allegations.


Typeface trolls shaking down users of Adobe's font platform (BoingBoing)

Gabe Goldberg <gabe@gabegold.com>
Thu, 21 Sep 2023 23:44:55 -0400
Do you use a font through Adobe's font platform? Is it Proxima Nova?  Users
of the typeface report being threatened by a foundry that claims to
represent its creator, and Adobe isn't taking calls. The copyright troll
business model, where lawyers demand money from people who know that proving
their innocence would cost even more, has come to the land of fancy fonts.

https://boingboing.net/2023/09/14/typeface-trolls-shaking-down-users-of-adobes-font-platform.html


Bitcoin conspiracy theory

Peter Neumann <neumann@csl.sri.com>
Fri, 22 Sep 2023 21:24:36 PDT
 [Thanks to John Markoff.]

Nic Carter doubles down on theory Bitcoin was invented by NSA

https://cointelegraph.com/news/nic-carter-supports-bitcoin-invented-by-nsa-conspiracy-theory?utm_source=artifact


Re: Pedestrian dies after Cruise cars block ambulance (RISKS-33.85)

Amos Shapir <amos083@gmail.com>
Fri, 22 Sep 2023 12:58:34 +0300
It seems that autonomous cars have stumbled upon what every beginner driver
should realize: that the hardest part of driving is not operating the vehicle
but dealing with other drivers, and that the hardest part of that is trying
to decipher their intentions and act accordingly.

Robotic systems are notoriously bad at recognizing human intentions—a
lot of context is required, which is often implicit, implied, or even
virtual.


Re: Pedestrian dies after Cruise cars block ambulance (RISKS-33.85)

"John Levine" <johnl@iecc.com>
20 Sep 2023 15:21:10 -0700
I've seen a lot of arguments about whether self-driving cars are more or
less safe than human drivers, based on accidents or deaths per mile driven.
They probably are, although robocars haven't yet been driven enough for the
numbers to be meaningful.

But we're only now coming to grips with the fact that when they fail,
they often do so in ways very unlike the ways that human drivers fail.
They may be very good at stopping for red lights, but they are a lot
less good at going through a red light to get out of the way of an
ambulance.

Please report problems with the web pages to the maintainer
