The RISKS Digest
Volume 33 Issue 90

Thursday, 19th October 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

How ChatGPT and other AI tools could disrupt scientific publishing
Nature
‘Algorithmic destruction’ and the deep algorithmic problems of AI and copyright
San Francisco Chronicle
A Chatbot Encouraged Him to Kill the Queen. It's Just the Beginning
WiReD
Dilemma of the Artificial Intelligence Regulatory Landscape
CACM Vol 66 No 9
Experts Worry as Facial Recognition Comes to Airports and Cruises
NYTimes
Deepfake Election Interference in Slovakia
Bruce Schneier
A big win in our fight to reclaim the Internet!
Mozilla
Win $12k by rediscovering the secret phrases that secure the Internet
New Scientist
Your old phone is safe for longer than you think
WashPost
How do you get out of a $28,000 timeshare mistake?
Elliott
The TSA wants to put a government tracking app on your smartphone
PapersPlease
New York Bill Would Require a Criminal Background Check to Buy a 3D Printer
Gizmodo
Burned-out parents seek help from a new ally: ChatGPT
geoff goodfellow
Allied Spy Chiefs Warn of Chinese Espionage Targeting Tech Firms
NYTimes
Top crypto firms named in $1bn fraud lawsuit
BBC
The secret life of Jimmy Zhong, who stole and lost more than $3B
CNBC
Why do people fall for grief scams?
Rob Slade
Remote Driving Is a Sneaky Shortcut to the Robotaxi
WiReD
Re: Autonomous Vehicles Are Driving Blind
Chris Volpe
Re: False news spreads faster than the truth
Amos Shapir
Re: Vermont Utility Plans to End Outages by Giving Customers Batteries
John Levine
Info on RISKS (comp.risks)

How ChatGPT and other AI tools could disrupt scientific publishing (Nature)

Steve Bacher <sebmb1@verizon.net>
Sat, 14 Oct 2023 09:52:05 -0700

https://www.nature.com/articles/d41586-023-03144-w

When radiologist Domenico Mastrodicasa finds himself stuck while writing a research paper, he turns to ChatGPT, the chatbot that produces fluent responses to almost any query in seconds. “I use it as a sounding board,” says Mastrodicasa, who is based at the University of Washington School of Medicine in Seattle. “I can produce a publication-ready manuscript much faster.”


‘Algorithmic destruction’ and the deep algorithmic problems of AI and copyright (San Francisco Chronicle)

Ellen Ullman <ullman@well.com>
September 24, 2023 10:06:15 JST
[From Dave Farber's IP distribution]

Could ‘algorithmic destruction’ solve AI's copyright issues? https://www.sfchronicle.com/tech/article/ai-artificial-intelligence-copyright-18374295.php

By Chase DiFeliciantonio, 23 Sep 2023

Comments

OpenAI’s ChatGPT is trained by “consuming” vast amounts of information online. Some authors have sued OpenAI alleging the company unfairly used their copyrighted works to teach its chatbots how to respond to written prompts. One way to fix that could be to employ “algorithmic destruction.”

If artificial intelligence mimics our brains, does that mean it too can unlearn something it knows?

That question is central to lawsuits filed by a range of creatives who say their copyrighted work was infringed by OpenAI and Meta. But making an AI “forget” isn’t the same as removing the blocky chips from HAL 9000’s digital brain in “2001: A Space Odyssey.” In fact, the lawsuits raise the question: is “unlearning” even possible for an AI? And, if not, are there other ways to ensure generative AI programs don’t draw from copyrighted material, short of tearing them down?

Enter “algorithmic destruction,” a term that entails trashing an AI model that may have taken years and millions of dollars to train, then rebuilding it from scratch by inputting only fair-use text, images and data.

That would be “the most extreme remedy” to issues highlighted in lawsuits like those filed against OpenAI and Meta, said Pamela Samuelson, a UC Berkeley professor and expert in generative AI and copyright law.

But, she said, it’s not unthinkable.

Here’s how it might work:

Since AI models aren't some baby powder that can be easily recalled and remade after slapping a company with a fine, there are basically three approaches given the current way the technology works, plus one more path that would change how it “thinks,” said UC Berkeley professor and computer scientist Matei Zaharia:

  Destroy the model.
  “Screen” results that include copyrighted material.
  Retrain the model.

A fourth route would be to invent models that work more like a super-smart web search and can cite their sources, unlike chatbots such as GPT-3 which, similar to a human brain, don't always know where they learned something, or whether it's totally accurate.

That way programmers could, in theory, pull documents from a model's training set, like nodes of the HAL 9000's processor, so the program could no longer reference them when asked a question, said Zaharia, who is working on that kind of approach.

With the way the dominant generative AI technology works for now, though, “it's hard to make models forget specific content,” said Zaharia, who is also the co-founder and CTO of San Francisco's Databricks.

The easiest way to keep a program from spitting out information it shouldn't “would probably be after your model generates something, but before you send it back to the user, you check, ‘Is this really close to something’ ” like a copyrighted work, Zaharia said.

Telling the program to skate around copyrighted material, specifically, would probably not work since, like a toddler with superpowers, “it doesn't really know” what is copyrighted and what isn't, Zaharia said.
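
For readers who want the flavor of the screening approach Zaharia describes, here is a minimal sketch in Python. It is not from the article: the protected passages, the similarity measure (difflib), and the threshold are all illustrative assumptions, and a real filter would need a far more robust matching index.

  # Illustrative sketch of post-generation screening: before returning a
  # model's draft to the user, check whether it closely matches any entry
  # in an index of protected text. Passages and threshold are made up.
  from difflib import SequenceMatcher

  PROTECTED_PASSAGES = [
      "It was the best of times, it was the worst of times",
      "Call me Ishmael.",
  ]

  def too_similar(draft: str, threshold: float = 0.8) -> bool:
      """Return True if the draft closely resembles a protected passage."""
      return any(
          SequenceMatcher(None, draft.lower(), p.lower()).ratio() >= threshold
          for p in PROTECTED_PASSAGES
      )

  def respond(draft: str) -> str:
      # Withhold (or regenerate) answers that trip the screen.
      if too_similar(draft):
          return "[response withheld: too close to protected text]"
      return draft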


A Chatbot Encouraged Him to Kill the Queen. It's Just the Beginning (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Thu, 19 Oct 2023 01:04:01 -0400

Companies are designing AI to appear increasingly human. That can mislead users or worse.

Humans are prone to see two dots and a line and think they're a face. When they do it to chatbots, it's known as the Eliza effect. The name comes from the first chatbot, Eliza, developed by MIT scientist Joseph Weizenbaum in 1966. Weizenbaum noticed users were ascribing erroneous insights to a text generator simulating a therapist. […]

Mental health chatbots may carry similar risks. Jodi Halpern, a professor of bioethics at UC Berkeley, whose work has challenged the idea of using AI chatbots to help meet the rising demand for mental health care, has become increasingly concerned by a marketing push to sell these apps as caring companions. She's worried that patients are being encouraged to develop dependent relationships “of trust, intimacy, and vulnerability” with an app. This is a form of manipulation, Halpern says. And should the app fail the user, there is often no mental health professional ready to come to their aid. Artificial intelligence cannot stand in for human empathy, she says.

https://www.wired.com/story/chatbot-kill-the-queen-eliza-effect


Dilemma of the Artificial Intelligence Regulatory Landscape (CACM Vol 66 No 9)

Cliff Kilby <cliffjkilby@gmail.com>
Sun, 15 Oct 2023 16:49:04 -0400

In the opinion piece “Dilemma of the Artificial Intelligence Regulatory Landscape,” Wu and Liu note that, with the rapid expansion of LLMs and progress towards true AI, regulatory frameworks are woefully unprepared. The authors are of the opinion that implementation of new features should be accelerated regardless of the regulatory gaps, stating “We have found the key to settling concerns is to clearly convey the message that potential benefits outweigh relevant risks.” This is an extremely troubling approach, one that led to terrible things like the ozone hole over Antarctica, and one that I would caution strongly against. Doubly so, considering that LLMs regularly demonstrate that they pose at least as many risks as benefits, if not more. Copyright lawsuits, intellectual-property disputes, and even a libel case have all appeared in relation to LLMs. In summation, I disagree.


Experts Worry as Facial Recognition Comes to Airports and Cruises (The New York Times)

Gabe Goldberg <gabe@gabegold.com>
Sun, 15 Oct 2023 18:13:54 -0400

Facial recognition software is speeding up check-in at airports, cruise ships and theme parks, but experts worry about risks to security and privacy.

Facial recognition technology will increasingly offer travelers shorter lines and fewer documents to juggle, but all that convenience may have a cost, warned Jay Stanley, a senior policy analyst at the American Civil Liberties Union. By accepting more surveillance technology, he said, “we open ourselves to tracking where we are and who we are with all the time.”

https://www.nytimes.com/2023/10/13/travel/facial-recognition-airports-cruises.html?smid=nytcore-ios-share&referringSource=articleShare


Deepfake Election Interference in Slovakia

Bruce Schneier <schneier@schneier.com>
Sun, 15 Oct 2023 09:17:04 +0000

[2023.10.06] [https://www.schneier.com/blog/archives/2023/10/deepfake-election-interference-in-slovakia.html] A well-designed and well-timed deepfake [https://www.wired.co.uk/article/slovakia-election-deepfakes] of two Slovakian politicians discussing how to rig the election:

Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of news agency AFP said [https://fakty.afp.com/doc.afp.com.33WY9LF] the audio showed signs of being manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia's election rules, the post was difficult to widely debunk. And, because the post was audio, it exploited a loophole in Meta's manipulated-media policy, which dictates [https://transparency.fb.com/en-gb/policies/community-standards/manipulated-media/] that only faked videos—where a person has been edited to say words they never said—go against its rules.

I just wrote about this [https://theconversation.com/ai-disinformation-is-a-threat-to-elections-learning-to-spot-russian-chinese-and-iranian-meddling-in-other-countries-can-help-the-us-prepare-for-2024-214358]. Countries like Russia and China tend to test their attacks out on smaller countries before unleashing them on larger ones. Consider this a preview to their actions in the US next year.

[See also an excellent long item that precedes this one:

Political Disinformation and AI [2023.10.05] [https://www.schneier.com/blog/archives/2023/10/political-disinformation-and-ai.html] Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.]


A big win in our fight to reclaim the Internet!

Mozilla <mozilla@email.mozilla.org>
Thu, 12 Oct 2023 13:50:55 +0000 (UTC)

The so-called impossible has happened! It was widely believed that YouTube was too big and too powerful to change its behaviour in response to public pressure. But we didn't accept that.

Following years of campaigning, research, and policy advocacy by Mozilla, YouTube has announced it will share data with researchers. This change, as part of the platform's required compliance with the European Union's Digital Services Act, will provide independent researchers with critical public data—data that will allow the outside world to better understand how to stop YouTube from recommending dangerous and harmful content.

This victory belongs to hundreds of thousands of Mozilla supporters — to the people who:

  Donated their data to RegretsReporter, our browser extension that helped aggregate YouTube recommendations for researchers and that has been influential in shaping the EU's Digital Services Act (DSA) regulation;
  Shared their stories of how their lives and well-being had been impacted by YouTube's recommendations; and
  Signed petitions and chipped in to support multiple fronts of our campaigning work that made this victory possible.

It's a result of long-term movement building work—the exact type of work that is made possible by the continued support of Mozilla supporters. And it's a result of our work with dozens of organisations that supported our recommendations for public data sharing under the DSA. And wins like these prove that together, the Mozilla community is capable of taking on globally dominant platforms like YouTube—and winning. […]


Win $12k by rediscovering the secret phrases that secure the Internet (New Scientist)

Monty Solomon <monty@roscom.com>
Sun, 15 Oct 2023 18:34:39 -0400

Five secret phrases used to create the encryption algorithms that secure everything from online banking to email have been lost to history—but now cryptographers are offering a bounty to rediscover them

https://www.newscientist.com/article/2396724-win-12k-by-rediscovering-the-secret-phrases-that-secure-the-internet/
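
For context, the phrases in question are believed to be the seeds behind the NIST elliptic curves, each published only as a SHA-1 hash. Below is a speculative sketch in Python of how a candidate phrase might be tested, assuming (as the bounty reportedly does) that each published seed is simply the SHA-1 hash of a lost English phrase. The seed constant is the one listed for curve P-192 in FIPS 186, but verify it against the standard; the candidate phrases are pure placeholders.

  # Speculative sketch: test whether a candidate phrase matches a curve
  # seed, assuming seed = SHA-1(phrase). Verify the constant against
  # FIPS 186; the candidates below are placeholders, not real guesses.
  import hashlib

  P192_SEED = "3045ae6fc8422f64ed579528d38120eae12196d5"

  def matches_seed(phrase: str, seed_hex: str = P192_SEED) -> bool:
      """Return True if SHA-1 of the candidate phrase equals the seed."""
      return hashlib.sha1(phrase.encode("utf-8")).hexdigest() == seed_hex

  for candidate in ("Jerry deserves a raise.", "Give Jerry a raise"):
      print(candidate, "->", matches_seed(candidate))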


Your old phone is safe for longer than you think (WashPost)

Monty Solomon <monty@roscom.com>
Sat, 14 Oct 2023 22:28:09 -0400

Smartphone makers fix security flaws on your device for four to eight years or more, giving you comfort that you can hold onto your phone if you wish.

https://www.washingtonpost.com/technology/2023/10/13/security-updates-ios-an


How do you get out of a $28,000 timeshare mistake? (Elliott)

Gabe Goldberg <gabe@gabegold.com>
Sun, 15 Oct 2023 18:19:01 -0400

Ling Lu Yamaki wanted to get out of her timeshare with Hilton Grand Vacations after she discovered a serious problem with it.

Yamaki asked the company to cancel the $28,000 contract she and her husband had agreed to pay for their timeshare after attending a presentation in Las Vegas, but it said no.

https://www.elliott.org/advocacy/how-do-you-get-out-of-a-28000-timeshare-mistake-definitely-not-like-this/

The risk? Lying companies and agreements nobody reads.


The TSA wants to put a government tracking app on your smartphone (PapersPlease)

Monty Solomon <monty@roscom.com>
Mon, 16 Oct 2023 09:24:36 -0400

https://papersplease.org/wp/2023/10/16/the-tsa-wants-to-put-a-government-tracking-app-on-your-smartphone/


New York Bill Would Require a Criminal Background Check to Buy a 3D Printer (Gizmodo)

Lauren Weinstein <lauren@vortex.com>
Mon, 16 Oct 2023 17:38:16 -0700
[Ridiculous]

https://gizmodo.com/new-york-bill-criminal-background-check-buy-3d-printer-1850930407


Burned-out parents seek help from a new ally: ChatGPT

geoff goodfellow <geoff@iconia.com>
Sat, 14 Oct 2023 07:49:04 -0700

ChatGPT's latest job is to be mom and dad's brilliant sidekick. Parents of kids of all ages are using the chatbot to help raise their children.

Why it matters: The tool has the potential to ease the burden on burned-out, over-scheduled parents. <https://www.axios.com/2022/05/31/parents-schools-uvalde-baby-formula-race> But it's no replacement for a human's judgment—especially regarding what's best for their kids.

What's happening: ChatGPT excels at brainstorming and researching—both functions that can be uniquely useful to parents, says Celia Quillian, a product marketer in Atlanta who runs a TikTok account advising followers on creative ways to use the robot.

Zoom in: Some parents are using the chatbot to navigate even bigger milestones in their children's lives. […]

https://www.axios.com/2023/10/14/chatgpt-parents-ai-chatbot


Allied Spy Chiefs Warn of Chinese Espionage Targeting Tech Firms (NYTimes)

“Jan Wolitzky” <jan.wolitzky@gmail.com>
Wed, 18 Oct 2023 06:25:21 -0400

The United States and its allies vowed this week to do more to counter Chinese theft of technology, warning at an unusual gathering of intelligence leaders that Beijing’s espionage is increasingly trained not on the hulking federal buildings of Washington but the shiny office complexes of Silicon Valley.

The intelligence chiefs sought on Tuesday to engage private industry in combating what one official called an unprecedented threat, as they discussed how to better protect new technologies and help Western countries keep their edge over China.

The choice of meeting venue, Stanford University in Silicon Valley, was strategic. While Washington is often considered the key espionage battleground in the United States, FBI officials estimate that more than half of Chinese espionage focused on stealing American technology takes place in the Bay Area.

It was the first time the heads of the FBI and Britain's MI5 and their counterparts from Australia, Canada and New Zealand had gathered for a public discussion of intelligence threats. It was, in effect, a summit of the spy hunters, the counterintelligence agencies whose job it is to detect and stop efforts by China to steal allied secrets.

“That unprecedented meeting is because we are dealing with another unprecedented threat,” said Christopher A. Wray, the FBI director. “There is no greater threat to innovation than the Chinese government.”

https://www.nytimes.com/2023/10/18/us/politics/china-spying-technology.html


Top crypto firms named in $1bn fraud lawsuit (BBC)

Matthew Kruk <mkrukg@gmail.com>
Thu, 19 Oct 2023 13:00:40 -0600

https://www.bbc.com/news/business-67161638

U.S. prosecutors have accused three high-profile cryptocurrency firms of defrauding investors of more than $1bn.

New York Attorney General Letitia James said Gemini, a crypto exchange, had lied to customers about the risks of an investment account it offered, which paid high interest rates on crypto.

Genesis, a crypto lender, and its parent company Digital Currency Group were also involved in the programme.

It was halted last November, cutting off customer access to funds.


The secret life of Jimmy Zhong, who stole and lost more than $3B

Li Gong <ligongsf@gmail.com>
Wed, 18 Oct 2023 12:17:05 +0800

<https://www.cnbc.com/2023/10/17/crypto911.html>


Why do people fall for grief scams?

Rob Slade <rslade@gmail.com>
Thu, 19 Oct 2023 12:41:04 -0700

https://fibrecookery.blogspot.com/2023/10/why-do-people-fall-for-grief-scams.html


Remote Driving Is a Sneaky Shortcut to the Robotaxi (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Thu, 19 Oct 2023 01:26:05 -0400

German startup Vay is pushing teledriving—in which cars are remotely operated by humans—as easier to achieve than fully autonomous driving.

On the busy streets of suburban Berlin, just south of Tempelhofer Feld, a white Kia is skillfully navigating double-parked cars, roadworks, cyclists, and pedestrians. Dan, the driver, strikes up a conversation with his passengers, remarking on the changing traffic lights and the sound of an ambulance screaming past in the other direction. But Dan isn’t in the car.

Instead, he’s half a mile away at the offices of German startup Vay. The company kits its cars out with radar, GPS, ultrasound, and an array of other sensors to allow drivers like Dan to control the vehicles remotely from a purpose-built station equipped with a driver’s seat, steering wheel, pedals, and three monitors providing visibility in front of the car and to its side.

https://www.wired.com/story/a-sneaky-shortcut-to-driverless-cars


Re: Autonomous Vehicles Are Driving Blind (NYTimes)

Chris Volpe <cvolpe@ara.com>
Wed, 18 Oct 2023 13:41:31 +0000

For all the ballyhoo over the possibility of artificial intelligence threatening humanity someday, there's remarkably little discussion of the ways it is threatening humanity right now. When it comes to self-driving cars, we are driving blind.

That's because with self-driving cars, as the technology improves, the threat is mitigated, whereas with other forms of AI (deep fakes, large language models, etc.), as the technology improves, the threat is exacerbated.


Re: False news spreads faster than the truth (RISKS-33.88)

Amos Shapir <amos083@gmail.com>
Sun, 15 Oct 2023 17:35:26 +0300

Martin Ward writes (quoting Alvin Plantinga's “evolutionary argument against naturalism”):

“Therefore, to assert that naturalistic evolution is true also asserts that one has a low or unknown probability of being right. Therefore, naturalism is self-defeating.”

The main flaw of Plantinga's argument is of course that naturalistic evolution is not an idea or an opinion, but a scientific theory. Therefore, its veracity does not depend on a theoretical “probability of being right”, but on hard evidence, attained through observation and experimentation.


Re: Vermont Utility Plans to End Outages by Giving Customers Batteries (Baker, RISKS-33.89)

“John Levine” <johnl@iecc.com>
12 Oct 2023 16:57:20 -0400
>  Terrific idea!  How come it's taken this long for a utility to utilize
>  the advantages of a distributed power system to reduce the need for
>  long-distance power transmission?

Financial incentives, perverse state regulators, and the lack of cost-effective batteries until recently.

>  I'm still waiting for one of the cellphone companies to start paying
>  homeowners to put nano cellsites on their roofs in order to avoid having
>  to build stand-alone cellsites/towers.

Using what for backhaul? Every cableco I know forbids reselling your connection.

Also, femtocells have very limited power and are only supposed to be used indoors because they can interfere with real cell towers. The real towers are designed by engineers and have licenses so they provide useful coverage and manage interference between cells.
