The RISKS Digest
Volume 34 Issue 06

Monday, 12th February 2024

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Most Distant Space Probe Jeopardized by Glitch
Stephen Clark
Attacks in the Metaverse Are Booming. Police Start to Pay Attention
Naomi Nix
Chinese malware removed from SOHO routers after FBI issues covert commands
Ars Technica
Deep fakes
CNN
Have we lost faith in technology?
BBC
AIs sometimes consider nuclear war the best way to achieve peace
Lauren Weinstein
Police Turn to AI to Review Bodycam Footage
ProPublica
The real wolf menacing the news business? AI
Jim Albrecht
Google CEO suggests that “hallucinating AI misinformation is a feature”
WiReD
Diving deep into OpenAI's new study on LLMs and bioweapons
Gary Marcus via Gabe Goldberg
How AI is quietly changing everyday life
Politico
FCC votes to ban AI-generated misleading robocalls, which …
Lauren Weinstein
Google changes Bard to Gemini—and links it to Google Assistant—but it's still a misleading idiot LLM AI
Lauren Weinstein
The Internet of Toothbrushes
Tom Van Vleck
No, 3 million electric toothbrushes were not used in a DDoS attack
Bleeping Computer via Steve Bacher
AI deepfakes get very real as 2024 election season begins
Fast Company
Hurd in reflection
Jon Callas
VR fail safe vs. driving
Lauren Weinstein
Manipulated Biden Video Can Remain Online
CNN
Re: AI maxim
Ian
Re: ChatGPT can answer yes or no at the same time
DJC
Re: Even after a recall, Tesla's Autopilot does dumb dangerous things
John Levine
A Whistleblower's tale about the Boeing 737 MAX 9 door plug
LeeHamNews via Thomas Koenig
Re: Why the 737 MAX 9 door plug blew out
Dick Mills
Info on RISKS (comp.risks)

Most Distant Space Probe Jeopardized by Glitch (Stephen Clark)

ACM TechNews <technews-editor@acm.org>
Mon, 12 Feb 2024 11:06:32 -0500 (EST)

Stephen Clark, Ars Technica, 6 Feb 2024

Researchers at NASA's Jet Propulsion Laboratory have not received telemetry data from the Voyager 1 space probe since a 14 Nov computer glitch in its Flight Data Subsystem (FDS). They believe the problem involves corrupted memory in the FDS, but without the telemetry data, they cannot identify the root cause. Said Voyager project manager Suzanne Dodd, “It would be the biggest miracle if we get it back. We certainly haven't given up.”


Attacks in the Metaverse Are Booming. Police Start to Pay Attention

ACM TechNews <technews-editor@acm.org>
Mon, 12 Feb 2024 11:06:32 -0500 (EST)

Naomi Nix, The Washington Post, 4 Feb 2024

Law enforcement is paying closer attention to reports of attacks, harassment, and sexual assault in virtual environments. The Zero Abuse Project received a grant from the U.S. Department of Justice to educate state and local police on crimes committed in VR. There are concerns about the psychological impact of harassment in VR, but legal precedent would need a significant overhaul for virtual crimes to be prosecuted.


Chinese malware removed from SOHO routers after FBI issues covert commands (Ars Technica)

Victor Miller <victorsmiller@gmail.com>
Thu, 1 Feb 2024 15:19:06 +0000

https://arstechnica.com/security/2024/01/chinese-malware-removed-from-soho-routers-after-fbi-issues-covert-commands/


Deep fakes (CNN)

Victor Miller <victorsmiller@gmail.com>
Sun, 4 Feb 2024 22:20:30 +0000

https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk?cid=ios_app


Have we lost faith in technology? (BBC)

Matthew Kruk <mkrukg@gmail.com>
Fri, 9 Feb 2024 22:02:32 -0700

https://www.bbc.com/news/business-68057193

Relationship status: it's complicated.

When it comes to technology, never before have we been both more reliant, and more wary.

Society is more connected, but also more lonely; more productive, but also more burnt-out; we have more privacy tools, but arguably less privacy.

There's no doubt that some tech innovation has been universally great. The formula for a new antibiotic that killed a previously lethal hospital superbug was invented by an AI tool.

Machines that can suck carbon dioxide out of the air could be a huge help in the fight against climate change. Video games and movies are more immersive and entertaining because of better screens and better effects.

But on the other hand, tech-related scandals dominate headlines. Stories about data breaches, cyber attacks and horrific online abuse are regularly on the news.


AIs sometimes consider nuclear war the best way to achieve peace

Lauren Weinstein <lauren@vortex.com>
Wed, 7 Feb 2024 10:01:59 -0800

The comparisons to the 1970 film “Colossus: The Forbin Project” are painfully obvious. -L

Escalation Risks from Language Models in Military and Diplomatic Decision-Making

https://arxiv.org/pdf/2401.03408.pdf


Police Turn to AI to Review Bodycam Footage (ProPublica)

Gabe Goldberg <gabe@gabegold.com>
Sun, 11 Feb 2024 22:05:20 -0500

Body camera video equivalent to 25 million copies of “Barbie” is collected but rarely reviewed. Some cities are looking to new technology to examine this stockpile of footage to identify problematic officers and patterns of behavior.

Christopher J. Schneider, a professor at Canada’s Brandon University who studies the impact of emerging technology on social perceptions of police, said the lack of disclosure makes him skeptical that AI tools will fix the problems in modern policing.

Even if police departments buy the software and find problematic officers or patterns of behavior, those findings might be kept from the public just as many internal investigations are.

“Because it’s confidential,” he said, “the public are not going to know which officers are bad or have been disciplined or not been disciplined.”

https://www.propublica.org/article/police-body-cameras-video-ai-law-enforcement


The real wolf menacing the news business? AI. (Jim Albrecht)

Lauren Weinstein <lauren@vortex.com>
Mon, 12 Feb 2024 08:22:55 -0800

[By a laid-off senior ex-Googler]

The author, Jim Albrecht, was senior director of news ecosystem products at Google until he was laid off last year as part of a purge of the team. -L

https://www.washingtonpost.com/opinions/2024/02/06/ai-news-business-links-google-chatgpt/


Google CEO suggests that “hallucinating AI misinformation is a feature”

Lauren Weinstein <lauren@vortex.com>
Thu, 8 Feb 2024 11:51:07 -0800

https://www.wired.com/story/google-prepares-for-a-future-where-search-isnt-king/


Diving deep into OpenAI's new study on LLMs and bioweapons

Gabe Goldberg <gabe@gabegold.com>
Sun, 4 Feb 2024 15:10:06 -0500

When looked at carefully, OpenAI's new study on GPT-4 and bioweapons is deeply worrisome. What they didn't quite tell you, and why it might matter, a lot.

https://garymarcus.substack.com/p/when-looked-at-carefully-openais


How AI is quietly changing everyday life (Politico)

Steve Bacher <sebmb1@verizon.net>
Sun, 4 Feb 2024 06:57:20 -0800

A growing share of businesses, schools, and medical professionals have quietly embraced generative AI, and there’s really no going back. It is being used to screen job candidates, tutor kids, buy a home and dole out medical advice.

The Biden administration is trying to marshal federal agencies https://www.politico.com/news/2023/10/27/white-house-ai-executive-order-00124067 to assess what kind of rules make sense for the technology. But lawmakers in Washington, state capitals and city halls have been slow to figure out how to protect people’s privacy and guard against echoing the human biases baked into much of the data AIs are trained on.

“There are things that we can use AI for that will really benefit people, but there are lots of ways that AI can harm people and perpetuate inequalities and discrimination that we’ve seen for our entire history,” said Lisa Rice, president and CEO of the National Fair Housing Alliance.

While key federal regulators have said decades-old anti-discrimination laws and other protections can be used to police some aspects of artificial intelligence, Congress has struggled to advance proposals <https://www.politico.com/news/2023/09/13/schumer-senate-ai-policy-00115794> for new licensing and liability systems for AI models and requirements focused on transparency and kids’ safety.

“The average layperson out there doesn’t know what are the boundaries of this technology?” said Apostol Vassilev, a research team supervisor focusing on AI at the National Institute of Standards and Technology. “What are the possible avenues for failure and how these failures may actually affect your life?”

Here’s how AI is already affecting […]

https://www.politico.com/news/2024/02/04/how-ai-is-quietly-changing-everyday-life-00138341


FCC votes to ban AI-generated misleading robocalls, which …

Lauren Weinstein <lauren@vortex.com>
Thu, 8 Feb 2024 09:20:28 -0800

FCC has voted to ban AI-generated misleading robocalls. Which will …


Google changes Bard to Gemini—and links it to Google Assistant—but it's still a misleading idiot LLM AI

Lauren Weinstein <lauren@vortex.com>
Thu, 8 Feb 2024 08:53:00 -0800

But if you want advanced “Ultra idiot”—oops, I mean “Gemini Ultra”—you can pay Google $20/mo for a subscription! Whoopee. Meanwhile Search continues to get flushed down the toilet. -L


The Internet of Toothbrushes

Tom Van Vleck <thvv@multicians.org>
Wed, 7 Feb 2024 08:13:52 -0500

https://it.slashdot.org/story/24/02/06/2219207/3-million-malware-infected-smart-toothbrushes-used-in-swiss-ddos-attacks

LATER: Aw, shucks, the Internet of Toothbrushes did not happen https://it.slashdot.org/story/24/02/08/2115202/the-viral-smart-toothbrush-botnet-story-is-not-real


No, 3 million electric toothbrushes were not used in a DDoS attack

Steve Bacher <sebmb1@verizon.net>
Thu, 8 Feb 2024 10:35:59 -0800

www.bleepingcomputer.com

A widely reported story that 3 million electric toothbrushes were hacked with malware to conduct distributed denial of service (DDoS) attacks is likely a hypothetical scenario instead of an actual attack.


AI deepfakes get very real as 2024 election season begins

Peter Neumann <neumann@csl.sri.com>
Thu, 1 Feb 2024 9:52:48 PST

Recent incidents with a fake Biden robocall and explicit Taylor Swift deepfakes could further ratchet up disinformation fears.

https://www.fastcompany.com/91020077/ai-deepfakes-taylor-swift-joe-biden-2024-election



Hurd in reflection

Jon Callas <jon@callas.org>
Thu, 1 Feb 2024 14:00:45 -0800

Re: Will Hurd, Should 4 People Be Able to Control the Equivalent of a Nuke?

https://www.politico.com/news/magazine/2024/01/30/will-hurd-ai-regulation-00136941

Part of the AI discussion going on now has in it the proposition that unchecked AI is far more dangerous than nuclear fission/fusion, genetic tinkering, chemical research, and so on. They are saying with an apparently straight face that AI represents an actual threat to all human life, if not all life on this planet, if not life in the universe itself. The folks in that camp are asserting as their starting position that the latent dangers of AI are worse than nuclear weapons.

I find this maddening in part because there is also an argument they are making that I caricature as saying that AI is so dangerous that they should be given a legal monopoly on it. Only by giving OpenAI, Microsoft, Anthropic, and others total control of AI can we avert extinction as a species. (Well, there's a mere 5-25% chance of extinction by their calculations; let me be fair to them, for some suitable definition of fair.)

It smells to me like an appeal to legislators for a business grab backed with some sort of legal and governmental apparatus that resembles the DOE more than the SEC or FTC. It's asking for a business moat enforced by draconian crackdowns on competition.

It's really hard to construct a good argument against this. Many of the things I would say to shoot it down appear to actually strengthen the power grab. If you forgive the Dune allusions, they're giving us the dilemma of choosing between human extinction (which many of us don't want), the Butlerian Jihad that smashes all GPUs if not all computers (which also many of us don't want), and letting them be CHOAM [Dune] — the cartel/oligarchy that gets to control all of the dangerous technology, presumably with the authority to use government's monopoly on violence to enforce their monopoly on AI.

Hurd's comparison is not to doomsday, merely to weapons of mass destruction. He's trying to argue that if it's really that dangerous then maybe we oughta just go to DOE-level controls, lest we get to Jehanne Butler. He's trying to tackle the greased pig of the AI doomer desire to own it all without either saying that it only needs FTC-style regulation (because he's a former OpenAI person and really does believe it's dangerous) or that it only needs the sort of regulation we put on nuclear power plants or smallpox research.

The argument is so all over the place I, like you all, couldn't really follow it. I kept thinking “where the heck are you going with this” as I read it. I believe that he doesn't want to be an iconoclast and smash all the icons, and he doesn't want to say that it doesn't need regulation, and most of all, he doesn't think Sam Altman should be head of The Spacers Guild (which Sam may not want, but he sure wouldn't turn down).


VR fail safe vs. driving

Lauren Weinstein <lauren@vortex.com>
Sun, 4 Feb 2024 11:50:05 -0800

I can't think of any mechanism depending on a display for all visuals that would be fail-safe against a hardware or software crash. It would have to have some sort of direct physical (non-display) pass-through for such circumstances, one that would operate instantly in the event of any kind of failure (including power failure). Not impossible, but no signs of that happening on anyone's road map, no pun intended. -L


Manipulated Biden Video Can Remain Online (CNN)

ACM TechNews <technews-editor@acm.org>
Fri, 9 Feb 2024 11:34:57 -0500 (EST)

Brian Fung and Donie O'Sullivan, CNN, 5 Feb 2024

Meta's Oversight Board said a manipulated video of President Joe Biden can remain on Facebook due to a loophole in the company's manipulated-media policy, which allows it to be enforced only when a video has been altered by AI to make it appear as if a person said something they did not. Because Biden actually did place an “I Voted” sticker on his adult granddaughter, the board ruled the video can stay on Facebook despite being edited to make it appear as though he touched her chest repeatedly and inappropriately. The board called on Meta to “reconsider this policy quickly, given the number of elections in 2024.”


Re: AI maxim

Ian <risks-4536@jusme.com>
Thu, 1 Feb 2024 14:50:25 +0000 (GMT)

> The familiar computing maxim “garbage in, garbage out”—dating to the
> late 1950s or early 1960s—needs to be updated to “quality in, garbage
> out” when it comes to most generative AI systems. -L

What's scary is when it becomes “garbage in, gospel out”.

(and given that they're usually feeding off the open Internet, it really isn't a high percentage of gospel or quality going in…)


Re: ChatGPT can answer yes or no at the same time (Shapir, RISKS-34.05)

djc <djc@resiak.org>
Thu, 1 Feb 2024 09:01:40 +0100

> Would you accept an accounting system that makes simple calculation
> errors?

While working for DEC in the late 1980s, I discovered and pinpointed, in the course of doing my household budget, a calculation-error bug in the company's proprietary spreadsheet program, and reported it through internal channels. It was never fixed, and it was present until the program's demise several years later. A bug so obvious that I detected it while verifying my checkbook!

It wasn't fixed, I was eventually told, because it stemmed from code that was difficult to change. I imagine no one important ever noticed the errors or made a fuss. I kept asking myself “How can they not see it?” Inattention, I suppose. So they simply accepted it.
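
  [The root cause of that bug was never made public. Purely as an illustration of how a checkbook-visible calculation error can arise, here is a minimal Python sketch (nothing here is DEC's code): binary floating point cannot represent most decimal cent values exactly, so naive summation drifts.

    # Binary floating point cannot represent 0.10 exactly, so a
    # running total drifts away from the true sum.
    print(sum([0.10] * 10))              # 0.9999999999999999, not 1.0

    # The standard remedy for money arithmetic: a decimal type.
    from decimal import Decimal
    print(sum([Decimal("0.10")] * 10))   # 1.00

  A spreadsheet that sums currency in native binary floats without careful rounding can show exactly this sort of discrepancy to a user balancing a checkbook.]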


Re: Even after a recall, Tesla's Autopilot does dumb dangerous things (Kuenning, RISKS-34.05)

“John Levine” <johnl@iecc.com>
1 Feb 2024 14:05:26 -0500

> I was completely unimpressed by the Washington Post article on Tesla's
> autosteering feature.  Cancel that: I was disgusted.
> I am hardly a Tesla fan.  But the author of the article complained that the
> automatic STEERING feature blew through stop signs.  No duh. …

Did we read the same article? Tesla says Autopilot “is designed for use on highways that have a center divider, clear lane markings, and no cross-traffic.”

The author noted that the car knew there was a stop sign since it appeared on the display. You'd think that'd be a pretty strong hint that you're not on a freeway, so it should turn Autopilot off and force the driver to drive. But nope.
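
  [The interlock implied here would be a few lines of code. A hypothetical Python sketch (Tesla's actual logic is not public; every name below is invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Perception:
        stop_sign_visible: bool    # the sign the car showed on its display
        center_divider: bool
        clear_lane_markings: bool

    def autosteer_permitted(p: Perception) -> bool:
        # A stop sign implies cross-traffic, which rules out the
        # divided-highway environment the feature is designed for.
        if p.stop_sign_visible:
            return False
        return p.center_divider and p.clear_lane_markings

    # The situation described in the article: sign on the display,
    # yet the system keeps steering.
    assert not autosteer_permitted(
        Perception(stop_sign_visible=True,
                   center_divider=False,
                   clear_lane_markings=True))
  ]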


A Whistleblower's tale about the Boeing 737 MAX 9 door plug (LeeHamNews)

Thomas Koenig <tkoenig@netcologne.de>
Sun, 4 Feb 2024 12:19:14 +0100

An extremely interesting account of the circumstances leading to the blowout of the Boeing 737 MAX 9 has been published in the comment section of an article about the subject, at

https://leehamnews.com/2024/01/15/unplanned-removal-installation-inspection-procedure-at-boeing/#comment-509962

It contains a lot of internal details, some of them corroborated by other sources. To anybody who knows large corporations, it also sounds quite believable. Salient points include:

The mid-fuselage door installations delivered by Spirit to Boeing had 392 “nonconforming findings” in a single year (both for doors and for door plugs). Apparently, this was accepted. A team from the supplier, Spirit, was on-site to fix warranty issues. Two record systems are used side by side: an official one, to which Spirit employees have no write access, and an unofficial one, which is used to coordinate with them.

A defect was found and routed to Spirit via the unofficial system. Instead of fixing the issue, it was literally painted over (apparently a federal crime, had an airplane mechanic done so).

After the second fix, a problem with the seal was discovered. A decision was then made to “open” that plug and exchange the seal.

This is physically impossible: such a plug has to be removed, and a removal would have had to be recorded in the official system. Instead, they called the removal an “opening”, didn't record it, and (apparently) forgot to put the bolts back in.

We'll see what the investigation report shows.

Some other comments were also interesting: by lawyers wishing to represent the whistleblower, by people concerned that Boeing would find out his identity, by people claiming to be journalists from several large news organizations, and by a former chairman of the Transportation and Infrastructure Committee of the U.S. House of Representatives, who claimed that they had toughened laws on aviation safety (well, apparently not enough).

It would also be interesting to see whether at least one of the people claiming to be a journalist was in fact somebody trying to get the whistleblower's identity.

And, of course, as the saying goes: Just because somebody says something on the Internet, it doesn't necessarily mean it is true.


Re: Why the 737 MAX 9 door plug blew out

Dick Mills <dickandlibbymills@gmail.com>
Fri, 9 Feb 2024 15:57:11 -0500

Two weeks ago, the Blancolirio channel on YouTube revealed the underlying error to be a bug in the written QA documents. https://www.youtube.com/watch?v=XhRYqvCAX_k&t=451s

Responding to complaints about a leak from the seal, the door plug was opened and the seal replaced. The bolts must be removed either to open the door plug or to remove it entirely. However, Boeing's quality assurance documents required QA inspection of the bolts after plug removal; no inspection was required if the plug was only opened. In my view, that makes it a bug, analogous to a software error: faulty written instructions were the cause, even if executed by humans rather than by a computer.

The contractor, Spirit, has its own independent quality assurance documents, which are not identical to Boeing's. Blancolirio also discussed that. This suggests to me that there are other unexplored ways for QA to fail.

It makes me wonder if academics who worked on proofs of software correctness have ever applied those methods to written instructions other than computer software.
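
  [The analogy can be made concrete: treat the written procedure as a transition system and check its safety invariant mechanically. A minimal Python sketch, where the actions and rules are simplified assumptions for illustration, not Boeing's actual documents:

    # Per the summary above, opening the plug disturbs the retaining
    # bolts just as removal does ...
    DISTURBS_BOLTS = {"open_plug", "remove_plug"}

    # ... but the written QA rules triggered inspection only on removal.
    PAPERWORK_REQUIRES_INSPECTION = {"remove_plug"}

    def bolts_left_unverified(actions: list[str]) -> bool:
        """Safety invariant: disturbing the bolts demands an inspection."""
        pending = False
        for action in actions:
            if action in DISTURBS_BOLTS:
                pending = True
            if action == "inspect_bolts":
                pending = False
        return pending

    def paperwork_flags_inspection(actions: list[str]) -> bool:
        return any(a in PAPERWORK_REQUIRES_INSPECTION for a in actions)

    job = ["open_plug", "replace_seal", "close_plug"]
    print(bolts_left_unverified(job))       # True: invariant violated
    print(paperwork_flags_inspection(job))  # False: the documents never notice

  Checked this way, the “opened but never inspected” path falls out as a counterexample, which is exactly what a proof-of-correctness treatment of the written instructions would have flagged.]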

Please report problems with the web pages to the maintainer
