Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
Stephen Clark, Ars Technica, 6 Feb 2024
Researchers at NASA's Jet Propulsion Laboratory have not received telemetry data from the Voyager 1 space probe since a 14 Nov computer glitch in its Flight Data Subsystem (FDS). They believe the problem involves corrupted memory in the FDS, but without the telemetry data, they cannot identify the root cause. Said Voyager project manager Suzanne Dodd, “It would be the biggest miracle if we get it back. We certainly haven't given up.”
Naomi Nix, The Washington Post, 4 Feb 2024
Law enforcement is paying closer attention to reports of attacks, harassment, and sexual assault in virtual environments. The Zero Abuse Project received a grant from the U.S. Department of Justice to educate state and local police on crimes committed in VR. There are concerns about the psychological impact of harassment in VR, but legal precedent would need a significant overhaul for virtual crimes to be prosecuted.
https://www.bbc.com/news/business-68057193
Relationship status: it's complicated.
When it comes to technology, never before have we been both more reliant, and more wary.
Society is more connected, but also more lonely; more productive, but also more burnt-out; we have more privacy tools, but arguably less privacy.
There's no doubt that some tech innovation has been universally great. The formula for a new antibiotic that killed a previously lethal hospital superbug was invented by an AI tool.
Machines that can suck carbon dioxide out of the air could be a huge help in the fight against climate change. Video games and movies are more immersive and entertaining because of better screens and better effects.
But on the other hand, tech-related scandals dominate headlines. Stories about data breaches, cyber attacks and horrific online abuse are regularly on the news.
The comparisons to the 1970 film “Colossus: The Forbin Project” are painfully obvious. -L
Escalation Risks from Language Models in Military and Diplomatic Decision-Making
https://arxiv.org/pdf/2401.03408.pdf
Body camera video equivalent to 25 million copies of “Barbie” is collected but rarely reviewed. Some cities are looking to new technology to examine this stockpile of footage to identify problematic officers and patterns of behavior.
…
Christopher J. Schneider, a professor at Canada’s Brandon University who studies the impact of emerging technology on social perceptions of police, said the lack of disclosure makes him skeptical that AI tools will fix the problems in modern policing.
Even if police departments buy the software and find problematic officers or patterns of behavior, those findings might be kept from the public just as many internal investigations are.
“Because it’s confidential,” he said, “the public are not going to know which officers are bad or have been disciplined or not been disciplined.”
https://www.propublica.org/article/police-body-cameras-video-ai-law-enforcement
[By a laid off senior ex-Googler]
The author, Jim Albrecht, was senior director of news ecosystem products at Google until he was laid off last year as part of a purge of the team. -L
https://www.washingtonpost.com/opinions/2024/02/06/ai-news-business-links-google-chatgpt/
https://www.wired.com/story/google-prepares-for-a-future-where-search-isnt-king/
When looked at carefully, OpenAI's new study on GPT-4 and bioweapons is deeply worrisome: What they didn't quite tell you, and why it might matter, a lot
https://garymarcus.substack.com/p/when-looked-at-carefully-openais
A growing share of businesses, schools, and medical professionals have quietly embraced generative AI, and there’s really no going back. It is being used to screen job candidates, tutor kids, buy a home and dole out medical advice.
The Biden administration is trying to marshal federal agencies https://www.politico.com/news/2023/10/27/white-house-ai-executive-order-00124067 to assess what kind of rules make sense for the technology. But lawmakers in Washington, state capitals and city halls have been slow to figure out how to protect people’s privacy and guard against echoing the human biases baked into much of the data AIs are trained on.
“There are things that we can use AI for that will really benefit people, but there are lots of ways that AI can harm people and perpetuate inequalities and discrimination that we’ve seen for our entire history,” said Lisa Rice, president and CEO of the National Fair Housing Alliance.
While key federal regulators have said decades-old anti-discrimination laws and other protections can be used to police some aspects of artificial intelligence, Congress has struggled to advance proposals <https://www.politico.com/news/2023/09/13/schumer-senate-ai-policy-00115794> for new licensing and liability systems for AI models and requirements focused on transparency and kids’ safety.
“The average layperson out there doesn’t know what are the boundaries of this technology?” said Apostol Vassilev, a research team supervisor focusing on AI at the National Institute of Standards and Technology. “What are the possible avenues for failure and how these failures may actually affect your life?”
Here’s how AI is already affecting […]
https://www.politico.com/news/2024/02/04/how-ai-is-quietly-changing-everyday-life-00138341
But if you want advanced “Ultra idiot”—oops, I mean “Gemini Ultra”, you can pay Google $20/mo for a subscription! Whoopee. Meanwhile Search continues to get flushed down the toilet. -L
LATER: Aw, shucks, the Internet of Toothbrushes did not happen: https://it.slashdot.org/story/24/02/08/2115202/the-viral-smart-toothbrush-botnet-story-is-not-real
www.bleepingcomputer.com
A widely reported story that 3 million electric toothbrushes were hacked with malware to conduct distributed denial of service (DDoS) attacks is likely a hypothetical scenario instead of an actual attack.
Recent incidents with fake Biden robocall and explicit Taylor Swift deepfakes could further ratchet up disinformation fears.
https://www.fastcompany.com/91020077/ai-deepfakes-taylor-swift-joe-biden-2024-election
Re: Will Hurd, Should 4 People Be Able to Control the Equivalent of a Nuke?
https://www.politico.com/news/magazine/2024/01/30/will-hurd-ai-regulation-00136941
Part of the AI discussion going on now has in it the proposition that unchecked AI is far more dangerous than nuclear fission/fusion, genetic tinkering, chemical research, and so on. They are saying with an apparently straight face that AI represents an actual threat to all human life, if not all life on this planet, if not life in the universe itself. The folks in that camp are asserting as their starting position that the latent dangers of AI are worse than nuclear weapons.
I find this maddening in part because there is also an argument they are making that I caricature as saying that AI is so dangerous that they should be given a legal monopoly to it. Only by giving OpenAI, Microsoft, Anthropic, and others total control of AI, can we avert extinction as a species. (Well, there's a mere 5-25% chance of extinction by their calculations; let me be fair to them for some suitable definition of fair.)
It smells to me like an appeal to legislators for a business grab backed with some sort of legal and governmental apparatus that resembles the DOE more than the SEC or FTC. It's asking for a business moat enforced by draconian crackdowns on competition.
It's really hard to construct a good argument against this. Many of the things I would say to shoot it down appear to actually strengthen the power grab. If you forgive the Dune allusions, they're giving us the dilemma of choosing between human extinction (which many of us don't want), the Butlerian Jihad that smashes all GPUs if not all computers (which also many of us don't want), and letting them be CHOAM [Dune] — the cartel/oligarchy that gets to control all of the dangerous technology, presumably with the authority to use government's monopoly on violence to enforce their monopoly on AI.
Hurd's comparison is not to doomsday, merely to weapons of mass destruction. He's trying to argue that if it's really that dangerous then maybe we oughta just go to DOE level controls, lest we get to Jehanne Butler. He's trying to tackle the greased pig of the AI doomer desire to own it all without either saying that it only needs FTC-style regulation (because he's a former OpenAI person and really does believe it's dangerous) or that it only needs the sort of regulation we put on nuclear power plants or smallpox research.
The argument is so all over the place I, like you all, couldn't really follow it. I kept thinking “where the heck are you going with this” as I read it. I believe that he doesn't want to be an iconoclast and smash all the icons, and he doesn't want to say that it doesn't need regulation, and most of all, he doesn't think Sam Altman should be head of The Spacers Guild (which Sam may not want, but he sure wouldn't turn down).
I can't think of any mechanism that would be hardware crash or software crash fail safe that depended on a display for all visuals. It would have to have some sort of physical direct (non-display) pass-through for such circumstances, that would operate instantly in the event of any kind of failure (including power failure). Not impossible, but no signs of that happening on anyone's road map, no pun intended. -L
Brian Fung and Donie O'Sullivan, CNN, 25 Feb 2024
Meta's Oversight Board said a manipulated video of President Joe Biden can remain on Facebook due to a loophole in the company's manipulated media policy that allows it to be enforced only when a video has been altered by AI and makes it appear as if a person said something they did not. Because Biden actually did place an I Voted sticker on his adult granddaughter, the board ruled the video can stay on Facebook despite being edited to make it appear as though he touched her chest repeatedly and inappropriately. The board called on Meta to “reconsider this policy quickly, given the number of elections in 2024.”
> The familiar computing maxim “garbage in, garbage out”—dating to the
> late 1950s or early 1960s—needs to be updated to “quality in, garbage
> out” when it comes to most generative AI systems. -L
What's scary is when it becomes “garbage in, gospel out”.
(and given that they're usually feeding off the open Internet, it really isn't a high percentage of gospel or quality going in…)
> Would you accept an accounting system that makes simple calculation
> errors?
Working for DEC in the late 1980s, while working on my household budget I discovered and pinpointed a calculation error bug in the company's proprietary spreadsheet program, and reported it through internal channels. It was never fixed, and was present until the program's demise several years later. A bug so obvious that I detected it while verifying my checkbook!
It wasn't fixed, I was eventually told, because it stemmed from code that was difficult to change. I imagine no one important ever noticed the errors or made a fuss. I kept asking myself “How can they not see it?” Inattention, I suppose. So they simply accepted it.
> I was completely unimpressed by the Washington Post article on Tesla's
> autosteering feature. Cancel that: I was disgusted.
> I am hardly a Tesla fan. But the author of the article complained that the
> automatic STEERING feature blew through stop signs. No duh. …
Did we read the same article? Tesla says Autopilot “is designed for use on highways that have a center divider, clear lane markings, and no cross-traffic.”
The author noted that the car knew there was a stop sign since it appeared on the display. You'd think that'd be a pretty strong hint that you're not on a freeway, so it should turn Autopilot off and force the driver to drive. But nope.
An extremely interesting account of the circumstances leading to the blowout of the Boeing 737 MAX 9 has been published in the comment section of an article about the subject, at
It contains a lot of internal details, some of them corroborated by other sources. To anybody who knows large corporations, it also sounds quite believable. Salient points include:
The mid-fuselage door installations delivered by Spirit to Boeing had 392 “nonconforming findings” in a single year (both for doors and for door plugs). Apparently, this was accepted. A team from the supplier, Spirit, was on-site to fix warranty issues. There are two record systems used side-by-side: one official system, to which Spirit employees have no write access, and one unofficial system, which is then used to coordinate with them.
A defect was found and routed to Spirit via the unofficial system. Instead of fixing the issue, it was literally painted over (apparently a federal crime if an airplane mechanic had done so).
After the second fix, a problem with the seal was discovered. A decision was then made to “open” that plug and exchange the seal.
This is physically not possible, such a plug needs to be removed, which would have had to be recorded in the official system. Instead, they called the removal opening, didn't record it, and (apparently) forgot to put the bolts back in.
We'll see what the investigation report shows.
Some other comments were also interesting - by lawyers wishing to represent the whistleblower, by people concerned that Boeing would find out his identity, by people claiming to be journalists from several large news organizations, and by a former chairman of the Transportation and Infrastructure Committee of the US House of Representatives, who claimed that they had toughened laws on aviation safety (well, apparently not enough).
It would also be interesting to see if at least one of the people claiming to be a journalist was in fact somebody who tried to get the whistleblower's identity.
And, of course, as the saying goes: Just because somebody says something on the Internet, it doesn't necessarily mean it is true.
Two weeks ago, the Blancolirio channel on YouTube revealed the underlying error to be a bug in written QA documents. https://www.youtube.com/watch?v=XhRYqvCAX_k&t=451s
Responding to complaints about a leak from the seal, the door was opened and the seal replaced. The bolts must be removed to open the door plug or to remove the door plug. However, Boeing's Quality Assurance Documents require QA inspection of the bolts after plug removal, but inspection was not required if the plug was only opened. In my view, that makes it a bug, analogous to a software error. Faulty written instructions were the cause, even if executed by humans rather than executed by computer.
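The documented gap can be caricatured as a conditional that skips inspection on the “opened” branch. This is a hypothetical sketch; the function and labels are invented for illustration and are not Boeing's actual procedure:

```python
# Hypothetical sketch of the QA-document flaw described above.
# Names and labels are invented; this models the written rule, not any real system.

def qa_inspection_required(recorded_action: str) -> bool:
    """Return True if the written QA procedure demands a bolt inspection."""
    if recorded_action == "removed":
        return True   # plug removal triggers a QA inspection of the bolts
    if recorded_action == "opened":
        return False  # merely "opening" the plug does not -- the gap
    raise ValueError(f"unknown action: {recorded_action}")

# Physically, "opening" the plug still requires taking the bolts out,
# so recording the work as "opened" silently bypasses the bolt check.
assert qa_inspection_required("removed") is True
assert qa_inspection_required("opened") is False
```

The bug is that the inspection rule keys on the recorded label rather than on the physical precondition (bolts out), so two records of the same physical work yield different inspection requirements.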
The contractor, Spirit, has its own independent Quality Assurance Documents that are not identical to Boeing's. Blancolirio also discussed that. That suggests to me that there are other unexplored ways for QA to fail.
It makes me wonder if academics who worked on proofs of software correctness have ever applied those methods to written instructions other than computer software.
Please report problems with the web pages to the maintainer