The RISKS Digest
Volume 33 Issue 95

Saturday, 2nd December 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers, etc., that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Commercial Flights Are Experiencing ‘Unthinkable’ GPS Attacks and Nobody Knows What to Do
Vice
G7 and EU countries pitch guidelines for AI cybersecurity
Joseph Bambridge
U.S. and UK Unveil AI Cyber-Guidelines
Politico via PGN
Was Argentina the First AI Election?
NYTimes
As AI-Controlled Killer Drones Become Reality, Nations Debate Limits
The New York Times
Reports that Sports Illustrated used AI-generated stories and fake authors are disturbing, but not surprising
Poynter
Is Anything Still True? On the Internet, No One Knows Anymore
WSJ
ChatGPT x 3
sundry sources via Lauren Weinstein
Texas Rejects Science Textbooks Over Climate Change, Evolution Lessons
WSJ
A ‘silly’ attack made ChatGPT reveal real phone numbers and email addresses
Engadget
Meta/Facebook profiting from sale of counterfeit U.S. stamps
Mich Kabay
Chaos in the Cradle of AI
The New Yorker
Impossibility of Strong watermarks for Generative AI
Victor Miller
Hallucinating language models
Victor Miller
USB worm unleashed by Russian state hackers spreads worldwide
Ars Technica
AutoZone warns almost 185,000 customers of a data breach
Engadget
Okta admits hackers accessed data on all customers during recent breach
TechCrunch
USB worm unleashed by Russian state hackers spreads worldwide
Ars Technica
Microsoft’s Windows Hello fingerprint authentication has been bypassed
The Verge
Thousands of routers and cameras vulnerable to new 0-day attacks by hostile botnet
Ars Technica
A Postcard From Driverless San Francisco
Steve Bacher
Voting machine trouble in Pennsylvania county triggers alarm ahead of 2024
Politico via Steve Bacher
Intel hardware vulnerability
Daniel Moghimi at Google
Outdated Password Practices are Widespread
Georgia Tech
THE CTIL FILES #1
Shellenberger via geoff goodfellow
Judge rules it's fine for car makers to intercept your text messages
Henry Baker
Protecting Critical Infrastructure from Cyber Attacks
RMIT
Crypto Crashed and Everyone's In Jail. Investors Think It's Coming Back Anyway.
Vice
Feds seize Sinbad crypto mixer allegedly used by North Korean hackers
TechCrunch
A lost bitcoin wallet passcode helped uncover a major security flaw
WashPost
Ontario's Crypto King still jet-setting to UK, Miami, and soon Australia despite bankruptcy
CBC
British Library confirms customer data was stolen by hackers, with outage expected to last months
TechCrunch
PSA: Update Chrome browser now to avoid an exploit already in the wild
The Verge
WeWork has failed. Like a lot of other tech startups, it left damage in its wake
CBC
Re: The AI Pin
Rob Slade
Re: Social media gets teens hooked while feeding aggression and impulsivity, and researchers think they know why
C.J.S. Hayward
Re: Garble in Schneier's AI post
Steve Singer
Re: Using your iPhone to start your car is about to get a lot easier
Sam Bull
Re: Overview of the iLeakage Attack
Sam Bull
Info on RISKS (comp.risks)

Commercial Flights Are Experiencing ‘Unthinkable’ GPS Attacks and Nobody Knows What to Do (Vice)

Monty Solomon <monty@roscom.com>
Mon, 20 Nov 2023 19:00:14 -0500

New “spoofing” attacks resulting in total navigation failure have been occurring above the Middle East for months, which is “highly significant” for airline safety.

https://www.vice.com/en/article/m7bk3v/commercial-flights-are-experiencing-unthinkable-gps-attacks-and-nobody-knows-what-to-do


G7 and EU countries pitch guidelines for AI cybersecurity (Joseph Bambridge)

Peter Neumann <neumann@csl.sri.com>
Mon, 27 Nov 2023 9:10:36 PST

Joseph Bambridge, Politico Europe, 27 Nov 2023

Cybersecurity authorities in 18 major European and Western countries, including all G7 states, today released joint guidelines on how to develop artificial intelligence systems in ways that ensure their cybersecurity.

The United Kingdom, United States, Germany, France, Italy, Australia, Japan, Israel, Canada, Nigeria, Poland and others backed what they called the world's first AI cybersecurity guidelines. The initiative was led by the U.K.'s National Cyber Security Centre and follows London's AI Safety Summit that took place early November.

The 20-page document sets out practical ways providers of AI systems can ensure they function as intended, don't reveal sensitive data and aren't taken offline by attacks.

AI systems face both traditional threats and novel vulnerabilities like data poisoning and prompt injection attacks, the authorities said. The guidelines—which are voluntary—set standards for how technologists design, deploy and maintain AI systems with cybersecurity in mind.

The U.K.'s NCSC will present the guidelines at an event on Monday afternoon.

<https://y3r710.r.eu-west-1.awstrack.me/I0/0102018c10220f9c-cd93ae92-527e-4258-a9b4-5c43adb51332-000000/VBwAxQb3zMQOCAxex0irXa9NdgE=349>


U.S. and UK Unveil AI Cyber-Guidelines (Politico)

Peter Neumann <neumann@csl.sri.com>
Tue, 28 Nov 2023 11:26:30 PST

(Joseph Bambridge, Politico, PGN-ed for RISKS)

The UK's National Cyber Security Center and U.S. Cybersecurity and Infrastructure Security Agency on Monday unveiled what they say are the world's first AI cyber guidelines, backed by 18 countries including Japan, Israel, Canada and Germany. It's the latest move on the international stage to get ahead of the risks posed by AI as companies race to develop more advanced models, and as systems are increasingly integrated in government and society.

“Overall I would assess them as some of the early formal guidance related to the cybersecurity vulnerabilities that derive from both traditional and unique vulnerabilities,” the Center for Strategic and International Studies' Gregory Allen told POLITICO. He said the guidelines appeared to be aimed at both traditional cyberthreats and new ones that come with the continued advancement of AI technologies.

Although the guidelines are voluntary, Allen said they could be made mandatory for selling to the U.S. federal government for certain types of risk-averse activities. In the private sector, Allen said companies buying AI technologies could require vendors to demonstrate compliance with the guidelines through third-party certification or other means.

Breaking it down: The guidelines aim to ensure security is a core requirement of the entire lifecycle of an AI system, and are focused on four themes: secure design, development, deployment and operation. Each section has a series of recommendations to mitigate security risks and safeguard consumer data, such as threat modeling, incident management processes and releasing AI models responsibly.

Homeland Security Secretary Alejandro Mayorkas said in a statement that the guidelines are a “historic agreement that developers must invest in, protecting customers at each step of a system's design and development.”

The guidance is closely aligned with the U.S. National Institute of Standards and Technology's Secure Software Development Framework (which outlines steps for software developers to limit vulnerabilities in their products) and CISA's secure-by-design principles, which were also released in concert with a dozen other states.

Acknowledgements: The document includes a thank you to a notable list of leading tech companies for their contributions, including Amazon, Anthropic, Google, IBM, Microsoft and OpenAI. Also in the mentions were Georgetown University's Center for Security and Emerging Technology, RAND and the Center for AI Safety and the program for Geopolitics, Technology and Governance, both at Stanford.

Aaron Cooper, VP of global policy at tech trade group BSA | The Software Alliance, said in a statement to MT that the guidelines help “build a coordinated approach for cybersecurity and artificial intelligence,” something that BSA has been calling for in many of its cyber and AI policy recs.


Was Argentina the First AI Election? (NYTimes)

ACM TechNews <technews-editor@acm.org>
Mon, 20 Nov 2023 11:40:21 -0500 (EST)

Jack Nicas and Lucía Cholakian Herrera, The New York Times, 16 Nov 2023; via ACM TechNews, November 20, 2023

Sergio Massa and Javier Milei widely used artificial intelligence (AI) to create images and videos to promote themselves and attack each other prior to Sunday's presidential election in Argentina, won by Milei. AI made candidates say things they did not, put them in famous movies, and created campaign posters. Much of the content was clearly fake, but a few creations strayed into the territory of disinformation. Researchers have long worried about the impact of AI on elections, but those fears were largely speculative because the technology to produce deepfakes was too expensive and unsophisticated. “Now we've seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert who has advised governments on AI-generated content.


As AI-Controlled Killer Drones Become Reality, Nations Debate Limits (The New York Times)

Steve Bacher <sebmb1@verizon.net>
Wed, 22 Nov 2023 16:53:39 -0800

An experimental unmanned aircraft at Eglin Air Force Base in Florida. The drone uses artificial intelligence and has the capability to carry weapons, although it has not yet been used in combat.

As AI-Controlled Killer Drones Become Reality, Nations Debate Limits

Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant.

https://www.nytimes.com/2023/11/21/us/politics/ai-drones-war-law.html


Reports that Sports Illustrated used AI-generated stories and fake authors are disturbing, but not surprising (Poynter)

Steve Bacher <sebmb1@verizon.net>
Tue, 28 Nov 2023 06:48:00 -0800

It’s unsettling, especially from such a storied name. But comments from its parent company should have told us it was coming.

In a story that has generated both shock and disdain, Futurism’s Maggie Harrison reports <https://futurism.com/sports-illustrated-ai-generated-writers> that Sports Illustrated published stories that were produced or partially produced by artificial intelligence, and that some stories had bylines of fake authors. To be clear, the disdain was directed at Sports Illustrated.

But maybe we shouldn't be surprised by any of this, as I’ll explain in a moment. First, the details.

When asked about fake authors, an anonymous source described as a “person involved with the creation of the content” told Harrison, “There's a lot. I was like, what are they? This is ridiculous. This person does not exist. At the bottom (of the page) there would be a photo of a person and some fake description of them like, ‘oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.’ Stuff like that. It's just crazy.”

The fake authors even included AI-generated mugshots. If true, that is pretty gross -- photos of authors who don't actually exist, to go along with made-up bios that included made-up hobbies and even made-up pets. […]

https://www.poynter.org/commentary/2023/sports-illustrated-artificial-intelligence-writers-futurism/


Is Anything Still True? On the Internet, No One Knows Anymore (WSJ)

geoff goodfellow <geoff@iconia.com>
Tue, 21 Nov 2023 10:07:10 -0700

New tools can create fake videos and clone the voices of those closest to us. This is how authoritarianism arises.

Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.

Generative artificial intelligence is now capable of creating fake pictures, clones of our voices <https://www.wsj.com/articles/i-cloned-myself-with-ai-she-fooled-my-bank-and-my-family-356bd1a3>, and even videos depicting and distorting world events. The result: From our personal <https://www.wsj.com/tech/fake-nudes-of-real-students-cause-an-uproar-at-a-new-vvxsxsjersey-high-school-df10f1bb> circles to the political <https://www.wsj.com/world/china/china-is-investing-billions-in-global-disinformation-campaign-u-s-says-88740b85> circuses, everyone must now question whether what they see and hear is true.

We've long been warned <https://www.wsj.com/articles/the-world-isnt-as-bad-as-your-wired-brain-tells-you-1535713201> about the potential of social media to distort our view of the world <https://www.wsj.com/articles/why-social-media-is-so-good-at-polarizing-us-11603105204>, and now there is the potential for more false and misleading information to spread on social media than ever before. Just as importantly, exposure to AI-generated fakes can make us question the authenticity of everything we see <https://www.wsj.com/articles/the-deepfake-dangers-ahead-b08e4ecf>. Real images and real recordings can be dismissed as fake. “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don't trust anything anymore,’” says David Rand <https://mitsloan.mit.edu/faculty/directory/david-g-rand>, a professor at MIT Sloan who studies <https://www.nature.com/articles/s41562-023-01641-6> the creation, spread and impact of misinformation.

This problem, which has grown more acute in the age of generative AI, is known as the liar's dividend <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954>, says Renee DiResta, a researcher at the Stanford Internet Observatory.

The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities” <https://www.ribbonfarm.com/2019/12/17/mediating-consent/>.

Examples of misleading content created by generative AI are not hard to come by, especially on social media. One widely circulated and fake image of Israelis lining the streets in support of their country has many of the hallmarks of being AI-generated <https://www.reuters.com/fact-check/photo-cheering-crowds-waving-israeli-flags-soldiers-is-ai-generated-2023-10-30/> including telltale oddities that are apparent if you look closely, such as distorted bodies and limbs. For the same reasons, a widely shared image that purports to show fans at a soccer match in Spain displaying a Palestinian flag doesn't stand up <https://factcheck.afp.com/doc.afp.com.33YY7NY> to scrutiny.


ChatGPT x 3 (sundry sources)

Lauren Weinstein <lauren@vortex.com>
Wed, 22 Nov 2023 08:02:57 -0800

ChatGPT Replicates Gender Bias in Recommendation Letters https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/

OpenAI and Microsoft hit with copyright lawsuit from non-fiction authors https://www.engadget.com/openai-and-microsoft-hit-with-copyright-lawsuit-from-non-fiction-authors-101505740.html?src=rss

ChatGPT generates fake data set to support scientific hypothesis https://www.nature.com/articles/d41586-023-03635-w


Texas Rejects Science Textbooks Over Climate Change, Evolution Lessons (WSJ)

Monty Solomon <monty@roscom.com>
Sun, 19 Nov 2023 18:19:43 -0500

https://www.wsj.com/us-news/education/texas-rejects-science-textbooks-over-climate-change-evolution-lessons-29a2c2ca


A ‘silly’ attack made ChatGPT reveal real phone numbers and email addresses (Engadget)

Monty Solomon <monty@roscom.com>
Thu, 30 Nov 2023 08:50:28 -0500

https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html


Meta/Facebook profiting from sale of counterfeit U.S. stamps

Mich Kabay <mekabay@gmail.com>
Sun, 26 Nov 2023 17:46:22 -0500

Meta/Facebook posts and profits from ads on FB for criminals who sell counterfeit U.S. stamps to unsuspecting victims (or to those who choose to ignore warnings such as the one in the next paragraph). Images of the counterfeit stamps at the time of posting are here. I have reported these crimes to the United States Postal Inspection Service and the FBI's Internet Crime Complaint Center. I have also written to Meta about this criminal activity but never received a reply.

See < http://www.mekabay.com/counterfeit-stamps/ > for images of over 500 ads on FB for counterfeit US stamps.

Warning I post online whenever I can:

These are counterfeit stamps. It is a federal crime to use fake stamps as postage. Don't fall for these scams. https://www.uspis.gov/news/scam-article/counterfeit-stamps


Chaos in the Cradle of AI (The New Yorker)

Monty Solomon <monty@roscom.com>
Fri, 24 Nov 2023 19:51:50 -0500

The Sam Altman saga at OpenAI underscores an unsettling truth: nobody knows what AI safety really means.

https://www.newyorker.com/science/annals-of-artificial-intelligence/chaos-in-the-cradle-of-ai


Impossibility of Strong watermarks for Generative AI

Victor Miller <victorsmiller@gmail.com>
Sun, 19 Nov 2023 15:39:33 +0000

Watermarks have been proposed to allow identification of data (and pictures, etc.) generated by AI. This paper shows that that goal is essentially impossible.

https://arxiv.org/pdf/2311.04378.pdf


Hallucinating language models

Victor Miller <victorsmiller@gmail.com>
Mon, 27 Nov 2023 15:38:59 -0800

The introduction is really very clear.

Adam Tauman Kalai (Microsoft Research) and Santosh S. Vempala (Georgia Tech), "Calibrated Language Models Must Hallucinate", 27 Nov 2023

https://arxiv.org/pdf/2311.14648.pdf


USB worm unleashed by Russian state hackers spreads worldwide (Ars Technica)

Monty Solomon <monty@roscom.com>
Wed, 22 Nov 2023 21:00:41 -0500

https://arstechnica.com/?p=1985993


AutoZone warns almost 185,000 customers of a data breach (Engadget)

Monty Solomon <monty@roscom.com>
Wed, 22 Nov 2023 18:38:06 -0500

https://www.engadget.com/autozone-warns-almost-185000-customers-of-a-data-breach-202533437.html


Okta admits hackers accessed data on all customers during recent breach (TechCrunch)

Monty Solomon <monty@roscom.com>
Wed, 29 Nov 2023 20:47:49 -0500

https://techcrunch.com/2023/11/29/okta-admits-hackers-accessed-data-on-all-customers-during-recent-breach/


USB worm unleashed by Russian state hackers spreads worldwide (Ars Technica)

Victor Miller <victorsmiller@gmail.com>
Fri, 24 Nov 2023 15:37:03 +0000

https://arstechnica.com/security/2023/11/normally-targeting-ukraine-russian-state-hackers-spread-usb-worm-worldwide/


Microsoft’s Windows Hello fingerprint authentication has been bypassed (The Verge)

Monty Solomon <monty@roscom.com>
Wed, 22 Nov 2023 18:23:24 -0500

https://www.theverge.com/2023/11/22/23972220/microsoft-windows-hello-fingerprint-authentication-bypass-security-vulnerability


Thousands of routers and cameras vulnerable to new 0-day attacks by hostile botnet (Ars Technica)

Monty Solomon <monty@roscom.com>
Wed, 22 Nov 2023 20:58:06 -0500

https://arstechnica.com/?p=1986211


A Postcard From Driverless San Francisco

Steve Bacher <sebmb1@verizon.net>
Wed, 29 Nov 2023 08:53:27 -0800

Unexplained stops. Incensed firefighters. Cars named Oregano. The robotaxis are officially here. Riding with Cruise and Waymo during their debut in San Francisco.

https://www.curbed.com/article/waymo-cruise-driverless-cars-robotaxi-san-francisco.html


Voting machine trouble in Pennsylvania county triggers alarm ahead of 2024

Steve Bacher <sebmb1@verizon.net>
Sat, 25 Nov 2023 08:10:12 -0800

Officials say the issue did not affect the outcome of the votes, but are nonetheless racing to restore voter confidence ahead of next year’s election.

https://www.politico.com/news/2023/11/25/voting-machine-trouble-pennsylvania-00128554

Excerpt:

Skeptics […] say the root of the problem ties back to the basic design of the devices, called the ExpressVote XL.

The machine spits out a paper print-out that records voters’ choices in two ways: a barcode that is used to tabulate their vote and corresponding text so they can verify it was input correctly.

However, in the two races on 7 Nov, the machines swapped voters’ choices in the written section of the ballot -- but not the barcode -- if they voted “yes” to retain one judge and “no” for the other.

ES&S and Northampton officials acknowledged that pre-election software testing, which is conducted jointly, should have caught that problem. They say an ES&S employee first introduced the error during regular programming meant to prepare the machines for Election Day. […]
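
The failure mode here is that the barcode, which is what actually gets tabulated, and the human-readable text, which is what the voter checks, are two independent encodings of the same choices, and only one of them was wrong. Below is a minimal sketch in Python of the kind of cross-check that pre-election testing is meant to perform; the contest names and data structures are hypothetical, not ES&S's actual formats.

  # Hypothetical illustration (not ES&S's actual data formats): the ExpressVote XL
  # prints both a barcode (used for tabulation) and human-readable text (used by
  # the voter to verify). Pre-election testing should confirm the two agree.

  def cross_check(barcode_selections: dict[str, str],
                  printed_selections: dict[str, str]) -> list[str]:
      """Return the contests where the barcode and the printed text disagree."""
      contests = barcode_selections.keys() | printed_selections.keys()
      return sorted(c for c in contests
                    if barcode_selections.get(c) != printed_selections.get(c))

  # Example resembling the reported swap: the printed text shows the two judges'
  # retention answers exchanged, while the barcode still carries the voter's intent.
  barcode = {"Retain Judge A": "yes", "Retain Judge B": "no"}
  printed = {"Retain Judge A": "no", "Retain Judge B": "yes"}
  print(cross_check(barcode, printed))   # flags both retention contests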


Intel hardware vulnerability (Daniel Moghimi at Google)

Victor Miller <victorsmiller@gmail.com>
Mon, 20 Nov 2023 19:14:06 -0800

We found another vulnerability inside Intel CPUs. Somehow, instruction prefixes that should be ignored mess up the “fast rep string mov” (FRSM) extension and cause invalid instruction execution. This vulnerability, which carries a high severity rating, has serious consequences for cloud providers. It enables an attacker who is renting a cloud VM to:

- DDoS an entire server
- Elevate privileges, gaining access to the entire server (confirmed by Intel)

https://lnkd.in/guzjT3UD
https://lnkd.in/gUn-vAvN


Outdated Password Practices are Widespread (Georgia Tech)

ACM TechNews <technews-editor@acm.org>
Wed, 22 Nov 2023 10:48:11 -0500 (EST)

Georgia Tech Research, 17 Nov 2023, via ACM TechNews

A majority of the world's most popular websites are putting users and their data at risk by failing to meet minimum password requirement standards, according to researchers at the Georgia Institute of Technology (Georgia Tech). The researchers analyzed 20,000 randomly sampled websites from the Google Chrome User Experience Report, a database of 1 million websites and pages. Using a novel automated tool that can assess a website's password creation policies, they found that many sites permit very short passwords, do not block common passwords, and use outdated requirements like complex characters. Georgia Tech's Frank Li said security researchers have “identified and developed various solutions and best practices for improving Internet and Web security. It's crucial that we investigate whether those solutions or guidelines are actually adopted in practice to understand whether security is improving in reality.”
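
To make the reported weaknesses concrete, here is a minimal sketch in Python of scoring a site's password-creation policy against the basic criteria the article mentions. It is not the Georgia Tech team's automated tool, whose internals are not described here, and the thresholds are illustrative assumptions.

  # Illustrative policy check, not the researchers' tool. It flags the three
  # weaknesses named in the article: very short minimum lengths, no blocklist
  # of common passwords, and outdated complex-character composition rules.

  def evaluate_policy(min_length: int, blocks_common_passwords: bool,
                      requires_special_characters: bool) -> list[str]:
      """Return a list of findings for a website's password-creation policy."""
      findings = []
      if min_length < 8:
          findings.append(f"permits very short passwords (minimum length {min_length})")
      if not blocks_common_passwords:
          findings.append("does not block common passwords such as '123456'")
      if requires_special_characters:
          findings.append("relies on outdated complex-character composition rules")
      return findings or ["no obvious weaknesses against these basic criteria"]

  # Example: a site allowing 6-character passwords, with no blocklist.
  for finding in evaluate_policy(min_length=6, blocks_common_passwords=False,
                                 requires_special_characters=True):
      print(finding)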


THE CTIL FILES #1

geoff goodfellow <geoff@iconia.com>
Tue, 28 Nov 2023 19:44:03 -0700

Many people insist that governments aren't involved in censorship, but they are. And now, a whistleblower has come forward with an explosive new trove of documents, rivaling or exceeding the Twitter Files and Facebook Files in scale and importance.

[Image caption: US military contractor Pablo Breuer (left), UK defense researcher Sara-Jayne “SJ” Terp (center), and Chris Krebs, former director of the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (DHS-CISA)]

A whistleblower has come forward with an explosive new trove of documents, rivaling or exceeding the Twitter Files and Facebook Files in scale and importance. They describe the activities of an anti-disinformation group called the Cyber Threat Intelligence League, or CTIL, that officially began as the volunteer project of data scientists and defense and intelligence veterans but whose tactics over time appear to have been absorbed into multiple official projects, including those of the Department of Homeland Security (DHS).

The CTI League documents offer the missing link: answers to key questions not addressed in the Twitter Files and Facebook Files. Combined, they offer a comprehensive picture of the birth of the anti-disinformation sector, or what we have called the Censorship Industrial Complex.

The whistleblower's documents describe everything from the genesis of modern digital censorship programs to the role of the military and intelligence agencies, partnerships with civil society organizations and commercial media, and the use of sock puppet accounts and other offensive techniques. “Lock your shit down,” explains one document about creating your spy disguise. Another explains that while such activities overseas are “typically” done by “the CIA and NSA and the Department of Defense,” censorship efforts “against Americans” have to be done using private partners because the government doesn't have the “legal authority.”

The whistleblower alleges that a leader of CTI League, a former British intelligence analyst, was in the room at the Obama White House in 2017 when she received the instructions to create a counter-disinformation project to stop a “repeat of 2016.”

Over the last year, Public, Racket, congressional investigators, and others have documented the rise of the Censorship Industrial Complex, a network of over 100 government agencies and nongovernmental organizations that work together to urge censorship by social media platforms and spread propaganda about disfavored individuals, topics, and whole narratives. The US Department of Homeland Security's Cybersecurity and Information Security Agency (CISA) has been the center of gravity for much of the censorship, with the National Science Foundation financing the development of censorship and disinformation tools and other federal government agencies playing a supportive role.

Emails from CISA's NGO and social media partners show that CISA created the Election Integrity Partnership (EIP) in 2020, which involved the Stanford Internet Observatory (SIO) and other US government contractors. EIP and its successor, the Virality Project (VP), urged Twitter, Facebook and other platforms to censor social media posts by ordinary citizens and elected officials alike. […]

https://twitter.com/shellenberger/status/1729538920487305723


Judge rules it's fine for car makers to intercept your text messages

Henry Baker <hbaker1@pipeline.com>
Sun, 19 Nov 2023 17:34:07 +0000

I was worried about this problem the last time I rented a car, because I was able to see all the GPS destinations and the phone numbers of some of the previous rental customers when I first got into the rental car. I didn't want to leave my data available to every subsequent renter.

But clearing the GPS, message and phone number data logs took me (a PhD in Computer Science) at least 15 minutes and a significant amount of research in order to perform this expunging task on a relatively high-end rental car.

Very few people are going to spend the time while turning in their rental car to clear these personal data from the car data logs — especially when they're trying like crazy to get to their airplane on time!

>>There needs to be an industry-wide standard for clearing these data, one which takes only a second or two.<<

Furthermore, the car manufacturers should be liable if these supposedly expunged data are subsequently used illegally—e.g., for tracking down an ex-spouse or for identity theft.

https://www.malwarebytes.com/blog/news/2023/11/judge-rules-its-fine-for-car-makers-to-intercept-your-text-messages

Judge rules it's fine for car makers to intercept your text messages

Posted: November 9, 2023 by Pieter Arntz

A federal judge has refused to bring back a class action lawsuit that alleged four car manufacturers had violated Washington state's privacy laws by using vehicles' on-board infotainment systems to record customers' text messages and mobile phone call logs.

The judge ruled that the practice doesn't meet the threshold for an illegal privacy violation under state law. The plaintiffs had appealed a prior judge's dismissal.

https://www.documentcloud.org/documents/24133084-22-35448

Car manufacturers Honda, Toyota, Volkswagen, and General Motors were facing five related privacy class action suits. One of those cases, against Ford, had been dismissed on appeal previously.

Infotainment systems in the company's vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system. Once messages have been downloaded, the software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

The Seattle-based appellate judge ruled that the interception and recording of mobile phone activity did not meet the Washington Privacy Act's (WPA) standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened.

In a recent Lock and Code podcast, we heard from Mozilla researchers that the data points that car companies say they can collect on you include social security number, information about your religion, your marital status, genetic information, disability status, immigration status, and race. And they can sell that data to marketers.

https://www.malwarebytes.com/blog/podcast/2023/09/what-does-a-car-need-to-know-about-your-sex-life

This is alarming. Given the increasing number of sensors being placed in cars every year, this is becoming an increasingly grave problem. In the same podcast, we also explored the booming revenue stream that car manufacturers are tapping into by not only collecting people's data, but also packaging it together for targeted advertising. According to the Mozilla research, popular global brands including BMW, Ford, Toyota, Tesla, Kia, and Subaru:

“Can collect deeply personal data such as sexual activity, immigration status, race, facial expressions, weight, health and genetic information, and where you drive. Researchers found data is being gathered by sensors, microphones, cameras, and the phones and devices drivers connect to their cars, as well as by car apps, company websites, dealerships, and vehicle telematics.”

In fact, the seasoned Mozilla team said “cars are the worst product category we have ever reviewed for privacy” after finding that all 25 car brands they researched earned the “Privacy Not Included” warning label.

Since that doesn't give us much of a choice to go for a brand that respects our privacy, I suggest we turn off our phones before we start the car. It's both safer and better for your privacy.


Protecting Critical Infrastructure from Cyber Attacks (RMIT)

ACM TechNews <technews-editor@acm.org>
Mon, 27 Nov 2023 11:51:33 -0500 (EST)

RMIT University, 22 Nov 23, via ACM TechNews

A mathematical breakthrough by researchers at the Royal Melbourne Institute of Technology and tech startup Tide Foundation in Australia allows system access authority to be spread invisibly and securely across a network. Dubbed “ineffable cryptography,” the technology has been incorporated into a prototype access-control system specifically for critical infrastructure management, known as KeyleSSH, and successfully tested with multiple companies. It works by generating and operating keys across a decentralized network of servers, each operated by an independent organization. Each server in the network can only hold part of a key—no one can see the full keys, all the processes they are partially actioning, or the assets they are unlocking.
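
The item gives few technical details, but the core idea that no single server ever holds a usable key can be illustrated with plain additive secret sharing. The sketch below in Python is a generic illustration of that idea, not Tide's actual “ineffable cryptography”; in a real deployment the key would never be reassembled in one place at all, since each server would instead contribute a partial computation.

  # Generic additive key-splitting sketch (an assumption for illustration,
  # not Tide Foundation's actual scheme). Each server holds one share;
  # any subset short of all shares reveals nothing about the key.
  import secrets

  PRIME = 2**127 - 1   # arithmetic is done modulo this large prime

  def split_key(key: int, n_servers: int) -> list[int]:
      """Split `key` into n additive shares, one per independent server."""
      shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
      shares.append((key - sum(shares)) % PRIME)
      return shares

  def combine(shares: list[int]) -> int:
      """Only the combination of all shares reconstructs the key."""
      return sum(shares) % PRIME

  key = secrets.randbelow(PRIME)
  shares = split_key(key, n_servers=5)
  assert combine(shares) == key
  # Missing even one share, reconstruction fails (except with negligible probability).
  assert combine(shares[:-1]) != key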


Crypto Crashed and Everyone's In Jail. Investors Think It's Coming Back Anyway. (Vice)

Monty Solomon <monty@roscom.com>
Mon, 20 Nov 2023 18:58:47 -0500

https://www.vice.com/en/article/7kxmpg/crypto-crashed-and-everyones-in-jail-investors-think-its-coming-back-anyway


Feds seize Sinbad crypto mixer allegedly used by North Korean hackers (TechCrunch)

Monty Solomon <monty@roscom.com>
Wed, 29 Nov 2023 20:49:51 -0500

https://techcrunch.com/2023/11/29/feds-seize-sinbad-crypto-mixer-allegedly-used-by-north-korean-hackers/


A lost bitcoin wallet passcode helped uncover a major security flaw (The Washington Post)

Gabe Goldberg <gabe@gabegold.com>
Thu, 30 Nov 2023 18:37:21 -0500

If you created a bitcoin wallet before 2016, your money may be at risk—A company that helps recover cryptocurrency discovered a software flaw putting as much as $1 billion at risk from hackers. Now it’s going public in hopes people will move their money before they get robbed.

https://www.washingtonpost.com/technology/2023/11/14/bitcoin-wallet-passcode-flaw/


Ontario's Crypto King still jet-setting to UK, Miami, and soon Australia despite bankruptcy (CBC)

Matthew Kruk <mkrukg@gmail.com>
Thu, 30 Nov 2023 09:35:52 -0700

https://www.cbc.ca/news/canada/toronto/ontario-crypto-king-jetsetting-abroad-while-bankrupt-1.7042719


British Library confirms customer data was stolen by hackers, with outage expected to last months (TechCrunch)

Monty Solomon <monty@roscom.com>
Thu, 30 Nov 2023 08:35:24 -0500

https://techcrunch.com/2023/11/29/british-library-customer-data-stolen-ransomware/


PSA: Update Chrome browser now to avoid an exploit already in the wild (The Verge)

Monty Solomon <monty@roscom.com>
Thu, 30 Nov 2023 08:39:33 -0500

https://www.theverge.com/2023/11/30/23982296/google-chrome-browser-update-sandbox-escape-exploit-security-vulnerability


WeWork has failed. Like a lot of other tech startups, it left damage in its wake (CBC)

Matthew Kruk <mkrukg@gmail.com>
Sun, 19 Nov 2023 08:39:46 -0700

https://www.cbc.ca/news/business/armstrong-start-ups-wework-uber-1.7032264

The worksharing giant WeWork was supposed to fundamentally alter the future of the office. It raised billions of dollars and signed leases in office towers across North America, but filed for bankruptcy protection last week.

Analysts say it collapsed, at least in part, because it never had a viable business model.

“It didn't really have a clear path to profitability. It never made any money,” said Susannah Streeter, head of money and markets at the financial services firm Hargreaves Lansdown.


Re: The AI Pin (RISKS-33.94)

Rob Slade <rslade@gmail.com>
Mon, 20 Nov 2023 12:00:49 -0800

[Ummmmm, somehow my posting got truncated, and the risky part left off:]

> On the other hand, as we have seen in various events to do with Siri and
> Alexa, this is “always on” surveillance.  The AI Pin will always be
> listening for commands.  (And, in common with Siri, Alexa, Gboard, and all
> the others, those verbal commands will be sent back to HQ for processing
> into text and parsing.)  By accident (and possibly by design?) it will be
> listening to everything that goes on around you.  (And, with the camera,
> possibly looking, too.)
>
> And, if it gets popular enough, who knows what you can find out with all
> that aggregated data …

Re: Social media gets teens hooked while feeding aggression and impulsivity, and researchers think they know why (CBC)

“C.J.S. Hayward” <cjsh@cjshayward.com>
Wed, 22 Nov 2023 09:44:45 +0000

https://www.cbc.ca/news/health/smartphone-brain-nov14-1.7029406

> Kids who spend hours on their phones scrolling through social media are
> showing more aggression, depression and anxiety, say Canadian researchers.
> […]

That is part of the dehumanizing effect I studied in “How Can I Take my Life Back from my Phone?”, https://cjshayward.com/phone/.

Using phones the way that seems “natural” opens a Pandora's box. Once, privilege could be marked by not owning a television. Now privilege can be marked by not owning a phone or, as in my case, by learning to use it in non-obvious ways that curb its presence as an intravenous drip of noise.


Re: Garble in Schneier's AI post (RISKS-33.84)

Steve Singer <sws@dedicatedresponse.com>
Sun, 19 Nov 2023 09:47:58 -0500

The text of this post was garbled by software (what could possibly go wrong?) ;-)

The links at the beginning and end of Schneier's post are unaffected and contain the embedded references of the original, ungarbled:

https://www.schneier.com/blog/archives/2023/11/ten-ways-ai-will-change-democracy.html

https://ash.harvard.edu/ten-ways-ai-will-change-democracy


Re: Using your iPhone to start your car is about to get a lot easier (RISKS-33.94)

Sam Bull <9wqnn1@sambull.org>
Mon, 27 Nov 2023 19:05:26 +0000

Not much different from what Tesla has been doing for years (which both supports unlocking remotely via an API and unlocking locally via Bluetooth).


Re: Overview of the iLeakage Attack (Jericho, RISKS-33.93)

Sam Bull <9wqnn1@sambull.org>
Sat, 25 Nov 2023 02:29:08 +0000

> Sorry… godfather implies at least two generations, if not three.

Wouldn't that be grandfather? I'm a godfather to my sister: 0 generations.

Please report problems with the web pages to the maintainer
