Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
New “spoofing” attacks resulting in total navigation failure have been occurring above the Middle East for months, which is “highly significant” for airline safety.
Joseph Bambridge, Politico Europe, 27 Nov 2023
Cybersecurity authorities in 18 major European and Western countries, including all G7 states, today released joint guidelines on how to develop artificial intelligence systems in ways that ensure their cybersecurity.
The United Kingdom, United States, Germany, France, Italy, Australia, Japan, Israel, Canada, Nigeria, Poland and others backed what they called the world's first AI cybersecurity guidelines. The initiative was led by the U.K.'s National Cyber Security Centre and follows London's AI Safety Summit, which took place in early November.
The 20-page document sets out practical ways providers of AI systems can ensure they function as intended, don't reveal sensitive data and aren't taken offline by attacks.
AI systems face both traditional threats and novel vulnerabilities like data poisoning and prompt injection attacks, the authorities said. The guidelines—which are voluntary—set standards for how technologists design, deploy and maintain AI systems with cybersecurity in mind.
The U.K.'s NCSC will present the guidelines at an event Monday afternoon.
(Joseph Bambridge, Politico, PGN-ed for RISKS)
U.S. and UK Unveil AI Cyber Guidelines
The UK's National Cyber Security Center and U.S. Cybersecurity and Infrastructure Security Agency on Monday unveiled what they say are the world's first AI cyber guidelines, backed by 18 countries including Japan, Israel, Canada and Germany. It's the latest move on the international stage to get ahead of the risks posed by AI as companies race to develop more advanced models, and as systems are increasingly integrated in government and society.
“Overall I would assess them as some of the early formal guidance related to the cybersecurity vulnerabilities that derive from both traditional and unique vulnerabilities,” the Center for Strategic and International Studies' Gregory Allen told POLITICO. He said the guidelines appeared to be aimed at both traditional cyberthreats and new ones that come with the continued advancement of AI technologies.
Although the guidelines are voluntary, Allen said they could be made mandatory for selling to the U.S. federal government for certain types of risk-averse activities. In the private sector, Allen said companies buying AI technologies could require vendors to demonstrate compliance with the guidelines through third-party certification or other means.
Breaking it down: The guidelines aim to ensure security is a core requirement of the entire lifecycle of an AI system, and are focused on four themes: secure design, development, deployment and operation. Each section has a series of recommendations to mitigate security risks and safeguard consumer data, such as threat modeling, incident management processes and releasing AI models responsibly.
Homeland Security Secretary Alejandro Mayorkas said in a statement that the guidelines are a “historic agreement that developers must invest in, protecting customers at each step of a system's design and development.”
The guidance is closely aligned with the U.S. National Institute of Standards and Technology's Secure Software Development Framework (which outlines steps for software developers to limit vulnerabilities in their products) and CISA's secure-by-design principles, which were also released in concert with a dozen other countries.
Acknowledgements: The document includes a thank you to a notable list of leading tech companies for their contributions, including Amazon, Anthropic, Google, IBM, Microsoft and OpenAI. Also in the mentions were Georgetown University's Center for Security and Emerging Technology, RAND and the Center for AI Safety and the program for Geopolitics, Technology and Governance, both at Stanford.
Aaron Cooper, VP of global policy at tech trade group BSA | The Software Alliance, said in a statement to MT that the guidelines help “build a coordinated approach for cybersecurity and artificial intelligence,” something that BSA has been calling for in many of its cyber and AI policy recs.
Jack Nicas and Lucía Cholakian Herrera, The New York Times, 16 Nov 2023, via ACM TechNews, 20 Nov 2023
Sergio Massa and Javier Milei made wide use of artificial intelligence (AI) to create images and videos to promote themselves and attack each other prior to Sunday's presidential election in Argentina, won by Milei. AI-generated content made the candidates appear to say things they never said, put them in famous movies, and created campaign posters. Much of the content was clearly fake, but a few creations strayed into the territory of disinformation. Researchers have long worried about the impact of AI on elections, but those fears were largely speculative because the technology to produce deepfakes was too expensive and unsophisticated. “Now we've seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert who has advised governments on AI-generated content.
An experimental unmanned aircraft at Eglin Air Force Base in Florida. The drone uses artificial intelligence and has the capability to carry weapons, although it has not yet been used in combat.
As AI-Controlled Killer Drones Become Reality, Nations Debate Limits
Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant.
It’s unsettling, especially from such a storied name. But comments from its parent company should have told us it was coming.
In a story that has generated both shock and disdain, Futurism’s Maggie Harrison reports <https://futurism.com/sports-illustrated-ai-generated-writers> that Sports Illustrated published stories that were produced or partially produced by artificial intelligence, and that some stories had bylines of fake authors. To be clear, the disdain was directed at Sports Illustrated.
But maybe we shouldn't be surprised by any of this, as I’ll explain in a moment. First, the details.
When asked about fake authors, an anonymous source described as a “person involved with the creation of the content” told Harrison, “There's a lot. I was like, what are they? This is ridiculous. This person does not exist. At the bottom (of the page) there would be a photo of a person and some fake description of them like, ‘oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.’ Stuff like that. It's just crazy.”
The fake authors even included AI-generated mugshots. If true, that is pretty gross: photos of authors who don't actually exist, to go along with made-up bios that included made-up hobbies and even made-up pets. […]
New tools can create fake videos and clone the voices of those closest to us. This is how authoritarianism arises.
Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
Generative artificial intelligence is now capable of creating fake pictures, clones of our voices <https://www.wsj.com/articles/i-cloned-myself-with-ai-she-fooled-my-bank-and-my-family-356bd1a3>, and even videos depicting and distorting world events. The result: From our personal <https://www.wsj.com/tech/fake-nudes-of-real-students-cause-an-uproar-at-a-new-vvxsxsjersey-high-school-df10f1bb> circles to the political <https://www.wsj.com/world/china/china-is-investing-billions-in-global-disinformation-campaign-u-s-says-88740b85> circuses, everyone must now question whether what they see and hear is true.
We've long been warned <https://www.wsj.com/articles/the-world-isnt-as-bad-as-your-wired-brain-tells-you-1535713201> about the potential of social media to distort our view of the world <https://www.wsj.com/articles/why-social-media-is-so-good-at-polarizing-us-11603105204>, and now there is the potential for more false and misleading information to spread on social media than ever before. Just as importantly, exposure to AI-generated fakes can make us question the authenticity of everything we see <https://www.wsj.com/articles/the-deepfake-dangers-ahead-b08e4ecf>. Real images and real recordings can be dismissed as fake. “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don't trust anything anymore,’” says David Rand <https://mitsloan.mit.edu/faculty/directory/david-g-rand>, a professor at MIT Sloan who studies <https://www.nature.com/articles/s41562-023-01641-6> the creation, spread and impact of misinformation.
This problem, which has grown more acute in the age of generative AI, is known as the liar's dividend <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954>, says Renee DiResta, a researcher at the Stanford Internet Observatory.
The combination of easily generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities” <https://www.ribbonfarm.com/2019/12/17/mediating-consent/>.
Examples of misleading content created by generative AI are not hard to come by, especially on social media. One widely circulated and fake image of Israelis lining the streets in support of their country has many of the hallmarks of being AI-generated <https://www.reuters.com/fact-check/photo-cheering-crowds-waving-israeli-flags-soldiers-is-ai-generated-2023-10-30/> including telltale oddities that are apparent if you look closely, such as distorted bodies and limbs. For the same reasons, a widely shared image that purports to show fans at a soccer match in Spain displaying a Palestinian flag doesn't stand up <https://factcheck.afp.com/doc.afp.com.33YY7NY> to scrutiny.
ChatGPT Replicates Gender Bias in Recommendation Letters https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/
OpenAI and Microsoft hit with copyright lawsuit from non-fiction authors https://www.engadget.com/openai-and-microsoft-hit-with-copyright-lawsuit-from-non-fiction-authors-101505740.html?src=rss
ChatGPT generates fake data set to support scientific hypothesis https://www.nature.com/articles/d41586-023-03635-w
Meta/Facebook post and profit from ads on FB for criminals who sell counterfeit U.S. stamps to unsuspecting victims (or to those who choose to ignore warnings such as the one in the next paragraph). Images of the counterfeit stamps at the time of posting are here. I have reported these crimes to the United States Postal Inspection Service and the FBI's Internet Crime Complaint Center. I have also written to Meta about this criminal activity but never received a reply.
See < http://www.mekabay.com/counterfeit-stamps/ > for images of over 500 ads on FB for counterfeit US stamps.
Warning I post online whenever I can:
These are counterfeit stamps. It is a federal crime to use fake stamps as postage. Don't fall for these scams. https://www.uspis.gov/news/scam-article/counterfeit-stamps
The Sam Altman saga at OpenAI underscores an unsettling truth: nobody knows what AI safety really means.
Watermarks have been proposed to allow identification of data (and pictures, etc) generated by AI. This paper shows that that goal is essentially impossible.
The introduction is really very clear.
Adam Tauman Kalai (Microsoft Research) and Santosh S. Vempala (Georgia Tech), “Calibrated Language Models Must Hallucinate”, 27 Nov 2023
Unexplained stops. Incensed firefighters. Cars named Oregano. The robotaxis are officially here. Riding with Cruise and Waymo during their debut in San Francisco.
Officials say the issue did not affect the outcome of the votes, but are nonetheless racing to restore voter confidence ahead of next year’s election.
Skeptics […] say the root of the problem ties back to the basic design of the devices, called the ExpressVote XL.
The machine spits out a paper print-out that records voters’ choices in two ways: a barcode that is used to tabulate their vote and corresponding text so they can verify it was input correctly.
However, in the two races on 7 Nov, the machines swapped voters' choices in the written section of the ballot -- but not the barcode -- if they voted “yes” to retain one judge and “no” for the other.
ES&S and Northampton officials acknowledged that pre-election software testing, which is conducted jointly, should have caught that problem. They say an ES&S employee first introduced the error during regular programming meant to prepare the machines for Election Day. […]
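The two renderings of the same ballot invite a simple cross-check. As a hypothetical sketch (the ExpressVote XL's real encodings are not public; all names and formats below are invented), here is the kind of pre-election logic-and-accuracy test that compares the human-readable text against the barcode payload contest by contest -- exactly the comparison that would have flagged the swapped retention answers:

```python
def barcode_payload(choices: dict) -> str:
    # Deterministic machine-readable encoding, e.g. "judge_a=yes;judge_b=no"
    return ";".join(f"{c}={v}" for c, v in sorted(choices.items()))

def buggy_printed_text(choices: dict) -> list:
    # Simulates the reported defect: when the two retention answers
    # differ, the human-readable lines swap them (the barcode is correct).
    items = sorted(choices.items())
    if len(items) == 2 and items[0][1] != items[1][1]:
        items = [(items[0][0], items[1][1]), (items[1][0], items[0][1])]
    return [f"{c}: {v}" for c, v in items]

def consistent(choices: dict, text_lines: list) -> bool:
    # Re-parse both renderings and confirm they agree for every contest
    from_barcode = dict(p.split("=") for p in barcode_payload(choices).split(";"))
    from_text = dict(line.split(": ") for line in text_lines)
    return from_barcode == from_text

ballot = {"judge_a": "yes", "judge_b": "no"}
assert not consistent(ballot, buggy_printed_text(ballot))  # the cross-check flags the swap
```

The point of the sketch is only that the swap is mechanically detectable before Election Day, because the two encodings are supposed to be redundant.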
We found another vulnerability in Intel CPUs. Instruction prefixes that should be ignored mess up the “fast rep string mov” (FRSM) extension and cause invalid instruction execution. This high-severity vulnerability has serious consequences for cloud providers. It enables an attacker who is renting a cloud VM to:
- Deny service to an entire server
- Elevate privileges, gaining access to the entire server (confirmed by Intel)
https://lnkd.in/guzjT3UD
https://lnkd.in/gUn-vAvN
Georgia Tech Research, 17 Nov 2023, via ACM TechNews
A majority of the world's most popular websites are putting users and their data at risk by failing to meet minimum password requirement standards, according to researchers at the Georgia Institute of Technology (Georgia Tech). The researchers analyzed 20,000 randomly sampled websites from the Google Chrome User Experience Report, a database of 1 million websites and pages. Using a novel automated tool that can assess a website's password creation policies, they found that many sites permit very short passwords, do not block common passwords, and use outdated requirements like complex characters. Georgia Tech's Frank Li said security researchers have “identified and developed various solutions and best practices for improving Internet and Web security. It's crucial that we investigate whether those solutions or guidelines are actually adopted in practice to understand whether security is improving in reality.”
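As a rough illustration of the minimum-requirement checks the researchers looked for (a real minimum length plus a blocklist of common passwords, rather than outdated composition rules), here is a minimal sketch; the blocklist is a tiny invented sample, not a real breach corpus:

```python
# Illustrative sample only -- a real deployment would check against a
# large corpus of breached passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}

def acceptable(password: str, min_length: int = 8) -> tuple:
    """Return (ok, reason) for a candidate password."""
    if len(password) < min_length:
        return False, f"shorter than {min_length} characters"
    if password.lower() in COMMON_PASSWORDS:
        return False, "appears on the common-password blocklist"
    return True, "ok"

assert acceptable("password")[0] is False       # common password, rejected
assert acceptable("correct horse battery staple")[0] is True
```

Note that the sketch deliberately imposes no complex-character rules: length and blocklisting are the checks that current guidance favors, and their absence is what the study found on many popular sites.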
Many people insist that governments aren't involved in censorship, but they are. And now, a whistleblower has come forward with an explosive new trove of documents, rivaling or exceeding the Twitter Files and Facebook Files in scale and importance.
[Photo caption: U.S. military contractor Pablo Breuer (left), UK defense researcher Sara-Jayne “SJ” Terp (center), and Chris Krebs, former director of the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (DHS-CISA).]

The documents describe the activities of an anti-disinformation group called the Cyber Threat Intelligence League, or CTIL, which officially began as a volunteer project of data scientists and defense and intelligence veterans but whose tactics over time appear to have been absorbed into multiple official projects, including those of the Department of Homeland Security (DHS). The CTI League documents offer the missing link: answers to key questions not addressed in the Twitter Files and Facebook Files. Combined, they offer a comprehensive picture of the birth of the anti-disinformation sector, or what we have called the Censorship Industrial Complex. The whistleblower's documents describe everything from the genesis of modern digital censorship programs to the role of the military and intelligence agencies, partnerships with civil society organizations and commercial media, and the use of sock-puppet accounts and other offensive techniques. “Lock your shit down,” explains one document about creating your spy disguise.
Another explains that while such activities overseas are “typically” done by “the CIA and NSA and the Department of Defense,” censorship efforts “against Americans” have to be done using private partners because the government doesn't have the “legal authority.”

The whistleblower alleges that a leader of CTI League, a former British intelligence analyst, was in the room at the Obama White House in 2017 when she received the instructions to create a counter-disinformation project to stop a “repeat of 2016.”

Over the last year, Public, Racket, congressional investigators, and others have documented the rise of the Censorship Industrial Complex, a network of over 100 government agencies and nongovernmental organizations that work together to urge censorship by social media platforms and spread propaganda about disfavored individuals, topics, and whole narratives. The U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has been the center of gravity for much of the censorship, with the National Science Foundation financing the development of censorship and disinformation tools and other federal government agencies playing a supportive role.

Emails from CISA's NGO and social media partners show that CISA created the Election Integrity Partnership (EIP) in 2020, which involved the Stanford Internet Observatory (SIO) and other US government contractors. EIP and its successor, the Virality Project (VP), urged Twitter, Facebook and other platforms to censor social media posts by ordinary citizens and elected officials alike. […]
I was worried about this problem the last time I rented a car, because I was able to see all the GPS destinations and the phone numbers of some of the previous rental customers when I first got into the rental car. I didn't want to leave my data available to every subsequent renter.
But clearing the GPS, message and phone number data logs took me (a PhD in Computer Science) at least 15 minutes and a significant amount of research in order to perform this expunging task on a relatively high-end rental car.
Very few people are going to spend the time while turning in their rental car to clear these personal data from the car data logs — especially when they're trying like crazy to get to their airplane on time!
>>There needs to be an industry-wide standard for clearing these data which takes only a second or two.<<
Furthermore, the car manufacturers should be liable if these supposedly expunged data are subsequently used illegally—e.g., for tracking down an ex-spouse or for identity theft.
Judge rules it's fine for car makers to intercept your text messages
Posted: November 9, 2023 by Pieter Arntz
A federal judge has refused to bring back a class action lawsuit that alleged four car manufacturers had violated Washington state's privacy laws by using vehicles' on-board infotainment systems to record customers' text messages and mobile phone call logs.
The judge ruled that the practice doesn't meet the threshold for an illegal privacy violation under state law. The plaintiffs had appealed a prior judge's dismissal.
Car manufacturers Honda, Toyota, Volkswagen, and General Motors were facing five related privacy class action suits. One of those cases, against Ford, had been dismissed on appeal previously.
Infotainment systems in these companies' vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system. Once messages have been downloaded, the software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.
The Seattle-based appellate judge ruled that the interception and recording of mobile phone activity did not meet the Washington Privacy Act's (WPA) standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened.
In a recent Lock and Code podcast, we heard from Mozilla researchers that the data points that car companies say they can collect on you include social security number, information about your religion, your marital status, genetic information, disability status, immigration status, and race. And they can sell that data to marketers.
This is alarming. Given the increasing number of sensors being placed in cars every year, this is becoming an increasingly grave problem. In the same podcast, we also explored the booming revenue stream that car manufacturers are tapping into by not only collecting people's data, but also packaging it together for targeted advertising. According to the Mozilla research, popular global brands including BMW, Ford, Toyota, Tesla, Kia, and Subaru:
“Can collect deeply personal data such as sexual activity, immigration status, race, facial expressions, weight, health and genetic information, and where you drive. Researchers found data is being gathered by sensors, microphones, cameras, and the phones and devices drivers connect to their cars, as well as by car apps, company websites, dealerships, and vehicle telematics.”
In fact, the seasoned Mozilla team said “cars are the worst product category we have ever reviewed for privacy” after finding that all 25 car brands they researched earned the “Privacy Not Included” warning label.
Since that doesn't give us much of a choice to go for a brand that respects our privacy, I suggest we turn off our phones before we start the car. It's both safer and better for your privacy.
RMIT University, 22 Nov 23, via ACM TechNews
A mathematical breakthrough by researchers at the Royal Melbourne Institute of Technology and tech startup Tide Foundation in Australia allows system access authority to be spread invisibly and securely across a network. Dubbed “ineffable cryptography,” the technology has been incorporated into a prototype access-control system specifically for critical infrastructure management, known as KeyleSSH, and successfully tested with multiple companies. It works by generating and operating keys across a decentralized network of servers, each operated by independent organizations. Each server in the network can only hold part of a key—no one can see the full keys, all the processes they are partially actioning, or the assets they are unlocking.
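The article does not disclose Tide's actual “ineffable cryptography” construction, but classic Shamir threshold secret sharing illustrates the underlying idea: each server holds only one share of a key, and any quorum of shares -- but no single party -- can recover it. A minimal sketch over a prime field:

```python
import random

P = 2**127 - 1  # prime modulus for the finite field

def split(secret: int, n: int, k: int) -> list:
    """Split `secret` into n shares; any k of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

key = 0xDEADBEEF
shares = split(key, n=5, k=3)
assert reconstruct(shares[:3]) == key   # any 3 shares suffice
assert reconstruct(shares[1:4]) == key
```

Plain Shamir still reconstructs the secret at one point; the system described above evidently goes further, performing operations with the key while it remains distributed, so that "no one can see the full keys" even during use.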
If you created a bitcoin wallet before 2016, your money may be at risk—A company that helps recover cryptocurrency discovered a software flaw putting as much as $1 billion at risk from hackers. Now it’s going public in hopes people will move their money before they get robbed.
The worksharing giant WeWork was supposed to fundamentally alter the future of the office. It raised billions of dollars and signed leases in office towers across North America, but filed for bankruptcy protection last week.
Analysts say it collapsed, at least in part, because it never had a viable business model.
“It didn't really have a clear path to profitability. It never made any money,” said Susannah Streeter, head of money and markets at the financial services firm Hargreaves Lansdown.
[Ummmmm, somehow my posting got truncated, and the risky part left off:]
> On the other hand, as we have seen in various events to do with Siri and
> Alexa, this is “always on” surveillance. The AI Pin will always be
> listening for commands. (And, in common with Siri, Alexa, Gboard, and all
> the others, those verbal commands will be sent back to HQ for processing
> into text and parsing.) By accident (and possibly by design?) it will be
> listening to everything that goes on around you. (And, with the camera,
> possibly looking, too.)
>
> And, if it gets popular enough, who knows what you can find out with all
> that aggregated data …
> Kids who spend hours on their phones scrolling through social media are
> showing more aggression, depression and anxiety, say Canadian researchers.
> […]
That is part of the dehumanizing effect I studied in “How Can I Take my Life Back from my Phone?”, https://cjshayward.com/phone/.
Using phones the way that seems “natural” opens a Pandora's box. Once, privilege could be marked by not owning a television. Now privilege can be marked by not owning a phone, or, as in my case, learning to use it in non-obvious ways that curb its presence as an intravenous drip of noise.
The text of this post was garbled by software (what could possibly go wrong?) ;-)
The links at the beginning and end of Schneier's post are unaffected and contain the embedded references of the original, ungarbled:
Not much different from what Tesla has been doing for years (which both supports unlocking remotely via an API and unlocking locally via Bluetooth).
> Sorry… godfather implies at least two generations, if not three.
Wouldn't that be grandfather? I'm a godfather to my sister. 0 generations
Please report problems with the web pages to the maintainer