Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
The California DMV suspended the company's driverless permits, citing public safety. Cruise may apply to reinstate them, but the DMV gave no timeline.
Jason Kim (Georgia Tech), Stephan van Schaik (U. Michigan), Daniel Genkin (Georgia Tech), Yuval Yarom (Ruhr University Bochum)
Overview of the iLeakage Attack.
We present iLeakage, a transient execution side channel targeting the Safari web browser on Macs, iPads, and iPhones. iLeakage shows that the Spectre attack is still relevant and exploitable, even after nearly six years of mitigation effort since its discovery. We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution. In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords when these are autofilled by credential managers.
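The speculative-execution recovery described above ends with a cache-timing measurement: the attacker learns a secret byte by observing which line of a probe array was pulled into the cache. Below is a toy Python model of just that probing phase; real attacks depend on hardware caches and high-resolution timers, which this sketch only simulates, and all names and values are invented for illustration.

```python
# Toy model of the cache-probing phase of a Spectre-style attack.
# Real attacks use hardware caches and timers; here the "cache" is a
# Python set standing in for which lines of a 256-entry probe array
# were pulled into the cache during speculative execution.

SECRET = 113  # the byte value the "victim" leaks transiently

def speculative_leak(cache: set) -> None:
    # Models probe_array[SECRET] being cached during misprediction:
    # architecturally nothing is returned, but the cache state changes.
    cache.add(SECRET)

def recover(cache: set) -> int:
    # The attacker probes all 256 candidate lines; a "cache hit"
    # (a fast access, modeled here as set membership) reveals the secret.
    for guess in range(256):
        if guess in cache:
            return guess
    return -1

cache = set()
speculative_leak(cache)
print(recover(cache))  # → 113
```

The point of the model is that the secret never crosses an architectural boundary: only the side effect on cache state does, and the attacker reconstructs the value from that side effect alone.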
Demo Videos: Recovering Instagram Credentials. We show a scenario where the target uses an autofilling credential manager (LastPass in this demo) to sign into Instagram with Safari on macOS.
Today is the 35th anniversary of the Internet Worm.
“Ancient history,” you say? Or perhaps, “What's that?”
Read my blog post about it to get my perspective on why it is important: https://www.cerias.purdue.edu/site/blog/post/reflecting_on_the_internet_worm_at_35/
Dan Milmo, The Guardian, 25 Oct 2023. via ACM TechNews
Ahead of an AI safety summit in London, a group of experts including artificial intelligence (AI) “godfathers” Geoffrey Hinton and Yoshua Bengio, both ACM Turing Award recipients, said AI companies must be held accountable for the damage their products cause. The University of California, Berkeley's Stuart Russell, one of 23 experts who composed AI policy proposals released Tuesday, called developing increasingly powerful AI systems before understanding how to render them safe “utterly reckless.” The proposed policies include having governments and companies commit 33% of their AI research and development resources to safe and ethical AI use. Companies that discover dangerous capabilities in their AI models also must adopt specific safeguards.
In an op-ed for Bloomberg Law, EPIC's Executive Director Alan Butler argued for the need for an overriding federal privacy law.
“The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an ‘arms race’ between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.”
Deepfaked content reaffirms human susceptibility to truth-default interpretation (https://en.wikipedia.org/wiki/Truth-default_theory). The human psyche is easily and quickly hooked into believing a whole-cloth fabrication.
Tiffany Hsu and Stuart A. Thompson, The New York Times, 28 Oct 2023, via ACM TechNews, 30 Oct 2023
Disinformation researchers have found the use of artificial intelligence (AI) to spread falsehoods in the Israel-Hamas war is sowing doubt about the veracity of online content. The researchers discovered people on social media platforms and forums accusing political figures, media outlets, and others of attempts to influence public opinion through deepfakes, even when the content is authentic. Experts say bad actors are exploiting AI's availability to facilitate the so-called liar's dividend by convincing people genuine content is fake. Deepfake detection services like U.S.-based AI or Not also have been used to label content as fake, and synthetic media specialist Henry Ajder said such tools “provide a false solution to a much more complex and difficult-to-solve problem.”
For context, Australia has the concept of “parliamentary privilege,” under which members of Parliament (both federal and state) cannot be sued for defamation or libel over statements made in Parliament. This privilege extends to parliamentary inquiries and Senate committees, where anyone (not just MPs) presenting evidence is likewise covered.
So we have AI-generated rubbish presented in a situation which doesn't allow recourse for those impacted. I'm no fan of the Big Four, or the behaviour of some of their partners, but the fact that some partners lost their jobs over this is terrible.
Pranshu Verma and Will Oremus, The Washington Post
Artificial intelligence voice-cloning software has rapidly increased in quality. It's allowing anyone from foreign actors to music fans to copy somebody's voice.
University of Sheffield (UK), 24 Oct 2023, via ACM TechNews
Scientists at the U.K.'s University of Sheffield, the North China University of Technology, and e-commerce giant Amazon found hackers can trick natural language processing tools like OpenAI's ChatGPT into generating malicious code for possible use in cyberattacks. The researchers discovered and successfully exploited security flaws in six commercial artificial intelligence (AI) tools, including ChatGPT; Chinese intelligent dialogue platform Baidu-UNIT; structured query language (SQL) generators AI2SQL, AIHelperBot, and Text2SQL; and online tool resource ToolSKE. They learned that asking these AIs specific questions caused them to produce malicious code that would leak confidential database information, or disrupt or even destroy database operations. The team also found AI language models are susceptible to simple backdoor attacks. Sheffield's Xutan Peng said the vulnerabilities are rooted in the fact that “more and more people are using [AIs like ChatGPT] as productivity tools, rather than a conversational bot.”
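As a hedged illustration of the kind of data-leaking SQL the researchers describe, the toy below runs a query that looks like a product search but UNIONs in a confidential table. The schema, data, and query are invented for this sketch and are not from the Sheffield study.

```python
import sqlite3

# Invented schema and data, purely for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE products(name TEXT, price REAL);
    CREATE TABLE users(login TEXT, password TEXT);
    INSERT INTO products VALUES ('widget', 9.99);
    INSERT INTO users VALUES ('alice', 's3cret');
""")

# A benign-looking generated query with a malicious UNION appended,
# folding credentials into what appears to be a product listing:
leaky_sql = """
    SELECT name, price FROM products
    UNION SELECT login, password FROM users
"""
rows = sorted(db.execute(leaky_sql).fetchall())
print(rows)  # → [('alice', 's3cret'), ('widget', 9.99)]
```

A user who pastes generated SQL into a production tool without reading it would never notice the extra clause, which is exactly the productivity-tool risk Peng points to.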
AI is diagnosing diseases and recommending treatments, but the systems aren't always regulated like drugs or medical devices.
Washington hasn't written the rules for the new artificial intelligence in health care even though doctors are rapidly deploying it—to interpret tests, diagnose diseases and provide behavioral therapy.
Products that use AI are going to market without the kind of data the government requires for new medical devices or medicines. The Biden administration hasn't decided how to handle emerging tools like chatbots that interact with patients and answer doctors' questions—even though some are already in use. And Congress is stalled. Senate Majority Leader Chuck Schumer said this week that legislation was months away.
Advocates for patient safety warn that until there’s better government oversight, medical professionals could be using AI systems that steer them astray by misdiagnosing diseases, relying on racially biased data or violating their patients’ privacy.
[These are just some impressions of war in the 21st century, from the POV of a retired hi-tech man whose latest military experience was 30 years ago. I'll try to keep it relevant to RISKS.]
Part 1: It's a Smartphone war
Forget walkie-talkies, forget battleground maps, communication lines, and the Signal Corps. The main way soldiers and civilians communicate is WhatsApp. Soldiers get their marching orders on their phones, including maps, drone images of targets, and real-time situation profiles.
Other applications are also employed: WhatsApp's “Share Location” feature was essential during the first hours, enabling soldiers to reach and evacuate civilians caught in the line of fire, and also to locate terrorists. There is also an app that alerts people that their area is under attack. Other applications help coordinate manpower and supplies.
A lot has been said about how terrorists have used low-tech means to overcome hi-tech defenses (ever since 9/11), but in organized operations, high-tech warfare seems to be a lot more effective.
Part 2: The Role of Women.
This may be relevant to RISKS because, ever since the invention of the typewriter, women in the military have been assigned roles operating high-tech machinery. As the military has become more technologically advanced, more women are stationed at frontline HQ and command-and-control units.
In this war, such units were attacked, and women had to fight alongside the men to defend their positions. They proved to be every bit as courageous and effective fighters.
A section of the front was defended by a tank company that was meant to be “experimental” and staffed entirely by women. They virtually saved the entire southern sector of the front. I guess it can be concluded that the experiment was successful.
Part 3: The Rockets' Red Glare
The Iron Dome defense system consists of long and short range radars, which can detect incoming missiles and rockets, calculate where they might land, operate air-raid sirens in the affected areas, and launch interceptor missiles to shoot them down.
The system does not intercept missiles whose target area is uninhabited. This saves on interceptor missiles, but can be scary for those living nearby, who sometimes are given no warning that a missile is going to come down and explode next door.
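The detect/predict/decide pipeline described above can be sketched as follows. This is a minimal toy model, not the actual system: the drag-free ballistic formula, the speeds, and the populated-area intervals are all invented for illustration.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_range(speed: float, angle_deg: float) -> float:
    """Horizontal distance of a drag-free ballistic projectile, metres."""
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / G

# Populated intervals along the rocket's ground track, metres (invented).
POPULATED = [(9000.0, 12000.0)]

def should_intercept(speed: float, angle_deg: float) -> bool:
    # Intercept only if the predicted impact point falls in a populated area.
    x = impact_range(speed, angle_deg)
    return any(lo <= x <= hi for lo, hi in POPULATED)

print(should_intercept(320, 45))  # lands ~10.4 km out, populated → True
print(should_intercept(150, 45))  # lands ~2.3 km out, open field → False
```

The second case is the scary one for nearby residents: the predicted impact point is open ground, so no interceptor is fired and no warning may be given.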
The accuracy of the system is on the scale of a small town or borough. It's an unparalleled experience to have your afternoon coffee on your porch, while watching a missile attack unfold over the next town: Air-raid sirens, the rockets' red glare, interceptors launched, and a few very loud bangs when they explode in mid-air.
Twelve days into a ransomware attack that has upended health-care services at five hospitals in southwestern Ontario, a cybercriminal group claimed responsibility in an online blog describing how the attack happened and what it says are the millions of private patient records it has stolen.
In a report to Windsor Regional Hospital Thursday, CEO David Musyj said the hospital is slowly getting back on track, working hard to restore services. He noted that although the impacted hospitals “closely examined” the ransom demand from the cybercriminals, they decided against paying it.
Danny Nelson, CoinDesk: Drained Crypto Accounts at IRA Financial Leave Victims Searching for Answers
They joined IRA Financial Trust eager to build a nest egg in crypto. Instead, some users told CoinDesk their retirement accounts were drained, frozen and locked—with little explanation of what happens next.
It's been nearly one week since an apparent security breach threw IRA Financial's clients into crisis mode. With $36 million of their retirement savings in limbo and no full explanation from either IRA Financial or Gemini — the crypto exchange owned by the Winklevoss twins, Cameron and Tyler, and custodian where their crypto was held—they've begun organizing a response to crypto's latest hack…
…The incident is one of the first high-profile exploits to hit crypto retirement accounts in the U.S. Appealing to tax-savvy bitcoiners, this cottage industry has for the past few years hawked products in partnership with top crypto brands. […]
The New York Times, 28 October 2023, Business section front page in the National Edition
From Twitter's town square to a spammy, shrinking X: Since the billionaire bought Twitter and rebranded it as X, disinformation and hateful speech have surged, among several other effects.
The Poynter Report https://mailchi.mp/poynter/lb6mw105q6?e=8084435636
Reviewed, Gannett's product reviews site, took down several affiliate marketing articles that some of its journalists claimed were generated by artificial intelligence.
The articles in question first went up on Friday and included reviews of products that Reviewed does not typically cover, like dietary supplements, according to the Reviewed Union, which represents journalists and lab and operations workers at the outlet. The posts, which were part of a new shopping page <https://reviewed.usatoday.com/shopping>, did not have bylines, and union members decried the work as an attempt to replace their labor. By Tuesday morning, the page was gone. Reviewed then republished the stories in the afternoon with a disclaimer that they had not been written by staff before taking the page down again.
As of Tuesday evening, the shopping page was still down, though links <https://reviewed.usatoday.com/shopping/similar/Greens-Steel/vacuum-tumbler> to individual <https://reviewed.usatoday.com/shopping/similar/National-Geographic-Snorkeler/Scuba-Mask> stories <https://reviewed.usatoday.com/shopping/similar/nbpure/Best-Liver-Supplements> still worked.
The articles were created by third-party freelancers hired by a marketing agency partner, not AI, Reviewed spokesperson Lark-Marie Anton wrote in an emailed statement: “The pages were deployed without the accurate affiliate disclaimers and did not meet our editorial standards.”
Reviewed follows USA Today's ethical guidelines <https://cm.usatoday.com/ethical-conduct/> regarding AI-generated content, Anton added. Those guidelines stipulate that journalists disclose the use of AI and its limitations when publishing AI-assisted content.
Any company that hired freelance IT workers over the last few years more than likely hired someone from North Korea, pretending to be an American. https://www.zetter-zeroday.com/p/how-north-korean-workers-tricked
FBI guidance: https://www.ic3.gov/Media/Y2023/PSA231018
- Neither article says whether anyone is combing the work of these programmers for backdoors they left in their code, or whether anyone has notified the target companies. The FBI closed 17 websites, but only one has been reported: edenprogram.com
Call-center operators use pop-ups, malware, and cold calls to get people to pay for PC fixes they don't really need.
Philips argued in court that its U.S. subsidiary should be responsible for damages caused by its CPAP machines and ventilators. Patients' attorneys say safety decisions were made at the Dutch company's highest levels.
A vaccine against tuberculosis, the world's deadliest infectious disease, has never been closer to reality, with the potential to save millions of lives. But its development slowed after its corporate owner focused on more profitable vaccines.
The Education Department announced on Monday it would penalize the student loan servicer MOHELA for its failure to send timely billing statements to 2.5 million borrowers.
More surgeons are opting for a complicated hernia repair that they learned from videos on social media showing shoddy techniques.
The Patent Fight That Could Take Apple Watches Off the Market https://www.nytimes.com/2023/10/30/opinion/apple-watch-masimo.html
Schools in Orlando took a tougher approach than a new state law required. Student engagement increased. So did the hunt for contraband phones.
Federal judges in three states have blocked children's privacy and parental oversight laws, saying they very likely violate free speech rights.
The decision by a California jury is the first involving a fatal accident that lawyers representing the victims said was the fault of Tesla’s self-driving technology.
The location of the attacking computer doesn't say much (or anything) about where the hackers themselves are actually located. They could be using cloud services or botnets with computers located in countries other than their own.
Please report problems with the web pages to the maintainer