Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
https://www.theverge.com/2024/2/2/24059114/tesla-recall-brake-system-font-size-power-steering
https://www.wired.com/story/chatgpt-memory-openai/
ChatGPT is now like a first date who never forgets the details.
Yan Zhuang, The New York Times, 11 Feb 2024
Despite being imprisoned, former Pakistan Prime Minister Imran Khan has garnered support for his political party using AI. Khan's AI-generated voice was used to make a victory speech on Feb. 10, stating that his party, Pakistan Tehreek-e-Insaf, won the most seats in the general election. The speech, which featured a disclaimer about the use of AI, rejected the victory claim of Khan's rival and called on supporters to defend the election's results.
Christina A. Cassidy, Associated Press, 8 Feb 2024
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is rolling out a program to help state and local election officials enhance election security. The agency hired 10 new people for the program, each with significant election experience, who will be placed at various locations nationwide to work alongside staff already performing cyber and physical security reviews as requested by election offices.
I had a conversation with someone this morning about electronic odometers in modern cars. If you recall, they were mandated because the old (mechanical) odometers were being rolled back, allowing used cars to be sold for higher prices.
Maybe others were aware, but electronic odometers are now being rolled back too. If you go to eBay, “odometer correction tools” for ~$300 can make whatever adjustments you want. They're not subtle—on one I looked at, it shows the before and after odometer reading, subtracting off 40,000 kilometers. (I don't know if the devices are illegal in the US, or just the act of changing the odometer.) Carfax claims that rollbacks are up significantly in the past few years (*). I'd imagine they have some incentive to detect manipulation, since they could be sued by a consumer (or a state) over vehicles sold with tampered odometers.
Anyway, the motivation for moving from paper to [particularly unauditable] DREs 25 years ago was (in part) to reduce ballot fraud, and we know how that went. I'm not predicting a return to mechanical odometers the way we've returned to paper ballots, but I found it an interesting analogue. Moving to electronic systems doesn't always have the expected result!
(*) https://www.carfax.com/press/resources/odometer
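The kind of rollback detection Carfax presumably performs can be sketched as a simple monotonicity check over a vehicle's recorded service history. The record format and function name below are hypothetical, purely for illustration:

```python
# Flag odometer rollbacks: any reading lower than an earlier reading
# (records assumed already in chronological order) suggests tampering.
def find_rollbacks(readings):
    """readings: list of (date_str, miles). Returns (date, miles, prior_max) flags."""
    flags = []
    high_water = 0
    for date, miles in readings:
        if miles < high_water:
            flags.append((date, miles, high_water))
        high_water = max(high_water, miles)
    return flags

history = [("2019-05", 30000), ("2021-02", 72000),
           ("2022-08", 32000),   # rollback: 40,000 subtracted
           ("2023-11", 45000)]
print(find_rollbacks(history))
# [('2022-08', 32000, 72000), ('2023-11', 45000, 72000)]
```

Note that even the later, "clean-looking" 45,000 reading is flagged, since it still sits below the earlier high-water mark—one reason a single inspection of the dash can't catch what a reporting history can.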
Rizwan Choudhury, Interesting Engineering, 11 Feb 2024
Northeastern University researchers have developed a way to access video feeds from home security, dashboard, and smartphone cameras through walls. The EM Eye technique detects electromagnetic radiation emitted by the cameras' wires using a radio antenna, decodes the signal, and uses machine learning to reproduce real-time video, without sound, at a quality similar to the original's. A test on 12 different types of cameras revealed that, depending on the model, EM Eye could successfully eavesdrop from up to 16 feet away.
Madison Goldberg, WiReD, 11 Feb 2024
Cryptographers at the University of California, San Diego have developed a more efficient variant of the LLL algorithm, the lattice basis reduction algorithm published in 1982 and named after the researchers behind it—Arjen Lenstra, Hendrik Lenstra Jr., and László Lovász. Beyond breaking lattice-based cryptography, LLL has also proven useful in advanced mathematical arenas such as computational number theory. The new algorithm breaks tasks down into smaller pieces and better balances speed and accuracy.
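LLL generalizes two-dimensional lattice basis reduction to higher dimensions; a minimal sketch of the 2-D case (the classical Lagrange/Gauss algorithm, not the UCSD improvement) gives the flavor of what these algorithms do—repeatedly shortening basis vectors by subtracting integer multiples of one from the other:

```python
# Lagrange/Gauss reduction of a 2-D integer lattice basis:
# subtract the rounded projection of the longer vector onto the
# shorter one until neither vector can be shortened further.
def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def gauss_reduce(u, v):
    """Return a reduced basis for the lattice spanned by u and v."""
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # integer multiple of u closest to v's projection onto u
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

print(gauss_reduce((1, 0), (137, 1)))  # ((1, 0), (0, 1))
```

The highly skewed basis (1, 0), (137, 1) reduces to the orthogonal (1, 0), (0, 1); finding such short, near-orthogonal vectors is exactly the capability that breaks lattice-based schemes with weak parameters.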
Once, drug dealers and money launderers saw cryptocurrency as perfectly untraceable. Then a grad student named Sarah Meiklejohn proved them all wrong – and set the stage for a decade-long crackdown.
https://www.wired.com/story/27-year-old-codebreaker-busted-myth-bitcoins-anonymity
On a sweltering July evening, the din from thousands of computers mining for Bitcoins pierced the night. Nearby, Matt Brown, a member of the Arkansas legislature, monitored the noise alongside a local magistrate.
As the two men investigated complaints about the operation, Mr. Brown said, a security guard for the mine loaded rounds into an AR-15-style assault rifle that had been stored in a car.
“He wanted to make sure that we knew he had his gun—that we knew it was loaded,” Mr. Brown, a Republican, said in an interview.
The Bitcoin outfit here, 45 minutes north of Little Rock, is one of three sites in Arkansas owned by a network of companies embroiled in tense disputes with residents, who say the noise generated by computers performing trillions of calculations per second ruins lives, lowers property values, and drives away wildlife.
https://www.nytimes.com/2024/02/03/us/bitcoin-arkansas-noise-pollution.html
The lawsuit takes aim at the ecommerce giant turning on ads for Prime Video users and charging them an additional fee for its ad-free tier.
Ars Technica reports that teardowns of unbranded USB memory devices have revealed that a fair number include discarded parts, or parts used in ways they weren't designed for. This leads to failures and loss of data.
Buying unbranded media has always been a questionable practice. This report should serve as a warning to consider one's archival practices carefully, particularly when selecting external storage devices for long-term storage.
Uh, no thanks—There goes Firefox down the drain! -L
https://arstechnica.com/gadgets/2024/02/mozilla-lays-off-60-people-wants-to-build-ai-into-firefox/
Ride-hailing company made it difficult for drivers to access their information and failed to sufficiently disclose its data practices, regulator says
https://www.wsj.com/articles/uber-fined-almost-11-million-by-dutch-privacy-watchdog-ddd57a80
This is from the LA Times coverage of the Rebecca Grossman hit-and-run murder case:
Excerpts:
Prosecutors rested their case with a retired California Highway Patrol officer turned crash expert, John Grindey, who testified that Grossman was going so fast that her Mercedes safety system couldn't detect the two boys in the crosswalk to automatically apply the brakes.
Emphasizing the repeated prosecution theme of deadly speed, Grindey said Grossman's Mercedes-AMG GLE 43 approached the Triunfo Canyon Road crosswalk at 81 mph.
“Over … 44 mph, [the safety system] does not detect small children,” he told jurors.
[Really? These highly touted safety systems aren't effective enough to be relied upon? “Over 44 mph” is well below the standard speed limit on American highways.]
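For scale, a back-of-envelope stopping-distance calculation (assuming an optimistic 1.0 g of braking deceleration on dry pavement and ignoring driver reaction time entirely) shows why detection thresholds matter so much at these speeds:

```python
# Ideal stopping distance d = v^2 / (2 * mu * g), brakes only,
# no reaction time. mu = 1.0 is an assumed best-case dry-road value.
MPH_TO_MS = 0.44704   # miles/hour -> meters/second
G = 9.81              # m/s^2
MU = 1.0              # assumed braking coefficient

def stopping_distance_m(mph):
    v = mph * MPH_TO_MS
    return v * v / (2 * MU * G)

for mph in (44, 81):
    print(f"{mph} mph -> {stopping_distance_m(mph):.0f} m to stop")
# 44 mph -> 20 m to stop
# 81 mph -> 67 m to stop
```

Because distance grows with the square of speed, the car at 81 mph needs well over three times the road to stop—so even a system that *could* detect a pedestrian above 44 mph would have far less margin to act.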
What could possibly go wrong?
Oh yeah, count me OUT. -L
https://www.theregister.com/2024/02/13/google_micropayments_plan/
Also, scroll down for the comments on Slashdot:
Over the weekend I was hiking in northern Israel, using a free navigation app. Somewhere along the route, the app noted: “Distance to target: 144 km”, so I asked it to show me where it thinks I was: In the middle of an airport. Beirut airport. At the end of the hike, it showed that I'd walked 878 km—8 km actual hiking, plus 870 km of phantom-hiking 3 times to Beirut and back.
It seems that at a few spots along the way, someone (or something) was spoofing GPS in order to deflect or confuse incoming GPS-guided missiles. I don't know if that actually worked, but it's a fact that since the start of the war hundreds of rockets have been fired across the border, yet none of them were the supposedly accurate long-range ones.
More serious apps like Waze and Google Maps did not fall for this trick, and just marked those spots as temporary loss of GPS signal, but my Google timeline still shows this visit to Beirut.
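The plausibility check that apparently kept Waze and Google Maps from "walking to Beirut" can be sketched as a speed gate on successive GPS fixes; the 50 m/s threshold and data shapes below are arbitrary assumptions for illustration:

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def filter_fixes(fixes, max_speed=50.0):
    """Drop any fix implying impossible speed from the last accepted fix.
    fixes: list of (t_seconds, lat, lon) in time order."""
    accepted = [fixes[0]]
    for t, lat, lon in fixes[1:]:
        t0, lat0, lon0 = accepted[-1]
        d = haversine_m(lat0, lon0, lat, lon)
        if d / max(t - t0, 1e-9) <= max_speed:
            accepted.append((t, lat, lon))
    return accepted

# A hiker in northern Israel "jumps" to Beirut airport for one spoofed fix.
track = [(0, 33.05, 35.50), (60, 33.051, 35.501),
         (120, 33.82, 35.49),    # spoofed: ~85 km away, 60 s later
         (180, 33.052, 35.502)]
print(len(filter_fixes(track)))  # 3 — the Beirut jump is dropped
```

A filter like this also explains the "temporary loss of GPS signal" behavior: once the impossible fixes are rejected, the app simply has no position for those intervals.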
https://arstechnica.com/?p=2002579
https://arxiv.org/abs/2402.04607
https://techcrunch.com/2024/02/11/googles-and-microsofts-chatbots-are-making-up-super-bowl-stats/
Everyone recall boothole? https://eclypsium.com/research/theres-a-hole-in-the-boot/#breaking
yadda yadda buffer overflow, yadda malicious bootloader…cue massive UEFI patch and dbx update scramble.
It's back.
Similar root cause, a buffer overflow, but a different vector. This one's in netbooting ((i)PXE).
https://eclypsium.com/blog/the-real-shim-shady-how-cve-2023-40547-impacts-most-linux-systems/
This one isn't as concerning though. Certainly no system is in place that sources boot images outside of your local datacenter. Right?
https://arstechnica.com/?p=2002777
Russian access to satellite Internet system would negate a major battlefield advantage for Kyiv.
https://www.wsj.com/world/russia-using-musks-starlink-at-the-front-line-ukraine-says-516701f0
Océane Herrero, Gian Volpicelli, Antoaneta Roussi, Politico, 13 Feb 2024
Technology giants are planning a new industry accord to fight back against deceptive AI-generated election content that is threatening the integrity of major democratic elections across the world this year.
A draft Tech Accord, seen by POLITICO, showed technology companies want to work together to create tools like watermarks and detection techniques to spot, label and debunk deepfake AI-manipulated images and audio of public figures. The pledge also includes commitments to open up more about how the firms are fighting AI-generated disinformation on their platforms.
“We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders.”
The text, which is a draft and could still change, is planned to be presented when political and security leaders gather at the Munich Security Conference starting Friday. The conference has seen tech firms take up increasing attention and space throughout the years, as threats like informational warfare and cyberattacks have risen.
The draft Tech Accord has the backing of technology giants Microsoft, Google and Facebook and Instagram owner Meta, according to four people with knowledge of the process, granted anonymity because discussions were ongoing. Three of the people said TikTok, OpenAI and Adobe are also planning to sign. Other companies are still likely to join the initiative.
Meta confirmed to POLITICO it was part of the accord, adding that Adobe, Google, LinkedIn, Microsoft, OpenAI and TikTok would also join.
Google and OpenAI did not respond to a request for comment in time for publication. Microsoft declined to comment.
A report <https://securityconference.org/en/publications/munich-security-report-2024/> by the Munich Security Conference organizers, presented Monday, showed how fears around artificial intelligence had shot up in the past year in major global economies, especially in Italy, France and Brazil.
Tools to fight the deepfake flood
Tech firms are under pressure from governments including the European Union to get a grip on the problem of AI-generated deepfakes and misleading material. Several firms, including OpenAI and Meta, have said they will start labeling deepfakes in the coming months.
Political deepfakes have already popped up in the United States, Poland and the United Kingdom, among many other countries. Most recently, the U.S. was rocked by a robocall impersonating President Joe Biden, raising fears over the tech's impact on the country's politics.
The European Union's Artificial Intelligence Act would require all AI-generated content to be clearly labeled as such; the bloc is also using its Digital Services Act to force the tech industry to curb deepfakes.
The draft accord floated ideas including developing detection technology, open standards-based identifiers for deepfake content and watermarks using the standards C2PA and SynthID, which are existing initiatives that involve Microsoft, Google and a wide range of other tech firms.
But it added that technical tools like metadata, watermarking, classifiers, or other forms of provenance or detection techniques can't fully mitigate the risks of AI, suggesting that the initiative would need the support of governments and other organizations to raise public awareness of deepfakes.
Others in the technology industry criticized the initiative because it would divert attention from keeping tech companies in check with regulation and oversight.
Democracies are “well past the era where we can trust companies to self-regulate,” Meredith Whittaker, co-founder of the AI Now Institute, who saw a draft of the document last week, told POLITICO. “Deepfakes don't really matter unless you have a platform you can disseminate them on,” she added, arguing that the pledge failed to address how social media platforms and advertising models are used to target certain voters.
(NOT MY) Question to consumer advocate:
I recently had a very, very painful experience when I bought an HP Envy x360 2-in-1 laptop. I received it a few months ago, and the product was defective.
The touch screen did not work. I wasted 40 hours on the phone with HP tech support but could not get a replacement screen. I finally sent it back to HP for a repair.
HP recently told me that it couldn't get a replacement screen because it discontinued the laptop. I asked the company to send me a similar product, the HP Spectre x360 2-in-1 laptop, or refund the $1,434 I spent. But HP denied the exchange and refused me a full refund (it said I could not get the sales taxes back).
I’d like HP to replace my broken laptop with one of equal or better value or send me a full $1,434 refund. Can you help?
Perhaps I'm incredibly naive, but it sounds to me as though the 737 MAX 9 would be safer if the ‘door plug’ were replaced by an actual door!
An actual door can be seen, monitored, and checked, whereas the ‘door plug’ (aka ‘fake door’) is painted over and ignored.
There's a long history in the computer community of ‘unintended/unexpected’ ‘upgrades’—e.g., pulling a wire ‘upgrades’ a computer to higher performance, dormant code that was never intended for execution suddenly becomes a platform for exploitation, etc. Sometimes excess ‘optionality’ has overwhelming costs…
Paolo Benanti advises the Roman Catholic Church and the Italian government on the tricky questions, moral and otherwise, raised by the rapidly advancing technology.
(But who advises them on the semiconductors needed to implement that AI? Why, he's the Chip Monk.)
A 2024 plea for lean software
Bert Hubert <https://spectrum.ieee.org/u/bert_hubert> writes:
This post is dedicated to the memory of Niklaus Wirth, computing pioneer, who passed away 1 January 2024 <https://ethz.ch/en/news-and-events/eth-news/news/2024/01/computer-pioneer-niklaus-wirth-has-died.html>. In 1995 he wrote an influential article called “A Plea for Lean Software” <https://cr.yp.to/bib/1995/wirth.pdf>, published in Computer <https://ieeexplore.ieee.org/document/348001>, the magazine for members of the IEEE Computer Society, which I read early in my career as an entrepreneur and software developer. In what follows, I try to make the same case nearly 30 years later, updated for today's computing horrors. A version of this post was originally published on my personal blog, Berthub.eu <https://berthub.eu/articles/posts/a-2024-plea-for-lean-software/>.
Please report problems with the web pages to the maintainer