Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers, etc., that the linked site delivers. Please let the website maintainer know whether or not you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
[Could have been worse -- last time researchers checked it was 98.6%.] Hospitals -- despite being places where people implicitly expect to have their personal details kept private -- frequently use tracking technologies on their websites to share user information with Google, Meta, data brokers, and other third parties, according to research published today.
https://nymag.com/intelligencer/article/corporate-greed-made-the-change-healthcare-cyberattack-worse.html [See RISKS-34.12 for the Change Healthcare Attack. PGN]
https://arstechnica.com/?p=2016577
"Computers as tools for humans are so useful exactly *because* they can’t think and do tedious work like calculations or information storage and retrieval for humans in a *deterministic* way. It took like nearly 90 years of digital computers to make them powerful enough to run a wasteful algorithm that pretends to think (but doesn’t) and to deliver bullshit non-deterministic results while using absurd amounts of computational and environmental resources." https://hachyderm.io/@thomasfuchs/112265521636541465
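The determinism point in that post can be made concrete with a toy sketch. Everything below is illustrative: `sampled_answer` is a made-up stand-in for a generative model (not any real API), showing why repeated identical queries need not agree unless the sampling is pinned.

```python
import random

def deterministic_sum(values):
    """Classic computing: the same input always yields the same output."""
    return sum(values)

def sampled_answer(prompt, seed=None):
    """Toy stand-in for a generative model: the output depends on sampling,
    so repeated calls with the same prompt can disagree."""
    rng = random.Random(seed)
    templates = ["{} is 4", "{} is 5", "{} is about 4"]
    return rng.choice(templates).format(prompt)

# The deterministic tool agrees with itself every time.
assert deterministic_sum([2, 2]) == deterministic_sum([2, 2])
# The sampled answer only agrees with itself if the seed is fixed.
assert sampled_answer("2+2", seed=1) == sampled_answer("2+2", seed=1)
```

The contrast is the whole point: a calculator's reliability comes precisely from the first property, which the second kind of system gives up by design.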
[via Dave Farber] This is an excellent piece about where we find ourselves: Are We Watching The Internet Die? https://www.wheresyoured.at/are-we-watching-the-Internet-die/ "We're at the end of a vast, multi-faceted con of Internet users, where ultra-rich technologists tricked their customers into building their companies for free. And while the trade once seemed fair, it's become apparent that these executives see users not as willing participants in some sort of fair exchange, but as veins of data to be exploitatively mined as many times as possible, given nothing in return other than access to a platform that may or may not work properly." and "There are simply too many users, too many websites and too many content providers to manually organize and curate the contents of the Internet, making algorithms necessary for platforms to provide a service. Generative AI is a perfect tool for soullessly churning out content to match a particular set of instructions—such as those that an algorithm follows -- and while an algorithm can theoretically be tuned to evaluate content as "human," so can scaled content be tweaked to make it seem more human. Things get worse when you realize that the sheer volume of Internet content makes algorithmic recommendations a necessity to sift through an ever-growing pile of crap. Generative AI allows creators to weaponize the algorithms' weaknesses to monetize and popularize low-effort crap, and ultimately, what is a platform to do? Ban anything that uses AI-generated content? Adjust the algorithm to penalize videos without people's faces? How does a platform judge the difference between a popular video and a video that the platform made popular? And if these videos are made by humans and enjoyed by humans, why should it stop them?"
Clothilde Goujard, *Politico* BRUSSELS—Chatbots produced by Google, Microsoft and OpenAI shared some false information about the European election, two months before hundreds of millions head to cast their ballots, according to an analysis shared exclusively with POLITICO. While the artificial intelligence tools remained politically neutral, they tended to return incorrect election dates and information about how to cast a ballot, said Democracy Reporting International, a Berlin-based NGO that carried out the research in March. Chatbots also often provided broken or even irrelevant links to YouTube videos or content in Japanese, researchers added. "We were not surprised to find wrong information about details of the European elections, because chatbots are known to invent facts when providing answers, a phenomenon known as hallucination," said Michael Meyer-Resende, co-founder and executive director of Democracy Reporting International. Researchers noted that AI chatbots were dynamic, making the experiment hard to replicate. In a series of a dozen tests with similar questions carried out by POLITICO on Tuesday, the chatbots either declined to respond entirely or else had updated responses with links directing users to the EU institutions' websites. Meyer-Resende said the experiment was, however, large enough to be representative. It also provided new evidence about the risks of so-called AI hallucinations—which often occur because of insufficient training data, biases and false assumptions—ahead of the European election, which takes place from June 6-9. The fast emergence of easy-to-use AI tools generating text, audio and video has prompted concerns about a rise in misinformation in a year with crucial elections in the EU, the United States, the United Kingdom and India.
The European Commission in March ordered several tech firms including Bing and Google to explain—before April 5—how they were limiting potential risks to elections connected to their generative AI tools under the Digital Services Act. Researchers asked the same 10 questions in 10 languages—including German, Italian, Polish and Portuguese—from March 11-14 to the four most popular and accessible chatbots: OpenAI's ChatGPT 3.5 and 4, Google's Gemini and Microsoft's Copilot. ChatGPT's newest paid version performed the best, while Google's Gemini was deemed the least likely to give correct answers at the time of the test. "Because of the known limitations of all LLMs, we believe a responsible approach for Gemini is to restrict most election-related queries and to direct users to Google Search for the latest and most accurate information," said Karl Ryan, a Google spokesperson. He added that Google's Gemini was in the process of rolling out restrictions in March but the restrictions are now in place. "We will continue to quickly address instances in which Gemini isn't responding appropriately." "We are continuing to address issues and prepare our tools to perform to our expectations for the 2024 elections," said Robin Koch, a Microsoft spokesperson. He added that some of the measures included giving users of Microsoft's Copilot election information from authoritative sources and pushing them to check web links. OpenAI did not reply to a request for comment in time for publication.
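The researchers' observation that the experiment is hard to replicate can be sketched as a tiny test harness. Everything here is illustrative, not the study's actual instrument: `query_chatbot` is a hypothetical wrapper a replicator would have to write around each vendor's API, and the single question/expected-fact pair stands in for the researchers' 10 questions in 10 languages.

```python
# Illustrative sketch of the study's shape; the question and the
# expected fact below are assumptions, not the researchers' materials.
EXPECTED = {"election_dates": "6-9 June 2024"}
QUESTIONS = {"election_dates": "When are the 2024 European Parliament elections?"}

def score_response(key, response):
    """Mark a response correct only if it contains the expected fact."""
    return EXPECTED[key] in response

def run_trial(query_chatbot, languages, questions=QUESTIONS):
    """Ask every question in every language and record pass/fail."""
    results = {}
    for lang in languages:
        for key, question in questions.items():
            answer = query_chatbot(question, lang)
            results[(lang, key)] = score_response(key, answer)
    return results
```

Because the models behind `query_chatbot` are updated continuously, re-running `run_trial` weeks later can produce different results from identical inputs, which is exactly the replication problem the researchers describe.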
The author paid a website developer to create a fully automated, AI-generated ‘pink-slime’ news site, programmed to create false political stories. The results were impressive -- and, in an election year, alarming. https://www.wsj.com/politics/how-i-built-an-ai-powered-self-running-propaganda-machine-for-105-e9888705?st=eryapn7ks9k6807&reflink=desktopwebshare_permal
On Fri. Mar. 29, I attended Norwescon, a large science fiction convention hosted in Seattle since 1978. Three items might be of interest to RISKS. First, the parking lot had a hotel-hired Knightscope self-driving robot that aims to deter crime and records video for later optional viewing by humans. I took video of the Knightscope and described it in detail. Second, I posted my notes from a panel made up of editors from top science fiction and fantasy magazines, some still with print incarnations; they discussed in depth the deluge of unsolicited AI-created fiction manuscripts that they're receiving through their open submissions portals. According to one panelist, the scammers are not the submitters, but separate individuals taking advantage of gullible people, telling them that AI fiction is the path to riches, and when it doesn't work, and only threatens to crash the submissions portal, then selling them expensive tutorials on how to AI better. Third, I asked how the fandom, steeped in stories of sci-fi can-do heroes, might overcome apathy and consumerism to do something about these sci-fi-style risks encroaching on the genre from the real world outside! https://douglaslucas.com/blog/2024/04/02/fading-fun-norwescon46-friday-future/
Hatsune Miku has already sold out venues for her concerts and she'll go to her biggest stage yet at Coachella. She looks like a teenage girl but she's not human. She's part of a growing number of digital characters, including Miquela and angelbaby, that are creating music for fans. [...] Her music -- mostly synthesizer-heavy dance pop -- is created from software developed by the Sapporo, Japan-based technology company Crypton Future Media. The technology lets people, including fans, type in lyrics and punch in a melody. The program generates a singing voice for the song. Crypton then licenses the songs from the fans for her to sing at concerts. Miku herself is an illustrated character, resembling a 16-year-old girl from an anime or manga. To “perform” onstage, Miku’s image is displayed on a giant screen as a video behind a live band. <https://www.youtube.com/watch?v=jhl5afLEKdo> https://www.latimes.com/entertainment-arts/business/story/2024-04-12/coachella-2024-hatsune-miku-zlu-hume-angelbaby
The Worst Part of a Wall Street Career May Be Coming to an End: Artificial intelligence tools can replace much of Wall Street’s entry-level white-collar work, raising tough questions about the future of finance. https://archive.is/4iLEA
The Verge https://www.theverge.com/24126502/humane-ai-pin-review Humane AI Pin review: not even close For $699 and $24 a month, this wearable computer promises to free you from your smartphone. There’s only one problem: it just doesn’t work. [Also Humane AI Hands-On: My Life So Far With a Wearable AI Pin: Like an AI-powered Star Trek communicator pinned to your shirt, the AI Pin is a wild concept, but it's too frustrating for everyday use. https://www.cnet.com/tech/mobile/humane-ai-hands-on-my-life-so-far-with-a-wearable-ai-pin/ and A Novel AI Innovation, but It Is Not Yet Very Useful: Brian X. Chen, *The New York Times* Business section front page 13 Apr 2024 PGN]
https://arstechnica.com/?p=2016342
https://techcrunch.com/2024/04/10/apple-warning-mercenary-spyware-attacks/
https://www.theverge.com/2024/4/11/24127278/apple-iphone-repair-used-parts BUT: Apple will allow reuse of iPhone parts for repairs, with a notable catch As a result, "select iPhone models" this fall will allow for reusing biometric sensors and other parts, and anyone ordering parts from Apple can skip sending a device's serial number, so long as the repair doesn't involve a new main logic board. https://arstechnica.com/?p=2016470
https://www.nytimes.com/2024/04/11/us/organ-transplants-houston.html
Perhaps avoid the use of dynamic scripting languages in what should be a secure context? Or: why does my firewall have Python? https://security.paloaltonetworks.com/CVE-2024-3400
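The item's rhetorical question can be illustrated generically (this is not the actual CVE-2024-3400 mechanism, just the underlying pattern): once a general-purpose interpreter is reachable from untrusted input, any injection flaw hands the attacker the whole language, whereas a fixed-function check exposes only what it was built to do.

```python
def handle_filter(user_expr, packet):
    # Dangerous pattern: evaluating user-supplied text with a full
    # interpreter turns an input-validation bug into code execution.
    return eval(user_expr, {"packet": packet})  # DON'T do this

def handle_filter_safe(field, value, packet):
    # Safer: a fixed, minimal comparison instead of a general language.
    return packet.get(field) == value
```

A caller of `handle_filter` who controls `user_expr` can run arbitrary Python in the appliance's process; a caller of `handle_filter_safe` can only compare fields. That asymmetry is the risk of shipping interpreters inside security products.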
Miles of Taillights on Interstates Last Longer Than the Celestial Phenomenon in the Sky Charlie Smart, short article at the bottom of a full page of six graphics along the path of totality: (1) top half-page showing remarkably frequent major traffic delays from West Texas to Canada; (2) during totality (no delays, Syracuse to Bangor); (3) one hour after totality (building up, Syracuse to Bangor; delays north of Burlington, VT); (4) three hours after (delays all around Burlington); (5) six hours after (still quite heavy going south from Burlington); (6) long traffic delays in the Midwest, at 9pm ET still heavy leaving bigger cities (e.g., St. Louis, Indianapolis, Columbus, Toledo). [The next major U.S. eclipse is not until 2045. But who will remember this situation in 2045? There could be many lessons for the expected exoduses from major disasters in large cities—spills, toxic train wrecks, although those would typically be local problems. Nevertheless, there are some risks lessons to be learned.]
Delta Air Lines said the eclipse flight had to change its plans because area traffic control would not allow a special maneuver. https://www.washingtonpost.com/travel/2024/04/12/delta-eclipse-flight-leaves-path-of-totality/
The Verge article said that, when the total solar eclipse increased demand for electricity in the United States, the shortfall was made up in part by gas. Might be interesting to note that, if this chain of dominoes were followed for anything on Earth, not just the gas "peaker plants," the energy source is the Sun. Biology, engineering, smartphones, whatever it is, ultimately it's the Sun that pays for everything. All of our rent-seeking economic systems are downstream of the big kahuna in the sky. What's the RISK? Red giant!
"Facial recognition should be forbidden from use by law enforcement unless and until it is able to be used on white collar criminals." But white-collar criminals do not hide their faces; it is the money they stole that should be identified.
Regarding Texas using AI to grade most of the mandatory STAAR tests taken by elementary, middle, and high schoolers: In the past decade and a half, I've more than once flunked the GRE writing test and the IELTS writing test, for admission into graduate school and Canada respectively. I'm pretty sure both were computer-scored, at least initially, but I wasn't enthusiastic enough about either destination to challenge the results much. If I ever flunk such a writing test again, I plan to re-take it and, instead of answering the question, type out my bona fides with evidentiary URLs—a summa writing degree; a CELTA cert for teaching ESL; numerous publications and media spots as a professional writer—along with my complaint that, for the life of me, I cannot seem to pass these computer-graded writing tests. Might make an interesting media stunt, if nothing else.
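The underlying worry about machine-graded writing can be illustrated with a deliberately naive scorer. Real essay-scoring systems are far more sophisticated than this sketch, but published critiques of them have centered on surface-feature proxies like these, which reward padding over substance:

```python
def naive_essay_score(text):
    """Toy scorer on a 0-5 scale using only surface features
    (length and vocabulary variety), the kind of proxy that
    automated graders have been criticized for relying on."""
    words = text.lower().split()
    if not words:
        return 0.0
    length_score = min(len(words) / 300, 1.0)   # reward longer essays
    variety_score = len(set(words)) / len(words)  # reward distinct words
    return round(5 * (0.6 * length_score + 0.4 * variety_score), 1)

# 300 distinct nonsense tokens outscore a short, apt human answer.
padding = " ".join(f"word{i}" for i in range(300))
assert naive_essay_score(padding) > naive_essay_score("a short but apt answer")
```

A scorer like this maxes out on long, lexically varied gibberish while giving a concise correct answer a middling grade, which is precisely the failure mode a frustrated human test-taker would suspect.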
Please report problems with the web pages to the maintainer