Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site -- however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether or not you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
[Today's *The New York Times* Science Times, A DNA Quandary, Elizabeth Anne Brown, 16 May 2023: Tiny bits of genetic material that humans leave everywhere can be collected and analyzed, raising ethical concerns about privacy and civil liberties. PGN]
https://www.nytimes.com/2023/05/15/science/environmental-dna-ethics-privacy.html

David Duffy, a wildlife geneticist at the University of Florida, just wanted a better way to track disease in sea turtles. Then he started finding human DNA everywhere he looked.
<https://www.whitney.ufl.edu/people/current-research-faculty/david-duffy-phd/>

Over the last decade, wildlife researchers have refined techniques for recovering environmental DNA, or eDNA -- trace amounts of genetic material that all living things leave behind. A powerful and inexpensive tool for ecologists, eDNA is all over -- floating in the air, or lingering in water, snow, honey and even your cup of tea. Researchers have used the method to detect invasive species before they take over, to track vulnerable or secretive wildlife populations, and even to rediscover species thought to be extinct.
<https://www.nytimes.com/2020/02/24/science/environmental-dna-sampling.html>
The eDNA technology is also used in wastewater surveillance systems to monitor Covid and other pathogens.
<https://www.nytimes.com/2023/05/11/health/covid-sewage-wastewater-data.html>

But all along, scientists using eDNA were quietly recovering gobs and gobs of human DNA. To them, it's pollution, a sort of human genomic by-catch muddying their data. But what if someone set out to collect human eDNA on purpose?

New DNA collecting techniques are like catnip for law enforcement officials, says Erin Murphy, a law professor at the New York University School of Law who specializes in the use of new technologies in the criminal legal system.
<https://its.law.nyu.edu/facultyprofiles/index.cfm?fuseaction=profile.overview&personid=31567>
The police have been quick to embrace unproven tools, like using DNA to create probability-based sketches of a suspect.
<https://www.nytimes.com/2015/02/24/science/building-face-and-a-case-on-dna.html>
That could pose dilemmas for the preservation of privacy and civil liberties, especially as technological advancement allows more information to be gathered from ever smaller eDNA samples.

Dr. Duffy and his colleagues used a readily available and affordable technology to see how much information they could glean from human DNA gathered from the environment in a variety of circumstances, such as from outdoor waterways and the air inside a building. The results of their research, published Monday in the journal Nature Ecology & Evolution
<https://www.nature.com/articles/s41559-023-02056-2>,
demonstrate that scientists can recover medical and ancestry information from minute fragments of human DNA lingering in the environment.

Forensic ethicists and legal scholars say the Florida team's findings increase the urgency for comprehensive genetic privacy regulations. For researchers, the findings also highlight an imbalance in rules around such techniques in the United States -- that it's easier for law enforcement officials to deploy a half-baked new technology than it is for scientific researchers to get approval for studies to confirm that the system even works.

Genetic trash to genetic treasure. [...]
https://dnyuz.com/2023/05/15/your-dna-can-now-be-pulled-from-thin-air-privacy-experts-are-worried/ [Captured in a crowd, someone else's DNA might easily be associated with you, mistakenly. It seems as if this has plenty of other risks as well. PGN]
Being able to accurately determine your location anywhere on the planet is a useful technological trick. But when tracking isn't done by you, but to you -- without your knowledge or consent -- it's a violation of your privacy. That's why at EFF we've long fought against dragnet surveillance, mobile device tracking, and warrantless GPS tracking.

Several weeks ago, an EFF supporter brought her car to a mechanic, and found a mysterious device wired into her car under her driver's seat. This supporter, whom we'll call Sarah (not her real name), sent us an email asking if we could determine whether this device was a GPS tracker, and if so, who might have installed it. Confronted with a mystery that could also help us learn more about tracking, our team got to work.
https://www.eff.org/deeplinks/2022/03/eff-investigation-mystery-gps-tracker-supporters-car
[Thanks to Arik Hesseldahl for reporting this item.] https://www.inquirer.com/news/philadelphia/philadelphia-inquirer-hack-cyber-disruption-20230514.html [Monty Solomon noted another take on this: Possible Cyberattack Disrupts The Philadelphia Inquirer https://www.nytimes.com/2023/05/15/business/media/philadelphia-inquirer-cyberattack.html PGN]
*The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).* Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a U.S. Senate committee on Tuesday about the possibilities -- and pitfalls -- of the new technology.
https://www.bbc.com/news/world-us-canada-65616866
How ChatGPT is like lossy compression, and a cautionary tale of a famous old bug in Xerox copying machines. “Think of ChatGPT as a blurry *JPEG* of all the text on the Web. It retains much of the information on the Web, in the same way that a *JPEG* retains much of the information of a higher-resolution image, but, if you're looking for an exact sequence of bits, you won't find it.'' More at (may be paywalled):
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
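The “blurry JPEG'' analogy is essentially lossy compression: the decompressed copy keeps the gist of the original but not its exact bits. Here is a minimal Python sketch of that property, assuming Pillow is installed; the synthetic gradient image and the quality setting are illustrative choices, not anything from the article:

  # Round-trip a small image through JPEG and show that the exact pixel
  # values are not preserved, even though the picture still looks the same.
  import io
  from PIL import Image

  # Build a 64x64 grayscale gradient as the "original" data.
  original = Image.new("L", (64, 64))
  original.putdata([(x + y) * 2 for y in range(64) for x in range(64)])

  # Encode it as a low-quality JPEG, then decode it again.
  buf = io.BytesIO()
  original.save(buf, format="JPEG", quality=20)
  buf.seek(0)
  decoded = Image.open(buf)

  # Count pixel values that changed in the round trip.
  changed = sum(1 for a, b in zip(original.getdata(), decoded.getdata()) if a != b)
  print(f"{changed} of {64 * 64} pixel values differ after lossy compression")

That is the article's point about exact-sequence lookups: a compressed statistical summary reproduces something close to the original, not the original itself.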
Earlier this year, a sales director in India for tech security firm Zscaler got a call that seemed to be from the company's chief executive. As his cellphone displayed founder Jay Chaudhry's picture, a familiar voice said “Hi, it's Jay. I need you to do something for me.'' A follow-up text over WhatsApp explained why. “I think I'm having poor network coverage as I am traveling at the moment. Is it okay to text here in the meantime?'' Then the caller asked for assistance moving money to a bank in Singapore.

Trying to help, the salesman went to his manager, who smelled a rat and turned the matter over to internal investigators. They determined that scammers had reconstituted Chaudhry's voice from clips of his public remarks in an attempt to steal from the company. Chaudhry recounted the incident last month on the sidelines of the annual RSA cybersecurity conference in San Francisco, where concerns about the revolution in artificial intelligence dominated the conversation.

Criminals have been early adopters, with Zscaler citing AI as a factor in the 47 percent surge in phishing attacks it saw last year. Crooks are automating more personalized texts and scripted voice recordings while dodging alarms by going through such unmonitored channels as encrypted WhatsApp messages on personal cellphones. Translations to the target language are getting better, and disinformation is harder to spot, security researchers said.

That is just the beginning, experts, executives and government officials fear, as attackers use artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal.

“It is going to help rewrite code,'' National Security Agency cybersecurity chief Rob Joyce warned the conference. “Adversaries who put in work now will outperform those who don't.'' The result will be more believable scams, smarter selection of insiders positioned to make mistakes, and growth in account takeovers and phishing as a service, where criminals hire specialists skilled at AI. [...]
https://www.msn.com/en-us/news/us/cybersecurity-faces-a-challenge-from-artificial-intelligence-s-rise/ar-AA1b2Fms
The theory of technological singularity predicts a point in time when humans lose control over their technological inventions and subsequent developments due to the rise of machine consciousness and, as a result, their superior intelligence. Reaching singularity stage, in short, constitutes artificial intelligence's (AI's) greatest threat to humanity. Unfortunately, AI singularity is already underway. AI will be effective not only when machines can do what humans do (replication), but when they can do it better and without human supervision (adaptation). Reinforcement learning (recognized data leading to predicted outcomes) and supervised learning algorithms (labeled data leading to predicted outcomes) have been important to the development of robotics, digital assistants and search engines. But the future of many industries and scientific exploration hinges more on the development of unsupervised learning algorithms (unlabeled data leading to improved outcomes), including autonomous vehicles, non-invasive medical diagnosis, assisted space construction, autonomous weapons design, facial-biometric recognition, remote industrial production and stock market prediction, [...] https://thehill.com/opinion/technology/4003870-entering-the-singularity-has-ai-reached-the-point-of-no-return/ [Massive insertion of URLs pruned for RISKS, and truncated the rest of the piece. PGN]
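The piece's distinction between learning paradigms is easy to see in code. Below is a minimal sketch, assuming NumPy and scikit-learn are installed; the synthetic two-group data, the model choices (logistic regression for the supervised case, k-means for the unsupervised case), and all parameter values are illustrative assumptions rather than anything from the article, and the reinforcement-learning case is omitted:

  # Supervised learning: labeled data leading to predicted outcomes.
  # Unsupervised learning: unlabeled data in which the model finds structure itself.
  import numpy as np
  from sklearn.cluster import KMeans
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  # Two synthetic clusters of 2-D points.
  X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(50, 2)),
                 rng.normal([3.0, 3.0], 0.5, size=(50, 2))])

  # Supervised: labels are provided, and the model learns to predict them.
  y = np.array([0] * 50 + [1] * 50)
  clf = LogisticRegression().fit(X, y)
  print("supervised prediction for [3, 3]:", clf.predict([[3.0, 3.0]])[0])

  # Unsupervised: no labels at all; the model discovers the two groups on its own.
  labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
  print("unsupervised cluster sizes:", np.bincount(labels))

The unsupervised case needs no human-provided labels, which is what the article presents as the key step toward systems that operate without human supervision.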
*Help! My Political Beliefs Were Altered by a Chatbot!* AI assistants may be able to change our views without our realizing it. Says one expert: “What's interesting here is the subtlety.''

When we ask ChatGPT or another bot to draft a memo, email, or presentation, we think these artificial-intelligence assistants are doing our bidding. A growing body of research shows that they also can change our thinking, without our knowing.

One of the latest studies in this vein, from researchers spread across the globe, found that when subjects were asked to use an AI to help them write an essay, that AI could nudge them to write an essay either for or against a particular view, depending on the bias of the algorithm. Performing this exercise also measurably influenced the subjects' opinions on the topic, after the exercise.
<https://scholar.google.com/citations?view_op=view_citation&hl=en&user=IeqjwlIAAAAJ&sortby=pubdate&citation_for_view=IeqjwlIAAAAJ:_axFR9aDTf0C>

“You may not even know that you are being influenced,'' says Mor Naaman, a professor in the information science department at Cornell University, and the senior author of the paper. He calls this phenomenon “latent persuasion.''

These studies raise an alarming prospect: As AI makes us more productive, it may also alter our opinions in subtle and unanticipated ways. This influence may be more akin to the way humans sway one another through collaboration and social norms than to the kind of mass-media and social-media influence people are familiar with.

Researchers who have uncovered this phenomenon believe that the best defense against this new form of psychological influence -- indeed, the only one, for now -- is making more people aware of it. In the long run, other defenses, such as regulators mandating transparency about how AI algorithms work, and what human biases they mimic, may be helpful.

All of this could lead to a future in which people choose which AIs they use -- at work and at home, in the office and in the education of their children -- based on which human values are expressed in the responses that AI gives. And some AIs may have different personalities -- including political persuasions. If you're composing an email to your colleagues at the environmental not-for-profit where you work, you might use something called, hypothetically, ProgressiveGPT. Someone else, drafting a missive for their conservative PAC on social media, might use, say, GOPGPT. Still others might mix and match traits and viewpoints in their chosen AIs, which could someday be personalized to convincingly mimic their writing style.

By extension, in the future, companies and other organizations might offer AIs that are purpose-built, from the ground up, for different tasks. Someone in sales might use an AI assistant tuned to be more persuasive -- call it SalesGPT. Someone in customer service might use one trained to be extra polite -- SupportGPT.

*How AIs can change our minds*

Looking at previous research adds nuance to the story of latent persuasion. One study from 2021 showed that the AI-powered automatic responses that Google's Gmail suggests -- called smart replies -- which tend to be quite positive, influence people to communicate more positively in general. A second study found that smart replies, which are used billions of times a day, can influence those who receive such replies to feel the sender is warmer and more cooperative. [...]
https://www.wsj.com/articles/chatgpt-bard-bing-ai-political-beliefs-151a0fe4?st=t8gtp9ijzk0xi41
As China and the United States jockey for tech primacy, wireless carriers in dozens of states are tearing out Chinese equipment. That has turned into a costly, difficult process. https://www.nytimes.com/2023/05/09/technology/cellular-china-us-zte-huawei.html
Vice Media Group, popular for websites such as Vice and Motherboard, filed for bankruptcy protection on Monday to engineer its sale to a group of lenders, capping years of financial difficulties and top-executive departures. Vice said that the lender consortium, which includes Fortress Investment Group, Soros Fund Management and Monroe Capital, will provide about $225 million U.S. in the form of a credit bid for substantially all of the company's assets and also assume significant liabilities at closing.
I don't see this as a bad thing. On the one hand you have the proliferation of livestreaming cameras impacting individual privacy (despite their disclaimers) and national security. On the other hand, what? There's no Constitutional right to watch and record everything going on, is there? It's not necessarily embarrassment that's driving the Navy's decision; more like this exposure has made them aware that military-related activities are being broadcast to potentially everybody out there, and therefore they needed to take action to prevent that.
Makes me wonder if they would look similarly into whatever John Oliver did to encourage his Last Week Tonight audience to show their support for net neutrality back in 2014 and 2017: https://www.youtube.com/watch?v=fpbOEoRrHyU&t=659s https://www.youtube.com/watch?v=92vuuZt7wak&t=1126s I suppose this is not nearly on the same level as what those companies did, however.
Abstract: With growing awareness of the dangers of artificial intelligence, there have been increased efforts to foster more ethical practices. This essay explores the potential risks of so-called "Ethical AI" in health and medicine through investigating a system designed to reduce racial disparities in pain medicine. Ultimately, the essay reflects on how much of Ethical AI preserves the broader technology sector's bias toward action and recommends a more precautionary approach to technological intervention in systemic health inequities. This essay is adapted from Chapter 6 of The Doctor and the Algorithm: Promise, Peril, and the Future of Health AI (Oxford University Press, 2022). [You will have to dig for this one, which I managed to find here: https://cse.umn.edu/cbi/interfaces The URL received was inoperable. PGN]
Please report problems with the web pages to the maintainer