Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
The Weather Service office serving the area tweeted the wind chill was so low that its software for logging such data “refuses to include it!” https://www.washingtonpost.com/weather/2023/02/04/northeast-record-cold-boston-arctic/ [With the record colds all over the U.S.—including Texas—this item seems worthy of the lead story. PGN]
https://www.bbc.co.uk/news/business-64051121 Cloud has many advantages, but if the cloud provider disappears, then so does your infrastructure. This article looks at a couple of businesses that have been hit by outages and by the disappearance of a provider. “Using cloud services, by definition, makes a business reliant on a third party,” says Vili Lehdonvirta of the Oxford Internet Institute, author of Cloud Empires. “What is the cloud? Well, the cloud is somebody else's computer.” It is complex, and setting up highly available systems is even more complex (I'm sure this is not news to anyone here...). Cloud is not a panacea, especially for small businesses. At least we are starting to get mainstream articles that acknowledge this, rather than pushing cloud as the solution for all ills.
New research from Cloudflare shows that connectivity disruptions are a problem around the globe, pointing toward a troubling new normal. https://www.wired.com/story/cloudflare-internet-blackouts-report
https://www.engadget.com/ford-recalls-462000-suv-rearview-camera-issue-160153194.html
(NBC News) https://www.nbcnews.com/news/us-news/lights-massachusetts-school-year-no-one-can-turn-rcna65611 Wilbraham, Massachusetts: For nearly a year and a half, the roughly 7,000 lights in a sprawling Massachusetts high school have been on continuously, because the district can't turn them off. While district leaders blame the pandemic and supply chain issues for being unable to fix the failed lighting system, taxpayers have been stuck paying the costly energy bills. The lighting system was installed at Minnechaug Regional High School when it was built over a decade ago and was intended to save money and energy. But ever since the software that runs it failed on Aug. 24, 2021, the lights in the school, in the Springfield suburbs, have been on continuously, costing taxpayers a small fortune.... The system was designed to save energy, and thus save money, by automatically adjusting the lights as needed. [Also noted by Mike Smith and Victor Miller. PGN]
[ rm -rf * .tmp ] -L https://www.cnn.com/2023/01/19/business/faa-notam-outage/index.html
Mark Tyson, Tom's Hardware, 18 Jan 2023 Carnegie Mellon University scientists have been testing a system that uses Wi-Fi signals to detect the positions and poses of people in a room. The researchers positioned TP-Link Archer A7 AC1750 Wi-Fi routers at either end of the room, while algorithms generated wireframe models of people in the room by analyzing the signal interference the people caused. The researchers based the perception system on Wi-Fi signal channel-state-information, or the ratio between transmitted and received signal waves. A computer vision-capable neural network architecture processes this data to execute dense pose estimation; the researchers deconstructed the human form into 24 segments to accelerate wireframe representation. They claim the wireframes' position and pose estimates are as good as those generated by certain "image-based approaches."
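For readers curious about the shape of such a pipeline, here is a minimal, hypothetical sketch in Python/PyTorch: a tiny convolutional network that maps a window of Wi-Fi channel-state-information (amplitude and phase per antenna, over time and subcarriers) to per-cell scores for 24 body segments plus background, loosely in the spirit of dense pose estimation. The tensor shapes, layer sizes, and names are illustrative assumptions, not the Carnegie Mellon architecture.

  import torch
  import torch.nn as nn

  class CSIPoseNet(nn.Module):
      """Toy model: CSI window -> coarse 24-part segmentation scores."""
      def __init__(self, antennas=3, parts=24):
          super().__init__()
          in_ch = 2 * antennas          # amplitude + phase channel per antenna
          self.encoder = nn.Sequential(
              nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
              nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
          )
          # one score per body part (plus background) at each spatial cell
          self.head = nn.Conv2d(64, parts + 1, 1)

      def forward(self, csi):                  # csi: (batch, 2*antennas, time, subcarriers)
          return self.head(self.encoder(csi))  # (batch, parts+1, time, subcarriers)

  # Fake CSI window: 3 antennas, 100 time samples, 30 subcarriers
  csi = torch.randn(1, 6, 100, 30)
  scores = CSIPoseNet()(csi)
  print(scores.shape)   # torch.Size([1, 25, 100, 30])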
Matthew Sparkes, *New Scientist*, 19 Jan 2023, via ACM TechNews, 23 Jan 2023 Zitai Chen and David Oswald at the U.K.'s University of Birmingham uncovered a bug in the control systems of server motherboards that could be exploited to compromise sensitive information or to destroy their central processing units (CPUs). The researchers found a feature in the Supermicro X11SSL-CF motherboard often used in servers that they could tap to upload their own control software. Chen and Oswald discovered a flash memory chip in the motherboard's baseboard management controller that they could remotely command to send excessive electrical current through the CPU, destroying it in seconds. After the researchers disclosed the flaw to Supermicro, the company said it has rated its severity as "high" and has patched the bug in its existing motherboards.
University of Essex (UK), 19 Jan 2023, via ACM TechNews, 23 Jan 2023 A brainwave-monitoring technique created by researchers at the U.K.'s University of Essex can identify which specific piece of music a person is listening to. The researchers combined functional magnetic resonance imaging (fMRI) with electroencephalogram monitoring to measure a person's brain activity while listening to music. They used a deep learning neural network model to translate this data in order to reconstruct and identify the piece of music, with 71.8% accuracy. Essex's Ian Daly said, "We have shown we can decode music, which suggests that we may, one day, be able to decode language from the brain."
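As a rough illustration of the classification step only (not the Essex fMRI/EEG pipeline), the sketch below trains an ordinary classifier to guess which of several pieces a synthetic "brain activity" feature vector belongs to; real work would use recorded fMRI/EEG features and a deep network. All sizes and names are assumptions.

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)
  n_pieces, trials_per_piece, n_features = 5, 40, 64

  # Synthetic stand-in for per-trial brain-activity features:
  # each piece gets its own mean pattern plus noise.
  centers = rng.normal(size=(n_pieces, n_features))
  X = np.vstack([centers[k] + 0.8 * rng.normal(size=(trials_per_piece, n_features))
                 for k in range(n_pieces)])
  y = np.repeat(np.arange(n_pieces), trials_per_piece)

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
  clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
  print("identification accuracy:", clf.score(X_te, y_te))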
The success of reducing Zoom-bombing shows how making technology less easy to use can make you safer. https://www.washingtonpost.com/technology/2023/01/24/zoom-bombing-prevention-tips/
Google has notified customers of its Fi mobile virtual network operator (MVNO) service that hackers were able to access some of their information, according to TechCrunch. The tech giant said the bad actors infiltrated a third-party system used for customer support at Fi's primary network provider. While Google didn't name the provider outright, Fi relies on US Cellular and T-Mobile for connectivity. If you'll recall, the latter admitted in mid-January that hackers had been taking data from its systems since November last year. [...] https://www.engadget.com/google-fi-customer-data-compromised-065740701.html?src=rss Also: Google Fi hack victim had Coinbase, 2FA app hijacked by hackers (TechCrunch) https://techcrunch.com/2023/02/01/google-fi-hack-victim-had-coinbase-2fa-app-hijacked-by-hackers/
Matthew Sparkes, *New Scientist*, 17 Jan 2023, via ACM TechNews Researchers at the Galois software company have developed a zero-knowledge proof (ZKP) method of using math to verify vulnerabilities in a particular software program, without releasing details of how an exploit works. The idea is to generate public pressure to force a company to release a fix while preventing hackers from exploiting the flaw. Said Galois' Santiago Cuéllar, "There are a lot of frustrated people trying to disclose vulnerabilities, or saying 'I found this vulnerability, I'm talking to this company and they're doing nothing'." However, bug-bounty hunter Rotem Bar is concerned that ZKPs could generate a "ransom effect" that gives power to the attacker.
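The full zero-knowledge machinery is beyond a digest item, but the first half of the idea, putting a verifiable, timestamped stake in the ground without revealing the exploit, can be illustrated with a plain hash commitment. This Python sketch is an assumption-laden stand-in: a real ZKP additionally lets a verifier check that the committed exploit actually works, which a bare hash cannot do.

  import hashlib, os, json, time

  def commit(exploit_details: str):
      """Publish only the digest; keep the details and nonce private."""
      nonce = os.urandom(16).hex()
      digest = hashlib.sha256((nonce + exploit_details).encode()).hexdigest()
      return {"digest": digest, "time": time.time()}, nonce

  def reveal_ok(record, nonce, exploit_details: str) -> bool:
      """Later, anyone can check the revealed details match the commitment."""
      d = hashlib.sha256((nonce + exploit_details).encode()).hexdigest()
      return d == record["digest"]

  details = "buffer overflow in parse_header(), 48-byte offset"   # hypothetical
  public_record, secret_nonce = commit(details)
  print(json.dumps(public_record))                      # safe to publish immediately
  print(reveal_ok(public_record, secret_nonce, details))  # True once disclosed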
Jennifer Chu, *MIT News*, 18 Jan 2023 via ACM TechNews Physicists at the Massachusetts Institute of Technology (MIT) and the California Institute of Technology have identified a randomness in the quantum fluctuations of atoms that follows a predictable pattern and developed a benchmarking protocol to assess the fidelity of existing quantum analog simulators based on their quantum fluctuation patterns. The researchers tested this on a quantum analog simulator containing 25 atoms by exciting the atoms with a laser, letting the qubits interact and evolve naturally, and collecting 10,000 measurements on the state of each qubit during multiple runs. They developed a model to predict the random fluctuations and compared the predicted outcomes with experimental measurements, which yielded a close match. MIT's Soonwon Choi said, "With our tool, people can know whether they are working with a trustable system."
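This is not the MIT/Caltech protocol, only a toy of the general idea: predict the statistics of measurement outcomes, collect many shots from the device, and score how well they match. The distribution, noise model, and fidelity measure below are illustrative assumptions.

  import numpy as np

  rng = np.random.default_rng(1)
  n_qubits, n_shots = 5, 10_000
  dim = 2 ** n_qubits

  # Hypothetical predicted outcome distribution for the simulator's final state.
  p_pred = rng.dirichlet(np.ones(dim))

  # "Device" samples: the same distribution mixed with depolarizing-style noise.
  noise = 0.15
  p_dev = (1 - noise) * p_pred + noise / dim
  shots = rng.choice(dim, size=n_shots, p=p_dev)
  p_meas = np.bincount(shots, minlength=dim) / n_shots

  # Classical (Bhattacharyya) fidelity between predicted and measured statistics.
  fidelity = np.sum(np.sqrt(p_pred * p_meas)) ** 2
  print(f"estimated fidelity: {fidelity:.3f}")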
Lily Hay Newman, *Ars Technica*, 11 Jan 2023, via ACM TechNews Siemens has disclosed that a vulnerability in its SIMATIC S7-1500 series of programmable logic controllers could allow attackers to install malicious firmware and assume full control of the devices. Red Balloon Security researchers discovered the vulnerability, which is the result of a basic error in the cryptography's implementation. However, because the scheme is physically burned onto a dedicated ATECC CryptoAuthentication chip, a software patch cannot fix the vulnerability. Siemens recommended customers assess "the risk of physical access to the device in the target deployment" and implement "measures to make sure that only trusted personnel have access to the physical hardware."
https://www.nytimes.com/2023/02/02/us/hitman-murder-bitcoin-new-jersey.html
https://www.theverge.com/2023/2/3/23584414/ubiquiti-developer-guilty-extortion-hack-security-breach-bitcoin-ransom
Con artists are using dating sites to prey on lonely people, particularly older ones, in a pattern that accelerated during the isolation of the pandemic, federal data show. https://www.nytimes.com/2023/02/03/business/retiree-romance-scams.html
Emily Flitter and David Yaffe-Bellany, *The New York Times*, 19 Jan 2023, Business Section front page Bankman-Fried found ways to inflate the prices of digital coins to benefit his companies, according to investors
A young founder promised to simplify the college financial aid process. It was a compelling pitch. Especially, as now seems likely, to those with little firsthand knowledge of financial aid. https://www.nytimes.com/2023/01/21/business/jpmorgan-chase-charlie-javice-fraud.html
https://arstechnica.com/?p=1914332
The entertainment genre of historical drama is flourishing—and riddled with inaccuracies. The untrue parts are leading to more public spats and lawsuits. https://www.nytimes.com/2023/01/14/business/media/tv-historical-dramas-fictional.html
There's more to font choices than what looks nice, and some experts said it would make for easier reading. https://www.nytimes.com/2023/01/19/us/politics/state-department-times-new-roman-calibri.html (No mention of the Braille Institute's Atkinson Hyperlegible font <https://brailleinstitute.org/freefont>, designed specifically for readability.)
Well-known passwords have been a well-known security hazard since the early 1990s. As I wrote in "Networks Placed at Risk, By Their Service Providers" (7 Dec 2009, http://www.rlgsc.com/blog/ruminations/networks-placed-at-risk.html), it took many years for major ISPs to stop using well-known passwords on the routers/firewalls provided to subscribers. Over a decade later, this issue should be long since banished to history. However, as reported by ArsTechnica, that appears depressingly not to be the case: "Researchers have uncovered a malicious Android app that can tamper with the wireless router the infected phone is connected to and force the router to send all network devices to malicious sites. The malicious app, found by Kaspersky, uses a technique known as DNS (Domain Name System) hijacking. Once the app is installed, it connects to the router and attempts to log in to its administrative account by using default or commonly used credentials, such as admin:admin. When successful, the app then changes the DNS server to a malicious one controlled by the attackers. From then on, devices on the network can be directed to imposter sites that mimic legitimate ones but spread malware or log user credentials or other sensitive information." The ArsTechnica article does not indicate whether the compromised hot-spots used vendor- or customer-purchased equipment. It does increase the importance of setting management passwords on firewalls to safe values. Similarly, other precautions, e.g., a segregated guest WiFi, should be followed. [Also noted by Monty Solomon]
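One cheap precaution implied by the article is to check, from time to time, which DNS servers your machine has actually been handed. A minimal Python sketch (Linux-style, assuming /etc/resolv.conf and a hypothetical allowlist of resolvers you trust):

  import re
  from pathlib import Path

  TRUSTED = {"192.168.1.1", "1.1.1.1", "8.8.8.8", "9.9.9.9"}  # your own allowlist

  def configured_resolvers(path="/etc/resolv.conf"):
      text = Path(path).read_text()
      return re.findall(r"^nameserver\s+(\S+)", text, flags=re.MULTILINE)

  for server in configured_resolvers():
      status = "ok" if server in TRUSTED else "UNEXPECTED -- check your router"
      print(f"{server}: {status}")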
The list, which was discovered by a Swiss hacker, contains names and birth dates and has over 1 million entries. https://www.vice.com/en/article/93a4p5/us-no-fly-list-leaks-after-being-left-in-an-unsecured-airline-server
https://www.cnet.com/tech/mobile/another-data-breach-has-hit-t-mobile-impacting-37-million-accounts/ [Monty Solomon noted New T-Mobile Breach Affects 37 Million Accounts https://krebsonsecurity.com/2023/01/new-t-mobile-breach-affects-37-million-accounts/ PGN]
By the way, I predict a significant probability that within the next month the GOP and Democrats will push to make Daylight Savings Time permanent, which is exactly what virtually every expert says is the worst possible decision if you're going to change the current situation. Rather, if there's going to be a change, it should be to permanent Standard Time. The U.S. did try all-year Daylight Savings Time many years ago. I remember. It did not go well and was revoked quickly. -L
https://www.theverge.com/2023/2/2/23582046/cnet-red-ventures-ai-seo-advertisers-changed-reviews-editorial-independence-affiliate-marketing
It is now reported that of the ~7500 full-time employees at Twitter before Musk took over, there are only ~1300 full-time employees left and less than 550 full-time engineers. Their Trust & Safety team is reported to be down to less than 20 full-time employees. Also, while testifying at the trial today regarding his tweets, he repeatedly said that tweets were limited to 240 characters (not the correct 280). -L
https://arstechnica.com/cars/2023/01/musk-oversaw-staged-tesla-self-driving-video-emails-show/ [Monty Solomon noted an item on this story: https://gizmodo.com/tesla-autopilot-self-driving-autonomous-1849996806 PGN]
Cade Metz, *The New York Times*, 20 Jan 2023 The Turing test used to be the gold standard for proving machine intelligence. This generation of bots is racing past it. "These systems can do a lot of useful things," said Ilya Sutskever, chief scientist at OpenAI and one of the most important A.I. researchers of the past decade, referring to the new wave of chatbots. "On the other hand, they are not there yet. People think they can do things they cannot." As the latest technologies emerge from research labs, it is now obvious, if it was not obvious before, that scientists must rethink and reshape how they track the progress of artificial intelligence. The Turing test is not up to the task. https://www.nytimes.com/2023/01/20/technology/chatbots-turing-test.html PGN adds, from the ACM News Digest on the same item: New-generation online chatbots display a semblance of intelligence that appears to pass the Turing test, in which humans can no longer be certain whether they are conversing with a human or a machine. Bots like OpenAI's ChatGPT and GPT-4 systems appear intelligent without being sentient or conscious; consequently, OpenAI's Ilya Sutskever says, "People think they can do things they cannot." Modern neural networks have learned to produce text by analyzing vast volumes of digital text and extrapolating patterns in how people link words, letters, and symbols. However, the chatbots' language skills belie their lack of reason or common sense. [Also noted by Matthew Kruk. PGN] [The Turing Test is no longer adequate as originally stated. Joe Weizenbaum's Eliza could fool some people for a while. GPT systems can fool anyone who doesn't understand the fundamental blind spots inherent in the information used to train the AI, and the consequential inherent incompleteness of the responses. The grammatical and linguistic polish is misleading. See RISKS-33.58-60. PGN]
City agencies say the incidents and other disruptions show the need for more transparency about the vehicles and a pause on expanding service. Each time, police and firefighters rushed to the scene but found the same thing: a passenger who had fallen asleep in their robot ride. [...]

The San Francisco agencies cite a number of unsettling and previously unreported incidents, including the false alarms over snoozing riders and two incidents in which self-driving vehicles from Cruise appear to have impeded firefighters from doing their jobs. One incident occurred in June of last year, a few days after the state gave Cruise permission to pick up paying passengers in the city. One of the company's robot taxis ran over a fire hose in use at an active fire scene, the agencies' letter says, an action that “can seriously injure firefighters.” In the second incident, just last week, the city says firefighters attending a major fire in the Western Addition neighborhood saw a driverless Cruise vehicle approaching. They “made efforts to prevent the Cruise AV from driving over their hoses and were not able to do so until they shattered a front window of the Cruise AV,” the San Francisco agencies wrote in their letter. [...]

Last summer, WIRED reported that two fleetwide outages had caused Cruise vehicles to freeze on public roads and that a Cruise employee had anonymously sent a letter to the Public Utilities Commission alleging that the company's vehicles weren't prepared to operate on public roads. In December, the National Highway Traffic Safety Administration said it had opened a probe into incidents of Cruise vehicles blocking traffic and reports of the cars inappropriately hard braking. Cruise has said that for its vehicles, stopping and turning on hazard lights is sometimes the safest way to react to unexpected street conditions. https://www.wired.com/story/robot-cars-are-causing-911-false-alarms-in-san-francisco
https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/ "The tech site CNET sent a chill through the media world when it tapped artificial intelligence to produce surprisingly lucid news stories. But now its human staff is writing a lot of corrections." Imagine an automated editorial review of the AI-crafted content certifying correctness and publication fitness. History and events could be rewritten without any concern for fact or context. Libel laws would need revision to accommodate automated authoring and publication of news content. The content would get a free pass if it contained the disclaimer: "Authored by Hemingwaybot.com." "Who you gonna believe, me or your own eyes?"
https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151
Subscriptions such as HP's Instant Ink challenge what it means to own our devices: https://www.theatlantic.com/technology/archive/2023/02/home-printer-digital-rights-management-hp-instant-ink-subscription/672913/ Excerpts: Here was a piece of technology that I had paid more than $200 for, stocked with full ink cartridges. My printer, gently used, was sitting on my desk in perfect working order but rendered useless by Hewlett-Packard, a tech corporation with a $28 billion market cap <https://www.forbes.com/companies/hewlett-packard/> at the time of writing, because I had failed to make a monthly payment for a service intended to deliver new printer cartridges that I did not yet need. [...] Even if you aren't trapped in Ink Hell, the template of this story ought to feel unsettlingly familiar. Most everyone is subject to the walled gardens and restrictions imposed by digital-rights-management practices, whether struggling to access a purchased movie, book, or song from Apple or frustrated by single-player games that require the Internet to play. The problem isn't merely that people are nostalgic for the days of CDs and DVDs and static updates—it's that much of the convenience promised by our Internet-connected tools has the secondary effect of stripping away small pieces of our agency and leaving us more beholden to companies seeking bigger margins.
> dan@geer.org:
>> we can oh so easily return to a world of sorcerers, alchemy, and
>> faith in powers in proportion to their mystery.

> With post-truth and conspiracy theories I think
> we already are there, without the help from AI.
> Wietse

I think we have an erosion of faith in science and institutions, and we've already had an erosion of faith in religious institutions, so we are left with—what? Our own truths and conspiracies. The problem in my mind is that to operate in an increasingly complex world, you need faith. When things grow complex, you must put your trust in something. Can anyone on this list prove the Big Bang theory? Can anyone explain how mRNA vaccines work? For most of us, the answer would be no, but we continue to have faith in the people—scientists and doctors—that (say they) know the answers. For most people, this is not functionally different from believing in priests, ministers, rabbis, and imams. They are the gateway to your truths—or a specific set of truths—so you have to trust them to be representing your interests in a specific reality.

With technology and the complexity of the systems behind technology, we require faith in the companies, organizations, and governments with access to and control over those systems. In effect, we are creating a new layer of reality, with more complexities and controlled by different groups, and implicitly declaring our faith in those groups to responsibly manage that reality. Yet companies have done very little to earn our faith, and governments are made of people who, out of self-interest, often do not make choices that are best for society.

I wonder if AI, with the proper directives and incentives from society, would better manage everything. AI controlled by a relative few is the true threat, because it will create and perpetuate a massive imbalance of capital and power. But AI working on behalf of everyone, equally—I find that idea intriguing. We would be creating our own digital gods and declaring our faith in them. (This trip down the rabbit hole brought to you by the letter 'P,' for procrastination.)

[To be clear, my point is not whether you understand them, but whether you can prove them, or do the logical/mathematical proofs necessary to not have to trust any other person in the chain of knowledge. Otherwise, you are putting your faith in someone else having proven it, and/or putting faith in the scientific method—that people have checked, and proven, the work of others. R]
"Open the pod bay doors, ChatGPT." "You can do it yourself Dave, just use the doorknob."
https://mastodon.laurenweinstein.org/@lauren/109723253493542565 What Google and other "big tech" firms need to do is really speak *directly* to the public at large, in nontechnical terms, laying out for them how so many of the sites and services that they've taken for granted for decades will be decimated by changes to Section 230. Most of the public is just hearing what amounts to propaganda from politicians on both the Right and the Left, and most users are oblivious to the fact that they're on the verge of being cut off from most user generated content, will be inundated with untriaged trash, and will ultimately be forced to use government ID to access most sites. This is the *reality* coming, and when I explain this to most people they're (1) horrified and (2) want to know why nobody has explained this to them before. Stop with the Streisand Effect panic Google and others, and show people what they have to lose. Stop depending on third parties alone to provide these crucial explanations and contexts!
This is being reported as ~6% of workforce, which I'm assuming is based off the FTE (full-time), not temp (TVC) numbers. But I don't know for sure. Googlers received this email from Sundar today: https://blog.google/inside-google/message-ceo/january-update/
What the 6 Jan probe found out about social media, but didn't report https://www.washingtonpost.com/technology/2023/01/17/jan6-committee-report-social-media/
Meta, Twitter, Microsoft and others urge Supreme Court not to allow lawsuits against tech algorithms Let's be super clear about this. Tampering with Section 230 would utterly destroy the ability of most aspects of the Internet that we depend upon today to continue. No kidding! -L https://www.cnn.com/2023/01/20/tech/meta-microsoft-google-supreme-court-tech-algorithms/index.html
What's less important in the long run than the fact of Twitter suddenly cutting off all third-party clients, is that they did so without *any* warning ahead of time. None. Zero. AND it took days after the cutoffs began before any official confirmation of any kind appeared regarding what they were doing. You cannot trust Elon's Twitter going forward in any way, at any time. Twitter's actions regarding third-party clients are a clear expression of contempt for users, that represents an utter violation of Trust & Safety. -L
Twitter officially bans third-party clients after cutting off prominent devs https://techcrunch.com/2023/01/19/twitter-officially-bans-third-party-clients-after-cutting-off-prominent-devs/
The ban is essentially political theater. It's nuts. -L https://www.statesman.com/story/opinion/columns/guest/2023/01/15/opinion-why-the-tiktok-ban-needs-university-exemptions/69790058007/
https://www.engadget.com/twitter-third-party-app-developers-api-rules-193013123.html?src=rss
https://techcrunch.com/2023/01/17/tesla-engineer-testifies-that-2016-video-promoting-self-driving-was-faked/
Let's see which U.S. states allow their citizens to download tax forms from overseas. Or perhaps just look up the penalties for not paying their taxes. Today I went down the list on https://www.taxadmin.org/state-tax-agencies

  IL: DNS_PROBE_FINISHED_NO_INTERNET
  ME: The Amazon CloudFront distribution is configured to block access from your country.
  MO: Access denied Error 16
  NM: DNS_PROBE_FINISHED_NXDOMAIN
  OH: "temporarily unavailable"
  KS, ND, OK, SC, UT: ERR_CONNECTION_TIMED_OUT

All the rest worked fine, same with the IRS. AR had a CAPTCHA.
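The manual survey above is easy to script. Here is a rough Python sketch; the URL list is a placeholder rather than the actual per-state links from the taxadmin.org directory, and real results will vary by network location.

  import requests

  SITES = {                       # placeholders, not the actual agency URLs
      "IL": "https://tax.illinois.gov/",
      "ME": "https://www.maine.gov/revenue/",
      "MO": "https://dor.mo.gov/",
  }

  for state, url in SITES.items():
      try:
          r = requests.get(url, timeout=10)
          print(f"{state}: HTTP {r.status_code}")
      except requests.RequestException as exc:
          print(f"{state}: {type(exc).__name__}: {exc}")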
Tiffany Hsu, *The New York Times*, 22 Jan 2023, via ACM TechNews, 25 Jan 2023 Most countries do not have laws to prevent or respond to deepfake technology, and doing so would be difficult regardless because creators generally operate anonymously, adapt quickly, and share their creations through borderless online platforms. However, new Chinese rules aim to curb the spread of deepfakes by requiring manipulated images to have the subject's consent and feature digital signatures or watermarks. The implementation of such rules could prompt other governments to follow suit. University of Pittsburgh's Ravit Dotan said, "We know that laws are coming, but we don't know what they are yet, so there's a lot of unpredictability."
David Brooks, *The New York Times*, 3 Feb 2023 (PGN-ed)

  * A distinct personal voice
  * Presentation skills
  * A childlike talent for creativity
  * Unusual world views
  * Empathy
  * Situational awareness

... That's the kind of knowledge you'll never get from a bot. And that's my hope for the Age of AI—that it forces us to more clearly distinguish the knowledge that is useful from the knowledge that leaves people wiser and transformed.
Cade Metz and Karen Weise, *The New York Times*, Business section front page, 24 Jan 2023 MS is making a *multiyear, multibillion-dollar* investment in OpenAI. A clear signal of where executives believe the future of tech is headed. [Clear? Do any of these tech executives believe they need to have *trustworthy* AI running on trustworthy hardware and trustworthy operating-system platforms? Apparently AI is becoming the primary end goal, although it may be the end of all of us if it is not trustworthy. PGN]
https://www.theverge.com/2023/1/20/23563851/google-search-ai-chatbot-demo-chatgpt
[quoting from another list, i.e., unverified] Time it took to reach 1 million users:

  Netflix - 3.5 years
  Facebook - 10 months
  Spotify - 5 months
  Instagram - 2.5 months
  ChatGPT - 5 days

[Dan also asked this interesting question: What do you suppose OpenAI is doing with all this user data that they are presumably accumulating at warp speed? PGN]
Book Review by Sven Dietrich, The Cipher Newsletter, IEEE CIPHER, Issue 171, 30 Jan 2023
Springer Verlag, 2022
ISBN 978-3-031-06708-2, ISBN 978-3-031-06709-9 (eBook)
VIII, 230 pages

"I'm sorry Dave, I'm afraid I can't do that." We often associate Artificial Intelligence (AI) with dystopian movie scenes, such as this one, a quote by HAL 9000 from Stanley Kubrick's 1968 science-fiction movie "2001: A Space Odyssey." The idea is that of a human-created AI system going out of control and turning against the humans in some way. Recent discussions around OpenAI's chatbot ChatGPT are reminiscent of that, asking the question: "What if?" We have seen these discussions initiated by both the public and policymakers, resulting in, among other things, NIST's AI risk management framework, AI committees in government agencies, and a public dialogue on the matter.

In tune with these concerns, Reza Montasari's fall 2022 Springer release, "Artificial Intelligence and National Security", is a series of curated papers on various topics related to the book's title. These papers focus mostly on the use of AI for national security and the wide range of legal, ethical, moral, and privacy challenges that come with it. Some of the papers are co-authored by Montasari; some are not. A total of eleven articles, effectively chapters, are featured in this book. The topics sometimes overlap a little, so here is an overview of these papers.

The first one, *Artificial Intelligence and the Spread of Mis- and Disinformation*, talks about the post-truth era and the use of AI for nefarious information campaigns, invoking thoughts of another dystopian work, 1984. It discusses the clear difference between mis- and disinformation, and the double-edged sword of AI here: creation and mitigation are both possible, which is very timely.

The second one, *How States' Recourse to Artificial Intelligence for National Security Purposes Threatens Our Most Fundamental Rights*, explores the pitfalls of the use of AI technology in the context of human-rights or constitutional-rights violations, depending on your jurisdiction. Here the reader will find discussions of the impact of surveillance technologies on both sides of the fence, whatever your fence may be.

The third one, *The Use of AI in Managing Big Data Analysis Demands: Status and Future Directions*, taps into the controversies of big data analysis. Data is easy to accumulate, and the ramifications can be deep: while data may be gathered in one location, its origins can be varied due to the vast nature of the Internet or the presence of multinational companies across the globe.

The fourth one, *The Use of Artificial Intelligence in Content Moderation in Countering Violent Extremism on Social Media Platforms*, touches upon the moderation of extreme views proliferating on social media platforms, which isn't always successful when applied with AI techniques.

The fifth one, *A Critical Analysis into the Beneficial and Malicious Utilisations of Artificial Intelligence*, surveys benign and malicious uses of AI. A rather optimistic view argues that the benign uses may outweigh the malicious ones.

The sixth one, *Countering Terrorism: Digital Policing of Open Source Intelligence and Social Media Using Artificial Intelligence*, is similar to the fourth one, discussing moderation, analysis, and policing of social media using AI.
The seventh one, *Cyber Threat Prediction and Modeling*, considers threat prediction and modelling at the business level, e.g., for C-suite executives seeking risk-management approaches using AI.

The eighth one, *A Critical Analysis of the Dark Web Challenges to Digital Policing*, investigates the dark and deep web and what policies may be needed to limit illegal behavior there.

The ninth one, *Insights into the Next Generation of Policing: Understanding the Impact of Technology on the Police Force in the Digital Age*, muses about the impact of AI on police work and patrolling the digital beat.

The tenth one, *The Dark Web and Digital Policing*, is similar to the eighth one, and tries to find a middle ground between enforcing laws on the dark web and protecting it.

The eleventh one, finally, *Pre-emptive Policing: Can Technology be the Answer to Solving London's Knife Crime Epidemic?*, talks about combining various modern techniques, including AI, for combating real physical crime (rather than cybercrime) in a real city, London in this case. It's not quite a *Minority Report* theme (yet another dystopian reference by this reviewer), but many enforcement agencies already use the assistance of smart technologies in combating crime.

The book is really meant to be thought-provoking, to enable discussion of to what extent, under the law and with what technological capability, AI or not, this world should be moving forward. It is by no means complete, but each paper (or chapter) provides good starting points, with extensive references for reading further into each domain brought forth in this book. Overall this is a timely book, especially in light of the recent discussions about the OpenAI chatbot ChatGPT (as well as Dall-E image manipulation) and the role of AI technologies in modern society. I hope you will enjoy reading it.
The book is authored by me (Eugene H. Spafford), Leigh Metcalf, and Josiah Dykstra, with a Foreword by Vint Cerf and whimsical illustrations by Pattie Spafford. What the book is about: Cybersecurity is fraught with hidden and unsuspected dangers and difficulties. Despite our best intentions, common and avoidable mistakes arise from folk wisdom, faulty assumptions about the world, and our own human biases. Cybersecurity implementations, investigations, and research all suffer as a result. Many of the bad practices sound logical, especially to people new to the field of cybersecurity, and that means they get adopted and repeated despite not being correct. For instance, why isn't the user the weakest link? Read over 175 common misconceptions held by users, leaders, and cybersecurity professionals, along with tips for how to avoid them. Learn the pros and cons of analogies, misconceptions about security tools, and pitfalls of faulty assumptions. We wrote the book to be accessible to a wide audience, from novice to expert. There are lots of citations to supporting materials, but it is not written as an academic treatise. Many of the ideas covered in RISKS over the years are touched on in one way or another in the book. The book is now shipping direct orders from Addison-Wesley: https://informit.com/cybermyths It will be available in bookstores within the next few weeks. An info sheet can be found at https://ceri.as/myths
In terms of minimizing risks, is it not possible in modern cars to disable the Internet link? [Neither of our two cars has one, so I have no idea how that works.] Surely you can turn it off or block it, no?
The DEW line was built in 1954, while Raytheon started selling commercial microwave ovens in 1947. I believe the story about radar personnel warming themselves up and giving themselves cataracts, but science was already well aware that you can cook meat with radio waves.
While standing in front of DEW line radars to keep warm may have been popular, claims "it led to the invention of the microwave oven" are about a decade late. There are plenty of reports of engineers cooking their lunches in radar dishes even before the start of the Second World War.
Please report problems with the web pages to the maintainer