Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
*Military tests that jam and spoof GPS signals are an accident waiting to happen* Early one morning last May, a commercial airliner was approaching El Paso International Airport, in West Texas, when a warning popped up in the cockpit: *GPS Position Lost*. The pilot contacted the airline's operations center and received a report that the U.S. Army's White Sands Missile Range <https://www.wsmr.army.mil/Pages/home.aspx>, in South Central New Mexico, was disrupting the GPS signal. “We knew then that it was not an aircraft GPS fault,'' the pilot wrote later. The pilot missed an approach on one runway due to high winds, then came around to try again. “We were forced to Runway 04 with a predawn landing with no access to [an instrument landing] with vertical guidance,'' the pilot wrote. “Runway 04 has a high CFIT threat due to the climbing terrain in the local area.'' CFIT stands for “controlled flight into terrain,'' and it is exactly as serious as it sounds. The pilot considered diverting to Albuquerque, 370 kilometers away, but eventually bit the bullet and tackled Runway 04 using only visual aids. The plane made it safely to the ground, but the pilot later logged the experience on NASA's Aviation Safety Reporting System <https://asrs.arc.nasa.gov/>, a forum where pilots can anonymously share near misses and safety tips. This is far from the most worrying ASRS report involving GPS jamming. In August 2018, a passenger aircraft in Idaho, flying in smoky conditions, reportedly suffered GPS interference from military tests and was saved from crashing into a mountain only by the last-minute intervention of an air traffic controller. “Loss of life can happen because air traffic control and a flight crew believe their equipment are working as intended, but are in fact leading them into the side of the mountain,'' wrote the controller. “Had [we] not noticed, that flight crew and the passengers would be dead. I have no doubt.'' [...] 
https://spectrum.ieee.org/aerospace/aviation/faa-files-reveal-a-surprising-threat-to-airline-safety-the-us-militarys-gps-tests [For further background on this topic, see Kate Murphy, Our GPS System Is Too Vulnerable, *The New York Times* Sunday Review, 24 Jan 2021. “We need a backup for a service that is essential but full of weaknesses.'' Sounds quite consistent with other RISKS items! PGN]
https://www.theguardian.com/media/2021/jan/20/australias-proposed-media-code-could-break-the-world-wide-web-says-the-man-who-invented-it
[via NNSquad] Some of my contemporaries are jumping on the "Big Tech is the Enemy" bandwagon. I could not disagree more. I am convinced that "Big Tech" is ultimately our salvation—and that does include social media. The goal must be fixing the problems we have created, not killing Big Tech.
No cameras in the bedroom? https://arstechnica.com/information-technology/2021/01/home-alarm-tech-backdoored-security-cameras-to-spy-on-customers-having-sex/
[via Dave Farber] Volunteers couldn't tell AI-generated comments from those penned by humans. Will Knight, Ars Technica, 17 Jan 2021 https://arstechnica.com/tech-policy/2021/01/ai-powered-text-from-this-program-could-fool-the-government/ In October 2019, Idaho proposed changing its Medicaid program. The state needed approval from the federal government, which solicited public feedback via Medicaid.gov. Roughly 1,000 comments arrived. But half came not from concerned citizens or even Internet trolls. They were generated by artificial intelligence. And a study found that people could not distinguish the real comments from the fake ones. The project was the work of Max Weiss, a tech-savvy medical student at Harvard, but it received little attention at the time. Now, with AI language systems advancing rapidly, some say the government and Internet companies need to rethink how they solicit and screen feedback to guard against deepfake-text manipulation and other AI-powered interference. “The ease with which a bot can generate and submit relevant text that impersonates human speech on government websites is surprising and really important to know,'' says Latanya Sweeney, a professor at Harvard's Kennedy School who advised Weiss on how to run the experiment ethically. Sweeney says the problems extend well beyond government services, but it is imperative that public agencies find a solution. “AI can drown speech from real humans,'' she says. “Government websites have to change.'' The Centers for Medicare and Medicaid Services says it has added new safeguards to the public comment system in response to Weiss's study, though it declines to discuss specifics. Weiss says he was contacted by the US General Services Administration, which is developing a new version of the federal government website for publishing regulations and comments, about ways to better protect it from fake comments. Government systems have been the target of automated influence campaigns before. 
In 2017, researchers discovered that over a million comments submitted to the Federal Communications Commission regarding plans to roll back net neutrality rules had been auto-generated, with certain phrases copied and pasted into different messages. Weiss's project highlights a more serious threat. There has been remarkable progress in applying AI to language over the past few years. When powerful machine-learning algorithms are fed huge amounts of training data, in the form of books and text scraped from the Web, they can produce programs capable of generating convincing text. Besides myriad useful applications, this raises the prospect that all sorts of Internet messages, comments, and posts could be faked easily and less detectably. “As technology gets better,'' Sweeney says, “human speech venues become subject to manipulation without human knowledge that it has happened.'' Weiss was working at a health care consumer-advocacy organization in the summer of 2019 when he learned about the public feedback process required to make Medicaid changes. Knowing that these public comments had swayed previous efforts to change state Medicaid programs, Weiss looked for tools that could auto-generate comments. “I was a bit shocked when I saw nothing more than a submit button standing in the way of your comment becoming a part of the public record,'' he says. Weiss discovered GPT-2, a program released earlier that year by OpenAI, an AI company in San Francisco, and realized he could generate fake comments to simulate a groundswell of public opinion. “I was also shocked at how easy it was to fine-tune GPT-2 to actually spit out the comments,'' Weiss says. “It's relatively concerning on a number of fronts.'' Besides the comment-generating tool, Weiss built software for automatically submitting comments. He also conducted an experiment in which volunteers were asked to distinguish between the AI-generated comments and ones written by humans. 
The volunteers did no better than random guessing. After submitting the comments, Weiss notified the Centers for Medicare and Medicaid Services. He had added a few characters to make it easy to identify each fake comment. Even so, he says, the AI feedback remained posted online for several months. OpenAI released a more capable version of its text-generation program, called GPT-3, last June. So far, it has only been made available to a few AI researchers and companies, with some people building useful applications such as programs that generate email messages from bullet points. When GPT-3 was released, OpenAI said in a research paper that it had not seen signs of GPT-2 being used maliciously, even though it had been aware of Weiss's research. OpenAI and other researchers have released a few tools capable of identifying AI-generated text. These use similar AI algorithms to spot telltale signs in the text. It's not clear if anyone is using these to protect online commenting platforms. Facebook declined to say if it is using such tools; Google and Twitter did not respond to requests for comment. It also isn't clear if sophisticated AI tools are yet being used to create fake content. In August, researchers at Google posted details of an experiment that used deepfake-text-detection tools to analyze over 500 million webpages. They found that the tools could identify pages hosting auto-generated text and spam. But it wasn't clear if any of the content was made using an AI tool such as GPT-2.
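The "no better than random guessing" result is the kind of claim an exact binomial test makes precise: how likely is a given number of correct classifications under pure chance? A minimal sketch (the 100-trials/53-correct figures are hypothetical, not taken from Weiss's study):

```python
import math

def binomial_p_value(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of doing at
    least this well by guessing alone."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Hypothetical numbers: 100 classification attempts, 53 correct.
p_val = binomial_p_value(100, 53)
# p_val is roughly 0.31, far above the usual 0.05 threshold, so
# 53/100 is entirely consistent with random guessing.
```

A result like 70/100, by contrast, would yield a p-value well below 0.05 and indicate the volunteers could genuinely tell the texts apart.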
From self-driving cars to computers that can win game shows, humans have a natural curiosity and interest in artificial intelligence (AI). As scientists continue making machines smarter and smarter, however, some are asking “what happens when computers get too smart for their own good?'' From The Matrix to The Terminator, the entertainment industry has already started pondering if future robots will one day threaten the human race. Now, a new study concludes there may be no way to stop the rise of machines. An international team says humans would not be able to prevent super artificial intelligence from doing whatever it wanted to. Scientists from the Center for Humans and Machines at the Max Planck Institute have started to picture what such a machine would look like. Imagine an AI program with an intelligence far superior to humans, so much so that it could learn on its own without new programming. If it was connected to the Internet, researchers say the AI would have access to all of humanity's data and could even take control of other machines around the globe. Study authors ask: what would such an intelligence <https://www.studyfinds.org/human-brains-computer-see-objects/> do with all that power? Would it work to make all of our lives better? Would it devote its processing power to fixing issues like climate change? Or, would the machine look to take over the lives <https://www.studyfinds.org/majority-of-office-workers-feel-artificial-intelligence-could-replace-them-within-5-years/> of its human neighbors? Controlling the uncontrollable? The dangers of super artificial intelligence [...] https://www.studyfinds.org/no-way-to-control-super-artificial-intelligence-ai/
Catalin Cimpanu, ZDNet, 19 Jan 2021 via ACM TechNews, 25 Jan 2021 Researchers at Israeli boutique cybersecurity consultancy JSOF have disclosed seven vulnerabilities, collectively dubbed DNSpooq, that affect Dnsmasq, a domain name system (DNS) forwarding client for *NIX-based operating systems. Dnsmasq runs in millions of devices sold worldwide, including networking gear like routers, access points, firewalls, and VPNs from numerous companies. The researchers say the vulnerabilities could be combined to poison DNS cache entries recorded by Dnsmasq servers, allowing attackers to redirect users to clones of legitimate websites. Four of the vulnerabilities are buffer overflows in the Dnsmasq code that could result in remote code execution scenarios, and the remainder enable DNS cache poisoning. The researchers advise users to apply security updates released by the Dnsmasq project. https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-291b8x2279e7x070793&
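Cache poisoning of this kind works because an off-path attacker must guess the random fields a resolver uses to match a forged reply to an outstanding query (the 16-bit transaction ID, plus a randomized source port). A back-of-the-envelope sketch of that arithmetic (the entropy figures and reply counts below are illustrative assumptions, not the specifics of the DNSpooq flaws):

```python
# Arithmetic behind off-path DNS cache poisoning: a forged reply is
# accepted only if it matches the query's random fields. This is a
# sketch of the probabilities involved, not an exploit.

def poison_success_prob(entropy_bits: int, forged_replies: int) -> float:
    space = 2 ** entropy_bits
    miss = 1 - 1 / space                 # one forged reply fails to match
    return 1 - miss ** forged_replies    # at least one reply matches

# Full entropy: 16-bit transaction ID + ~16-bit random port -> ~32 bits.
full = poison_success_prob(32, 10_000)
# If a flaw effectively cancels port randomization, only ~16 bits
# remain, and the same burst of forged replies succeeds orders of
# magnitude more often.
weak = poison_success_prob(16, 10_000)
```

With full entropy, 10,000 forged replies succeed with probability on the order of one in a million; with the entropy halved, the same burst succeeds roughly 14% of the time, which is why entropy-reducing bugs are treated as serious.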
I have seen many stories about cleaners unplugging various systems so they could plug in the vacuum cleaner, etc. This is the first one I have seen where the system was alarmed for this very scenario. Toto said the freezer at the Boston pharmacy "was in a secure location and had an alarm system installed. The plug was found loose after a contractor accidentally removed it while cleaning." He said they are investigating why the incident occurred and why the alarm system did not work as it was supposed to. https://abcnews.go.com/Health/1900-doses-moderna-vaccine-destroyed-cleaner-accidentally-unplugs/story?id=75419665
Various news outlets have reported systemic problems with the COVID-19 vaccination program in the United States. The most recent installment in my blog, Ruminations, discussed some of the major issues I encountered. The general public is rarely impacted by poor choices in IT implementations. Unfortunately, the COVID-19 vaccination program has become an example of how not to implement important public-facing computer systems. ... The full text can be found at: http://www.rlgsc.com/blog/ruminations/public-health-endangered-by-deficient-user-models.html
https://www.nytimes.com/2021/01/22/us/politics/dia-surveillance-data.html
Heidi Tworek, *The Atlantic*, 26 May 2019 [via Dave Farber] https://www.theatlantic.com/international/archive/2019/05/germany-war-radio-social-media/590149/?fbclid=IwAR1o7hi3wl70oEtokq9Q4ofduG45sSF-4XqAb6tXfS7lUKnPjZeglRRg0H0 Regulators should think carefully about the fallout from well-intentioned new rules and avoid the mistakes of the past. “Our way of taking power and using it would have been inconceivable without the radio and the airplane,'' Nazi Propaganda Minister Joseph Goebbels claimed in August 1933. [Timely but very long item truncated for RISKS. PGN]
https://www.nytimes.com/2021/01/19/us/politics/biden-peloton.html
Here's what we know about the $5.3-billion aircraft https://www.businessinsider.com/what-we-know-about-the-air-force-one-replacement-project-2020-7 Favorite line: The Air Force announced in April that Boeing will develop the owner's manual for the new VC-25B aircraft and the service branch is paying $84 million for it, DefenseOne reported. The manual will reportedly contain over 100,000 pages and won't even be ready at the time of the jet's estimated delivery to the Air Force, with DefenseOne reporting that it will arrive in January 2025. That is one serious manual! And it better have a quick index for pilots...
https://markets.businessinsider.com/currencies/news/bitcoin-price-cryptocurrency-should-be-curtailed-terrorism-concerns-yellen-2021-1-1029985692 On the other hand... http://broadbandbreakfast.com/2021/01/panelists-at-ces-2021-agree-widespread-adoption-of-cryptocurrency-is-imminent/
https://twitter.com/knowIedgehub/status/1352235869143330819
https://www.nytimes.com/2021/01/21/technology/loon-google-balloons.html
The theft, by a 19-year-old who worked at a Kroger in Duluth, Georgia, occurred over two weeks when a supermarket compliance officer was away, the authorities said. https://www.nytimes.com/2021/01/21/us/kroger-atlanta-teen-arrested.html The risk? Let me think...
https://www.scientificamerican.com/article/forever-chemicals-are-widespread-in-u-s-drinking-water/ "A handful of states have set about trying to address these contaminants, which are scientifically known as perfluoroalkyl and polyfluoroalkyl substances (PFASs). But no federal limits have been set on the concentration of the chemicals in water, as they have for other pollutants such as benzene, uranium and arsenic. With a new presidential administration coming into office this week, experts say the federal government finally needs to remedy that oversight. 'The PFAS pollution crisis is a public health emergency,' wrote Scott Faber, EWG's senior vice president for government affairs, in a recent public statement." Cast iron cookware is safer than non-stick, though maintenance is higher. Can also be used for weight training! The movie "Dark Waters" dramatizes the protracted effort to hold industry accountable for PFAS water pollution.
*Herzliya-based startup StoreDot unveils solution for main obstacle to widespread use of electric vehicles, but it requires major upgrades to charging stations* Israeli company StoreDot announced Tuesday that in a landmark achievement in the electric vehicle industry, it had managed to develop the world's first car battery that can be fully charged in just five minutes. However, the invention will take time to become commercially feasible since the ultra-fast charge would require much higher-power chargers than are currently available, The Guardian *reported* [... PGN-truncated] <https://www.theguardian.com/environment/2021/jan/19/electric-car-batteries-race-ahead-with-five-minute-charging-times> <https://www.timesofisrael.com/israeli-startup-storedot-unveils-ultra-fast-charging-batteries-for-drones/> https://www.timesofisrael.com/revving-up-electric-car-industry-israeli-firm-develops-5-minute-charge-battery/
Gabe Goldberg reported a *Washington Post* story on an NHTSA investigation into crashes by Teslas. The study concluded that there was no design fault, but rather driver error: mistakenly stepping on the accelerator rather than the brake. https://www.washingtonpost.com/transportation/2021/01/08/tesla-brakes/ Gabe then editorializes in the cute quip manner that has become all too common on RISKS: "[Doesn't speak well of Tesla owners' driving skills...]." I believe it is a design fault, not just of Tesla, but of automobiles in general and the standards committees. Mistaken application of the accelerator pedal rather than the brake is a reasonably frequent event in automobiles, so frequent that it even has an acronym: SUA, Sudden Unintended Acceleration. Why? Because the accelerator and brake pedals are adjacent, sometimes at approximately the same height (a layout especially loved by racers, so they can "heel and toe" between the pedals rapidly). In modern autos there is no clutch pedal, so there is plenty of room to space the pedals differently. There are other solutions to the placement of the pedals, but each change will have its own perceived risks, so rather than make suggestions only to have people point out the flaws, I say: why not turn it over to the Human Factors engineers? Every major car manufacturer -- and even NHTSA -- employs them. Let the studies begin! (Caveat: I'm a Fellow of the Human Factors society, among others, so I am biased.) I also suspect that for many of the Tesla accidents, the driver's foot was on the floor or otherwise resting. Why? In the Tesla (or any auto with adaptive cruise control), there is nothing for the right foot to do. Acceleration and appropriate speed are handled automatically by the vehicle. Why not rest the foot? I know I do. If there suddenly is a need to brake, a small percentage of misses is likely. 
Note too that in the case of Tesla, all the SUA events did have forces applied to the accelerator pedal (the auto has extensive record keeping), so these were unlikely to simply be faulty automation. Of the 217 cases examined by NHTSA, 28% were in parking lots and 12% in driveways: 40% in all! Tesla, like many electric vehicles (EVs), has a feature that can be dangerous in this situation: electric motors have high torque even at startup, so the initial acceleration, even (especially) from a stopped position, can be unexpectedly rapid. Notice that most of the cases were in zero or low velocity situations. The NHTSA report states: "Eighty-six (86) percent of these crashes occurred in parking lots, driveways or other close-quarter *not-in-traffic* locations." Moreover, NHTSA says: "Almost all of these crashes were of short duration, with crashes occurring within three seconds of the alleged SUA event." I don't have comparable statistics for the multiple crashes that Toyota had due to SUA or for any of the other manufacturers who were also afflicted. But Norman's Rule of Design is that when there are multiple, repeated incidents of the same type of accident, even though the tendency is to blame the person, invariably it is actually due to inappropriate design. When I see one or two cases, blaming the person might be appropriate. But when the number of cases gets into the multiple hundreds, something else is going on. It is cute to make fun of drivers, whether for their age, gender, or choice of automobile. Cute statements often are false statements. And false statements can cause damage and death. In the case of automobile accidents, a false belief that incidents are caused by driver error prevents government agencies and automobile manufacturers from believing they should do something about it. Please people, stop calling faulty design "human error." 
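For concreteness, the percentages quoted translate into rough case counts; a trivial arithmetic check (the rounding here is mine, and NHTSA's own rounding may differ slightly):

```python
# Sanity-check the NHTSA figures quoted above (counts rounded).
total_cases = 217
parking_lot = round(0.28 * total_cases)   # 28% of 217 -> about 61 cases
driveway = round(0.12 * total_cases)      # 12% of 217 -> about 26 cases
low_speed_share = (parking_lot + driveway) / total_cases  # about 0.40
```

Even this crude tally makes the pattern visible: the incidents cluster heavily in low-speed, close-quarters situations, which is what one would expect from a pedal-placement design problem rather than random driver carelessness.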
(I couldn't find the NHTSA report on the NHTSA site, but it is available at https://www.teslarati.com/tesla-sudden-acceleration-nhtsa-closes-review/ .) Don Norman, Founding Director Emeritus, Design Lab, University of California, San Diego USA. [John Levine noted that in the 1980s a bunch of unexpected acceleration events in Audi 100s were also due to pedal confusion. Audi recalled them to move the pedals farther apart and to add an interlock so you had to step on the brake before putting the car in gear. Michael Bacon noted that many air crashes have been attributed to "pilot error", but examination of later incidents found issues with design, materials, systems, construction, maintenance, inspection, manuals, training, operations, etc. PGN]
> And if they ran out of money to destroy things, what was left > to *buy* things? Different bucket. Congress probably allocated X dollars to destroy and Y dollars to replace.
The deleted records were linked to police investigations that were terminated before charge (No Further Action) or to those where an individual had been acquitted at court. Statistically, few of them will relate to murders, rapes or other serious crimes. That's not to say there is little or no risk, but it's not as serious as the opposition parties or the British Broadcasting Corporation would like to make out.
Not a sophisticated, modern problem, but: Some years ago, in Sydney (Australia) there was a company named Computer Accounting and Systems, or CAS for short. For a while, people were writing cheques (checks) payable to 'CAS', until some enterprising person altered the recipient name by adding an 'H', converting each into a cash cheque.
> The calculus involved here is complex. [...] The UK thinks the calculus is simple. Firstly, it appears that 12 weeks is the optimum delay to provide the longest protection. Secondly, and far more importantly, while a single dose may only offer 50% or 60% protection against infection, it DOES seem to offer *100%* protection against hospitalisation. No, we don't want "one shot" people getting infected and spreading it, but the more people we can keep out of hospital, the better. (And getting infected seems to offer 85% immunity after you've recovered.)
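The quoted figures make the trade-off easy to illustrate with toy arithmetic. All inputs below are assumptions for the sketch (population size, attack rate, hospitalization rate, and taking the "~100% protection against hospitalisation" claim at face value); the real epidemiological calculus involves far more than this:

```python
# Toy comparison of dosing strategies with a fixed vaccine supply.
# Hypothetical inputs throughout; this illustrates the argument in
# the text, it is not an epidemiological model.

def expected_hospitalizations(covered, protection_vs_hosp,
                              attack_rate, hosp_rate, population):
    """Expected hospitalizations given how many people are covered
    and how well the coverage protects against hospitalization."""
    uncovered = population - covered
    return (covered * attack_rate * hosp_rate * (1 - protection_vs_hosp)
            + uncovered * attack_rate * hosp_rate)

POP, DOSES = 1_000_000, 1_000_000   # assumed population and supply
ATTACK, HOSP = 0.10, 0.05           # assumed infection and hospitalization rates

# Strategy A: two doses each -> half the population, ~100% protection.
two_dose = expected_hospitalizations(DOSES // 2, 1.00, ATTACK, HOSP, POP)
# Strategy B: one dose each -> whole population; if one dose really
# gives ~100% protection against hospitalization (the claim above),
# expected hospitalizations fall to zero in this toy model.
one_dose = expected_hospitalizations(DOSES, 1.00, ATTACK, HOSP, POP)
```

Under these assumptions the single-dose strategy dominates on hospitalizations, which is exactly the UK argument; the conclusion is only as strong as the ~100% figure it rests on.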
Please report problems with the web pages to the maintainer