Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers, etc., that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
[… 'cuz, well, Texas]
A power outage knocked Houston's 911 system offline for several hours Saturday afternoon, officials said.
The incident began when the Houston Emergency Center lost power mid-day Saturday, Houston Fire Chief Sam Peña said. Generators restored power, but when the regular power came back on, a technical malfunction prevented the call center from restoring power to its computer-aided dispatch system, he said.
An apparent supply chain attack exploited Kaseya's IT management software to encrypt a “monumental” number of victims all at once.
Businesses around the world rushed Saturday to contain a ransomware attack that has paralyzed their computer networks, a situation complicated in the U.S. by offices lightly staffed at the start of the Fourth of July holiday weekend.
It's not yet known how many organizations have been hit by demands that they pay a ransom in order to get their systems working again. But some cybersecurity researchers predict the attack targeting customers of software supplier Kaseya could be one of the broadest ransomware attacks on record.
‘“Historically, nations do not settle arms race until a mutual assured destruction situation presents itself. Russian cyberattacks could be viewed as an attempt to reach this point. Until we get closer to the mutual assured destruction point, do not expect an international treaty anytime soon. Instead, expect more cyberattacks and data losses. Organizations and governments need to get serious and buckle up—it's going to be a rough ride.”’
Not hard to imagine scenarios that might quickly escalate.
Here's one: one nation's favorite junk food supplier is knocked out by a malicious, and misattributed, cyber assault; popular outrage compels political leadership to reciprocate.
“Staines, get Premier Kissoff on the hot line!”
“Earlier efforts to use T1 values to categorize brain tumors were impeded by technical inconsistencies and found to be unreliable. But recent advances in quantitative measurement methods have led to improvements in accuracy, repeatability and acquisition speed. The new study is a step toward applying diagnostic threshold T1 measurement across multiple clinical sites.”
Risk: Inconsistent MRI data weights skew diagnostic accuracy.
“Within the mouth, also referred to as the intra-oral cavity, there is a rich supply of both sensory and motor nerves. In particular, sensorimotor nerves in the soft palate and tongue coordinate several intraoral movements related to swallowing, speech and respiration. And so, damage to either the sensory or motor nerve fibers due to neurotrauma or disease can compromise these essential functions, reducing the quality of life of those afflicted.
“Electrical nerve stimulation might help jumpstart the nerves into action, much like how a pacemaker can electrically stimulate nerves in the heart, causing the heart muscle to contract. But unlike a pacemaker, the details on the frequency and amplitude of the electrical currents needed for proper stimulation of different parts of the mouth have not been investigated.”
An implanted medical device to re-enable paralyzed intraoral muscular actions. Like a pacemaker or ICD (signal processor plus discharge batteries), it carries a risk of inappropriate shock.
The stimulated mandibular contraction might shatter teeth or dislodge fillings if the electrical pulse is too strong or not subject to duty-cycle restraint or a fail-safe. Other risks include infection from implantation/explantation, broken electrodes, battery depletion, and oral bacteria migrating into the body, which can be fatal, etc. —Richard M. Stein email@example.com
Securing applications in the API-first era can be an uphill battle. As development accelerates, accountability becomes unclear, and getting controls to operate becomes a challenge in itself. It's time that we rethink our application security strategies to reflect new priorities, principles and processes in the API-first era. Securing tomorrow's applications begins with assessing the business risks today.

The trends and risks shaping today's applications
As the world continues to become more and more interconnected via devices, and the APIs that connect them, individuals are growing accustomed to the frictionless experience that they provide. While this frictionless reality is doubtlessly more user-friendly, i.e., faster and more convenient, it also requires a trade-off. This convenience demands openness, and openness is a risk when it comes to cybersecurity.
According to Sidney Gottesman <https://www.linkedin.com/in/sidneygottesman/>, Mastercard's SVP for Security Innovation, the above situation leads to one of the biggest trends shaping the security posture for today's applications: A crisis of trust between individuals and the applications they use.
A second major trend is that of the supply chain. Simply handling your own risks isn't enough, as attacks increasingly penetrate internal systems via third-party, vendor-supplied components. In digital products and even connected hardware products, supply chains are now composed of different services bundled together in the final product through APIs, creating a new type of integration risk rooted in the supply chain. […]
This is reported by Svenska Dagbladet <https://www.svd.se/>.
“One of our subcontractors was affected by a computer attack, and for this reason, our cash registers are no longer working,” Coop Sweden, which represents about 20% of the sector in the Nordic country, said in a press release. […] https://eurnews.net/in-sweden-a-supermarket-chain-was-forced-to-close-800-stores-due-to-a-cyber-attack/
In recent years, protecting sensitive user data on-device has become of increasing importance, particularly now that our phones, tablets and computers are used for creating, storing and transmitting the most sensitive data about us: from selfies and family videos to passwords, banking details, health and medical data and pretty much everything else.
With macOS, Apple took a strong position on protecting user data early on, implementing controls as far back as 2012 in OS X Mountain Lion under a framework known as Transparency, Consent and Control, or TCC for short. With each iteration of macOS since then, the scope of what falls under TCC has increased to the point now that users can barely access their own data — or data-creating devices like the camera and microphone — without jumping through various hoops of giving consent or control to the relevant applications through which such access is mediated.
There have been plenty of complaints about what this means with regards to usability, but we do not intend to revisit those here. Our concern in this paper is to highlight a number of ways in which TCC fails when users and IT admins might reasonably expect it to succeed.
We hope that by bringing attention to these failures, users and admins might better understand how and when sensitive data can be exposed and take that into account in their working practices. Crash Course: What's TCC Again? […]
I love libraries. I love books, I love information, and I love free access to information.
I love my local library. It uses Bibliocommons. If you use online services related to your library, you may be familiar with it, even if you don't realize it, because Bibliocommons seems to be/sell/provide a sort of “Interface-as-a-Service” to libraries. As such, I have become accustomed to sending Bibliocommons bug reports to our local library IT guy.
One of the services Bibliocommons supports is a “For Later Shelf.” This is a list, that you can populate, of items you'd like to look at some time, but not put on hold right now. I usually run with about 250 items on my list. As a result of Gloria asking that I vary some of our DVD fare, I did some searching and added about another 75 items to my list yesterday.
At which point, I couldn't find a whole bunch of stuff that had been on my list for some time. I tried various ways to recover pages of items that I knew must be there, to no avail. I even tried logging out and back in again, and still couldn't see the whole list. (During some of my attempts I started to see pages duplicating what had already been shown, even when I asked for something else. It was very strange.)
I did, finally, get my full list back, by starting at the beginning, and stepping through the entire list. Obviously nothing had happened to the data, but the display had been very strange through numerous attempts.
I've seen something similar, in a much smaller way, when I have removed items from the “For Later” list. Frequently, when I do so, and look at other pages, the subsequent pages still seem to assume that the removed item is still there, and display accordingly.
Bibliocommons has a definite problem with latency. Logging on can take over a dozen seconds, even though it's a very simple (and not terribly secure) process. Requesting the next page in a list can take even longer. I strongly suspect that, in an effort to reduce latency, Bibliocommons makes extensive (probably very extensive) use of caching. But the result is that the information the system gives back is slightly incorrect.
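The stale-display behavior described above is consistent with per-page caching that is never invalidated on writes. As a toy illustration only (the class, page size, and TTL are all invented; this is not Bibliocommons code), here is a minimal Python sketch of how a time-based page cache can keep serving a removed item until the cached page expires:

```python
import time

class CachedShelf:
    """Toy model of a paginated list served through a per-page cache
    with a time-to-live. A removal mutates the underlying list but
    never invalidates cached pages, so stale pages keep appearing."""
    PAGE = 3     # items per page (illustrative)
    TTL = 60.0   # cache lifetime in seconds (illustrative)

    def __init__(self, items):
        self.items = list(items)
        self._cache = {}  # page number -> (expiry time, cached contents)

    def page(self, n, now=None):
        now = time.monotonic() if now is None else now
        hit = self._cache.get(n)
        if hit and hit[0] > now:
            return hit[1]  # possibly stale data returned here
        contents = self.items[n * self.PAGE:(n + 1) * self.PAGE]
        self._cache[n] = (now + self.TTL, contents)
        return contents

    def remove(self, item):
        # Bug by design: no cache invalidation accompanies the write.
        self.items.remove(item)
```

With this model, a page fetched just before a removal continues to show the removed item for up to TTL seconds, which matches the symptom: the data is fine, but the display lags reality.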
Which raises an issue for applications security. There are generally unintended consequences to our “fixes” for a given problem. I am reminded of a quote from Larry Wall: “Optimizations always bust things, because all optimizations are, in the long haul, a form of cheating, and cheaters eventually get caught.”
My old research group's work was motivated by the fact that every single tabulation system I have ever examined for RCV/IRV/STV schemes has been incorrect.
This brings to mind two stories:
1) J Paul Gibson, from Ireland but more recently on the faculty at Telecom SudParis (France), investigated the STV voting system in use in Ireland. One thing he determined was that the official Irish law setting out the rules for tabulating the election, in Gaelic, had been somewhat mistranslated into English. Then, the Delphi software used to implement those rules, written from the English version of the law, contained errors. So, he took the question to the Irish Supreme Court.
Their ruling horrified him. They ruled that the software was the law. This was an expedient ruling, because it eliminated the possibility that the ruling would call any past elections into question. However, given that the code is not easy to understand and not immortal, it means that anyone trying to port the code to a newer platform will have to port the errors and misunderstandings in that code and not merely implement what is in one or the other versions of the law.
2) My students and I were asked by the Student Senate at the University of Iowa to write IRV tabulation software for student government elections. So, of course, we asked them what the rules for an IRV election were. They said, in effect, that it was simple, you just run multiple rounds eliminating the loser in each round. So, of course, I asked the dangerous question: What if there is a tie for loser? Who do you eliminate then? Their answer boiled down to “duh…”
The problem is, I can think of several rules: Eliminate all who tie for loser. Eliminate the one who had the fewest votes in the most recent previous round where they differed (look back). Eliminate the one who will have the fewest votes in the next round if you eliminate the others who tied (look forward). Eliminate one at random. Of course, these can be combined, so I can imagine this rule:
In case of tie, eliminate the candidate who had the least votes in some previous round where they differed, unless they were tied in all previous rounds. In that case, eliminate the candidate who would receive the fewest new votes if the other tied candidates were eliminated. If that does not resolve the tie, throw the dice to pick the loser.
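The look-back portion of such a rule is mechanical enough to sketch. The following Python fragment is purely illustrative (the function name, data layout, and the decision to fall back to a random draw rather than a look-forward step are my assumptions, not any statute's): it scans earlier rounds, most recent first, keeps the candidates with the fewest votes wherever the tied candidates' totals differed, and only throws the dice if they were tied in every round.

```python
import random

def eliminate_loser(round_totals, rng=random):
    """Pick one candidate to eliminate from an IRV round.

    round_totals: list of dicts, one per completed round, each mapping
    candidate -> vote total. Implements the look-back tie-break rule:
    among the candidates tied for last in the current round, compare
    their totals in earlier rounds (most recent first); if they were
    tied in all previous rounds, eliminate one at random.
    """
    current = round_totals[-1]
    low = min(current.values())
    tied = [c for c, v in current.items() if v == low]
    for earlier in reversed(round_totals[:-1]):
        totals = {c: earlier[c] for c in tied}
        if len(set(totals.values())) > 1:
            low2 = min(totals.values())
            tied = [c for c in tied if totals[c] == low2]
            if len(tied) == 1:
                break
    return tied[0] if len(tied) == 1 else rng.choice(tied)
```

Even this small sketch shows why the student senators' "duh…" was dangerous: the rule needs explicit answers for every branch, including the all-rounds-tied case.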
The student government response was: “but ties are improbable!” There, they were certainly wrong. We're talking about ties for loser, not ties for winner. In any election with more than a few candidates, most of them will receive only a very small number of votes. Ties for loser are far more likely than ties for winner.
After we hashed that out, my students wrote the software to process the CVRs, and I (working independently) counted the votes by hand (with machine assist—I used vi plus sort to do it on a Unix system). The winner was the one who'd have won by straight plurality based on the first round, but the election went for 3 rounds before anyone got 50%.
It is remarkably hard for typical software engineers to codify tabulation algorithms based upon statutory descriptions of complex election schemes.
Amen, and that certainly doesn't imply that those who wrote the statute understood what they wrote!
“Researchers at the University of Sydney have raised the threshold for correcting quantum calculation errors with the help of the Gadi supercomputer of Australia's National Computational Infrastructure (NCI) organization. The researchers used Gadi to run about 87 million simulations for all possible qubit arrangements and aligned the threshold with the actual error rates of physical quantum computing systems.”
The thing is that we are using a supercomputer, and calculations that we can't possibly check for errors ourselves, to figure out whether quantum computers, when we finally get them, are telling us the truth.
I am afraid that the concept of “trusted platform,” already seriously bruised, is really going to be hammered over this type of thing …
Before we go slamming the Supreme Court of the United States, I think it’s worth clarifying an important point.
Yes, TransUnion had records on about 8,000 people that erroneously identified them as terrorists or drug traffickers. And yes, for some small number of people, TransUnion shared that information with third parties.
Those people were not the subject of this case, and their class action lawsuit against TransUnion stands.
The court simply ruled that the people whose faulty records were never shared couldn't be part of the class, because they could not have suffered any damages.
Put another way: No harm, no foul.
> I have never, not once, had a useful interaction with a chatbot.
I have, but only in very specific contexts.
One time I bought a case of Vegemite (I like it, so sue me) from Amazon which was poorly packed and some of the bottles broke. Their chatbot walked me through a straightforward process to identify the item I was asking about. Then, since it was obviously a chatbot, I just typed some keywords, something like badly packed broken bottles.
OK, it said, we'll give you a full refund. Since 9 of the 12 bottles weren't broken, I didn't argue.
I agree that once you get out of situations like this, where there aren't many things you're likely to say, they turn into frustration loops.
I have just had a personal experience of a particular bot-related weakness.
I have bank accounts in various countries.
It turns out the bot for the particular bank I am at this moment using does not understand English… and I, being a heathen native English speaker, do not, of course, speak any other language (well, not more than enough to make hapless locals' ears bleed, and certainly anyway only to speak, not to write).
Now, if it were an FAQ, I could use Google Translate to get something I could understand, but I can't use Google Translate to get something the bot understands.
It seems that these legislators cannot distinguish between an operating system and a browser. They still think in terms of one device connected by a wire to a single server where a file is stored.
Just for example, the contents of the news report page cited here, are loaded from dozens of web sources, some of which are generated on the fly while the page is being displayed.
On the receiving end, on my standard Windows system, there are about 20 icons of popular applications on the desktop; just out of these, I counted 10 which are capable of accessing web addresses and displaying their contents.
How do they think they can include all of these in their filtering scheme?
As seen in “Rethinking Public Key Infrastructures and Digital Certificates: Building in Privacy” (by Stefan Brands), verification of age (or other properties) does not require disclosing the age. Since the book is over 20 years old (the link does disclose the age), any patent obtained then should have expired. <https://www.amazon.com/Rethinking-Public-Infrastructures-Digital-Certificates/dp/0262024918/>
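Brands' constructions rest on blind signatures and selective disclosure, which are well beyond a digest posting. But the underlying point, that a verifier only needs an attested predicate and never the birthdate itself, can be illustrated with a toy Python sketch (everything here is invented for illustration: the shared-key design, names, and token format; it has none of the unlinkability or privacy properties of the real scheme):

```python
import hmac
import hashlib
import json

# Invented demo secret; a real issuer would use public-key signatures.
ISSUER_KEY = b'demo-issuer-secret'

def issue_token(user_id: str, over_18: bool) -> dict:
    """Trusted issuer attests only to the predicate, not the birthdate."""
    claim = json.dumps({'sub': user_id, 'over_18': over_18}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {'claim': claim, 'tag': tag}

def verify_token(token: dict) -> bool:
    """Verifier checks integrity, then learns a single boolean."""
    expected = hmac.new(ISSUER_KEY, token['claim'].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token['tag']):
        return False
    return json.loads(token['claim'])['over_18']
```

The website checking the token sees only "over 18: yes/no" from a party the user already trusts; the age itself never crosses the wire.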
Even without such innovation I don't know why the website (as opposed to the browser/user-agent) would need to know whether the user is above a given age.
Please report problems with the web pages to the maintainer