The RISKS Digest
Volume 31 Issue 92

Saturday, 30th May 2020

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Russian hackers exploiting bug that gives control of U.S. servers
Ars Technica
Google cautions EU on AI rule-making
techxplore
Walmart Employees Are Out to Show Its Anti-Shoplifting AI Doesn't Work
WiReD
The GitHub Arctic Code Vault
Archiveprogram via Dan Jacobson
The mobile testing gotchas you need to know about
Functionize
You're sold on load testing. But for what “unreasonable” load should you test?
Functionize
SaltStack authorization bypass
f-secure
Dangerous SHA-1 crypto function will die in SSH linking millions of computers
Ars Technica
Choosing 2FA authenticator apps can be hard. Ars did it so you don't have to
Ars Technica
Twitter's decision to label Trump's tweets was two years in the making
WashPost
The Underground Nuclear Test That Didn't Stay Underground
Atlas Obscura
Re: Misinformation
Henry Baker, Andy Walker
Re: Zoom security / updates / crypto
Monty Solomon
Info on RISKS (comp.risks)

Russian hackers exploiting bug that gives control of U.S. servers (Ars Technica)

Monty Solomon <monty@roscom.com>
Sat, 30 May 2020 09:43:38 -0400

Sandworm group uses emails to send root commands to buggy Exim servers.

https://arstechnica.com/information-technology/2020/05/russian-hackers-are-exploiting-bug-that-gives-control-of-us-servers/


Google cautions EU on AI rule-making (techxplore)

geoff goodfellow <geoff@iconia.com>
Sat, 30 May 2020 01:12:00 -1000

Google warned on Thursday that the EU's definition of artificial intelligence was too broad and that Brussels must refrain from over-regulating a crucial technology.

The search and advertising giant made its argument in feedback to the European Commission, the EU's powerful regulator that has reached out to big tech as it draws up ways to set new rules for AI.

The EU has not decided yet on how to regulate AI, but is putting most of its focus on what it calls “high risk” sectors, such as healthcare and transport.

Its plans, to be spearheaded by EU commissioners Margrethe Vestager and Thierry Breton, are not expected until the end of the year.

“A clear and widely understood definition of AI will be a critical foundational element for an effective AI regulatory framework,” the company said in its 45-page submission.

The EU's own definition of AI was so broad that it “effectively puts all contemporary software potentially in scope,” it said. […] https://techxplore.com/news/2020-05-google-cautions-eu-ai-rule-making.html


Walmart Employees Are Out to Show Its Anti-Shoplifting AI Doesn't Work (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Sat, 30 May 2020 19:08:07 -0400

The retailer denies there is any widespread issue with the software, but a group expressed frustration—and public health concerns.

https://www.wired.com/story/walmart-shoplifting-artificial-intelligence-everseen/

AI to the … rescue?


The GitHub Arctic Code Vault (Archiveprogram)

Dan Jacobson <jidanni@jidanni.org>
Sun, 31 May 2020 04:50:28 +0800

https://archiveprogram.github.com/

“The GitHub Arctic Code Vault is a data repository preserved in the Arctic World Archive (AWA), a very-long-term archival facility 250 meters deep in the permafrost of an Arctic mountain. The archive is located in a decommissioned coal mine in the Svalbard archipelago, closer to the North Pole than the Arctic Circle. GitHub will capture a snapshot of every active public repository on 02/02/2020 and preserve that data in the Arctic Code Vault.”

Skeptical Perspective…

https://linuxinsider.com/story/github-aims-to-make-open-source-code-apocalypse-proof-in-arctic-vault-86367.html

The odds aren't terribly good that GitHub's plan will actually work, he suggested.

First, someone would have to look for, find, and gain access to the repository. Then there is the matter of the discoverers decoding instructions, starting up power supplies, getting systems up and running, and learning to code.

“The farther away you get from the day the materials are stored, the less likely that the rosy outcome GitHub envisions is likely to occur,” King told LinuxInsider.

GitHub's plan is almost certainly a public relations play designed to generate buzz for the company, said Phil Strazzulla, founder of Select Software Reviews.

“Think about all of the servers that are stored around the world that hold repositories of this code. The only way the Arctic vault would be useful is if the entire human civilization was essentially wiped out, and then somehow another form of life eventually figured out how to find and analyze this code,” he told LinuxInsider.

He sees the bottom line as the absence of any scenario in the future in which saving open source technology would become useful, even if you believe there is a high likelihood of doomsday scenarios.

“This is more a calculus of how much the effort will cost relative to the amount of press that it will generate,” Strazzulla said.

[OK, great. But what if the lock gets frozen?

And what if some court order orders all copies of Jamie R. Junioropolis's paragraph 3 of his 37th comment to be removed from all archives worldwide, as it contains sensitive government info? -DJ]


The mobile testing gotchas you need to know about (Functionize)

Gabe Goldberg <gabe@gabegold.com>
Fri, 29 May 2020 23:52:02 -0400

Testing applications on mobile devices has its own set of perils. For how many of these are you prepared?

https://www.functionize.com/blog/the-mobile-testing-gotchas-you-need-to-know-about/


You're sold on load testing. But for what “unreasonable” load should you test? (Functionize)

Gabe Goldberg <gabe@gabegold.com>
Fri, 29 May 2020 23:46:22 -0400

Load testing, where you discover the point at which a computer system fails, is based on preparing for (graceful) failure by knowing its breaking point. Successful load testers anticipate high demand, but at what point do you pass from high demand to ridiculous? The guideline: Expect the unexpected.

https://www.functionize.com/blog/youre-sold-on-load-testing-but-for-what-unreasonable-load-should-you-test/


SaltStack authorization bypass (f-secure)

Monty Solomon <monty@roscom.com>
Sat, 30 May 2020 09:42:16 -0400

The vulnerabilities described in this advisory allow an attacker who can connect to the “request server” port to bypass all authentication and authorization controls and publish arbitrary control messages, read and write files anywhere on the “master” server filesystem and steal the secret key used to authenticate to the master as root. The impact is full remote command execution as root on both the master and all minions that connect to it.

The vulnerabilities, allocated CVE IDs CVE-2020-11651 and CVE-2020-11652, are of two different classes: one an authentication bypass, where functionality was unintentionally exposed to unauthenticated network clients; the other a directory traversal, where untrusted input (i.e., parameters in network requests) was not sanitized correctly, allowing unconstrained access to the entire filesystem of the master server.

https://labs.f-secure.com/advisories/saltstack-authorization-bypass
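
The two classes in the advisory side by side, as an added example that is not SaltStack's code and does not describe how Salt's request server is implemented: a toy master-like request handler whose broken dispatcher exposes every method of the handler object to any network client, beside a hardened one that uses an explicit method allowlist, an authentication gate for privileged methods, and a containment check on client-supplied file paths. The root directory, method names, token and requests are all made up, and nothing is read from disk.

```python
"""Defensive patterns for the two vulnerability classes named in the SaltStack advisory.

Added example, not SaltStack code.  A master-like daemon's request handler is
modelled twice: an unsafe dispatcher that resolves any method name on the
handler object, and a hardened one with (1) an explicit allowlist of the
methods a network client may call plus an authentication gate for the
privileged ones, and (2) a containment check that keeps client-supplied
file paths inside a configured root.  Root, method names and token are made
up; nothing is read from disk.
"""
import hmac
import posixpath

FILE_ROOT = "/srv/fileserver"                       # assumed root for client paths
PUBLIC_METHODS = {"ping", "version", "read_path"}   # root_key deliberately absent
UNAUTHENTICATED_OK = {"ping", "version"}            # everything else needs a valid token
VALID_TOKENS = {"tok-0123456789abcdef"}             # constructed credential


class Rejected(Exception):
    """Raised for any request the daemon refuses to serve."""


def naive_join(root, untrusted):
    """The traversal-prone pattern: join and normalise, trusting the client."""
    return posixpath.normpath(posixpath.join(root, untrusted))


def confine(root, untrusted):
    """Return the normalised path of `untrusted` under `root`, or raise Rejected.

    Rejects NUL bytes, absolute paths and any '..' that would climb above
    root.  On a real filesystem also apply os.path.realpath to the result so
    that a symlink inside root cannot point outside it.
    """
    if "\0" in untrusted:
        raise Rejected("NUL byte in path")
    if posixpath.isabs(untrusted):
        raise Rejected(f"absolute path not allowed: {untrusted!r}")
    base = posixpath.normpath(root)
    candidate = posixpath.normpath(posixpath.join(base, untrusted))
    # commonpath compares whole components, so /srv/fileserver2 is not
    # mistaken for something under /srv/fileserver.
    if posixpath.commonpath([base, candidate]) != base:
        raise Rejected(f"path escapes root: {untrusted!r}")
    return candidate


class Master:
    """The object whose methods clients can reach."""

    def __init__(self, root_key):
        self._root_key = root_key

    def ping(self):
        return "pong"

    def version(self):
        return "toy-1.0"

    def read_path(self, path):
        # A real daemon would open this path; here we return what it would open.
        return naive_join(FILE_ROOT, path)

    def root_key(self):
        # Privileged: must never be reachable without authentication.
        return self._root_key


def unsafe_dispatch(master, request):
    """Broken pattern: resolve the method by name on the object and call it.

    Every callable is exposed, including privileged helpers, no credential is
    checked, and arguments reach the handler unvetted."""
    return getattr(master, request["cmd"])(**request.get("args", {}))


def token_valid(token):
    """Constant-time comparison against the configured tokens."""
    if not isinstance(token, str):
        return False
    tb = token.encode()
    return any(hmac.compare_digest(tb, t.encode()) for t in VALID_TOKENS)


def dispatch(master, request):
    """Hardened pattern: allowlist, authentication gate, argument vetting."""
    cmd = request.get("cmd")
    if cmd not in PUBLIC_METHODS:
        raise Rejected(f"unknown method {cmd!r}")
    if cmd not in UNAUTHENTICATED_OK and not token_valid(request.get("token")):
        raise Rejected(f"authentication required for {cmd!r}")
    args = dict(request.get("args", {}))
    if "path" in args:                       # client-controlled filesystem input
        args["path"] = confine(FILE_ROOT, args["path"])
    return getattr(master, cmd)(**args)


def attempt(master, request, dispatcher=dispatch):
    """Serve one request and report ('ok', result) or ('rejected', reason)."""
    try:
        return "ok", dispatcher(master, request)
    except Rejected as exc:
        return "rejected", str(exc)


if __name__ == "__main__":
    master = Master(root_key="constructed-master-root-key")
    probes = [   # constructed requests; the first three carry no credential
        {"cmd": "ping"},
        {"cmd": "root_key"},
        {"cmd": "read_path", "args": {"path": "../../etc/passwd"}},
        {"cmd": "read_path", "token": "tok-0123456789abcdef",
         "args": {"path": "../../etc/passwd"}},
        {"cmd": "read_path", "token": "tok-0123456789abcdef",
         "args": {"path": "states/top.sls"}},
    ]
    for req in probes:
        print(req["cmd"], "unsafe:", attempt(master, req, unsafe_dispatch),
              "hardened:", attempt(master, req))
```

Run as a script, the unsafe dispatcher hands the root key to an anonymous caller and resolves "../../etc/passwd" to /etc/passwd, while the hardened one refuses the unknown method, demands a token for read_path, and rejects the traversal even with a valid token. The allowlist is the important part of the first fix: hiding privileged methods behind naming conventions, or removing them one at a time after they are found, leaves every other unlisted method exposed.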


Dangerous SHA-1 crypto function will die in SSH linking millions of computers (Ars Technica)

Monty Solomon <monty@roscom.com>
Sat, 30 May 2020 10:12:59 -0400

Lagging far behind others, SSH developers finally deprecate aging hash function.

https://arstechnica.com/information-technology/2020/05/dangerous-sha-1-crypto-function-is-about-to-die-in-ssh/


Choosing 2FA authenticator apps can be hard. Ars did it so you don't have to (Ars Technica)

Monty Solomon <monty@roscom.com>
Sat, 30 May 2020 10:21:33 -0400

Losing your 2FA codes can be bad. Having backups stolen can be worse. What to do?

https://arstechnica.com/information-technology/2020/05/choosing-2fa-authenticator-apps-can-be-hard-ars-did-it-so-you-dont-have-to/


Twitter's decision to label Trump's tweets was two years in the making (WashPost)

Monty Solomon <monty@roscom.com>
Fri, 29 May 2020 23:50:37 -0400

The social media giant for the first time this week labeled three of the president's tweets

https://www.washingtonpost.com/technology/2020/05/29/inside-twitter-trump-label/

Also,

Twitter Had Been Drawing a Line for Months When Trump Crossed It

Inside the company, one faction wanted Jack Dorsey, Twitter's chief, to take a hard line against the president's tweets while another urged him to remain hands-off.

https://www.nytimes.com/2020/05/30/technology/twitter-trump-dorsey.html


The Underground Nuclear Test That Didn't Stay Underground (Atlas Obscura)

Gabe Goldberg <gabe@gabegold.com>
Fri, 29 May 2020 23:21:33 -0400

Three and half minutes into the test, it was clear that something had gone wrong.

At 7:30 a.m. on 18 Dec 1970, the Baneberry test began at the Nevada Test Site. A nuclear bomb had been lowered into a hole a little more than seven feet in diameter. More than 900 feet underground, the bomb—relatively small for a nuclear bomb—was detonated.

Less than a decade before, after the U.S. signed onto the Partial Test Ban Treaty, nuclear testing had gone underground. The treaty was meant to stop the venting of nuclear materials into the atmosphere and limit human exposure to radioactive fallout. But the Baneberry test, named for a desert shrub, did not go as planned.

https://www.atlasobscura.com/articles/do-underground-nuclear-tests-have-fallout


Re: Misinformation (Walker, RISKS-31.91)

Henry Baker <hbaker1@pipeline.com>
Sat, 30 May 2020 09:30:06 -0700

Re: “I'm sure that those making professional use of MC methods know all about …”

Andy Walker is certainly correct that slow convergence of Monte Carlo methods can be improved through various mitigation techniques, including “biasing” techniques.

However, his assumption that those behind the Imperial model “know all about …” may be unreasonably generous, as the Imperial model has already been shown to produce dramatically varying results depending upon the random numbers used. If these mitigation techniques had worked well in the Imperial model, this dependence on the particular sequence of random numbers should have averaged out over enough runs, but they didn't.

Both my toy “Bernoulli” model and my toy lognormal model for the product of independent random samples have closed form solutions, so toy systems can often be mathematically tractable when a more “realistic” model such as the Imperial model cannot be. I claim that attempting Walker's mitigations for the Imperial model would require a proof that the mitigations only improve convergence and would not change the eventual answers.

Walker has still not addressed the basic mathematical fact that distributions with gigantic variances have no useful predictive value, and hence do not fit the definition of ‘science’.

E.g., my toy Bernoulli product model can be represented exactly with a probability generating function:

$$G(z,p,q,a,b,n) = \sum_{i=0}^{n} \binom{n}{i}\, p^{i} q^{n-i}\, z^{a^{i} b^{n-i}}$$

where p=1/100, q=99/100, a=98, b=2, n=10.

Mean(G):

$$(b q + a p)^{10}$$

I.e., mean^10 of a single Bernoulli sample, as expected.

With p=1/100, q=99/100, a=98, b=2, this mean is:

$$\frac{4923990397355877376}{95367431640625} \approx 51631.78154897835$$

Var(G), p=1/100, q=99/100, a=98, b=2:

$$\frac{909494701748682556481786171327006234749251354624}{9094947017729282379150390625}$$

rounded to an integer is:

$$99999999997334159134 \approx 10^{20}$$

This is an astoundingly high variance, which indicates that the probability density is almost zero almost everywhere.

Similarly, my toy lognormal distribution L(m,v):

$$\frac{e^{-\frac{(\log x - m n)^{2}}{2 n v}}}{\sqrt{2}\,\sqrt{\pi}\,\sqrt{n v}\; x}$$

has mean:

$$e^{\frac{n v}{2} + m n} \approx 51631.78154897708$$

and variance:

$$\left(e^{n v} - 1\right) e^{n v + 2 m n} \approx 9.9999999997 \times 10^{19}$$

The value of the lognormal pdf at the mean is:

$$\frac{e^{-\frac{5 n v}{8} - m n}}{\sqrt{2}\,\sqrt{\pi}\,\sqrt{n v}} \approx 7.4643385877 \times 10^{-8}$$

i.e., 1/13397034, a probability density of 1 in ~14 million.

Thus, the pdf is almost flat, as well as almost infinitesimal, from some small fraction of the mean to some large multiple of the mean.

Thus, there is nothing to particularly choose the ‘mean’ over any other ‘nearby’ (or in this case, not-so-nearby) value as ‘the answer’.

This is a generic problem with exploding variances, which cannot be mitigated, because it is an essential feature/bug resulting from exponentiating large variance random variables.
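
The following is an added worked example and is not code from Baker's or Walker's posts: it computes the exact mean and variance of the toy Bernoulli product model from the closed forms (independence gives E[X^k] = E[Y^k]^n for a single factor Y), tabulates the terms of the generating function to show which outcomes carry the mean, fits the moment-matched lognormal L(m,v) and evaluates its density at the mean, and then runs plain Monte Carlo with a few seeds to show the estimates the thread is arguing about. Parameters are Baker's (p=1/100, a=98, b=2, n=10); the run count and seeds are arbitrary.

```python
"""Exact moments of the toy Bernoulli product model versus Monte Carlo estimation.

Added example; not code from the RISKS thread.  X is the product of n
independent factors, each equal to a with probability p and to b with
probability q = 1 - p (Baker's values: p = 1/100, a = 98, b = 2, n = 10).
Independence gives E[X^k] = E[Y^k]^n for a single factor Y, so the mean and
variance have closed forms without expanding the generating function.
"""
import random
from fractions import Fraction
from math import comb, exp, log, log1p, pi, sqrt

P, A, B, N = Fraction(1, 100), 98, 2, 10   # Baker's parameters


def exact_moments(p, a, b, n):
    """Exact (mean, variance) of X as Fractions."""
    p, a, b = Fraction(p), Fraction(a), Fraction(b)
    q = 1 - p
    mean = (a * p + b * q) ** n              # E[X]   = E[Y]^n
    second = (a * a * p + b * b * q) ** n    # E[X^2] = E[Y^2]^n
    return mean, second - mean * mean


def outcome_table(p, a, b, n):
    """Rows (k, probability, value, contribution to E[X]) for k factors equal to a.

    These are the terms of the generating function G: the coefficient of
    z^(a^k b^(n-k)) is binomial(n, k) p^k q^(n-k).
    """
    p, a, b = Fraction(p), Fraction(a), Fraction(b)
    q = 1 - p
    rows = []
    for k in range(n + 1):
        prob = comb(n, k) * p ** k * q ** (n - k)
        value = a ** k * b ** (n - k)
        rows.append((k, prob, value, prob * value))
    return rows


def rare_share(p, a, b, n, threshold):
    """(share of E[X], total probability) of outcomes individually rarer than threshold."""
    rows = outcome_table(p, a, b, n)
    mean = sum(r[3] for r in rows)
    rare = [r for r in rows if r[1] < threshold]
    return sum(r[3] for r in rare) / mean, sum(r[1] for r in rare)


def lognormal_fit(mean, var, n):
    """(m, v) with log X ~ Normal(n m, n v) matching the given mean and variance."""
    nv = log1p(var / mean ** 2)      # var / mean^2 = e^{nv} - 1
    mn = log(mean) - nv / 2          # mean = e^{mn + nv/2}
    return mn / n, nv / n


def lognormal_moments(m, v, n):
    """Mean and variance of the lognormal L(m, v) with n factors."""
    nv, mn = n * v, n * m
    return exp(nv / 2 + mn), (exp(nv) - 1) * exp(nv + 2 * mn)


def lognormal_pdf(x, m, v, n):
    """Density of L(m, v) at x."""
    nv, mn = n * v, n * m
    return exp(-(log(x) - mn) ** 2 / (2 * nv)) / (sqrt(2 * pi * nv) * x)


def monte_carlo_mean(p, a, b, n, runs, seed):
    """Plain Monte Carlo estimate of E[X] from `runs` simulated products."""
    rng = random.Random(seed)
    p = float(p)
    total = 0
    for _ in range(runs):
        x = 1
        for _ in range(n):
            x *= a if rng.random() < p else b
        total += x
    return total / runs


if __name__ == "__main__":
    mean, var = exact_moments(P, A, B, N)
    print("exact mean    ", float(mean))
    print("exact variance", float(var))
    share, mass = rare_share(P, A, B, N, Fraction(1, 1000))
    print(f"{float(share):.1%} of the mean sits on outcomes of total probability {float(mass):.2e}")
    m, v = lognormal_fit(float(mean), float(var), N)
    print("lognormal pdf at the mean", lognormal_pdf(float(mean), m, v, N))
    for seed in range(5):   # estimates swing by tens of thousands between seeds
        print("Monte Carlo, 20000 runs, seed", seed,
              monte_carlo_mean(P, A, B, N, 20_000, seed))
```

Running it reproduces the numbers in the post (mean 51631.78…, variance rounding to 99999999997334159134, lognormal density about 7.46e-8 at the mean) and adds the reason the plain Monte Carlo average misbehaves: roughly 70% of the exact mean comes from outcomes with three or more 98-factors, whose combined probability is about 1.1e-4, so a sample of tens of thousands of runs sees only a handful of them and its average lands far from 51631 and moves by tens of thousands from one seed to the next. That is the slow convergence Walker's biasing techniques are meant to address, and the table makes the target of such biasing explicit.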


Re: Misinformation (Baker, RISKS-31.91)

Andy Walker <anw@cuboid.me.uk>
Sat, 30 May 2020 22:44:07 +0100

On 30/05/2020 17:30, Henry Baker wrote:

> Walker has still not addressed the basic mathematical fact that
> distributions with gigantic variances have no useful predictive value, and
> hence do not fit the definition of ‘science’.

That, surely, depends on what you are trying to predict? Many of the properties of the current pandemic can be modeled with a pencil and the back of an envelope—as indeed we have almost been doing in this thread.

> Thus, the pdf is almost flat, as well as almost infinitesimal, from some
> small fraction of the mean to some large multiple of the mean.

In the real world, this is, rather, evidence that the model has broken down.

> This is a generic problem with exploding variances, which cannot be
> mitigated, because it is an essential feature/bug resulting from
> exponentiating large variance random variables.

OK, but that still doesn't mean that we can't do anything useful with the result. It just means that you have an unstable or even chaotic model in terms of predicting means and variances; there may be other properties of the model that are relatively easy to get at. In addition, if the theory of “superspreaders” is anything like correct, then that gives us a target — viz to identify them and/or the situations in which they superspread [such as schools, restaurants, prisons, care homes or football matches], which is a first step towards doing something about it other than locking down the entire population.


Re: Zoom security / updates / crypto

Monty Solomon <monty@roscom.com>
Fri, 29 May 2020 23:55:37 -0400

Reminder on Zoom 5.0 — update your clients before May 30

Zoom 5.0 became generally available on April 27, and a system-wide account enablement to AES 256-bit GCM encryption will occur on May 30, 2020. Only Zoom clients on version 5.0 or later, including Zoom Rooms, will be able to join Zoom Meetings starting that day. We urge all users to update to Zoom 5.0 or higher today, if you have not done so already.
