Quantum computers are likely to require significant shielding to sustain qubit coherence while they compute. Many designs use Josephson junctions to host qubit state, and the junctions are sensitive to perturbations from many sources, including seismic tremors, thermal noise, and x-rays. How deep a basement will be needed, with tremor isolation, ionization shields, and the like, is yet to be determined. This experiment demonstrates the potential for qubit decoherence when high-energy photons strike.
I am reminded of a similar issue affecting the “old” silicon-based supercomputers: these massively parallel machines consist of separate physical memory and CPU modules, interconnected via a fast message-passing network. See https://spectrum.ieee.org/computing/hardware/how-to-kill-a-supercomputer-dirty-power-cosmic-rays-and-bad-solder.
The memory modules are prone to cosmic-ray strikes. The incident radiation flips memory bits, often permanently disabling a module. Extended computations (protein folding, nuclear-weapon stockpile simulation, etc.) crash, and so does the machine, until triage can disable a row/column of physical memory.
“In the summer of 2003, Virginia Tech researchers built a large supercomputer out of 1,100 Apple Power Mac G5 computers. They called it Big Mac. To their dismay, they found that the failure rate was so high it was nearly impossible even to boot the whole system before it would crash.”
“The problem was that the Power Mac G5 did not have error-correcting code (ECC) memory, and cosmic ray-induced particles were changing so many values in memory that out of the 1,100 Mac G5 computers, one was always crashing. Unusable, Big Mac was broken apart into individual G5s, which were sold one by one online. Virginia Tech replaced it with a supercomputer called System X, which had ECC memory and ran fine.”
The driver was charged with a violation of the state's “move over” law and with having a television in the car.
Elon Musk confirmed Thursday night that a ransomware gang had approached a Gigafactory employee with alleged promises of a big payout.
The DarkSide operators are just the latest group to adopt a veneer of professionalism, while at the same time escalating the consequences of their attacks.
Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid's parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime.
Many companies use Sendgrid to communicate with their customers via email, or else pay marketing firms to do that on their behalf using Sendgrid's systems. Sendgrid takes steps to validate that new customers are legitimate businesses, and that emails sent through its platform carry the proper digital signatures that other companies can use to validate that the messages have been authorized by its customers.
But this also means when a Sendgrid customer account gets hacked and used to send malware or phishing scams, the threat is particularly acute because a large number of organizations allow email from Sendgrid's systems to sail through their spam-filtering systems.
To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click.
Dealing with compromised customer accounts is a constant challenge for any organization doing business online today, and certainly Sendgrid is not the only email marketing platform dealing with this problem. But according to multiple emails from readers, recent threads on several anti-spam discussion lists <https://email@example.com&q=sendgrid> <https://cwiki.apache.org/confluence/display/SPAMASSASSIN/MailingLists>, and interviews with people in the anti-spam community, over the past few months there has been a marked increase in malicious, phishous and outright spammy email being blasted out via Sendgrid's servers. […]
Fortunately a fix is on the way.
Microsoft is currently testing a fix for a Windows 10 bug that could cause the operating system to defragment solid-state drives (SSDs) more often than needed. While periodic defragging of a mechanical hard disk drive (HDD) is a good thing, doing it too often to an SSD can actually degrade its integrity and shorten its lifespan. […]
As spotted by Bleeping Computer, when Microsoft rolled out the May 2020 update for Windows 10, it introduced a bug into the Optimize Drives feature, causing it to incorrectly determine when a drive was last optimized. When you open it up, you might notice your SSD says “Needs optimization” even if the routine was recently run (Windows 10 handles this automatically).
Correct address placement on the map is how the ambulance finds your house in rural areas. (And no, I'm not talking about G**gle Maps; I'm talking about official (Taiwan) government e-maps.)
(Junior, a/k/a me, has taken it upon himself to be the unsung local hero, saving many potential lives as usual, asking for justice for those precarious misplaced address nodes scattered on the hills on the e-map on my computer screen.)
So how are these addresses born? Well the applicant brings his stack of property deeds to the Household Bureau office, and, well, we behind the desk need to fill in a parcel number on the application form, so, well, just grab one of the deeds and use that parcel number. Oh yes, visit the site and take photographs for the records. So new address 35 is recorded as being located on parcel 1234 instead of actual 1240…
So now, twenty years later, some addresses are located in orchards or on slopes hundreds of meters from where their actual houses are. Yup, those parcel numbers are what we now use to place the address nodes on the spanking new e-maps. What was once some dusty number in a ledger has become a two-dimensional point on an e-map.
“That's the parcel number they applied with. They need to bring in their documents to the office if they want to change the location.”
Problem is, to the homeowner there is nothing wrong with their address, happily attached to their house. And indeed, let's say they are highly literate (and still alive). Well, they still often won't be able to tell you which of their stack of title deeds is the one referring to their house's land vs. the one referring to their orchard.
Also who is going to tell lots of average citizens they need to march down to the Household Bureau to correct some internal coordinate problem on some obscure e-map? “Didn't your office take enough photos of my house back when I applied already?”
Anyway, my suggestion to the Household Bureau is to simply connect to the Land Bureau's computer and see if (thankfully usually still after all those years) the parcel with the house belongs to the same person as the parcel with the orchard, and update the records accordingly.
Or, just let it slide. And be blamed when the ambulance can't find somebody in need.
And then there are those address nodes that ended up on nobody's land in the middle of the creek…
“Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called ‘algorithmic management.’ It carries a host of risks in depersonalizing management systems and entrenching pre-existing biases.”
The essay cites an example of an algorithm at work:
“At Amazon's fulfillment centre in south-east Melbourne, they set the pace for ‘pickers’, who have timers on their scanners showing how long they have to find the next item. As soon as they scan that item, the timer resets for the next. All at a ‘not quite walking, not quite running’ speed.”
Reminiscent of “John Henry” (see https://en.wikipedia.org/wiki/John_Henry_(folklore)). Would the algorithm increase the interval if a picker tripped or was injured during item fulfillment? How does/would it learn of these outcomes? Are there feedback variables that account for injuries? At what frequency is the algorithm adjusted to account for under-fulfillment or over-fulfillment? Does the employee receive better or worse compensation?
“An interim report issued by the DHS task force last year laid out a number of data points that could be useful in sniffing out supply chain threats, such as information around counterfeit parts, malicious code inserted into software and tips about insider threats or physical attacks on participants or products in the chain. It also found that intelligence around this area was ‘unique’ and that ‘actionable information often requires a level of specificity which may create sensitivities about how it is shared’ that lead to ‘a range of legal considerations that ICT stakeholders must navigate.’”
Recall Kaspersky Lab's anti-virus product, and the door it opened to inspect a machine for AV diagnosis (see https://catless.ncl.ac.uk/Risks/30/48#subj10.1). Likewise the deployment risks from Huawei and ZTE network products, etc.
Risk: disclosing a vendor's identity from a suspected product without sufficient evidence can be libelous.
>> Competition between car makers to see who can provide us the most
>> distraction moves the industry in exactly the wrong direction!
Especially when, as in our car, the entertainment system is buggy, so the driver spends far too much time FIXING the system's screw-ups when they should be concentrating on driving …
> but turn indicators are manual, and apparently still considered optional
> by whole tribes of road users.
That's assuming it isn't the car at fault - a light touch on the indicator stalk will cause it to flash three times to indicate a lane change, but I regularly approach a junction, indicate, and it might flash ONCE before auto-canceling! If I'm concentrating on an unfamiliar junction and an uncooperative sat-nav, I don't need the additional grief of an auto-cancel mechanism that keeps killing the indicators. I know on many occasions, driving in a roughly straight line, I've had to indicate four or five times approaching the junction because the indicator just won't stay on!
In saying that “It is not even possible to brake without brake lights flaring”, Peter Houppermans ignores the potential to brake using only the handbrake, which does not (in the vast majority of vehicles) cause the brake light(s) to illuminate.
Whilst I happily argue that too many designers have taken the versatility of modern materials to the point that form trumps function, I do feel that the modern “growing” turn indicator light is more reliable at indicating the intended direction, especially at night and given the greater brilliance of other exterior lights.
The above assumes that drivers use their indicators in sufficient time; however many seem to use them solely to remind themselves what they just did, rather than to advise other drivers of their intentions. When I did my Class 1 driver training with Greater Manchester Police Driving School it was impressed upon me that hand signals, indicators, brake lights and car positioning were all about communicating one's intentions to other road users. Of course that assumes others actually pay sufficient attention, which increasingly seems to be less the case than when I started driving.
As to the drivers of some cars not using indicators, it has been reported that Audi and BMW dealers no longer stock replacement bulbs because of a lack of demand.
Possibly one of the most trivial postings to RISKS for a while… My problem is that many cars have parking/tail/turn-signal/back-up/rear-fog lights crammed into small light clusters, so if the driver brakes and signals at the same time, which happens quite often, it can be difficult to see the flashing signal light against the steady bright brake light. Some buses have LED lights which just show colourless white when off, so there's little contrast between off and on indications. I haven't driven overseas very much, but at least with the American system of a big red rear brake/signal light at each side it's less ambiguous, though if only one side of the vehicle is visible (e.g., in a line of traffic), then it's not immediately obvious if the driver has tapped the brakes or has started signaling. There's the same problem with 4-way emergency hazard flashers: if (again) the vehicle is only visible at one side, it's not clear whether the hazard flashers or turn signals are indicating.
> A Chrome feature is creating enormous load on global root DNS servers
Someone hasn't been paying attention. The ICANN Name Collision report written seven years ago in 2013 said the exact same thing:
See section 5.4.3 on page 48. At that point the Chrome random names were 46% of all root server traffic (see table 12 on the previous page.)
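For context, Chromium's intranet-redirect detector issues a handful of DNS queries for random single-label hostnames at startup; because such names have no delegation anywhere, recursive resolvers forward them to the root servers, which can only answer NXDOMAIN. A minimal sketch of how such probe names could be generated (the 7-15 character range matches published descriptions of the behavior; the function name is my own):

```python
import random
import string

def chrome_style_probe_labels(n=3):
    """Generate random single-label hostnames resembling the probes
    Chromium's intranet-redirect detector sends at startup.
    (Hypothetical reconstruction, not Chromium's actual code.)"""
    labels = []
    for _ in range(n):
        length = random.randint(7, 15)  # label length varies per probe
        labels.append("".join(random.choices(string.ascii_lowercase, k=length)))
    return labels

# Single-label names like these are never cached as delegations, so
# every probe that escapes the local resolver lands on the roots --
# hence the traffic share reported in the ICANN study.
print(chrome_style_probe_labels())
```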
[firstname.lastname@example.org responded to PH on this item:]
> It would be interesting to know exactly what the “risks to the public in
> computers and related systems” are perceived to be in this item.
Easy. Three arguments:
The benefit of the RISKS mailing list has been, for many years, that it brings together professionals in all walks of life sharing experiences which may not always fall inside the original defined purpose, but which have a connection, even if at best a tangential one. It allows people to widen their perspective.
The items in question show a SYSTEM (hey, look, another subject-matter hit) seeking to repair itself as the pesky humans involved still try to do the right thing. That's educational and instructive, hence another argument for its inclusion.
For the record, one of the reasons I try to read every RISKS is exactly because it has remained diverse.
I, for one, hope it remains that way.
Maybe Godwin's Law should be updated: “as an online discussion grows longer, the probability of its deteriorating into pro-Trump / anti-Trump tirades approaches 1.”
In a polarized society, the bureaucrats who operate the machinery of democracy are taking flak from all sides. More than 20 have resigned or retired since March 1, thinning their ranks at a time when they are most needed.
> Richard Stein wonders what will become of Florida's release
> of a genetically engineered mosquito intended to combat Dengue Fever.
Cheap shots are fun, but in this case the costs of doing nothing are substantial.
The Aedes mosquitoes they are targeting are dangerous to people. They spread yellow fever, dengue, chikungunya, and zika. While you may not be familiar with these diseases, people all over the tropics are. The Keys have had outbreaks of dengue, which is always miserable and at its worst crippling or fatal. A friend of mine who lives in Central America had chikungunya, one of the less serious ones, and it made him unable to work for the better part of a year. You don't want to know what yellow fever is like.
At this point the mosquito treatment is to spray pesticides into breeding areas. We know what the consequences of that are, killing other desirable insects and polluting the shallow waters around the Keys.
The mosquitoes they'll be releasing are males (only females bite) that produce offspring that die before they mature unless they have tetracycline in their diet, which in the wild they don't. They've been released in Brazil and other places and knocked down Aedes populations by 95%.
While it is certainly possible that there is some effect that nobody has noticed yet, it's a lot more likely that they'll do what's expected: kill mosquitoes and prevent disease without toxic pesticides.
Looking at the reports about local opposition, I don't see anything beyond genetically engineered == scary == bad along with some garbled complaints about the way the mosquitoes were created.
Seems like maybe The History Channel should start reruns of “Life After People,” the two-season series that explored a world in which all people just suddenly disappear. The story explicitly says it does not give a reason why we vanished, just the aftermath.
“Welcome to earth: Population: zero.”
The show examines possible results after a year, five years, ten, twenty, a hundred, and so on, out to 10,000 years from now: buildings turning to rust and crumbling, refinery explosions, nuclear power plant meltdowns, family pets trapped indoors and sometimes dying, domesticated animals going extinct because they needed humans to breed them, and cities becoming jungle as flora overtakes them.
In the end, the world will erase every trace of our existence. The show even explores this by visiting cities and settlements abandoned 20 to 50 years ago, where the process of decay is already well along.
The show was created after the 2008 special of the same name did extremely well.
> [With Greenland undergoing massive irreversible glacier melt, we can
> expect a corresponding effect of fiddling while Nome burned. PGN]
Massive? No. Large in absolute size, but not in relation to the whole.
Willis Eschenbach, 3 Aug 2019 https://wattsupwiththat.wpcomstaging.com/2019/08/03/greenland-endures/
From that data, we find that the 1981 to 2010 thirty-year average mass balance for the Greenland ice sheet was a net loss of 103 billion tonnes. Again, this is a very large number; it seems like a big deal that would demand our attention. But is it really?
In order to ask the question “How big is 103 billion tonnes?”, we have to ask a related question:
Compared to what?
In this case, the answer is, “Compared to the total amount of ice on Greenland”.
Here's one way of looking at that. We can ask, if Greenland were to continue losing ice mass at a rate of 103 billion tonnes per year, how long would it take to melt say half of the ice sheet? Not all of it, mind you, but half of it. (Note that I am NOT saying that extending a current trend is a way to estimate the future evolution of the ice sheet—I'm merely using it as a way to compare large numbers.)
To answer the question of whether 103 billion tonnes lost per year is a big number, we have to compare the annual ice-mass loss to the amount of ice
in the Greenland ice sheet. The Greenland ice sheet contains about 2.6E+15 (2,600,000,000,000,000) tonnes of water in the form of snow and ice.
So if the Greenland ice sheet were to lose 103 billion tonnes per year into the indefinite future, it would take about twelve thousand five hundred years to lose half of it.
In other terms, the Ice Cap is losing 3.96×10⁻⁵ of its mass every year, or 0.00396% per year. Scary number, that is.
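Those two figures are easy to check; a quick sketch in Python using only the numbers quoted above:

```python
# Figures quoted in the text above.
total_ice_tonnes = 2.6e15     # mass of the Greenland ice sheet
annual_loss_tonnes = 103e9    # 1981-2010 average annual net loss

years_to_lose_half = (total_ice_tonnes / 2) / annual_loss_tonnes
fraction_per_year = annual_loss_tonnes / total_ice_tonnes

print(round(years_to_lose_half))          # 12621 -- "about twelve thousand five hundred"
print(f"{fraction_per_year * 100:.5f}%")  # 0.00396%
```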
The effects of this are best shown graphically: Greenland Mass Balance and Greenland Total Mass. It is the latter which is the reality. See attached.
Irreversible? No. In fact, the sign of the change changes. As recently as 3,500 BP the Greenland Ice Cap was much smaller than at present.
From University of Buffalo: https://wattsupwiththat.com/2013/11/22/study-greenland-ice-sheet-was-smaller-3000-5000-years-ago-than-today/
And recently, the Jakobshavn Glacier has been found to be growing again. https://wattsupwiththat.com/2019/06/19/if-greenland-is-catastrophically-melting-how-do-alarmists-explain-nasas-growing-greenland-glacier/
Both Robinson (RISKS-32.22) and Mathisen (RISKS-32.23) seem to have forgotten that midnight is not the only possible time for problems to occur.
The version suggested by Terje Mathisen (RISKS-32.23) is one that I have used.
I started working at a monitoring site many years ago that had a program that needed to do some data logging once per minute (on the minute). The programming environment did not have a single function that returned both date and time, so date and time needed to be obtained independently. The logic in the program was:
Yes, they really did do multiple DATE and TIME calls. Probably easier (lazier?) to program year(now), month(now), day(now) than to create another variable to store (now) and do year, month and day on the stored value.
Someone may have thought, "the time it takes to get the date and time is so small that the chances of them not matching can be ignored." They were wrong (or never even thought about it). Yes, on any trip through the loop the chances are small, but on most trips through the loop the minute isn't going to change and nothing is done. The only loop iteration that triggers action is the small fraction where the minute DOES change. When the minute has changed, it could have happened at any point between steps 1 to 5 or going from step 5 back to 1, and the probability of each of those five intervals is about equal.
So, the chance that the minute changed between requesting the hour and requesting the minute is about 20%. And because you are doing things every minute of the day, and every hour has a minute 59, there is a 20% chance that at the end of each hour the routine is going to mess up. And you could see that in the data it requested: four or five times a day, you saw a sequence like this:
The replacement was as Terje suggested:
Step 3 avoids the date rollover. Using a single TIME call avoids the hour roll-over.
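Since the original code isn't reproduced above, here is a hypothetical sketch of both patterns in Python (the function names and details are my own, not the site's actual code):

```python
import datetime

def timestamp_racy():
    """Each field comes from a separate clock read, so a rollover between
    reads can pair, e.g., hour 13 with minute 00 of 14:00 (or mix two
    different dates at midnight)."""
    year   = datetime.datetime.now().year
    month  = datetime.datetime.now().month
    day    = datetime.datetime.now().day
    hour   = datetime.datetime.now().hour
    minute = datetime.datetime.now().minute  # the clock may have ticked over
    return (year, month, day, hour, minute)

def timestamp_consistent():
    """Read the clock once and decompose the stored value, so every
    field describes the same instant."""
    now = datetime.datetime.now()
    return (now.year, now.month, now.day, now.hour, now.minute)
```

The fix is exactly the single-snapshot idea in the text: one clock read, then derive all five fields from the stored value.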
I never saw a request for hour-old data again.
> For non-techies, physical randomization may seem more secure than
> computer-generated. But if the dice are not extremely well made, they'll
> be a bit less random than theory suggests.
No matter how well made the dice are, as they are used they will collide with each other and slowly (or quickly, depending upon the material) become more and more deformed. This means they will become less random, and each set of dice will become less random in a different way.
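A worn die's bias is detectable statistically; here is a small simulation sketch using a Pearson chi-square statistic against a fair-die hypothesis (the 30% extra weight on one face is an invented illustration, not a measured figure):

```python
import random

def chi_square_uniform(counts):
    """Pearson chi-square statistic against the hypothesis that all
    outcomes are equally likely."""
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(1)  # deterministic for the example

# Simulate a slightly worn die: face 6 comes up about 30% more often.
weights = [1, 1, 1, 1, 1, 1.3]
rolls = random.choices(range(6), weights=weights, k=60000)
counts = [rolls.count(face) for face in range(6)]

stat = chi_square_uniform(counts)
# The critical value for 5 degrees of freedom at p = 0.001 is about 20.5;
# a bias this size blows far past it given enough rolls.
print(stat > 20.5)  # True
```

With enough rolls even a small deformation becomes obvious, which is why casinos retire dice on a schedule rather than waiting for visible wear.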