An Apple employee who died after his Tesla car hit a concrete barrier was playing a video game at the time of the crash, investigators believe.
The US National Transportation Safety Board (NTSB) said the car had been driving semi-autonomously using Tesla's Autopilot software.
Tesla instructs drivers to keep their hands on the wheel in Autopilot mode.
But the NTSB said more crashes were foreseeable if Tesla did not implement changes to its Autopilot system.
The board has now published the results of a two-year investigation into the March 2018 crash.
Tesla's Autopilot software steered the vehicle into the triangular ‘gore area’ at a motorway intersection and accelerated into a concrete barrier.
Darwin wins again.
As discovered in the first beta of iOS 13.4, Apple is working on a new ‘CarKey’ feature that will allow an iPhone or an Apple Watch to unlock, lock, and start NFC-compatible vehicles.
Computer security experts continue to express doubts that expensive new voting machines are reliable, considering them almost as risky as earlier discredited electronic systems. Called ballot-marking devices, the machines have touchscreens for registering voter choices and print out paper records scanned by optical readers. South Carolina voters will use the systems, which are at least twice as expensive as the hand-marked paper ballot option, in Saturday's primary.

Daniel Lopresti, a computer scientist at Lehigh University and a South Carolina election commissioner, said, “What we worry is, what happens the next time if there's a programming bug, or a hack or whatever, and it's done in a way that's not obvious?” Said the University of South Carolina's Duncan Buell, “I don't know that we've ever seen an election computer, a voting computer, whose software was done to a high standard.”
Microsoft has come up with a new electronic voting system, called ElectionGuard.
(Yes, OK, that Microsoft. But it does sound possible.)
First off, this is not online or remote voting. This is a vote tabulation system. You vote on a device, a memory card is read and counted, and you get a paper record of your vote. The individual votes are encrypted using homomorphic encryption (probably a version of Rivest's ThreeBallot scheme: https://en.wikipedia.org/wiki/ThreeBallot).
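The homomorphic property is what lets encrypted ballots be added up without decrypting any individual vote. As a hedged illustration only (ElectionGuard's actual construction differs, and these toy parameters are far too small to be secure), here is the additive homomorphism of a minimal Paillier cryptosystem:

```python
import math
import random

# Toy Paillier parameters; real deployments use ~2048-bit moduli.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                  # standard generator choice
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)        # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each ballot encrypts 1 (a vote for the candidate) or 0 (no vote).
ballots = [encrypt(v) for v in [1, 0, 1, 1, 0, 1]]

# Tallying multiplies ciphertexts; decryption reveals only the sum.
tally = 1
for c in ballots:
    tally = (tally * c) % n2

print(decrypt(tally))  # 4 votes, without decrypting any single ballot
```

The point is that officials can verify and publish the encrypted tally while individual ballots stay sealed; only the aggregate is ever decrypted.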
ElectionGuard is open source, so I imagine that electronic voting researchers will be looking under the hood. I'd like to know how you prevent election officials from reading the printouts that voters receive (but that's more a matter of training and process). I'd like to know how many random challenges you make, taking real votes and checking to see if they've been tabulated properly. (There are likely some legal issues in that regard.)
But it does sound promising.
Feds say defendant used Amazon servers to wage DDoS attacks that cost the rival campaign.
Story by Donie O'Sullivan, CNN Business Video by Richa Naik and Craig Waxman
Updated 1257 GMT (2057 HKT) February 28, 2020
Andrew Walz calls himself a proven business leader and a passionate advocate for students. Walz, a Republican from Rhode Island, is running for Congress with the tagline, “Let's make change in Washington together,” or so his Twitter account claimed.
Earlier this month, Walz's account received a coveted blue checkmark from Twitter as part of the company's broader push to verify the authenticity of many Senate, House and gubernatorial candidates currently running for office. Twitter has framed this effort as key to helping Americans find reliable information about politicians in the leadup to the 2020 election.
But there's just one problem: Walz does not exist. The candidate is the creation of a 17-year-old high school student from upstate New York, CNN Business has learned.
The student, whom CNN Business spoke to with the permission of his parents and agreed not to name because he is a minor, said he was ‘bored’ over the holidays and created the fake account to test Twitter's election integrity efforts.
Not long ago, curator Natalie Luvera began to worry about the strangest item in the National Atomic Testing Museum's collection of artifacts — a tiny 1920s device designed to restore lost manhood by irradiating the manliest of human body parts.
Was the gold-plated scrotal radiendocrinator still dangerous after nearly a century? Luvera tested it with a Geiger counter, got a worrisome reading and called in a radioactivity response team to double-check. “They came down and said, ‘Nope, you shouldn't have that here.’” […]
The device was the brainchild of an extraordinary quack named William J.A. Bailey, who liked to describe radiation as eternal sunshine. He also hawked bottles of Radithor — certified radioactive water — that were touted as a cure-all for disorders such as impotence and fatigue.
…that's a great museum, BTW.
It works over a longer distance and without the need to be in line-of-sight.
Researchers have discovered a new means to target voice-controlled devices by propagating ultrasonic waves through solid materials in order to interact with and compromise them using inaudible voice commands without the victims' knowledge.
Called SurfingAttack, <https://surfingattack.github.io/papers/NDSS-surfingattack.pdf> the attack leverages the unique properties of acoustic transmission in solid materials — such as tables — to “enable multiple rounds of interactions between the voice-controlled device and the attacker over a longer distance and without the need to be in line-of-sight.”
In doing so, the researchers wrote, an attacker can interact with the devices through their voice assistants, hijack SMS two-factor authentication codes, and even place fraudulent calls, all while controlling the victim's device inconspicuously.
The research was published by a group of academics from Michigan State University, Washington University in St. Louis, Chinese Academy of Sciences, and the University of Nebraska-Lincoln.
The results were presented at the Network Distributed System Security Symposium (NDSS) on February 24 in San Diego.
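The core trick, amplitude-modulating a command onto an ultrasonic carrier and relying on the microphone's nonlinearity to shift it back into the audible band, can be sketched numerically. This is a toy simulation: the carrier frequency, modulation depth, stand-in "command" tone, and filter are illustrative choices, not the paper's parameters.

```python
import numpy as np

fs = 192_000                      # sample rate high enough for ultrasound
t = np.arange(0, 0.05, 1 / fs)

# Stand-in for a voice command: a 400 Hz tone (a real attack uses speech).
baseband = np.sin(2 * np.pi * 400 * t)

# Amplitude-modulate onto a 25 kHz carrier, inaudible to humans.
fc = 25_000
carrier = np.sin(2 * np.pi * fc * t)
transmitted = (1 + 0.8 * baseband) * carrier

# A MEMS microphone's nonlinearity acts roughly like a squaring term,
# which produces a copy of the baseband in the audible range.
nonlinear = transmitted ** 2

# Crude low-pass filter (moving average) models the mic's band limit,
# discarding the carrier-frequency components.
kernel = np.ones(256) / 256
recovered = np.convolve(nonlinear - nonlinear.mean(), kernel, mode="same")

# The recovered signal correlates strongly with the original command.
corr = np.corrcoef(recovered, baseband)[0, 1]
print(round(corr, 2))
```

Because the carrier itself is above human hearing, the victim hears nothing while the device's microphone "hears" the demodulated command.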
How Does the SurfingAttack Work? […]
On Thursday, 20 February 2020, EAS units at WAVE Broadband in Washington state were compromised and sent at least three unapproved EAS alerts to more than 3,000 cable subscribers.
At least one family took the warning to heart. A viewer wrote to KING 5 and said, “We experienced an hour of pure terror. We evacuated our house with our dogs and drove to Sequim to my parents. Wondering when and if we would die.”
“A lot of problems happen when these are first put in because there's a default password and if somebody knows the default password and there hasn't been time for an organization to change the default password, those can easily be hacked,” Nealey said.
Hackers don't break in, they log in.
That mantra, often repeated by security experts, represents a rule of thumb: The vast majority of breaches are the result of stolen passwords, not high-tech hacking tools.
These break-ins are on the rise. Phishing scams—in which attackers pose as a trustworthy party to trick people into handing over personal details or account information—were the most common type of Internet crime last year, according to a recent FBI report <https://www.fbi.gov/news/pressrel/press-releases/fbi-releases-the-internet-crime-complaint-center-2019-internet-crime-report>. People lost more than $57.8 million in 2019 as the result of phishing, according to the report, with over 114,000 victims targeted in the US.
And as phishing becomes more profitable, hackers are becoming increasingly sophisticated in the methods they use to steal passwords, according to Tanmay Ganacharya, a principal director in Microsoft's Security Research team.
“Most of the attackers have now moved to phishing because it's easy. If I can convince you to give me your credentials, it's done. There's nothing more that I need,” Ganacharya told Business Insider.
Ganacharya monitors phishing tactics in order to build machine-learning systems that root out scams for people using Microsoft services, including Windows, Outlook, and Azure, Microsoft's cloud computing service. This week, Microsoft announced <https://blogs.microsoft.com/blog/2020/02/20/delivering-on-the-promise-of-security-ai-to-help-defenders-protect-todays-hybrid-environments/> that it will begin selling its threat-protection services for platforms including Linux, iOS, and Android.
Ganacharya spoke to Business Insider about the trends in phishing that his team has observed. Many of the tactics aren't new, but he said attackers are constantly finding new ways to work around defenses like Microsoft's threat protection. Here's what he described…
Researchers say the vulnerability impacts virtually all smartphones on the market
A security vulnerability in LTE can be exploited to sign up for subscriptions or paid website services at someone else's expense, new research suggests.
According to researchers <https://news.rub.de/english/press-releases/2020-02-17-lte-vulnerability-attackers-can-impersonate-other-mobile-phone-users> from Ruhr-Universitaet Bochum, the flaw exists in the 4G mobile communication standard and permits smartphone user impersonation, which could allow attackers to “start a subscription at the expense of others or publish secret company documents under someone else's identity.”
The research, titled IMP4GT: IMPersonation Attacks in 4G NeTworks, is the work of David Rupprecht, Katharina Kohls, Thorsten Holz, and Christina Pöpper.
The IMP4GT attack <https://imp4gt-attacks.net/> impacts “all devices that communicate with LTE,” which includes virtually all smartphones, tablets, and some Internet of Things (IoT) devices.
Software-defined radios are a key element of IMP4GT. These devices can read the communications channels between a mobile device and a base station, and by using them it is possible to trick a smartphone into treating the radio as the base station, and to dupe the network into treating the radio as the mobile phone.
Once this channel of communication is compromised, it is time to start manipulating data packets being sent between an LTE device and base station.
“The problem is the lack of integrity protection: data packets are transmitted encrypted between the mobile phone and the base station, which protects the data against eavesdropping. However, it is possible to modify the exchanged data packets. We don't know what is where in the data packet, but we can trigger errors by changing bits from 0 to 1 or from 1 to 0.”
These errors can then force a mobile phone and base station to either decrypt or encrypt messages, converting information into plaintext or creating a situation in which an attacker is able to send commands without authorization. […]
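The malleability being exploited is the classic bit-flipping property of XOR-based stream encryption: flipping a ciphertext bit flips the corresponding plaintext bit, and without an integrity check the receiver cannot tell. A minimal sketch of the property (a toy repeating-key keystream stands in for real LTE ciphering, and the URLs are made up):

```python
import os

def keystream_cipher(key, data):
    # Toy keystream: repeating key bytes stand in for a real stream
    # cipher's output. XOR encryption/decryption is the property that
    # matters here; it is shared by counter-mode ciphers.
    ks = (key * (len(data) // len(key) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(16)
plaintext = b"GET http://example.com/ HTTP/1.1"
ciphertext = keystream_cipher(key, plaintext)

# An attacker who knows (or guesses) the plaintext at some position can
# flip ciphertext bits without ever learning the key.
target = b"GET http://evil-relay.test/ HTTP"  # same length as plaintext
flipped = bytes(c ^ p ^ t for c, p, t in zip(ciphertext, plaintext, target))

# The victim decrypts the modified packet to the attacker's choice.
print(keystream_cipher(key, flipped))  # b'GET http://evil-relay.test/ HTTP'
```

This is why the researchers stress the missing integrity protection: encryption alone guarantees confidentiality against eavesdroppers, not that packets arrive unmodified.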
The White House's recent efforts to chart a national artificial intelligence (AI) policy are welcome and, frankly, overdue. Funding for AI research and updating agency IT systems is a good start. So is guidance for agencies as they begin to regulate industry use of AI. But there's a glaring gap: The White House has been silent about the rules that apply when agencies use AI to perform critical governance tasks.
This matters because, of all the ways AI is transforming our world, some of the most worrying come at the intersection of AI and the awesome power of the state. AI drives the facial recognition police use to surveil citizens. It enables the autonomous weapons changing warfare. And it powers the tools judges use to make life-changing bail, sentencing and parole decisions. Concerns about each have fueled debate and, as to facial recognition in particular, new laws banning use.
Sitting just beyond the headlines, however, is a little-known fact: AI use already is pervasive in government. Prohibition for most uses is not an option, or at least not a wise one. Needed instead is a frank conversation about how to give the government the resources it needs to develop high-quality and fairly deployed AI tools and build sensible accountability mechanisms around their use.
We know because we led a team of lawyers and computer scientists at Stanford and New York universities to advise federal agencies on how to develop and oversee their new algorithmic toolkit.
Our research <https://law.stanford.edu/education/only-at-sls/law-policy-lab/practicums-2018-2019/administering-by-algorithm-artificial-intelligence-in-the-regulatory-state/acus-report-for-administering-by-algorithm-artificial-intelligence-in-the-regulatory-state/#slsnav-report> shows that AI use spans government. By our estimates, half of major federal agencies have experimented with AI. Among the 160 AI uses we found, some — such as facial recognition—are fueling public outcries. But many others fly under the radar. The Securities and Exchange Commission (SEC) uses AI to flag insider trading; the Centers for Medicare and Medicaid Services uses it to ferret out health care fraud. The Social Security Administration is piloting AI tools to help decide who gets disability benefits, and the Patent and Trademark Office to decide who gets patent protection.
Still other agencies are developing AI tools to communicate with the public, by sifting millions of consumer complaints or using chatbots to field questions from welfare beneficiaries, asylum seekers and taxpayers.
Our research also highlights AI's potential to make government work better and at lower cost. AI tools that help administrative judges spot errors in draft decisions can shrink backlogs that leave some veterans waiting years <https://www.militarytimes.com/news/2018/09/10/watchdog-report-the-va-benefits-backlog-is-higher-than-officials-say/> (sometimes, close to a decade) for benefits. AI can help ensure that the decision to launch a potentially ruinous enforcement action does not reflect the mistakes, biases, or whims of human prosecutors. And AI can help make more precise judgments about which drugs threaten public health.
But the picture is not all rosy. […]
New Mexico's attorney general sued Google on Thursday, saying the tech giant used its educational products to spy on the state's children and families.
Google collected a trove of students' personal information, including data on their physical locations, websites they visited, YouTube videos they watched and their voice recordings, Hector Balderas, New Mexico's attorney general, said in a federal lawsuit.
“The consequences of Google's tracking cannot be overstated: Children are being monitored by one of the largest data mining companies in the world, at school, at home, on mobile devices, without their knowledge and without the permission of their parents,” the lawsuit said.
Over the last eight years, Google has emerged as the predominant tech brand in American public schools <https://cdn.vox-cdn.com/uploads/chorus_asset/file/19734145/document_5.pdf>, outpacing rivals like Apple and Microsoft by offering a suite of inexpensive, easy-to-use tools.
Today, more than half of the nation's public schools—and 90 million students and teachers globally—use free Google Education apps like Gmail and Google Docs. More than 25 million students and teachers also use Chromebooks, laptops that run on the company's Chrome operating system, the lawsuit said.
In September, Google agreed to pay a $170 million fine to settle federal and New York State charges that it illegally harvested the personal data <https://www.nytimes.com/2019/09/04/technology/google-youtube-fine-ftc.html> of children on YouTube.
The new lawsuit, filed in U.S. District Court for the District of New Mexico, claimed that Google violated the federal Children's Online Privacy Protection Act. The law requires companies to obtain a parent's consent before collecting the name, contact information and other personal details from a child under 13.
The lawsuit also said Google deceived schools, parents, teachers and students by telling them that there were no privacy concerns with its education products when, in fact, the company had amassed a trove of potentially sensitive details on students' online activities and locations. […]
Cybersecurity researchers today uncovered a new high-severity hardware vulnerability residing in the widely-used Wi-Fi chips manufactured by Broadcom and Cypress—apparently powering over a billion devices, including smartphones, tablets, laptops, routers, and IoT gadgets.
Dubbed ‘Kr00k’ and tracked as CVE-2019-15126, the flaw could let nearby remote attackers intercept and decrypt some wireless network packets transmitted over-the-air by a vulnerable device.
The attacker does not need to be connected to the victim's wireless network and the flaw works against vulnerable devices using WPA2-Personal or WPA2-Enterprise protocols, with AES-CCMP encryption, to protect their network traffic.
“Our tests confirmed some client devices by Amazon (Echo, Kindle), Apple (iPhone, iPad, MacBook), Google (Nexus), Samsung (Galaxy), Raspberry (Pi 3), Xiaomi (RedMi), as well as some access points by Asus and Huawei, were vulnerable to Kr00k,” ESET researchers said.
According to the researchers <https://www.eset.com/int/kr00k/>, the Kr00k flaw is somewhat related to the KRACK attack <https://thehackernews.com/2017/10/wpa2-krack-wifi-hacking.html>, a technique that makes it easier for attackers to hack Wi-Fi passwords <https://thehackernews.com/2018/08/how-to-hack-wifi-password.html> protected using a widely-used WPA2 network protocol.
First, Learn What Kr00k Attack Doesn't Allow: […]
Ukraine's security service said the fake email that was supposedly from the Ministry of Health had actually been sent from outside the country.
OK, first off, to let you know that I know what I'm talking about, I put myself through university by working in the medical field, first as a practical nurse (I spent considerable time working in an isolation ward), and later as an industrial first aid attendant. (My required non-physics elective at university was medical physiology.) I've also been an emergency management volunteer for a couple of decades.
Now I've talked about security theatre in regard to COVID-19, and we are discussing other issues related to the coronavirus. But one of the things that has bugged me ever since it started hitting the news is the masks.
Masks won't keep you from getting COVID-19, or any other droplet-borne virus. (At least, they don't reduce your risk very much.) The paper face masks provide next to no protection in this regard, and the N95 masks aren't much better. Droplet-borne viruses will still get on your skin, on your face, and into your eyes, and simple daily activities make you touch your skin and face and mouth and eyes and provide the viruses a path inside. You don't need to inhale the virus to get it, and, if you do get COVID-19, it probably will be from some other pathway than inhaling it. This is why frequent (very frequent) handwashing is important. (Hand sanitizer is good, too. If you use it frequently.)
Masks are useful, if you have the virus, in preventing you giving it to other people. (Not a complete prevention, mind, but useful.) So, if you are wearing a face mask in public during this epidemic, you are making one of two statements: 1) I AM INFECTED WITH THE COVID-19 VIRUS!! or 2) I AM STUPID AND IGNORANT!!
This advice, by the way, applies to influenza as well. Which brings up another point: if you are worried about the COVID-19 virus, and still haven't yet gotten a flu shot, you are stupid and ignorant. Even in China, you are much, much more likely to get the flu than COVID-19. Even in China, the likelihood that the next person you meet will have COVID-19 is about .0001. (Probably somewhat less.) But if you go out into a crowd (if you can find a crowd in China these days), you are likely to encounter somebody with the flu. Having a flu shot probably doesn't reduce your risk of getting COVID-19, but it does reduce your risk of getting the flu. If you get the flu, then you may have to get tested for COVID-19, and that puts that much more demand on the system and resources.
Wash your hands.
If you haven't got a flu shot, get one.
Don't panic buy, hoard, or misuse masks and gloves. If you need them, you'll get them. (If other people haven't been panic buying and hoarding.) https://lite.cnn.com/en/article/h_cd175447b3f892d7adcb7c196b0b7316
Now go wash your hands.
Singapore prioritizes public health and civility. It is unwise to violate these orders, especially in a time of elevated pandemic conditions.
Thousands ordered masks that let them unlock their phones during outbreaks. But this viral art project doesn't just work with surveillance technology—it works against it, too.
Two weeks ago, Danielle Baskin had an idea for a tongue-in-cheek art project. Now, she's suddenly big in China.
While talking with friends about the coronavirus outbreak <https://www.technologyreview.com/s/615290/how-to-prepare-for-the-coronavirus-covid19/>, Baskin, an artist in San Francisco, realized that people using face masks to protect themselves from infection would have trouble unlocking phones that use facial recognition. (This has indeed been a problem <https://www.abacusnews.com/tech/facial-recognition-fails-china-people-wear-masks-avoid-coronavirus/article/3048006>.) She quickly created a prototype of a mask printed with a face—not your face, but rather unique faces of imaginary people generated using artificial intelligence <https://www.thispersondoesnotexist.com/>—and posted her idea on Twitter <https://twitter.com/djbaskin/status/1228798382598000640>: “Protect people from viral epidemics while still being able to unlock your phone.”
The demand was immediate. Those interested in the idea include cancer patients who want to customize their masks, doctors who work in children's hospitals and don't want to scare kids—and people in China. Her invention was picked up by Chinese media, and now her waiting list has over 2,000 people on it, many of them with Chinese email accounts. And it's not just a request for one or two masks each: one potential customer requested 10,000 masks. Eight people asked if they could be her distributor. Baskin won't be fulfilling these orders for a while—there's a global mask shortage right now—but the masks do work, as long as you set FaceID to recognize you when you're wearing it.
“I think these are so cool as a social object and art object,” says Robert Furberg, a researcher who studies biometrics in health care. “It's the fusion of something threatening and protective at the same time, and I just find that so compelling.” He is one of those who reached out to Baskin; his wife is a nurse and has complained about the inconvenience of masks and FaceID. For him, the demand itself is a form of social commentary: “It's just so 2020.”
But while most people are simply concerned about being able to use their phones while wearing a mask, they may discover a surprising bonus. Baskin says there's an element of anti-surveillance built in. “[The mask] appears to be working with facial recognition, but it will never actually be your face,” she says. It's tricking the technology and protecting your biometric information: “The image is something your friends could identify as you but that machine learning can't, and it shows that face recognition has errors.”

Art against surveillance
Arty anti-surveillance devices and techniques have become more popular in recent years, from anti-facial-recognition face paint to an invisibility cloak <https://arxiv.org/abs/1910.14667> that can block object detectors; from the Adversarial Fashion line that confuses automated license plate readers <https://www.technologyreview.com/f/614175/a-new-clothing-line-confuses-automated-license-plate-readers/> to the simple face masks that protesters in Hong Kong and India have used to hide their face from cameras. The media reports breathlessly <https://www.businessinsider.com/clothes-accessories-that-outsmart-facial-recognition-tech-2019-10#images-from-echizens-lab-shows-how-the-visor-blocks-ais-ability-to-detect-a-face-6> on each advance, but for the most part, they are more political commentary than useful tactics for the average person <https://slate.com/technology/2019/08/facial-recognition-surveillance-fashion-hong-kong.html>. Those projects, in fact, might be less helpful if they went mainstream, because wide adoption could lead to an arms race that enables the invasive technology to route around defenses. […]
To counter plans to use face recognition on campus, 400 photos of staff and athletes were run through a facial recognition system (Amazon's) and compared to a mugshot database; 58 of them were incorrectly matched. The majority of the incorrect matches were people of colour.
“This style of technology could also follow babies beyond the crib. The electronics firm ViewSonic said last month that it was building a whiteboard-mounted 'mood sensing' device that could monitor students and alert teachers as to how engaged a class may be. The company's chief technology officer, Craig Scott, said in a statement that the system was still in early development but was being designed to ‘improve class performance.’
“But this level of computer-aided surveillance, Brooks said, can also have a corrosive effect on parents' sense of self-worth and state of mind. The devices, she said, send the message that parents have failed if they don't watch their baby at every turn.
“We have this mind-set, this mentality, that when kids are involved, we don't have to be rational. Any risk mitigation is worth the cost we have to pay,” Brooks said. But the system “undermines parents' feelings of basic competence: that they can't trust themselves to take care of their babies without a piece of $500 equipment.”
I'm feeling safer already: Cradle-to-grave surveillance built for a surveillance economy. This baby monitor stirs paranoia like “fluoride in children's ice cream.” (Per General Jack D. Ripper of “Dr. Strangelove.”)
They scored $80 million by tricking a network into routing funds to Sri Lanka and the Philippines and then using a money mule to pick up the cash.
The bills are called supernotes. Their composition is three-quarters cotton and one-quarter linen paper, a challenging combination to produce. Tucked within each note are the requisite red and blue security fibers. The security stripe is exactly where it should be and, upon close inspection, so is the watermark. Ben Franklin's apprehensive look is perfect, and betrays no indication that the currency, supposedly worth one hundred dollars, is fake.
Most systems designed to catch forgeries fail to detect the supernotes. The massive counterfeiting effort that produced these bills appears to have lasted decades. Many observers tie the fake bills to North Korea, and some even hold former leader Kim Jong-Il personally responsible, citing a supposed order he gave in the 1970s, early in his rise to power. Fake hundreds, he reasoned, would simultaneously give the regime much-needed hard currency and undermine the integrity of the US economy. The self-serving fraud was also an attempt at destabilization.
The Fido Alliance, an organization committed to eliminating the need for passwords, received a big boost last week when Apple signed up as a board member. Fido stands for Fast IDentity Online.
Apple apparently wasn't ready to announce its support immediately, as tweets from a Fido Alliance conference were quickly deleted, but as of today, the news is official.
French site MacG spotted a now-deleted tweet that had a photo (below) of a conference slide showing the Apple logo and the text ‘New Board Member’.
While that tweet didn't stay up for long, Apple has today been added to the official website as a board-level member, alongside such tech companies as Amazon, Arm, Facebook, Google, Intel, Microsoft, and Samsung. A number of big-name finance companies are also board members, including American Express, ING, Mastercard, Paypal, Visa, and Wells Fargo.
The case highlights issues that have plagued 911 phone systems across the country since the advent of smartphones. Cellphone privacy settings and outdated dispatch mapping systems continue to frustrate first responders when they can't find callers.
Landline numbers are much easier for these systems to pinpoint. But over 80 percent of the calls to the nation's 911 centers are from cellphones, The Washington Post has previously reported.
The Federal Communications Commission has required cellphone carriers to improve the transfer of information to 911 centers. The carriers have until 2021 to make sure transmitted locations are within 50 yards 80 percent of the time.
Some injuries prevent precise location disclosure. Geolocation exactitude is a requirement for first-responder timeliness. There are cracks in the surveillance economy: a foreign registered cellphone, used domestically (in the US, for now at least), does not possess a locally resolvable name or resident address.
Mike Williams, Rice University, 18 Feb 2020
Researchers at Rice University have developed a technique to improve security for Internet of Things (IoT) devices significantly, while using far less energy. The new technique is a hardware solution based on the power management circuitry found in most central processing chips. The method leverages power regulators to muddle information leaked by the power consumption of encryption circuits. A breakthrough last year by the team generated paired security keys based on fingerprint-like defects unique to every computer chip. “This year, the story is similar, but we are not generating keys,” said Rice's Kaiyuan Yang. “We are looking at defending against a new type of attack that is specifically for IoT and mobile systems.”
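The underlying idea, drowning the data-dependent component of power consumption in regulator-injected noise, can be illustrated with a toy simulation. This is not Rice's actual circuit or method; it just shows why correlation-based power analysis needs the data-dependent leak to stand out from the noise floor.

```python
import random

def hamming_weight(x):
    # Power draw of CMOS logic correlates with the number of set bits.
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

random.seed(0)
data = [random.randrange(256) for _ in range(5000)]
leak = [hamming_weight(d) for d in data]

# Unprotected chip: measured power closely tracks the data-dependent leak.
quiet = [l + random.gauss(0, 0.5) for l in leak]

# Regulator-style masking: the power stage injects large random
# fluctuations that swamp the data-dependent component.
masked = [l + random.gauss(0, 20.0) for l in leak]

print(round(pearson(leak, quiet), 2))   # strong correlation
print(round(pearson(leak, masked), 2))  # near zero
```

An attacker correlating guessed intermediate values against the masked traces would need vastly more measurements, which is the practical effect such power-stage defenses aim for.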
Quoting in part from TechCrunch:
Robinhood, the startup with a stock trading app …, suffered one of its worst outages on one of the busiest trading days of the year.
As the Dow Jones Industrial Average enjoyed the single biggest point-gain in the history of the index, Robinhood's application fell prey to an error that locked users out of the service for the duration of Monday's trading.
One potential cause of the outages could just be the high trading volumes that have accompanied highly volatile markets over the past month. While there were some early reports that the outage was caused by a Leap Day bug, the company has denied that a February 29th error was at fault.
The company's mistake could cost its users lots of money as they sought to trade on stocks that were hit in last week's string of losses due to investor worries over the impact the novel coronavirus, COVID-19, would have on the global economy.
The company said, “We don't have an estimate when the issue will be resolved but all of us at Robinhood are working as hard as we can to resume service.”
I became aware of this because of a friend who had successfully bet (via options), last week, that the market would go down significantly over virus fears. When he went to sell his options today he could not because of the Robinhood failure. I do not want to make light of his pain, but it would be ironic if he suffered this loss because of a virus.
[See also https://gizmodo.com/stock-trading-app-robinhood-experiences-widespread-outa-1842042516 ]
Personal accountability for failure to prevent phishing assault is a common problem in industry, government, and non-profit organizations.
Employment laws prevent penalties such as demotion, fines, or dismissal for cause, though the reputational damage arising from these incidents can be severe.
The essay raises important questions about repeat offenders—those individuals who neglect to practice IT hygiene out of incompetence, unprofessionalism, or carelessness.
Given that phishing is unlikely to decline in frequency, education appears to be the only means to suppress it. If the CEO falls for a phishing attack, the mess gets cleaned up and a communication lockdown is enforced—until it leaks to the press. If a rank-and-file employee falls for one, what do most organizations do? Promote the individual?
Risk: Weak organizational deterrence against IT threats.
Clearview AI, a startup that compiles billions of photos for facial recognition technology, said it lost its entire client list to hackers. The company said it has patched the unspecified flaw that allowed the breach to happen.
In a statement, Clearview AI's attorney Tor Ekeland said that while security is the company's top priority, “Unfortunately, data breaches are a part of life. Our servers were never accessed.” He added that the company continues to strengthen its security procedures and that the flaw has been patched.
Clearview AI continues “to work to strengthen our security,” Ekeland said.
Too late, maybe?
The author writes:
In my January column, I described the influence of feng shui on the Chinese real estate market. Although it would be hard to match the pervasive influence of traditional Chinese superstition in real estate and other areas of commerce, the Chinese are not alone. One of the most interesting survey results I've ever come across is a 2007 Gallup poll that showed 13 percent of American adults would be bothered if given a hotel room on the thirteenth floor (Carroll 2007). Thirteen percent. Furthermore, nine percent of respondents said they would be bothered enough to ask for a different room. As is the case for many traditional superstitions, the majority of those who said they would be bothered were women.
The risk? At best (and not very good):
We're hard-wired to connect dots. When Thing 1 happens, and then Thing 2 happens, we humans are very likely to conclude that Thing 1 caused Thing 2, even if they're completely unrelated; it's a phenomenon psychologists call the illusion of causality.
From a friend… I guess Virginians lose. For those who can't see the image: Hilton offers, “Some regional, national, state laws confer certain rights relating to personal data.” But it answers a request from Virginia with, “We're sorry! Only certain states afford rights relating to personal data to their residents.”
Security analyst John Strand had a contract to test a correctional facility's defenses. He sent the best person for the job: his mother.
The risk? Mom.
No backups; open cases and cases under appeal affected: “The computer did it!”
All right now, how many people reading this:
Personally, I realized about 20 minutes ago, and am going to set it back now.
The skull breaker challenge is, somehow, not even the most terrifying thing happening on this app.
“Code for America saw an opportunity: To help clear the backlog of some 220,000 cases, the organization developed an algorithm to identify which residents qualify to have their records cleared or reduced. Now, district attorneys across the state are crediting the group with expediting an otherwise slow and tedious process.”
Mass exoneration or mass incarceration: batch processing saves the cost of adjudicating each case individually, but we must trust that the algorithm doesn't overlook an innocent case. Data fallout/dropout is a common occurrence in big business, and this situation certainly exemplifies it, albeit in a one-off usage.
Risk: Mass exoneration by algorithmic fiat.
Would the personnel who trained or coded the robot that manufactures the steak, and their families, consume it for a few months before the public bought it? Can a 3D steak-printing robot offer a bias-free taste-test opinion? Will it always answer, “What's the beef about the printed beef?”
Risk: Sanitation, nutrition, and safety of 3D printed foods and components sold for human consumption.
A murder case raises the question: Is it OK for police to lie to get an innocent person's DNA?
Christopher Gavin, The Boston Globe, 24 Feb 2020 https://www.boston.com/news/local-news/2020/02/24/error-taxes-taunton
A seemingly small line error has created a major problem for Taunton's assessors, and it's going to cost taxpayers. Officials were forced to essentially reboot their billing process after a software upgrade meant that local public school property was added to the list of taxable properties, they say.
The snafu came when the non-profit Head Start building, adjacent to Taunton High School, was added to the system as a taxable property, which generated invoices for all of the school buildings at the site, Assessor Richard Conti told the City Council last week.
The assessed value of Taunton's commercial and industrial properties shot up by $136,846,200, at least on paper. The school property was then logged as being on the hook for $4.2 million in taxes for what is nontaxable property, Conti said.
The oversight was only caught when the school superintendent sent the bills back to the assessor's office. “This all happened as a result of a perfect storm of errors that went into sequence that no one has ever experienced before,” Conti said during the Feb. 18 meeting. “This happened in a manner that none of our peers, none of the people in the Department of Revenue would have caught because of the software.”
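As a sanity check, the two dollar figures quoted above are at least mutually consistent. A minimal sketch (assuming, hypothetically, that the $4.2 million in erroneous bills was generated against the full paper increase in assessed value):

```python
# Hypothetical sanity check of the article's figures: if the $4.2M in
# erroneous bills corresponds to the full $136,846,200 paper increase
# in assessed value, what property-tax rate does that imply?
assessed_increase = 136_846_200   # dollars (from the article)
tax_billed = 4_200_000            # dollars (from the article)
rate_per_1000 = tax_billed / assessed_increase * 1_000
print(f"implied rate: ${rate_per_1000:.2f} per $1,000 of value")
```

That works out to roughly $30.69 per $1,000 of assessed value, a plausible commercial tax rate, so the two numbers appear to describe the same erroneous billing run.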
A pet-sitter's career remains safe from redundancy as long as Internet-based pet feeders are purchased.
Watching films and listening to music online produces more greenhouse gas emissions than many realise.
Risks Digest has correctly quoted the ZDNet article, which says that 64-bit Linux runs out of seconds in the year 29,227,702,659.
But I believed that we have about ten times longer to wait, and that the true 2^63 instant is about AD 292,277,026,596-12-04 Sun 15:30:08 GMT (Gregorian).
ZDNet dropped the final 6 of the year count.
But I now see that my date/time above, which the ZDNet author might have seen a copy of, cannot be quite right; 1970 and …6596 are manifestly in different phases of the 400-year cycle of the secular Gregorian Calendar, and therefore the value 365.2425 is not precisely suitable.
The moral is that a reader should, whenever possible, check any printed figure to see whether it is, at least, perhaps right.
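Following that moral, the figure is easy to check with integer arithmetic, using the mean Gregorian year of 365.2425 days (which, as noted, is only approximately right across the 400-year cycle):

```python
# When does signed 64-bit time_t (seconds since 1970) overflow?
# Uses the mean Gregorian year of 365.2425 days = 31,556,952 seconds,
# which is only approximately right across the 400-year calendar cycle.
MAX_TIME_T = 2**63 - 1          # 9,223,372,036,854,775,807 seconds
MEAN_YEAR = 31_556_952          # 365.2425 days * 86,400 s/day
years, leftover = divmod(MAX_TIME_T, MEAN_YEAR)
year_ad = 1970 + years
print(year_ad)                  # -> 292277026596, not ZDNet's 29227702659
print(leftover // 86_400)       # -> 338 days in, i.e. early December
```

The result reproduces the AD 292,277,026,596 figure (and an early-December date) to the stated precision, and makes the dropped digit in the ZDNet article obvious.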
Is basic maritime navigation no longer taught to merchant crew? I've never navigated in open water, but I still know some of the basics, like how to read a compass, to leave green navigation markers to port and red to starboard, etc.
As far as other vessels go, they should be clearly marked and lit — red light on the port side, green on the starboard, white light on the stern and I believe at the top of the mast, and the rules of the road clearly state to which side you should leave the other vessel if your courses appear to intersect. Calling out NATO and Russian warships specifically is a form of scare words—they should be marked and lit like any other vessel, unless operating under wartime conditions, in which case it's incumbent on them to avoid collisions.
I'm not saying that losing your GPS-based navigation is trivial, but any ocean-going vessel and its crew should already be equipped to at least have a reasonable chance of avoiding a navigation-related catastrophe.