Gee, I foresee this as a great innovation with no downsides at all. I can't wait for phase 3, when I convert my kitchen to a SCIF.
No time to check for dead recipients—what could go wrong?
Kenton Varda gets dozens of messages a day from Spanish-speakers around the world, all thanks to a Gmail address he registered 16 years ago.
Two weeks ago, longtime software engineer Kenton Varda got an email that wasn't meant for him. It was from AT&T Mexico to a customer named Jorge, whose most recent phone bill was attached. You've probably gotten an email intended for someone else at least once. But then Varda got another AT&T Mexico bill for Gloria. And then a third for Humberto, who is overdue on paying more than 6,200 pesos, about $275.
To Varda, the incident wasn't a surprise. As the owner of the email account email@example.com, he gets dozens of messages a day from Spanish-speakers around the world, all sent by people who thought they could use his address as a dummy input: “Temporal” translates to “temporary.” Varda says he frequently receives private documents, even medical bills and collection notices. Many of the most sensitive emails contain legal notices that the messages are confidential and should not be disclosed to other parties aside from the intended recipient. Varda doesn't speak Spanish, but he uses Google Translate when possible to understand what's going on and reply to senders saying they have the wrong address.
“Recently I had a few people send me what appeared to be photographs of handwritten notes. Maybe notes from a class?” Varda says. “Also, I received several job evaluations of one Jose Gomez, who appears to be a janitor. And a pretty good one!”
Mobilewalla gathered cellphone data from Black Lives Matter protesters in four cities.
If you marched in recent Black Lives Matter protests in Atlanta, Los Angeles, Minneapolis or New York, there's a chance the mobile analytics company Mobilewalla gleaned demographic data from your cellphone use. Last week, Mobilewalla released a report detailing the race, age and gender breakdowns of individuals who participated in protests in those cities during the weekend of May 29th. What is especially disturbing is that protesters likely had no idea that the tech company was using location data harvested from their devices. <https://www.mobilewalla.com/about/press/new-report-reveals-demographics-of-protests>
Mobilewalla observed a total of 16,902 devices (1,866 in Atlanta, 4,527 in Los Angeles, 2,357 in Minneapolis and 8,152 in New York). <https://f.hubspotusercontent40.net/hubfs/4309344/Mobilewalla Protester Insights Methodology.pdf> As BuzzFeed News explains, Mobilewalla buys data from sources like advertisers, data brokers and ISPs. It uses AI to predict a person's demographics (race, age, gender, zip code, etc.) based on location data, device IDs and browser histories. The company then sells that info <https://www.mobilewalla.com/about> to clients so they can “better understand their target customer.” <https://www.buzzfeednews.com/article/carolinehaskins1/protests-tech-company-spying>
“This report shows that an enormous number of Americans—probably without even knowing it—are handing over their full location history to shady location data brokers with zero restrictions on what companies can do with it,” Senator Elizabeth Warren told BuzzFeed News. “In an end-run around the Constitution's limits on government surveillance, these companies can even sell this data to the government, which can use it for law and immigration enforcement.”
Mobilewalla CEO Anindya Datta told BuzzFeed that the company produced the report to satisfy its employees' curiosity. Supposedly, Mobilewalla doesn't plan to share info about whether specific individuals attended the protests with clients or law enforcement.
But the incident is a reminder that data brokers have access to massive amounts of data from unassuming individuals. There's a chance that data could be used by law enforcement or be leaked—as we've seen happen in past data breaches. <https://www.engadget.com/2018-06-28-exactis-leak-340-million-records.html> Some fear that individuals concerned about their data being swiped might avoid protests, so in effect, the practices of collecting data may suppress free speech. […]
The FBI has issued a security alert warning K-12 schools of the “ransomware threat” during the COVID-19 pandemic.
On Tuesday, the US Federal Bureau of Investigation sent a security alert to K-12 schools about the increase in ransomware attacks during the coronavirus (COVID-19) pandemic, particularly about ransomware gangs that abuse RDP connections to break into school systems.
The alert, called a Private Industry Notification, or PIN, tells schools that “cyber actors are likely to increase targeting of K-12 schools during the COVID-19 pandemic because they represent an opportunistic target as more of these institutions transition to distance learning.”
Schools are likely to open up their infrastructure for remote staff connections, which in many cases would mean creating Remote Desktop Protocol (RDP) accounts on internal school systems.
Over the past two to three years, many ransomware gangs have utilized brute-force attacks or vulnerabilities in RDP to breach corporate networks and deploy file-encrypting ransomware. […] https://www.zdnet.com/article/fbi-warns-k12-schools-of-ransomware-attacks-via-rdp/
Malware targeted UK vendor starting to do business in China Cybersecurity firm said it has briefed FBI on its discovery
When a U.K.-based technology vendor started doing business in China, it hired a cybersecurity firm to proactively hunt for any digital threats that could arise as part of doing business in the country. The firm discovered a problem, one with such major implications that it alerted the FBI.
A state-owned bank in China had required the tech company to download software called Intelligent Tax to facilitate the filing of local taxes. The tax software worked as advertised, but it also installed a hidden back door that could give hackers remote command and control of the company's network, according to a report published Thursday by the SpiderLabs team at Chicago-based Trustwave Holdings Inc. <https://www.bloomberg.com/quote/TWAV:US> (The cybersecurity firm declined to identify the bank.)
“Basically, it was a wide-open door into the network with system-level privileges and command and control server completely separate from the tax software's network infrastructure,” Brian Hussey, vice president of cyber-threat detection and response at Trustwave, wrote in a blog post <https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/the-golden-tax-department-and-the-emergence-of-goldenspy-malware/>, also published Thursday. The malware, which Trustwave dubbed GoldenSpy, isn't downloaded and installed until two hours after the tax software installation is completed, he said.
Trustwave researchers determined that the malware connects to a server hosted in China.
It isn't known how many other companies downloaded the malicious software, nor is the purpose of the malware clear or who is behind it, according to the report. Trustwave said it disrupted the intrusion at the tech company in the early stages. “However, it is clear the operators would have had the ability to conduct reconnaissance, spread laterally and exfiltrate data,” according to the report, adding that GoldenSpy had the characteristics of an Advanced Persistent Threat campaign. Such efforts are often associated with nation-state hacking groups. […]
Printers are leaking device names, locations, models, firmware versions, organization names, and even WiFi SSIDs.
An information technology specialist at the Federal Emergency Management Agency (FEMA) was arrested this week on suspicion of hacking into the human resource databases of University of Pittsburgh Medical Center (UPMC) in 2014, stealing personal data on more than 65,000 UPMC employees, and selling the data on the dark web.
The DOJ’s opposition to Facebook and Google's 8,000-mile cable to Hong Kong highlights how physical infrastructure is as contentious as the virtual world.
Starting today, the search giant will make a previously opt-in auto-delete feature the norm.
Google already announced security and privacy upgrades to Android 11 earlier this month. But Wednesday's changes focus on the data that Google services like Maps and YouTube can access—and how long they keep it for.
Pichai wrote in a blog post: “We’re guided by the principle that products should keep information only for as long as it’s useful to you. Privacy is personal, which is why we're always working to give you control on your terms.”
Google has been criticized for collecting and retaining data that users don't even realize it has. A year ago, the company added auto-delete controls that allowed you to set your Google account to delete history — like Web and App Activity and location—every three months or 18 months. Such a mechanism was long overdue, but Google would still collect this data indefinitely by default. You had to find the right toggle in your settings to set the auto-delete in motion.
Google's announcement on Wednesday flips this policy around. Newly created Google accounts will auto-delete activity and location data every 18 months by default. YouTube history will be deleted every 36 months. Existing accounts, though, will still need to proactively turn on the feature, as Google doesn't want to force a change on users who, for whatever reason, want the company to maintain a forever-record of their activity. (You can find our complete guide to limiting Google's tracking here.) As soon as you turn it on, the company will nuke your accumulated activity and location data that's 18 months or older, and continue to do so going forward. Google will also push notifications and email reminders to get existing customers to review their data retention settings.
The NYT article cited by Monty Solomon was ill-informed. In a nutshell, it confused decision rules with estimation tools.
One of its central examples had to do with the glomerular filtration rate (GFR), an important measure of renal function. To measure the GFR accurately, one infuses a specialized, non-physiological, non-metabolized substance and observes how rapidly it is cleared into the urine. This is a tricky procedure, rarely done outside research laboratories.
Medical decisions are often made on the basis of an estimated GFR (eGFR), obtained by measuring the serum concentration of some physiological solute that is (mostly) eliminated into the urine. The solute most frequently used is creatinine, a byproduct of muscle metabolism. With creatinine data and a body of true GFR data, it is a curve-fitting exercise to see what eGFR formula best predicts the true GFR.
As a matter of empirical fact, the fit is improved by formulas that include age, sex, and self-reported race. Decisions about medical care (for example, when to begin hemodialysis) should be based on the best estimates of patients' physiological state. If GFR were estimated using simpler formulas, blind to sex, age, and race, patient care would be worse.
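To make the curve-fitting point concrete, here is an illustrative sketch of one such formula, the published 2009 CKD-EPI creatinine equation, showing how age, sex, and self-reported race enter as fitted multiplicative terms. (The coefficients are from the 2009 CKD-EPI publication; this is a sketch for illustration, not clinical software.)

```python
# Sketch of the 2009 CKD-EPI creatinine equation, a widely used
# curve-fit eGFR formula. Illustrative only, not medical software.

def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9        # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411  # fitted exponent below the threshold
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)               # estimated GFR declines with age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159                     # the contested race coefficient
    return egfr

# A 50-year-old non-Black man with serum creatinine of 1.0 mg/dL:
print(round(egfr_ckd_epi_2009(1.0, 50, female=False, black=False), 1))
```

Each coefficient exists only because including it improved the fit against measured GFR in the training data, which is the empirical point above: dropping the age, sex, or race terms would make the estimate, and hence patient care, worse on average.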
The conventional eGFR formulas are not restricted to medical systems that, like those of private medical care in the US, have been credibly charged with providing poor service to racial minorities and to women. The same formulas are used in socialized systems, including that of the US military and, of course, those of developed countries around the world.
Robert R. Fenichel, M.D.: http://www.fenichel.net
> In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man's arrest for a crime he did not commit.
If you read the article, you will find that the headline doesn't match what actually happened:
After Ms. Coulson, of the state police, ran her search of the probe image, the system would have provided a row of results generated by NEC and a row from Rank One, along with confidence scores. Mr. Williams's driver's license photo was among the matches. Ms. Coulson sent it to the Detroit police as an Investigative Lead Report.
“THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE FOR ARREST.” [The file says this in bold capital letters at the top.]
This is what technology providers and law enforcement always emphasize when defending facial recognition: It is only supposed to be a clue in the case, not a smoking gun. Before arresting Mr. Williams, investigators might have sought other evidence that he committed the theft, such as eyewitness testimony, location data from his phone or proof that he owned the clothing that the suspect was wearing.
In this case, however, according to the Detroit police report, investigators simply included Mr. Williams's picture in a *6-pack photo lineup* they created and showed to Ms. Johnston, Shinola's loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)
The photo-match algorithm indeed did a lousy job, but the people who used the picture did a worse one. False identification from photo lineups has been a problem for a very long time. There are well-known mitigations that they didn't use here, in particular showing the pictures one at a time rather than in a group; the latter tends to make people pick the closest match even if the match isn't close at all.
Gloria belongs to a quilting group and an embroidery group. Neither group is meeting right now. The church where both groups normally meet is giving them a break on rent, because of the public health restrictions on meetings, but there are still some ongoing expenses. In addition, with no meetings going on, some members are starting to question their membership and dues.
They aren't alone. This article focuses on charities, but a number of small groups are in serious trouble over the pandemic. Many amateur sports leagues are already collapsing.
Our industry and technical groups are facing related issues. We may be in a slightly different situation, since most of us have the technical chops to set up virtual meetings, but getting people to attend these meetings is surprisingly difficult. (Apparently if nobody is providing free coffee and donuts, we won't go.)
We need contacts. We need to get ideas from peers. We need to bounce ideas off each other. We need to mentor, even if informally, the newcomers to our profession (and recruit students in technical areas into our profession).
Support your local chapter, LUG, SIG, meetup or whatever.
Tickets, Sun, 5 Jul 2020 at 11:45 AM | Eventbrite
Session to share our insights with World Intellectual Property Organization on IP protection for AI-generated and AI-assisted work
About this Event
We are hosting this session to share our insights with the World Intellectual Property Organization on IP protections for AI-generated and AI-assisted works, drawing on our diverse perspectives and experience; we have contributed to various other public consultations before. Given that this will be a shorter session, focused on providing concrete recommendations, we encourage you to read the document beforehand and frame your contributions in line with the questions.
Link to the reading: https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_2_ge_20/wipo_ip_ai_2_ge_20_1_rev.pdf
Questions that we will cover in the session:
1. Should the law require that a human being be named as the inventor or should the law permit an AI application to be named as the inventor?
New horizons in AI planning… AI is a tool; naming it as inventor seems to make as much sense as naming the computer on which a patent application is typed.