The RISKS Digest
Volume 31 Issue 16

Saturday, 6th April 2019

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site - however only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

DoD AIs to monitor "Top Secret" employees
Defense One
WikiLeaks: "Don't Be Evil!" was Google's "Warrant Canary"
Henry Baker
Half of Industrial Control System Networks have Faced Cyberattacks, Say Security Researchers
ZDNet
Hackers reveal how to trick a Tesla into steering towards oncoming traffic
Charlie Osborne
Tesla cars keep more data than you think, including this video of a crash that totaled a Model 3
FTC via Geoff Goodfellow
What AI Can Tell From Listening to You
WSJ
Can we stop AI outsmarting humanity?
The Guardian
AI is flying drones—very, very slowly
NYTimes
New Climate Books Stress We Are Already Far Down The Road To A Different Earth
TPR
Are We Ready For An Implant That Can Change Our Moods?
npr.org
Researchers Find Google Play Store Apps Were Actually Government Malware
Motherboard
Office Depot Pays $25 Million To Settle Deceptive Tech Support Lawsuit
Bleeping Computer
Why Pedestrian Deaths Are At A 30-Year High
NPR
More on the RISKS.ORG Newcastle certificate issue
Lindsay Marshall
Insurers Creating a Consumer Ratings Service for Cybersecurity Industry
WSJ
Another Gigantic Leak
PGN
Nokia phones caught mysteriously sending data to Chinese servers
BGR
IBM + Flickr + facial recognition + privacy
Fortune via Gabe Goldberg
Brits: Huawei's code is a steaming pile...
Henry Baker
More on the Swiss electronic voting experiment
Post—Swiss
'The biggest, strangest problem I could find to study'
bbc.com
Black-box data shows anti-stalling feature engaged in Ethiopia crash
WashPost
The emerging Boeing 737 MAX scandal, explained
Vox
Re: How a 50-year-old design came back...
David Brodbeck
Re: How Google's Bad Data Wiped a Neighborhood off the Map
Dan Jacobson
Re: Tweet by Soldier of FORTRAN on Twitter
Dan Jacobson
Re: Unproven declarations about healthcare
Martin Ward
Wol
Re: Is curing patients, a sustainable business model?
Dmitri Maziuk
According to this bank, password managers are bad
Sheldon Sheps
"Privacy and Security Across Borders"
Jen Daskal via Marc Rotenberg
Info on RISKS (comp.risks)

DoD AIs to monitor "Top Secret" employees (Defense One)

Henry Baker <hbaker1@pipeline.com>
Fri, 29 Mar 2019 08:32:22 -0700
  [As this is very near April 1st, RISKS may need a special 'April First
  Really Really Real' edition aka 'You can't make this stuff up' edition for
  items that would otherwise have been thought to be April Fool jokes.  HB]

Wouldn't it be cheaper/simpler/faster to simply outsource this DoD
monitoring (called 'Project Snowden', perhaps?) to a Chinese company, since
they already have the SCS software, and—due to the Chinese having hacked
all of the Form 86's—they already have all the data, too?

"For serious offenders, ... switching the person's ringtone, which
could begin with the wail of a police siren"—China's SCS

Once again, the US is falling behind China in AI technology!

Patrick Tucker, Technology Editor, *Defense One*, 26 Mar 2019
The US Military Is Creating the Future of Employee Monitoring
https://www.defenseone.com/technology/2019/03/us-military-creating-future-employee-monitoring/155824/

A new AI-enabled pilot project aims to sense "micro changes" in the behavior
of people with top-secret clearances.  If it works, it could be the future
of corporate HR.

The U.S. military has the hardest job in human resources: evaluating
hundreds of thousands of people for their ability to protect the nation's
secrets.  Central to that task is a question at the heart of all labor
relations: how do you know when to extend trust or take it away?

The office of the Defense Security Service, or DSS, believes artificial
intelligence and machine learning can help.  Its new pilot project aims to
sift and apply massive amounts of data on people who hold or are seeking
security clearances.  The goal is not just to detect employees who have
betrayed their trust, but to predict which ones might—allowing problems
to be resolved with calm conversation rather than punishment.

If the pilot proves successful, it could provide a model for the future of
corporate HR.  But the concept also affords employers an unprecedented
window into the digital lives of their workers, broaching new questions
about the relationship between employers, employees, and information in the
age of big data and AI.

The pilot is based on an urgent need.  Last June, the Defense
Department took over the task of working through the security
clearance backlog—more than 600,000 people.  Some people—and the
organizations that want to hire them—wait more than a year,
according to a September report from the National Background
Investigations Bureau.  Those delays stem from an antiquated system
that involves mailing questionnaires to former places of employment,
sometimes including summer jobs held during an applicant's
adolescence, waiting (and hoping) for a response, and scanning the
returned paper document into a mainframe database of the sort that
existed before cloud computing.

In addition to being old-fashioned, that process sheds light on an
individual only to the degree that past serves as prologue.  As an indicator
of future behavior, it's deeply wanting, say officials.

This effort to create a new way to gauge potential employees' risk is being
led by Mark Nehmer, the technical director of research and development and
technology transfer at DSS' National Background Investigative Services.

This spring, DSS is launching what they describe as a "risk-based user
activity pilot."  It involves collecting an individual's digital footprint,
or "cyber activity," essentially what they are doing online, and then
matching that with other data that the Defense Department has on the person.
Since "online" has come to encompass all of life, the effect, they hope,
will be a full snapshot of the person.  "We anticipate early results in the
fall," a DSS official said in an email on Tuesday.

The Department of Defense already does some digital user activity
monitoring.  But the pilot seeks a lot more information than is currently
the norm.  "In the Department of Defense, user activity monitoring is
typically constructed around an endpoint.  So think of your laptop.  It's
just monitoring activity on your laptop.  It's not looking at any other
cyber data that's available"—perhaps 20 percent of the available digital
information on a person, Nehmer said at a November briefing put on by
C3, a California-based technology company serving as a partner on
the pilot.  [...]

  [Very long item pruned for RISKS.  PGN]


WikiLeaks: "Don't Be Evil!" was Google's "Warrant Canary"

Henry Baker <hbaker1@pipeline.com>
Sun, 31 Mar 2019 12:09:10 -0700
  April 1, 2019 [Note: Despite Henry's noting that the previous items might
  be perceived as April Fools' items, this one is a genuine April Fools'
  item ("real fake news"), submitted too late for the previous RISKS issue.
  PGN]

London, UK—Documents released by WikiLeaks today show that Google's
use of the motto "Don't Be Evil!" was actually a warrant canary.

 "A warrant canary is a method by which a [company] aims to inform its users
 that the [company] has been served with a secret government subpoena
 despite legal prohibitions on revealing the existence of the subpoena.  The
 warrant canary typically informs users that there has *not* been a secret
 subpoena as of a particular date. ... [I]f the warning is removed, users
 are to assume that the host has been served with such a subpoena.  The
 intention is to allow the [company] to warn users of the existence of a
 subpoena passively, without disclosing to others that the government has
 sought or obtained access to information or records under a secret
 subpoena." —Wikipedia

"We at Google never wanted to be NSA's evil stooge, but the FISA Court made
us do it," said a person close to the Google founders.  "We knew all hope
was lost when air traffic control designated our Google 767 jet as 'Air
Force 666' while landing at Andrews [AF Base outside Washington, DC]."

The WikiLeaks documents show mostly unwitting collusion between the NSA and
Google from the very beginning in 1998, but the pressure that triggered the
warrant canary came to a head after Edward Snowden's disclosures and
increased NSA pressure for Google to move back into China.

NSA's budget had suffered from the end of the Soviet Union, just at the time
the Internet was taking off.  NSA couldn't keep pace with the torrid
technology trends, and also couldn't hire the best talent.  However, the NSA
could ride the coattails of Silicon Valley startups like Google which would
gather all the intel data, and NSA could subsequently force them to disgorge
it via the Third Party Doctrine.

  "The third-party doctrine ... holds that people who voluntarily give
  information to third parties--such as [Google]--have "no reasonable
  expectation of privacy."  A lack of privacy protection allows the United
  States government to obtain information from third parties without a legal
  warrant and without otherwise complying with the Fourth Amendment
  prohibition against search and seizure without probable cause and a
  judicial search warrant.—Wikipedia

"Basically, [ex-NSA official] William Binney was 100% correct; the NSA's
'Trailblazer' system never worked, but Trailblazer was a smokescreen for the
NSA's covert access to all of Google's world-wide data.  NSA no longer has
to keep any databases of its own, as it has outsourced all of its
data-gathering to Google and AWS.  For example, the NSA's huge facility in
Bluffdale, UT, is a hoax intended to fool Russian and Chinese satellites --
it is the equivalent of Patton's [WWII] Ghost Army," according to the
WikiLeaks spokesperson.

"Yes, NSA Bluffdale uses a lot of electricity, but that's primarily for
mining Bitcoin most likely used to fund illegal CIA operations, a la Contra"
she speculated.

While Brin and Page developed the Google search algorithm on their own, the
WikiLeaks documents show that the shadowy CIA venture fund In-Q-Tel then
pressured Google into developing cellphones and home surveillance devices
such as routers, cameras and 'thermostats' [wink!  wink!  microphones,
ahem!].

"The WikiLeaks documents also show that Google's subsequent name 'Alphabet'
was a paean to all of the 'three letter agencies' (TLA's) that Google had
been forced to work with over the years," she added.


Half of Industrial Control System Networks have Faced Cyberattacks, Say Security Researchers (ZDNet)

ACM TechNews <technews-editor@acm.org>
Mon, 1 Apr 2019 11:45:06 -0400
Danny Palmer, ZDNet, 27 Mar 2019 via ACM TechNews, 1 Apr 2019

Kaspersky Lab's "Threat Landscape for Industrial Automation Systems" report
found that almost 50% of industrial systems display evidence of attackers
attempting malicious activity--in most cases, detected by security software.
The statistics, which are based on anonymized data submitted to the
Kaspersky Security Network by the company's customers, show that the main
attack vector for these systems is via the Internet, with hackers on the
lookout for unsecured ports and systems to gain access to; this method
accounted for 25% of identified threats. The configuration used by many
industrial networks leaves them open to self-propagating campaigns that can
easily find them. Removable media was identified as the second most-common
threat to industrial networks, followed by email-based phishing attacks.
The Kaspersky researchers recommend regularly updating operating systems and
software on industrial networks and applying security fixes and patches
where available.

https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-1f213x21b4fbx070496&


Hackers reveal how to trick a Tesla into steering towards oncoming traffic (Charlie Osborne)

Gene Wirchenko <genew@telus.net>
Tue, 02 Apr 2019 11:25:10 -0700
Charlie Osborne for Zero Day (2 Apr 2019)
A root vulnerability and a few stickers were all it took.

https://www.zdnet.com/article/hackers-reveal-how-to-trick-a-tesla-into-steering-towards-oncoming-traffic/

A team of hackers has managed to trick Tesla's Autopilot feature into
steering into the wrong lane, remotely, through root control and a few
stickers.

By applying small, inconspicuous stickers to the road, the team created a
fake lane that the system followed towards oncoming traffic—a scenario the
team says could have serious real-world consequences.

The vulnerability and security weaknesses found by Tencent were reported to
Tesla and have now been resolved. The findings were shared with attendees of
Black Hat USA 2018.


Tesla cars keep more data than you think, including this video of a crash that totaled a Model 3

geoff goodfellow <geoff@iconia.com>
Sun, 31 Mar 2019 04:17:21 -0700
  —Crashed Tesla vehicles, sold at junk yards and auctions, contain
   deeply personal and unencrypted data including info from drivers' paired
   mobile devices, and video showing what happened just before the accident.
  —Security researcher GreenTheOnly extracted unencrypted video,
   phonebooks, calendar items and other data from Model S, Model X and Model 3
   vehicles purchased for testing and research at salvage.
  —Hackers who test or modify the systems in their own Tesla vehicles are
   flagged internally, ensuring that they are not among the first to receive
   over-the-air software updates.

EXCERPT:

If you crash your Tesla, when it goes to the junk yard, it could carry a
bunch of your history with it.

That's because the computers on Tesla vehicles keep everything that drivers
have voluntarily stored on their cars, plus tons of other information
generated by the vehicles including video, location and navigational data
showing exactly what happened leading up to a crash, according to two
security researchers.

One researcher, who calls himself GreenTheOnly, describes himself as a
white-hat hacker and a Tesla enthusiast who drives a Model X. He has
extracted this kind of data from the computers in a salvaged Tesla Model S,
Model X and two Model 3 vehicles, while also making tens of thousands of
dollars cashing in on Tesla bug bounties in recent years. He agreed to speak
and share data and video with CNBC on the condition of pseudonymity, citing
privacy concerns.

Many other cars download and store data from users, particularly information
from paired cellphones, such as contact information. The practice is
widespread enough that the US Federal Trade Commission has issued advisories
to drivers warning them about pairing devices to rental cars
<https://www.consumer.ftc.gov/blog/2016/08/what-your-phone-telling-your-rental-car>,
and urging them to learn how to wipe their cars' systems
<https://www.consumer.ftc.gov/blog/2018/08/selling-your-car-clear-your-personal-data-first>
clean before returning a rental or selling a car they owned.

But the researchers' findings highlight how Tesla is full of contradictions
on privacy and cybersecurity. On one hand, Tesla holds car-generated data
closely
https://www.consumeraffairs.com/news/tesla-blames-drivers-who-wreck-its-cars-but-wont-hand-over-crash-data-without-a-court-order-053018.html
and has fought customers in court to refrain from giving up vehicle data.
https://www.plainsite.org/dockets/3hd2fpwvp/supreme-court-of-the-state-of-new-york-nassau-county/wang-jing-vs-tesla-inc/
Owners must purchase $995 cables and download a software kit from Tesla to
get limited information out of their cars via the event data recorder,
should they need it for legal, insurance or other reasons.

At the same time, crashed Teslas that are sent to salvage can yield
unencrypted and personally revealing data to anyone who takes possession of
the car's computer and knows how to extract it.

The contrast raises questions about whether Tesla has clearly defined goals
for data security, and who its existing rules are meant to protect. [...]

https://www.cnbc.com/2019/03/29/tesla-model-3-keeps-data-like-crash-videos-location-phone-contacts.html


What AI Can Tell From Listening to You (WSJ)

Monty Solomon <monty@roscom.com>
Tue, 2 Apr 2019 10:29:43 -0400
Artificial intelligence promises new ways to analyze people's voice—and
determine their emotions, physical heath, whether they are falling asleep at
the wheel and much more.

https://www.wsj.com/articles/what-ai-can-tell-from-listening-to-you-11554169408


Can we stop AI outsmarting humanity? (The Guardian)

Monty Solomon <monty@roscom.com>
Sun, 31 Mar 2019 19:10:51 -0400
The spectre of superintelligent machines doing us harm is not just science
fiction, technologists say—so how can we ensure AI remains *friendly* to
its makers?

https://www.theguardian.com/technology/2019/mar/28/can-we-stop-robots-outsmarting-humanity-artificial-intelligence-singularity


AI is flying drones—very, very slowly (NYTimes)

Monty Solomon <monty@roscom.com>
Wed, 27 Mar 2019 22:31:28 -0400
https://www.nytimes.com/2019/03/26/technology/alphapilot-ai-drone-racing.html

Artificial intelligence has bested top players in chess, Go and even
StarCraft. But can it fly a drone faster than a pro racer? More than $1
million is on the line to find out.


New Climate Books Stress We Are Already Far Down The Road To A Different Earth (TPR)

geoff goodfellow <geoff@iconia.com>
Sat, 30 Mar 2019 06:40:01 -0700
It was a telling moment: David Wallace-Wells, author of the new book The
Uninhabitable Earth, was making an appearance on MSNBC's talk show Morning
Joe.  He took viewers through scientific projections for drowned cities,
death by heat stroke and a massive, endless refugee crisis—due to climate
change.  As the interview closed, one of the show's hosts, Willie Geist,
looked to Wallace-Wells and said, "Let's end on some hope."

The disconnect speaks volumes about where we are now relative to climate
change. With his new book, which has quickly become a bestseller,
Wallace-Wells wants to be the firefighter telling you your house is going up
in flames right now. The Uninhabitable Earth: Life After Warming's
perspective can be neatly summed up through its opening line: "It's worse,
much worse, than you think." Geist, standing in for all of us, seems stunned
by the scale and urgency of the problem and wants to hear something that
will make him feel better.

Feeling better is definitely not what's going to happen if you read The
Uninhabitable Earth or a second new book on climate change, Losing Earth: A
Recent History by Nathaniel Rich. But that doesn't mean you shouldn't read
both of them. We humans, and our project of civilization, are entering new
territory with the climate change we've driven—and both books offer
valuable perspectives if we're committed to being adult enough to face the
future.

When climate scientists use their models to project forward, they see a
spread of possible changes in the average temperature of the planet. Over
the next century or so, the predicted temperature increase ranges from
about two degrees to an upper limit of about eight degrees. Which path
Earth takes depends on its innate sensitivity to the carbon dioxide we're
dumping into the atmosphere combined with—and most important—our own
decisions about how much more carbon dioxide to add.

In Losing Earth, Rich wants us to understand how policymakers learned of,
and then ignored, the grave risks these paths represent for our future. In
The Uninhabitable Earth, Wallace-Wells wants us to understand just how bad
that future may get.

The point for humanity is that with every degree of warming, we get further
from the kind of world we grew up in. For Wallace-Wells this is not just a
matter of where you can go skiing in 2040. The Uninhabitable Earth focuses
on the potent cascades that flow through the entirety of the complex
human-environmental interaction we call "civilization." So, when
Wallace-Wells talks of economic impacts, he cites a study linking 3.7
degrees of warming to over $550 trillion of climate-related damage. Since
$550 trillion is twice today's global wealth, the conclusion is that
eventually rebuilding from the "n-th" superstorm will stop. We'll just
abandon our cities or live within the ruin. The Uninhabitable Earth also
gives us similar visions of rising hunger and conflict. If today's refugee
problems are straining political systems (the Syrian crisis created 1
million homeless people), Wallace-Wells asks us to imagine a global politics
when more than 200 million climate refugees are on the move (a UN projection
for 2050).

The picture The Uninhabitable Earth paints is unsparingly bleak. But is it
correct?

Prediction is difficult, as Yogi Berra noted, especially about the future.
One criticism of the book is that it favors worst-case scenarios. Indeed,
when it comes to extrapolating the human impacts of climate change,
researchers must rely on separate models of the planet, its ecosystems and,
say, human economic behavior. Each has its uncertainties and each yields
not one river-like line for the future but, instead, a spreading delta of
possibilities. When the models are combined, the uncertainties compound,
making risk-assessment a difficult task. For a scientist like myself, that
means we have more possible futures than the one described in The
Uninhabitable Earth... [...]

https://www.tpr.org/post/new-climate-books-stress-we-are-already-far-down-road-different-earth
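How fast combined uncertainties spread is easy to see in a toy calculation.
The C sketch below uses the two-to-eight-degree warming range quoted above,
but pairs it with an invented, purely illustrative damage-per-degree range;
the point is only that the spread of the product is roughly the product of
the spreads of the inputs.

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Uniform draw in [lo, hi]. */
  static double draw(double lo, double hi) {
      return lo + (hi - lo) * ((double)rand() / RAND_MAX);
  }

  int main(void) {
      srand((unsigned)time(NULL));
      double min_d = 1e30, max_d = -1e30;
      for (int i = 0; i < 100000; i++) {
          double warming = draw(2.0, 8.0);        /* degrees, the article's range */
          double dmg_per_deg = draw(50.0, 150.0); /* $trillion/degree: invented */
          double damage = warming * dmg_per_deg;
          if (damage < min_d) min_d = damage;
          if (damage > max_d) max_d = damage;
      }
      /* A ~4x input range times a ~3x input range yields a ~12x output
         range: each modelling stage multiplies the uncertainty. */
      printf("simulated damage range: %.0f to %.0f $trillion\n", min_d, max_d);
      return 0;
  }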


Are We Ready For An Implant That Can Change Our Moods? (npr.org)

Richard Stein <rmstein@ieee.org>
Sun, 31 Mar 2019 07:54:31 +0800
https://www.npr.org/sections/health-shots/2019/03/29/707883163/are-we-ready-for-an-implant-that-can-change-our-moods

"The idea of changing the brain for the better with electricity is not new,
but deep brain stimulation takes a more targeted approach than the
electroconvulsive therapy introduced in the 1930s. DBS seeks to correct a
specific dysfunction in the brain by introducing precisely timed electric
pulses to specific regions. It works by the action of a very precise
electrode that is surgically inserted deep in the brain and typically
controlled by a device implanted under the collarbone. Once in place,
doctors can externally tailor the pulses to a frequency that they hope will
fix the faulty circuit."

Recall the book "The Danger Within Us: America's Untested, Unregulated
Medical Device Industry and One Man's Battle to Survive It" by Jeanne Lenzer
which discusses vagus nerve stimulator implant failure. See
http://catless.ncl.ac.uk/Risks/30/53#subj1.1

Without a randomized control trial to validate device efficacy, a cranial
implant faces significant obstacles to achieve regulatory approval, gain
widespread acceptance, and become commercially viable.  Volunteers will be
difficult to attract.


Researchers Find Google Play Store Apps Were Actually Government Malware (Motherboard)

the keyboard of geoff goodfellow <geoff@iconia.com>
March 30, 2019 at 09:41:01 EDT
Security researchers have found a new kind of government malware that was
hiding in plain sight within apps on Android's Play Store. And they appear
to have uncovered a case of lawful intercept gone wrong.

Hackers working for a surveillance company infected hundreds of people with
several malicious Android apps that were hosted on the official Google Play
Store for months, Motherboard has learned.

In the past, both government hackers and those working for criminal
organizations have uploaded malicious apps to the Play Store. This new case
once again highlights the limits of Google's filters that are intended to
prevent malware from slipping onto the Play Store. In this case, more than
20 malicious apps went unnoticed by Google over the course of roughly two
years.

Motherboard has also learned of a new kind of Android malware on the Google
Play store that was sold to the Italian government by a company that sells
surveillance cameras but was not known to produce malware until now. Experts
told Motherboard the operation may have ensnared innocent victims as the
spyware appears to have been faulty and poorly targeted. Legal and law
enforcement experts told Motherboard the spyware could be illegal.

The spyware apps were discovered and studied in a joint investigation by
researchers from Security Without Borders, a non-profit that often
investigates threats against dissidents and human rights defenders, and
Motherboard. The researchers published a detailed, technical report of their
findings on Friday.

"We identified previously unknown spyware apps being successfully uploaded
on Google Play Store multiple times over the course of over two years. These
apps would remain available on the Play Store for months and would
eventually be re-uploaded," the researchers wrote.

Lukas Stefanko, a researcher at security firm ESET, who specializes in
Android malware but was not involved in the Security Without Borders
research, told Motherboard that it's alarming, but not surprising, that
malware continues to make its way past the Google Play Store's filters.

"Malware in 2018 and even in 2019 has successfully penetrated Google Play's
security mechanisms. Some improvements are necessary; Google is not a
security company, maybe they should focus more on that."

MEET EXODUS

In an apparent attempt to trick targets into installing them, the spyware apps
were designed to look like harmless apps to receive promotions and marketing
offers from local Italian cellphone providers, or to improve the device's
performance...

https://motherboard.vice.com/en_us/article/43z93g/hackers-hid-android-malware-in-google-play-store-exodus-esurv


Office Depot Pays $25 Million To Settle Deceptive Tech Support Lawsuit (Bleeping Computer)

Gabe Goldberg <gabe@gabegold.com>
Sun, 31 Mar 2019 00:10:49 -0400
https://www.bleepingcomputer.com/news/security/office-depot-pays-25-million-to-settle-deceptive-tech-support-lawsuit/


Why Pedestrian Deaths Are At A 30-Year High (NPR)

Monty Solomon <monty@roscom.com>
Sun, 31 Mar 2019 19:05:29 -0400
https://www.npr.org/2019/03/28/706481382/why-pedestrian-deaths-are-at-a-30-year-high


More on the RISKS.ORG Newcastle certificate issue

Lindsay Marshall <Lindsay.Marshall@ncl.ac.uk>
Tue, 2 Apr 2019 09:05:36 +0000
The certificate expiration issue for catless at Newcastle was a little more
complicated than it might appear. catless.ncl.ac.uk exists only on a gateway
machine that forwards all calls to another machine that is not visible to
the outside world. (This causes its own problems (e.g., logging), but they
are not relevant here.) This gateway machine is not under my control, so I
am out of the loop wrt things like certificates. Certificate expiration
should not be a huge problem for the RISKS site, though, as it does not
really need an HTTPS connection for safe operation.  However, the RISKS site
is set up to be highly cacheable and to use a variety of other security
features, as I use it in my lectures to demonstrate these features. (See
https://redbot.org/?uri=https://catless.ncl.ac.uk/risks if you want the gory
details.) Recently I added the HSTS Strict-Transport-Security header, and,
as recommended in various places, set a long expiry date—after all, it is
not as if I were going to change my mind.  This does mean, though, that if
your certificate expires, browsers will neither let you reach the site over
HTTPS nor fall back to HTTP, which is indeed what happened—and they do not
provide useful error messages when this happens either. In the end I used
lynx to browse to the site, got a sensible `certificate expired' error
message, and put in a ticket for a new certificate.  Currently HSTS is not
enabled, though many of your browsers will be remembering that it is for a
long time. It's a difficult call whether to re-enable it with a short expiry
time, go back to what it was and keep an eye on the certificate, or just
turn it off.
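For readers who want to see what a site currently promises, a small C
program using libcurl can issue a HEAD request and print any
Strict-Transport-Security header that comes back. This is a generic
illustration, not part of the catless setup; note that an expired
certificate surfaces as a verification error from curl_easy_perform, with a
rather clearer message than most browsers offer.

  /* Build: cc hsts_check.c -lcurl */
  #include <stdio.h>
  #include <string.h>
  #include <strings.h>
  #include <curl/curl.h>

  /* Print any Strict-Transport-Security header we receive. */
  static size_t on_header(char *buf, size_t size, size_t nitems, void *userdata) {
      size_t len = size * nitems;
      (void)userdata;
      if (len > 26 && strncasecmp(buf, "strict-transport-security:", 26) == 0)
          fwrite(buf, 1, len, stdout);
      return len;
  }

  int main(void) {
      CURL *curl = curl_easy_init();
      if (!curl) return 1;
      curl_easy_setopt(curl, CURLOPT_URL, "https://catless.ncl.ac.uk/risks");
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);  /* HEAD: headers only */
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
      CURLcode rc = curl_easy_perform(curl);
      if (rc != CURLE_OK)  /* e.g., an expired or invalid certificate */
          fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));
      curl_easy_cleanup(curl);
      return rc == CURLE_OK ? 0 : 1;
  }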


Insurers Creating a Consumer Ratings Service for Cybersecurity Industry (WSJ)

Monty Solomon <monty@roscom.com>
Wed, 27 Mar 2019 07:35:46 -0400
Collaborative effort led by Marsh & McLennan would score best products for
reducing hacking risk

https://www.wsj.com/articles/insurers-creating-a-consumer-ratings-service-for-cybersecurity-industry-11553592600


Another Gigantic Leak

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 28 Mar 2019 17:12:01 PDT
  [Courtesy of Steve Cheung]

Yet another gigantic data leak. When will the companies ever learn to
protect our data?

https://nakedsecurity.sophos.com/2019/03/12/researchers-disagree-on-volume-of-exposed-verificationsio-records/


Nokia phones caught mysteriously sending data to Chinese servers

geoff goodfellow <geoff@iconia.com>
Fri, 22 Mar 2019 12:29:12 -0700
Nokia fans waited for years for the first Nokia-Android handsets to arrive,
and it finally happened two years ago, when HMD Global unveiled its first
Nokia 6 handset, after acquiring the right to use the brand. Since then, HMD
unveiled a variety of Nokia handsets, culminating with the Nokia 9 PureView
a few weeks ago.

However, the old Nokia has nothing to do with the Nokia phones we're seeing
today, and all these devices are made in China by Foxconn. This brings us
to HMD's first China-related issue, as some Nokia phones have apparently
sent data to servers in the region without consent from users.

A Reuters report says that Finland will investigate the HMD phones, looking
at whether they breached data rules. It all started with Norwegian public
broadcaster NRK, which reported the breach on Thursday. A Nokia 7 Plus
owner was told that his phone contacted a particular server, sending data
packages in an unencrypted format.

According to NRK, Nokia had admitted that "an unspecified number of Nokia 7
Plus phones had sent data to the Chinese server," without disclosing who
owned the server. [...]

https://bgr.com/2019/03/21/nokia-data-breach-nokia-7-plus-sent-data-to-chinese-servers/


IBM + Flickr + facial recognition + privacy (Fortune)

Gabe Goldberg <gabe@gabegold.com>
Thu, 28 Mar 2019 20:16:14 -0400
The recent news that *IBM* used more than a million photos posted on
*Flickr* to train its facial recognition A.I. software set off alarm bells
among privacy advocates. But that incident may be just the tip of the
iceberg. /Fortune's/ Jeff John Roberts takes a deep dive into the facial
recognition software industry
https://click.newsletters.fortune.com/?qs=1f0ecb70e95268eea724df168d469d274ab14567e21a860ab397e6d4d4517b14e839788d7e2e310de3acd3c59a09a6f21b6bf53348fbd781
where startups created photo sharing apps for smartphones to lure consumers
into sharing their pictures.

  "We have consumers who tag the same person in thousands of different
  scenarios. Standing in the shadows, with hats-on, you name it," says Doug
  Aley, the CEO of Ever AI, a San Francisco facial recognition startup that
  launched in 2012 as EverRoll, an app to help consumers manage their
  bulging photo collections. Ever AI, which has raised $29 million from
  Khosla Ventures and other Silicon Valley venture capital firms, entered
  NIST's most recent facial recognition competition, and placed second in
  the contest's "Mugshots" category and third in "Faces in the Wild." Aley
  credits the success to the company's immense photo database, which Ever AI
  estimates to number 13 billion images.


Brits: Huawei's code is a steaming pile...

Henry Baker <hbaker1@pipeline.com>
Thu, 28 Mar 2019 08:09:19 -0700
In short, Huawei's SW is just as crappy as everyone else's, because it was
developed by coders who learned by copying the crappy coding practices they
found in earlier versions of Unix/Linux, and who were highly selected
through programming tests which could only be passed by adhering to these
same practices (Google early Microsoft programming tests, e.g.).  Yes,
better practices are now being developed in select universities and
companies, but there are still lots of textbooks out there which teach
unsafe coding styles.

I'm not trying to excuse Huawei, but I'm not certain that any other device
vendor could pass muster, either.  For example, what sort of coding style is
going to protect against Rowhammer?  Spectre?

We need to develop safer HW & SW technologies, and then we need to
completely rewrite several *generations'* worth of bad software.

"There were over 5000 direct invocations of 17 different safe memcpy()-like
functions and over 600 direct invocations of 12 different unsafe
memcpy()-like functions.  Approximately 11% of the direct invocations of
memcpy()-like functions are to unsafe variants."

"There were over 1400 direct invocations of 22 different safe
strcpy()-like functions and over 400 direct invocations of 9 different
unsafe strcpy()-like functions.  Approximately 22% of the direct
invocations of strcpy()-like functions are to unsafe variants."

"There were over 2000 direct invocations of 17 different safe sprintf()-like
functions and almost 200 direct invocations of 12 different unsafe
sprintf()-like functions.  Approximately 9% of the direct invocations of
sprintf()-like functions are to unsafe variants."

https://www.theregister.co.uk/2019/03/28/hcsec_huawei_oversight_board_savaging_annual_report/

Huawei savaged by Brit code review board over pisspoor dev practices

HCSEC pulls no technical punches in annual report

By Gareth Corfield 28 Mar 2019 at 12:44

Britain's Huawei oversight board has said the Chinese company is a threat to
British national security after all—and some existing mobile network
equipment will have to be ripped out and replaced to get rid of said threat.

"The work of HCSEC [Huawei Cyber Security Evaluation Centre]... reveals
serious and systematic defects in Huawei's software engineering and cyber
security competence," said the HCSEC oversight board in its annual report,
published this morning.

HCSEC—aka The Cell—based in Banbury, Oxfordshire, allows UK spy crew
GCHQ access to Huawei's software code to inspect it for vulns and backdoors.

The oversight folk added: "Work has continued to identify concerning issues
in Huawei's approach to software development bringing significantly
increased risk to UK operators, which requires ongoing management and
mitigation."

While the report itself does not identify any Chinese backdoors, which is
the current American tech bogeyman du jour, it highlights technical and
security failures in Huawei's development processes and attitude towards
security for its mobile network equipment.

https://www.gov.uk/government/publications/huawei-cyber-security-evaluation-centre-oversight-board-annual-report-2019

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790270/HCSEC_OversightBoardReport-2019.pdf

"In some cases, remediation will also require hardware replacement (due to
CPU and memory constraints) which may or may not be part of natural operator
asset management and upgrade cycles...  These findings are about basic
engineering competence and cyber security hygiene that give rise to
vulnerabilities that are capable of being exploited by a range of actors."

Even though Huawei has talked loudly about splurging $2bn on software
development, heavily hinting that this would include security fixes, HCSEC
scorned this.  Describing the $2bn promise as "no more than a proposed
initial budget for as yet unspecified activities", HCSEC said it wanted to
see "details of the transformation plan and evidence of its impact on
products being used in UK networks before it can be confident it will drive
change" before giving Huawei the green light.

The report's findings had been telegraphed long in advance by British
government officials, who have been waging war with Huawei through the
medium of press briefings.

Amateurs in a world desperately needing professionals

One key problem highlighted by the HCSEC oversight board was "binary
equivalence", a problem Huawei has been relatively open about.  HCSEC
testers had previously flagged up problems with not knowing whether the
binaries they were inspecting for Chinese government backdoors were
compilable into firmware equivalent to what was deployed in live production
environments.  Essentially, the concern is that software would behave
differently when installed in the UK's telecoms networks than it did during
HCSEC's tests.

In today's report, the Banbury centre team said: "Work to validate them by
HCSEC is still ongoing but has already exposed wider flaws in the underlying
build process which need to be rectified before binary equivalence can be
demonstrated at scale."

"Unless and until this is done it is not possible to be confident that the
source code examined by HCSEC is precisely that used to build the binaries
running in the UK networks."

HCSEC also highlighted something The Register exclusively revealed precise
details of this morning, saying: "It is difficult to be confident that
vulnerabilities discovered in one build are remediated in another build
through the normal operation of a sustained engineering process."

It also criticised Huawei's "configuration management improvements",
pointing out that these haven't been "universally applied" across product
and platform development groups.  Huawei's use of "an old and soon-to-be out
of mainstream support version" of an unnamed real time operating system
(RTOS) "supplied by a third party" was treated to some HCSEC criticism, even
though Huawei bought extended support from the RTOS's vendor.

HCSEC said: "The underlying cyber security risks brought about by the single
memory space, single user context security model remain," warning that
Huawei has "no credible plan to reduce the risk in the UK of this real time
operating system."

OpenSSL is used extensively by Huawei—and in HCSEC's view perhaps too
extensively:

"In the first version of the software, there were 70 full copies of 4
different OpenSSL versions, ranging from 0.9.8 to 1.0.2k (including one from
a vendor SDK) with partial copies of 14 versions, ranging from 0.9.7d to
1.0.2k, those partial copies numbering 304.  Fragments of 10 versions,
ranging from 0.9.6 to 1.0.2k, were also found across the codebase, with
these normally being small sets of files that had been copied to import some
particular functionality."

Even after HCSEC threw a wobbly and told Huawei to sort itself out pronto,
the Chinese company still came back with software containing "code that is
vulnerable to 10 publicly disclosed OpenSSL vulnerabilities, some dating
back to 2006."

Huawei also struggles to stick to its own secure coding guidelines' rules on
memory handling functions, as HCSEC lamented:

"Analysis of relevant source code worryingly identified a number
pre-processor directives of the form

'#define SAFE_LIBRARY_memcpy(dest,destMax,src,count) memcpy(dest,src,count)',

which redefine a safe function to an unsafe one, effectively removing any
benefit of the work done to remove the unsafe functions."

"This sort of redefinition makes it harder for developers to make good
security choices and the job of any code auditor exceptionally hard," said
the government reviewers.
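To make the complaint concrete: a safe memcpy()-like variant exists to
enforce the destination bound that plain memcpy() ignores, and the quoted
redefinition throws that bound away at every call site. The C sketch below
is a guess at the intent; the checked variant is modeled on memcpy_s-style
bounds checking, not on Huawei's actual wrapper, which the report does not
reproduce.

  #include <stdio.h>
  #include <string.h>

  /* What a safe variant is supposed to add: refuse to copy more bytes
     than the destination can hold, and report the error instead. */
  static int checked_memcpy(void *dest, size_t destMax,
                            const void *src, size_t count) {
      if (dest == NULL || src == NULL || count > destMax)
          return -1;
      memcpy(dest, src, count);
      return 0;
  }

  /* The directive quoted above silently discards destMax, so every
     "checked" call site degrades to a plain, unchecked memcpy(). */
  #define SAFE_LIBRARY_memcpy(dest, destMax, src, count) memcpy(dest, src, count)

  int main(void) {
      char small[8];
      const char big[16] = "AAAAAAAAAAAAAAA";

      if (checked_memcpy(small, sizeof small, big, sizeof big) != 0)
          puts("checked variant: oversized copy rejected");

      /* Compiles and "works" if uncommented, but writes 16 bytes into
         an 8-byte buffer:
         SAFE_LIBRARY_memcpy(small, sizeof small, big, sizeof big);  */
      return 0;
  }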

In a statement issued this morning Huawei appeared not to be overly bothered
about these and the other detailed flaws revealed by NCSC, saying that it
"understands these concerns and takes them very seriously".  It added: "A
high-level plan for the [software development transformation] programme has
been developed and we will continue to work with UK operators and the NCSC
during its implementation to meet the requirements created as cloud,
digitization, and software-defined everything become more prevalent."

Commenting on the NCSC's vital conclusion that none of these cockups were
the fault of the Chinese state's intelligence-gathering organs, Rob
Pritchard of the Cyber Security Expert told The Register: "I think this
presents the UK government with an interesting dilemma—the HCSEC was set
up essentially because of concerns about threats from the Chinese state to
UK CNI (critical national infrastructure).  Finding general issues is a good
thing, but other vendors are not subject to this level of scrutiny.  We have
no real (at least not this in depth) assurance that products from rival
vendors are more secure."


More on the Swiss electronic voting experiment (Post—Swiss)

"Peter G. Neumann" <neumann@csl.sri.com>
Fri, 29 Mar 2019 9:58:25 PDT
https://www.post.ch/fr/notre-profil/entreprise/medias/communiques-de-presse/2019/la-poste-suspend-l-exploitation-de-son-systeme-de-vote-electronique-pour-une-duree-determinee


'The biggest, strangest problem I could find to study' (bbc.com)

Richard Stein <rmstein@ieee.org>
Wed, 27 Mar 2019 18:28:02 +0800
https://www.bbc.com/news/technology-47158067

Discusses Andrew Morris' efforts to profile cybertheft intrusion patterns
using honeypots. Tallyho!

"In 2018, Mr Morris's network was hit by up to four million attacks a day.
His honey-pot computers process between 750 and 2,000 connection requests
per second - the exact rate depends on how busy the bad guys are at any
given moment.

"His analysis shows that only a small percentage of the traffic is benign.

"That fraction comes from search engines indexing websites or organisations
such as the Internet Archive scraping sites. Some comes from security
companies and other researchers.

"The rest of the Internet's background noise—about 95%—is malicious."


Black-box data shows anti-stalling feature engaged in Ethiopia crash (WashPost)

Monty Solomon <monty@roscom.com>
Fri, 29 Mar 2019 23:46:04 -0400
https://www.washingtonpost.com/local/trafficandcommuting/black-box-data-shows-anti-stalling-feature-was-engaged-in-ethiopia-crash/2019/03/29/2d231ebc-5238-11e9-88a1-ed346f0ec94f_story.html


The emerging Boeing 737 MAX scandal, explained (Vox)

Drew Dean <drew.dean@sri.com>
Fri, 29 Mar 2019 18:58:04 +0000
https://www.vox.com/business-and-finance/2019/3/29/18281270/737-max-faa-scandal-explained


Re: How a 50-year-old design came back... (Burton, RISKS-31.13)

David Brodbeck <david.m.brodbeck@gmail.com>
Fri, 29 Mar 2019 18:58:00 -0700
> I also understand that the Stealth Bomber is such a complex shape that it
> can only be flown by software.

This is true of most fighter aircraft designed since the mid-70s, although
it doesn't exactly have to do with shape complexity. Civilian transport
aircraft have aerodynamic features that make them dynamically stable—this
allows humans to fly them directly, because any divergence from straight and
level happens on a time scale humans can react to. However, those same
aerodynamic features make them less maneuverable, which is undesirable in a
fighter.

The solution is to let a computer fly the airplane, because it can react
fast enough to stabilize it. The human is then actually maneuvering a
synthetic "flight model" in the computer, which the computer attempts to
make the real airplane match.

The F-16 was the first fighter to use this kind of "relaxed stability"
system. It originally used a quadruply-redundant analog system.

Prior art would be birds, which for efficiency reasons are dynamically
unstable, especially in pitch and yaw. They've had a lot more development
time to work out the bugs, however. ;)
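A one-dimensional toy model makes the timing argument concrete. In the C
sketch below (numbers invented for illustration, nothing to do with any real
flight control law), a deviation that grows 10% per 20 ms step blows up more
than a hundredfold within a second, well inside human reaction time, while a
small proportional correction applied every step damps it out.

  #include <stdio.h>

  int main(void) {
      /* x = deviation from level flight; the unstable airframe
         amplifies it by 10% every 20 ms time step. */
      double x_open = 0.01, x_closed = 0.01;
      const double growth = 1.10;
      const double gain = 0.15;  /* chosen so that growth - gain < 1 */

      for (int step = 1; step <= 50; step++) {  /* 50 steps = 1 second */
          x_open = growth * x_open;                        /* no correction */
          x_closed = growth * x_closed - gain * x_closed;  /* corrected each step */
      }
      printf("after 1 s  uncorrected: %.2f   with feedback: %.4f\n",
             x_open, x_closed);
      return 0;
  }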


Re: How Google's Bad Data Wiped a Neighborhood off the Map (Medium) (RISKS-31.14)

Dan Jacobson <jidanni@jidanni.org>
Thu, 28 Mar 2019 19:15:03 +0800
Well I bet China's name is still not back on OpenStreetMap,
https://www.openstreetmap.org/#map=3/34.05/93.16
by the time the RISKS reader reads this, despite
https://github.com/gravitystorm/openstreetmap-carto/issues/3725
https://github.com/openstreetmap/chef/issues/184


Re: Tweet by Soldier of FORTRAN on Twitter (RISKS-31.14)

Dan Jacobson <jidanni@jidanni.org>
Thu, 28 Mar 2019 19:19:54 +0800
> you're right! They changed the password to `********'

See also https://crbug.com/924903
"Password filler learns the asterisks version of the password"


Re: Unproven declarations about healthcare (Black and Douglass, RISKS-31.14)

Martin Ward <martin@gkc.org.uk>
Mon, 1 Apr 2019 12:16:58 +0100
>  Are there studies to support

There are many studies:

On average, other wealthy countries spend about half as much per person on
health than the US spends:

https://www.healthsystemtracker.org/chart-collection/health-spending-u-s-compare-countries/

But the US generally lags behind comparable countries in prevention and
other measures of quality, and has by far the highest rates of cost-related
access problems:

https://www.healthsystemtracker.org/brief/measuring-the-quality-of-healthcare-in-the-u-s/

Medical bills were the biggest cause of U.S. bankruptcies:

https://www.thebalance.com/medical-bankruptcy-statistics-4154729

A few minutes with Google will uncover many more studies.

> For instance, "... the more sick people there are (especially those that
> need expensive treatments), the more profit there is to be made." For the
> same premiums, insurance companies *far* prefer healthy clients to sick
> ones.

Doctors and hospitals make more money from sick people, and insurance
companies only prefer healthy clients if they are prevented from raising
premiums for people with pre-existing conditions. Take away this pesky
Government intervention, and insurance companies will also prefer sick
people: since they can charge higher premiums and make more profit per
person.

> "Managing symptoms is more profitable than curing a disease;" Really?
> Perhaps Big Pharma makes little on cough medicine, but has a tidy margin on
> treatments for TB.

The total sales value of OTC cough, cold and sore throat treatments reached
460 million British pounds in 2018. Not to be sneezed at!  There were 5,664
TB cases in England in 2016, and the average cost to treat drug-susceptible
TB was about 7,200 pounds: around 41 million pounds in total (5,664 * 7,200),
so the TB cure costs less than 1/10th of the cost of cough medicine symptom
management.

But we should compare like with like: before antibiotics were discovered and
TB could be cured, symptom management involved a prolonged stay in an
expensive sanatorium in the Swiss Alps, which obviously costs a lot more in
the long term than a course of antibiotics.

> "Expensive drugs are more profitable than, for example, recommending simple
> changes to diet ..." Sadly, few Americans follow recommendations to change
> their diet. Americans *will* take pills.

And there is a vast advertising and lobbying system in place,
costing billions of dollars per year, to ensure that it stays this way!

> "... encouraging unhealthy habits is beneficial to a healthcare company."
> My insurance company and the mailers I get from hospitals and doctors all
> encourage me to have healthy habits.

Well, they feel obliged to pay lip service to "healthy habits".  As I said:
it would be seen as a bit *too* obviously cynical to heavily advertise and
subsidise tobacco. But they *did* manage to heavily advertise and
over-prescribe opioids (which are far more dangerous and more addictive than
tobacco), resulting in the current "opioid crisis"
(https://www.drugabuse.gov/drugs-abuse/opioids/opioid-overdose-crisis), the
treatment for which involves prescribing more of these expensive opioids to
patients who would otherwise be healthy.

The Centers for Disease Control and Prevention estimates that the total
"economic burden" of prescription opioid misuse alone in the United
States is $78.5 billion a year.

https://www.drugabuse.gov/related-topics/trends-statistics/overdose-death-rates

> Government-run medicine is no panacea. The U.S. federal government has been
> incredibly wasteful and has not always picked winners, for instance, the
> Tuskegee Syphilis Study and the Enron scandal.

On 26/03/19 23:07, Toby Douglass wrote:
> All patients -must- pay (taxation) and if the service is no good, there is
> nowhere else for them to go

In a country which has some form of democracy, the public have the means
to pressurise the Government to improve the health care system.
On the other hand, if a company has a monopoly on a particular drug
or treatment, then they can charge "whatever the market will bear".
There is nowhere else for the sufferer to go.

The best way to get good health care is to take people who
are passionate about caring for others (fortunately there are
many such people to be found) and give them the freedom
to do what they love doing. People who are motivated primarily
by money do not necessarily make the best doctors and nurses.
A public healthcare system is at least *supposed* to put the care
of the public as first priority. A for-profit system necessarily
*must* put the maximisation of profit as first priority.
These two priorities often clash, as many studies have shown.


Unproven declarations about healthcare (Re: Black, RISKS-31.15)

Wols Lists <antlists@youngman.org.uk>
Wed, 27 Mar 2019 14:37:30 +0000
On 26/03/19 23:03, RISKS List Owner wrote:
> "... encouraging unhealthy habits is beneficial to a healthcare company."
> My insurance company and the mailers I get from hospitals and doctors all
> encourage me to have healthy habits.

And how do you define healthy habits? The standard advice for people
with type II diabetes is to eat little and often, but my medical
research has convinced me that eating little and often *causes* type II
diabetes.

The original study on fats in diets is now widely recognised as flawed,
and indeed all the early "eat margarine not butter" campaigns ended up
with people dosing themselves very heavily with trans-fats, which is now
recognised as being very *un*healthy.

The problem is that much of what we are led to believe is "fake news"
from the media (as mentioned elsewhere in this digest!) where
journalists who have no real grasp of the subject grab a snippet of
news, run with it, and watch it take on a life of its own that bears no
resemblance to reality. Doctors and insurance companies are not immune
to being taken in.

What's that quote? "A lie can make it half way round the world before
the truth can get its boots on"? People believe what they want to
believe, and actually it's extremely hard to spot when reason and your
own prejudices clash. When fed stuff that matches your prejudices, you
will normally believe it without thinking, and I'm convinced much
"healthy habits advice" is old wives tales ...


Re: Is curing patients, a sustainable business model? (Douglass, RISKS-31.14)

Dmitri Maziuk <dmaziuk@bmrb.wisc.edu>
Wed, 27 Mar 2019 09:56:37 -0500
One small problem with competition is that once your populace is no
longer constrained by oceans and absence of information, you have to
compete with e.g., these guys:
https://www.treatmentabroad.com/destinations/ukraine/why-choose-ukraine

And these guys:
https://news.co.cr/need-know-dental-tourism-costa-rica/68797/

And of course these guys: https://en.wikipedia.org/wiki/Sicko


According to this bank, password managers are bad

Sheldon Sheps <sheldon10101@gmail.com>
Thu, 28 Mar 2019 20:03:13 -0400
Hard to believe but true.

   -------- begin --------

Canada's banking system has a few big banks. One of them is the Bank of
Montreal (BMO). I have a credit card with them. Recently, I got an email
from them on keeping your account secure online.

They suggested that you change your password every 6 months. I wrote back
suggesting that was a bad idea, and that the bank, which supplies IBM's
Trusteer service for free, consider providing a password manager.
Amazingly, I got a reply.
reply.

Here is part of their reply, edited for space.

  I appreciate your concern about being prompted to change your password.

I can advise that it is important that you create an online password that
adequately protects your account and personal information. A longer, more
complex password is less susceptible to being compromised and will provide
you with greater security...

I can also advise that there are several programs and browser options that
can store your Internet passwords and user identifications for you.  BMO
Bank of Montreal does not recommend this feature, as it poses a potential
security risk. Passwords are confidential and as a security measure we
suggest that you do not save them.

The Keepass password manager master password I use for credit cards and
banking info is 26 characters long. It isn't written down anywhere.  BMO
wants me to memorize a strong password that should be different from all my
other strong passwords and one I have to change every 6 months.

I think that is ridiculous.

   -------- end --------

  [The entire correspondence (PGN-pruned) is illuminating, but much too long
  for RISKS.  Contact Sheldon if you are interested.  PGN]


"Privacy and Security Across Borders" (Jen Daskel)

Marc Rotenberg <rotenberg@epic.org>
Mon, 1 Apr 2019 16:01:50 -0400
Jen Daskal, Yale Law Journal, 1 Apr 2019

Abstract: Three recent initiatives—by the United States, European Union,
and Australia—are opening salvos in what will likely be an ongoing and
critically important debate about law enforcement access to data, the
jurisdictional limits to such access, and the rules that apply. Each of
these developments addresses a common set of challenges posed by the
increased digitalization of information, the rising power of private
companies delimiting access to that information, and the cross-border nature
of investigations that involve digital evidence. And each has profound
implications for privacy, security, and the possibility of meaningful
democratic accountability and control.

Please report problems with the web pages to the maintainer
