The RISKS Digest
Volume 30 Issue 93

Saturday, 1st December 2018

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Belfast plane incident could have been 'catastrophic'
BBC News
Indonesian JT610 Flight Data
Robert Dorsett
China Copied This Russian Jet Fighter (And It Has All Sorts of Problems)
Yahoo
Medical device rules need 'drastic change' to protect patients
BBC
Marriott discloses massive data breach affecting up to 500M guests
WashPost
The US Postal Service exposed data of 60 million users
TechCrunch
Constructive software engineering?
Tom Van Vleck
Israeli artificial intelligence company improves highway safety in Las Vegas
The Times of Israel
Potentially Disastrous Rowhammer Bitflips Can Bypass ECC Protections
Dan Goodin
Climate Change and the Savage Human Future
NYTimes
Now it's Office's turn to have a load of patches pulled
Ars Technica
Windows 10 October 2018 Update is back, this time without deleting your data
Ars Technica
E-commerce site is infected not by one, but two card skimmers
Ars Technica
Report: Uber self-driving team was preparing for CEO demo before fatal crash
Ars Technica
The Snowden Legacy, part one: What's changed, really?
Ars Technica
Christmas spirit triumphs over data law
CNN
Apple pitches 9M VA medical records on iPhone format
Fortune
A Clearer Message on Cochlear Implants
NY Times
This new weapon alerts police as soon as it's fired
WashPost
How The Wall Street Journal is preparing its journalists to detect deepfakes
NiemanLab.Org
Huron Daily Tribune reporter Brenda Battel fired over voicemail for Republican candidate
John James
WashPost
You snooze, you lose: Insurers make the old adage literally true
Ars Technica
GMail's spam filter is getting vicious?
Rob Slade
FCC Launches New Offensive Against Scam, Robo Calls
EWeek
Who lives with you? Facebook seeks to patent software to figure out profiles of households
Los Angeles Times
This bill includes prison for CEOs who fail to take consumer privacy seriously
Los Angeles Times
Can The Police Remotely Drive Your Stolen Car Into Custody?
Slashdot
Free Software Messiah Richard Stallman: We Can Do Better Than Bitcoin
CoinDesk
Mobile Application/Social Media Addiction Freedom Experiment
TechCrunch.com and The Economist
China Creating Gene-Edited Babies
MIT Technology Review
British Parliament seizes internal Facebook documents by threatening to jail a different CEO
Rob Slade
The Dangerous Junk Science of Vocal Risk Assessment
The Intercept
Can The Police Remotely Drive Your Stolen Car [or you?] Into Custody?
Slashdot
LinkedIn used 18 million non-user e-mails to target Facebook ads
The Verge
Study: Smart Speakers Make Passive Listeners
Melanie Lefkowitz
Re: 670 ballots in a precinct with 276 voters
David Tarabar
Re: Russia suspected of jamming GPS signal in Finland
Henry Baker
Re: Japan cybersecurity minister admits he has never used a computer
Attila the Hun
Re: Tesla
Attila the Hun
Re: Awful AI is a curated list to track current scary usages of AI
Amos Shapir
Re: `The Cleaners' Looks At Who Cleans Up The Internet's Toxic Content
NPR
Book review: EFF's The End of Trust
David Strom
Info on RISKS (comp.risks)

Belfast plane incident could have been 'catastrophic' (BBC News)

Jose Maria Mateos <chema@rinzewind.org>
Thu, 22 Nov 2018 07:00:12 -0500
https://www.bbc.com/news/uk-northern-ireland-46297710

The report stated that an outside air temperature of -52C was mistakenly
entered into the Flight Management Computer by a crew member, instead of the
actual temperature of 16C.

"This, together with the correctly calculated assumed temperature thrust reduction of 48C, meant the aircraft engines were delivering only 60% of their maximum rated thrust," continued the report.

The plane took off from the airport with "insufficient power to meet regulated performance requirements" and struck the light.

Crew on the flight did not recognise the issue until they reached the end of the runway.
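
  [The failure mode here is a single mistyped value flowing unchecked into
  the takeoff performance calculation. A minimal sketch of the kind of
  plausibility cross-check that could flag such an entry; the function name,
  the sensed-temperature input, and the 10-degree threshold are illustrative
  assumptions, not features of any actual Flight Management Computer:

  # Hypothetical sketch: cross-check a crew-entered outside air temperature
  # (OAT) against an independently sensed value before it feeds the takeoff
  # thrust calculation.  Names and the 10-degree threshold are illustrative.

  def check_entered_oat(entered_oat_c: float, sensed_oat_c: float,
                        max_discrepancy_c: float = 10.0) -> float:
      """Return the entered OAT if plausible, otherwise raise for re-entry."""
      if not -60.0 <= entered_oat_c <= 55.0:
          raise ValueError(f"Entered OAT {entered_oat_c} C is outside the plausible range")
      if abs(entered_oat_c - sensed_oat_c) > max_discrepancy_c:
          raise ValueError(f"Entered OAT {entered_oat_c} C disagrees with sensed "
                           f"{sensed_oat_c} C by more than {max_discrepancy_c} C")
      return entered_oat_c

  # The Belfast values: -52 C entered, 16 C actual -- the check would flag it.
  try:
      check_entered_oat(entered_oat_c=-52.0, sensed_oat_c=16.0)
  except ValueError as e:
      print("Rejected:", e)
  ]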


Indonesian JT610 Flight Data

Robert Dorsett <rdd@dorsett.us>
Sun, 25 Nov 2018 22:23:33 -0600
https://www.satcom.guru/2018/11/first-look-at-jt610-flight-data.html%3Fm%3D1


China Copied This Russian Jet Fighter (And It Has All Sorts of Problems) (yahoo.com)

Richard Stein <rmstein@ieee.org>
Fri, 23 Nov 2018 16:59:39 +0800
https://www.yahoo.com/news/china-copied-russian-jet-fighter-125900746.html

Risk: "Copy and paste" a fighter jet is similar to "copy and paste" for
software: new defects can emerge.


Medical device rules need 'drastic change' to protect patients (bbc.com)

Richard Stein <rmstein@ieee.org>
Mon, 26 Nov 2018 13:24:40 +0800
https://www.bbc.com/news/health-46337937

A cautionary and balanced essay on medical device risks and regulatory
reform within the EU.


Marriott discloses massive data breach affecting up to 500M guests

Lauren Weinstein <lauren@vortex.com>
Fri, 30 Nov 2018 09:20:58 -0800
via NNSquad

https://www.washingtonpost.com/business/2018/11/30/marriott-discloses-massive-data-breach-impacting-million-guests/

  Security experts also questioned the extent and quality of the encryption
  used by Marriott. The news release specified that the company used
  encryption to protect credit card numbers, but the company did not specify
  whether other personally identifiable information --including names,
  addresses, phone numbers, email addresses and passport numbers—was
  protected in this way, as security experts recommend.  The company did not
  immediately respond to a request for comment as to whether all of the data
  had been encrypted when accessed by the hackers.  The company
  acknowledged, however, a possible failing in the encryption security it
  had for credit card numbers, saying that it could not "rule out the
  possibility" that encryption keys were taken by hackers, allowing access
  to massive troves of data.


The US Postal Service exposed data of 60 million users

Lauren Weinstein <lauren@vortex.com>
Mon, 26 Nov 2018 08:54:57 -0800
via NNSquad [Grade-school level coding error]

https://techcrunch.com/2018/11/26/the-us-postal-service-exposed-data-of-60-million-users/

  A broken US Postal Service API exposed data from over 60 million users and
  allowed a researcher to pull millions of rows of data by sending wildcard
  requests to the server. The resulting security hole has been patched after
  repeated requests to the USPS.  The USPS service, called InformedDelivery,
  allows you to view your mail before it arrives at your home and offered an
  API to allow users to connect their mail to specialized services like
  CRMs. We profiled the service in 2017.  The anonymous researcher showed
  that the service accepted wildcards for many searches, allowing any user
  to see any other user's data on the site. Brian Krebs has a copy of the API
  on his site.
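
  [The hole as described combines two missing server-side checks: wildcard
  values accepted in a search parameter, and no check that the requester owns
  the records returned. A minimal sketch of those two checks, with entirely
  hypothetical names; this is not the real Informed Delivery API:

  # Hypothetical sketch of the two checks the article implies were missing:
  # (1) reject wildcard characters in a search parameter, and
  # (2) return only records belonging to the authenticated account.
  # All names are illustrative; this is not the real Informed Delivery API.

  WILDCARD_CHARS = set("*%?")

  def lookup_mail_records(authenticated_account_id, requested_account_id, records_db):
      if any(ch in WILDCARD_CHARS for ch in requested_account_id):
          raise PermissionError("wildcard queries are not allowed")
      if requested_account_id != authenticated_account_id:
          raise PermissionError("accounts may only query their own records")
      return records_db.get(requested_account_id, [])

  records_db = {"alice": ["parcel 1", "letter 2"], "bob": ["letter 3"]}
  print(lookup_mail_records("alice", "alice", records_db))   # permitted
  try:
      lookup_mail_records("alice", "*", records_db)          # rejected
  except PermissionError as e:
      print("Rejected:", e)
  ]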


Constructive software engineering?

Tom Van Vleck <thvv@multicians.org>
Fri, 23 Nov 2018 13:09:51 -0500
Think of a past disaster you've been a part of, a project that failed.
Can we learn from it?

We used to call these events "tanker collisions."  The idea was, they
were slow motion disasters; everyone could see that something terrible
was inevitable, but it was too late to do anything.

Ask yourself: was it the people, were they too dumb?  Usually the
answer is no, they were fine people, as good as you can hire.  Maybe
they weren't all geniuses, but they should have been good enough.

How about the tools: did they cause the failure?  Lots of people
complain about their tools.  But we've seen groups with really fancy
tools fail to produce, and other projects succeed with very imperfect
tools.  And "it's a poor workman who blames his tools."

Was it management?  Yeah!  Ask anybody, and they'll tell you it was
management's fault.  "Management blew it.  The project was in the weeds
and management was counting paperclips.  They didn't act in time.
They flew the plane right into the mountain."

It seems to be very hard to think about management problems.  Often,
when we decide something is a management problem, that's shorthand for
"unsolvable, not gonna go there."  As soon as the trail leads into
that thicket, we abandon it and look elsewhere for ways to make things
better.

When I look back at failed projects I know about, many seem to have
had major management problems.  But when I look at future plans, we
seem to spend our planning time on technical issues.  We don't
anticipate management problems or do anything to prevent them, no
matter how often we've had them in the past.

[We have names for a few kinds of management problems, but we have no
taxonomy or principle of enumeration.  That is, we don't know how many
ways management could go wrong, and if there is a management problem,
everybody will have a different name for it.]

Each new project sets out with the basic plan of doing new things,
using new tools, and managing things in the same way that didn't work
last time. If management is the cause of many of our problems, can we
talk about changing how we manage?

We could start by listing some approaches that won't work, and
giving them entertaining names and descriptions.

Cuisinart Management: I love metrics, when I can use them to convince
people to do the right thing.  At the same time, I worry that metrics
may become a goal in themselves, that we may spend time getting good
numbers instead of getting good quality.  The basic idea in measuring
a process is that one can add data about two different events
together.  But every bug is different, every line of code unique.  We
don't order software by the cubic yard.  And mincing all the programs,
or bugs, or tests, or whatever up in a grinder and then counting the
semicolons, or basic blocks, or paths, can lose sight of the code, and
the way it runs, and the way bugs get into the code.

Dumbo Management: Suppose the Circus Engineering Institute does a
study and determines that all the elephants that can fly are holding
little feathers.  Then it proposes to give all the big elephants
feathers too, so they'll be able to fly.  This is the problem with
process evaluations.  A good organization will (often) get a good
assessment score.  Often it is possible to change a terrible
organization to get a better score without really improving the
quality of its output.  Some organizations with organized processes
can produce good products.  The inference that the good product is
caused by the organized process needs support, in the form of an
explanation of how particular good or bad features are caused.  (Other
organizations have many rules and procedures, and still fail to
produce good products.)  Remember my story of Andre, who wrote perfect
code in pencil?  Don't buy everybody a pencil and expect perfect code.

New Communication Tool: Sometimes an organization will mandate a new
tool, hoping that this will produce better products.  Some caution is
advisable.  Management tools may focus on neatness, on "doing
everything the same way," rather than on quality.  I have worked on
projects where the development progress recording tools were so slow
and hard to use that product productivity was trashed.

Throw the Management Out: After a disaster, sometimes even part way
through one, it's common to replace the management, and permute the
organization chart.  The troops know that this rarely helps.  Why
should we expect the new managers or new structure to work any better?
Change alone may get people interested in new approaches to the
problem for a while, but there are other effects of opposite sign,
such as the cost to educate newcomers.  It's like throwing out your
pencil when you make a spelling error.

Read Parnas and Clements, "A rational design process: how and why to fake it" (IEEE Transactions on Software Engineering, Feb 1986)
https://www.researchgate.net/publication/225524076_A_rational_design_process_how_and_why_to_fake_IT


Israeli artificial intelligence company improves highway safety in Las Vegas (The Times of Israel)

Gabe Goldberg <gabe@gabegold.com>
Fri, 23 Nov 2018 19:18:40 -0500
Waycare startup platform uses in-vehicle information and municipal traffic
data to understand road conditions in real time; year-long test reduced
traffic accidents by 17%

https://www.timesofisrael.com/israeli-artificial-intelligence-company-improves-highway-safety-in-las-vegas/

Maybe addressing risks? Will it scale? Will it be hackable? We'll see.


Potentially Disastrous Rowhammer Bitflips Can Bypass ECC Protections

ACM TechNews <technews-editor@acm.org>
Mon, 26 Nov 2018 11:46:00 -0500
Dan Goodin, Ars Technica, 21 Nov 2018
via ACM TechNews, Monday, November 26, 2018

Researchers at Vrije University Amsterdam in the Netherlands found a way to
circumvent an error-correcting code (ECC) patch in high-end DDR3 memory
chips thought to prevent exploitation by the Rowhammer hack. ECC adds
sufficient redundancy to repair single bitflips in a 64-bit word, and when
two bitflips occur in a word, it causes the underlying program to crash;
when three bitflips occur in the right places, ECC can be bypassed. The team
found a timing side channel by measuring the amount of time it took to
execute certain processes to extract granular details about bitflips
occurring within the silicon. Said the researchers, "Armed with this
knowledge, we then proceeded to show that ECC merely slows down the
Rowhammer attack and is not enough to stop it." Although they acknowledged
the new exploit presents no immediate threat, the researchers said these
findings show that Rowhammer is continuously evolving and should not be
discounted.

https://orange.hosting.lsoft.com/trk/click%3Fref%3Dznwrbbrs9_6-1d4b6x218c55x070404%26
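
  [For readers unfamiliar with why one flip is repaired, two flips crash, and
  three well-placed flips can slip through, here is a toy single-error-correct,
  double-error-detect (SECDED) decoder over a 4-bit word. It is an illustrative
  sketch only; real ECC DIMMs use a (72,64) code over each 64-bit word, and this
  is not the researchers' code:

  # Toy SECDED (extended Hamming) code over a 4-bit word, illustrating the
  # behaviour described above: 1 flipped bit is silently corrected, 2 flips
  # are detected as uncorrectable (typically a crash), and 3 flips in the
  # right places decode "successfully" to the wrong data.

  def encode(data4):
      """data4 = [d1, d2, d3, d4] -> 8-bit codeword (overall parity + Hamming(7,4))."""
      d1, d2, d3, d4 = data4
      p1 = d1 ^ d2 ^ d4                     # covers codeword positions 1,3,5,7
      p2 = d1 ^ d3 ^ d4                     # covers positions 2,3,6,7
      p4 = d2 ^ d3 ^ d4                     # covers positions 4,5,6,7
      word = [p1, p2, d1, p4, d2, d3, d4]   # positions 1..7
      overall = 0
      for b in word:
          overall ^= b
      return [overall] + word               # position 0 holds the overall parity

  def decode(cw):
      """Return (status, data4); status is 'ok', 'corrected', or 'uncorrectable'."""
      overall_ok = (sum(cw) % 2 == 0)
      s = 0
      for pos in range(1, 8):               # syndrome: XOR of positions of set bits
          if cw[pos]:
              s ^= pos
      if s == 0 and overall_ok:
          return "ok", [cw[3], cw[5], cw[6], cw[7]]
      if not overall_ok:                    # odd number of flips: assume one and fix it
          fixed = cw[:]
          fixed[s] ^= 1                     # s == 0 means the overall parity bit itself
          return "corrected", [fixed[3], fixed[5], fixed[6], fixed[7]]
      return "uncorrectable", None          # even number of flips with s != 0: detected

  def flip(cw, positions):
      out = cw[:]
      for p in positions:
          out[p] ^= 1
      return out

  cw = encode([1, 0, 1, 1])
  print(decode(flip(cw, [5])))        # 1 flip  -> ('corrected', [1, 0, 1, 1])
  print(decode(flip(cw, [2, 5])))     # 2 flips -> ('uncorrectable', None)
  print(decode(flip(cw, [1, 2, 3])))  # 3 flips -> ('corrected', [0, 0, 1, 1]): wrong data
  ]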


Climate Change and the Savage Human Future (NYTimes)

geoff goodfellow <geoff@iconia.com>
Sun, 25 Nov 2018 16:15:26 -1000
Homo Sapiens Was The First Species To Alter The Environment That Sustained
Us—To The Point That It Might Not Sustain Us Anymore.
https://www.nytimes.com/interactive/2018/11/16/magazine/tech-design-nature.html

Long after the last print copy of the King James Bible has disintegrated and
the Venus de Milo has gone to powder, the glory of our civilization will
survive in misshapen, neon-flecked rocks called plastiglomerate: compounds
of sand, shells and molten plastic, forged when discarded wrappers and
bottle caps burn in beach campfires. Additional clues about the way we lived
will be found in the ubiquity of cesium-137, the synthetic isotope produced
by every nuclear detonation, and in the glacial ice (should any glaciers
remain) that will register a spike of atmospheric carbon dioxide beginning
in the Industrial Revolution. Future anthropologists might not be able to
learn everything there is to know about our culture from these geological
markers, but they will be a good start.

In the beginning, human beings tended to view nature as a mortal enemy --
with wariness, dread and aggression. The closer we were to the other
animals, the more threatened we were by their proximity—geographical and
behavioral. `Wilderness': from the Old English -ness + wild + deor, `the
place of wild beasts.' In the Old and New Testaments, `the wilderness' is a
godless, hostile domain, the anti-Eden; Samuel Johnson defined it as “a
tract of solitude and savageness''; William Bradford, a founder of Plymouth
Colony, reacted to the untrammeled New World with horror, calling it
“hideous & desolate ... full of wild beasts & wild men.''

These examples come from Roderick Nash's totemic history, `Wilderness and
the American Mind' (1967). Nash describes how, in the 19th century, the
terms of humanity's relationship with nature flipped. It was no longer
possible to take seriously the premise that nature was a threat to
civilization; civilization, it was understood, was a threat to nature. This
observation, developed by Alexander von Humboldt and successors like George
Perkins Marsh (who worried that `climatic excess' might lead to the
extinction of the human species) and John Muir (who sought to protect
America's natural cathedrals from human defilement), helped inspire the
birth of the American environmental movement. It took decades for a new
conception of wilderness—sacred, virginal, innocent of human influence --
to take hold, and it may take decades more before it is widely understood to
be a myth.  [PGN-truncated]

https://www.nytimes.com/interactive/2018/08/01/magazine/climate-change-losing-earth.html


Now it's Office's turn to have a load of patches pulled

Monty Solomon <monty@roscom.com>
Wed, 21 Nov 2018 08:00:00 -0500

Two patches pulled altogether; another is known to cause crashes but should
be used anyway.

https://arstechnica.com/gadgets/2018/11/now-its-offices-turn-to-have-a-load-of-patches-pulled/


Windows 10 October 2018 Update is back, this time without deleting your data

Monty Solomon <monty@roscom.com>
Wed, 21 Nov 2018 08:03:54 -0500
Microsoft is opening up about some of its testing procedures, too.

https://arstechnica.com/gadgets/2018/11/windows-10-october-2018-update-is-back-this-time-without-deleting-your-data/


E-commerce site is infected not by one, but two card skimmers

Monty Solomon <monty@roscom.com>
Wed, 21 Nov 2018 08:10:14 -0500
Rival crime gangs race against each other to steal consumers' personal data

https://arstechnica.com/information-technology/2018/11/sign-of-the-times-payment-card-skimmers-go-head-to-head-on-e-commerce-site/


Report: Uber self-driving team was preparing for CEO demo before fatal crash (Ars Technica)

Monty Solomon <monty@roscom.com>
Wed, 21 Nov 2018 08:12:02 -0500
Engineers were reportedly encouraged to limit "bad experiences" to one per
trip.

https://arstechnica.com/cars/2018/11/report-uber-self-driving-team-was-preparing-for-ceo-demo-before-fatal-crash/


The Snowden Legacy, part one: What's changed, really?

Monty Solomon <monty@roscom.com>
Wed, 21 Nov 2018 08:09:12 -0500
In our two-part series, Ars looks at what Snowden's disclosures have wrought politically and institutionally.

https://arstechnica.com/tech-policy/2018/11/the-snowden-legacy-part-one-whats-changed-really/


Christmas spirit triumphs over data law (CNN)

Jim Reisert AD1C <jjreisert@alum.mit.edu>
Wed, 21 Nov 2018 07:37:24 -0700
Tara John and Nina Avramova, CNN, November 21, 2018
https://www.cnn.com/2018/11/21/europe/germany-christmas-gdpr-grm-scli-intl/

  (CNN) A German town managed to revive a children's Christmas tradition
  after European data protection laws very nearly scrapped it.  In previous
  years up to 4,000 wishes to Father Christmas were placed on a tree at a
  Christmas market in the southern town of Roth, according to German
  newspaper Die Welt.  The city council would then attempt to fulfill those
  wishes, which included the names and addresses of the children who wrote
  them.

  Previous requests granted included trips to the fire station, books and
  visits to the mayor. The festive event was seen as a major highlight for
  local kids.  But the popular activity had to stop in 2016 because of
  Germany's data privacy legislation, *Die Welt* reports.  Roth found a
  workaround—putting the wishes in a locked box—but that was made
  redundant in May when the European Union's General Data Protection
  Regulation (GDPR) came into force.

  That legislation states that parents of minors have to provide consent to
  the use of their kids' data. Organizations that fail to comply face big
  financial penalties.  Providing proof of this was deemed too onerous by
  the council and the city decided against festive wish lists for 2018.
  [...]

  Local radio station Antenne Bayern found a solution.  It created a wish
  list, which included a parental consent disclaimer, which can be printed
  from their website and put in the wishing box at the Christmas market.


Apple pitches 9M VA medical records on iPhone format (Fortune)

Gabe Goldberg <gabe@gabegold.com>
Wed, 21 Nov 2018 15:42:37 -0500
As part of its push into healthcare, *Apple* has pitched the *Department of
Veterans Affairs* on incorporating the medical records of the 9 million vets
currently in the federal system into the company's portable, iPhone-based
format.  No word on whether the project will come to fruition, or even if
talks are still active.
<https://click.email.fortune.com/%3Fqs%3D924a6ff868f5934d43c6a7d1612ce671c6520ab3d8f7e20f7e3265cec8cda34666fa2c1f17d0465e025d67a0dba9eeee308c2f8f02f9c0f2>


A Clearer Message on Cochlear Implants (NY Times)

Richard Stein <rmstein@ieee.org>
Thu, 22 Nov 2018 14:12:19 +0800
https://www.nytimes.com/2018/11/21/opinion/deaf-cochlear-implants-sign-language.html

"A cochlear implant isn't inherently bad, but it isn't inherently good,
either; it is a neutral piece of technology, a tool, like a hammer.
Expecting an implant to cure deafness or magically generate speech is to
await the moment the hammer will fly out of one's hand and build a house on
its own. The value of the tool lies only in the skill of its user, and for
the cochlear implant user, that skill is learned with much effort. To
suggest otherwise is to give a disingenuous prognosis to potential patients
and their parents, and discounts the hard work successful C.I. users do to
communicate in a way the hearing world deems acceptable."

A worthy lesson to heed for all technology, including toothpicks.


This new weapon alerts police as soon as it's fired (WashPost)

Richard Stein <rmstein@ieee.org>
Thu, 22 Nov 2018 21:03:56 +0800
https://www.washingtonpost.com/technology/this-new-weapon-alerts-police-as-soon-as-its-fired/2018/11/21/1474aa12-e7b2-484e-bfe6-c872a8876419_story.html

"Electronic weapons rarely work all the time," Ron Martinelli, a forensic
criminologist, told CNN in 2015, noting that incapacitation can hinge upon
where and how both electrical probes strike the body.  "Historically, they
tend to be about 60 percent effective."

Risk: Easy to imagine someone hacking the auto-dial phone number to order
pizza and beer.


How The Wall Street Journal is preparing its journalists to detect deepfakes (NiemanLab.Org)

Richard Stein <rmstein@ieee.org>
Fri, 23 Nov 2018 11:25:14 +0800
http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes/

This essay reveals techniques that can be applied to generate and detect
"deepfake" content—disguised mashup video, audio, etc. that misrepresents
speech, sows controversy, and spreads disinformation. A "fine eye" is needed
to scrutinize the content to look for telltale signs of in-authenticity.

Risk: Over-reliance on video frame analysis may lead to incorrect
determination of authenticity. A time-consuming process is required to
correlate multiple sources (speech transcripts, news articles, etc.) to
determine speech attribution and origin. Unchecked disinformation circulates
in the wild until a net is thrown over it.


Huron Daily Tribune reporter Brenda Battel fired over voicemail for Republican candidate (John James, WashPost)

Gabe Goldberg <gabe@gabegold.com>
Fri, 23 Nov 2018 13:51:02 -0500
The Monday afternoon call was innocuous at first.

Brenda Battel, a staff writer for the Huron Daily Tribune in rural Michigan,
was seeking a chance to speak with Republican Senate candidate John James on
Wednesday after the election.

Battel left a voice-mail message with the James campaign, and alerted it to
a potential follow-up email to further discuss his campaign against
Sen. Debbie Stabenow (D).

Then Battel hung up the phone — or so she believed, she later said.

https://www.washingtonpost.com/politics/2018/11/06/reporter-unwittingly-left-voicemail-gop-candidate-she-was-fired-what-she-said/

The risk? User error. More broadly, it's the "hot mic".


You snooze, you lose: Insurers make the old adage literally true (Ars Technica)

Gabe Goldberg <gabe@gabegold.com>
Fri, 23 Nov 2018 14:03:31 -0500
Why insurers spy on sleep apnea sufferers via connected CPAP machines.

https://arstechnica.com/science/2018/11/you-snooze-you-lose-insurers-make-the-old-adage-literally-true/

Let's put everything online; what could go wrong...


GMail's spam filter is getting vicious?

Rob Slade <rmslade@shaw.ca>
Fri, 23 Nov 2018 12:01:03 -0800
I send Gloria an awful lot of email, for somebody who lives in the same room.

In particular, I send her notifications of appointments I've made.  I send
these to her GMail account, since she can get that on her phone, and
therefore it's easier for her to transfer the appointments to her calendar.
(No, not the calendar app.  A real, honest-to-goodness book-with-paper-pages
appointment calendar.  It usually sits on the kitchen table.)

Gloria doesn't use her GMail account much, so she doesn't get much spam.  So
when she noticed an entry in her spam folder, she checked it out.  Lo and
behold, it was a message from me.  Subsequent messages from me, over the
next few days, also went into the spam folder.

I'd sent a message to my baby brother and he hadn't responded.  I know his
email domain is hosted through GMail, so I mentioned it to him in a phone
call.  He checked, and my messages to him were being sent to spam.

So I did one of my sporadic forays into the spam folder in my own GMail
account, and, yes, my messages from me were being sent to spam.  I also
found a few messages from friends in there, but most of the wrongly filtered
messages were from me.

I don't know what I've done to offend GMail ...


FCC Launches New Offensive Against Scam, Robo Calls

Gabe Goldberg <gabe@gabegold.com>
Fri, 23 Nov 2018 15:07:16 -0500
http://www.eweek.com/networking/fcc-launches-new-offensive-against-scam-robo-calls


Who lives with you? Facebook seeks to patent software to figure out profiles of households (Los Angeles Times)

Richard Stein <rmstein@ieee.org>
Sun, 25 Nov 2018 13:52:00 +0800
https://www.latimes.com/business/technology/la-fi-tn-facebook-patent-family-photos-20181116-story.html

"This is what I would call a classic case of secondary use," said Pam Dixon,
founder and executive director of the World Privacy Forum.  "Someone is
signing up to Facebook, or Instagram for that matter, to post photos or
maybe keep in touch with old college friends. I don't think people intend to
have all their relational outlines queried and mapped by Facebook and used
for purposes that people aren't expecting."

The marketplace for family demographics may entice insurance companies
especially when coupled to environmental, health, or geographic information
systems. Politicians might exploit the analysis to assist with election or
jurisdiction (gerrymandering) activities.

Too much to ask for a profile opt-in "secondary use" selection?


This bill includes prison for CEOs who fail to take consumer privacy seriously (Los Angeles Times)

Richard Stein <rmstein@ieee.org>
Sun, 25 Nov 2018 15:54:50 +0800
https://www.latimes.com/business/lazarus/la-fi-lazarus-data-privacy-prison-for-ceos-20181116-story.html

"It's gotten to the point that there are so many data breaches, people can
find it hard to work up a sense of outrage over their privacy being violated
again and again and again.

"The business world is counting on such breach fatigue to keep meaningful
privacy safeguards at bay.

"Consumers shouldn't hand them such a huge victory."

"Under Wyden's bill, any company with revenue topping $1 billion a year, or
that stores data on more than 50 million consumers or consumer devices,
would have to submit an annual 'data protection report' to the FTC detailing
all activities related to keeping people's info under wraps."

The private companies with revenues of over US$ 2B in 2018 can be found here
<https://www.forbes.com/largest-private-companies/list/> There are 230
listed. A guestimate says that there are 2X this number, or ~500 private
companies with revenue over US$ 1B in the US.

The Fortune 1000 (public companies) bottoms out at ~US$ 1.8B (see
https://www.someka.net/excel-template/fortune-1000-excel-list/ for 2018). A
guestimate says there are 2X (~2000) public companies valued at US$ 1B or
more.

So...a maximum of ~2500 potential data breach CEO prosecutions from
negligent infosec practices.

Assuming Wyden's bill passes both houses of Congress, and is signed into
law:

Risk1: Weak FTC regulations and capricious enforcement practice
substantially mitigates deterrence effectiveness. No prosecutions arise from
data breach epidemic.

Conversely: Data breach prosecution becomes more popular than a traffic
ticket. USA Today (see
https://www.usatoday.com/story/money/personalfinance/2014/03/24/20-ways-we-blow-our-money/6826633/)
says that ~40M traffic tickets are issued annually in the US per 2014
statistics.

Risk2: Strict enforcement boosts a prison construction boom and a swift
return to filing cabinets and paper, elevating paper company and office
furniture supplier stocks.

The data breach perpwalk: a new dance step for corporate boardroom members
to master.


Can The Police Remotely Drive Your Stolen Car Into Custody? (Slashdot)

Gabe Goldberg <gabe@gabegold.com>
Sun, 25 Nov 2018 14:45:59 -0500
https://yro.slashdot.org/story/18/11/25/0137258/can-the-police-remotely-drive-your-stolen-car-into-custody

A little like Fairfax County VA "bait cars"—cars left as tempting theft
targets. They're rigged to alert if doors are opened, fitted with tracking
gear and remote lock/slow/kill switches. So cops will locate/pursue when
they're stolen, wait until it's safe, then lock/slow/kill car. And chat with
occupants. Quite different, of course, from seizing control of random
private vehicles that have been stolen.


Free Software Messiah Richard Stallman: We Can Do Better Than Bitcoin (CoinDesk)

Gabe Goldberg <gabe@gabegold.com>
Sun, 25 Nov 2018 19:13:25 -0500
And speaking broadly, Stallman continued:

    "If bitcoin protected privacy, I'd probably have found a way to use
    it by now."

https://www.coindesk.com/free-software-messiah-richard-stallman-we-can-do-better-than-bitcoin


Mobile Application/Social Media Addiction Freedom Experiment (TechCrunch.com and The Economist)

Richard Stein <rmstein@ieee.org>
Mon, 26 Nov 2018 11:06:43 +0800
https://techcrunch.com/2018/11/25/nowhere-to-go/
https://www.economist.com/business/2018/11/24/facebook-should-heed-the-lessons-of-internet-history

Profile-driven advertisements fuel Internet-based social media and mobile
device application business profit. A user profile is assessed to target ad
content delivery. Various attributes: age, gender, geographic location,
income level, site content viewing/visit preference/frequency,
etc. contribute to the ad delivery calculus.

Businesses track daily, weekly, monthly usage of their services and
applications to determine the ad billing cost.

Measuring and characterizing certain user access patterns can indicate
addictive predilection.

Social media and application businesses can proactively attempt to dissuade
service/application overuse and taper addictive behavior. This effort
depends on an inversion of profile attributes: what is assigned as an
"appeal to target" must be switched to "repellent to target" as necessary. A
form of aversion therapy. Literary note: In Anthony Burgess' "A Clockwork
Orange," aversion therapy was used as a behavioral cure.

As an example, for ages 8-12, target the audience with ads about retirement
communities, collecting butterflies, coal mine stockpiles, The Dow Jones, or
the importance of eating your spinach. Compilation of equivalent repellent
ad cohorts can be assembled by profile attribute inversion. Wise and mature
editorial oversight is essential to compile these content libraries.
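
[A toy sketch of the profile-attribute inversion described above: score ad
categories by a profile's affinity and, in the aversion mode, deliberately
serve the lowest-scoring category. All names and numbers are invented for
illustration:

# Toy illustration of the "profile attribute inversion" idea above: instead
# of serving the ad category a profile scores highest on (maximum
# engagement), serve the category it scores lowest on.

def pick_ad_category(affinity_scores, invert):
    """Normal targeting picks the max-affinity category; the inverted
    (aversion) mode picks the minimum-affinity one."""
    chooser = min if invert else max
    return chooser(affinity_scores, key=affinity_scores.get)

profile = {"video games": 0.92, "sneakers": 0.71,
           "retirement communities": 0.04, "spinach recipes": 0.02}

print(pick_ad_category(profile, invert=False))  # usual targeting: video games
print(pick_ad_category(profile, invert=True))   # inverted: spinach recipes
]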

Preventing exposure to violent, horrifying, and other inappropriate or toxic
material is required. Audience/ad mismatch, not "shock, awe and frighten,"
should guide ad population target selection.

Measure the state of addiction before and after the experiment per standard
business performance metrics. Outsource to a trusted and neutral non-profit
to oversee the measurement, compiled statistics, and write the summary
findings.

This experiment may afford one means for mobile applications and social
media business to restore their rapidly tarnishing reputation. The outcome
can provide a forum to discuss public brand addiction and how to best
suppress it.

Risk: Contractual SLA underachievement for target audience ad delivery
fulfillment incurs business finance loss.


China Creating Gene-Edited Babies (MIT Technology Review)

the keyboard of geoff goodfellow <geoff@iconia.com>
Sun, 25 Nov 2018 16:23:50 -1000
Rewriting Life

Chinese scientists are creating CRISPR babies.  A daring effort is underway
to create the first children whose DNA has been tailored using gene-editing.

https://www.TechnologyReview.com/s/612458/exclusive-chinese-scientists-are-creating-crispr-babies/

EXCERPT:

When Chinese researchers first edited the genes of a human embryo in a lab
dish in 2015, it sparked global outcry and pleas from scientists not to make
a baby using the technology, at least for the present.

It was the invention of a powerful gene editing tool, CRISPR, which is cheap
and easy to deploy, that made the birth of humans genetically modified in an
in-vitro fertilization (IVF) center a theoretical possibility
<https://www.technologyreview.com/s/535661/engineering-the-perfect-baby/>.

Now, it appears it may already be happening.

According to Chinese medical documents posted online this month (here
<http://www.chictr.org.cn/showprojen.aspx%3Fproj%3D32758> and here
<http://www.chictr.org.cn/uploads/file/201811/bb9c5996d8fd476eacb4aeecf5fd2a01.pdf>),
a team at the Southern University of Science and Technology, in Shenzhen,
has been recruiting couples in an effort to create the first gene-edited
babies. They planned to eliminate a gene called CCR5 in order to render the
offspring resistant to HIV, smallpox, and cholera.

The clinical trial documents describe a study to employ CRISPR to modify
human embryos, then to transfer them into the uterus of mothers and deliver
healthy children...


British Parliament seizes internal Facebook documents by threatening to jail a different CEO

Rob Slade <rmslade@shaw.ca>
Mon, 26 Nov 2018 10:01:38 -0800
https://boingboing.net/2018/11/25/by-any-means-necessary.html

The British Parliament has been able to obtain internal Facebook documents,
even though Facebook didn't want to give them up, and they were sealed by a
judge in California.

If you don't like Facebook, you can enjoy the schadenfreude in
Facebook's continuing troubles.  If you do like Facebook, you can take this
as a warning that there is more than one way to skin a cat (or obtain
confidential information).

Regardless of how you feel about Facebook, it's a warning about wandering
around with really sensitive information on your laptop ...


The Dangerous Junk Science of Vocal Risk Assessment (The Intercept)

Gabe Goldberg <gabe@gabegold.com>
Mon, 26 Nov 2018 13:58:27 -0500
_Is it possible_ to tell whether someone is a criminal just from looking at
their face or listening to the sound of their voice? The idea may seem
ludicrous, like something out of science fiction—Big Brother in *1984*
detects any unconscious look "that carried with it the suggestion of
abnormality"—and yet, some companies have recently begun to answer this
question in the affirmative. AC Global Risk, a startup founded in 2016,
claims to be able to determine your level of *risk* as an employee or an
asylum-seeker based not on what you say, but how you say it.

https://theintercept.com/2018/11/25/voice-risk-analysis-ac-global/

  ...joining dowsing rods, polygraphs, homeopathy...


Can The Police Remotely Drive Your Stolen Car [or you?] Into Custody? (Slashdot)

Lauren Weinstein <lauren@vortex.com>
Sun, 25 Nov 2018 08:45:40 -0800
https://yro.slashdot.org/story/18/11/25/0137258/can-the-police-remotely-drive-your-stolen-car-into-custody

Autonomous cars will give police states surveillance and control powers that
they've only dreamed of.  I continue to be flabbergasted how proponents of
autonomous vehicles seem unwilling to discuss how these vehicles can be used
as instruments of individualized and/or mass control and suppression by
governments.


LinkedIn used 18 million non-user e-mails to target Facebook ads (The Verge)

Monty Solomon <monty@roscom.com>
Sun, 25 Nov 2018 11:39:25 -0500
https://www.theverge.com/2018/11/25/18111087/linkedin-ireland-data-protection-commission-18-million-non-user-e-mails-targeted-facebook-ads


Study: Smart Speakers Make Passive Listeners (Melanie Lefkowitz)

ACM TechNews <technews-editor@acm.org>
Wed, 28 Nov 2018 11:54:00 -0500
Melanie Lefkowitz, Cornell Chronicle (NY), 27 Nov 2018 via
  ACM TechNews, 28 Nov 2018

Cornell University researchers investigating the wider ramifications of
content discovery with smart speaker products found people who read choices
online digested information nine times faster and explored at least three
times as much as those who heard them listed by a Siri, Alexa, or similar
product. Recommendation algorithms generally prioritize popular content,
with people who read their recommendations less likely to select the most
popular or top-rated options. Said Cornell's Longqi Yang, "With these
devices becoming more popular and more people adopting them, this kind of
interface becomes very important, because it's one of the major channels for
people to be exposed to information." Yang said these devices could be
redesigned to meet this challenge; his team recommended that smart speakers
offer top-ranked choices that are diverse, personalized, and frequently
changed, giving users access to a broader range of information. These
findings were presented at the ACM Conference on Recommender Systems (RecSys
2018) in Vancouver, Canada.

https://orange.hosting.lsoft.com/trk/click%3Fref%3Dznwrbbrs9_6-1d52cx218d83x070148%26
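
[The mitigation the Cornell team suggests, short spoken lists that stay
diverse rather than a straight popularity ranking, maps onto a standard
re-ranking heuristic. Below is a toy sketch; it is my own illustration, not
the researchers' method, and the item names, scores, and weights are invented:

# Greedy re-ranking that trades relevance against similarity to items already
# chosen (a maximal-marginal-relevance-style heuristic), so a short spoken
# list is not just near-duplicates of the most popular item.

def similarity(a, b):
    """Crude similarity: fraction of shared words (illustration only)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def diverse_top_k(candidates, k=2, relevance_weight=0.7):
    """candidates maps item -> relevance score; returns k items balancing
    relevance with novelty relative to items already selected."""
    chosen = []
    remaining = dict(candidates)
    while remaining and len(chosen) < k:
        def marginal(item):
            max_sim = max((similarity(item, c) for c in chosen), default=0.0)
            return relevance_weight * remaining[item] - (1 - relevance_weight) * max_sim
        best = max(remaining, key=marginal)
        chosen.append(best)
        del remaining[best]
    return chosen

catalog = {"pop hits playlist": 0.95, "pop hits radio": 0.93,
           "indie folk playlist": 0.80, "jazz essentials": 0.75}
# Plain top-2 by score would repeat "pop hits"; the diverse pick does not.
print(diverse_top_k(catalog, k=2))  # -> "pop hits playlist", then "jazz essentials"
]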


Re: 670 ballots in a precinct with 276 voters (Douglass, RISKS-30.92)

David Tarabar <dtarabar@acm.org>
Mon, 26 Nov 2018 17:44:52 -0500
There were actually 3,704 registered voters. The 276 number was an error
that was corrected at the Georgia Secretary of State website in August.

But on the web, an error rarely is corrected. This false headline (from
months ago) was circulated during the November election and no doubt will
reappear during future elections. Apparently only the McClatchy news
organization bothered to do a simple fact check of a curious `fact'.


Re: Russia suspected of jamming GPS signal in Finland (BBC)

Henry Baker <hbaker1@pipeline.com>
Thu, 22 Nov 2018 07:15:06 -0800
If GPS jamming was NOT part of *war games*, then NATO would be criminally
negligent, since the first activities in any future war would include
taking out GPS satellites and/or jamming their signals.  Recall that the
Brits took down all of their street signs in the initial days of WWII.

Google "GPS" and "war games".

If the Russians did this jamming (which I highly doubt), they did NATO a
favor by making the games more realistic.


Re: Japan cybersecurity minister admits he has never used a computer (RISKS-30.92)

Attila the Hun <attilathehun1900@tiscali.co.uk>
Thu, 22 Nov 2018 16:51:15 +0000
How many Health Ministers have performed brain surgery?  How many Defence
Ministers have held a Commission in the military?  How many Interior
Ministers have been in the police?  How many Foreign Ministers have been
diplomats?  How many Chancellors of the Exchequer can add up?

Government Ministers are figureheads, it's the Civil Service that has the
knowledge and skills [allegedly].  Search YouTube for `Yes Minister' and
`Yes Prime Minister' for the classic satires on the relationship between
Ministers and Whitehall in Britain.  Made in the early '80s, they're as true
and funny today as they were then.  Loved, apparently, by Margaret Thatcher.


Re: Tesla (RISKS-30.91,93)

Attila the Hun <attilathehun1900@tiscali.co.uk>
Thu, 22 Nov 2018 17:08:52 +0000
I'm afraid that Wols Lists is misinformed about the illegality of remaining
in the outside lane of a British road after completing an overtaking
manoeuvre.

The Highway Code (the driving bible, but largely advisory) states:

Rule 137: On a two-lane dual carriageway you should stay in the left-hand
lane. Use the right-hand lane for overtaking or turning right. After
overtaking, move back to the left-hand lane when it is safe to do so.

Rule 138: On a three-lane dual carriageway, you may use the middle lane or
the right-hand lane to overtake but return to the middle and then the
left-hand lane when it is safe.

Note the `should' in Rule 137.  Rules containing the word `must' are backed
by explicit law, those stating `should' are not.  Albeit, they might still
be deemed to constitute a generic offence under section 3 of the Road
Traffic Act 1988 ("Careless, and inconsiderate, driving.  If a person drives
a mechanically propelled vehicle on a road or other public place without due
care and attention, or without reasonable consideration for other persons
using the road or place, he is guilty of an offence") and certain breaches
can attract an on-the-spot fixed penalty.


Re: Awful AI is a curated list to track current scary usages of AI (RISKS-30.92)

Amos Shapir <amos083@gmail.com>
Fri, 23 Nov 2018 00:29:41 +0200
Another list of AI (mainly machine learning) failures and strange results
is available in this shared spreadsheet file:
https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml


Re: `The Cleaners' Looks At Who Cleans Up The Internet's Toxic Content (npr.org)

Toby Douglass <risks@winterflaw.net>
Fri, 23 Nov 2018 12:30:22 +0200
The issue covered by the article and the submission to RISKS is worthy of
the forum.

I may be wrong, but I would say the *manner* of discussion was not worthy of
the forum.

I am reminded of the bus bombings in London.

A serious event and in some forums, seriously discussed.

In others, such as the tabloid newspapers, a full colour, full-page,
front-page photo (presumably from CCTV) of the actual moment one of the
bombs exploded with people just beginning to be thrown out of their seats,
or in some cases, torn apart, by the blast.

That photo was gruesome.

I am sure there were some or many for whom that photo had no or little
effect, mainly I suspect by desensitization through repeated exposure to
similar material.

I am sure there were at least some for whom it was distressing, as gruesome
material will in the normal case be.

The quote in the submission from the (presumably Facebook) content moderator
was gruesome.


Book review: The End of Trust (EFF)

David Strom via WebInformant <webinformant@list.webinformant.tv>
Mon, 26 Nov 2018 09:02:02 -0600
  [via Gabe Goldberg]

Last week the Electronic Frontier Foundation published an interesting book
called The End of Trust <https://www.eff.org/the-end-of-trust>.  It was
published in conjunction with the writing quarterly McSweeneys, to which I
have long been a subscriber and whose more usual fiction short story
collections I enjoy. This issue is its first totally non-fiction effort and it is
worthy of your time.

There are more than a dozen interviews and essays from major players in the
security, privacy, surveillance and digital rights communities. The book
tackles several basic issues: first the fact that privacy is a team sport,
as Cory Doctorow opines—meaning we have to work together to ensure
it. Second, there are numerous essays about the role of the state in a
society that has accepted surveillance, and the equity issues surrounding
these efforts. Third, what is the outcome and implications of outsourcing of
digital trust. Finally, various authors explore the difference between
privacy and anonymity and what this means for our future.

While you might be drawn to articles from notable security pundits, such as
an interview where Edward Snowden explains the basics behind blockchain and
where Bruce Schneier discusses the gap between what is right and what is
moral, I found myself reading other less infamous authors that had a lot to
say on these topics.

Let's start off by saying there should be no `I' in privacy, and we have to
look beyond ourselves to truly understand its evolution in the digital
age. The best article in the book is an interview with Julia Angwin, who
wrote an interesting book several years ago called Dragnet Nation
<https://www.amazon.com/Dragnet-Nation-Security-Relentless-Surveillance/dp/0805098070>.
She says "the word formerly known as privacy is not about individual harm,
it is about collective harm. Google and Facebook are usually thought of as
monopolies in terms of their advertising revenue, but I tend to think about
them in terms of acquiring training data for their algorithms. That's the
thing that makes them impossible to compete with."  In the same article,
Trevor Paglen says, "we usually think about Facebook and Google as
essentially advertising platforms. That's not the long-term trajectory of
them, and I think about them as extracting-money-out-of-your-life
platforms."

Role of the state

Many authors spoke about the role that law enforcement and other state
actors have in our new always-surveilled society. Author Sara
Wachter-Boettcher <http://www.sarawb.com/d> said "I don't just feel seen
anymore. I feel surveilled."  Thenmozhi Soundararajan
<http://equalitylabs.org> writes that "mass surveillance is an equity issue
and it cuts across the landscape of race, class and gender."  This is
supported by Alvaro Bedoya, the director of a Georgetown Law school think
tank <https://www.law.georgetown.edu/privacy-technology-center/>.  He took
issue with the statement that everyone is being watched, because some are
watched an awful lot more than others. With new technologies, it is becoming
harder to hide in a crowd and thus we have to be more careful about crafting
new laws that allow the state access to this data, because we could lose our
anonymity in those crowds.  "For certain communities (such as LGBTQ),
privacy is what lets its members survive. Privacy is what lets them do what
is right when what's right is illegal. Digital tracking of people's
associations requires the same sort of careful First Amendment analysis that
collecting NAACP membership lists in Alabama in the 1960s did. Privacy can
be a shield for the vulnerable and is what lets those first `dangerous'
conversations happen."

Scattered throughout the book are descriptions of various law enforcement
tools, such as drones, facial recognition systems, license plate readers and
cell-phone simulators. While I knew about most of these technologies, seeing
them collected together in this fashion makes them seem all the more insidious.

Outsourcing our digital trust

Angwin disagrees with the title and assumed premise of the book, saying the
point is more about the outsourcing of trust than its complete end.  That
outsourcing has led to where we prefer to trust data over human
interactions. As one example, consider the website Predictim
<https://www.predictim.com/>, which scans a potential babysitter or dog
walker to determine if they are trustworthy and reliable using various
facial recognition and AI algorithms.  Back in the pre-digital era, we asked
for personal references and interviewed our neighbors and colleagues for
this information. Now we have the Internet to vet an applicant.

When eBay was just getting started, they had to build their own trust proxy
so that buyers would feel comfortable with their purchases. They came up
with early reputation algorithms, which today have evolved into the
Uber/Lyft star-rating for their drivers and passengers. Some of the writers
in this book mention how Blockchain-based systems could become the latest
incarnation for outsourcing trust.

Privacy vs. anonymity

The artist Trevor Paglen
<https://art21.org/artist/trevor-paglen/%3Fgclid%3DEAIaIQobChMIqrz0_KTy3gIVUL7ACh2JEwG_EAAYASAAEgIAufD_BwE>
says, "we are more interested not so much in privacy as a concept but more
about anonymity, especially the political aspects."  In her essay, McGill
ethics professor Gabriella Coleman says, "Anonymity tends to nullify
accountability, and thus responsibility. Transparency and anonymity rarely
follow a binary moral formula, with the former being good and the latter
being bad."

Some authors explore the concept of privacy nihilism, or disconnecting
completely from one's social networks. This was explored by Ethan Zuckerman,
who wrote in his essay: "When we think about a data breach, companies tend
to think about their data as a precious asset like oil, so breaches are
more like oil spills or toxic waste. Even when companies work to protect our
data and use it ethically, trusting a platform gives that institution
control over your speech. The companies we trust most can quickly
become near-monopolies whom we are then forced to trust because they have
eliminated their most effective competitors. Facebook may not deserve our
trust, but to respond with privacy nihilism is to exit the playing field and
cede the game to those who exploit mistrust."  I agree, and while I am not
happy about what Facebook has done, I am also sticking with them for the
time being too.

This notion of the relative morality of our digital tools is also taken up
in a recent NY Times op/ed by NYU philosopher Matthew Liao
<https://www.nytimes.com/2018/11/24/opinion/sunday/facebook-immoral.html>:
Do you have a moral duty to leave Facebook? He says that the social media
company has come close to crossing a `red line', but for now he is staying
with them.

The book has a section for practical IT-related suggestions to improve your
trust and privacy footprint, many of which will be familiar to my readers
(MFA, encryption, and so forth). But another article by Douglas Rushkoff
goes deeper. He talks about the rise of fake news in our social media feeds
and says that it doesn't matter what side of an issue people are on for them
to be infected by the fake news item. This is because the item is designed
to provoke a response and replicate. A good example of this is one
individual recently mentioned in this WaPost piece who has created his own
fake news business out of a parody website here
<https://www.washingtonpost.com/national/nothing-on-this-page-is-real-how-lies-become-truth-in-online-america/2018/11/17/edd44cc8-e85a-11e8-bbdb-72fdbf9d4fed_story.html%3Ffbclid%3DIwAR0dmdRIv6ShQu_1OiibK0pHQ9EGK_K-rx8Sk7lPc7t8u3l1EFTh-ELJxbU%26noredirect%3Don%26utm_term%3D.6e2ad78f7bad>

Rushkoff recommends three strategies for fighting back: attacking bad memes
with good ones, insulating people from dangerous memes via digital filters
and the equivalent of AV software, and better education about the underlying
issues. None of these are simple.

This morning the news was about how LinkedIn harvested 18M emails
<https://techcrunch.com/2018/11/24/linkedin-ireland-data-protection/>
to target ads to recruit people to join its social network. What is chilling
about this is how all of these email addresses were from non-members that it
had collected, of course without their permission.

You can go to the EFF link above where you can download a PDF copy or go to
McSweeneys and buy the hardcover book
<https://store.mcsweeneys.net/products/mcsweeney-s-issue-54-the-end-of-trust%3Ftaxon_id%3D1>.
Definitely worth reading.

Please report problems with the web pages to the maintainer
