The RISKS Digest
Volume 31 Issue 26

Saturday, 25th May 2019

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

The Bomb Detector That Was a Dud
Now I Know
Tesla fires could dampen electric car sales as industry ramps up production, auto analysts say
CNBC
Whom to Sue When a Robot Loses Your Fortune
Bloomberg
Bluetooth's Complexity Has Become a Security Risk
WiReD
Equifax demise
CNBC
Warning over using augmented reality in precision tasks
bbc.com
"Bestmixer seized by police for washing $200 million in tainted cryptocurrency clean"
ZDNet
Boeing 737 Max Simulators Are in High Demand. They Are Flawed.
NYTimes
First phones, now drones ...
Lite
A Chip in My Hand Unlocks My House. Why Does That Scare People?
NYTimes
Amnesty International sues NSO Group
Naked Security
Facebook to create new cryptocurrency
BBC
RBC customer out of pocket after fraud: What you need to know if you E-transfer money
CBC News
RealTalk speech synthesis
Medium
OECD AI Principles
Janosch Delcker
DWU heptathlon athlete ineligible for nationals due to email error
Keloland
Re: Martin Ward's post in RISKS-31.25
Radoslaw Moszczynski
Amos Shapir
Dimitri Maziuk
Re: "Too proud of my house number"
Gene Wirchenko
Info on RISKS (comp.risks)

The Bomb Detector That Was a Dud (Now I Know)

Gabe Goldberg <gabe@gabegold.com>
Tue, 21 May 2019 12:12:43 -0400
The ADE 651 itself did nothing; ultimately, it was no more than a stick
with a fancy handle, and no matter how quickly one shuffled his or her feet,
no meaningful amounts of electricity would flow through the device. ATSC had
not only bilked Iraq out of millions of dollars, but it had also put
thousands of Iraqis and others at risk.

http://nowiknow.com/the-bomb-detector-that-was-a-dud/


Tesla fires could dampen electric car sales as industry ramps up production, auto analysts say (CNBC)

Gabe Goldberg <gabe@gabegold.com>
Sun, 19 May 2019 13:57:05 -0400
<https://www.cnbc.com/2019/05/19/analysts-worry-recent-tesla-fires-risk-dampening-sales-for-all-evs.html?__source=iosappshare%7Ccom.apple.UIKit.activity.Mail>


Whom to Sue When a Robot Loses Your Fortune (Bloomberg)

Gabe Goldberg <gabe@gabegold.com>
Fri, 17 May 2019 17:59:15 -0400
"People tend to assume that algorithms are faster and better decision-makers
than human traders," said Mark Lemley, a law professor at Stanford
University who directs the university's Law, Science and Technology
program. "That may often be true, but when it's not, or when they quickly go
astray, investors want someone to blame."

https://www.bloomberg.com/news/articles/2019-05-06/who-to-sue-when-a-robot-loses-your-fortune

Do ya think?!

Article continues:

Developed by Austria-based AI company 42.cx, the supercomputer named
K1 would comb through online sources like real-time news and social media to
gauge investor sentiment and make predictions on U.S. stock futures. It
would then send instructions to a broker to execute trades, adjusting its
strategy over time based on what it had learned.

The idea of a fully automated money manager inspired Li instantly. He met
Costa for dinner three days later, saying in an e-mail beforehand that the
AI fund "is exactly my kind of thing."

Over the following months, Costa shared simulations with Li showing K1
making double-digit returns, although the two now dispute the thoroughness
of the back-testing. Li eventually let K1 manage $2.5 billion—$250
million of his own cash and the rest leverage from Citigroup Inc. The plan
was to double that over time.

But Li's affection for K1 waned almost as soon as the computer started
trading in late 2017. By February 2018, it was regularly losing money,
including over $20 million in a single day—14 Feb 2018—due to a
stop-loss order Li's lawyers argue wouldn't have been triggered if K1 were as
sophisticated as Costa led him to believe.

Li is now suing Tyndaris for about $23 million for allegedly exaggerating
what the supercomputer could do. Lawyers for Tyndaris, which is suing Li for
$3 million in unpaid fees, deny that Costa overplayed K1's
capabilities. They say he was never guaranteed the AI strategy would make
money.
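The stop-loss mechanism at the heart of the dispute is simple to state: the order liquidates a position as soon as the price touches a trigger level, even if the dip is momentary. A minimal sketch of that naive behavior, with hypothetical numbers (not the fund's actual parameters):

```python
def stop_loss_triggered(prices, entry_price, stop_pct):
    """Return the index at which a naive stop-loss fires, or None.

    Fires as soon as any observed price drops stop_pct below the entry
    price, even if the price fully recovers immediately afterwards.
    """
    trigger = entry_price * (1 - stop_pct)
    for i, price in enumerate(prices):
        if price <= trigger:
            return i
    return None

# A brief intraday dip that fully recovers still liquidates the position:
intraday = [100.0, 99.0, 94.5, 101.0, 103.0]
print(stop_loss_triggered(intraday, entry_price=100.0, stop_pct=0.05))  # 2
```

Whether a more `sophisticated' system should ride out such a dip is, of course, exactly what the lawsuit is arguing about.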


Bluetooth's Complexity Has Become a Security Risk (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Mon, 20 May 2019 19:06:21 -0400
Bluetooth is the invisible glue that binds devices together. Which means
that when it has bugs, it affects everything from iPhones and Android
devices to scooters and even physical authentication keys used to secure
other accounts. The order of magnitude can be stunning: The BlueBorne flaw,
first disclosed in September 2017, impacted 5 billion PCs, phones, and IoT
units.

As with any computing standard, there's always the possibility of
vulnerabilities in the actual code of the Bluetooth protocol itself, or in
its lighter-weight sibling Bluetooth Low Energy. But security researchers
say that the big reason Bluetooth bugs come up has more to do with the sheer
scale of the written standard—development of which is facilitated by the
consortium known as the Bluetooth Special Interest Group. Bluetooth offers
so many options for deployment that developers don't necessarily have full
mastery of the available choices, which can result in faulty
implementations.

"One major reason Bluetooth is involved in so many cases is just how complex
this protocol is," says Ben Seri, one of the researchers who discovered
BlueBorne and vice president of research at the embedded device security
firm Armis. "When you look at the Bluetooth standard it's like 3,000 pages
long—if you compare that to other wireless protocols like Wi-Fi
https://www.wired.com/story/wpa3-wi-fi-security-passwords-easy-connect/ for
example, Bluetooth is like 10 times longer. The Bluetooth SIG tried to do
something very comprehensive that fits to many various needs, but the
complexity means it's really hard to know how you should use it if you're a
manufacturer."

https://www.wired.com/story/bluetooth-complex-security-risk/


Equifax demise (CNBC)

"Peter G. Neumann" <neumann@csl.sri.com>
Wed, 22 May 2019 17:25:55 PDT
Equifax just became the first company to have its outlook downgraded for a
cyber-attack
https://www.cnbc.com/2019/05/22/moodys-downgrades-equifax-outlook-to-negative-cites-cybersecurity.html


Warning over using augmented reality in precision tasks (bbc.com)

Richard Stein <rmstein@ieee.org>
Tue, 21 May 2019 10:44:26 +0800
"People who use augmented reality headsets to complete complex tasks fare
worse than those with no high-tech help, a small study suggests."  Microsoft
HoloLens (and similar products) can interfere with the eyes' focal control,
making such headsets less suitable for surgical applications or precision
part inspection.

https://www.bbc.com/news/technology-48334457


"Bestmixer seized by police for washing $200 million in tainted cryptocurrency clean" (ZDNet)

Gene Wirchenko <gene@shaw.ca>
Thu, 23 May 2019 10:45:31 -0700
Charlie Osborne for Zero Day | 23 May 2019
Bestmixer.io was known for 'washing' cryptocurrency to make the funds
untraceable.  Bestmixer.io has been seized and shut down by European police
for reportedly laundering over $200 million in cryptocurrency.

https://www.zdnet.com/article/bestmixer-seized-by-eu-police-over-laundering-of-200-million-in-cryptocurrency/


Boeing 737 Max Simulators Are in High Demand. They Are Flawed. (NYTimes)

Monty Solomon <monty@roscom.com>
Fri, 17 May 2019 21:40:27 -0400
https://www.nytimes.com/2019/05/17/business/boeing-737-max-simulators.html

The flight simulators are unable to accurately replicate the difficult
conditions created by a malfunctioning system on the jet, which played a
role in two fatal crashes.


First phones, now drones ... (Lite)

Rob Slade <rmslade@shaw.ca>
Thu, 23 May 2019 11:58:32 -0700
OK, if you haven't been hiding under a rock for the past year, you know that
Huawei is under suspicion of capturing data and feeding it back to the
Chinese government.  Banned from telecom infrastructure, phones not being
updated, that sort of thing.

Now Chinese manufacturer DJI, which makes about 80% of all drones used in
the U.S. and Canada, is suspected (by the U.S. DHS) of doing something
similar, (collecting location and flight data) ...
https://lite.cnn.io/en/article/h_20b05b43af8add3c52675f50eb3572d1

DJI, the Chinese company at the centre of the furor, is promising that its
new drones will come with plane and helicopter "detection" features, to
avoid collisions.  https://www.bbc.com/news/technology-48380500

To *avoid* collisions?  Or for targeting? ...

"First they came for the 5G phones, and I said nothing because I'm not an
early adopter.  Then they came for the drones, and I said nothing because, I
mean, they're just toys, right?  Then they came for the wifi equipped
dildoes, and ... wait ..."


A Chip in My Hand Unlocks My House. Why Does That Scare People? (NYTimes)

Gabe Goldberg <gabe@gabegold.com>
Wed, 22 May 2019 14:51:13 -0400
Implant technology can change the world, unless politicians give in to the
hysteria against it.

Over the past few decades, microchip implant technology has moved from
science fiction to reality; today hundreds of thousands of people around the
world have chips or electronic transmitters inside them. Most are for
medical reasons, like cochlear implants to help the deaf hear. More
recently, body-modification enthusiasts and technophiles have been
installing microchips in their bodies that do everything from start a car to
send a text message to make a payment in bitcoin.

The market for nonmedical implant technology is virtually unregulated,
despite the fact that thousands of people around the world got chipped in
the past 12 months. That may be about to change: Over the past few years,
calls to heavily regulate or even ban voluntary implants have grown
increasingly loud. There's a place for regulating implants, like any
technology, but also a need to separate the fear from the reality.

https://www.nytimes.com/2019/05/21/opinion/chip-technology-implant.html


Amnesty International sues NSO Group (Naked Security)

Rob Slade <rmslade@shaw.ca>
Wed, 22 May 2019 11:42:17 -0700
Oh, remember the Whatsapp problem?  NSO Group installing spyware on people?

Well, Amnesty International has taken exception to being targeted, and are
suing NSO Group.
https://nakedsecurity.sophos.com/2019/05/21/amnesty-sues-maker-of-pegasus-the-spyware-let-in-by-whatsapp-zero-day/

While we're at it, why doesn't Pegasus sue NSO Group?  I've been using
Pegasus for decades, and it's great.  No, not NSO's spyware: the Pegasus
Mail program.  http://www.pmail.com/ I'm sure that NSO abusing the Pegasus
name is hurting David Harris's image ...


Facebook to create new cryptocurrency (BBC)

Mark Thorson <eee@dialup4less.com>
Fri, 24 May 2019 14:29:27 -0700
Because we trust Facebook so, so much.

https://www.bbc.com/news/business-48383460


RBC customer out of pocket after fraud: What you need to know if you E-transfer money (CBC News)

Gabe Goldberg <gabe@gabegold.com>
Sun, 19 May 2019 13:31:40 -0400
The bank blamed the theft on Fearnley's email security.

Hoover's security question to her friend was: "Who is my favourite Beatle?"

The fraudster would have had a one in four chance of getting it right: John,
Paul, George or Ringo. In a test of RBC's Interac system, Go Public
was given four chances to answer the security question correctly.

https://www.cbc.ca/news/business/rbc-customer-out-of-pocket-after-e-transfer-fraud-1.5128114
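The arithmetic is worth spelling out: with four equally likely answers and four allowed attempts, an attacker who simply tries each option in turn succeeds every time. A small sketch (a hypothetical simulation, not RBC's actual system):

```python
OPTIONS = ["John", "Paul", "George", "Ringo"]

def attacker_succeeds(secret, attempts_allowed):
    """Guess each option in order; return True if the secret is hit in time."""
    for attempt, guess in enumerate(OPTIONS, start=1):
        if attempt > attempts_allowed:
            return False
        if guess == secret:
            return True
    return False

# With 4 attempts over 4 options, every possible secret is found:
print(all(attacker_succeeds(s, 4) for s in OPTIONS))  # True
# With only 1 attempt, the success rate is 1 in 4:
hits = sum(attacker_succeeds(s, 1) for s in OPTIONS)
print(hits / len(OPTIONS))  # 0.25
```

In general, k distinct guesses over n equally likely answers succeed with probability k/n, so allowing as many attempts as there are plausible answers reduces the "security" question to none at all.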


RealTalk speech synthesis (Medium)

<steven@klein.us>
Sun, 19 May 2019 14:39:24 -0400
https://medium.com/@dessa_/real-talk-speech-synthesis-5dd0897eef7f

RealTalk: This Speech Synthesis Model Our Engineers Built Recreates a Human
Voice Perfectly

Excerpts:

Today we're excited to announce that our Machine Learning Engineers Hashiam
Kadhim, Joe Palermo and Rayhane Mama have produced the most realistic AI
simulation of a voice we've heard to date.

It's the voice of someone you've probably heard of before—Joe Rogan. (For
those who haven't: Joe Rogan is the creator and host of one of the world's
most popular podcasts, which to date has nearly 1300 episodes and counting.)

Here are some examples of what might happen if the technology got into the
wrong hands:

 * Spam callers impersonating your mother or spouse to obtain personal information
 * Impersonating someone for the purposes of bullying or harassment
 * Gaining entrance to high security clearance areas by impersonating a
   government official
 * An `audio deepfake' of a politician being used to manipulate election
   results or cause a social uprising


OECD AI Principles (Janosch Delcker)

"Peter G. Neumann" <neumann@csl.sri.com>
Mon, 20 May 2019 13:20:41 PDT
U.S. to endorse new OECD principles on artificial intelligence, 19 May 2019

PARIS—Donald Trump's administration has finally found an international
agreement it can support.  At an annual meeting on Wednesday, the 36
countries in the Organization for Economic Cooperation and Development
(OECD) plus a handful of other nations are set to adopt a list of guidelines
for the development and use of artificial intelligence.  The agreement, seen
by POLITICO, marks the first time that the United States—home to some of
the world's largest and most powerful tech companies—has endorsed
international guidelines for the emerging technologies.

China, the second global front-runner in the field, is not a member of the
OECD.  Over four pages, the agreement lays out a series of broad principles
designed to ensure that as AI develops, the technology will benefit humanity
rather than harming it, and urges governments to draft policies for such
`responsible stewardship of trustworthy AI'.  [is there any such today?
PGN]

However, the document omits the matter of whether or not binding rules would
be necessary to regulate the technology—a question that divides
policymakers and researchers around the world.  “At this stage, it's
completely premature to know whether and what to regulate when it comes to
AI,” Anne Carblanc, the head of the OECD's digital economy policy division,
told POLITICO during an interview at the group's headquarters in the French
capital.  Carblanc, a former judge, said that AI affects too many sectors to
be covered by one-for-all rules, and that much of the
technology—including questions of accountability and liability—is
already covered by
existing national regulation as well as by international human rights law.
Rather than being a blueprint for hard global rules, the idea behind the
OECD's principles is to “provide a clear orientation to what are the
fundamental values that need to be respected.”

By embracing such principles, countries express their `political commitment'
to implementing them, she added, a process that will be monitored and
reviewed by her group.

The OECD also hopes that the principles will have an impact beyond its own
membership.

At this year's G20 summit in Osaka, Japan, the OECD wants to encourage G20
member countries—which include non-OECD nations such as China—to
express support for its principles, in one form or another, according to
officials.

Are you a machine?

The guidelines, due to be released on Wednesday, were drafted by a group of
50 experts from industry, governments, trade unions, and civil society, as
well as tech companies.

The final document starts by pledging that AI should be designed to respect
the rule of law, human rights and democratic values.

It adds that AI systems should be safe and transparent, that people should
know whether or not they're dealing with a machine, and that those
developing or deploying AI should be held accountable for their actions.

The OECD also urges governments to boost public and private investment in
AI, set up open datasets for developers and support efforts to share data.

Governments should also review legal frameworks to make it easier to turn
research into market-ready applications, for example by creating deregulated
environments to test technology, the OECD says.

Research into AI goes back to the 1950s. But only in recent years have a
boost in computing power, the emergence of cloud computing and unprecedented
masses of data turned it from blue-sky research into technology that powers
day-to-day applications.

The technologies offer opportunities, from better treatment of cancer
patients to saving energy to tackling climate change, but they also come
with significant risks. Most of today's cutting-edge AI systems, for
example, are prone to mirroring biases from the analog world and to
discriminating against minorities.

AI also poses unprecedented challenges to privacy, as shown by media reports
suggesting that China is using state-of-the-art AI to build an omnipresent
surveillance system targeting vulnerable groups.

Against this backdrop, the European Union released detailed guidelines for
what it calls `trustworthy' artificial intelligence in March—technology
that respects European values and is engineered in a way that prevents it
from causing intentional or unintentional harm.

The EU's push into writing the guiding principles was watched closely by the
administration of U.S. President Donald Trump, who himself called for
regulating AI in an executive order in February.

Alarmed by the fact that the EU's set of sweeping new privacy rules
implemented last year could soon become a global standard for data
protection, U.S. officials reportedly intensified cooperation with the OECD
on the international AI guidelines.

“The U.S. was interested in pursuing this,” said the OECD's Carblanc, who
oversaw the development of the principles on the working level.  “At the
OECD, they're very present on everything digital, so I believe they thought
it was the right place to do something.”

In line with the group's traditional `soft power' approach to exert
influence through peer pressure, the idea for the principles is to influence
practice by serving as a framework for both national governments drafting
legislation and corporations writing up their own guidelines for the
development of AI.

There are several past examples that could serve as a precedent, officials say.

In April, for example, the London Metals Exchange announced that by the end
of 2022, it would allow companies to trade at its marketplace only those
goods that are compliant with the OECD's guidelines on responsible supply
chains for minerals.


DWU heptathlon athlete ineligible for nationals due to email error (Keloland)

Gabe Goldberg <gabe@gabegold.com>
Sun, 19 May 2019 20:40:44 -0400
https://www.keloland.com/sports/dwu-heptathlon-athlete-ineligible-for-nationals-due-to-email-error/2012594362


Re: Martin Ward's post in RISKS-31.25

Radoslaw Moszczynski <radek@bolelut.pl>
Wed, 22 May 2019 00:45:06 +0200
  [Note: After considerable thought on Martin Ward's item in RISKS-31.25, I
  decided to run his message—despite feeling that it bordered on serious
  disinformation, fully expecting that I would be dinged for running it, and
  that there would be blowback.  I thank Radoslaw for rising to the
  occasion.  Ironically, NPR on the evening of 24 May ran a long piece from
  a European reporter, summarizing the extent to which this weekend's
  multinational elections for delegates to the EU assembly were subjected to
  massive misinformation.  PGN]

That Reddit post claims that “Communism did work extremely well.”  I'm
pretty sure that's the precise reason why hundreds of thousands of people
(members of my family included) did everything they could, often risking
their lives, to escape the clutches of that paradise and settle in the West
instead.
<https://en.wikipedia.org/wiki/Eastern_Bloc_emigration_and_defection>

Family anecdotes aside, I randomly picked a claim from that post (“[USSR]
had zero unemployment”) and followed the source. It turned out to be an
unattributed review of a book. It doesn't give any employment figures, and
specifically it never mentions zero.

Then I tried the source for `Eliminated poverty'. That's an anonymous blog
post, no specific mention of `eliminating poverty' as far as I can see.

I didn't check the other sources. Even if all those claims were true and
supported by concrete data, you would have to look beyond the raw figures.
What's the use of zero unemployment if a lot of the jobs are useless and
maintained for the sole purpose of keeping the unemployment figures
low?  (See point 2 here:
https://culture.pl/en/article/10-mind-boggling-oddities-of-communist-poland)
And surely you cannot call that Reddit post a `comparison' between U.S.
capitalism and Soviet communism.  It's a biased list of capitalism's
deficiencies juxtaposed with a list of communism's virtues (for a moment
let's disregard the problem of which of those are actually true and
supported by reasonable sources). It's like comparing being free to being in
prison by saying that you have to buy your own food and clothes when you're
free, whereas in prison food and clothes are taken care of for you. Clearly,
being in prison wins.


Re: Martin Ward's post in RISKS-31.25

Amos Shapir <amos083@gmail.com>
Sat, 18 May 2019 11:46:51 +0300
While I agree with most of Martin Ward's points about capitalism, his views
about the USSR are ridiculous and seem to rely mostly on official Soviet
publications.

The USSR's economy really advanced a lot during the first half of the 20th
century, but that's mainly because it had started very low (and there's no
telling if capitalism couldn't do the same, or better).  As for zero
unemployment and homelessness—these are not achievements, it was the law!
The unemployed and homeless were simply rounded up and sent off to
Siberia...

This is shown clearly in the details of Russia's economy immediately after
the collapse of the USSR: such a large economy cannot deteriorate so much
overnight; it's just that its true situation was suddenly revealed, showing
that it was not so great during the rule of the USSR.


Re: Martin Ward's post in RISKS-31.25

Dimitri Maziuk <dmaziuk@bmrb.wisc.edu>
Fri, 17 May 2019 16:16:22 -0500
There was a novel (and a movie) called *The Russia House*; its main plot
point was the revelation that much of the USSR's dreaded military might
existed only on paper in carefully `augmented' reports.

The statistics about USSR economic growth, homelessness, unemployment,
etc., were made exactly that way too. I've seen enough of it up close to be
quite certain of that.

However, you'd have to learn Russian to read about, e.g., article 209 of
the USSR criminal code, the `7/10' decree it replaced, and all that:
English-language sources seem to be either skimpy or fairly specialized.
But trust me, citing those ramblings does your argument more harm than good.


Re: Too proud of my house number (RISKS-31.23)

Gene Wirchenko <gene@shaw.ca>
Wed, 22 May 2019 22:42:58 -0700
So some Googlite stopped considering the `-6' in `1-6' and caused chaos.
The risk is assuming that things are done the same everywhere.  This is a
dangerous assumption.

This is an interesting read:
https://www.mjt.me.uk/posts/falsehoods-programmers-believe-about-addresses/
Falsehoods programmers believe about addresses

If you look, you can find similar lists for other areas.  Keywords:
  falsehoods programmers believe
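The `1-6' failure is easy to reproduce: any normalizer built on the falsehood that "the house number is the first integer in the address" silently discards the range. A small illustrative sketch (hypothetical address and code, not Google's actual logic):

```python
import re

def naive_house_number(address):
    """Falsehood: the house number is the first integer in the address."""
    m = re.match(r"\s*(\d+)", address)
    return m.group(1) if m else None

def range_aware_house_number(address):
    """Keeps a hyphenated range such as '1-6' intact."""
    m = re.match(r"\s*(\d+(?:-\d+)?)", address)
    return m.group(1) if m else None

addr = "1-6 Example Street"
print(naive_house_number(addr))        # '1'  -- the '-6' is silently lost
print(range_aware_house_number(addr))  # '1-6'
```

Even the "range-aware" version only handles one local convention; the linked article lists dozens of others that would break it.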
