The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 30 Issue 94

Monday 3 December 2018


Ping of Death comes to aircraft avionics
John Clear
Tesla driver asleep at the wheel on automatic
Overtrust as a safety issue: The dangers of Autonomous Vehicles
Don Norman
Israeli Software Helped Saudis Spy on Khashoggi, Lawsuit Says
Sec. Def. Mattis: Putin tried to "muck around" with U.S. midterms
The Hill
How Trump, ISIS, and Russia have mastered the Internet as a weapon
How creative foreign hackers crack into a vulnerable U.S.
John P. Carlin
After a Hiatus, China Accelerates Cyberspying Efforts to Obtain U.S. Technology
Justice Department charges Iranians with hacking attacks on U.S. cities, companies
Deputy AG Rod Rosenstein Is Still Calling for an Encryption Backdoor
DriveSavers claims it can break into any locked iPhone
The Verge
Risks of Airport Wi-Fi
How I changed the law with a GitHub pull request
When the Internet Archive Forgets
Payless prank: Social media influencers thought they were buying Palessi
"Human intelligence is needed." Want to Purge Fake News? Try Crowdsourcing
U.S. Asks, Are You a Terrorist? Scottish Grandfather Gives Wrong Answer
AI thinks like a corporation—and that's worrying
The Economist
Chinese genomics scientist defends his gene-editing research in first public appearance
Be careful how you make DMCA complaints
The Register
How long fumbling with cellphone before monkeys close in?
Dan Jacobson
Chinese businesswoman accused of jaywalking after AI camera spots her face on an advert
The Telegraph
EU data rules have not stopped spam emails, Nesta survey finds
The Telegraph
Re: The Cleaners' Looks At Who Cleans Up The Internet's
Richard Stein
Re: Constructive software engineering?
Toby Douglass
Re: EMV card fraud statistics
Phil Smith III
Re: GMail's spam filter is getting vicious?
Rex Sanders
Inside the futuristic restaurant where a robot has replaced the bartender
A QA engineer walks into a bar...
Gabe Goldberg
Info on RISKS (comp.risks)

Ping of Death comes to aircraft avionics

John Clear <>
Wed, 28 Nov 2018 10:03:29 -0800
FAA Bulletin Addresses Aspen Display Resets

  According to the Special Airworthiness Information Bulletin (SAIB),
  affected systems will repeatedly reset themselves at five- to ten-minute
  intervals, resulting in the temporary loss of all flight display
  information for up to one minute during each reset.

  "The cause of this safety issue is currently under investigation; however,
  preliminary information suggests that the cause of the continuous reset is
  related to the ADS-B In interface" said the FAA.

ADS-B is a data link protocol for weather, traffic and other flight related
information.  It seems that certain Aspen Primary Flight Displays (PFD) and
Multifunction Displays (MFD) have issues with ADS-B data, and are resetting
in flight.

PFDs display attitude, altitude, speed, and other flight information.  Loss
of a PFD can lead to loss of control of an aircraft.  MFDs display charts,
weather, engine, and other information.  Loss of an MFD in cruise is a minor
issue, but during an instrument approach it can cause a loss of situational
awareness.

Tesla driver asleep at the wheel on automatic

"Peter G. Neumann" <>
Mon, 3 Dec 2018 13:49:11 PST
Very interesting story.  The Los Altos Planning Commission chair was asleep
at the wheel of his Tesla in auto mode, going 70 on US 101.  The CHP got him
off the road by putting enough police cars in front of and next to him and
gradually slowing down, forcing the car to stop.  He didn't wake up until
they all had stopped.  No accident, no one hurt, but it is not clear why the
autopilot didn't shut down.

  [PGN-ed From a note from Ray Perrault.]

Overtrust as a safety issue: The dangers of Autonomous Vehicles

Don Norman <>
Mon, 3 Dec 2018 18:47:25 -0800

There are two issues with this event, neither of them particularly new.

*Overtrust*. People often worry a lot about *undertrust*: how do we convince
people to trust a new system?  They seldom worry about overtrust.  Well, the
recent incidents (e.g., Uber and Tesla) indicate that overtrust is a real
danger.  See the URL above.

*System design.* Tesla (and all OEMs) claim to be able to detect when the
driver is not paying attention. Obviously, Tesla failed.

*Safety driver*. The notion of a safety driver is fundamentally flawed, as
the Uber situation demonstrates. The Human-Systems Integration folks (who
include me) have been demonstrating for many decades now that people cannot
take over rapidly enough when there has been nothing to do for many hours,
and when the system has performed quite well for weeks, months, or years.
(My paper on this topic was about 4 decades ago, and I was not the first.)

The one lesson we have learned from the recent events is that people do not
learn. Each new field of application ignores all the findings of the
previous fields.  In my opinion, the levels-of-automation argument is
fundamentally flawed. Take the 0-5 levels described by SAE (0 = fully
manual; 5 = perfect, full-time automation, so no controls are required).
(See page 10 of the PDF.)

At best we are today at level 2 for commercial vehicles. (We are at level 5
for special cases, such as transporting materials on factory floors.)

Here are my opinions. We should permit levels 0, 1, and 2; prohibit levels 3
and 4; and allow level 5.  And we should place restrictions on
advertisements of vehicle capability.

This makes great scientific sense, but it fails politically, and in today's
competitive environment it fails the marketing test.

Autonomous vehicles are rapidly advancing in capability. Their most
dangerous issue will be overtrust once we hit levels 3 and 4 (we already see
overtrust at level 2).  The next major problem facing us is the complexity
of the transition when some vehicles that are truly at level 5 intermix with
vehicles at level 1 or 2—to say nothing of level-0 vehicles.  (Drivers at
levels 0 and 1 are apt to game the system: assuming that level-5 systems are
programmed not to hit them, they can ignore them. Among the many RISKS this
presupposes is the difficulty of knowing what level of automation a car is
using.)

(Caveat: I do research for numerous automobile companies on several
continents; however, none of them has been asked to review this email.)

Don Norman, Prof. and Director, DesignLab, UC San Diego

Israeli Software Helped Saudis Spy on Khashoggi, Lawsuit Says (NYT)

Monty Solomon <>
Sun, 2 Dec 2018 22:44:59 -0500

A Saudi dissident based in Canada claims the Saudi government planted
spyware in his phone to eavesdrop on his talks with Jamal Khashoggi.

Sec. Def. Mattis: Putin tried to "muck around" with U.S. midterms (The Hill)

"Peter G. Neumann" <>
Sun, 2 Dec 2018 11:12:59 PST
The Hill quotes Secretary of Defense Mattis as saying that the Russians
tried to "muck around" with the U.S. midterm elections.

Mattis: Russia tried to interfere in 2018 midterms
John Bowden, 1 Dec 2018

Defense Secretary James Mattis said Saturday that Russian operatives
attempted to interfere in the 2018 midterm elections, apparently confirming
for the first time that Moscow attempted to meddle in last month's
elections.

Mattis spoke of the relationship between the Trump administration and
Russian President Vladimir Putin during an interview Saturday at the Ronald
Reagan Presidential Library in California.

"There is no doubt the relationship has worsened. He tried again to muck
around in our elections this last month," Mattis said. "We are seeing a
continued effort around those lines."

How Trump, ISIS, and Russia have mastered the Internet as a weapon

Monty Solomon <>
Sun, 2 Dec 2018 01:44:59 -0500
Peter Singer and Emerson Brooking explore how harmless apps become an
arsenal of war.

How creative foreign hackers crack into a vulnerable U.S. (John P. Carlin)

Monty Solomon <>
Sun, 2 Dec 2018 01:44:21 -0500
John P. Carlin details the many destructive incursions into U.S. networks.

After a Hiatus, China Accelerates Cyberspying Efforts to Obtain U.S. Technology

Monty Solomon <>
Thu, 29 Nov 2018 09:38:33 -0500

China's practice of breaking into American computers has become a core
grievance of the Trump administration as leaders of the two nations prepare
to meet.

Justice Department charges Iranians with hacking attacks on U.S. cities, companies (WashPost)

Monty Solomon <>
Thu, 29 Nov 2018 02:23:22 -0500
According to a newly unsealed indictment, the targets included the cities of
Atlanta and Newark and the port of San Diego.

Deputy AG Rod Rosenstein Is Still Calling for an Encryption Backdoor (WiReD)

Gabe Goldberg <>
Sun, 2 Dec 2018 22:56:27 -0500
Tension has existed for decades between law enforcement and privacy
advocates over data encryption. The United States government has
consistently lobbied for the creation of so-called backdoors in encryption
schemes that would give law enforcement a way in to otherwise unreadable
data. Meanwhile, cryptographers have universally decried the notion as
unworkable. But at a cybercrime symposium at the Georgetown University Law
School on Thursday, deputy attorney general Rod Rosenstein renewed the call.

"Some technology experts castigate colleagues who engage with law
enforcement to address encryption and similar challenges," Rosenstein
said. "Just because people are quick to criticize you does not mean that you
are doing the wrong thing. Take it from me."

  [The UK and Australians are still barking up this tree, although one of
  them has a caveat that suggests they don't want to weaken the protection.
  Considering that no systems are adequately secure in the first place, the
  Keys Under Doormats report still gets to the heart of the matter.  There
  is really no such thing as a sufficiently secure backdoor that can be used
  *only* by the supposed "good guys". PGN]

DriveSavers claims it can break into any locked iPhone (The Verge)

Gabe Goldberg <>
Wed, 28 Nov 2018 17:14:44 -0500

Risks of Airport Wi-Fi (LATimes)

"Peter G. Neumann" <>
Mon, 3 Dec 2018 15:11:20 PST
  [From Geoff Goodfellow]

*Airport Wi-Fi can be a security nightmare. Here's what you can do to stop
cyber criminals*

You may find an evil twin out there—not your own but one that still can
do great harm. That nasty double often awaits you at your airport, ready to
attack when you least expect it.

That's just one of the findings in a report that assesses the vulnerability
of airport Wi-Fi, done not to bust the airports' chops, but to make airports
and travelers aware of the problems they could encounter.

Of the 45 airports reviewed, the report by Coronet said, two we might use
could pose a special risk: San Diego and Orange County's John Wayne, which
rated No. 1 and No. 2, respectively, on the “Top 10 Most Vulnerable
Airports'' list.

Airports, said Dror Liwer, chief security officer for Coronet, a
cyber-security firm, are a fertile field because there's a concentration of
“high-value assets,'' which include business travelers who may unwittingly
open themselves up to an attack.

That's where the evil twin comes in. Let's say you're sitting in an airport
lounge or maybe right outside the lounge. You see a Wi-Fi network that says,
“FreeAirportWiFi.'' Great, you think. Most airports do have free
Wi-Fi. They may make you watch a couple of commercials (or you may pay a bit
to skip those), but otherwise, the connectivity is there for you.  “I
always say that in the balance between convenience and security, convenience
always wins,'' Liwer said.

And you lose. Because if you take the bait and log in, that evil twin
posing as the airport Wi-Fi then has access to your closely held secrets.

In some cases, Liwer said, the person creating this trap may be sitting next
to you, which means the signal is strong and attractive. It takes only some
inexpensive equipment and know-how for a thief to succeed, and presto,
you're in the cyber-security soup.

“Most attackers are trying to get your credentials, and if they have those,
they have the keys to the kingdom. If I know your password, I own your
[...]''

It is as sinister as it sounds, Liwer said.  For thieves, it's a business,
he said. “What they are looking for is something that will make them
[...]''
What makes it worse: You're getting on a plane and won't be checking your
bank balance any time soon.

The sites that will do you harm are hard to detect with the naked,
inexperienced eye. How do you protect yourself?  Here are ways to keep your
data safe, with help from Liwer; Vyas Sekar, an assistant professor of
electrical and computer engineering at Carnegie Mellon's College of
Engineering; Jake Lehmann, managing director of Friedman CyZen, a
cyber-security consulting service; and Michael Tanenbaum, executive vice
president North America cyber practice for Chubb Ltd. [...]

How I changed the law with a GitHub pull request (ArsTechnica)

Richard Stein <>
Wed, 28 Nov 2018 17:24:43 +0800

I wonder if Washington DC's git repository is subject to regular audit
against an authenticated reference to ensure content integrity to show that
the revision history aligns with legislative approval/voting processes? Is
there an off-site hardcopy backup in case github suffers a permanent outage?
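
The kind of audit being asked about can be sketched with a toy hash chain in
the spirit of git's commit graph (a minimal illustration with invented
function names, not Washington DC's actual process): because each commit
hash covers its parent's hash, comparing the head hash against an
independently stored, authenticated copy detects any rewritten history.

```python
import hashlib

def commit_hash(parent_hash: str, content: str) -> str:
    """Hash a revision together with its parent's hash (echoing git's chaining)."""
    return hashlib.sha256((parent_hash + content).encode()).hexdigest()

def build_history(contents):
    """Build the linked chain of hashes for a list of successive revisions."""
    hashes, parent = [], ""
    for content in contents:
        parent = commit_hash(parent, content)
        hashes.append(parent)
    return hashes

def verify_history(contents, trusted_head: str) -> bool:
    """Recompute the whole chain and compare its head against an
    independently stored, authenticated reference hash."""
    return build_history(contents)[-1] == trusted_head
```

An off-site auditor holding only the trusted head hash can thus detect
revision-log deletion or silently inserted provisions without keeping a full
copy of the repository.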

The Federal Register embodies the official publication of Federal Laws,
Presidential Documents, Administrative Regulations, and Notices. When a bill
passes the legislative processes in both houses, and the President signs it,
the law becomes enforceable *after* Federal Register publication.

Technology certainly advances convenience for accessibility: no more treks
to the library or City Hall to look up zoning ordinances, birth
certificates, real estate transactions, etc.

Surreptitious and untraceable modification to regulations or legal guidance
elevates the risk of civil disruption. Strict revision-control oversight is
essential to create and preserve non-repudiable content integrity.

Risks: Digital storage reliability issues; extra-legislative
system-of-record changes (revision-log deletion, untraceable provisions
inserted or exceptions appended, etc.) that revise laws and regulations to
suit special interests.
A soft Constitution is easier to revise than a hard one!

When the Internet Archive Forgets

Gabe Goldberg <>
Mon, 3 Dec 2018 12:46:49 -0500
On the Internet, there are certain institutions we have come to rely on
daily to keep truth from becoming nebulous or elastic. Not necessarily in
the way that something stupid like Verrit aspired to, but at least in
confirming that you aren't losing your mind, that an old post or article you
remember reading did, in fact, actually exist. It can be as fleeting as
using Google Cache to grab a quickly deleted tweet, but it can also be as
involved as doing a deep dive of a now-dead site's archive via the Wayback
Machine. But what happens when an archive becomes less reliable, and
arguably has legitimate reasons to bow to pressure and remove controversial
archived material?

Payless prank: Social media influencers thought they were buying Palessi (The Washington Post)

Gabe Goldberg <>
Sun, 2 Dec 2018 00:23:57 -0500
But the prank also points to a reality about the human mind: Consumers are
not capable of discerning the quality and value of the things they buy, said
Philip Graves, a consumer behavior consultant from Britain.  Slap a
fancy-sounding European label on $30 shoes, and you have an illusion of
status that people will pay an exorbitant amount of money for. ...

After attendees purchased overpriced shoes – some for $200, $400 and $600 –
they were taken toward the backroom, where the prank was revealed.  "You've
got to be kidding me," said the woman who had gushed about the pair of
floral stiletto heels, her eyes wide as she stared down at the overpriced
shoes in her hands.

...but, of course—could never happen online—people are too cautious
and well-informed.  Wait, what?

"Human intelligence is needed." Want to Purge Fake News? Try Crowdsourcing (NYTimes)

geoff goodfellow <>
Fri, 30 Nov 2018 14:53:36 -1000
*Removing misinformation is too big a job for any single company. Facebook
and others should enlist users to help.*


A recent New York Times investigation described how Facebook has bungled its
response to the misinformation that has proliferated on its platform. Chief
Executive Mark Zuckerberg acknowledged in an interview that the problems his
company is grappling with “are not issues that any one company can
address.''  He's right: The problem of fake news has become too big for any
social network to address on its own. Instead, the company should call on
its users for help through crowdsourcing.

Misinformation is rife on Facebook and other social networks: Russia
attempted to interfere in the U.S. midterm elections, the Saudis employ
hundreds of trolls to attack critics, fake activists in Bangladesh have been
promoting nonexistent U.S. women's marches to sell merchandise, there was a
huge disinformation campaign during last month's general election in Brazil,
and fake news has triggered episodes of violence in countries including
India, Myanmar, and Germany.

Facebook has created a War Room, where staffers try to identify
misinformation, but they're clearly outnumbered and unable to keep up with
the fake news on the platform. Part of the problem is that the team is
relying on artificial intelligence, but, as experts recently explained in
*The Times*, keywords often can't effectively identify misinformation. Human
intelligence is needed. To combat fake news, Facebook needs to ask the
public for help identifying false reporting.

The best way to handle a project too large for any one organization is to
ask lots of volunteers to help. That's how the Oxford English Dictionary was
created: The editors asked members of the public to search the books they
owned for definitions of particular words and mail in their findings.
Thousands participated. As James Surowiecki argued in *The Wisdom of Crowds:
Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes
Business, Economies, Societies and Nations*, large groups tend to accurately
answer questions, even if most of the individuals in the group aren't very
rational or well-informed.

In this case, Facebook should add buttons that appear prominently below any
purported news stories posted on its site, asking members of the public to
weigh in on whether an article is true or false. Of course, some people
would report news as fake simply because they disagree with it, while others
might be genuinely duped by false reports. But Facebook has reportedly
already assigned its users internal reputation scores that would help the
company discount false or gullible reporters. And the number of flags on a
truly false story would be expected to rise above the typical number of
complaints that merely polarizing posts engender. Facebook staff would then
monitor and investigate in real time any posts that are being
disproportionately flagged... [...]
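
The weighting scheme the article sketches (reputation-scored users, flag
counts compared against a baseline for merely polarizing posts) might look
roughly like this; the scores, threshold, and function names are
illustrative assumptions, not Facebook's actual system:

```python
def flag_score(flaggers, reputations):
    """Weight each user's flag by that user's reputation score (0..1);
    unknown users get a neutral 0.5."""
    return sum(reputations.get(user, 0.5) for user in flaggers)

def needs_review(flaggers, reputations, baseline, factor=3.0):
    """Escalate a post to human review only when its weighted flag score
    rises disproportionately above the typical complaint level that
    merely polarizing posts attract."""
    return flag_score(flaggers, reputations) > factor * baseline
```

Weighting by reputation is what lets the crowd's answer survive bad-faith or
gullible flaggers, in the spirit of Surowiecki's argument.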

  [If you want the huge collection of URLs that I have removed, please go to
  the original.  They completely cluttered up our RISKS ASCII READER.  PGN]

U.S. Asks, Are You a Terrorist? Scottish Grandfather Gives Wrong Answer (NYTimes)

"Bob Frankston" <>
1 Dec 2018 14:07:44 -0500

Putting aside the question of why we ask people if they are terrorists:
when will we design systems that account for human foibles?  It's far too
easy to click the wrong box, and even worse on touch systems with parallax.
How much worse will these get with AI systems that can't explain why they
reach their conclusions?

  [Mark Thorson noticed a similar item.]

AI thinks like a corporation—and that's worrying (The Economist)

Richard Stein <>
Fri, 30 Nov 2018 23:11:09 +0800

'David Runciman, a political scientist at the University of Cambridge, has
argued that to understand AI, we must first understand how it operates
within the capitalist system in which it is embedded.  "Corporations are
another form of artificial thinking-machine in that they are designed to be
capable of taking decisions for themselves," he explains.

'"Many of the fears that people now have about the coming age of intelligent
robots are the same ones they have had about corporations for hundreds of
years," says Mr Runciman. The worry is, these are systems we "never really
learned how to control."

'After the 2010 BP oil spill, for example, which killed 11 people and
devastated the Gulf of Mexico, no one went to jail. The threat that Mr
Runciman cautions against is that AI techniques, like playbooks for escaping
corporate liability, will be used with impunity.

'Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and
Cathy O'Neil reveal how various algorithmic systems calcify oppression,
erode human dignity and undermine basic democratic mechanisms like
accountability when engineered irresponsibly. Harm need not be deliberate;
biased data-sets used to train predictive models also wreak havoc. It may
be, given the costly labour required to identify and address these harms,
that something akin to "ethics as a service" will emerge as a new cottage
industry. Ms O'Neil, for example, now runs her own service that audits
algorithms.'

Risk: Ethics as a service (EAAS) platforms evolve into profit-seeking
services via corporate acquisition.

EAAS, given sufficient public trust and independent reputation, might serve
to police corporate entities that illegally capture profit by intentionally
exploiting biased data-sets. EAAS can become an autonomous public
arbitration service if proven bias-free.

Data-set bias is a long-standing issue that challenges AI deployment for
profit or specific purpose. Prior technology deployments that hinged on bias
were clumsy, led by carbon, and relatively easy to detect given the volume
of affected subjects: (a) home loans (BofA redlining in Detroit) and
(b) Wells Fargo's phony account creation illustrate two examples.

Note that credit redlining from neural networks was being discussed as early
as 1994.

How best to excise data-set bias? How can one quickly test and detect AI
platform bias before go-live? Can EAAS reliably detect and characterize
data-set bias, or an algorithm's bias, via access to a commercial website or
service, using fictitious but random and bias-free customer profiles and
input data?
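
One crude form of the black-box test described here is a demographic-parity
audit: feed the service the same random profiles, differing only in a
protected attribute, and compare outcomes. Everything below (function names,
the attribute key, the toy scorer used in testing) is an illustrative
sketch, not any vendor's actual interface:

```python
import random

def audit_parity(score_fn, make_profile, attr_values, n=1000, seed=0):
    """Probe a black-box scorer with identical random profiles, varying only
    the protected attribute, and return the approval rate per group."""
    rng = random.Random(seed)
    profiles = [make_profile(rng) for _ in range(n)]
    rates = {}
    for value in attr_values:
        # Same profiles for every group, so any rate gap is the scorer's bias.
        approved = sum(score_fn({**p, "group": value}) for p in profiles)
        rates[value] = approved / n
    return rates
```

Because every group is scored on the very same synthetic profiles, a gap in
approval rates can only come from the scorer's treatment of the protected
attribute, which is the property an EAAS baseline would need.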

To become a trusted arbiter, an EAAS must be demonstrated to be optimally
unbiased to serve as a bias detection reference standard. How does one
create an optimally unbiased baseline standard? A bias-free algorithm
oracle, the equivalent of a standard kilogram, volt, or second is needed for
reference comparison.

It appears that to end data bias, and demonstrate bias-free AI capabilities,
true random data generation capability is required. This requirement has
been a long-standing challenge for cryptography and other fields.

See "Spooky Action," by Ronald Hanson and Krister Shalm, *Scientific
American*, December 2018, on mechanisms to generate seed-free, true random
numbers using quantum-entangled tests of Bell's Inequality.

Chinese genomics scientist defends his gene-editing research in first public appearance (Re: RISKS-30.93)

Monty Solomon <>
Thu, 29 Nov 2018 02:22:44 -0500
He Jiankui says he is "proud" that his work on genetically altering babies
could help save lives.

Be careful how you make DMCA complaints (The Register)

Mark Thorson <>
Mon, 26 Nov 2018 16:32:28 -0800
There's a right way and a wrong way.
This is an example of the latter.

How long fumbling with cellphone before monkeys close in?

Dan Jacobson <>
Tue, 27 Nov 2018 13:16:51 +0800
THERE I (RISKS reader) WAS, fumbling with my cellphone, as the monkeys
got closer and closer.

I was poking around a trail in the westernmost part of Heping District,
Taichung, Taiwan, when I encountered a group of 30 macaques in the bamboos.

I thought it might be cool to record their grunts, but for some reason I
couldn't find the Sound Recorder app in the Launcher of my cellphone.

As they had come down from the bamboos and were inching closer and closer,
now at about 10 meters from me, I waved my orange folding saw at them while
making some firm sounds, thinking it would buy me some more time to find the
app.

But they only retreated about a meter. OK, I finally found the app and
recorded three minutes before having had enough (they were now in a
semicircle around me: me in the meadow, they in the bushes, at seven meters,
still inching closer...).

Never letting them know that we humans (merely twice their size) were
actually scared of them (30 / 2 = 15 humans), I closed my cellphone and
retreated with dignity. Phew.

Chinese businesswoman accused of jaywalking after AI camera spots her face on an advert

Chris Drewe <>
Tue, 27 Nov 2018 21:32:40 +0000
*The Telegraph*, 25 Nov 2018

Chinese police have admitted to wrongly shaming a famous businesswoman after
a facial recognition system designed to catch jaywalkers mistook an advert
on the side of a bus for her actual face.

  ["Big Brother is always watching you..."]

EU data rules have not stopped spam emails, Nesta survey finds (The Telegraph)

Chris Drewe <>
Tue, 27 Nov 2018 21:32:40 +0000

Hannah Boland, *The Telegraph*, 24 Nov 2018

More than half of Brits think European data regulations have not given them
more control over how many junk emails they receive, with one in five saying
they are getting more spam since the General Data Protection Regulation
(GDPR) was brought in.  GDPR was rolled out earlier this year as a set of
standards for how companies can gather and use people's data.

Many had hoped the rules, which came into effect across the EU on May 25,
would bring an end to junk emails, as consumers would have to opt in to
receiving marketing emails from companies, whereas previously many
businesses had only given people the option to opt-out.

  [I'd always understood that most junk e-mail comes from fake addresses in
  other parts of the world, i.e., difficult to trace and outside EU (or US or
  wherever) jurisdiction, so regulations wouldn't really help.]

Re: 'The Cleaners' Looks At Who Cleans Up The Internet's Toxic Content (Douglass, RISKS-30.93)

Richard Stein <>
Mon, 3 Dec 2018 09:08:47 +0800
Toby—By posting that objectionable quote, I intended to elevate attention
to employee hardship, and promote occupational sympathy.

The ghastly imagery mentioned by the content reviewer graphically resonates.
'The Cleaners' immersive work environment is fraught with severe
psychological hazards.

Social media services are free to the consumer, but a severe emotional price
is exacted on the employees who attempt to scrub it free of divisive and
horrifying content. Employees experience significant trauma from repeat and
continuous exposure to depraved and inhumane, nihilistic images. Their
effort helps sustain a service and brand that might otherwise drown from
digital content pollution without deliberate intervention.

Employment laws and occupational health and safety rules in the EU and North
America prohibit exposure to toxic content in the workplace. I do not know
if Philippine employment law stipulates mandatory psychological service
assignment in this workplace scenario. Are these employees subsidized to
engage in group therapy to help combat and diminish the emotional toll they
experience? That 'Internet Cleaning' roles are sourced to a location where
strict workplace employment rules are either poorly enforced or overly
tolerant is not surprising.

Corporations are well known for their regulatory arbitrage practices, and
have become especially adept at their exploitation to dispose of toxic
substances: lead, plastic, toxic waste, and now, the objectionable digital
content which threatens a brand's very existence.

Re: Constructive software engineering?

Toby Douglass <>
Mon, 3 Dec 2018 11:45:38 +0200
I may be completely wrong, but I think this is an information problem, and
it is the problem identified by Hayek, namely, the more information is
processed, and the further it moves from its origin, the more misleading the
information is, and the more the interests of the person who will act upon
that information deviate from the interests of those who experience the
consequences of their action.

This problem is inherent and it would appear unavoidable within hierarchical
management structures, such as companies.

A company can be imagined as an information pyramid.

At the base are the ordinary workers, who generate information.

As we climb the pyramid, we ascend ever less populous layers of management,
with ever more executive power.

Inherently, each layer being more populated than that above generates more
information than the layer above can handle.  Information is necessarily
then aggregated on the way up - so we have a team of software developers,
who report to their team lead, who reports to his lead, and so on.

Aggregation qualitatively changes the meaning of information.

Additionally, bad news never travels more than one layer up the pyramid, to
a significant extent because of human factors.  How do you tell your boss's
boss that he's incompetent and making not just wrong, but profoundly wrong
decisions?  You do not.

In fact, of course, said boss is an intelligent and sensible man who, given
the qualitatively distorted information he receives, and given the entirely
different set of incentives placed upon him, makes decisions that are for
him, in his position, absolutely rational and correct.

He is competent, but he is rendered effectively incompetent by the structure
he is placed within.

We then must also factor in the law of unintended consequences, which makes
a mockery anyway of all high-level decisions imposed upon complex structures
or organizations.

The hierarchy is invested with executive power, and so there is nothing or
almost nothing those lower down the pyramid can do about this.

In all things, there are factors which encourage, and there are factors
which discourage, and in the end, you get what you get.

In my experience, only very small companies are efficient and effective in
their decision making.  In larger companies, this is a significant factor
discouraging success.  Such companies, however, have other factors that
encourage success, and so they often do well for long periods.

What's needed really is a different form of company.

I suspect they may already exist; it's just that they are not common
knowledge.  You can only have a form of governance that is understood by
those who are governed by it.

Re: EMV card fraud statistics (Goldberg, RISKS-30.91)

Phil Smith III <>
Thu, 29 Nov 2018 17:56:57 -0500
David Alexander wrote, in part:
>I would just like to point out that, just because a card is EMV enabled, it
>does not mean it cannot be attacked by other means such as compromising the
>POS device.

David's statement is true, but is worthy of expansion. The POS device may or
may not be the terminal, which is the little box where you swipe the
card. The POS may be the actual cash register. In cases like the Target
hack, the POS was what was compromised, not the terminal.

In any case, EMV says nothing about encryption: the card information is NOT
encrypted between the terminal and the POS, nor between the POS and the
processor, unless something else does so.

All EMV protects against is cloned magstripe cards made using stolen
magstripe data (since the CVV on the magstripe does not match the CVV
printed on the card, you can't even clone a magstripe card using a picture
of a card).

Furthermore, fraud has, as expected, shifted from card-present to
card-not-present since EMV was introduced in the U.S., as it has in every
other market.

Was EMV introduction a failure? No, it did what the issuers wanted it to do:
- calmed down consumers
- let them shift liability to the merchants

Did it reduce fraud? Not so much.
Was it expected to? Not so much.

A better way to reduce fraud is to encrypt the data in the terminal, so a
compromised POS is unable to exfiltrate useful data. There are products that
provide this. The POS is relatively immune from compromise, since it's a
relatively dumb device and usually needs physical access for update. Of
course that happens too, but it's typically on a smaller scale (skimmers,
for example).
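
The encrypt-in-the-terminal idea can be illustrated with a toy sketch (this
is not a real P2PE product, which would use hardware key management and
schemes such as DUKPT-derived AES keys, not this HMAC keystream): if the
terminal encrypts under a key shared only with the processor, a compromised
POS relaying the traffic sees nothing useful.

```python
import hashlib, hmac, os

def terminal_encrypt(pan: str, key: bytes):
    """The terminal encrypts the card number under a key it shares only with
    the processor; the POS merely relays (nonce, ciphertext).
    Toy keystream: one HMAC-SHA256 block, enough for a max-19-digit PAN."""
    nonce = os.urandom(16)
    keystream = hmac.new(key, nonce, hashlib.sha256).digest()
    ciphertext = bytes(b ^ k for b, k in zip(pan.encode(), keystream))
    return nonce, ciphertext

def processor_decrypt(nonce: bytes, ciphertext: bytes, key: bytes) -> str:
    """Only the processor, holding the shared key, can recover the PAN."""
    keystream = hmac.new(key, nonce, hashlib.sha256).digest()
    return bytes(b ^ k for b, k in zip(ciphertext, keystream)).decode()
```

The design point is where the trust boundary sits: the key never exists in
the POS, so POS-level malware of the Target variety has nothing to
exfiltrate but ciphertext.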

Re: GMail's spam filter is getting vicious?

Rex Sanders <>
Mon, 3 Dec 2018 10:13:48 -0800
Not only is GMail's spam filter vicious, it doesn't learn from mistakes.

GMail has tagged Risks Digest as spam dozens of times over the last few
years. Just as many times, I've told GMail it's not spam.

Of course, the Digest with Rob Slade's complaint was tagged as spam.

I'm glad Google gave their spam filters a sense of irony. Wish they'd work
on the other problems now.

  [Rob Slade comments: Maybe this is caused by using the name "Rob"; the
  spam filter might think it has something to do with robbery...  RS]
    [In which case this issue will be spam-filtered as well.  PGN]

  [Toby Douglass added: I have had this problem for a few years.  Filtering
  is variable, over periods on the order of months.  Sometimes for a while
  emails will get through.  Other times, silence - all going to spam, or, I
  speculate, sometimes not being delivered at all.  Linus Torvalds once
  complained about a 30% false positive rate for Gmail on the Linux kernel
  mailing list.  TD]

Inside the futuristic restaurant where a robot has replaced the bartender (WashPost)

Richard Stein <>
Fri, 30 Nov 2018 12:58:47 +0800

Risk: Commiserating with a robot bartender after a tough day at work is bad
for mental health.

A QA engineer walks into a bar...

Gabe Goldberg <>
Sun, 2 Dec 2018 14:23:04 -0500
  [A friend forwarded this:]

A QA engineer walks into a bar.
Orders a beer.
Orders 0 beers.
Orders 99999999999 beers.
Orders a lizard.
Orders -1 beers.
Orders a ueicbksjdhd.

First real customer walks in and asks where the bathroom is.
The bar bursts into flames, killing everyone.
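
For readers who want the joke as an actual regression test, here is a
hypothetical sketch; the `order_beers` bar is invented for illustration, but
the order sequence is the joke's:

```python
def order_beers(n):
    """A bar that survives the QA engineer: validate type and range."""
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("beer count must be an integer")
    if n < 0:
        raise ValueError("cannot order negative beers")
    if n > 100:
        raise ValueError("nobody needs that many beers")
    return f"{n} beer(s) coming up"

def run_qa():
    """Replay the joke's orders; every bad input must be rejected cleanly."""
    results = []
    for order in [1, 0, 99999999999, "a lizard", -1, "ueicbksjdhd"]:
        try:
            results.append(order_beers(order))
        except (TypeError, ValueError) as e:
            results.append(f"rejected: {e}")
    return results
```

(The punchline stands, of course: none of this exercises the path the first
real customer actually takes.)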
