The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 29 Issue 89

Monday 31 October 2016


My view on the Mirai DDoS botnet attack exploiting Dyn and others
UK Lottery ticket scanner missing winning tickets
Patrick Cain
Amtrak agrees to $265M settlement in Philadelphia crash that killed eight
The New York Times
Even Anonymous Labels can have adverse meanings
The Atlantic via Bob Gezelter
Man `sells' car to his fake company, to avoid speeding ticket
Chris Drewe
Isn't it time to "Start Over" ... or "Give Up ALL Hope"?
Werner U
Relief at Last for U.S. Owners of Diesel Volkswagens
Australian Red Cross leak
SMH via Dave Horsfall
Broadband Providers Will Need Permission to Collect Private Data
Yahoo invents spying billboard
Google Brain AI develops cryptographic algorithms
Ars Technica
Managing Driving's Many Distractions
Re: Self-driving cars shouldn't have to choose who to protect in a crash
Gary Hinson
SQL injection and Buffer Overflow Risk Assessments
Peter Bright
OCC Notifies Congress of Incident Involving Unauthorized Removal of Information
Jim Reisert
New FCC rules on privacy
The Washington Post
Re: Pittsburgh's new artificially intelligent stoplights could mean no more pointless idling
Dave Horsfall
Re: Undetectable election hacking?
Mark E. Smith
John Colville
Al Mac
Re: Internet becoming unreadable, lighter thinner fonts
Wendy M. Grossman
Re: Unneeded services
Dimitri Maziuk
Re: The Trolley Problem and altruism
Michael Marking
Samsung Holdouts Won't Give Up Their Fire-Prone Galaxy Notes
More Wretched News for Newspapers as Advertising Woes Drive Anxiety
Info on RISKS (comp.risks)

My view on the Mirai DDoS botnet attack exploiting Dyn and others

"Peter G. Neumann" <>
Mon, 31 Oct 2016 11:12:24 PDT
This is my personal summary of the recent DDoS attacks

Mirai is one of two major malware packages for triggering massive DDoS
attacks.  It provides largely automated facilities.

The whole attack using Mirai required very little effort to trigger, and to
re-initiate after it timed out (or otherwise shut down).  Mirai code is now
widely available.

"Anna Senpai" was the posting handle.  "Senpai" translates roughly from
Japanese as "senior" or "superior" or in a loose sense "mentor".

Mirai was apparently the source of only about a fifth of the overall attack
traffic against Dyn.

So almost anyone could have done this.  The choice of the handle "Anna
Senpai" suggests to me that this might have been done as a lesson to the
community, to remind us how easy it is to disrupt Internet traffic.

   Back on 11 Feb 2014, in an article titled " 'Biggest Ever'?  Massive DDoS
   attack hits EU, US", CloudFlare warned there are "ugly things to come".
   That article mentions the exploitation of the Network Time Protocol (NTP)
   and spoofed IP addresses.

The bigger-picture lessons that have not been learned since the Internet
Worm in early November 1988 and each new "biggest disruption ever" item
since then continue to haunt us 28 years later.  On our present course,
presumably new attacks will continue to haunt us for many years to come,
especially as so many of us seem to overhype and over-endow the presumed
wonderfulness of the Internet of Things, Cloud Storage, Automation, and
other "advances"—without understanding and anticipating the risks.

UK Lottery ticket scanner missing winning tickets

Patrick Cain <>
Wed, 26 Oct 2016 20:57:03 +0100
Canadian company Camelot is licensed to run the UK's National Lottery and
also UK ticketing for the Europe wide Euromillions Lottery.

A few months back they added a feature to their iPhone/Android app that
scans tickets (via QR code on ticket) after the draw and lets the user know
if they have a winning ticket.  Unfortunately it has been reporting some
winning tickets as losing tickets.  This won't go down well with the many
thousands of users of the app who will have disposed of potentially winning
tickets.

Amtrak agrees to $265M settlement in Philadelphia crash that killed eight

Monty Solomon <>
Thu, 27 Oct 2016 21:19:52 -0400

The settlement, outlined in a federal judge's order on Thursday, is one of
the largest involving a rail crash, a plaintiff's lawyer said.

Even Anonymous Labels can have adverse meanings

"Bob Gezelter" <>
Thu, 27 Oct 2016 04:11:24 -0700
Since the mid-1980s the common narrative about AIDS/HIV has been that the
virus came to North America via a Canadian flight attendant and spread by
his sexual contacts.

Recent research, both historical and phylogenetic, has essentially refuted
this narrative. The phylogenetic research indicates that HIV entered North
America more than a decade earlier, and this fact is of interest to
epidemiologists, virologists, and others.

Computing types should take more note of the historical research, which
implies that the identification of the flight attendant as "Patient 0" of
HIV in North America is the result of misreading an unfortunate choice of
anonymous identifier.

Patient "0" (zero) was, in actuality, Patient "O" (oh), as in "Patient
Outside of California" in the study of a cluster of AIDS patients; the other
patients being local to Los Angeles.  As information from that study spread,
the letter "oh" got misread as "zero".

The moral of the story is that one should be careful when choosing anonymous
identifiers which can be misconstrued.

The full tale recounted in The Atlantic article (and the supporting
hyperlinked papers) makes worthwhile reading:

Bob Gezelter,

   [I have lived through the punch-card era, when it was difficult to
   differentiate between 0 and O, and fonts (and even typewriters) that
   failed to distinguish 1 from l, or l from I, or I from 1.  You may also
   have had problems with URLs that appear genuine, but might have (for
   example) a Cyrillic o where an ASCII o would normally appear, resulting
   in your browser taking you perhaps to a server in Russia.  PGN]
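   [The Cyrillic-o trap PGN describes can be caught mechanically.  A minimal
   sketch in Python (the function name and the `.example` hostnames are
   illustrative, not from any real library): flag any non-ASCII character in
   a hostname and report its Unicode name, which exposes a Cyrillic "o"
   masquerading as a Latin one.

```python
import unicodedata

def flag_homoglyphs(hostname: str):
    """Report characters in a hostname that are not plain ASCII,
    e.g. a Cyrillic 'o' (U+043E) masquerading as a Latin 'o'."""
    suspects = []
    for ch in hostname:
        if ord(ch) > 127:
            # unicodedata.name() reveals the character's true identity.
            suspects.append((ch, unicodedata.name(ch, "UNKNOWN")))
    return suspects

# A hostname with two Cyrillic 'o' characters in place of Latin ones:
spoofed = "g\u043e\u043egle.example"
print(flag_homoglyphs(spoofed))        # both flagged as CYRILLIC SMALL LETTER O
print(flag_homoglyphs("google.example"))  # genuine ASCII name: no flags
```

   Real browsers and registrars use richer confusable-character tables, but
   the principle is the same: anything that merely *looks* like ASCII is
   suspect.]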

Man `sells' car to his fake company, to avoid speeding ticket

Chris Drewe <>
Sun, 30 Oct 2016 22:08:05 +0000
In the category of 'not funny but I laughed anyway', there was a story in
yesterday's *Telegraph* about a guy who attempted to get out of a speeding
ticket by inventing an imaginary used-car dealership, complete with web
site, and then created loads of bogus documentation to 'prove' that he sold
the car shortly before the ticket was issued:

Pensioner who spent a year building up body of false evidence to get off 60
pounds speeding fine is jailed.  Gordon Lewis, 74, was snapped by a speed
camera that would have incurred a fine of 60 pounds and three points on his
licence. ...  His actions were branded "stupid" by a judge who sentenced him
to eight months in prison.  The joke was that he ended up in jail, which was
a lot worse for him than just paying the fine.

Isn't it time to "Start Over" ... or "Give Up ALL Hope"?

Werner U <>
Fri, 28 Oct 2016 05:42:05 +0200
There is "something" *fundamentally* wrong with hardware and software of our
'electronic toys' ...  and it seems pathetic and pointless to just continue
playing catch-up in a no-win situation...  It would make more sense to start
over with a clean new hardware+software design focusing on data integrity
and privacy in all core design decisions instead of just 'an afterthought'.

Catalin Cimpanu, Softpedia, 27 Oct 2016
Malware Abuses Windows Atom Tables for Novel Code Injection Technique
Microsoft can't patch against AtomBombing technique

*Security researchers from enSilo have discovered a new way to inject
malicious code into legitimate processes, which helps malware bypass
security solutions.*

The technique, named AtomBombing, revolves around atom tables, a feature of
the Windows operating system. ... Basically, these are shared tables where
apps store information on strings, objects, and other types of data, which
they need to access on a regular basis.  Because they're shared tables, all
sorts of apps can access, or alter, data inside those tables. ... [It] helps
malware perform Man-in-the-Browser (MitB) attacks, an attack vector often
used by banking trojans ... can also take screenshots of the user's screen,
access encrypted passwords or take any other action a whitelisted
application can perform.  AtomBombing affects all Windows versions.  The bad
news is that this is a design flaw and not a vulnerability, which means that
Microsoft can't patch it without changing how the entire OS works, an
unfeasible solution.  AtomBombing joins the list of various code injection
techniques discovered in the past, such as SQL injection, XSS,
*hotpatching*, *code hooking*, and more.  Earlier in the month, Trend Micro
researchers uncovered a PoS malware variant named FastPOS that *abuses the
Windows Mailslots mechanism* to store data before exfiltration from infected
machines.


Relief at Last for U.S. Owners of Diesel Volkswagens

Monty Solomon <>
Sun, 30 Oct 2016 13:09:25 -0400
A judge has approved a deal for Volkswagen to buy back or fix vehicles
affected in the emissions-cheating scandal.

Australian Red Cross leak

Dave Horsfall <>
Fri, 28 Oct 2016 17:19:04 +1100 (EST)
The records of over half a million blood donors had been public for around
seven weeks, when the information was accidentally placed upon a public
server by a contractor.  This information included blood type and sexual
history; the Red Cross is busy trying to contact the donors.

More information at:

Broadband Providers Will Need Permission to Collect Private Data

Monty Solomon <>
Fri, 28 Oct 2016 09:29:48 -0400

The new rules require broadband providers to get permission to collect data
on a subscriber's web browsing, app use, location and financial information.

Yahoo invents spying billboard

"Alister Wm Macintyre \(Wow\)" <>
Sun, 30 Oct 2016 18:57:21 -0500
Those billboards in the movie Minority Report, the ones that watch you,
listen as you speak, then address you by name? They're on the drawing board
at Yahoo.

Yahoo, under fire over this week's revelation that it helped the federal
government spy on its users, has applied for two related patents describing
a camera-equipped billboard that can spy on drivers.  The patent
applications, submitted in March 2015 and made public by the U.S.  Patent
and Trademark Office on Thursday, describe a billboard that has sensors
including cameras, microphones and even retina scanners built in or
positioned nearby. [...]

Google Brain AI develops cryptographic algorithms (Ars Technica)

"Peter G. Neumann" <>
Sat, 29 Oct 2016 16:02:03 PDT
"Google Brain has created two artificial intelligences that evolved their
own cryptographic algorithm to protect their messages from a third AI, which
was trying to evolve its own method to crack the AI-generated crypto. The
study was a success: the first two AIs learnt how to communicate securely
from scratch."

Managing Driving's Many Distractions

Monty Solomon <>
Tue, 25 Oct 2016 22:05:46 -0400

Re: Self-driving cars shouldn't have to choose who to protect in a crash (Andrews, RISKS-29.88)

"Gary Hinson" <>
Fri, 28 Oct 2016 18:27:26 +1300
Regarding the issue of autonomous cars attempting to choose the lesser of
two (or more) evils in the moments immediately preceding an unavoidable
accident, it's a good example of setting higher standards for AI drivers
than human drivers.

Has there ever been a situation where a human driver has been prosecuted,
after the fact, explicitly for making the wrong snap decision under these
kinds of situations?  Does society, let alone the law, seriously expect
drivers to make such complex value judgments in the milliseconds between
spotting and responding to multiple hazards?

It is unrealistic and unreasonable to expect AI drivers to be perfect.
Better than humans, fair enough, but not perfect, especially at this stage
of the game.  In due course, they will undoubtedly improve but right now I'd
happily settle for "demonstrably safer (less RISKier) than a good driver".

Dr Gary Hinson PhD MBA CISSP CEO of IsecT Ltd., New Zealand 

SQL injection and Buffer Overflow Risk Assessments (Peter Bright)

Werner U <>
Sun, 30 Oct 2016 04:17:25 +0100
(Peter Bright in Ars Technica)

"Where buffer overflows require all sorts of knowledge about processors and
assemblers, SQL injection requires nothing more than fiddling with a URL"

*How security flaws work: SQL injection*  —Peter Bright - Oct 28, 2016
This easily avoidable mistake continues to put our finances at risk.

*A SQL injection example:*  (SQL - Structured Query Language)
    31-year-old Lauri Love is currently staring down the possibility of 99
years in prison.  After being extradited to the US recently, he stands
accused of attacking ... allegedly part of the #OpLastResort hack in 2013,
which targeted the US Federal Reserve, the FBI, NASA, and the Missile
Defense Agency in the wake of the tragic suicide of Aaron Swartz...

Love is accused of participating in the #OpLastResort initiative through SQL
injection attacks.  SQL injections have recently been detected against state
electoral boards, and these attacks are regularly implicated in thefts of
financial info.  Today, they've become a significant and recurring problem.
SQL injection attacks exist at the opposite end of the complexity spectrum
from buffer overflows.  Rather than manipulating the low-level details of
how processors call functions, SQL injection attacks are generally used
against high-level languages like PHP and Java, along with the (database)
libraries used.

*One of Microsoft's less valuable innovations*

The earliest description of these attacks probably came in 1998, when
security researcher Jeff Forristal, writing under the name
"rain.forest.puppy," wrote about various features of Microsoft's IIS 3 and 4
Web servers in the hacker publication *Phrack*.

IIS came with several extensions that provided ways to generate webpages
based on data from databases.  Then and now, most databases use variants of
the Structured Query Language (SQL) to manipulate their data.  Databases
using SQL organize data into tables.

What Forristal noticed was that the way parameters were combined to build
the query meant that an attacker could force the database to execute *other*
queries of the attacker's choosing.  This act of subverting the application
to run queries chosen by an attacker is called SQL injection.

As with buffer overflows, SQL injection flaws have a long history and
continue to be widely used in real-world attacks.  But unlike buffer
overflows, there's really no excuse for the continued prevalence of SQL
injection attacks: the tools to robustly protect against them are widely
known.  The problem is, many developers just don't bother to use them.
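The contrast between string-built queries and the widely known protection
(parameterized queries) can be shown in a few lines.  A minimal sketch using
Python's standard sqlite3 module (the table and the attacker string are
invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: attacker-supplied text is pasted into the query string,
# so a crafted value changes the query itself.
attacker_input = "nobody' OR '1'='1"
query = "SELECT count(*) FROM users WHERE name = '%s'" % attacker_input
injected = conn.execute(query).fetchone()[0]
print(injected)  # 1 -- the OR '1'='1' clause matched every row

# Robust: a parameterized query treats the input strictly as data.
safe = conn.execute("SELECT count(*) FROM users WHERE name = ?",
                    (attacker_input,)).fetchone()[0]
print(safe)  # 0 -- no user is literally named "nobody' OR '1'='1"
```

The fix costs nothing at runtime; the placeholder form is the tool Bright
says developers "just don't bother to use".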

*How security flaws work: The buffer overflow*  ( Peter Bright - Aug 26,
2015 )

Starting with the 1988 Morris Worm, this flaw has bitten everyone from Linux
to Windows.

The buffer overflow has long been a feature of the computer security
landscape.  In fact, the first self-propagating Internet worm, the Morris
Worm, used a buffer overflow in the Unix finger daemon to spread from
machine to machine.  Twenty-seven years later, buffer overflows remain a
source of problems.  Windows infamously revamped its security focus after
two buffer overflow-driven exploits in the early 2000s.  And just this May a
buffer overflow found in a Linux driver left (potentially) millions of home
and small office routers vulnerable to attack.

At its core, the buffer overflow is an astonishingly simple bug that results
from a common practice.  Computer programs frequently operate on chunks of
data that are read from a file, from the network, or even from the keyboard.
Programs allocate finite-sized blocks of memory, called buffers, to store
this data as they work on it.  A buffer overflow happens when more data is
written to or read from a buffer than the buffer can hold.

On the face of it, this sounds like a pretty foolish error. After all, the
program knows how big the buffer is, so it should be simple to make sure
that the program never tries to cram more into the buffer than it knows will
fit. You'd be right to think that. Yet buffer overflows continue to happen,
and the results are frequently a security catastrophe.  To understand why
buffer overflows happen—and why their impact is so grave—we need to
understand a little about how programs use memory and a little more about
how programmers write their code. (Note that we'll look primarily at the
stack buffer overflow. It's not the only kind of overflow issue, but it's
the classic, best-known kind.)
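The mechanics of the classic stack overflow can be simulated without any
unsafe code.  A hedged sketch in Python (a toy model only; the frame layout,
offsets, and addresses are invented for illustration and nothing here
corrupts a real call stack): a bytearray stands in for a stack frame, with a
16-byte buffer sitting next to a "saved return address", and an unchecked
copy of oversized input clobbers the neighbour.

```python
# Toy model of a stack frame: a 16-byte buffer followed by an
# 8-byte "saved return address".  Offsets are illustrative.
frame = bytearray(24)
RET_OFFSET = 16
frame[RET_OFFSET:] = (0x401000).to_bytes(8, "little")  # legitimate address

def careless_copy(frame, data):
    """Copy 'data' into the buffer with no bounds check --
    the classic mistake behind stack buffer overflows."""
    frame[0:len(data)] = data

# 16 bytes fill the buffer; the next 8 spill into the "return address".
careless_copy(frame, b"A" * 16 + (0xdeadbeef).to_bytes(8, "little"))

ret = int.from_bytes(frame[RET_OFFSET:], "little")
print(hex(ret))  # 0xdeadbeef -- the "return address" is attacker-controlled
```

On real hardware the overwritten return address redirects execution when the
function returns, which is exactly why the impact is so grave.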

OCC Notifies Congress of Incident Involving Unauthorized Removal of Information

Jim Reisert AD1C <>
Sat, 29 Oct 2016 10:29:34 -0600

Contact: Bryan Hubbard  (202) 649-6870

WASHINGTON  The Office of the Comptroller of the Currency (OCC) today
notified Congress and other federal agencies of a major information security
incident, as required by the Federal Information Security Modernization Act
(FISMA).

The notifications were made to the Director of Office of Management and
Budget (OMB), the Secretary of Homeland Security, the head of the Government
Accountability Office, and Congress.

The incident reported by the OCC involves a former employee who downloaded a
large number of files onto two removable thumb drives prior to his
retirement and when contacted was unable to locate or return the thumb
drives to the agency.

New FCC rules on privacy

"Peter G. Neumann" <>
Thu, 27 Oct 2016 9:17:20 PDT

Re: Pittsburgh's new artificially intelligent stoplights could mean no more pointless idling (Weller, R-29.88)

Dave Horsfall <>
Thu, 27 Oct 2016 07:29:48 +1100 (EST)
Sydney has had a simpler version of this for over 30 years.  Known as SCATS
(Sydney Coordinated Adaptive Traffic System), it relies upon in-road sensors.
When traffic flows smoothly, you tend to get green lights all the way (and
if you try and beat a red light then don't be surprised if you get reds all
the way), but all it takes is one accident...  It can also be overridden
from the control centre, for emergency vehicles etc.

I think I even mentioned SCATS in an old RISKS issue.
  [I can't find it. PGN]

Dave Horsfall, Unit 13, 79 Glennie St, North Gosford NSW 2250  AUSTRALIA

Re: Undetectable election hacking? (Brodbeck, R-29.88)

"Mark E. Smith" <>
Tue, 25 Oct 2016 16:18:57 -0700
You are correct, David. ATMs are designed to be verifiable and computers
used in US elections are not. However, your later statement is not correct
in many ways:

  "If we didn't have the secret ballot, we could build our voting machines
  like ATMs and verifying votes would be easy, but that's not the way we've
  chosen to structure our democracy."

First, it is possible to have a secret ballot where votes can be verified,
as long as the votes are not kept secret from the voters themselves. Since
the voters know how they voted, what they verify is that their votes were
counted by the computers in the same way they were cast. The only possible
reason for keeping election processes secret from voters is so that they
cannot be certain if their votes were counted accurately or not.  Who
benefits from this system can be learned by looking at who designed it that
way. The following information is taken in great part from No Treason: The
Constitution of No Authority by Lysander Spooner.

The "We" did not choose to structure this system, nor can it be called a
democracy. The first words of the Constitution, "We the People," are a lie.
The people were not involved, those who were to be governed were not asked
for their consent by those who had decided themselves best suited to govern,
and the framers of the Constitution were not elected and did not represent
the people. They were wealthy elites who considered the people a "mob and
rabble" and did not want the people to have any say in governance.  They
didn't even allow state legislatures to vote on the Constitution, which was
required for a Constitution to be adopted, but held their own Constitutional
Conventions and pushed through a Constitution that was not authorized by the
people or by their representatives, which the states had little choice but
to adopt afterward. It was a betrayal of those who had fought the
revolution, for whereas the Declaration of Independence stated clearly that
"All men are created equal," the Constitution decreed that some men would be
counted as less than equal, or as 3/5 of a person. This was not by design of
the people but by design of the oligarchs and plutocrats, almost all of whom
were slaveowners. Ben Franklin also lied, by saying that we had a republic
(if we could keep it) when he was fully aware that we had a plutocracy or
oligarchy. He further betrayed us by failing to present to the Convention
the Abolitionist petition he had been given by those he pretended to
represent.

This is not a democracy and we the people had no say whatsoever in how it
was structured. If we had, we would not have kept ballots secret from those
who cast them, any more than we allow ATMs to keep bank account balances
secret from the account owners.

Telling people that they may vote, but that they may not verify that their
votes were counted as cast, is like telling them that they can deposit money
to their bank account but they may not verify that the money was credited to
their account. No sane person would use a bank like that, and no sane person
would vote in elections like that.

  [Once again, comparing banking with voting is like comparing apples and
  orangutans.  (I use that metaphor because of the inherent type mismatch.)
  No sane person makes that comparison any longer after having read RISKS,
  based on the voting systems that exist.  PGN]

  [Who would benefit from being able to prove that a voter had voted for
  particular candidates or issues?  Vote buyers and vote sellers.  Who would
  benefit from being able to verify that their vote was cast correctly
  without revealing how they voted?  Everyone.  There are various
  cryptographic approaches that might some day enable that alternative, but
  none of today's commercial/proprietary paperless systems do, typically
  because there is no assurance that your vote actually is counted as cast.
  PGN]

Re: Undetectable election hacking? ("3daygoaty" R-29.85)

<John Colville>
Saturday, 22 October 2016 18:19
> Australia has begun registering voters automatically.

Only in some jurisdictions, and not federally.

Australia *does* have instant run-off voting but proportional representation
is limited mostly to the federal Senate and the upper houses in most states.
In most lower houses, the electorates return a single member.

The loser parties do not normally have a voice in government.  However, in
our recent federal elections the government was returned with the barest
majority. So it may have to negotiate to get its legislation approved.

Re: Undetectable election hacking (Edwards, RISKS-29.88)

"3daygoaty ." <>
Thu, 27 Oct 2016 10:36:06 +1100
>> Australia has begun registering voters automatically.

>This might be news to the Australian Electoral Commission, the body
responsible for administering the electoral roll and running federal
elections in Australia.

Here is some information about voting and enrollment in Australia for your

The AEC does not automatically enroll voters but the states do.  At least
VIC and NSW that I know of.

VIC: (legislation was changed and then...)  In October 2010, the VEC wrote
to 1,932 students, aged 18 years or older at 30 September 2010 and not
enrolled, advising them that the VEC intended to enroll them on the register
of electors. The students had 14 days to advise the VEC if they were not
entitled to enroll. Fifteen letters were returned marked undeliverable, or no
longer at the address. Advice was received in relation to a further 17
students, who did not understand the significance of enrollment and voting,
and 105 students were enrolled as a result of receiving the notice and 1,795
more were enrolled by the Commission. Of those electors who were
automatically enrolled, 1,557 subsequently voted at the election.

In the 2014 VIC state election this amounted to about 50K young voters
getting auto-enrolled.  Still there may be up to half a million 18-24yo's
not enrolled in Australia.  In some states you can enroll online.

The enrollment databases of state and fed are harmonised at the fed level
and used for council (LGA), state and federal voter mark off.

There have been state based pilots of centralised real time (and offline)
electronic voter mark-off as well.  The offline electronic marks are
reconciled post-hoc as are the paper electoral register marks.

You are required to enroll to vote.

If you're eligible to be on the electoral roll but you haven't enrolled or
updated your enrollment, you may be fined.

VIC (1 Jul 2015): the penalty amount is $152.00

There's an oral history that a "non-enroller" who got to the courtroom steps
but recanted and signed the enrollment form there and then and avoided the
court case.  Apparently there have been no court cases in Victoria for

The fine for not voting in VIC state LGA or state elections is $78.  Not
voting in the fed is a $20 fine.  You can turn up, be marked off, accept the
blank ballots, but then hand them back to the clerk unvoted and walk out.
They are struck and marked as a discard.

Part of the state election count (upper house ballots), and more recently
the fed count are keyed or OCRd and automatically counted.  Scytl software
counts these ballots in the fed.

Technically maybe a few things here for RISKS?

  [Note a slight spelling difference between the Australian enrol and
  others' enroll.  I tried to adjust a few that alternated.  PGN]

Re: Undetectable election hacking?

"Alister Wm Macintyre" <>
Thu, 27 Oct 2016 17:23:43 -0500
Smith: "Diebold's ATM machines are extremely accurate."
Brodbeck: "You might be overestimating how secure the average ATM is."

Y'all might like to review Krebs or Schneier on ATMs, and similar resources.
Here are some samples; there seem to be hundreds more like these.

They report a mountain of stories about skimmers and other technologies for
extracting the info needed to steal our bank accounts, and on ATM networks
lacking good security thinking.

* ATM Skimmer technology is constantly being improved, in a war between
  financial institutions & crooks, much like the endless battle between
  anti-badware vendors, and the creators of the cyber attack software.

* ATMs are like branch banks, containing thousands of $20 bills.  Their
  geographical location should have some modicum of physical security to
  lower the risk of bank robbery.  Some ATMs get hauled away in smash-and-grab
  operations, and broken into at some other site.

* ATMs have a lock for accessing the interior.  That lock can be a weak
  point.

* Cables connect to the ATM, often with no security against an unauthorized
  person inserting a signal tap, so whatever data goes to and from the bank,
  well encrypted or not, can also end up in the possession of unauthorized
  persons.

* A camera mounted high above the ATM can capture what is keyed in by anyone
  not doing a good job of covering their hand while keying.

* Comments on security sites can also be educational.

* Smart Phone connections to e-banking can include man-in-middle malware.

* Social engineering, at places which hire cheapest labor, gets crooks in
  the door, posing as POS technicians, so they can install their latest
  malware and skimmers.

I suspect cyber security is poorer on similar machines in convenience stores
& gas pumps, than bank ATMs.  Who maintains those machines & do they need
any relevant qualifications to be hired?

In many cases, ATM hacking is undetectable, until the victims find out their
bank accounts have been emptied.

A problem of discussion may be that if you have never been robbed via ATM
breach, and do not know anyone who has been a victim, you then have the
false belief that the ATMs are safe and reliable.  Many people do not backup
their computers, because they have not yet had an experience in which they
need a backup.  How many people would wear seat belts & carry auto
insurance, if it was not required by law?  There is widespread belief that
something is safe, because nothing bad has happened to them yet.

As for accuracy, I have occasionally got an extra bill, like I asked for
three (3 @ $20) but actually got 4.

I have heard from other people with a similar experience.

When we go into the bank to try to return the extra money, the tellers
NEVER believe us.  They are trying to figure out what scam we are trying to
pull.

All bank records, including receipt with the money issued, correctly state
what we asked for, not what we actually got.

I wonder how many people to whom this happens fail to report it.

Re: Internet becoming unreadable, lighter thinner fonts (R-29.88)

"Wendy M. Grossman" <>
Fri, 28 Oct 2016 11:52:40 +0100
It's not a solution to the underlying problem, of course, but for Firefox
users there's a toolbar button add-on that lets you toggle to your own
default fonts and color settings, which at least lets you turn these
unreadable pages into readable ones.

Re: Unneeded services (Gezelter, R-29.88)

Dimitri Maziuk <>
Tue, 25 Oct 2016 17:38:55 -0500
> Why, pray tell, was telnet enabled on embedded devices sold to consumers?

The only thing wrong with telnet is that *if* the bad guys tap into the wire
between you and the device *when* you type in your username and password,
they *may* capture them and use them to get in later.

If the bad guys already know the (factory-default) login and password, who
cares if it's telnet or the most secure quantum cryptography ever imagined:
they're in.

The other ass-umption here is that user's ability to get in and
configure the device is "unneeded".

Re: The Trolley Problem and altruism (Sebes, R-29.87)

Michael Marking <>
Wed, 26 Oct 2016 03:21:26 +0000
So now Stephen Hawking has joined Elon Musk in expressing concern that AI
may pose grave dangers to humanity.  I'm with them and with others that
agree.  We have a potential problem looming, our own trolley problem on a
different level, but I see it more as an impending slow motion train wreck.

As the Kate Griffin link (29.87) pointed out, this is playing out in a
larger context. John Horgan's piece (linked from 29.88) gently and
humourously pointed out that there's a lot of thinking yet to be done.  But
John Sebes (29.87) hit the most important of the nails on the head:

> This is proprietary code that could have even simple bugs that
> accidentally invert settings. And no public view on how much effort the
> manufacturer put into physical tests of the safety algorithms.  This just
> sounds like wishful thinking.

So we haven't defined the problem, let alone the answer, and can't even
verify the systems we have. People are now ending up in jail or not, being
granted or denied credit, being targeted for drone strikes or not, getting
medical treatment or not, and more, so this is existential, not theoretical,
and it will get worse.

Although we have a few rules that say that many trials must be conducted in
public, and there are a few open-meeting laws that require that some
deliberations be accessible to the public, there is no widely accepted
"sunshine" principle that disallows what we, in fact, have: a government
which operates in secret. Now, the situation with business is even worse,
because there are rights to privacy written into the law. As for the
juncture (which is a very, very blurred border) between private entities and
government: corporations, whose shareholders have a legal right to remain
anonymous, can funnel unlimited money to politicians.  What could go wrong?

I don't see much hope for opening the code or development process or design
tradeoff decision process to public view. The only way out is a sea change
in public attitudes, which will require a corresponding revolution in public
awareness. The public, which mostly tolerates this state of affairs, isn't
likely to get worked up over, say, algorithmic inequity, and they're not
going to get the connection to drone strikes and self-driving cars, or the
relationship of these things to the larger world.

I suspect that most of the readers here can see the problem. There hasn't
been much dissent. So, if I can get one point across, it is:

  We have an ethical obligation, both as professionals and as citizens and
  humans who can read the writing on the wall, to educate others regarding
  the gravity of the problem.

As I see it, Hawking and Musk aren't being alarmist or seeking publicity,
they're fulfilling their ethical obligations as respected public figures to
speak out.

One of the first books I read as a child was Asimov's "I, Robot", with his
Three Laws of Robotics. Now, only a half century later, the things I read in
science fiction stories come true around me. It's frightening.

Regardless of what is said at conferences and in journals, we're going to
build these machines in our images, and they will share our de facto ethics,
which are widely at variance with Asimov's Three Laws. They're going to do
what we want them to do, the way we want it done. Never mind asking whether
to hit the bicyclist or the wheelchair, our own ethics—which will inform
the AIs—are that, for example, when confronted with a choice between
killing a human and preserving property rights, it is often better to kill
the human. Our de facto ethics are that it's OK to start wars to "help"
another country (for "humanitarian reasons"), even when the people of the
other country don't want it. We decide what's best for others. Our de facto
ethics are that human misery and death are justified by "free markets". Our
de facto ethics are that laws and justice are different for the rich than
for the poor.

Are we going to someday learn that vulnerabilities were introduced
surreptitiously into the Open Ethics Library? Will the ethics code of
weapons be secret for "national security" reasons? Will the algorithms which
decide whether an alleged offender should be eligible for bail, now asserted
to be confidential as "proprietary", be extended for use in other policy
areas well beyond the criminal justice system? Are we someday going to tune
to the AIR Channel ("AI & Robotics") and listen to a robotic Lesley Stahl
hear from an "uploaded" Madeleine Albright that killing a half million
children was a proper decision supported by the (secret) ethical algorithms?

Police now seem to have a right to kill you if they are "afraid", if they
imagine you might be a threat. Let's take all that bodycam imagery and feed
it to an AI developed by mining the brains of experts (the cops themselves!)
and let the AI decide whether it's OK to shoot first and then ask questions
later. Who gets to specify and to oversee that project? (Of course, there's
money to be made there, too.)

Last week, University College London announced an AI that could predict
the outcome of human rights trials with 79 percent accuracy. I read that in
three different places (one was ACM Tech News 2016.10.24).  The small print:
it worked because it realized that the judge tended to ignore the law and
made decisions based on facts, making him or her a "realist". So we have an
AI that works by ignoring the law. I'm not criticizing the developers, but
what happens when we replace actual human judges and juries with AIs
programmed to ignore the law?
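The mechanism is mundane: a predictor trained only on the facts of each
case, with no representation of the applicable law at all, can still
score well if the judge is a "realist" deciding on facts. A toy sketch
with entirely invented cases and keywords (not the UCL system's actual
method or data):

```python
# Hypothetical facts-only outcome predictor.  Training "cases" are
# invented; the point is that the law never appears as a feature.
from collections import Counter

# (fact keywords, outcome) pairs -- fabricated for illustration.
train = [
    ({"detention", "no-hearing"},     "violation"),
    ({"detention", "prompt-hearing"}, "no-violation"),
    ({"surveillance", "no-warrant"},  "violation"),
    ({"surveillance", "warrant"},     "no-violation"),
]

def predict(facts: set) -> str:
    """Each overlapping fact keyword votes for the outcome it
    co-occurred with in training; the most-voted outcome wins."""
    votes = Counter()
    for keywords, outcome in train:
        votes[outcome] += len(facts & keywords)
    return votes.most_common(1)[0][0]

assert predict({"detention", "no-hearing"}) == "violation"
assert predict({"surveillance", "warrant"}) == "no-violation"
```

Nothing in `predict` knows any statute exists; it mirrors whatever
pattern the judge's fact-driven decisions exhibit, lawful or not.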

It is too easy for us to fall into this trap. AIs are going to be cheap,
using them will be easier than doing things ourselves, they will give the
appearance of being objective and unbiased, we will have economic
incentives, and the benefits will seem overwhelming.

Sooner or later, these AIs are going to insinuate themselves into the
deepest corners of our lives, and we will have lost control. Therefore, if
our future overlords, the AIs, are to have better ethics than we now have,
we must improve our own behaviour before they attain a more impressionable
and perhaps uncontrollable age.

Will this be a brave new world, or a craven one?

After Asimov's death, Kurt Vonnegut assumed the presidency of the American
Humanist Association. He joked to them that "Isaac is up in heaven
now". (Understand he was talking to a bunch of atheists.) Now, Kurt passed
away nine years ago. Are they both up there in heaven now, and, if so, are
they laughing?

Samsung Holdouts Won't Give Up Their Fire-Prone Galaxy Notes

Monty Solomon <>
Wed, 26 Oct 2016 09:57:22 -0400

More Wretched News for Newspapers as Advertising Woes Drive Anxiety

Monty Solomon <>
Sun, 30 Oct 2016 17:24:09 -0400
News publications continue to be pummeled by rapidly declining print
advertising revenue, and newsrooms everywhere are scrambling.
