The RISKS Digest
Volume 31 Issue 40

Thursday, 5th September 2019

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Avoiding a space collision
MIT Tech Review
Elon Musk said the satellites his company launches will avoid potential collisions on their own.
QZ
Strangelove redux: U.S. experts propose having AI control nuclear weapons
Bulletin of the Atomic Scientists
Tesla autopilot is found partly to blame for 2018 freeway crash
via GG
Tesla customers locked out of our cars: unknown error
Reddit
iPhone hacks
The Register
Google accused of leaking personal data to thousands of advertisers
Liam Tung
Governments Shut Down the Internet to Stifle Critics. Citizens Pay the Price
NYTimes
600,000 GPS trackers left exposed online with a default password of '123456'
Catalin Cimpanu
How Apple's HomePod turned my friends into rude troglodytes
Chris Matyszczyk
Apple is Bad at Software, says Google
Security Boulevard
Algorithmic Foreign Policy
Scientific American
Oregon Judicial Department hit by phishing attack
Bradenton
Cyberattacks Mar Start of Academic Year
InsideHigherEd
Ask Amy: Son left home, but left behind racy mementos
WashPost
'Dutch mole' planted Stuxnet virus in Iran nuclear site on behalf of CIA, Mossad
The Times of Israel
Frequency-sensitive trains and the lack of failure-mode analysis
R.G. Newbury
Forget email: Scammers use CEO voice 'deepfakes' to con workers into wiring cash
Liam Tung
Re: Sometimes simplicity is dangerous ...
Alexander Klimov
Re: Facebook's big win
Amos Shapir
Re: Phishing spam is getting better
Roger Bell_West
Re: A Harvard freshman says he was denied entry to the U.S. over social media posts
Dick Mills
Re: Contingency plan for compromised fingerprint database
Martin Ward
Info on RISKS (comp.risks)

Avoiding a space collision (MIT Tech Review)

the keyboard of geoff goodfellow <geoff@iconia.com>
Mon, 2 Sep 2019 10:14:07 -1000
The European Space Agency <https://www.esa.int/ESA> had to move one of its
satellites out of the way today to protect it from crashing into a SpaceX
Starlink mega-constellation satellite.  Specifically, it had to fire the
Aeolus satellite's thrusters in order to increase its altitude so it could
pass over the SpaceX Starlink satellite.

Aeolus <https://www.esa.int/Our_Activities/Observing_the_Earth/Aeolus>, a
scientific satellite launched in August 2018 to improve weather forecasting,
started returning data shortly after the time of the expected collision,
showing it had successfully avoided one. ESA said it is rare for it to have
to dodge active satellites: most maneuvers of this sort are to avoid
debris. Aeolus orbits considerably lower than the Starlink constellation's
current orbit height so it is possible that the SpaceX satellite it had to
dodge was one of the three that SpaceX is de-orbiting after it lost contact
with them.
<https://www.technologyreview.com/f/613907/spacex-has-lost-communication-with-three-of-its-60-starlink-satellites/>

*Subtle dig:* It's hard not to interpret the news as a criticism of
SpaceX's plans to launch 12,000 satellites to provide broadband Internet connections. Other firms, like Telesat, OneWeb
<https://www.technologyreview.com/f/613043/oneweb-is-about-to-launch-its-first-internet-satellites-to-connect-the/> and LeoSat, have similar
plans. SpaceX started by launching 60 of the satellites in May 2019, but it
plans to rapidly ramp up the numbers in the coming months.
<https://www.technologyreview.com/f/613580/spacex-has-launched-the-first-60-satellites-of-its-space-internet-system/>

*Space debris:* The ESA is far from alone in its concerns. Space debris
experts warn that these sorts of mega constellations of satellites have the
potential to cause far greater and longer-lasting problems than more
eye-catching stunts like India's anti-satellite missile test. It's currently
very rare to have to dodge active satellites, the ESA said, but we can
expect to see several hundred collision warnings every week before long.
<https://www.technologyreview.com/s/613239/why-satellite-mega-constellations-are-a-massive-threat-to-safety-in-space/>
<https://www.technologyreview.com/f/613228/india-says-it-has-just-shot-down-a-satellite-in-space/>
<http://blogs.esa.int/space19plus/programmes/space-debris/>

*A potential solution:* Today's manual collision avoidance processes simply
won't work in an age of mega-constellations. There will be too many to keep
tabs on. As a result, ESA is preparing to automate this process using
artificial intelligence systems, which assess potential collisions and move
satellites out of the way. Until those are up and running, we're relying on
human observation and intervention.
<https://twitter.com/esaoperations/status/1168540912282165248>

https://www.technologyreview.com/f/614250/one-of-spacexs-starlink-satellites-almost-collided-with-a-weather-forecasting-satellite/
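
The screening decision itself is simple to state, even if estimating its
inputs is not. Below is a minimal Python sketch of that kind of rule: if the
estimated collision probability for a predicted close approach exceeds a
threshold, plan an avoidance burn. It is illustrative only; the 1-in-10,000
threshold, the data fields and the example numbers are assumptions, not
ESA's actual system.

# Illustrative conjunction-screening rule; NOT ESA's actual system.
# Threshold, fields and example numbers are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Conjunction:
    other_object: str
    hours_to_closest_approach: float
    miss_distance_m: float
    collision_probability: float

MANEUVER_THRESHOLD = 1e-4   # assumed "1 in 10,000" decision threshold

def needs_maneuver(c: Conjunction) -> bool:
    """Should a collision-avoidance burn be planned for this event?"""
    return c.collision_probability >= MANEUVER_THRESHOLD

def screen(conjunctions):
    for c in sorted(conjunctions,
                    key=lambda c: c.hours_to_closest_approach):
        verdict = "PLAN BURN" if needs_maneuver(c) else "monitor"
        print(f"{verdict}: {c.other_object} "
              f"Pc={c.collision_probability:.1e} "
              f"miss={c.miss_distance_m:.0f} m "
              f"TCA in {c.hours_to_closest_approach:.1f} h")

screen([Conjunction("active satellite", 12.0, 300.0, 1.7e-3),
        Conjunction("debris fragment", 30.0, 4500.0, 2.0e-6)])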


Elon Musk said the satellites his company launches will avoid potential collisions on their own. (QZ)

the keyboard of geoff goodfellow <geoff@iconia.com>
Mon, 2 Sep 2019 10:31:34 -1000
“Within a year and a half, maybe two years, if things go well, SpaceX will
probably have more satellites in orbit than all other satellites combined,''
Elon Musk said last week.

This is an exaggeration. There are almost 2,000 operational satellites in
space right now. But Thursday night's launch of 60 satellites for a new
Internet network called Starlink is the first step towards that goal. Today,
Musk's space company said it expects to launch six more times in 2019, with
the goal of operating 720 satellites by the end of 2020, and eventually
more than 4,000.
<https://qz.com/1618386/spacex-launches-first-starlink-internet-satellites/>

The Federal Communications Commission—the lead regulator for American
satellites—approved these satellites, among 13,000 new satellites okayed
in the last year. That huge number has many in the space community nervous
about the potential for collisions with other satellites or with space
debris.
<https://qz.com/1170077/chinas-plummeting-space-station-is-just-a-taste-of-the-worlds-space-junk-problem/>
<https://qz.com/773511/photos-this-is-the-damage-that-tiny-space-debris-traveling-at-incredible-speeds-can-do/>

Neither the United States nor the world has a reliable system for managing
traffic in space, and policymakers are struggling to keep up with the
private sector's growing ability to hurl computers into the cosmos at faster
and faster rates.

Musk said the satellites his company launches will avoid potential
collisions on their own. And Mark Juncosa, the SpaceX executive in charge of
developing the Starlink satellites, downplayed concerns when answering press
inquiries on the matter last week.  “It might be worth mentioning for
people that are not in the space industry, space is really big,'' he said.

https://qz.com/1627570/how-autonomous-are-spacexs-starlink-satellites/


Strangelove redux: U.S. experts propose having AI control nuclear weapons (Bulletin of the Atomic Scientists)

Gabe Goldberg <gabe@gabegold.com>
Wed, 4 Sep 2019 15:23:15 -0400
Hypersonic missiles, stealthy cruise missiles, and weaponized artificial
intelligence have so reduced the amount of time that decision makers in the
United States would theoretically have to respond to a nuclear attack that,
two military experts say, it's time for a new U.S. nuclear command, control,
and communications system. Their solution? Give artificial intelligence
control over the launch button.

In an article in War on the Rocks titled, ominously, America Needs a 'Dead
Hand,' U.S. deterrence experts Adam Lowther and Curtis McGiffin propose a
nuclear command, control, and communications setup with some eerie
similarities to the Soviet system referenced in the title of their
piece.  The Dead Hand was a semiautomated system developed to launch the
Soviet Union's nuclear arsenal under certain conditions, including,
particularly, the loss of national leaders who could do so on their own.
Given the increasing time pressure Lowther and McGiffin say U.S. nuclear
decision makers are under, “[I]t may be necessary to develop a system based
on artificial intelligence, with predetermined response decisions, that
detects, decides, and directs strategic forces with such speed that the
attack-time compression challenge does not place the United States in an
impossible position.''

https://thebulletin.org/2019/08/strangelove-redux-us-experts-propose-having-ai-control-nuclear-weapons#

...and pay for it with bitcoin.


Tesla autopilot is found partly to blame for 2018 freeway crash

geoff goodfellow <geoff@iconia.com>
Wed, 4 Sep 2019 14:02:05 -1000
* Car on Autopilot struck parked fire truck near Los Angeles
* Report is second concluded by NTSB on Tesla automation

U.S. transportation safety investigators found Tesla's design of its
automated driver-assist system was partly to blame for a crash in which an
inattentive driver slammed into a fire truck parked on a freeway near Los
Angeles in 2018.

The National Transportation Safety Board also cited the driver's failure to
stop for the truck, which was parked with its emergency lights on, in the
22 Jan 2018 collision, which caused no injuries. The driver's actions were
“due to inattention and overreliance on the vehicle's advanced driver
assistance system,'' the NTSB said in a final report released Wednesday.

The vehicle's design “permitted the driver to disengage from the driving
task,'' the agency said, adding that the driver was using the system “in
ways inconsistent with guidance and warnings from the manufacturer.''

The findings are the latest to put the coming wave of automated driving
machines under a microscope over doubts about their safety and how they
interact with the humans behind the wheel. In 2017 the agency cited the
Tesla system's design as a contributor to a fatal 2016 crash in Florida,
prompting two recommendations to the company and other manufacturers to
improve the safety of partially autonomous driving tools. [...]

https://www.sfgate.com/business/article/Tesla-autopilot-is-found-partly-to-blame-for-2018-14413536.php
https://www.bloomberg.com/news/articles/2019-09-04/tesla-autopilot-gets-partial-blame-for-2018-crash-by-u-s-agency


Tesla customers locked out of our cars: unknown error (Reddit)

geoff goodfellow <geoff@iconia.com>
Mon, 2 Sep 2019 17:34:14 -1000
Customer service says they don't know root cause and are all hands on deck
to resolve. People stranded all over the country. Key card and fob work so
if you have that with you, you are in luck. Call center is blowing up.

https://www.reddit.com/r/RealTesla/comments/cyybke/tesla_customers_locked_out_of_our_carsunknown/

https://teslamotorsclub.com/tmc/threads/tesla-ap-down.164885/


iPhone hacks (The Register)

Tom Van Vleck <thvv@multicians.org>
Sun, 1 Sep 2019 11:51:42 -0400
There has been recent discussion of hacks of the iPhone OS.  See the article
in *The Register*, which points to the detailed article by Google Project
Zero.
https://www.theregister.co.uk/2019/08/30/google_iphone_exploit_chain/

The complexity and subtlety of the attacks described in the Project Zero
article are amazing.  It appears that this is not done by one powerful
wizard (like Mark Dowd) but rather by a whole Ministry of Magic.

My guess would be that there are additional, similarly elaborate, exploits
not yet described.  QA guy's rule of thumb: for every bug you found, there
is one you haven't found yet.

iPhones are programmed in a C-like language extended with rules,
conventions, libraries, and frameworks.  It is like making a 737 Max
airliner out of trillions of individually glued matchsticks.  It might
fly... but the technology chosen is too delicate and vulnerable for the
purpose intended, and there may be significant systemic weaknesses not
addressed by choice of implementation technique.

It seems clear that trying to write secure operating systems in C does not
work.  Very smart people have tried for 50 years, and the solution to the
problem has not been reduced to practice.

I think we need even more powerful tools... and by tools I mean ideas and
approaches as well as compilers.  Rust, Swift, Scala, Go.  Well maybe.
Focusing on the language is not enough.  We tried that.  SEL4, Haskell.
Proof methodology.  Not yet accepted as standard, the way C replaced
assembler.  When I look at the Multics B2 and Secure VMS projects, I get the
feeling that we are still doing it wrong.  Trying to build skyscrapers with
two-by-fours and hammers.

I used to say, “the software is crying out to us with the only voice it
has, failure reports.  We have to listen, and figure out why, and imagine
solutions.''

I feel like our problem is philosophical. I'd like better clarity about what
we require operating systems to do, and what kind of certainty we want about
their behavior.

We are still in the pit, and better shovels won't be enough.


Google accused of leaking personal data to thousands of advertisers (Liam Tung)

Gene Wirchenko <gene@shaw.ca>
Thu, 05 Sep 2019 10:41:27 -0700
Liam Tung, ZDNet, 5 Sep 2019
Browser maker Brave says Google is using a secret workaround to bypass EU
data-protection laws and serve targeted ads.
https://www.zdnet.com/article/google-accused-of-leaking-personal-data-to-thousands-of-advertisers/


Governments Shut Down the Internet to Stifle Critics. Citizens Pay the Price (NYTimes)

Monty Solomon <monty@roscom.com>
Mon, 2 Sep 2019 19:03:42 -0400
https://www.nytimes.com/2019/09/02/world/africa/internet-shutdown-economy.html

Internet shutdowns have become one of the defining tools of government
repression in the 21st century, but citizens bear the cost at work and at
home.


600,000 GPS trackers left exposed online with a default password of '123456' (Catalin Cimpanu)

Gene Wirchenko <gene@shaw.ca>
Thu, 05 Sep 2019 10:47:34 -0700
Catalin Cimpanu, ZDNet, 5 Sep 2019
Default password is a danger for customers, but also for the vendor itself.
https://www.zdnet.com/article/600000-gps-trackers-left-exposed-online-with-a-default-password-of-123456/

At least 600,000 GPS trackers manufactured by a Chinese company are using
the same default password of `123456', security researchers from Czech
cyber-security firm Avast disclosed today.

They say that hackers can abuse this password to hijack users' accounts,
from where they can spy on conversations near the GPS tracker, spoof the
tracker's real location, or get the tracker's attached SIM card phone number
for tracking via GSM channels.

Researchers explain that accounts on the cloud service are created as soon
as the GPS trackers are manufactured. They said that a malicious competitor
could hijack these accounts before the devices are sold and change their
passwords, effectively locking accounts and creating customer support
problems for Shenzhen i365-Tech and its resellers later down the road.
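
A minimal sketch of one obvious mitigation, assuming nothing about the
vendor's real provisioning flow: give each tracker a unique random initial
password when its cloud account is created, and force a change on first
login, instead of shipping every unit with `123456'. The Python below is
illustrative only; the device IDs and data model are invented for the
example.

# Illustrative provisioning sketch: unique random initial credentials
# per device instead of a shared default.  Data model is invented.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device(device_id: str) -> dict:
    """Create the cloud account for a freshly manufactured tracker."""
    initial_password = "".join(secrets.choice(ALPHABET)
                               for _ in range(12))
    return {
        "device_id": device_id,
        "password": initial_password,   # printed on the box/QR label
        "must_change_password": True,   # forced rotation on first login
    }

for dev in ("TRACKER-000001", "TRACKER-000002"):
    account = provision_device(dev)
    print(account["device_id"], account["password"])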


How Apple's HomePod turned my friends into rude troglodytes (Chris Matyszczyk)

Gene Wirchenko <gene@shaw.ca>
Thu, 05 Sep 2019 10:37:00 -0700
Chris Matyszczyk for Technically Incorrect, ZDNet, 5 Sep 2019
They say technology changes human behavior. As I've found when I invite
friends to my house. Thanks, Apple.
https://www.zdnet.com/article/how-apples-homepod-turned-my-friends-into-rude-troglodytes/

Still, here was a friend I'd known for some time who, after dinner, suddenly
decided to take control.

Take control of my HomePod that is.

Usually, when friends come over, I ask Siri to play a little quiet music to
add serenity to the atmosphere. Some Keith Jarrett, perhaps.  Or, if I don't
want the friends to stay too long, some Mud and Bay City Rollers hits from
the 70s.

Until that fateful night, though, no one had expressed unease about the
music. Until my friend suddenly shouted across the room: “Hey Siri, play
some Tears For Fears.''

Normally, this friend is politeness itself.

There was no “do you mind if we change the music?''  There wasn't even a
hint of “you know Beethoven's not cool anymore, don't you?''

It was as if it was de rigueur to shout to Siri—in the belief that she's
actually your own Alexa—and get what you feel like.

Would anyone have behaved this way with previous technologies? Did guests
simply walk over to the record player, the cassette player, the CD player
and change the music whenever they felt like it?


Apple is Bad at Software, says Google (Security Boulevard)

<>
Sat, 31 Aug 2019 23:59:31 -0400
https://securityboulevard.com/2019/08/apple-is-bad-at-software-says-google/


Algorithmic Foreign Policy (Scientific American)

Richard Stein <rmstein@ieee.org>
Sat, 31 Aug 2019 11:23:30 -0700
https://blogs.scientificamerican.com/observations/algorithmic-foreign-policy/

“Last year, China unveiled its development of a new artificial intelligence
system for its foreign policy. It's called a 'geopolitical environment
simulation and prediction platform,' and it works by crunching huge amounts
of data and then providing foreign policy suggestions to Chinese
diplomats. According to one source, China has already used a similar AI
system to vet almost every foreign investment project in the past few years.

“Consider what this development means: Slowly, foreign policy is moving away
from diplomats, political-risk firms and think tanks, the 'go-to'
organizations of the past. Slowly, foreign policy is moving toward advanced
algorithms whose primary objective is to analyze data, predict events and
advise governments on what to do. How will the world look when nations are
using algorithms to predict what happens next?''

Computer software digests human events and reactions to them. It does not
forget the past, but assigns weights to events' apparent impact on the
governing world, regional, local or social order. Use this production system
(à la OPS5) to simulate (extrapolate) future events.
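
For readers who have not met a production system, here is a minimal
forward-chaining sketch in Python, in the spirit of OPS5: match rules
against working memory, fire any rule whose conditions hold, add its
conclusion, and repeat until nothing changes. The rules and facts are
invented for illustration and have nothing to do with China's actual
platform.

# Minimal forward-chaining production system, in the spirit of OPS5.
# Rules and facts are invented purely for illustration.
facts = {"tariffs_raised", "currency_weakening"}

rules = [
    ({"tariffs_raised"}, "trade_tension"),
    ({"trade_tension", "currency_weakening"}, "capital_flight_risk"),
    ({"capital_flight_risk"}, "advise_diplomatic_outreach"),
]

def run(working_memory, rules):
    """Fire rules until working memory stops changing."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if (conditions <= working_memory
                    and conclusion not in working_memory):
                working_memory.add(conclusion)
                changed = True
    return working_memory

print(sorted(run(set(facts), rules)))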

Risk: Coupled to an armed forces situation room, this platform seems certain
to possess `alarm fatigue' potential.

Whatever happened to game theory and wisdom? Have these techniques and
experts become so expensive, or their advice so easy to mistrust, that only
a computer's recommendation can be accepted?

See The Man Who Saved the World for a fortuitous example of human common
sense at work.


Oregon Judicial Department hit by phishing attack (Bradenton)

Monty Solomon <monty@roscom.com>
Fri, 30 Aug 2019 10:47:58 -0400
https://www.bradenton.com/news/business/technology/article234530047.html


Cyberattacks Mar Start of Academic Year (InsideHigherEd)

Monty Solomon <monty@roscom.com>
Fri, 30 Aug 2019 10:54:50 -0400
https://www.insidehighered.com/news/2019/08/27/two-universities-targeted-hackers-just-new-school-year


Ask Amy: Son left home, but left behind racy mementos (WashPost)

Gabe Goldberg <gabe@gabegold.com>
Fri, 30 Aug 2019 00:35:18 -0400
Ask Amy: Son left home, but left behind racy mementos
Parent opened files on home computer to find nude photos.

https://www.washingtonpost.com/lifestyle/advice/ask-amy-son-left-home-but-left-behind-racy-mementos/2019/08/27/32b661f4-c04c-11e9-a5c6-1e74f7ec4a93_story.html


'Dutch mole' planted Stuxnet virus in Iran nuclear site on behalf of CIA, Mossad (The Times of Israel)

Gabe Goldberg <gabe@gabegold.com>
Thu, 5 Sep 2019 00:03:19 -0400
https://www.timesofisrael.com/dutch-mole-planted-infamous-stuxnet-virus-in-iran-nuclear-site-report/


Frequency-sensitive trains and the lack of failure-mode analysis (Re: RISKS-31.39)

"R. G. Newbury" <newbury@mandamus.org>
Tue, 3 Sep 2019 13:28:17 -0400
> Identifying all these failure modes in advance obviously takes more
> expertise and foresight—but is that really too much to ask of the
> relevant experts?

It is a lack of imagination. The 'relevant experts' are often what Nassim
Taleb calls Intelligent Yet Idiot. The experts transgress beyond their
expertise and wrongly (and disastrously) believe that NOTHING CAN GO WRONG,
beyond what they have considered. They lack the imagination to see other
scenarios. In Taleb's words, they cannot see black swans, therefore no black
swan can exist.

What is actually needed in the planning/design stage is to present the
unexpected scenario to people who face the real situation every day, and ask
them “X has just failed. What can happen next? What do you do? What can
happen then?''  And present it to *lots of people in the relevant
field*. Some one of them will likely have experienced it, or recognized it
lurking just out of sight, and *not gone there*.

The ultimate underlying cause of the crash of AF447 was that there was NO
FEEDBACK between the two flight controls. There was, during the design stage
*and thereafter*, a total lack of imagination that the two pilots would do,
or even WANT TO DO, different things. And, most importantly, no feedback to
tell the pilots that they *were* doing different things.

The pilot was unaware that the co-pilot had `frozen' with the stick full
aft. If he had known that, he would have called 'my plane' and whacked the
co-pilot across the face if necessary to regain control.

There was a complete lack of imagination of the human factor by 'the
experts'. That can happen even in hindsight: compare the 'investigation'
scenes in the movie Sully, where the 'experts' are utterly convinced that
Sullenberger 'ought to have turned back'. But they wanted him to do so
*instantly*. They pointed to the fact that, in simulations, pilots were able
to land safely. Not particularly noticeable in the scene is the revelation
that it took the 'expert' pilots 17 attempts to land at Teterboro, even
though they knew exactly what was going to happen and could react instantly
in their *simulation*. Only when Sully forced a recognition of the human
factor was reality made real. The scenes are a great example of the power of
tunnel vision and how it can blind the best of the experts. Add politics or
money (but I repeat myself) and the mixture is toxic.

The other underlying causes of AF447 are also due to a lack of imagination
of *what could happen next*. The autopilot shut off when it lost air-speed
data. Why was it not commanded to cross-check with GPS data? Why was there
no *explicit* error message, followed by an automatic override command to
turn on pitot heat (as pitot icing is the most likely reason for a loss of
airspeed data, and it cannot hurt), and to *turn off the stall warning*, as
it was misleading? And an announcement. Moreover, if the airspeed data is
suspect, the warning should refer to a transfer to GPS data, and adjust the
displays accordingly so as not to be misleading.

As it was, iirc, the autopilot silently disconnected itself, without
announcement, and suddenly, the stall warning started blaring *which caused
the copilot to panic*. What really should have happened was an announcement
along the lines of: “Warning: airspeed indication does not agree with GPS
data. Autopilot changing to use of GPS data. Turning on pitot heat. Stall
warning deactivated.''
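
The cross-check described here is easy to sketch; the hard parts are the
thresholds and the fact that GPS reports ground speed, not airspeed, so
wind must be allowed for. The Python below is purely illustrative of the
proposed logic, not real avionics, and every number in it is an assumption.

# Illustrative sketch of the proposed cross-check; NOT real avionics.
# The disagreement margin (wind + error allowance) is an assumption.
DISAGREEMENT_MARGIN_KT = 60

def airspeed_suspect(indicated_airspeed_kt, gps_ground_speed_kt):
    """Flag indicated airspeed that disagrees wildly with GPS speed."""
    return abs(indicated_airspeed_kt
               - gps_ground_speed_kt) > DISAGREEMENT_MARGIN_KT

def handle_airspeed_loss(ias_kt, gps_kt):
    """Return announcements/actions in the order the author suggests."""
    if not airspeed_suspect(ias_kt, gps_kt):
        return ["normal operation"]
    return [
        "ANNOUNCE: airspeed indication disagrees with GPS data",
        "ACTION: turn on pitot heat",
        "ACTION: switch displays to GPS-derived speed",
        "ACTION: suppress stall warning (flagged as unreliable)",
    ]

for line in handle_airspeed_loss(ias_kt=60.0, gps_kt=470.0):
    print(line)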

Note that a similar cross-check of airspeed v GPS could have prevented the
737 disasters. If the plane were commanded to use the higher of the two
inputs (and warn accordingly) it is quite possible that neither disaster
would have occurred. (I presume that a non-operating GPS is now a 'do not
fly' checkbox for commercial flights.) (But of course, that might have
actually cost more money, and the airlines did not request an upgrade, being
unaware of the actual danger.)

Another example of lack of imagination is the Fukushima disaster.  None of
'the experts' considered what would happen if a tsunami did overflow the
sea-wall: But, but, but you will never, ever, get 10 feet of water on the
site!

I am reasonably certain that any graduates of the U.S. Navy's reactor school
would have instantly recognized that having the 'emergency' generator, AND
its fuel at the lowest level of the site was a major mistake. The generator
and its fuel should have been some distance away, and placed in an elevated
location, such as the top of a berm a couple of miles inland from the
reactors.

As another point, why was there no vent in the roof to disperse the
hydrogen? We know that a meltdown will release hydrogen. The great majority
of the damage to the building was not from the tsunami, it was from the
explosion of the (contained) hydrogen. This also destroyed a large amount of
the piping which could have been used for remediation/reduction of the
meltdown.

Putting the used reactor fuel storage in a pool six stories up was just
plain stupid, especially in an earthquake-prone site. It was apparently not
damaged by the tsunami, but *by the explosion*! They had to bring in
concrete pumpers to replenish the water in the fuel pool, which was now
leaking. But due to the damage to the building they had no way to remove the
fuel bundles, nor easily fix the leaks.  All a failure of imagination. What
could go wrong next? How do we avoid that event?

Lack of imagination is a widespread failure. I am sure that no engineer in
Minneapolis ever thought to consider what happens to the bridge if acid from
pigeon poop reduces that tie-plate from 1" down to a half inch. Or, put
another way, what is the minimum allowed thickness of the structural
components before repair is necessary? Possibly that should be required in
the as-designed blueprints, as instructions for upkeep.


Forget email: Scammers use CEO voice 'deepfakes' to con workers into wiring cash (Liam Tung)

Gene Wirchenko <gene@shaw.ca>
Wed, 04 Sep 2019 10:53:20 -0700
Liam Tung, ZDNet, 4 Sep 2019
AI-generated audio was used to trick a CEO into wiring $243,000 to a
scammer's bank account.
https://www.zdnet.com/article/forget-email-scammers-use-ceo-voice-deepfakes-to-con-workers-into-wiring-cash/


Re: Sometimes simplicity is dangerous ... (RISKS-31.39)

Alexander Klimov <alserkli@inbox.ru>
Tue, 3 Sep 2019 10:39:24 +0000
> And that part of that bump recycles 20% of all the oxygen in the
> atmosphere.

It is unclear what `recycle' is supposed to mean, but if this phrase was
supposed to say that a mature forest produces oxygen, then it is not the
case. While the forest takes in carbon dioxide from the atmosphere during
photosynthesis and converts it to oxygen to support new growth, it also
gives off comparable levels of carbon dioxide when old trees die. To really
`produce' oxygen one needs to sink the produced carbon, for example, in a
swamp.
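
The stoichiometry makes the point concrete: photosynthesis releases one O2
per carbon fixed, and respiration and decay consume one O2 per carbon
returned to CO2, so the net oxygen a forest adds to the atmosphere equals
the carbon it permanently sequesters. A toy balance in Python, with
illustrative numbers only:

# Toy carbon/oxygen balance for a mature forest.  The 1:1 ratio
# (CO2 + H2O -> CH2O + O2) is real; the numbers are illustrative.
carbon_fixed = 100.0       # carbon fixed by photosynthesis per year
carbon_respired = 98.0     # carbon returned to CO2 by decay/respiration
carbon_sequestered = carbon_fixed - carbon_respired

o2_released = carbon_fixed      # one O2 per carbon fixed
o2_consumed = carbon_respired   # one O2 per carbon respired
net_o2 = o2_released - o2_consumed

print(net_o2 == carbon_sequestered)   # True: net O2 tracks sequestration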


Re: Facebook's big win (RISKS-31.39)

Amos Shapir <amos083@gmail.com>
Sat, 31 Aug 2019 14:02:29 +0300
This court decision is not really that important.  Even if there were a
ruling which would require Facebook to get the consent of users for sharing
their data among its apps, it is easy to imagine what could happen:

Immediately afterward, every user in a country where such legislation is in
effect, would not be able to post anything on any of these apps, without
encountering a VERY LONG message of convoluted legalese, with an `I agree'
button at the end.

You can bet that 99.99% of them would click the button within 1 second.
Voila!  There you have it: consent.


Re: Phishing spam is getting better (Shapir, RISKS-31.39)

Roger Bell_West <roger@nospam.firedrake.org>
Fri, 30 Aug 2019 10:11:24 +0100
> This should be a golden rule for anyone reading email: Never click on any
> link in an unsolicited incoming message, especially not one from your bank
> (or any other service which may have access to your money).

Can you tell whether a message is unsolicited? Can you _really_?

This reduces easily to “Never click on any link in an incoming message,''
and from that we can quickly reach “Never trust any message's text/html
part.''

Alas, banks and others believe that their customers NEED to see the
corporate logo and the custom layout and the tracking bugs, and are
increasingly prone to have a fake text/plain part, usually along the lines
of “your client can't display this message.''

(I would remind them, if they cared, that RFC2046 5.1.4 requires that 'Each
part of a *multipart/alternative* entity represents the same data'.)
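
A crude check for the `your client can't display this message' pattern can
be written with Python's standard email library. The sketch below is
illustrative only; the placeholder phrases and the 200-character heuristic
are assumptions, not a reliable classifier.

# Flag multipart/alternative messages whose text/plain part is a
# placeholder rather than a genuine alternative (cf. RFC 2046 5.1.4).
# Placeholder phrases and length cutoff are assumptions.
from email import message_from_string

PLACEHOLDER_HINTS = ("can't display this message",
                     "cannot display this message",
                     "view this email in your browser")

def fake_plain_part(msg):
    if msg.get_content_type() != "multipart/alternative":
        return False
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            body = part.get_payload(decode=True).decode(
                errors="replace").lower()
            return (len(body) < 200
                    or any(h in body for h in PLACEHOLDER_HINTS))
    return True   # no text/plain part at all

raw = """\
Content-Type: multipart/alternative; boundary="b"

--b
Content-Type: text/plain

Your client can't display this message.
--b
Content-Type: text/html

<html><body><p>Dear customer, click here...</p></body></html>
--b--
"""
print(fake_plain_part(message_from_string(raw)))   # True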


Re: A Harvard freshman says he was denied entry to the U.S. over social media posts (RISKS-31.39)

Dick Mills <dickandlibbymills@gmail.com>
Sat, 31 Aug 2019 13:11:23 -0400
For years I have heard similar anecdotes from Canadian friends.  They say
that U.S. Customs and Immigration employees seem to not know the rules.
Agents just make up rules as they go along.  Every agent has a different
idea of what the rules are.

That might be the real story in the Harvard student case.  Just a civil
servant doing security checks by ad hoc methods, and without adequate
training.

If there really were specific rules and procedures governing who is and is
not allowed in the country, it would be as thick as an old-fashioned phone
book, and it would have been leaked to the press long ago.


Re: Contingency plan for compromised fingerprint database (Slonim, RISKS-31.37)

Martin Ward <martin@gkc.org.uk>
Thu, 29 Aug 2019 09:41:45 +0100
If the access control locks out after n tries (where n << 10), then anyone can
carry out a denial of service attack (or at least: anyone who has n or more
fingers).
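
The denial-of-service trade-off is easy to see in code. A minimal Python
sketch (illustrative only; the value of n and the data model are
assumptions): lock the account after n consecutive failures, and note that
anyone who can present n wrong fingerprints against the victim's identifier
locks the legitimate user out.

# Illustrative lockout policy: brute-force protection doubles as a
# denial-of-service lever.  N and the data model are assumptions.
N_TRIES = 5
failed = {}        # account id -> consecutive failed attempts
locked = set()     # locked-out account ids

def attempt(account, fingerprint_matches):
    if account in locked:
        return "locked out"
    if fingerprint_matches:
        failed[account] = 0
        return "granted"
    failed[account] = failed.get(account, 0) + 1
    if failed[account] >= N_TRIES:
        locked.add(account)
        return "locked out"
    return "denied"

# An attacker who knows only the victim's account id and presents
# N wrong fingerprints locks the real user out:
for _ in range(N_TRIES):
    attempt("victim", fingerprint_matches=False)
print(attempt("victim", fingerprint_matches=True))   # -> locked out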

Please report problems with the web pages to the maintainer
