The RISKS Digest
Volume 32 Issue 37

Friday, 13th November 2020

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Moscow's facial recognition system can be hijacked for just $200
The Verge
Facial-Recognition Technology Needs More Regulation
Scientific American
Dominion Voting Machines Glitches
Markotime via Geoff Goodfellow
Zoom lied to users about end-to-end encryption for years, FTC says
Ars Technica
Europe is adopting stricter rules on surveillance tech
MIT Tech Review
Millions of Hotel Guests Worldwide Caught Up in Mass Data Leak
ThreatPost
Elon Musk Defends Neuralink Against Neuroscientist's Concerns of Chips Overheating
TechTimes
Apps Are Now Putting the Parole Agent in Your Pocket
WiReD
DNS Cache Poisoning Ready for Comeback
Holly Ober
The day the icons vanished!
Lindsay Marshall
Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs
MIT News
CPU-Heat Sink Thermal Paste Effectiveness
Richard Stein
Re: Algorithmic or Human fairness?
Anthony Thorn
Re: UK national police computer down for 10 hours after engineer pulled the plug
John Hall
Re: Whale Sculpture Stops Train From Plunge in the Netherlands
Jan Wolitzky
Re: Using AI to control a camera at a sports event—oops
Erling Kristiansen
Re: Facial recognition used to identify Lafayette Square protester accused of assault
John Levine
Re: What It's Like to Stress-Test Berlin's Brand New, Much Maligned Airport
3daygoaty
Re: Australian 300 MW battery
3daygoaty
Risk assessment: still high
Rob Slade
Working Group on Infodemics Policy Framework, Nov. 2020
Rob Slade
Info on RISKS (comp.risks)

Moscow's facial recognition system can be hijacked for just $200 (The Verge)

Monty Solomon <monty@roscom.com>
Thu, 12 Nov 2020 09:34:05 -0500
https://www.theverge.com/2020/11/11/21561018/moscows-facial-recognition-system-crime-bribe-stalking


Facial-Recognition Technology Needs More Regulation (Scientific American)

Richard Stein <rmstein@ieee.org>
Tue, 10 Nov 2020 12:47:31 +0800
https://www.scientificamerican.com/article/facial-recognition-technology-needs-more-regulation/

  "State and local authorities from New Hampshire to San Francisco have
  begun banning the use of facial-recognition technology. Their suspicion is
  well founded: these algorithms make lots of mistakes, particularly when it
  comes to identifying women and people of color. Even if the tech gets more
  accurate, facial recognition will unleash an invasion of privacy that
  could make anonymity impossible. Unfortunately, bans on its use by local
  governments have done little to curb adoption by businesses from start-ups
  to large corporations. That expanding reach is why this technology
  requires federal regulations—and it needs them now."

https://catless.ncl.ac.uk/Risks/search?query=facial+recognition reveals 34
prior submissions.

Business ventures will often fold, or fail to launch, if they can't find
commercial legal advantage (especially regarding product liability
limitations, etc.) to operate. Legislation that criminalizes biometric-match
inaccuracies (high false-negative/positive rates) may be appropriate, but
would face Congressional hurdles from an infuriated business lobby. That
legislative action is being pursued at all suggests there's considerable
profit at risk.

A business that sells biometric matching products without disclosing false
negative/positive outcomes for its training data set either (a) is
fortunate to find incurious purchasers, or (b) is unconcerned about
deployment outcomes because the product purchase agreement contract asserts
manufacturer liability indemnification rights.

Public safety organizations risk wrongful apprehension and incarceration if
they fail to crosscheck biometric matches against multiple, non-repudiated
identification systems of record before they act.
https://www.cnn.com/2020/06/24/tech/aclu-mistaken-facial-recognition/index.html
exemplifies this necessity.

https://www.blankrome.com/publications/biometric-privacy-2020-current-legal-landscape
discusses Illinois' Biometric Information Privacy Act (BIPA), under which
class actions have established biometric-matching liability for privacy
violations.

A uniform federal standard governing public safety organizations and
commercial deployments, with mandatory enforcement penalties for violations
(https://en.wikipedia.org/wiki/Classes_of_offenses_under_United_States_federal_law),
would establish a firm foundation that deters biometric-match abuses,
including privacy-invasive use.


Dominion Voting Machines Glitches

geoff goodfellow <geoff@iconia.com>
Mon, 9 Nov 2020 14:30:05 -1000
> Date: Mon, Nov 9, 2020 at 4:50 AM
> From: markotime <markotime@gmail.com>

> Think about this: DMCA (Digital Millennium Copyright Act) is likely to
> prove an insurmountable barrier to examination of these machines and their
> software, in search of most any aspect of the "glitch".  Among
> illegalities is "reverse engineering", which may even put statistical
> analysis of tallied votes into verboten territory.  Taken to extremes, the
> Act would seem to allow the SAME machines to be used in future elections,
> without scrutiny.  Scary.


Zoom lied to users about end-to-end encryption for years, FTC says (Ars Technica)

Monty Solomon <monty@roscom.com>
Mon, 9 Nov 2020 15:03:27 -0500
https://arstechnica.com/tech-policy/2020/11/zoom-lied-to-users-about-end-to-end-encryption-for-years-ftc-says/


Europe is adopting stricter rules on surveillance tech (MIT Tech Review)

geoff goodfellow <geoff@iconia.com>
Mon, 9 Nov 2020 14:43:58 -1000
*The goal is to make sales of technologies like spyware and facial
recognition more transparent in Europe first, and then worldwide.*

The European Union has agreed to stricter rules on the sale and export of
cyber-surveillance technologies like facial recognition and spyware. After
years of negotiations, the new regulation will be announced today in
Brussels. Details of the plan were *reported in Politico last month*:
<https://www.politico.eu/article/europe-to-curtail-spyware-exports-to-authoritarian-countries/>

The regulation requires companies to get a government license to sell
technology with military applications; calls for more due diligence on such
sales to assess the possible human rights risks; and requires governments to
publicly share details of the licenses they grant. These sales are typically
cloaked in secrecy, meaning that multibillion-dollar technology is bought
and sold with little public scrutiny.

“Today is a win for human rights globally, and we set an important
precedent for other democracies to follow suit,'' said Markéta
Gregorová, a member of the European Parliament who was one of the lead
negotiators on the new rules, in a statement. “The world's authoritarian
regimes will not be able to secretly get their hands on European
cyber-surveillance anymore.''

Human rights groups have long urged Europe to reform and strengthen the
rules on surveillance technology. European-made surveillance tools were
used by authoritarian regimes during the 2011 Arab Spring and *continue*
<https://www.bbc.com/news/world-middle-east-40276568> to be sold to
dictatorships and democracies around the world today; news headlines and
political pressure have had little noticeable impact.

The main thing the new regulation achieves, according to its backers, is
more transparency. Governments must either disclose the destination, items,
value, and licensing decisions for cyber-surveillance exports or make
public the decision not to disclose those details. The goal is to make it
easier to publicly shame governments that sell surveillance tools to
dictatorships.

The regulation also includes guidance to member states to “consider the
risk of use in connection with internal repression or the commission of
serious violations of international human rights and international
humanitarian law," but that is nonbinding.  [...]
https://www.technologyreview.com/2020/11/09/1011837/europe-is-adopting-stricter-rules-on-surveillance-tech/


Millions of Hotel Guests Worldwide Caught Up in Mass Data Leak (ThreatPost)

Monty Solomon <monty@roscom.com>
Wed, 11 Nov 2020 14:09:41 -0500
A widely used hotel reservation platform has exposed 10 million files
related to guests at various hotels around the world, thanks to a
misconfigured Amazon Web Services S3 bucket. The records include sensitive
data, including credit-card details.

Prestige Software's Cloud Hospitality is used by hotels to integrate their
reservation systems with online booking websites like Expedia and
Booking.com.

The incident has affected 24.4GB worth of data in total, according to the
security team at Website Planet, which uncovered the bucket. Many of the
records contain data for multiple hotel guests that were grouped together on
a single reservation; thus, the number of people exposed is likely well over
the 10 million, researchers said.  [...]

https://threatpost.com/millions-hotel-guests-worldwide-data-leak/161044/
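The root cause is a familiar one: an S3 bucket left readable by anyone. As
a minimal sketch, a world-readable bucket can be spotted in its ACL listing
by looking for a grant to AWS's well-known AllUsers group (the dict shape
below mirrors what AWS returns for bucket ACLs; the sample ACLs and grantee
IDs are made up for illustration):

```python
# AWS's predefined URI for "anyone on the Internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_world_readable(acl: dict) -> bool:
    """Return True if any grant in the ACL gives the AllUsers group
    read (or full) access -- the classic "leaky bucket" misconfiguration."""
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        and grant.get("Permission") in ("READ", "FULL_CONTROL")
        for grant in acl.get("Grants", [])
    )

# Hypothetical ACLs: one private, one misconfigured.
private_acl = {"Grants": [{"Grantee": {"ID": "owner"},
                           "Permission": "FULL_CONTROL"}]}
leaky_acl = {"Grants": [{"Grantee": {"URI": ALL_USERS},
                         "Permission": "READ"}]}
print(is_world_readable(private_acl), is_world_readable(leaky_acl))
# False True
```

Auditing every bucket this way (and enabling account-wide public-access
blocks) is exactly the kind of routine check that would have prevented this
leak.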


Elon Musk Defends Neuralink Against Neuroscientist's Concerns of Chips Overheating (TechTimes)

geoff goodfellow <geoff@iconia.com>
Mon, 9 Nov 2020 14:36:59 -1000
Elon Musk was recently approached on Twitter by a neuroscientist with a
couple of concerns about Neuralink brain chip <https://neuralink.com/>.
Temperature plays an important role in technology as overheating can signify
the machines being overworked.  What could happen if the Neuralink chips
would be overclocked?

Amy Eskridge <https://twitter.com/amyceskridge>, an engineer turned chemist
who then became a neuroscientist and now works as a theoretical physicist,
asked Elon Musk on Twitter whether he had considered the possible
heat-transfer problem that could result from overclocking happening in the
brain. She added that this had most likely already been thought of, and
went on to share her thoughts.

Eskridge believed that the deposition of amyloid plaques could be used to
counter CNS heating by letting the amyloid protein absorb the heat. She
stated, however, that this was a "suboptimal strategy" because of the
denatured protein that accumulates as the plaques. Plaques were said to
have unintended harmful consequences directly tied to disease, meaning that
heat generated by multithreaded brain processing would most likely produce
unexpected plaques.

Neuralink concerns over potential CSF leak

It was then stated that another problem would be a cerebrospinal fluid
(CSF) leak.
<https://www.hopkinsmedicine.org/neurology_neurosurgery/centers_clinics/brain_tumor/center/skull-base/types/csf-leak.html#:~:text=A CSF leak is a,and brain or sinus surgery>

This would mean that increased electrical activity would lead to increased
CSF production to absorb heat into the fluid. Eskridge then stated that
sustained electrical activity above the baseline could result in sustained
pressure.

The sustained increase in intracranial hypertension resulting from a
nontransient rise in electrical activity was also said eventually to
require the fluid to be drained in order to relieve pressure on the brain
stem. Eskridge then stated that these are two examples of the current
default mechanisms by which the human brain dissipates undesirable heat
spikes from increased CNS electrical activity. It was also stated that
although increased electrical conduction is desirable, it will still
provoke some undesirable reactions that need mitigation.

*Elon Musk addresses these concerns: the chip is already designed to
maintain a safe temperature*.  [...]

https://www.techtimes.com/articles/253970/20201108/elon-musk-defends-neuralink-against-neuroscientists-concerns-of-chips-overheating.htm


Apps Are Now Putting the Parole Agent in Your Pocket (WiReD)

Gabe Goldberg <gabe@gabegold.com>
Thu, 12 Nov 2020 01:36:47 -0500
The pandemic has stirred interest in smartphone software for remotely
monitoring parolees and people on probation. But the approach has raised
alarms.

https://www.wired.com/story/apps-putting-parole-agent-your-pocket/


DNS Cache Poisoning Ready for Comeback (Holly Ober)

ACM TechNews <technews-editor@acm.org>
Wed, 11 Nov 2020 12:33:41 -0500 (EST)
Holly Ober, UC Riverside News, 11 Nov 2020
  via ACM TechNews, Wednesday, November 11, 2020

Computer security researchers at the University of California, Riverside (UC
Riverside) and China's Tsinghua University found critical security flaws
that could lead to a resurgence of Domain Name System (DNS) cache poisoning
attacks. The exploit de-randomizes the source port and works on all cache
layers in the DNS infrastructure, including forwarders and resolvers. The
research team confirmed this finding by using a device that spoofs Internet
Protocol (IP) addresses and a computer that can trigger a request out of a
DNS forwarder or resolver; it exploited a novel network side channel to
execute the attack. The team, which has demonstrated the exploit against
popular public DNS servers, recommended the use of additional randomness and
cryptographic solutions to combat it.
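The defensive value of the randomness being stripped away is easy to
quantify. A back-of-the-envelope sketch (illustrative arithmetic, not
figures from the paper): a blind off-path attacker must normally guess both
the 16-bit DNS transaction ID and the 16-bit ephemeral source port, so
de-randomizing the port shrinks the guessing space by a factor of 65,536:

```python
# Size of the space a spoofed DNS reply must blindly match.
TXID_BITS = 16   # DNS transaction ID
PORT_BITS = 16   # ephemeral source port, idealized as fully random

guess_space_randomized = 2 ** (TXID_BITS + PORT_BITS)
guess_space_derandomized = 2 ** TXID_BITS  # port learned via side channel

print(guess_space_randomized // guess_space_derandomized)
# 65536
```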

https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-27f30x2265dax066003&


The day the icons vanished!

Lindsay Marshall <Lindsay.Marshall@newcastle.ac.uk>
Mon, 9 Nov 2020 16:31:25 +0000
Users of the RISKS.org website [at Newcastle] may have noticed that at some
point this weekend all the icons vanished. This turns out to be a classic
distributed-systems risk. The website (and several other sites of mine)
gets its icons from Font Awesome and was using their kit system, which is
just generally convenient. The RISKS website issues Content Security Policy
(CSP) headers as a security measure. During the weekend there must have
been an update to the code used in the Font Awesome kit, and it started
trying to do things that were forbidden by the CSP directives. Result: no
icons. I got them back by including an *unsafe-inline* directive, but I
really don't want to have to do that.

I contacted Font Awesome and they tell me that they are not going to
support CSP or SRI in their kits, as it would be too complex for most
users. I have to use the desktop subsetter to make a local copy of the
icons I need. But what if they get updated?

Ugh.
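  [Subresource Integrity would at least make such silent upstream updates
  fail loudly: the page pins a hash of the asset, and the browser refuses
  to load anything that no longer matches. A minimal sketch of computing
  the SRI value for a locally subsetted icon stylesheet (the CSS contents
  here are a placeholder):

```python
import base64
import hashlib

def sri_sha384(data: bytes) -> str:
    """Return the value for an integrity attribute, e.g.
    <link rel="stylesheet" href="icons.css" integrity="sha384-...">."""
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

css = b"/* locally subsetted Font Awesome CSS -- placeholder contents */"
print(sri_sha384(css))
```

  If the file is later updated, the hash no longer matches and the asset
  is blocked -- the same failure mode as the vanished icons, but by design
  rather than by surprise.]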


Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs (MIT News)

Shannon McElyea <shannonm@gmail.com>
November 9, 2020 at 1:53:28 PM GMT+9
  [Via Dewayne Hendricks <dewayne@warpspeed.com> and
  "David J. Farber" <farber@gmail.com>]

Jennifer Chu, MIT News Office, 29 Oct 2020

"The team is working on incorporating the model into a user-friendly app,
which if FDA-approved and adopted on a large scale could potentially be a
free, convenient, noninvasive prescreening tool to identify people who are
likely to be asymptomatic for Covid-19. A user could log in daily, cough
into their phone, and instantly get information on whether they might be
infected and therefore should confirm with a formal test."

https://news.mit.edu/2020/covid-19-cough-cellphone-detection-1029


Asymptomatic people who are infected with Covid-19 exhibit, by definition,
no discernible physical symptoms of the disease. They are thus less likely
to seek out testing for the virus, and could unknowingly spread the
infection to others.

But it seems those who are asymptomatic may not be entirely free of changes
wrought by the virus. MIT researchers have now found that people who are
asymptomatic may differ from healthy individuals in the way that they
cough. These differences are not decipherable to the human ear. But it turns
out that they can be picked up by artificial intelligence.

In a paper published recently in the IEEE Journal of Engineering in Medicine
and Biology, the team reports on an AI model that distinguishes asymptomatic
people from healthy individuals through forced-cough recordings, which
people voluntarily submitted through web browsers and devices such as
cellphones and laptops.

The researchers trained the model on tens of thousands of samples of coughs,
as well as spoken words. When they fed the model new cough recordings, it
accurately identified 98.5 percent of coughs from people who were confirmed
to have Covid-19, including 100 percent of coughs from asymptomatics—who
reported they did not have symptoms but had tested positive for the virus.

The team is working on incorporating the model into a user-friendly app,
which if FDA-approved and adopted on a large scale could potentially be a
free, convenient, noninvasive prescreening tool to identify people who are
likely to be asymptomatic for Covid-19. A user could log in daily, cough
into their phone, and instantly get information on whether they might be
infected and therefore should confirm with a formal test.

“The effective implementation of this group diagnostic tool could diminish
the spread of the pandemic if everyone uses it before going to a classroom,
a factory, or a restaurant,'' says co-author Brian Subirana, a research
scientist in MIT's Auto-ID Laboratory.  Subirana's co-authors are Jordi
Laguarta and Ferran Hueto, of MIT's Auto-ID Laboratory.

Vocal sentiments

Prior to the pandemic's onset, research groups already had been training
algorithms on cellphone recordings of coughs to accurately diagnose
conditions such as pneumonia and asthma. In similar fashion, the MIT team
was developing AI models to analyze forced-cough recordings to see if they
could detect signs of Alzheimer's, a disease associated with not only memory
decline but also neuromuscular degradation such as weakened vocal cords.

They first trained a general machine-learning algorithm, or neural network,
known as ResNet50, to discriminate sounds associated with different degrees
of vocal cord strength. Studies have shown that the quality of the sound
*mmmm* can be an indication of how weak or strong a person's vocal cords
are. Subirana trained the neural network on an audiobook dataset with more
than 1,000 hours of speech, to pick out the word *them* from other words
like *the* and *then*.

The team trained a second neural network to distinguish emotional states
evident in speech, because Alzheimer's patients—and people with
neurological decline more generally—have been shown to display certain
sentiments such as frustration, or having a flat affect, more frequently
than they express happiness or calm. The researchers developed a sentiment
speech classifier model by training it on a large dataset of actors
intonating emotional states, such as neutral, calm, happy, and sad.

The researchers then trained a third neural network on a database of coughs in order to discern changes in lung and respiratory performance.

Finally, the team combined all three models, and overlaid an algorithm to
detect muscular degradation. The algorithm does so by essentially simulating
an audio mask, or layer of noise, and distinguishing strong coughs—those
that can be heard over the noise—over weaker ones.

With their new AI framework, the team fed in audio recordings, including of
Alzheimer's patients, and found it could identify the Alzheimer's samples
better than existing models. The results showed that, together, vocal cord
strength, sentiment, lung and respiratory performance, and muscular
degradation were effective biomarkers for diagnosing the disease.

When the coronavirus pandemic began to unfold, Subirana wondered whether
their AI framework for Alzheimer's might also work for diagnosing Covid-19,
as there was growing evidence that infected patients experienced some
similar neurological symptoms such as temporary neuromuscular impairment.

“The sounds of talking and coughing are both influenced by the vocal cords
and surrounding organs. This means that when you talk, part of your talking
is like coughing, and vice versa. It also means that things we easily derive
from fluent speech, AI can pick up simply from coughs, including things like
the person's gender, mother tongue, or even emotional state. There's in fact
sentiment embedded in how you cough,'' Subirana says. “So we thought, why
don't we try these Alzheimer's biomarkers [to see if they're relevant] for
Covid.''

A striking similarity

In April, the team set out to collect as many recordings of coughs as they
could, including those from Covid-19 patients. They established a website
where people can record a series of coughs, through a cellphone or other
web-enabled device. Participants also fill out a survey of symptoms they are
experiencing, whether or not they have Covid-19, and whether they were
diagnosed through an official test, by a doctor's assessment of their
symptoms, or if they self-diagnosed. They also can note their gender,
geographical location, and native language.

To date, the researchers have collected more than 70,000 recordings, each
containing several coughs, amounting to some 200,000 forced-cough audio
samples, which Subirana says is “the largest research cough dataset that we
know of.'' Around 2,500 recordings were submitted by people who were
confirmed to have Covid-19, including those who were asymptomatic.

The team used the 2,500 Covid-associated recordings, along with 2,500 more
recordings that they randomly selected from the collection to balance the
dataset. They used 4,000 of these samples to train the AI model. The
remaining 1,000 recordings were then fed into the model to see if it could
accurately discern coughs from Covid patients versus healthy individuals.
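The sampling described above (2,500 positives, 2,500 randomly drawn
negatives, then a 4,000/1,000 train/test split) is a standard
class-balancing step. A toy sketch of that procedure, with made-up sample
IDs standing in for audio recordings:

```python
import random

def balanced_split(pos, neg_pool, train_frac=0.8, seed=0):
    """Draw as many negatives as there are positives, label and shuffle
    them, and split into train/test sets (mirrors the 2,500+2,500 and
    4,000/1,000 setup described in the article; data here is synthetic)."""
    rng = random.Random(seed)
    neg = rng.sample(neg_pool, len(pos))          # balance the classes
    data = [(x, 1) for x in pos] + [(x, 0) for x in neg]
    rng.shuffle(data)
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

train, test = balanced_split(list(range(2500)), list(range(100000)))
print(len(train), len(test))  # 4000 1000
```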

Surprisingly, as the researchers write in their paper, their efforts have
revealed “a striking similarity between Alzheimer's and Covid
discrimination.''

Without much tweaking within the AI framework originally meant for
Alzheimer's, they found it was able to pick up patterns in the four
biomarkers—vocal cord strength, sentiment, lung and respiratory
performance, and muscular degradation—that are specific to Covid-19. The
model identified 98.5 percent of coughs from people confirmed with Covid-19,
and of those, it accurately detected all of the asymptomatic coughs.
“We think this shows that the way you produce sound changes when you have
Covid, even if you're asymptomatic,'' Subirana says.

Asymptomatic symptoms

The AI model, Subirana stresses, is not meant to diagnose symptomatic
people, as far as whether their symptoms are due to Covid-19 or other
conditions like flu or asthma. The tool's strength lies in its ability to
discern asymptomatic coughs from healthy coughs.

The team is working with a company to develop a free pre-screening app based
on their AI model. They are also partnering with several hospitals around
the world to collect a larger, more diverse set of cough recordings, which
will help to train and strengthen the model's accuracy.

As they propose in their paper, “Pandemics could be a thing of the past if
pre-screening tools are always on in the background and constantly
improved.''

Ultimately, they envision that audio AI models like the one they've
developed may be incorporated into smart speakers and other listening
devices so that people can conveniently get an initial assessment of their
disease risk, perhaps on a daily basis.

This research was supported, in part, by Takeda Pharmaceutical Company
Limited.


CPU-Heat Sink Thermal Paste Effectiveness

Richard Stein <rmstein@ieee.org>
Thu, 12 Nov 2020 21:39:52 +0800
CPU manufacturers integrate self-preservation features to reduce meltdown
potential. Throttling—dynamic clock frequency scaling, and forced
power-shutdown are common techniques to prevent overheating or worse.
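That self-preservation loop can be sketched in a few lines (the threshold
temperatures, clock limits, and step sizes below are illustrative, not any
vendor's actual values):

```python
def next_clock(temp_c, clock_mhz, t_throttle=95.0, t_shutdown=105.0,
               step=100, base=800, top=3600):
    """Toy model of CPU self-preservation: step the clock down as the die
    approaches its throttle point, force power-off past the critical limit,
    and scale back up when there is thermal headroom."""
    if temp_c >= t_shutdown:
        return 0                                # forced power-shutdown
    if temp_c >= t_throttle:
        return max(base, clock_mhz - step)      # throttle down
    return min(top, clock_mhz + step)           # recover toward full speed
```

A real governor also weighs power draw, per-core sensors, and hysteresis,
but the shape of the control loop is the same.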

To dissipate CPU heat, the CPU package bonds to a heat sink to transfer
thermal energy into a cooling reservoir. Often, a set of fans roar to
dissipate heat. Some CPUs require liquid cooling (e.g., AMD Ryzen 7) for
compute-intensive applications (ray tracing, etc.).

An effective CPU-heat sink interface must be gap-free. Thermal paste
(https://en.wikipedia.org/wiki/Thermal_paste, retrieved on 12NOV2020) is
applied to create this bond. The paste conducts heat from the chip package
into the heat sink body, and consists of a colloidal metallic (silver)
suspension based on an epoxy or silicone-like gel.

There are several vendors of thermal paste products. Each is characterized
by distinct material properties: thermal conductivity, viscosity, dielectric
constant (lifetime), etc.  https://www.arctic.ac/en/MX-4/ACTCP00007B
(retrieved on 12NOV2020) references a specification for one thermal paste
product.

However unlikely, heat sink thermal paste "leakage" (heat transfer and
dissipation failure) would appear to rise with CPU power consumption
growth. Leakage at the wrong time in the wrong place may prove disastrous
for hosted applications.

Does a thermal paste leakage sensor exist? Would this sensor be
cost-effective to integrate into printed circuit boards and chassis
management?

With very fast clocks and certain CPU power-consumption profiles, thermal
paste leakage might pose a significant barrier for computer system
manufacturers, preventing completion of thermal qualification testing.


Re: Algorithmic or Human fairness?

Anthony Thorn <anthony.thorn@atss.ch>
Mon, 9 Nov 2020 12:04:00 +0100
I find Richard Stein's argument in RISKS-32.36 for "keeping humans in the
loop" to be one-sided.

Humans can also be unfair!
;-)  perhaps you had not noticed...

The commonly recommended mitigation of unfair algorithms is transparency,
which seems sensible.

"4 eyes" is a well-established practice and should be implemented for
critical decisions (human or algorithmic).


Re: UK national police computer down for 10 hours after engineer pulled the plug (RISKS-32.36)

John Hall <john@jhall.co.uk>
Mon, 9 Nov 2020 10:52:58 +0000
> But it reminded me of a joke that went around the Atlas computer lab in
> Manchester University (UK) in the late 1960s.

This joke originated in a famous SF short story by Fredric Brown called
"Answer", first published in 1954. The complete—very short—story can
be found online at http://www.roma1.infn.it/~anzel/answer.html

  [Also noted by Lars-Henrik Eriksson <lhe@it.uu.se> and
  Mark Brader <msb@Vex.Net>.  PGN]


Re: Whale Sculpture Stops Train From Plunge in the Netherlands (NYTimes)

Jan Wolitzky <jan.wolitzky@gmail.com>
Mon, 9 Nov 2020 05:25:22 -0500
It was only a fluke that the driver wasn't killed.

  [But "a fluke" is also a fish, which the whale is not.  PGN]


Re: Using AI to control a camera at a sports event—oops (RISKS-32.35)

Erling Kristiansen <erling.kristiansen@xs4all.nl>
Mon, 9 Nov 2020 17:59:34 +0100
[I nested two "oops" and accidentally deleted the first part of Erling's
message.  Here is the full message from RISKS-32.36.  PGN]

  This emphasizes an aspect that is often neglected in the AI hype: If
  presented with an input it was not programmed/trained to deal with, the
  result is unpredictable. In this particular case, no real harm was done,
  and we can laugh about it. But in other scenarios, the consequences can be
  grave. Despite the name, AI is not really intelligent at all, and, in
  particular, it is missing the context that would prevent a human camera
  operator from making such a mistake.


Re: Facial recognition used to identify Lafayette Square protester accused of assault (RISKS-32.36)

"John Levine" <johnl@iecc.com>
9 Nov 2020 17:35:18 -0500
> The protester might never have been identified, but an officer found an
> image of the man on Twitter and investigators fed it into a facial
> recognition system, court documents state. They found a match and made an
> arrest.

I have my doubts about the reliability of facial recognition, but it's worth
keeping in mind that there are two, arguably three, ways to use it.

One is the way they used it here—they have a single picture or a set of
pictures of one person, and they match it against a database to find out
who it is. I expect that once the system provided a match, they used other
means to check that it was the right person, e.g., does he live in the
area. This is analogous to flipping through books of mug shots.

A slightly different version of this is that you have two pictures and the
question is whether they are of the same person. I believe that Heathrow
airport does this. They take a picture of you and your ticket when you go
through security, and another picture as you get on the plane, to deter some
ticket switching scams.

A very different approach is that you have a big database of pictures of
people of interest, and you're constantly matching them against images from
cameras to see if any of them are in the area.

It seems to me that the first two are a lot less problematic and more
reliable than the third.
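The distinction maps onto two different operations over face embeddings:
1:1 verification (are these two images the same person?) versus 1:N
identification (who in a gallery, if anyone, matches?). A toy sketch using
hand-made 2-D "embeddings" and an arbitrary threshold; real systems use
learned high-dimensional vectors and calibrated thresholds:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.8):
    """1:1 -- do two images depict the same person?"""
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N -- best match in a gallery, or None. The chance of a false
    match grows with gallery size, which is one reason the third,
    surveillance-style use is the least reliable."""
    name, best = max(((n, cosine(probe, e)) for n, e in gallery.items()),
                     key=lambda t: t[1])
    return name if best >= threshold else None
```

The third mode is essentially this identification loop run continuously
over live camera frames, compounding the false-match risk with every query.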


Re: What It's Like to Stress-Test Berlin's Brand New, Much Maligned Airport (RISKS-32.36)

3daygoaty <threedaygoaty@gmail.com>
Tue, 10 Nov 2020 16:30:44 +1100
*cough cough* at CS school we had this paraded in front of us:

Why do projects fail:
http://calleam.com/WTPF/?page_id=2086


Re: Australian 300 MW battery (RISKS-32.36)

3daygoaty <threedaygoaty@gmail.com>
Tue, 10 Nov 2020 16:38:13 +1100
The Big Tesla Battery (100MWh, soon to be 150MWh) in South Australia made
all of its surprise money (AUD37m/y) addressing FCAS (Frequency Control
Ancillary Services) issues.  The South Australia Big Battery can address
FCAS issues in Port Douglas, 2,500km away and further.  The efficacy of
another big battery expected to make money on FCAS in Australia is limited.
A 300MW (450MWh) battery made with lithium has a large carbon debt to pay
down before anyone can say it is sustainable.  Australia needs to bite the
bullet and realise (as with the COVID response) that demand-side *behaviour
change* is the way, not supply-side white elephantiasis.


Risk assessment: still high

Rob Slade <rmslade@shaw.ca>
Thu, 12 Nov 2020 09:39:59 -0800
The *very first* Caribbean cruise following the declaration of the pandemic
has a CoVID scare.  Despite testing in advance and just before boarding, one
passenger has had a preliminary positive test during one of the regularly
scheduled tests while cruising.  Apparently this preliminary positive hasn't
yet been confirmed.

https://lite.cnn.com/en/article/h_d22fe985f2a974bf129aa1e5b4459476


Working Group on Infodemics Policy Framework, Nov. 2020

Rob Slade <rslade@gmail.com>
Thu, 12 Nov 2020 10:20:49 -0800
Reported by some as a set of guidelines for regulating social media
https://www.bbc.com/news/technology-54901083 the policy framework that has
been released by the Working Group on Infodemics
https://informationdemocracy.org/working-groups/concrete-solutions-against-the-infodemic/
is something many of us should be examining, and possibly critiquing.  The
policy framework itself can be found at:
https://informationdemocracy.org/wp-content/uploads/2020/11/ForumID_Report-on-infodemics_101120.pdf

The working group is supported by 38 countries, so this framework will
likely have wide currency and impact.  Looking at the composition of the
working group is interesting.  The majority are *NOT* technical people, but
those from political or media backgrounds.  It is good that techies aren't
the only ones involved, but the lack of a strong technical background may
show in the limited ability to implement some of the major recommendations.

The report itself is 128 pages long, but the twelve main recommendations
(divided into four categories) are listed on pages 14 and 15.  They are:

PUBLIC REGULATION IS NEEDED TO IMPOSE TRANSPARENCY REQUIREMENTS ON ONLINE
SERVICE PROVIDERS.

1. Transparency requirements should relate to all platforms' core functions
in the public information ecosystem: content moderation, content ranking,
content targeting, and social influence building.
2. Regulators in charge of enforcing transparency requirements should have
strong democratic oversight and audit processes.
3. Sanctions for non-compliance could include large fines, mandatory
publicity in the form of banners, liability of the CEO, and administrative
sanctions such as closing access to a country's market.

A NEW MODEL OF META-REGULATION WITH REGARDS TO CONTENT MODERATION IS
REQUIRED.

4. Platforms should follow a set of Human Rights Principles for Content
Moderation based on international human rights law: legality, necessity and
proportionality, legitimacy, equality and non-discrimination.
5. Platforms should assume the same kinds of obligation in terms of
pluralism that broadcasters have in the different jurisdictions where they
operate. An example would be the voluntary fairness doctrine.
6. Platforms should expand the number of moderators and spend a minimal
percentage of their income to improve quality of content review, and
particularly, in at-risk countries.

NEW APPROACHES TO THE DESIGN OF PLATFORMS HAVE TO BE INITIATED.

7. Safety and quality standards of digital architecture and software
engineering should be enforced by a Digital Standards Enforcement Agency.
The Forum on Information and Democracy could launch a feasibility study on
how such an agency would operate.
8. Conflicts of interests of platforms should be prohibited, in order to
avoid the information and communication space being governed or influenced
by commercial, political or any other interests.
9. A co-regulatory framework for the promotion of public interest
journalistic contents should be defined, based on self-regulatory standards
such as the Journalism Trust Initiative; friction to slow down the spread
of potentially harmful viral content should be added.

SAFEGUARDS SHOULD BE ESTABLISHED IN CLOSED MESSAGING SERVICES WHEN THEY
ENTER INTO A PUBLIC SPACE LOGIC.

10. Measures that limit the virality of misleading content should be
implemented through limitations of some functionalities; opt-in features to
receive group messages, and measures to combat bulk messaging and automated
behavior.
11. Online service providers should be required to better inform users
regarding the origin of the messages they receive, especially by labeling
those which have been forwarded.
12. Notification mechanisms of illegal content by users, and appeal
mechanisms for users that were banned from services should be reinforced.
