The RISKS Digest
Volume 30 Issue 74

Thursday, 5th July 2018

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Cyber-researchers Don't Think Feds or Congress Can Protect Against Cyberattacks
Defense One
Babylon claims its chatbot beats GPs at medical exam
bbc.com
Medical device security: Hacking prevention measures
HPE
Exactis said to expose 340-million records, more than Equifax breach
CNET
Supreme Court requires warrant for cellphone location data
Henry Baker
ICE hacked its algorithmic risk-assessment tool, so it recommended detention for everyone
BoingBoing
Energy company vulnerability allows access to customer accounts
Donald Mackie
Internet TV firmware update/soft powerswitch failure
Richard M Stein
Widespread Google Home outage: What NOT to do!
Lauren Weinstein
Cruel pranksters made NYC Internet kiosks play ice-cream truck tunes
Engadget
Swann home security camera sends video to wrong user
BBC
Hidden Microsoft Office 365 data gathering
LMG Security
Protecting civilians in cyberspace
Just Security
Rash of Fortnite cheaters infected by malware that breaks HTTPS encryption
Ars Technica
Really dumb malware targets cryptocurrency fans using Macs
Ars Technica
Sony Blunders By Uploading Full Movie to YouTube Instead of Trailer
TorrentFreak
Homeland Security subpoenas Twitter for data breach finder's account
Zack Whittaker
Wikipedia Italy Blocks All Articles in Protest of EU's Ruinous Copyright Proposals
Gizmodo
How a Major Computer Crash Showed the Vulnerabilities of EHRs
Medscape via Fr. Stevan Bauman
Apple 'Family Sharing' feature used by scammers to make purchases with hacked Apple IDs
Business Insider
Trump administration tells FCC to block China Mobile from U.S.
Corinne Reichert
Google is training machines to predict when a patient will die
Los Angeles Times
So What The Heck Does 5G Actually Do? And Is It Worth What The Carriers Are Demanding?
Harold Feld
Leaks, riots, and monocles: How a $60 in-game item almost destroyed EVE Online
Ars Technica
Gaming disorder is only a symptom of a much larger problem
WaPo
Ticketmaster: How not to manage customers after a data breach.
Michael Kent
Re: Police, Law Enforcement, and corporate use of facial recognition and facial images in court
Kelly Bert Manning
Re: Florida skips gun background checks for a year after employee
Kelly Bert Manning
Info on RISKS (comp.risks)

Cyber-researchers Don't Think Feds or Congress Can Protect Against Cyberattacks (Defense One)

Peter G Neumann <neumann@csl.sri.com>
Wed, 27 Jun 2018 20:52:12 PDT
http://www.defenseone.com/threats/2018/06/cyber-researchers-dont-think-feds-or-congress-can-protect-against-cyberattacks/149289/

Quite evidently, the U.S. government has little clue about defending itself
against cybersecurity attacks, and is consequently unprepared for any
digital disasters.


Babylon claims its chatbot beats GPs at medical exam (bbc.com)

Richard M Stein <rmstein@ieee.org>
Sat, 30 Jun 2018 10:28:47 +0800
[od -c output attached for peace of mind]

http://www.bbc.com/news/technology-44635134

  “Claims that a chatbot can diagnose medical conditions as accurately as a
  GP have sparked a row between the software's creators and UK doctors.''

Babylon's chatbot claims to out-achieve carbon-based physicians on the
UK MRCGP (Membership of the Royal College of General Practitioners)
examination. Babylon advocates its AI platform as a complement to a
physician's judgment, not as a wholesale replacement.

  “Babylon said that the first time its AI sat the exam, it achieved a
  score of 81%.  It added that the average mark for human doctors was 72%,
  based on results logged between 2012 and 2017.  But the RCGP said it had
  not provided Babylon with the test's questions and had no way to verify
  the claim.''

Given commercial aspirations, and the skyward trajectory of health-care
service-delivery costs, attempts to capitalize on “cost-effective'' AI-based
alternatives are likely. Favorable legislation and weak regulatory oversight
will induce businesses to pursue them despite the potential public-health
risks.

A randomized controlled trial must be performed. Any business that promotes and
sells these AI diagnosis/treatment services must be required to enroll their
own employees and immediate family members as participants. The trial
outcome reviewers must be free from conflict of interest.
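A back-of-the-envelope check illustrates why independent verification matters: whether the quoted 81% vs. 72% gap means anything depends on how many questions the exam has, a figure the article does not give. A minimal sketch using a standard two-proportion z-test (the 100-question exam length below is an assumption for illustration only):

```python
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-proportion z-test: is the difference between two pass rates
    larger than sampling noise would explain?"""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: a 100-question exam for both the AI and the human average.
z = two_proportion_z(0.81, 100, 0.72, 100)
print(f"z = {z:.2f}")  # values below 1.96 are not significant at the 5% level
```

Under that assumption the difference is not statistically significant, which is one more reason the RCGP's inability to verify the claim matters.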


Medical device security: Hacking prevention measures (HPE)

Gabe Goldberg <gabe@gabegold.com>
Mon, 2 Jul 2018 15:37:04 -0400
With so many lives at stake, computer scientists and healthcare IT pros are
motivated to develop strategies that keep patients safe from medical device
hackers. They're making progress.

http://www.hpe.com/us/en/insights/articles/medical-device-security-hacking-prevention-measures-1806.html


Exactis said to expose 340-million records, more than Equifax breach (CNET)

Richard Forno <rforno@infowarrior.org>
Wed, 27 Jun 2018 18:50:14 -0400
http://www.cnet.com/news/exactis-340-million-people-may-have-been-exposed-in-bigger-breach-than-equifax/

We hadn't heard of the firm either, but it had data on hundreds of millions
of Americans and businesses and leaked it, according to Wired.

Abrar Al-Heeti
June 27, 2018 2:14 PM PDT

If you're a US citizen, your personal information—your phone number, home
address, email address, even how many children you have—may have just
become easily available to hackers in an alleged massive data leak.

Florida-based marketing and data aggregation firm Exactis exposed a database
containing nearly 340 million individual records on a publicly accessible
server, Wired reported. Earlier this month, security researcher Vinny Troia
found that nearly 2 terabytes of data was exposed, which seems to include
personal information on hundreds of millions of US adults and millions of
businesses, the report said.

“It seems like this is a database with pretty much every US citizen in it,''
Troia told Wired.

Exactis didn't immediately respond to a request for comment or confirmation.

The alleged breach reportedly exposed highly personal information, such as
people's phone numbers, home and email addresses, interests and the number,
age and gender of their children. Credit card information and Social
Security numbers don't appear to have been leaked. Troia told Wired that he
doesn't know where the data is coming from, “but it's one of the most
comprehensive collections I've ever seen.''

Because Exactis hasn't confirmed the leak, it's hard to know exactly how
many people are affected. But Troia found two versions of the database that
each had around 340 million records, with roughly 230 million on consumers
and 110 million on business contacts, according to Wired. Exactis says on
its website that it has over 3.5 billion consumer, business and digital
records.

The data leak is noteworthy not only for its breadth, but also for the depth
of information the records have on people. Every record reportedly has
entries that include more than 400 variables on characteristics like whether
the person smokes, what their religion is and whether they have dogs or
cats. But Wired noted that in some instances, the information is inaccurate
or outdated.

Just because people's financial information or Social Security numbers
weren't leaked doesn't mean they're not at risk for identity theft. The
amount of personal information that was exposed could still help scammers
impersonate or profile them.

Huge compromises to personal information have been making headlines
lately. In 2017, Equifax was involved in a massive data breach of 145.5
million people's data. And in October, Yahoo revealed that all 3 billion
accounts were hacked in a 2013 breach.


Supreme Court requires warrant for cellphone location data

Henry Baker <hbaker1@pipeline.com>
Mon, 25 Jun 2018 07:09:23 -0700
Nice to see that the “Third Party Doctrine''—which gave the govt “most
favored nation status'' w.r.t. your data—is finally being chipped away.

However, as this law professor points out, this decision will have little
practical effect.

[Sorry for the length of this posting, but every point is salient.]

http://www.vox.com/the-big-idea/2018/6/22/17493632/carpenter-supreme-court-privacy-digital-cell-phone-location-fourth-amendment

The latest Supreme Court decision is being hailed as a big victory for
digital privacy.  It's not.

Carpenter forces police to get a warrant before getting some cellphone
data.  But other Fourth Amendment cases will undermine its impact.

By Aziz Huq Updated Jun 23, 2018, 7:43am EDT

Congratulations—a closely divided US Supreme Court has just ruled in
Carpenter v. United States that you have a constitutional right to privacy
in the locational records produced by your cellphone use.  Law enforcement
now cannot ask Sprint, AT&T, or Verizon, for cell tower records that reveal
your whereabouts through your phone's interaction with those towers, at
least without a warrant.

Carpenter builds on two earlier decisions.  In 2012, the Court required a
warrant before police placed a GPS tracker on a vehicle to track its
movements.  In 2014, it forbade warrantless searches of cellphones during
arrests.  Whatever its other flaws, the Roberts Court thus seems to
understand electronic privacy's importance.

But there are a couple of things to know before toasting the Court's high
regard for privacy in the digital age.  The Roberts Court, building on what
the preceding Rehnquist Court did, has created an infrastructure for Fourth
Amendment law that makes it exceptionally easy for police to do a search,
even when a warrant is required.  The law also makes it exceptionally
difficult for citizens to obtain close judicial oversight, even when the
police have violated the Constitution.  As a result of these background
rules, even a decision as seemingly important as Carpenter is unlikely to
have any dramatic effect on police practices.

It's not just that our digital privacy is insufficiently protected, in other
words.  It's that our Fourth Amendment rights and remedies in general have
been eroded.  Once enough holes have been poked in the general system for
vindicating Fourth Amendment interests, the decision to extend Fourth
Amendment coverage to a new domain—such as cell-site locational data—is
just not terribly significant.

Timothy Ivory Carpenter had been convicted of nine armed robberies based on
witness testimony, but the prosecution also stressed in its closing argument
records obtained from his cellphone company.  Those records showed how
Carpenter's phone interacted with the cell phone towers that carried its
signal.  As Chief Justice Roberts emphasized, the records painted a detailed
picture of Carpenter's movements over 127 days.

Yet the government did not use a warrant based on probable cause to obtain
those cell-site records, relying instead on a statute called the Stored
Communications Act.

Forcing police to get a warrant is not much of a protection these days

Consider first the core constitutional protection on which Chief Justice
Roberts's opinion in Carpenter hinged—the requirement of a warrant based
on probable cause from a judge before the police can acquire cell-site
records that allow for detailed physical tracking of suspects' movements.

From now on, the police will usually have to get a warrant before seeking
such information.  But that offers limited protection.  One reason: In other
Fourth Amendment cases, the Court has held that it is not just life-tenured
federal judges who can issue warrants.  A warrant can also be obtained from
a range of other officials, including municipal court clerks who have no law
training and no tenure protection.  Such clerical staff lack the skills and
incentives to examine warrant applications closely to determine compliance
with the law.  Still, they are allowed to issue warrants.

Even where there are no such court clerks, it is well known that police and
prosecutors go “judge shopping'' when a physical search or arrest is in
play.  Judges have varying reputations for being more or less careful in
scrutinizing warrant applications.  It is often well known which judges in a
city or courthouse are more or less scrupulous.  When police have a weak
warrant application, they have a strong incentive to avoid judges who will
give it a close read.

These weaknesses in the warrant regime for physical searches or arrests are
exacerbated when electronic data is at issue.  Warrant applications for cell
tower records often rest on technical details about the geographic and
temporal scope of the search.  These applications might in theory seek a
quite varied range of information, including the target's location, the
number of calls he made, and the manner in which he used apps.

Review of the application will also require fine judgments about when
information can be shared with other law enforcement agencies and government
officials.  Just because a prosecutor can obtain electronic data, for
example, that surely doesn't mean she can hand it over to, say, a political
appointee in the White House or a Department of Transportation employee who
happens to be the subject's boyfriend.

Because close scrutiny by an experienced and independent judge has become so
easy to avoid, there is no guarantee these questions will get
careful and independent consideration—even if a warrant is sought and
issued consistent with the main holding of Carpenter.

The hurdle of “probable cause'' has also been steadily lowered

Assume that police are before a scrupulous judge.  Even then, the background
Fourth Amendment rules mean that they have a light burden to bear.  As Chief
Justice Roberts's opinion today stresses, a warrant can be issued only based
on “probable cause.''  But in a series of earlier cases about physical
searches, the Court has winnowed down the “probable cause'' requirement to
the showing of a mere “fair probability'' that evidence of a crime will be
found.

This “fair probability'' requirement has become easier to satisfy in recent
decades because federal and state legislatures have created sweeping
penalties for conspiracies to commit crimes and for accomplices.

Showing a “fair probability'' of a conspiracy to commit a crime is not
difficult.  Under federal law, for example, a criminal conspiracy
exists if there's an agreement to commit any criminal act in the
future, and one step—even a lawful one—taken to that end.  In
one case, for example, a Google search served as the “overt act'' for
an elaborate conspiracy charge, even in the absence of evidence of
actual planned criminal conduct.

This sweeping definition of criminal liability interacts with the weak
“probable cause'' rule.  Police need only show a “fair probability''
that a single lawful action has been taken in relation to a criminal
agreement, and they are entitled to a warrant.  This is not hard to
do.

This problem is pervasive across Fourth Amendment law.  But it has
particular significance to cell-site locational data.  Such data maps
the movements of a group of people—precisely the evidence that is
routinely relevant to conspiracy charges.  So with a conspiracy theory
in hand, it will often be very easy for the police to meet the
(exceedingly weak) probable cause standard.

Would a warrant requirement have made a practical difference in
Carpenter's case?

In Carpenter's case, investigators had a confession from one of the
participants in the string of armed robberies.  They also had the cell
numbers of other participants, including Carpenter's.  These two
pieces of information would almost certainly have been enough to allow
the government to get a warrant on a conspiracy theory of probable
cause.

But imagine that the investigator couldn't even pull together evidence
showing probable cause of a conspiracy.  Imagine that they instead
play fast and loose with the contents of the warrant application.  For
example, the application might rest on some dubious evidence, and the
investigator might consciously choose not to confirm its accuracy.
Once charges have been filed, could a defendant get the locational
data thrown out on the grounds that the warrant application was based
on false pretenses?

Once again, general Fourth Amendment law makes this possible in theory
but unlikely in practice.  To get evidence acquired by a warrant
tossed out of court, a defendant must show that an investigator acted
with “reckless disregard'' in preparing a warrant application.  In most
states and in federal court, there is no rule that permits the
defendant to examine police or prosecutor records.  Hence, the
defendant often must make this recklessness showing without any
documentary evidence of what the police did.

It is therefore usually practically impossible for most defendants to
challenge flawed search warrants.  Again, warrants for electronic data
are no different.

Even if a defendant succeeds in getting a warrant quashed, moreover,
the Supreme Court has said that a reviewing court of appeals must look
again at the warrant—now placing a thumb on the scales in favor of
the investigating officer.  In effect, when the government loses the
rare case in which a defendant can show a warrant to be flawed, it
gets a second chance to have the warrant restored by a court of
appeals.

Prosecutors can use illicitly obtained information if a suspect
testifies

Still, lean your imagination into the wind to imagine a defendant who
has overcome all these constraints, and had a warrant quashed.  The
evidence from that flawed warrant can still be introduced at trial if
the defendant chooses to testify.  The Supreme Court established that
rule in the 1971 case of Harris v. New York, on the grounds that if
the defendant could give testimony, the government had the concomitant
right to undermine it by whatever information was in its hands.

As a result, even when the government has illegally acquired evidence,
its possession of that evidence creates a strong incentive for
defendants not to take the stand.  Needless to say, this will often
make the prosecutor's job easier.

If a defendant chooses not to testify, that is still not the end of
the story.  The government can also argue that information gathered
unlawfully without a warrant should be admitted because there was an
emergency.  Chief Justice Roberts explicitly carved out an emergency
exception in his Carpenter opinion, citing the possibility of “bomb
threats, active shootings, and child abductions.''  In such cases, no
warrant is required.

Also, if the locational data was acquired without a warrant before
Carpenter was decided, the Court held that it need not be kept out.
Carpenter hence helps no one whose cell-site locational data was
acquired before this week.  And the Carpenter opinion also leaves open
the possibility that police can acquire less than seven days of
cell-site data without a warrant.

Are there other paths for redress?  Someone in Carpenter's shoes,
whose Fourth Amendment rights have been violated, can technically sue
the police for damages even if they are not charged with a criminal
offense.  The problem is that the Court has almost completely
squelched the availability of damages for most constitutional wrongs,
including the Fourth Amendment, through a series of technical
anti-plaintiff rules.

In short, the legal framework of Fourth Amendment remedies has been
riddled with so many exceptions and loopholes that Carpenter's holding
that a warrant is required to acquire cell-site locational data is
likely to impose no great burden on the police.

If police can't get the information through cellphone companies, they
will turn up the heat on suspects

But the facts around the electronic data in Carpenter make the Court's
holding especially hollow.  Locational data is held not only by the
telephone company.  It is also contained on a person's phone, even if
she chooses to disable locational tracking.  (Certain apps can track
locational data produced by a phone's internal sensors without the
owner's knowledge or permission.)  This data is generally accurate to
a foot or so.

Police can thus acquire location data—and much more—if they ask
for consent to examine a phone.  Extensive psychological research
shows that most of the time—especially if the suspect is a woman or
a racial minority—suspects are likely to say yes.

General Fourth Amendment law says police can seek consent to make a
search.  In the physical search context, the Court has consistently
ignored the fact that people often feel they have no choice but to
acquiesce.

Consider the leading Supreme Court case on consent searches, United
States v. Drayton.  Two men are traveling by bus in Florida, when
police board the bus and question passengers about their trip.  The
first man is asked to “consent'' to a pat down.  He does—and the
officer finds blocks of cocaine taped to his groin.  After this first
man is led away in handcuffs, the officer turns to his traveling
companion and says, “Mind if I check you?''  The second man agrees.
Drugs are found in exactly the same spot on his body.  The Supreme
Court holds that he consented to the search.

My students, encountering Drayton for the first time, often have a
moment of cognitive dissonance.  Why, they wonder, did the suspect
consent after he saw what happened to his friend?  When I point out
that both men were racial minorities in a jurisdiction with a history
of police violence, and that neither was highly educated nor socially
privileged, then the facts start to make more sense.

Ironically, the Carpenter decision makes it more likely that police
will aggressively exploit the weaknesses of the Court's consent
case-law.  By making it slightly more hassle to obtain cell-site
locational data from a telephone company, the Court has encouraged
police to exploit the frailty of its consent doctrine.  That is, by
making it harder to acquire electronic data from a third party, the
Court has nudged police toward more forceful and unpleasant
confrontations with citizens by which “consent'' can be secured.

This should not count as a “success'' for Fourth Amendment freedoms.

Electronic privacy rests on the rules and remedies that apply to the
Fourth Amendment generally.  In the past 40 years, those rules and
remedies have been substantially eroded by a Court unwilling to
constrain police.

The result today is that even when a decision endorses Fourth
Amendment protection—and requires a warrant, as in Carpenter—that
protection is easy to avoid, and likely ineffectual in practice.

Aziz Huq is the Frank and Bernice J. Greenberg professor of law at the
University of Chicago Law School.


ICE hacked its algorithmic risk-assessment tool, so it recommended detention for everyone (BoingBoing)

Richard Forno <rforno@infowarrior.org>
June 27, 2018 at 08:09:05 GMT+9
http://boingboing.net/2018/06/26/software-formalities.html

One of the more fascinating and horrible details in Reuters' thoroughly
fascinating and horrible long-form report on Trump's cruel border policies
is this nugget: ICE hacked the risk-assessment tool it used to decide whom
to imprison so that it recommended that everyone should be detained.

This gave ICE a kind of empirical facewash for its racist and inhumane
policies: they could claim that the computer forced them to imprison people
by identifying them as high-risk. The policy let ICE triple its detention
rate, imprisoning 43,000 people.

http://boingboing.net/2018/06/26/software-formalities.html


Energy company vulnerability allows access to customer accounts

Donald Mackie <donald@iconz.co.nz>
Sat, 30 Jun 2018 08:01:30 +0930
According to this story a customer alerted the company in November 2017.
What is interesting is the pace and incompleteness of response, lack of
information to customers and time for a complete fix.

Apart from the (sadly) routine nature of the vulnerability story here, one
of the risks I see is that of testing inherited legacy systems in company
handovers/changes. A governance and due diligence question.

http://www.stuff.co.nz/national/stuff-circuit/105039080/z-energy-security-beach-admitted-as-ceo-fronts-and-apologises

Cue A-Z of system security joke.


Internet TV firmware update/soft powerswitch failure

Richard M Stein <rmstein@ieee.org>
Wed, 27 Jun 2018 19:18:41 -0700
While on vacation the home we rented was equipped with all manner of
Internet of mistakes devices, including an Internet-connected television.

At 0200 one morning, it switched on suddenly. Apparently, the owners—out
of convenience or pure ignorance—elected for automatic firmware updates.

The family was startled: the flash-memory save and reboot had boosted the
volume, and the pre-update off state was not restored. The line-of-sight
TV controls remained operative.

Although this specific TV can auto-detect user inactivity after a fixed
duration, or an extended loss of input signal, I cannot help wondering
what would have happened if the upgrade had bricked these soft switches,
or if it had carried a “thermal runaway'' virus maliciously designed to
ignite the unit.


Widespread Google Home outage: What NOT to do!

Lauren Weinstein <lauren@vortex.com>
Wed, 27 Jun 2018 12:17:44 -0700
via NNSquad

There is apparently a widespread—possibly global—Google Home
outage. However, not all units are affected. Some of my units here are
down, at least one is up. The down units act as if they were factory
reset and tell you to download the Home app. My recommendation is to
NOT do so! DON'T CHANGE ANYTHING! Give Google time to deal with this
from the server side.


Cruel pranksters made NYC Internet kiosks play ice-cream truck tunes (Engadget)

Monty Solomon <monty@roscom.com>
Wed, 4 Jul 2018 09:13:06 -0400
http://www.engadget.com/2018/07/03/linknyc-ice-cream-music-prank/


Swann home security camera sends video to wrong user (BBC)

Michael Marking <marking@tatanka.com>
Wed, 27 Jun 2018 19:09:58 +0000
http://www.bbc.co.uk/news/technology-44628399

  A leading security camera-maker has sent footage from inside a
  family's home to the wrong person's app.

  Swann Security has blamed a factory error for the data breach—which
  was brought to its attention by the BBC—and said it was a
  “one-off'' incident.  However, last month another customer reported a
  similar problem saying his version of the same app had received
  footage from a pub's CCTV system.  Swann said it was attempting to
  recover the kit involved in this second case. [...]

  The BBC first learned of the problem on Saturday, when a member of
  its staff began receiving motion-triggered video clips from an
  unknown family's kitchen.  Until that point, Louisa Lewis had only
  received footage from her own Swann security camera, which she had
  been using since December.  The development coincided with Ms
  Lewis's camera running out of battery power and requiring a
  recharge. [...]

  A Swann customer representative told Ms Lewis that nothing could be
  done until after the weekend.  And it was only after the matter was
  flagged to the firm's PR agency on Monday that she stopped receiving
  video clips. [...]

Even if this were a factory error, the system shouldn't have failed
absent multiple errors: the design of the manufacturing process, even
given active quality control, should not have been dependent on a
single point of failure. Most important, this failure mode should not
have been possible (ok, the likelihood shouldn't have been anything
but vanishingly small).

Moreover, this seems to have happened more than once.

Designers of these systems should, at least as an exercise, treat
manufacturers, distributors, retailers, and even users as potentially
hostile entities. This is especially true since firms are highly
unlikely to have ownership and control of the entire chain of
operations. For example, the credentials for one unit might
accidentally be swapped by a retailer for those of another. Gross
negligence is a form of hostility, and it is grossly negligent to
assume the absence of human error.
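One way to make the "hostile retailer" exercise concrete is to bind each stream to a key established at pairing time by the user's own app, so that a factory or retail mix-up of credentials yields clips the app simply rejects. A sketch under those assumptions (the pairing flow and function names are hypothetical, not Swann's design):

```python
import hashlib
import hmac
import secrets

def pair_camera() -> bytes:
    """Pairing: the user's app generates a fresh per-device secret, rather
    than trusting a factory-assigned credential that a production error
    could duplicate across units."""
    return secrets.token_bytes(32)

def sign_clip(device_key: bytes, clip: bytes) -> str:
    """Camera side: authenticate each clip under its own pairing key."""
    return hmac.new(device_key, clip, hashlib.sha256).hexdigest()

def app_accepts(paired_key: bytes, clip: bytes, tag: str) -> bool:
    """App side: drop any clip not authenticated by *this* camera's key."""
    expected = sign_clip(paired_key, clip)
    return hmac.compare_digest(expected, tag)
```

A clip signed under any other unit's key fails verification, so duplicated factory credentials degrade into a visible pairing error instead of a silent privacy breach.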


Hidden Microsoft Office 365 data gathering (LMG Security)

Peter Houppermans <peter@houppermans.net>
Thu, 28 Jun 2018 12:08:17 +0200
I came across this interesting post:

http://lmgsecurity.com/exposing-the-secret-office-365-forensics-tool/

Extract: “An ethical crisis in the digital forensics industry came to a
head last week with the release of new details on Microsoft's undocumented
`Activities' API. A previously unknown trove of access and activity logs
held by Microsoft allows investigators to track Office 365 mailbox activity
in minute detail. Following a long period of mystery and rumors about the
existence of such a tool, the details finally emerged, thanks to a video by
Anonymous and follow-up research by CrowdStrike.

Now, investigators have access to a stockpile of granular activity data
going back six months—even if audit logging was not enabled. For victims
of Business Email Compromise (BEC), this is huge news, because investigators
are now far more likely to be able to `rule out' unauthorized access to
specific emails and attachments.''

Maybe I'm just picky, but I like to know what software is logging what
activity, due to compliance and confidentiality needs.  The two are
frequently in conflict, so precision is essential.

From a privacy perspective, it appears it's time to revert to parchment,
quill, and ink.


Protecting civilians in cyberspace (Just Security)

Rob Slade <rmslade@shaw.ca>
Tue, 3 Jul 2018 18:24:27 -0700
Over the years, we've explored (and often shied away from) the idea of
infosec pros as a kind of military or police force, protecting the general
public from the digital/cybersecurity bad guys.

So I find this article on protecting civilians in cyberspace, seemingly by
people outside the traditional infosec community, quite interesting.  The
emphasis seems to be on human rights, rather than general computer use, but
there are some intriguing ideas just the same.

http://www.justsecurity.org/58838/protecting-civilians-cyberspace-ideas-road/

  [Interesting name.  It is Never *Just* Security, as (1) it is often
  something else as well, and (2) Security is never Just.  PGN]


Rash of Fortnite cheaters infected by malware that breaks HTTPS encryption (Ars Technica)

Monty Solomon <monty@roscom.com>
Tue, 3 Jul 2018 20:27:56 -0400
Malware can read, intercept, or tamper with the traffic of any
HTTPS-protected site.

http://arstechnica.com/information-technology/2018/07/rash-of-fortnite-cheaters-infected-by-malware-that-breaks-https-encryption/


Really dumb malware targets cryptocurrency fans using Macs (Ars Technica)

Monty Solomon <monty@roscom.com>
Tue, 3 Jul 2018 20:26:39 -0400
A command spread through Slack and Discord channels to cryptocurrency users
is a trap.

http://arstechnica.com/information-technology/2018/07/really-dumb-malware-targets-cryptocurrency-fans-using-macs/


Sony Blunders By Uploading Full Movie to YouTube Instead of Trailer (TorrentFreak)

Gabe Goldberg <gabe@gabegold.com>
Tue, 3 Jul 2018 17:43:12 -0400
Sony Pictures Entertainment's movie `Khali the Killer' is on release in the
United States and, as is customary, a trailer has been uploaded to YouTube.
However, on closer inspection, it appears that Sony uploaded the entire
movie in error. Oops.

http://torrentfreak.com/sony-blunders-uploading-full-movie-youtube-instead-trailer-180703/
The price is right...

  [Monty Solomon noted this item:
http://arstechnica.com/gaming/2018/07/sony-tries-to-upload-movie-trailer-to-youtube-posts-entire-movie-instead/
  PGN]


Homeland Security subpoenas Twitter for data breach finder's account (Zack Whittaker)

Gabe Goldberg <gabe@gabegold.com>
Mon, 2 Jul 2018 15:04:13 -0400
Zack Whittaker for Zero Day | 2 Jul 2018
http://www.zdnet.com/article/homeland-security-subpoenas-twitter-for-data-breach-finders-account/

Homeland Security has served Twitter with a subpoena, demanding the account
information of a data breach finder, credited with finding several large
caches of exposed and leaking data.

The New Zealand national, whose name isn't known but goes by the handle
Flash Gordon, revealed the subpoena in a tweet last month.

Also: Homeland Security's own IT security is a hot mess, watchdog finds

The pseudonymous data breach finder regularly tweets about leaked data found
on exposed and unprotected servers. Last year, he found a trove of almost a
million patients' data leaking from a medical telemarketing firm. A recent
find included an exposed cache of law enforcement data by ALERRT, a Texas
State University-based organization, which trains police and civilians
against active shooters. The database, secured in March but reported last
week, revealed that several police departments were under-resourced and
unable to respond to active shooter situations.

Homeland Security's export control agency, Immigration and Customs
Enforcement (ICE), served the subpoena to Twitter on April 24, demanding
information about the data breach finder's account.

  [Also noted by Gene Wirchenko. PGN]


Wikipedia Italy Blocks All Articles in Protest of EU's Ruinous Copyright Proposals (Gizmodo)

Lauren Weinstein <lauren@vortex.com>
Tue, 3 Jul 2018 08:39:56 -0700
NNSquad
http://gizmodo.com/wikipedia-italy-blocks-all-articles-in-protest-of-eus-r-1827312550

  On Tuesday, Wikipedia Italy set all of its pages to redirect to a
  statement raising awareness for the upcoming vote that (barring some
  legislative wrangling) would make the copyright directive law. The
  statement reads, in part (emphasis theirs): On July 5, 2018, The Plenary
  of the European Parliament will vote whether to proceed with a copyright
  directive proposal which, if approved, will significantly harm the
  openness of the Internet.  The directive instead of updating the copyright
  laws in Europe and promoting the participation of all the citizens to the
  society of information, threatens online freedom and creates obstacles to
  accessing the Web, imposing new barriers, filters and restrictions. If the
  proposal would be approved in its current form, it could be impossible to
  share a news article on social networks, or find it through a search
  engine; Wikipedia itself would be at risk.

Just a taste of what's coming to European Internet users if those laws
are enacted.


How a Major Computer Crash Showed the Vulnerabilities of EHRs (Medscape)

“Fr. Stevan Bauman'' <fatherstevan@indy.net>
Mon, 2 Jul 2018 23:45:55 -0400
Marcia Frellick, Medscape, 14 Jun 2018
http://www.medscape.com/viewarticle/898065%3Fsrc%3DWNL_infoc_180627_MSCPEDIT_hospmed%26uac%3D64984BJ%26impID%3D1667063%26faf%3D1

The recent communications outage at Sutter Health, the largest health system
in northern California, which cut off access to electronic health records
(EHRs), highlighted the frequency of such outages and the need for backup
plans and drills nationwide. [...]

Andrew Gettinger, MD, chief clinical officer for the Office of the National
Coordinator for Health Information Technology, part of the US Department of
Health and Human Services, said all systems need backup plans and pointed to
the recommendation from the Joint Commission for annual disaster drills.

“It's not a question of IS your system going to be unavailable, because I
think almost every computer system in every context is at some time or
another not available,'' he told /Medscape Medical News/. “The question is
then—what's the institutional contingency plan?''

Gettinger said that downtime for computer systems is not unlike other
disasters health systems plan for regularly.  “It's no different from what
happens when the power in the building goes out or the water supply goes out
or you're no longer able to get compressed oxygen or nitrous oxide.  I don't
think patients or doctors really need to be worried about it unnecessarily.''

All health systems should know about the SAFER guides
<https://www.healthit.gov/topic/safety/safer-guides>
(Safety Assurance Factors for EHR Resilience), put in place to address EHR
safety nationally, Gettinger said. The guides were updated last year.

Dean Sittig, PhD, a professor at the University of Texas Health's School of
Biomedical Informatics, helped write those guidelines and also was lead
author on a study in 2014 <https://www.ncbi.nlm.nih.gov/pubmed/25200197>
that surveyed US-based healthcare institutions that were part of a
professional collaborative on their exposure to downtime.

In that study, researchers found that nearly all (96%) of the 50 large,
integrated institutions who responded had at least one unplanned downtime in
the past 3 years and 70% had at least one unplanned downtime greater than 8
hours in the past 3 years. [...]

In another paper
http://www.nejm.org/doi/full/10.1056/NEJMsb1205420
Sittig wrote that, in April 2010, one third of the hospitals in Rhode Island
had to delay elective surgeries and divert some patients when an automatic
antivirus update crashed the system.

“You depend on the computer for everything—registration, scheduling, past
visit notes, results of laboratory tests. The healthcare system is now
dependent on the electronic health record to care for patients,'' Sittig told
/Medscape Medical News/.

In the Sutter case, a fire-suppression system was activated. Sittig
explained that the suppression systems in data centers typically involve an
alarm going off to alert people to get out of the room; then the doors lock
and all the oxygen is sucked out of the room and replaced with
fire-retardant gas.

Because the gas has to be flushed out, then the oxygen levels restored, then
the computers restarted, “you're talking probably a minimum of 4-6 hours,''
Sittig says. “That's when everything works perfectly.''  He said systems
should expect accidents to happen and that they will be costly.  “A big
hospital probably loses at least $1 million per hour when they're down,''
Sittig said.

But investments in data protection can be a hard sell. A chief financial
officer, Sittig said, may say a $3 million backup data center is too
expensive, for example.

“You have to ask them, 'Can you afford to be down 5 hours? That will cost us
$5 million. So we should spend the $3 million as an insurance policy,' ''
Sittig said.
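Sittig's cost-benefit argument can be sketched in a few lines. The figures
below are only the ones quoted above (~$1M lost per hour, a 5-hour outage, a
$3M backup data center); the helper function is illustrative, not anything
from the article:

```python
def backup_pays_off(hourly_loss, expected_downtime_hours, backup_cost):
    """Compare the expected loss from an outage against a backup investment."""
    expected_loss = hourly_loss * expected_downtime_hours
    # Hours of downtime at which the backup investment breaks even.
    breakeven_hours = backup_cost / hourly_loss
    return expected_loss > backup_cost, expected_loss, breakeven_hours

# Figures quoted above: ~$1M/hour, a 5-hour outage, a $3M backup center.
justified, loss, breakeven = backup_pays_off(1_000_000, 5, 3_000_000)
print(f"Expected loss: ${loss:,}")                  # Expected loss: $5,000,000
print(f"Break-even downtime: {breakeven:g} hours")  # Break-even downtime: 3 hours
print(f"Backup justified: {justified}")             # Backup justified: True
```

On these numbers the backup pays for itself once an outage runs past three
hours, which is well inside the 4-6 hour minimum Sittig describes for a
fire-suppression discharge.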

Adding to the problem, he said, is that in the modern healthcare system,
with an institution that's been using an EHR for 5 or more years, many young
providers have never worked in a place that has a paper system and aren't
familiar with those operations.

Sittig added that paper systems are subject to their own dangers—fire,
water, and wind, for example.

But electronic records that make it easy to spread information instantly
across hospitals, sometimes in many states, also can mean instant, massive
failures.

The first thing hospital systems do when a disaster strikes, Sittig says, is
decide what can be cut, and the first thing to go is usually the elective
surgeries. Then ambulances may be instructed to take patients elsewhere.
“Then you try to discharge the people who aren't very sick. Then they start
sending people home early.  We've created a system where we're relying on an
electromechanical device that we know is going to break. There's no question
computers are going to break.''


Apple 'Family Sharing' feature used by scammers to make purchases with hacked Apple IDs (Business Insider)

Gabe Goldberg <gabe@gabegold.com>
Sat, 30 Jun 2018 23:13:48 -0400
People are discovering that scammers are controlling their Apple accounts
using a feature for families to share apps

When David tried to download apps on his iPhone and iPad recently, he found
he wasn't able to because his account was linked to something called *Family
Sharing*.

That's a feature that Apple introduced in 2014 to make it easier to share
apps, iCloud storage, and iTunes content like music and movies with up to
five family members.

But this was news to David, who says he didn't remember turning on Family
Sharing. After he dug into his account settings, he received a popup that to
remove himself from the Family Sharing account he needed to contact a name
that was in Chinese—and he had no way to get in touch.

http://www.businessinsider.com/apple-family-sharing-feature-used-by-scammers-to-make-purchases-hacked-accounts-2018-6


“Trump administration tells FCC to block China Mobile from U.S.'' (Corinne Reichert)

Gene Wirchenko <genew@telus.net>
Tue, 03 Jul 2018 18:50:25 -0700
Corinne Reichert, ZDNet, 3 Jul 2018
China Mobile's access to U.S. telecommunications networks would carry a
`substantial and unacceptable risk' to national security and law enforcement, the U.S.
government has said.
http://www.zdnet.com/article/trump-administration-tells-fcc-to-block-china-mobile-from-us/

selected text:

The Federal Communications Commission (FCC) has been advised by the
Executive Branch to deny China Mobile entry to the United States
telecommunications industry, citing “substantial and unacceptable risk to US
law enforcement and foreign intelligence collection''.

The Executive Branch, which includes the Departments of Justice, Homeland
Security, Defense, State, and Commerce, along with the Offices of Science
and Technology Policy and the US Trade Representative, made the
recommendation almost seven years after China Mobile International (USA)
made the application for a certificate under s214 of the Communications Act.

A 2013 letter [PDF] from counsel for China Mobile USA had noted the “extreme
delay'' in granting the licence—which was originally applied for in
September 2011—saying the delay “is causing significant and unwarranted
harm to China Mobile USA's business operations''.

Huawei Australian chair John Lord last week said the Chinese technology
giant is the most audited, inspected, reviewed, and critiqued IT company in
the world, and has never had a national security issue.

“After every kind of inspection, audit, review, nothing sinister has
been found. No wrongdoing, no criminal action or intent, no 'back
door', no planted vulnerability, and no 'magical kill switch'.  In
fact, in our three decades as a company no evidence of any sort has
been provided to justify these concerns by anyone—ever.''


Google is training machines to predict when a patient will die (Los Angeles Times)

Richard M Stein <rmstein@ieee.org>
Sat, 30 Jun 2018 13:41:59 +0800
http://www.latimes.com/business/technology/la-fi-tn-google-artificial-intelligence-healthcare-20180618-story.html

  “What impressed medical experts most was Google's ability to sift through
  data previously out of reach: notes buried in PDFs or scribbled on old
  charts. The neural net gobbled up all this unruly information then spat
  out predictions. And it did so far faster and more accurately than
  existing techniques. Google's system even showed which records led it to
  conclusions.

  “Dean envisions the AI system steering doctors toward certain medications
  and diagnoses. Another Google researcher said existing models miss obvious
  medical events, including whether a patient had prior surgery. The person
  described existing hand-coded models as `an obvious, gigantic roadblock'
  in healthcare. The person asked not to be identified discussing work in
  progress.

  “For all the optimism over Google's potential, harnessing AI to improve
  healthcare outcomes remains a huge challenge. Other companies, notably
  IBM's Watson unit, have tried to apply AI to medicine but have had limited
  success saving money and integrating the technology into reimbursement
  systems.''

The perfect *death panel* proxy, and no longer a burden to physicians,
bioethicists, insurance agents, hospital administrators, and patient
advocates, Google's Medical Brain AI platform calculates a human life's
merit score.

Can this platform factor patient quality of life outcome potential into the
learning algorithm's neural network processing decisions? What weight would
this factor possess relative to the others? Under what medical conditions is
this platform relevant to even consult? What happens if a test result
applied as an input, such as for blood chemistry, is skewed by a
contaminated reagent?

Until proven to improve health care outcomes, if ever, a “black box
warning'' label seems like a wise precaution.
http://en.wikipedia.org/wiki/Boxed_warning

Will Google's Medical Brain employees and immediate family members be
required to participate in a randomized controlled trial using the Medical Brain
AI platform?


So What The Heck Does 5G Actually Do? And Is It Worth What The Carriers Are Demanding? (Harold Feld)

Dewayne Hendricks <dewayne@warpspeed.com>
July 4, 2018 at 10:18:15 AM GMT+9
Harold Feld, WetMachine, 28 Jun 2018

http://www.wetmachine.com/tales-of-the-sausage-factory/so-what-the-heck-does-5g-actually-do-and-is-it-worth-what-the-carriers-are-demanding/

It's become increasingly impossible to talk about spectrum policy without
getting into the fight over whether 5G is a miracle technology that will end
poverty, war and disease or an evil marketing scam by wireless carriers to
extort concessions in exchange for magic beans. Mind you, most people never
talk about spectrum policy at all—so they are spared this problem in the
first place. But with T-Mobile and Sprint now invoking 5G as a central
reason to let them merge, it's important for people to understand precisely
what 5G actually does. Unfortunately, when you ask most people in Policyland
what 5G actually does and how it works, the discussion looks a lot like the
discussion in *The Hitchhiker's Guide to the Galaxy* where Deep Thought
announces that the answer to Life, the Universe, and Everything is `42'.

So, while not an engineer, I have spent the last two weeks or so doing a deep
dive on what, exactly, 5G actually does—with a particular emphasis on
the recently released 3GPP standard (Release 15) that everyone is
celebrating as the first real industry standard for 5G. My conclusion is
that while the Emperor is not naked, that is one Hell of a skimpy thong he's
got on.

More precisely, the bunch of different things that people talk about when
they say `5G': millimeter wave spectrum, network slicing, and something
called (I am not making this up) `flexible numerology' are real. They
represent improvements in existing wireless technology that will enhance
overall efficiency and thus add capacity to the network (and also reduce
latency). But, as a number of the more serious commentators (such as Dave
Burstein over here) have pointed out, we can already do these things using
existing LTE (plain old 4G). Given the timetable for development and
deployment of new 5G network technology, it will be at least 5 years before
we see more than incremental improvement in function and performance.

Put another way, it would be like calling the adoption of a new version of
Wi-Fi `5G Wi-Fi'. (Which I am totally going to do from now on, btw, because
why not?)

I elaborate more below . . .

There are a bunch of important questions to keep in mind when evaluating
what we ought to do about 5G as a policy question. (a) What exactly is 5G?
(b) How does it compare to existing LTE? and, (c) How much are we being
asked to pay for it in policy terms?

What Exactly Do We Mean By 5G?

The `G' technically means `generation'.  My favorite explanation can be found in
this old Best Buy commercial. As a general rule, we use `G' to indicate a
significant shift in capability, architecture and technology. For example,
the shift from analog to digital voice in 2G, or the inclusion of limited
data capability as an overlay to voice in 3G. The shift to 4G was marked by
a shift to an all packet-switched data network in which voice is supported
as one feature on the network. In addition, 4G turned out to be fairly
homogeneous for a variety of reasons I won't get into now. Basically, after
a brief flirtation by Sprint and a few others with WiMax, all the carriers
ended up using LTE.

So the switch to 5G ought to mean a major boost in both technology and
speed. And it will, eventually. But for now, it's not so much a generational
shift like the previous shifts but a modest transition over time. By that I
don't mean simply that we will see 5G networks operating with 4G cores for a
long time. That's always true. Carriers deployed LTE and still maintained
(some to this day) 3G networks in parallel. That is necessary so that people
and businesses can switch legacy equipment at a rational pace. What I mean
is that the capabilities that are supposed to make 5G so awesome are not
really that awesome right now, and won't be for at least 5 more years.

What Makes 5G More Awesome?

Here is where it gets confusing. You can see a good tutorial on the network
architecture here. But this represents a relatively recent change in how we
talk about 5G. Originally, i.e., back in 2015, we were talking about
millimeter wave as 5G, with nothing else going on in the lower frequencies
counting as 5G. [...]


Leaks, riots, and monocles: How a $60 in-game item almost destroyed EVE Online (Ars Technica)

Monty Solomon <monty@roscom.com>
Tue, 3 Jul 2018 20:29:22 -0400
When the developers of EVE Online added expensive in-game vanity items... it
went poorly.

http://arstechnica.com/gaming/2018/07/monocles/


Gaming disorder is only a symptom of a much larger problem (WaPo)

Richard M Stein <rmstein@ieee.org>
Mon, 02 Jul 2018 11:54:05 +0800
[Suitably revised, this submission might make a good April Fool's comp.risks
contribution in 2019. And jolt a few CxOs from their Caesar salad lunch. od
-c output attached below for peace of mind.]

http://www.washingtonpost.com/opinions/gaming-disorder-is-only-a-symptom-of-a-much-larger-problem/2018/06/29/64f2866a-7a21-11e8-93cc-6d3beccdd7a3_story.html


Mobile electronic devices generate addiction symptoms that mirror those
caused by nicotine. The iGen—young people raised on smart phones and
social media—are especially vulnerable to screen addiction disorder.

Would an enterprising state attorney general attempt the equivalent of a
“Tobacco Master Settlement Agreement''
(http://en.wikipedia.org/wiki/Tobacco_Master_Settlement_Agreement) against
mobile device manufacturers, application developers, and social media for
public health expenditures arising from treatment?

From the MSA Wikipedia page:

  “The general theory of these lawsuits was that the cigarettes produced by
  the tobacco industry contributed to health problems among the population,
  which in turn resulted in significant costs to the states' public health
  systems.''

Recall that the Tobacco MSA amounted to settlement payments from tobacco
firms for ~$US 200-375B over 25 years to reimburse states for expenses
arising from tobacco-related illness and disease treatment.  The MSA also
imposed restrictions that prohibited tobacco advertisements toward young
people—a core audience for addictive products, and a business model
impediment that penalizes income capture potential.

Hypothetically, would substitution of “mobile devices, apps, and social
media'' for “tobacco'' (the MOBASS MSA?) in an equivalent agreement be viable?
The epidemiological evidence, per states' public health system impact to
date, might not immediately substantiate this extrapolation.  As evidence
linking tobacco usage to illness accumulated from the 1950s through 1990s,
so might evidence of screen addiction disorder and the effects it
introduces.

The spectacle of mobile device, social media, and application vendors called
to testify under oath before Congress that “our products are not addictive''
would rival the perjury committed by tobacco industry executive
predecessors. Michael Mann's “The Insider''
(http://www.imdb.com/title/tt0140352/?ref_=nv_sr_2) might need a sequel!


Ticketmaster: How not to manage customers after a data breach.

Michael Kent <michael.mail@37.org.uk>
Sun, 1 Jul 2018 22:43:10 +0100
Like many in the UK I was contacted by Ticketmaster to let me know that my
data might have been accessed through malicious software on the servers of a
third party service provider.  They have very kindly offered me a year's
free identity monitoring by Experian.

The issue?  The email tells me to sign up by...

  "Visit the Data Patrol website to get started:
  http://my.garlik.com/garlik-ui/expnuk/login
  http://click.customerservice.tmm.ticketmaster.co.uk ...

Not a Ticketmaster site, not an Experian site, just a site that screams
***SCAM***!!!

A minute or two googling tells me that this is probably the legitimate
service provider but this really isn't how to give customers confidence that
you take security seriously!
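The red flag here is mechanical enough to check in code: does the link's
host actually belong to a domain the recipient would expect? A minimal
sketch (the `EXPECTED_SUFFIXES` allow-list is a made-up example for this
story, not anything Ticketmaster or Experian publishes):

```python
from urllib.parse import urlparse

# Hypothetical allow-list: domains a recipient might expect a genuine
# Ticketmaster/Experian breach notice to link to.
EXPECTED_SUFFIXES = ("ticketmaster.co.uk", "experian.co.uk")

def looks_unexpected(url, expected=EXPECTED_SUFFIXES):
    """True when the URL's host matches none of the expected domains."""
    host = urlparse(url).hostname or ""
    return not any(host == s or host.endswith("." + s) for s in expected)

# The two links from the email above:
print(looks_unexpected("http://my.garlik.com/garlik-ui/expnuk/login"))        # True
print(looks_unexpected("http://click.customerservice.tmm.ticketmaster.co.uk"))  # False
```

The garlik.com link fails exactly the test an alert customer applies by eye,
even though it turns out to be the legitimate service provider.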


Re: Police, Law Enforcement, and corporate use of facial recognition and facial images in court (RISKS-30.73)

Kelly Bert Manning <bo774@freenet.carleton.ca>
Mon, 2 Jul 2018 15:24:10 -0400
This has been used in British Columbia for a decade, has proved quite
effective, and is pretty much settled case law in both Criminal and Civil
cases.

In BC Driver Licencing and BC Service Card issuing for the BC Medical
Services Plan have been offloaded out of core government to a Crown
Corporation, the Insurance Corporation of BC. Photo ID Service Cards for MSP
are a relatively new development. The Original BC “Care Cards'' did not have
photos and involved little or no verification of the identity of who they
were issued to.  BC Residents have the option of combining the BC Service
Card and BC Driver's Licence into one card, or having separate cards.  Most
Privacy Professionals I have discussed this with chose to have separate
cards. I have overnight dialysis in a clinic 3 times a week, so I just take
my BC Service Card and a transit pass with me.

The BC Liquor Control Board used to issue its own Photo ID cards, decades
ago, but those were also offloaded onto ICBC as “BC ID Cards'' for people who
did not have a BC Driver's Licence, such as a former premier who surrendered
his DL after being caught driving under the influence in Hawaii.

There have been at least two widely reported instances where Facial
Recognition has been used to trigger investigations, or to identify
criminals from photos.

After the 2011 Stanley Cup Riot in Vancouver ICBC offered to scan its Facial
Image DB, using the same recognition software that ICBC began using in 2008,
without notice to customers, to detect attempts at Driver's Licence
Fraud. ICBC had a vested interest in identifying the Rioters who damaged or
destroyed automobiles insured by ICBC and later sued at least 46 people in
Civil Actions. Facial Images flagged as possible Fraud attempts are reviewed
by Police, not by ICBC employees. Biometric factors such as height, weight,
and eye colour are also used in the matching, not just Facial Recognition.

http://www.burnabynow.com/news/six-burnaby-defendants-in-icbc-stanley-cup-riot-civil-suit-1.1896960

BC Information and Privacy Commissioner Elizabeth Denham ruled that ICBC
could only do that with due process.  Police turned to the Internet and
crowd sourced identification of the rioters from pictures posted on the
web. That turned out to be very effective, resulting in tips about the names
of hundreds of rioters. Human eyes still beat facial recognition?

http://www.macleans.ca/news/last-two-stanley-cup-rioters-sentenced-to-time-behind-bars-for-assault/

“Prosecutors laid 912 charges against 300 suspects, and 284 people pleaded
guilty. Another six had the charges against them stayed, while 10 went to
trial, resulting in nine convictions and one acquittal.''

Elizabeth Denham is now the UK Data Commissioner responsible for
investigating the Cambridge Analytica scandal.

http://www.cbc.ca/news/canada/british-columbia/police-can-t-use-icbc-facial-recognition-to-track-rioters-1.1207398

http://www.oipc.bc.ca/investigation-reports/1245

Executive Summary [8] “I conclude that ICBC must immediately cease
responding to requests from police to use the facial recognition database
for the purposes of identifying individuals for police absent a subpoena,
warrant or court order.''

ICBC's undisclosed use of Facial Recognition to detect attempts at DL Fraud
became public knowledge when RCMP arrived at a Government Office in Victoria
to arrest a Civil Servant who had a meteoric rise under the name Richard
Perran. That turned out to be a family affair, with his wife also working in
the BC Public Service under a stolen identity. He had also obtained a Public
Service subsidized Master's Degree from the University of Victoria, and
tried to leverage that into a PhD under the stolen name after being
convicted, despite being ordered to stop using the stolen name as a
condition of sentencing and probation.

http://www.timescolonist.com/icbc-fraud-check-snared-civil-servant-accused-of-altering-record-to-get-government-job-1.21668
http://bctrialofbasi-virk.blogspot.com/2009/12/police-probe-hiring-of-bc-civil-servant.html
http://www.pressreader.com/canada/times-colonist/20120617/281479273491097


Re: Florida skips gun background checks for a year after employee forgets login (RISKS-30.72,73)

Kelly Bert Manning <bo774@freenet.carleton.ca>
Mon, 2 Jul 2018 16:00:39 -0400
Did Security Administrators at the National Instant Criminal Background
Check System (NICS) detect the fact that IDs were not being used and ask why
the users were not using assigned IDs? If so did they pursue that with
management in the Florida Department of Agriculture and Consumer Services?
The report cited in RISKS says that the Florida OIG detected the issue.

I was a top-level RACF Security Admin for the BC Ministry of Health from
1980 until 2016, first in the BC Public Service and later as a Contracted
Resource working for a world scale IT Services company with its HQ in
Montreal.

One of the auto generated routine reports that I had to review was a report
of IDs that had expired passwords because the User had not changed the
password for more than 60 days.

Last-Use Date was also tracked.

Part of my job was repeatedly nagging user supervisors about whether the
person the ID was issued to was still working in a position that required
access, based on the expired password and the last use date.

Repeating the query at regular intervals was part of my job even though it
tended to make me seem like a broken record.

Responding to requests for new user IDs was something I used to revisit the
matter. That is, why did the area need a new user ID when it had been issued
positional IDs that had not been used in years, or even decades.

Positional IDs are associated with a specific job function.

If a user leaves, or changes job roles they get a new ID associated with the
new role. The old ID should be reassigned, deactivated, or deleted as part of
that transition.
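The routine review described above is easy to sketch. The record layout and
ID names below are illustrative, not RACF's actual report format; the 60-day
password-expiry rule comes from the account above, while the one-year
last-use threshold is an assumption:

```python
from datetime import date, timedelta

# 60-day password-expiry rule from the account above; the one-year
# last-use threshold is an assumed review cutoff.
PASSWORD_MAX_AGE = timedelta(days=60)
STALE_THRESHOLD = timedelta(days=365)

def flag_stale_ids(records, today):
    """Return IDs with expired passwords or that appear abandoned."""
    flagged = []
    for rec in records:
        password_expired = today - rec["password_changed"] > PASSWORD_MAX_AGE
        long_unused = today - rec["last_use"] > STALE_THRESHOLD
        if password_expired or long_unused:
            flagged.append((rec["id"], password_expired, long_unused))
    return flagged

# Hypothetical positional IDs for illustration.
ids = [
    {"id": "HLTH001", "password_changed": date(2018, 6, 1), "last_use": date(2018, 6, 30)},
    {"id": "HLTH002", "password_changed": date(2017, 1, 1), "last_use": date(2015, 3, 1)},
]
for user_id, expired, unused in flag_stale_ids(ids, date(2018, 7, 5)):
    print(f"{user_id}: password_expired={expired}, long_unused={unused}")
```

An ID flagged on both counts is exactly the kind that warrants the
supervisor query described above: is the person still in a position that
requires access?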
