The RISKS Digest
Volume 33 Issue 69

Friday, 28th April 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Farmers crippled by satellite failure as GPS-guided tractors grind to a halt
Sydney Morning Herald
GPS clock turnover—again and again
GPS
Russian pranksters posing as Zelensky trick Fed Chair Jerome Powell
WashPost
Large amount of content missing from RISKS-33.68
Steve Bacher
There's a new form of keyless car theft that works in under 2 minutes
Ars Technica
eFile tax website served malware to visitors for weeks
AppleInsider
California Man Falls In Love With AI Chatbot Phaedra
India Times
Actor kicked out of Facebook for impersonating his stage character
Amos Shapir
*Intelligence leak*
Rob Slade
Fox News vs Dominion Voting Systems
NYTimes articles via PGN
The Crypto Detectives Are Cleaning Up
The New York Times
To avoid an AI *arms race*, the world needs to expand scientific collaboration
Charles Oppenheimer
ChatGPT falsely told voters their mayor was jailed for bribery.
WashPost
Why regulators in Canada and Italy are digging into ChatGPT's use of personal information
CBC
ChatGPT is making up fake Guardian articles. Here's how we are responding
The Guardian
ChatGPT detector tools resulting in false accusations of students for cheating
USA Today
On the Impossible Security of Very Large Foundation Models
El-Mhamdi via Prashanth Mundkur
AI vs the culture industry
Politico
In AI Race, Microsoft and Google Choose Speed Over Caution
NYTimes
AI is now indistinguishable from reality
via geoff goodfellow
In Defense of Merit in Science
via geoff goodfellow
ICE Records Reveal How Agents Abuse Access to Secret Data
WiReD
Security breaches covered up by 30% of companies, reveals study
9to5mac
Why it's hard to defend against AI prompt injection
The Register
Lawmakers Introduce Bill to Keep AI from Going Nuclear
nextgov.com
Mercenary spyware hacked iPhone victims with rogue calendar invites, researchers say
Tech Crunch
Chinese spy balloon gathered intelligence from sensitive U.S. military sites, despite U.S. efforts to block it
NBC News
Nearly eight years of breath test results cannot be used in drunk-driving prosecutions, SJC rules
The Boston Globe
The Huge 3CX Breach Was Actually 2 Linked Supply Chain Attacks
WiReD
Re: Metro operator investigated for using automation system without clearance
Steve Bacher
Re: OpenSSL KDF and secure by default
Cliff Kilby
Info on RISKS (comp.risks)

Farmers crippled by satellite failure as GPS-guided tractors grind to a halt (Sydney Morning Herald)

geoff goodfellow <geoff@iconia.com>
Wed, 19 Apr 2023 07:46:44 -0700
Tractors have ground to a halt in paddocks across Australia and New Zealand
because of a signal failure in the satellite farmers use to guide their
GPS-enabled machinery, stopping them from planting their winter crop.

The satellite failure on Monday was a bolt from the blue for farmers in NSW
and Victoria, who were busy taking advantage of optimal planting conditions
for crops including wheat, canola, oats, barley and legumes.

"You couldn't have picked a worse time for it," said Justin Everitt, a
grain grower in the Riverina who heads NSW Farmers' grains committee.
"Over the past few years, all these challenges have been thrown at us, but
this is just one we never thought would come up."

Tractors that pull seed-planting machinery, as well as the massive combine
harvesters that reap Australia's vast grain crops, are high-tech beasts that
can cost hundreds of thousands of dollars.

They are enabled with GPS tracking and can be guided to an accuracy within
two centimetres, enabling seed-planting equipment to sow crops with
precision to drive up efficiency, prevent wastage and boost environmental
sustainability.

All that went out the window when the Inmarsat I-4 F1 satellite signal
failed.

Katie McRobert, general manager at the Australian Farm Institute, said
Australian farmers sourced their GPS signal from one satellite, which was a
critical risk to rural industries.

"Having all your GPS eggs in one basket is a vulnerability on a good day,
and a fatal weakness on a bad one," McRobert said.

"If the Medibank and Optus data breaches didn't make the agriculture
industry sit up and take notice, the implementation of kill switches on
stolen Ukrainian tractors in 2022 should have been a three-alarm wake-up
call.  [...]

https://www.smh.com.au/national/farmers-crippled-by-satellite-failure-as-gps-guided-tractors-grind-to-a-halt-20230418-p5d1de.html


GPS clock turnover—again and again

Peter Neumann <neumann@csl.sri.com>
Fri, 7 Apr 2023 13:21:31 PDT
Bernie Cosell asked Victor Miller a question, which Victor
referred to me.

  This is very strange: this morning, my cell phone thinks it is August 18th
  2003.  It is *supposed* to get the time/date from the network.  What could
  have caused this?  I guess I can turn the network off and put in the right
  time/date by hand, but any ideas how my phone could have gotten so
  confused??

Apparently it's just one more 1024-week turnover, as reported in RISKS-20.07.
The reset is apparently receiver-dependent, e.g., resetting to 6 Jan 1980 or
to the previous reset date, as in Bernie's case:

  THE POTENTIAL RESETTING OF GLOBAL POSITIONING
  SYSTEM (GPS) RECEIVER INTERNAL CLOCKS

  1 Introduction

  1.1 The timing mechanism within GPS satellites may cause some GPS
  equipment to cease to function after 22 August 1999 due to a coding
  problem. The GPS measures time in weekly blocks of seconds starting from 6
  January 1980.  For example, at midday on Tuesday 17 September 1996, the
  system indicates week 868 and 302,400 seconds.  However, the software in
  the satellites' clocks has been configured to deal with 1024
  weeks. Consequently on 22 August 1999 (which is week 1025), some GPS
  receivers may revert to week one (i.e., 6 January 1980).

  1.2 Most airborne GPS equipment manufacturers are aware of the potential
  problem and either have addressed the problem previously, or are working
  to resolve it.  However, there may be some GPS equipment (including
  portable and hand held types) currently used in aviation that will be
  affected by this potential problem.

  2 Action to be taken by Aircraft Operators

  Aircraft operators, who use GPS equipment (including portable and hand
  held types), as additional radio equipment to the approved means of
  navigation, should enquire from the GPS manufacturer whether the GPS
  equipment will exhibit the problem.  Equipment that exhibits the problem
  must not be used after 21 August 1999 and either be removed from the
  aircraft or its operation inhibited.

  For the Civil Aviation Authority, Safety Regulation Group, Aviation House,
  Gatwick Airport South, West Sussex RH6 0YR

Does anyone know whether there has been any desire to automagically fix this
problem?  Or do we just continue to kick the can down the road for another
1024 weeks?  PGN
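
The arithmetic behind the rollover: the legacy GPS signal carries only a
10-bit week number, so a receiver sees the true week modulo 1024 and must
guess which 1024-week era it is in, typically from a pivot date baked into
its firmware.  A minimal Python sketch of that guess (the naive-receiver
model and the pivot dates are illustrative assumptions, not taken from any
particular device):

  from datetime import date, timedelta

  GPS_EPOCH = date(1980, 1, 6)   # GPS week 0 begins here
  WRAP = 1024                    # the legacy 10-bit week counter wraps here

  def displayed_date(true_date, pivot=GPS_EPOCH):
      # A naive receiver sees only (true week mod 1024) and assumes the
      # date falls in the first matching week on or after its pivot.
      days = (true_date - GPS_EPOCH).days
      broadcast_week = (days // 7) % WRAP        # what the signal carries
      pivot_week = (pivot - GPS_EPOCH).days // 7
      week = pivot_week + (broadcast_week - pivot_week) % WRAP
      return GPS_EPOCH + timedelta(weeks=week, days=days % 7)

  # With the original 1980 pivot, 7 Apr 2023 displays as a day in early
  # 1984; with a pivot circa 2003 (plausible for old firmware), it lands
  # in August 2003 -- exactly 1024 weeks behind, the symptom reported above.
  print(displayed_date(date(2023, 4, 7)))
  print(displayed_date(date(2023, 4, 7), pivot=date(2003, 1, 1)))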


Russian pranksters posing as Zelensky trick Fed Chair Jerome Powell (WashPost)

Monty Solomon <monty@roscom.com>
Thu, 27 Apr 2023 22:49:16 -0400
https://www.washingtonpost.com/business/2023/04/27/russian-pranksters-posing-zelensky-trick-fed-chair-jerome-powell/


Large amount of content missing from RISKS-33.68

Steve Bacher <sebmb1@verizon.net>
Sat, 8 Apr 2023 08:49:47 -0700
And this is no April Fool's joke, is it?  All of the articles from "In Gen
Z's world of dupes, fake is fabulous—until you try it on
<https://catless.ncl.ac.uk/Risks/33/68#subj2>" through "AI-Powered Vehicle
Descriptions: Save Money, Save Time, Sell More!
<https://catless.ncl.ac.uk/Risks/33/68#subj9>" are missing.  The first
article ends with a link from the ninth article, which was strange in
itself.

  [I don't think I have ever had an emacs moment like this, where I managed
  to lose a large chunk of something without immediately noticing it and
  being able to yank the deleted text back—in this case *after* the
  complete issue had been spelling checked and date checked, all set up with
  the final insertion of the grep-generated ToC in the right order.  Perhaps
  I tried incorrectly to move one item to a different position in the issue.
  If anyone has a particular hankering for the missing items, you might try
  browsing on the Subject: line of the missing item.  I am very short of
  spare time at the moment, and seriously backlogged since 7 April.  I also
  may have lost a few items from the week before 1 April in the shuffle.
  However, now it feels like water under the bridge.  Here's a start.
  Bummer.  PGN]


There's a new form of keyless car theft that works in under 2 minutes (Ars Technica)

Gabe Goldberg <gabe@gabegold.com>
Tue, 11 Apr 2023 00:37:22 -0400
As car owners grow hip to one form of theft, crooks are turning to new ones.

When a London man discovered the front left-side bumper of his Toyota RAV4
torn off and the headlight partially dismantled not once but twice in three
months last year, he suspected the acts were senseless vandalism. When the
vehicle went missing a few days after the second incident, and a neighbor
found their Toyota Land Cruiser gone shortly afterward, he discovered they
were part of a new and sophisticated technique for performing keyless
thefts.

It just so happened that the owner, Ian Tabor, is a cybersecurity researcher
specializing in automobiles. While investigating how his RAV4 was taken, he
stumbled on a new technique called CAN injection attacks.

https://arstechnica.com/information-technology/2023/04/crooks-are-stealing-cars-using-previously-unknown-keyless-can-injection-attacks/

  [Hacking the CAN-bus is hardly new, but this attack seems relatively easy.
  PGN]
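
Part of what makes such attacks cheap is that classic CAN frames carry no
sender authentication: any node with physical access to the bus can emit
frames that other ECUs will act on.  A minimal sketch using the python-can
library (the interface name, arbitration ID, and payload are illustrative
assumptions, not the actual frames used in the reported thefts):

  import can

  # Attach to the bus; physical access (e.g., exposed headlight wiring)
  # is all that is required -- there is no node authentication.
  bus = can.interface.Bus(channel="can0", bustype="socketcan")

  spoofed = can.Message(
      arbitration_id=0x123,           # hypothetical ID an ECU listens for
      data=[0x01, 0x00, 0x00, 0x00],  # hypothetical "key validated" payload
      is_extended_id=False,
  )
  bus.send(spoofed)  # nothing on the bus checks who sent this frame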


eFile tax website served malware to visitors for weeks (AppleInsider)

Monty Solomon <monty@roscom.com>
Fri, 7 Apr 2023 19:06:18 -0400
https://appleinsider.com/articles/23/04/05/efile-tax-website-served-malware-to-visitors-for-weeks


California Man Falls In Love With AI Chatbot Phaedra (India Times)

Gabe Goldberg <gabe@gabegold.com>
Mon, 10 Apr 2023 16:36:35 -0400
Artificial intelligence has advanced quite a bit in recent years, so much so
that it is now equipped to reject humans. A 40-year-old man in California
recently confessed that he fell in love with an AI chatbot but was
heartbroken when she rejected his "steamy" advances.

https://www.indiatimes.com/trending/wtf/divorced-man-falls-in-love-with-ai-chatbot-phaedra-598271.html


Actor kicked out of Facebook for impersonating his stage character

Amos Shapir <amos083@gmail.com>
Mon, 24 Apr 2023 18:42:33 +0300
Idan Mor is an Israeli comic actor who invented and plays a stage character
named Gadi Wilcherski.  He became famous when he took to participating in
anti-government demonstrations in character, and was even interviewed on
major channels, initially without the interviewers being aware that the
person they were talking to was fictional.

The Wilcherski character took on a life of his own, becoming politically
active, even appearing before Knesset committees, and of course having his
own Facebook page.

But last week, after the actor Idan Mor uploaded to his FB page images of
himself playing his stage persona, FB (as usual) acted as judge, jury, and
executioner, and closed down Mor's page on the pretext of "impersonating
a real living person".

It seems that this "hall of mirrors" situation was just too much for FB's
fact checking.


*Intelligence leak*

Rob Slade <rslade@gmail.com>
Thu, 13 Apr 2023 13:34:22 -0700
The latest `intelligence leak' that is all over the media is, quite likely,
yet another Manning/Snowden thing.

But, given a lot of the details that are starting to come out, I can't help
thinking that it could very well be another type of discord attack ...


Fox News vs Dominion Voting Systems

Peter Neumann <neumann@csl.sri.com>
Thu, 20 Apr 2023 14:29:12 PDT
Win, Lose, or Draw?  Making Sense of Settlement
*The New York Times*, Business section, 20 Apr 2023,
National edition, pages B1, B4, B5 (three-page spread)

1. The Victor: Tiffany Hsu
   Dominion emerges in a stronger position to win back skittish
   clients and score new business.

2. The First Amendment: David Enrich
   Assaults are likely to continue on a landmark Supreme Court
   ruling that protects the media.

3. The Reaction: Michael M. Grynbaum
   Some critics of Fox News were hoping Murdoch and the network would
   face a stiffer penalty.

4. Critic's Notebook: James Poniewozik
   The network's main goal became the maintenance of a reality bubble
   that its hosts helped shape.


The Crypto Detectives Are Cleaning Up (The New York Times)

Gabe Goldberg <gabe@gabegold.com>
Sun, 23 Apr 2023 20:45:14 -0400
Early adopters thought cryptocurrencies would be free from prying eyes.
But tracking the flow of funds has become a big business.

https://www.nytimes.com/2023/04/22/business/crypto-blockchain-tracking-chainalysis.html


"Diego.Latella" <diego.latella@isti.cnr.it>
Thu, 13 Apr 2023 17:37:31 +0200
*Bulletin of the Atomic Scientists*
https://thebulletin.org/2023/04/to-avoid-an-ai-arms-race-the-world-needs-to-expand-scientific-collaboration/#post-heading


ChatGPT falsely told voters their mayor was jailed for bribery. (WashPost)

Monty Solomon <monty@roscom.com>
Fri, 7 Apr 2023 19:03:11 -0400
The AI chatbot falsely told users that Australia's Hepburn Shire mayor
Brian Hood was jailed in an international bribery scandal. He was actually the
*whistleblower*.  He may sue.

https://www.washingtonpost.com/technology/2023/04/06/chatgpt-australia-mayor-lawsuit-lies/

  [Another quirky risk of whistle-blowing.  PGN]


Why regulators in Canada and Italy are digging into ChatGPT's use of personal information (CBC)

Matthew Kruk <mkrukg@gmail.com>
Fri, 7 Apr 2023 12:01:28 -0600
https://www.cbc.ca/news/world/openai-chatgpt-data-privacy-investigations-1.6804205

As governments rush to address concerns about the rapidly advancing
generative artificial intelligence industry, experts in the field say
greater oversight is needed over what data is used to train the systems.

Earlier this month, Italy's data protection agency launched a probe of
OpenAI <https://www.cbc.ca/news/world/italy-openai-chatgpt-ban-1.6797963>
and temporarily banned ChatGPT, its AI-powered chatbot.  On Tuesday,
Canada's privacy commissioner also announced an investigation of OpenAI
<https://www.cbc.ca/news/politics/privacy-commissioner-investigation-openai-chatgpt-1.6801296>.
Both agencies cited concerns around data privacy.


ChatGPT is making up fake Guardian articles. Here's how we are responding (The Guardian)

geoff goodfellow <geoff@iconia.com>
Sat, 15 Apr 2023 07:34:58 -0700
Last month one of our journalists received an interesting email. A
researcher had come across mention of a Guardian article, written by the
journalist on a specific subject from a few years before. But the piece was
proving elusive on our website and in search. Had the headline perhaps been
changed since it was launched? Had it been removed intentionally from the
website because of a problem we'd identified? Or had we been forced to take
it down by the subject of the piece through legal means?

The reporter couldn't remember writing the specific piece, but the headline
certainly sounded like something they would have written. It was a subject
they were identified with and had a record of covering. Worried that there
may have been some mistake at our end, they asked colleagues to go back
through our systems to track it down. Despite the detailed records we keep
of all our content, and especially around deletions or legal issues, they
could find no trace of its existence.

Why? Because it had never been written.

Luckily the researcher had told us that they had carried out their research
using ChatGPT. In response to being asked about articles on this subject,
the AI had simply made some up. Its fluency, and the vast training data it
is built on, meant that the existence of the invented piece even seemed
believable to the person who absolutely hadn't written it.

Huge amounts have been written about generative AI's tendency to manufacture
facts and events. But this specific wrinkle—the invention of sources --
is particularly troubling for trusted news organisations and journalists
whose inclusion adds legitimacy and weight to a persuasively written
fantasy. And for readers and the wider information ecosystem, it opens up
whole new questions about whether citations can be trusted in any way, and
could well feed conspiracy theories about the mysterious removal of articles
on sensitive issues that never existed in the first place.  [...]

https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article


ChatGPT detector tools resulting in false accusations of students for cheating (USA Today)

Steve Bacher <sebmb1@verizon.net>
Sat, 22 Apr 2023 15:05:34 -0700
Professors are using ChatGPT detector tools to accuse students of
cheating. But what if the software is wrong?

Universities, professors and students are grappling with the repercussions
of using AI cheating detectors from companies like Turnitin and GPTZero.

https://www.usatoday.com/story/news/education/2023/04/12/how-ai-detection-tool-spawned-false-cheating-case-uc-davis/11600777002/

  [My solution to teaching about ChatGPT is not to ask for essays, but
  rather to provide each student with a crafted essay of specific relevance
  to the course and ask the class to write a thorough analysis of the
  falsehoods.  The thoroughness of the analysis would be a strong indicator
  of each student's abilities.  Team efforts could even be considered for
  some topics, although the teams should not be the same from topic to
  topic.  This strategy would work for many different types of classes --
  history, literature, science, etc.  PGN]


On the Impossible Security of Very Large Foundation Models

Prashanth Mundkur <prashanth.mundkur@gmail.com>
Thu, 20 Apr 2023 10:18:13 -0400
https://arxiv.org/abs/2209.15
SoK: On the Impossible Security of Very Large Foundation Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta,
Lê-Nguyên Hoang, Rafael Pinot, John Stephan

Large machine learning models, or so-called foundation models, aim to serve
as base-models for application-oriented machine learning. Although these
models showcase impressive performance, they have been empirically found to
pose serious security and privacy issues. We may however wonder if this is
a limitation of the current models, or if these issues stem from a
fundamental intrinsic impossibility of the foundation model learning
problem itself. This paper aims to systematize our knowledge supporting the
latter. More precisely, we identify several key features of today's
foundation model learning problem which, given the current understanding in
adversarial machine learning, suggest incompatibility of high accuracy with
both security and privacy. We begin by observing that high accuracy seems
to require (1) very high-dimensional models and (2) huge amounts of data
that can only be procured through user-generated datasets. Moreover, such
data is fundamentally heterogeneous, as users generally have very specific
(easily identifiable) data-generating habits. More importantly, users' data
is filled with highly sensitive information, and may be heavily polluted by
fake users. We then survey lower bounds on accuracy in privacy-preserving
and Byzantine-resilient heterogeneous learning that, we argue, constitute a
compelling case against the possibility of designing a secure and
privacy-preserving high-accuracy foundation model. We further stress that
our analysis also applies to other high-stakes machine learning
applications, including content recommendation. We conclude by calling for
measures to prioritize security and privacy, and to slow down the race for
ever larger models.

It was mentioned in passing in this article:

  Google's Rush to Win in AI Led to Ethical Lapses, Employees Say
  The search giant is making compromises on misinformation and other harms in
  order to catch up with ChatGPT, workers say
  Davey Alba and Julia Love, Bloomberg, 19 Apr 2023
https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees


AI vs the culture industry (Politico)

Peter Neumann <neumann@csl.sri.com>
Mon, 24 Apr 2023 9:51:47 PDT
And the Grammy goes to an AI music generator?

The breakout hit of the spring is *Heart on My Sleeve*—a track that
sounds just like a collaboration between musicians Drake and The Weeknd,
two mega-popular artists who didn't record any of the song themselves.

The track relies purely on AI-generated imitations of their voices, posted
online by a pseudonymous TikTok user, and since it went viral last weekend
it's been heard millions of times. It has also generated takedown notices
and a statement from The Weeknd's label, Universal Music Group, slamming
AI-powered copyright infringement and calling for users of the technology to
get on the right *side of history*.  [long item truncated for RISKS]


In AI Race, Microsoft and Google Choose Speed Over Caution (NYTimes)

Monty Solomon <monty@roscom.com>
Fri, 7 Apr 2023 20:41:26 -0400
Technology companies were once leery of what some artificial intelligence
could do. Now the priority is winning control of the industry's next big
thing.

https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html


AI is now indistinguishable from reality

geoff goodfellow <geoff@iconia.com>
Wed, 26 Apr 2023 05:16:28 -0700
It's hard to believe, but this ad was AI-generated. It's not real. The
future is here.  [...]

https://twitter.com/0xgaut/status/1650867275103174660


In Defense of Merit in Science

geoff goodfellow <geoff@iconia.com>
Fri, 28 Apr 2023 09:02:35 -0700
  *Preeminent scientists are sounding the alarm that ideology is undermining
  merit in the sciences. I strongly support them.* —Richard Dawkins

     *In Defense of Merit in Science*

Merit is a central pillar of liberal epistemology, humanism, and democracy.
The scientific enterprise, built on merit, has proven effective in
generating scientific and technological advances, reducing suffering,
narrowing social gaps, and improving the quality of life globally. This
perspective documents the ongoing attempts to undermine the core principles
of liberal epistemology and to replace merit with non-scientific,
politically motivated criteria. We explain the philosophical origins of this
conflict, document the intrusion of ideology into our scientific
institutions, discuss the perils of abandoning merit, and offer an
alternative, human-centered approach to address existing social
inequalities.

*Keywords: STEM; Enlightenment; meritocracy; critical social justice;
postmodernism; identity politics; Mertonian norms*...

https://journalofcontroversialideas.org/article/3/1/236 via
https://twitter.com/RichardDawkins/status/1651970327902138370


ICE Records Reveal How Agents Abuse Access to Secret Data (WiReD)

José María Mateos <chema@rinzewind.org>
Wed, 19 Apr 2023 17:58:26 -0400
According to an agency disciplinary database that WIRED obtained through a
public records request, ICE investigators found that the organization's
agents likely queried sensitive databases on behalf of their friends and
neighbors. They have been investigated for looking up information about
ex-lovers and coworkers and have shared their login credentials with family
members. In some cases, ICE found its agents leveraging confidential
information to commit fraud or pass privileged information to criminals for
money.

https://www.wired.com/story/ice-agent-database-abuse-records/


Security breaches covered up by 30% of companies, reveals study (9to5mac)

Monty Solomon <monty@roscom.com>
Sun, 9 Apr 2023 11:15:36 -0400
https://9to5mac.com/2023/04/07/security-breaches-covered-up/

 [How do you know?  Paradox: If they were any better at covering it up, the
 reported percentage might be smaller!  PGN]


Why it's hard to defend against AI prompt injection (The Register)

"Li Gong" <ligongsf@gmail.com>
Fri, 28 Apr 2023 14:26:56 +0100
Prompt injection—a new type of attack

https://www.theregister.com/2023/04/26/simon_willison_prompt_injection/


Lawmakers Introduce Bill to Keep AI from Going Nuclear (nextgov.com)

"Richard Marlon Stein" <rmstein@protonmail.com>
Fri, 28 Apr 2023 12:26:19 +0000
The bicameral and bipartisan bill, Block Nuclear Launch by Autonomous AI Act
of 2023, primarily seeks to mandate a human element in all AI systems and
protocols that govern U.S. nuclear devices.

  Hope AI does not exceed the president's authority to deploy a nuclear
  weapon, as satirized by Stanley Kubrick in Dr. Strangelove.

https://www.nextgov.com/emerging-tech/2023/04/lawmakers-initiate-several-efforts-put-guardrails-ai-use/385711/


Mercenary spyware hacked iPhone victims with rogue calendar invites, researchers say (Tech Crunch)

Monty Solomon <monty@roscom.com>
Tue, 11 Apr 2023 18:42:03 -0400
https://techcrunch.com/2023/04/11/quadream-spyware-hacked-iphones-calendar-invites/


Chinese spy balloon gathered intelligence from sensitive U.S. military sites, despite U.S. efforts to block it (NBC News)

Monty Solomon <monty@roscom.com>
Sun, 9 Apr 2023 17:45:08 -0400
Chinese spy balloon gathered intelligence from sensitive U.S. military
sites, despite U.S. efforts to block it

The intelligence China collected was mostly from electronic signals, which
can be picked up from weapons systems or include communications from base
personnel.

https://www.nbcnews.com/politics/national-security/china-spy-balloon-collected-intelligence-us-military-bases-rcna77155


Nearly eight years of breath test results cannot be used in drunk-driving prosecutions, SJC rules (The Boston Globe)

Monty Solomon <monty@roscom.com>
Thu, 27 Apr 2023 10:22:47 -0400
https://www.bostonglobe.com/2023/04/26/metro/years-breathalyzer-results-cannot-be-used-drunk-driving-prosections/


The Huge 3CX Breach Was Actually 2 Linked Supply Chain Attacks (WiReD)

Monty Solomon <monty@roscom.com>
Mon, 24 Apr 2023 20:39:15 -0400
https://www.wired.com/story/3cx-supply-chain-attack-times-two/


Re: Metro operator investigated for using automation system without clearance (WashPost, RISKS-33.68)

Steve Bacher <sebmb1@verizon.net>
Sat, 8 Apr 2023 08:59:28 -0700
In the story linked to by
https://www.washingtonpost.com/transportation/2023/03/24/metrorail-ato-train-operator/,
it says that the fatal accident that triggered the disablement of ATO was
caused not by a failure of the ATO technology itself but by physical
components that stopped working.

First of all, aren't the physical components to be considered a part of the
ATO system, thus meaning that ATO has effectively failed?

More important, this is an inevitable eventuality for self-driving cars and
related technologies.  What happens when the detection devices have
mechanical breakdowns?  Hopefully the systems are designed with safe
failover procedures, at minimum turning control automatically over to the
human operator or to a human remote controller if there is no human in the
vehicle.  But what if they aren't?

A mechanical failure of a robot waiter might have relatively minor
consequences.  But what of a robotic hospital transport system, say?

Hopefully we carbon-based units will still be around to repair stuff.


Re: OpenSSL KDF and secure by default (RISKS-33.67)

Cliff Kilby <cliffjkilby@gmail.com>
Thu, 6 Apr 2023 23:25:59 -0400
OpenSSL is the hammer for just about every screw related to certificates
and encryption and has recently even added mainstream support for key
derivation functions (KDF). This class of functions allows for stretching a
potentially weak memorized secret into a more resistant authenticator in a
systematic manner.
OpenSSL has been using passwords and passphrases for a long time for
protecting private keys, so there is a whole class of functions for use of
those secrets and even some guidance provided for them.
https://www.openssl.org/docs/manmaster/man1/openssl-passphrase-options.html
"If no password argument is given and a password is required then the user
is prompted to enter one.

The actual password is`password;. Since the password is visible to utilities
(like 'ps' under Unix) this form should only be used where security is not
important."

So far, so good. You can put the password in the command line, but it is
flagged appropriately and alternatives exist that shuffle the password from
memory to memory without being exposed to the process list.
Not so with openssl's kdf module.
https://www.openssl.org/docs/manmaster/man1/openssl-kdf.html

"Specifies the password as an alphanumeric string (use if the password
contains printable characters only). The password must be specified for
PBKDF2 and scrypt."

The password isn't a first-order option here; it's only a pass-through
option. So while this at first glance appears to follow the same format as
the passphrase system (pass:secret), "pass" is load-bearing: it triggers
the execution path for the secret. It cannot be replaced by "env" or "file"
to specify alternatives, and there is no default behavior of prompting when
no secret is specified. The alternatives for other secrets in kdf also have
load-bearing option flags. You cannot pass the secret as hex except in the
clear. Oddly, this module supports the first-order options for "digest",
"cipher", and "mac", but somehow missed "pass".

OpenSSL's KDF works well, but because there is no secure path to use it, it
might as well not exist.
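
Until the CLI grows a first-order "pass" option, one workaround is to call
a KDF through a library binding that can prompt for the secret, so it never
appears in the argument vector at all.  A minimal sketch using only Python's
standard library (the iteration count and key length are illustrative
choices, not recommendations from OpenSSL):

  import getpass
  import hashlib
  import os

  password = getpass.getpass("KDF password: ")  # prompted, never on argv
  salt = os.urandom(16)                         # fresh random salt

  # PBKDF2-HMAC-SHA256; 600,000 iterations is an illustrative work factor
  key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000,
                            dklen=32)
  print(key.hex())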

Please report problems with the web pages to the maintainer
