The RISKS Digest
Volume 33 Issue 68

Saturday, 1st April 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Ifixme.com announces 'Right to Repair' program for your human body
via Henry Baker
In Gen Z's world of dupes, fake is fabulous—until you try it on
WashPost
Grindr warns Egyptian police may be using fake accounts to trap users
WashPost
A scammer tricked Instagram into banning influencers with millions of followers. Then he made them pay to recover their accounts.
ProPublica
Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT
Futurism
People talking about what AI will do to society, here's a niche example that's happening right now
TJStebbing
Google and Microsoft's chatbots are already citing one another in a misinformation sh*tshow
The Verge
Warning: AI-generated YouTube Video Tutorials Spreading Infostealer Malware
The Hacker News
AI-Powered Vehicle Descriptions: Save Money, Save Time, Sell More!
slightly redacted by PGN
Elon Musk and other tech leaders call for pause on 'dangerous race' to make AI as advanced as humans
CNBC
On using Microsoft's Bing Chat for programming
PGN
Microsoft Patched Bing Vulnerability That Allowed Snooping on Email, Other Data
Robert McMillan
DC Metro Will Retrofit Faregates To Cut Down On Fare Evasion
DCist
Metro operator investigated for using automation system without clearance
The Washington Post
Biden Acts to Restrict U.S. Government Use of Spyware
NYTimes
Flight problems, not turbulence, found in death of former White House official
WashPost
Researchers exploit vulnerabilities of smart-device microphones and voice assistants
techxplore.com
OpenSSL KDF and secure by default
OpenSSL
All of your Internet usage will be subject to government tracking and control.
Lauren Weinstein
Cryptocurrencies
Amy Castor
Pwn2Own Hackers Breach a Tesla Twice
Marco Marcelline
Voting vendor in Reality Winner's leak is coming to Texas
Texas Observer
Malicious Actors Use Unicode Support in Python to Evade Detection
Phylum via Monty Solomon
Progressives Across Nation Locked Out Of Accounts After CAPTCHA Asks 'Select All Squares That Contain A Woman'
Babylonbee
SF loses 150K daily office workers during pandemic
SanFranChron
Any friend that can be replaced by GPT-4 ...
Rob Slade
Info on RISKS (comp.risks)

Ifixme.com announces 'Right to Repair' program for your human body

Henry Baker <hbaker1@pipeline.com>
Sat, 1 April 2023 00:00:57 +0000
S. California, April 1, 2023.—Ifixme.com (http://Ifixyou.com) announced
today its foray into the medical self-repair business with its 'Right to
Repair' program for the human body.  Ifixme.com (http://Ifixyou.com) is
building on its successful self-repair and battery-replacement programs for
Medical Devices, and brings a host of interested volunteers to do teardowns,
write repair manuals, and participate in forums with many thousands of users
and professionals.  Ifixme.com has been a supporter of 'Right to Repair'
laws across the United States, and intends to stand up to the doctors' and
dentists' lobbies to enable ordinary people to perform their own procedures.


Elon Musk and other tech leaders call for pause on 'dangerous race' to make AI as advanced as humans (CNBC)

https://www.cnbc.com/2023/03/29/elon-musk-other-tech-leaders-pause-training-ai-beyond-gpt-4.html

  [Lauren later added this apt comment:

The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess
https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

  Yeah, you ain't kidding. -L
  PGN]


On using Microsoft's Bing Chat for programming

Peter Neumann <neumann@csl.sri.com>
Mon, 27 Mar 2023 14:17:11 PDT
Dani Barrack pointed out an interesting article on letting ChatBots write
critical code:

  Planting Undetectable Backdoors in Machine Learning Models
  https://arxiv.org/abs/2204.06974

This paper is full of RISKS-worthy warnings about what might *not* be
appropriate for generating code for systems with life-critical and other
stringent requirements.  It is worth reading by those who think it might be
a good idea.  PGN


Microsoft Patched Bing Vulnerability That Allowed Snooping on Email, Other Data (Robert McMillan)

ACM TechNews <technews-editor@acm.org>
Fri, 31 Mar 2023 12:22:00 -0400 (EDT)
Robert McMillan, *The Wall Street Journal*, 29 Mar 2023

Microsoft last month patched an issue discovered by security firm Wiz Inc.
in the Bing search engine that allowed unauthorized access to email and
other data. The researchers determined an error in the way applications were
configured on Microsoft's Azure cloud-computing platform could allow
unauthorized access to Bing users' Microsoft 365 emails, documents,
calendars, and other tools. The software giant said a small number of
applications using the Azure Active Directory login management service were
impacted by the misconfiguration issue. Wiz said it had no evidence the
issue had been used by anyone. In announcing in a blog post the issue had
been fixed, Microsoft offered ways in which companies and consumers can
better protect themselves from such unauthorized intrusions.


DC Metro Will Retrofit Faregates To Cut Down On Fare Evasion (DCist)

Gabe Goldberg <gabe@gabegold.com>
Thu, 23 Mar 2023 15:55:16 -0400
Metro says it will spend up to $40 million to redesign its new faregates,
making it harder to jump over them and evade paying the fare.  [...]

New faregates, which were installed across all 97 stations last year, now
have sensors that can detect when someone jumps them. That's the beep you
may often hear in stations. Metro spent $70 million on the faregate
replacement, which also added new features like larger and brighter
displays, bi-directional access, and improved safety features.  The old
ones, installed in 1990, had reached the end of their useful life.

Metro board members at the time didn't want to make the faregates too
cage-like, similar to NYC, so it didn't hurt the atmosphere of Metro
stations. But new General Manager Randy Clarke has put a renewed emphasis on
stopping fare evasion as the transit agency faces a fiscal cliff next year.

The transit agency released new data Monday saying 13% of Metrorail riders
did not tap in and pay for their rides, amounting to 40,000 fare evasions
each weekday during the first two-and-a-half months of 2023.

https://dcist.com/story/23/03/21/metro-will-retrofit-faregates-to-cut-down-on-fare-evasion/

 [How long will it take to catch $70M worth of offenders to make it
 worthwhile?  At an average fare of $5 and roughly 200,000 offenders each
 year, the answer is 70 years.  That's really nifty long-term planning.
 PGN]


Metro operator investigated for using automation system without clearance (The Washington Post)

Gabe Goldberg <gabe@gabegold.com>
Mon, 27 Mar 2023 16:52:39 -0400
The Washington Metrorail Safety Commission said it is investigating a train
operator, raising questions about the self-piloting system Metro is testing.

Metro has been testing ATO for more than a year as it moves toward returning
train operations to automatic piloting. Metrorail was designed for the ATO
system and had been operating that way for decades until a fatal train crash
14 years ago. Train movements have since been controlled manually by
operators in each train's cab.

The train operating in ATO earlier this month shot past the Innovation
Center station platform, said Max Smith, spokesman for the safety
commission. During its ongoing investigation, the commission discovered the
operator had used the ATO system multiple times, even though the commission
hasn't given the transit agency permission for its use.

“The evidence does show that this operator had been using it over the course
of that day and had previously used ATO,'' Smith said.

“When he was interviewed, he admitted he was curious to see if ATO would
work,'' Benson said. “Based on the investigation, there is no evidence this
is a systemic problem.''  [...]

Benson said the overrun occurred at a station where a team that is testing
and preparing Metro for ATO had not yet installed the necessary track
equipment that interacts with the ATO system, and also had not conducted
engineering tests.

https://www.washingtonpost.com/transportation/2023/03/24/metrorail-ato-train-operator/


Biden Acts to Restrict U.S. Government Use of Spyware (NYTimes)

Jan Wolitzky <jan.wolitzky@gmail.com>
Mon, 27 Mar 2023 18:11:53 -0400
President Biden on Monday signed an executive order restricting American
government use of a class of powerful surveillance tools that have been
abused by both autocracies and democracies around the world to spy on
political dissidents, journalists and human rights activists.

The tools in question, known as commercial spyware, give governments the
power to hack the mobile phones of private citizens, extracting data and
tracking their movements. The global market for their use is booming, and
some U.S. government agencies have studied or deployed the technology.

Commercial spyware, including Pegasus, made by the Israeli firm NSO Group,
has also been used against American government officials overseas. On
Monday, a senior administration official said that at least 50 U.S.
government personnel in at least 10 countries had been hacked with spyware,
a larger number than was previously known.

https://www.nytimes.com/2023/03/27/us/politics/biden-spyware-executive-order.html


Flight problems, not turbulence, found in death of former White House official (WashPost)

Monty Solomon <monty@roscom.com>
Sat, 25 Mar 2023 02:07:31 -0400
The flight was marked by a series of missteps, alerts and system issues
before the plane lurched violently in the sky, killing Dana Hyde, the NTSB
said.

https://www.washingtonpost.com/transportation/2023/03/24/dana-hyde-airplane-turbulence/


Researchers exploit vulnerabilities of smart-device microphones and voice assistants (techxplore.com)

Richard Marlon Stein <rmstein@protonmail.com>
Fri, 24 Mar 2023 08:55:31 +0000
https://techxplore.com/news/2023-03-exploit-vulnerabilities-smart-device-microphones.html

“The researchers developed Near-Ultrasound Inaudible Trojan, or NUIT (French
for *nighttime*) to study how hackers exploit speakers and attack voice
assistants remotely and silently through the Internet.''

Ultrasound exploit of assistants like Siri and Alexa via mobile
devices. Unwise to connect Siri or Alexa to your door locks.

RISKS-30.46 subj11 identified ultrasound surveillance hacks in SEP2017.


OpenSSL KDF and secure by default (OpenSSL)

Cliff Kilby <cliffjkilby@gmail.com>
Thu, 23 Mar 2023 11:39:03 -0400
OpenSSL is the hammer for just about every screw related to certificates and
encryption and has recently even added mainstream support for key derivation
functions (KDF). This class of functions allows for stretching a potentially
weak memorized secret into a more resistant authenticator in a systematic
manner.
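
As a concrete illustration of that stretching, here is a PBKDF2 call using
Python's standard library (an illustrative aside, not part of the OpenSSL
issue discussed below; the passphrase, salt handling, and iteration count
are made up for the example):

  # Minimal KDF sketch: PBKDF2-HMAC-SHA256 stretches a weak passphrase plus
  # a random salt into a fixed-length key.  Parameters are illustrative only.
  import hashlib, os

  salt = os.urandom(16)                       # per-secret random salt
  key = hashlib.pbkdf2_hmac("sha256",
                            b"correct horse battery staple",
                            salt,
                            600_000,          # iterations: the "stretching"
                            dklen=32)
  print(key.hex())                            # derived 256-bit key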

OpenSSL has been using passwords and passphrases for a long time for
protecting private keys, so there is a whole class of functions for use of
those secrets and even some guidance provided for them.
https://www.openssl.org/docs/manmaster/man1/openssl-passphrase-options.html
  If no password argument is given and a password is required then the user
  is prompted to enter one.

  pass:password
    The actual password is 'password'.  Since the password is visible to
    utilities (like 'ps' under Unix) this form should only be used where
    security is not important.

So far, so good. You can put the password in the command line, but it is
flagged appropriately and alternatives exist that shuffle the password from
memory to memory without being exposed to the process list.  Not so with
openssl's kdf module:
https://www.openssl.org/docs/manmaster/man1/openssl-kdf.html

  -kdfopt nm:v
    pass:string

Here the memorized secret is supplied as pass:string directly inside the
-kdfopt argument, on the command line, with no file: or env: alternative
documented, so it is exposed to the process list in exactly the way the
passphrase-options page warns against.
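
A minimal sketch of that exposure (assumptions: a Linux /proc filesystem,
and a stand-in child process rather than a real openssl invocation, so the
'secret' below is obviously fake):

  # Any secret passed as a command-line argument is readable by other local
  # users via the process list; /proc/<pid>/cmdline is what 'ps' reports.
  import subprocess, sys, time

  child = subprocess.Popen(
      [sys.executable, "-c", "import time; time.sleep(5)",
       "pass:hunter2-NOT-A-REAL-SECRET"])
  time.sleep(0.2)                      # give the child a moment to start

  with open(f"/proc/{child.pid}/cmdline", "rb") as f:
      argv = f.read().split(b"\0")
  print(argv)                          # the 'secret' appears in clear text
  child.terminate()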


All of your Internet usage will be subject to government tracking and control.

Lauren Weinstein <lauren@vortex.com>
Fri, 24 Mar 2023 15:35:26 -0700
It appears that a lot of people don't understand the implications of laws
like Utah's—which will extend beyond the state, and be copied by many
other states—involving limits on children accessing social media. In
order to prevent children from creating social media accounts by themselves,
it is required that *all* adult users of social media be identified via
government IDs. This is literally the beginning of Chinese-style control and
tracking of ALL Internet usage here in the U.S. Nothing less. -L

https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare


Cryptocurrencies (Amy Castor)

Gabe Goldberg <gabe@gabegold.com>
Sat, 25 Mar 2023 15:26:09 -0400
This chapter lays out the Biden administration's policy toward crypto.  It
is strident, as you'd expect just after a huge disaster like FTX.  This is
the no-coiner view coming from the highest levels of power.

Crypto bros and their pet politicians have long claimed that if you
overregulate crypto, you'll kill innovation. The White House is saying that,
for all the promises and hot air, there is no innovation here, so the path
is clear to regulate the hell out of you.

https://amycastor.com/2023/03/24/do-kwon-arrested-white-house-hates-crypto-coinbase-wells-notice-sec-charges-justin-sun-signature-sold-ftx-bahamas-party-fund-returns/


Pwn2Own Hackers Breach a Tesla Twice (Marco Marcelline)

ACM TechNews <technews-editor@acm.org>
Wed, 29 Mar 2023 11:41:43 -0400 (EDT)
Marco Marcelline, *PC Magazine*, 25 Mar 2023, via ACM TechNews

Participants in the Zero Day Initiative's Pwn2Own software exploitation
conference hacked technology from automaker Tesla twice, earning $350,000
and a Model 3 infotainment system.  The team from French security company Synacktiv
executed a time-of-check-to-time-of-use (TOCTOU) exploit against a Tesla
Gateway, then employed a heap overflow and an out-of-band write
vulnerability to gain access to and compromise the Model 3. Pwn2Own
describes a TOCTOU exploit as a “file-based race condition that occurs when
a resource is checked for a particular value, and that value changes before
the resource is used, invalidating the results of the check.''  SecurityWeek
said Tesla is expected to release patches to correct the flaws exposed by
the Synacktiv hacks.
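
To make the quoted definition concrete, here is a generic file-based TOCTOU
sketch in Python (this is not the Synacktiv exploit; the path and the checks
are hypothetical, purely for illustration):

  # Time-of-check-to-time-of-use: the property verified at check time can be
  # changed by an attacker before the file is actually used.
  import os

  path = "/tmp/settings.conf"        # hypothetical, attacker-writable location

  # Time of check: the path looks like an ordinary, readable regular file ...
  if os.access(path, os.R_OK) and not os.path.islink(path):
      # ... but if an attacker swaps it for a symlink to a sensitive file in
      # this window, the open() below (time of use) follows the symlink and
      # the earlier check is worthless.
      with open(path) as f:
          data = f.read()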


Voting vendor in Reality Winner's leak is coming to Texas (Texas Observer)

Douglas Lucas <dal@riseup.net>
Thu, 30 Mar 2023 21:37:12 +0000
First part of a series at the Texas Observer, Austin-based news magazine
founded in 1954:

https://www.texasobserver.org/reality-winner-vr-systems-whistleblower/

This article, authored by me, discusses the 2016 cyberattacks Reality Winner
disclosed—two related spearphishing offensives by Kremlin military
officers, first against election technology supplier VR Systems, then
against local Florida elections officials—in the context of the Texas
Secretary of State's office certifying the vendor's e-pollbooks for use in
elections statewide a little more than a year ago. I interview a county
information security officer, two county elections administrators, as well
as Winner's mother and lawyer, all of them Texans. Toward the end of the
piece, I discuss polarization and historical context around various
evidence, and around various lack of evidence.  Risks include spearphishing,
the proprietary nature of evidence preventing Congressional and public
oversight, lawsuits as propaganda, and more.

I'm looking to better understand the Texas Secretary of State's examiner
reports of electronic pollbooks and election management systems, so if
anyone likes my article and has expertise on these subjects, please feel
free to contact me offlist.

Oh, and there's a bit of anchor text in my article near the conclusion,
namely “computer security trainwrecks fill the news on the daily'' that
hyperlinks a certain email list regarding threats to computer systems.
What's the threat? Eternal September. (I jest...)

  [Great place to publish it.  Ronnie Dugger was long-time publisher of *The
  Texas Observer*, and he was influential in bringing many election
  technology problems to light—e.g., *The New Yorker* in November 1988,
  and *The Nation* Aug 16-23 2004.  See RISKS-7.70, 9.32, 33.47.  PGN]


Malicious Actors Use Unicode Support in Python to Evade Detection (Phylum via Monty Solomon)

Monty Solomon <monty@roscom.com>
Sat, 25 Mar 2023 12:52:59 -0400
Phylum uncovers a threat actor taking advantage of how the Python
interpreter handles Unicode to obfuscate their malware.

https://blog.phylum.io/malicious-actors-use-unicode-support-in-python-to-evade-detection
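
A minimal illustration of the underlying Python behavior (the snippet below
is not taken from the malicious packages; it only shows the interpreter
feature being abused): CPython applies NFKC normalization to identifiers
(PEP 3131), so visually unusual glyphs can silently alias ordinary names
such as print or exec.

  # CPython normalizes identifiers with NFKC, so the fullwidth name below is
  # the same identifier as the built-in 'print' once the source is parsed.
  import unicodedata

  print(unicodedata.normalize("NFKC", "ｅｘｅｃ"))     # -> 'exec'

  ｐｒｉｎｔ("called via a fullwidth identifier")      # calls built-in print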


Progressives Across Nation Locked Out Of Accounts After CAPTCHA Asks 'Select All Squares That Contain A Woman' (Babylonbee)

geoff goodfellow <geoff@iconia.com>
Mon, 27 Mar 2023 16:08:11 -0700
https://babylonbee.com/news/progressive-locked-out-of-bank-account-after-captcha-prompt-select-all-the-squares-that-contain-a-woman


SF loses 150K daily office workers during pandemic (SanFranChron)

geoff goodfellow <geoff@iconia.com>
Sat, 25 Mar 2023 21:16:11 -0700
City also drops 33K jobs in hotels, restaurants and retail in shift to
work and shop at home

Enough office workers left Downtown San Francisco during the pandemic to
fill almost four Giants games at Oracle Park.

The city has lost nearly 150,000 daily office workers since the start of the
pandemic in early 2020 during a shift to remote work and online shopping,
*the San Francisco Chronicle* reported, citing a city budget report.
<https://www.sfchronicle.com/sf/article/vacant-17804926.phpf>

The city has lost an estimated 147,303 daily office workers since the start
of the coronavirus pandemic, according to an analysis from the city's Budget
and Legislative Analyst's Office sent to Supervisor Connie Chan.

In March 2020, there were 245,505 office jobs in Downtown San Francisco.

Downtown also lost 32,688 jobs since 2019 in the hospitality, food service
and retail industries, according to the report.

The report studied economic challenges to Downtown, including the impact of
remote work on tax revenue from offices, how workers benefit small
businesses, vacant commercial space, diversifying industries and a lack of
housing.

A study conducted by Stanford University cited in the report said that,
before the pandemic, office workers would spend $168 per week near their
workplaces.  [...]

https://therealdeal.com/sanfrancisco/2023/02/28/sf-loses-150k-daily-office-workers-during-pandemic/


Any friend that can be replaced by GPT-4 ...

Rob Slade <rslade@gmail.com>
Wed, 29 Mar 2023 06:44:58 -0700
(I seem to have wandered into a number of digressions in composing this
piece, but they all seem to tie together, so I hope you'll bear with me ...)

Decades ago, I was at a teacher's conference.  I was in a session dealing
with computers in education.  The morning paper had published an article
about computers in education, and, particularly, using computers to teach,
and, therefore, replacing teachers.  Someone asked about this.  The
presenter thought for a moment, and replied that any teacher who could be
replaced by a computer, *should* be replaced by a computer.  His point was
that teaching was a complex task, and that any teacher who taught in such a
rote manner that he (or she) could be replaced by a machine would be better
off out of the profession, and the profession (and the education system)
would be better off without him (or her).

Which story I am relaying to lead into:

We are worrying about the wrong thing with regard to AI.

The programs DALL-E, ChatGPT, and others that rely on machine-learning
and pattern models derived from large data sets, have recently racked up an
impressive series of accomplishments.  They have produced some amazing
results.  Everyone is now talking about artificial intelligence as if it is
an accomplished fact.  It isn't.

These programs have been able to produce some absolutely amazing results.
But they have been able to produce amazing results for people who have been
able to learn how to use them.  That does not fit my definition of any kind
of intelligence, let alone an artificial one.  If the impressive results
can only be obtained by people who are willing to put in the time to learn
how to use these tools, then they *are* tools.  Just tools.  Complicated
and impressive tools, yes.  But just tools.  They do not have their own
intelligence.

Intelligence would require that the system would be able to provide
satisfactory results for pretty much anybody.  A person, and intelligence,
is able to query the requestor as to whether the results provided are
satisfactory.  If the results are not satisfactory, the intelligence is
able to query the requester and find out why not, and use this information
to modify the results until the results *are* satisfactory.  And that is,
of course, only one of the aspects of intelligence.  There are many others,
such as motivation.  So, while I'm willing to grant that these tools are
very sophisticated, complicated, and definitely useful developments, they
don't get us that much closer to actual artificial intelligence.

The results from these tools have created a great deal of interest, even in
the general populace.  It has particularly created interest within the
business community, and new investment in artificial intelligence projects
and companies is probably a good thing.  (Unless, of course, we are all on a
hiding to nothing and we never *will* get real artificial intelligence.
But let's assume for the moment that we will.)  It has also engendered a
good deal of discussion on the wisdom of pursuing artificial intelligence,
and the dangers of artificial intelligence.  Since my particular field is
dangers associated with information systems, I have been very interested in
all of this, and think it's a good thing.  We should be considering the
dangers, particularly the dangers, with regard to machine learning, that we
have created, and are perpetuating, bias in our systems, particularly when
the data sets that we use to train machine learning systems are,
themselves, collected, collated, and maintained, by artificial intelligence
systems.  Which may already be affected by various forms of bias that we
engendered in the first place, and have never realized are even there.

There is, however, one fairly consistent theme that appears in discussions
of the dangers of artificial intelligence, and which DALL-E, ChatGPT, and
their ilk have indicated is a false concern.  While it is primarily a
screaming point of the conspiracy theory and tin foil hat crowd, many
people are concerned about the possibility of what tends to be referred to
as *The Singularity*.  This is the hypothesis (and it is a fairly logical
hypothesis), that when we do, actually, get artificial intelligence, that
is truly intelligent, and can work on improving itself, that such a system
would advance so rapidly that there would be absolutely no way that we
could keep up, and it would, from our perspective, almost immediately
become so intelligent that we would have no chance of controlling it.  It
would rapidly become intelligent enough that any of our protections, which
are never perfect, would leave open a vulnerability which the system itself
could exploit, and therefore it would, again, almost immediately, from our
perspective, be beyond our control.  What happens at that point is open to
a variety of conjectures.  This intelligence could turn evil, from our
perspective, and wipe out the human race.  (Some people would consider this
a good thing.)  Or, it might create a kind of benevolent dictatorship,
managing our lives and having pretty much complete control of the entire
human race, since it would be able to commandeer all information systems,
which means basically every form of business, industry, entertainment, and
any other human activity.  Or, the artificial intelligence may simply take
us.  Or, well, there are all kinds of other options that people have
explored and theorized.

None of these options particularly scare me.

That's the wrong thing to worry about.  What we should be worrying about is
relying on artificial intelligence, and, particularly, these recent
examples.  These tools are not really intelligent.  They do not
understand.  They do not comprehend.  They do not appreciate.  They just
predict the likelihood of the next piece of output from patterns, in masses
of data, that they have been fed.  (I have mentioned, elsewhere, the fact
that what we are feeding them is possibly biasing them, and that the bias is
probably self-reinforcing.  And we'll come back to that point.)

I asked ChatGPT to write a sermon.  It did a very banal, pedestrian job.
When I pointed out some of the flaws, ChatGPT basically gave me back the
same thing, all over again.  It didn't understand my complaint: it just
responded based upon my statement.  It didn't understand my statement: my
statement was just a prompt to the system, and had similar enough terms to
the first prompt that the output was, basically, identical.

I gave a friend an opportunity of a trial with it.  He said that it
produced a reasonable Wikipedia article.

I think this is illustrative in ways that most people wouldn't expect.  I have
never thought highly of Wikipedia.  While I applaud the general concept, I
feel that, in actual implementation, Wikipedia is the classic example of
the pooling of ignorance.  When I first set out to assess Wikipedia, I, of
course, as an expert in the field, looked up the entry on computer
viruses.  It was terrible.  As far as I know, having checked it several
times in the intervening years (although I haven't looked at it recently),
it's still terrible.  At one point it had more than one factual error per
sentence.  And, of course, in those early, carefree, bygone days when I
still had some thought that maybe Wikipedia might be a useful exercise, I
made corrections to these errors.  Corrections which were, of course,
immediately rescinded by Wikipedia's editorial staff.

Wikipedia does not rely on expert opinion.  How could it?  The editorial
staff of Wikipedia do not know how to judge who is expert, and who is not,
on a given entry, or topic.  The original computer virus entry did, and as
far as I know still does, contain the common received wisdom on computer
viruses, with all of the mistakes, errors, and misconceptions, that the
common man holds about computer viruses.  Therefore, when I tried to
correct these errors, the Wikipedia staff felt that I was introducing
errors, and so they reverted back to their original mistake-ridden text.
For an actual expert, there is, actually, no point in even attempting to
correct the errors in Wikipedia.  Wikipedia relies upon the common man's
perception, and, therefore, it's pretty close to social media as a source
of information.  There is an enormous quantity, but there is not
necessarily very much quality.

(My take on, and attitude towards, Wikipedia, while formed many years ago
on the basis of the number of mistakes in the technical entries may be
[possibly unfairly] reinforced by the fact that after Gloria died,
Wikipedia removed all references to her from my entry in Wikipedia.  I found
this very personally hurtful, and, to this day, I have no idea why they did
it.)

Wikipedia relies upon entries available on the web, and therefore may rely
heavily on social media.  Wikipedia also goes by seniority, not by
expertise.  If you are higher up on the Wikipedia editorial food chain, you
can reverse any entry or correction that an expert makes.  Therefore, it is
no surprise that Wikipedia is riddled with errors, particularly in recent
discoveries, and in any area where expert opinion is of value.  Wikipedia
has become the Funk and Wagnalls of the information age.  It's widely
available, possibly useful in general cases, and very often wrong.

This is why my friend's further comment, that it made *the classic error*,
was also illustrative.  *The classic error* will be repeated, in many
articles, and postings, made on the Web, by those who think they know the
case, but are not necessarily fully informed.  This type of material will
be repeated, ad nauseam, on social media, thereby reinforcing the truth and
validity of this erroneous material.

And, of course, ChatGPT has been trained on social media.  ChatGPT has been
trained on material, and text, that could be gathered to give an indication
of how we humans speak in response to queries.  Or challenges.  (This is
also why ChatGPT is likely to become obnoxious and abusive if you challenge
it. That's the way people react on social media, and it's social media that
provides the material that has trained ChatGPT.)

ChatGPT, and DALL-E, the graphic, or art, generating version of the pattern
model tool, are simply responding, with patterns that they can predict from
a massive database that they have assessed, of what is to be produced in
response to any prompt.  It's simply using statistical models (very complex
statistical models, to be sure), to generate what the average human being
would generate, if challenged in the same way.  There is no understanding
on the part of either ChatGPT, or DALL-E, or any others of those pattern
model tools.  They do not understand.  They do not comprehend.  They don't
have to.  They just churn out what it is likely that a human being would
churn out in response to the same prompt.
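
To make that concrete, here is a deliberately tiny 'language model' in
Python: a bigram table built from a toy corpus, sampling the likeliest next
word.  The corpus and code are invented for illustration; real systems are
vastly larger, but the principle is the same: counts and probabilities, with
no understanding anywhere.

  # A toy next-word predictor: count which word follows which in a corpus,
  # then generate text by sampling from those counts.
  import random
  from collections import Counter, defaultdict

  corpus = ("the cat sat on the mat . the dog sat on the rug . "
            "the cat saw the dog .").split()

  nexts = defaultdict(Counter)
  for a, b in zip(corpus, corpus[1:]):
      nexts[a][b] += 1                      # bigram counts

  word, out = "the", ["the"]
  for _ in range(8):
      word = random.choices(list(nexts[word]),
                            weights=list(nexts[word].values()))[0]
      out.append(word)
  print(" ".join(out))                      # fluent-ish, meaning-free output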

I asked ChatGPT to produce various materials in recent tests.  What I got
was pedestrian and uninspired.  Well, of course it was.  ChatGPT is not
understanding, and doesn't have any way to obtain inspiration.  It's just
going to generate something in response to a prompt.  And it is going to
generate what most human beings would generate.  And most human beings are,
let's face it, lazy.  So, what most human beings would produce, when
challenged to produce an article, or a sermon, or a presentation outline,
would be pedestrian, banal, and uninspired.  It's the type of article that
you read in most trade magazines.  Vendors go to professional authors and
ask them to produce an article on blat.  The professional author does a
quick Google search on the topic, feels that they are expert, and turns out
banal, pedestrian, uninspired text.  There is nothing innovative, and there
is nothing in the material that leads to any item or idea that would spark
creative thought.  That's not what most human beings do, that's not what
most of the material on social media is, and so that's what ChatGPT
produces.

Many years ago, I ran across a quote which said that creativity is allowing
yourself to make mistakes.  Art is knowing which ones to keep.  ChatGPT
does make mistakes.  But most of them simply are not worth keeping.
ChatGPT doesn't think about what it's doing: it just predicts the most
likely next word that a human being would write in this stream of text.
So, ChatGPT isn't going to create anything that's
inspired, isn't going to create anything that's creative, isn't going to
produce much of anything that is much of use for anything, and if we fail
to understand this, we fail to realize what relying on ChatGPT can produce
for us.  Which is, basically, so much dross.

I have recently read many articles which assert that ChatGPT can provide
for us mundane letters, mundane article outlines, and mundane articles
themselves, which will be of a help in business.  But that is only because
we, as a society, have become accustomed to the mundane, and accept it.
And, if we continue to use ChatGPT for these types of purposes, we will, in
fact, produce more mundane dross, and, increasingly find that garbage
acceptable.  We are training ourselves to accept the banal, and the
uninformative.  Eventually we will train ourselves to accept a word salad
which is completely devoid of any meaning at all.

ChatGPT is becoming more capable, or at least more facile.  It is being
trained on larger and larger data sets.  Unfortunately, those data sets are
being harvested, by and large from social media, and by and large with the
aid of existing artificial intelligence tools.  Therefore, the fear that
some have raised, that we have already biased our artificial intelligence
tools by the data that we gave to them, is now being self-reinforced.  The
biased artificial intelligence tools that we created with biased data, are
now being used to harvest data, in order to feed to the next generation of
pattern model tools.  This means that the bias, far from being eliminated,
is being steadily reinforced, as is the bias towards meaningless dross.  If
we rely on these tools, that is, increasingly, what we are going to get.

And, with the reliance on artificial intelligence in the metaverse, that is
what we are going to get in the metaverse.  The metaverse is an incredibly
complex undertaking.  It is, if all the parts that we have been promised
are included, a hugely complex system, orders of magnitude more complex
than any we have yet devised, with the possible exception of the Internet,
and the World Wide Web and social media itself.  We will need to have
artificial intelligence tools to manage the metaverse.  And these tools are
going to have our existing biases, and are going to have the bias towards
uncreative, uninspired garbage.  And therefore, that's what the metaverse
is going to give us.

Increasingly readable, and convincing, garbage to be sure, but garbage
nonetheless.  Do we really want to be convinced, by garbage?

At any rate, in another test, I complained to ChatGPT that I was lonely.  I
mean, most people don't listen anyways, and most people don't listen very
well.  So I figured that ChatGPT would be at least as good as one of my
friends, who, after all, have disappeared, since they are terrified that
I'm going to talk about Gloria, or death, or grief, or pain, all of which
are taboo subjects in our society.

The thing is, ChatGPT doesn't know about the taboo subjects in our
society.  So, it gave me an actually reasonable response.  Now, it wasn't
great.  ChatGPT cannot understand what I am going through, and cannot
understand or appreciate the depths of my pain and loneliness.  But at
least it was reasonable.  It suggested a few things.  Now, they are all
things that I have tried.  But they were reasonable things.  It said to
talk to my friends.  As previously mentioned I can't.  When challenged,
ChatGPT fairly quickly goes into a loop, basically suggesting the same
things over and over again.  But it also suggested that I take up volunteer
work.  Now, of course, I knew this.  It is something that I suggest to
people who are in depression.  And I have done it.  And, it does help, to a
certain extent.  So, a half point, at the very least, for ChatGPT.

I can give more points than that to ChatGPT.  It doesn't give me
facile and stupid cliches.  It didn't say anything starting with *at least*.
It didn't tell me that Gloria was in a better place.  It didn't tell me that
bad things wouldn't happen to me if I only had more faith.  All of which
people have said to me.  And it's all very hurtful.  So ChatGPT at least
gets another half point for not being hurtful.  (If we are still trying for
the Turing test, at this point, I would say that, in order to pass, we would
have to make ChatGPT more stupid and inconsiderate.)

But I'm not willing to give ChatGPT very much credit at this point.  It's
not very useful.  It wasn't very analytical.  And I did challenge some of
its suggestions, to see what kind of response I got when I challenged
ChatGPT on various points.  I did sort of challenge it on the friend's
point, and it didn't get defensive about that.  So, at least another half
point to ChatGPT.

But, as I say, it's not very good.  It's as good as a trade rag article,
and it's probably as good as any Wikipedia article.  In other words, not
very good.  The material is pedestrian, and I don't think that bereavement
counselors have anything to worry about, quite yet.

I should also note that so far, I have the free version of ChatGPT, and
therefore I am not talking to GPT-4.  This is GPT-3.  So it's not as good
as the latest version.  And I would like to give the latest version a try,
but I strongly suspect that it wouldn't do all that much better.  But it
would be an interesting test.

Relying on ChatGPT, for anything but the absolute, most pedestrian tasks is
asking for trouble.  It can't understand.  It is going to make mistakes.
If you present it as an interface, you are asking for trouble.  And,
speaking of my test about loneliness and bereavement, I realize that I may
have prompted some idiot with a grief account to try to tie ChatGPT onto a
grief account, as a kind of automated bereavement counselor; that really is
asking for trouble.  Trying to use ChatGPT with people who are, in fact, in real
trouble, could create a disaster.  Please, those of you with grief
accounts, do not try this at home.  This is only for trained idiots, who
actually know that there is no such thing as artificial intelligence, and
realize that ChatGPT isn't that much of an advance on ELIZA.  (If you
don't know who ELIZA was, it passed the Turing test more than four decades
ago, and it only took two pages of BASIC code.)
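
For readers who have never met ELIZA, a toy responder in the same spirit
takes only a few lines (this is a made-up miniature, not Weizenbaum's
program, and Python rather than BASIC):

  # A toy ELIZA-style responder: reflect the user's words back via a handful
  # of regular-expression rules.  No state, no model, no understanding.
  import re

  RULES = [
      (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
      (re.compile(r"\bI feel (.*)", re.I), "How long have you felt {0}?"),
      (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
  ]

  def respond(text):
      for pattern, template in RULES:
          match = pattern.search(text)
          if match:
              return template.format(*match.groups())
      return "Please tell me more."

  print(respond("I am lonely"))      # -> Why do you say you are lonely?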

There is concern that adding the appearance of an emotional component to
computer systems, and particularly artificial intelligence systems, will
create dangerous situations for users.  This is a very realistic concern.
We have seen a number of instances, over at least half a century, where
individuals have attributed to, sometimes very simple systems,
intelligence, personality, and even concepts of a soul.

As only one aspect of the difficulties, but also the importance, of looking
at emotive, or affective, artificial intelligence, or any kind of
intelligence in any computer system, consider the case of risk analysis.
In information security, we need to teach students of the field that
penetration testing, and even vulnerability analysis, does not lead you
directly to risk analysis.  This is because penetration testing, auditing,
and vulnerability analysis are generally performed by outside specialists.
These people may be very skilled, and may be able to produce a great deal
that is of value to you, but there is one thing that they, signally, do not
know: the value of the assets that you are protecting.  The value, that is,
to you.  An asset, whether a system, a piece of information, or a database
of collected information, has a value to the enterprise that holds it.  But
it is only that enterprise, and the people who work there, who really do
understand the value of that asset: the value in a variety of ways, and
therefore the protections that must be afforded to that asset.  Therefore,
no outside firm can do a complete risk analysis, since
they do not understand, or fully comprehend, the value, or values, and the
range of different types of value, that the asset holds.  For the company.

Currently, our so-called artificial intelligence tools may be able to
perform some interesting feats.  But they do not understand.  And,
particularly in regard to affect and emotion, they do not understand, even
what these are, let alone how important they are.  Now, we can certainly
make some effort to instruct artificial systems as to certain aspects of
human behavior, and the indicators that the human may be in high states of
emotion.  However, the systems will have no understanding, no comprehension,
of these emotional states.  They will not understand the subtleties and
nuances of emotional states.  We can give them a set of directives as to how
to behave with regard to people, but they will not understand, they will
only behave.  This is a backstop solution, and it cannot be complete.  It is
akin to the difference between justice and law, in all of our human
societies.  Supposedly, we think of our legal systems as providing justice.
We even call institutions related to the legal system departments of
justice.  But we all know, in our heart of hearts, that there is a
difference between legal and right.  We all know that there are times when
our laws come to an unexpected situation, and are then unjust.  In the same
way, we cannot simply give a set of commands to a computer, as to how to
deal with a human that is in an emotional state, and expect that this will
address all possible situations.  Because the computers do not have an
understanding of emotion.

In this latter regard, I highly recommend reading *Affective Computing*, by
Rosalind Picard http://victoria.tc.ca/int-grps/books/techrev/bkaffcmp.rvw .
Her work looks not only at human factors engineering, but also at the
significance of affect, or some similar analogue, in regard to motivation
and decision in automated systems.

  [This long piece has obviously been written by ChatSLADE.  I include it as
  the last item in this issue in case you might think it is overly long --
  even though it is clearly relevant to the Open-AI and ChatBot items up
  front in this issue.  PGN]

Please report problems with the web pages to the maintainer
