

From: rmstein@ieee.org
Date: Wed, 17 Apr 2019 14:04:14 +0800

Two articles carrying Robert Wright's byline, both behind paywalls:

1) "Fallible machines, fallible humans", via https://www.straitstimes.com/opinion/fallible-machines-fallible-humans, retrieved 17 Apr 2019;

2) "Autonomous machines: industry grapples with Boeing lessons", via https://www.ft.com/content/f96478e0-59e0-11e9-939a-341f5ada9d40

The cited articles discuss automation-dependent systems (medical infusion pumps, aircraft, industrial robotic manufacturing) and their reliance on engaged humans to monitor their operation.

Today's AI systems cannot independently comprehend context: they can match patterns, but they cannot reason about a recognized pattern in a way that emulates a human mind.

No machine today can be programmed to maintain contextual awareness and independently act to preserve and protect human life during an emergency. An organization or individual expecting this outcome apparently believes that science fiction is real, and must be disabused of this fallacy.

In the FT and Straits Times articles, Mark Sujan of the University of Warwick asks, "How do we ensure that the system knows enough about the world within which it's operating? That's a complex thing."

As noted by Don Norman (see http://catless.ncl.ac.uk/Risks/12/48#subj7.1 for example),
"The real RISK in computer system design is NOT human error. It is designers who are content to blame human error and thereby wash their hands of responsibility."

Demonstrating system behavior when subjected to erroneous or negative input stimulus can reveal more about system safety-readiness and resilience than demonstration of behavior under nominal stimulus conditions. Anomalous system states, in a simulator, can instruct and refine operational readiness.
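As a minimal illustration of this point, the sketch below probes a flow-rate check with erroneous and negative stimuli as well as nominal ones. The function, its name, and its limits are hypothetical, not drawn from the cited articles; the point is that the alarm paths only reveal themselves under anomalous input.

```python
import math

def check_infusion_rate(rate_ml_per_hr, hard_limit_ml_per_hr=1200.0):
    """Classify a flow-rate reading: 'ok' for nominal input,
    an alarm string for anomalous input.  (Hypothetical limits.)"""
    # Reject non-numeric readings (bool is a subclass of int, so exclude it).
    if isinstance(rate_ml_per_hr, bool) or not isinstance(rate_ml_per_hr, (int, float)):
        return "alarm: non-numeric reading"
    if math.isnan(rate_ml_per_hr):
        return "alarm: NaN reading"
    if rate_ml_per_hr < 0:
        return "alarm: negative rate"
    if rate_ml_per_hr > hard_limit_ml_per_hr:
        return "alarm: rate exceeds hard limit"
    return "ok"

# Nominal stimulus exercises only the happy path ...
assert check_infusion_rate(125.0) == "ok"
# ... while erroneous/negative stimuli exercise the safety envelope.
assert check_infusion_rate(float("nan")).startswith("alarm")
assert check_infusion_rate(-5.0).startswith("alarm")
assert check_infusion_rate(99999.0).startswith("alarm")
```

Running the anomalous cases in a simulator, rather than waiting for them in the field, is precisely the kind of rehearsal the paragraph above advocates.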

Successful and effective system operation depends on informed, trained, and engaged human oversight. Safety-critical system operators must possess perspicacity. Clear indicators of anomalous behavior, and insightful operator reaction to them, are essential to a safe outcome.

