

rmstein@ieee.org
Date: Sun, 7 Oct 2018 12:08:25 +0800

https://www.washingtonpost.com/news/posteverything/wp/2018/10/05/feature/doctors-are-surprisingly-bad-at-reading-lab-results-its-putting-us-all-at-risk

Physicians make mistakes interpreting lab results, assessing diagnostic images, prescribing medicine, etc. These errors, rooted in incorrect assessment of patient symptoms, history, and diagnostic evidence, can lead to fatal outcomes or to expensive mitigation.

The Agency for Healthcare Research and Quality (AHRQ) of the US Department of Health
and Human Services estimated in 2014 that 5% of outpatients experience misdiagnosis, and that 13% of emergency room patients are misdiagnosed for stroke
(see https://psnet.ahrq.gov/perspectives/perspective/169/diagnostic-errors).

As AI-based assistance -- robo-medicine -- encroaches on medical specializations, a significant risk arises from the reference training data used to construct these platforms. The risk centers on who, or what, arbitrates between "correct" and "incorrect", or "pass" and
"fail", machine-generated diagnostic conclusions and therapeutic recommendations. Physicians will be challenged to justify and pursue robo-medicine's diagnostic findings and therapeutic recommendations, which rest on "probably approximately correct" (PAC) learning techniques.

Robo-medicine's viability depends on the training data used to construct the core decision engine: the artificial intelligence or neural network framework built to generate a presumably viable diagnosis from patient symptoms, physiological data, and history. Because physicians make mistakes assessing this same information, the training inputs and their labeled outputs inherit human judgment, however imperfect.
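A minimal, hypothetical sketch of that inheritance, using scikit-learn on synthetic data (the features, the "subtle presentation" error pattern, and all rates are invented for illustration, not drawn from any real platform): if physician-adjudicated training labels systematically miss one class of cases, the trained model reproduces the same blind spot on patients it has never seen.

  # Hypothetical illustration: physician labeling bias is inherited by the model.
  # Assumes numpy and scikit-learn; all data and rates are synthetic.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  # Synthetic "patients": 10 physiological features and a hidden true diagnosis.
  n, d = 20_000, 10
  X = rng.normal(size=(n, d))
  w = rng.normal(size=d)
  y_true = (X @ w > 0).astype(int)

  # Physician-adjudicated training labels: positive cases with a "subtle"
  # presentation (small value on feature 0) are systematically missed.
  subtle = np.abs(X[:, 0]) < 0.3
  y_label = np.where((y_true == 1) & subtle, 0, y_true)

  X_tr, X_te, y_tr, _, _, y_true_te, _, subtle_te = train_test_split(
      X, y_label, y_true, subtle, test_size=0.25, random_state=0)

  model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
  pred = model.predict(X_te)

  # The model reproduces the physicians' blind spot on unseen patients: recall on
  # "subtle" true positives falls far below recall on clearly presenting ones.
  pos = y_true_te == 1
  print("recall, obvious positives:", pred[pos & ~subtle_te].mean().round(3))
  print("recall, subtle positives: ", pred[pos & subtle_te].mean().round(3))

The point of the toy example is only that a learner optimized against imperfect human adjudication has no independent access to the ground truth those adjudicators missed.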

Without physician intervention, an automated patient diagnostic/treatment life cycle, spanning pre-existing and chronic conditions, blood/urine chemistry, diagnostic image analysis, surgical robots (robots with knives), and prescription generators, comprises a future medical-industrial ecosystem that can amplify misdiagnosis frequency and severity. Without independent and continuous monitoring, reporting, and correction, medical error clusters are a likely outcome. A proactive and concurrent maintenance and oversight life cycle is imperative to mitigate emergent risks.
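One hypothetical sketch of what "continuous monitoring" could mean in practice (the class, window size, and the 5% baseline are illustrative assumptions, not a prescribed mechanism): track adjudicated outcomes for recent automated diagnoses and escalate for human review when the rolling misdiagnosis rate drifts above an agreed baseline.

  # Hypothetical rolling monitor for automated-diagnosis outcomes.
  from collections import deque

  class MisdiagnosisMonitor:
      """Tracks confirmed outcomes for the last `window` automated diagnoses."""

      def __init__(self, window: int = 500, baseline_rate: float = 0.05):
          self.outcomes = deque(maxlen=window)   # True = later confirmed wrong
          self.baseline_rate = baseline_rate

      def record(self, was_misdiagnosis: bool) -> None:
          self.outcomes.append(was_misdiagnosis)

      def alert(self) -> bool:
          """Flag once the rolling error rate exceeds the baseline."""
          if len(self.outcomes) < self.outcomes.maxlen:
              return False                       # not enough evidence yet
          rate = sum(self.outcomes) / len(self.outcomes)
          return rate > self.baseline_rate

  # Example: feed in adjudicated outcomes as they arrive; escalate on alert.
  monitor = MisdiagnosisMonitor(window=500, baseline_rate=0.05)
  monitor.record(False)
  if monitor.alert():
      print("rolling misdiagnosis rate above baseline -- escalate for review")

Any real deployment would need independent custody of the outcome data; a monitor operated by the platform vendor alone defeats the purpose.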

Accountability and traceability must remain with the physician in charge of patient care. It would be irresponsible and dangerous, though possibly cost-effective, to allow robo-medicine dispensation without physician oversight. Publication of patient life cycle experience, including automated misdiagnosis and maltreatment incidents, is essential to enable independent analysis. How to achieve this reporting while preserving patient confidentiality, privacy, and anonymity poses a significant and sustained challenge.

Procedures are required to govern robo-medicine's therapeutic analysis, findings, and recommendations. A misdiagnosis or incorrect therapy schedule must be quickly reported to the FDA's MAUDE (Manufacturer and User Facility Device Experience) repository. An unrecognized and unchallenged diagnostic or therapeutic defect escape in a hospital emergency room may be catastrophic.

As technological risk multiplies in the medical-industrial complex, elevated financial and legal penalties against suppliers are needed to deter irresponsible product deployment. Robo-medicine platforms must be exempt from indemnification should a physician initiate suspect, incorrect, or life-threatening therapeutic recommendations and procedures; mandatory peer consultation is therefore a requirement.

If the medical-industrial complex sustains caveat emptor (buyer beware) as its business model, independent and conflict-free review of product viability and effectiveness becomes mandatory. Robo-medicine manufacturing and qualification processes for software and hardware must become transparent.

Consumer trust and confidence accrue from evidence that supports them, not from marketing or propaganda. Compulsory, unvarnished reporting of defect escapes, including misdiagnoses and questionable therapeutic recommendations, is necessary to reveal robo-medicine's flaws. Regulatory governance, enforcement, and vigilance must strengthen to improve patient outcomes and suppress robo-medicine's potential to accelerate misdiagnosis.

