News & Articles

Medical Ethics and Legal Liability

Artificial Intelligence
G. Wayne Moore, B.Sc., MBA, FASE
03/12/19

“Over the next few years, the role of AI will continue to grow but ethical and legal issues are yet to be addressed.”

-Daniel Sokol, PhD, Medical Ethicist

As with any exciting and potentially game-changing technology, the initial hyperbole about its value and the claims made for its future often run wildly ahead of the underlying science. They almost always also run ahead of two other key areas: the potential ethical issues of implementing the technology, and the medico-legal issues that will absolutely come into play. Artificial Intelligence (AI) is currently defined as “a machine’s ability to make decisions and perform tasks that simulate human intelligence and behavior.” As a matter of fact, AI is somewhat at a disadvantage in the mainstream of public opinion due to its long history of scaring the heck out of us in Hollywood movies like 2001: A Space Odyssey (“Hello, Dave…”), The Terminator (“I’ll be back”), and WarGames (“How about a nice game of chess?”).

Given that perception, how will patients feel about a “machine” giving a diagnosis? And if that AI-generated diagnosis is incorrect, how exactly do you hold an algorithm accountable? Who, you might ask, will be legally blamed and sued if the AI proves faulty and there is a negative patient outcome? Easy: the algorithm developer, the original manufacturer of the device, the HTM/IT departments, the hospital in general, the clinician, and the FDA for clearing the device (well, maybe not the FDA; it is too hard, too expensive, and takes too long to sue a federal agency and collect).

From an ethics perspective, how can doctors, for example, obtain informed consent from patients if no one clearly understands how the AI’s deep-learning algorithm works because it is too complicated, or when its error rate (i.e., false negatives and false positives) is unknown? (Deep learning: the ability of machines to autonomously mimic human thought patterns through artificial neural networks composed of cascading layers of information.) And what if the algorithm contains or develops biases through machine learning (a facet of AI focused on algorithms that allow machines to learn and change without being explicitly programmed when exposed to new data), thereby discriminating against certain types of patients: the young, the old, the rich, the poor, men or women, or even New England Patriots fans?
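To make that concern concrete, here is a minimal, purely illustrative Python sketch of the kind of subgroup error-rate audit such questions imply. The patient records, group labels, and model outputs below are all invented for illustration; a real audit of a deep-learning diagnostic system would be far more involved.

# Illustrative sketch: comparing a diagnostic model's error rates by patient
# subgroup. All data here is hypothetical and invented for this example.

from collections import defaultdict

# (true_label, predicted_label, group) for a handful of hypothetical patients;
# 1 = disease present, 0 = disease absent.
results = [
    (1, 1, "young"), (1, 0, "young"), (0, 0, "young"), (0, 1, "young"),
    (1, 1, "old"),   (1, 0, "old"),   (1, 0, "old"),   (0, 0, "old"),
]

counts = defaultdict(lambda: {"fn": 0, "fp": 0, "pos": 0, "neg": 0})
for truth, pred, group in results:
    c = counts[group]
    if truth == 1:
        c["pos"] += 1
        if pred == 0:
            c["fn"] += 1  # false negative: disease missed
    else:
        c["neg"] += 1
        if pred == 1:
            c["fp"] += 1  # false positive: healthy patient flagged

for group, c in counts.items():
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    print(f"{group}: false-negative rate {fnr:.0%}, false-positive rate {fpr:.0%}")

In this toy data the false-negative rate differs sharply between the two groups, and that is exactly the kind of hidden bias the informed-consent question runs into: the algorithm can treat patients differently even though no one programmed it to.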

I understand that AI is coming to healthcare, and I know it is coming fast, but we have some serious thinking to do before we turn the controls over to HAL.

Until next month’s newsletter: stay out of the Matrix and brush up on your cognitive computing skills.

 

Wayne

 

About the Author, G. Wayne Moore:
A 30-year veteran of the diagnostic ultrasound market, Wayne has held senior-level positions with several major medical equipment manufacturers, including Honeywell Medical Systems and Siemens Medical Solutions. Wayne has been directly involved in the development and commercialization of more than 15 technologically intensive ultrasound systems. He is widely published in the diagnostic ultrasound literature, a sought-after speaker at medical imaging conferences, has served as an expert witness in multiple ultrasound litigations, and holds more than 16 United States ultrasound-related patents. Wayne obtained his MBA from the University of Denver – Daniels College of Business.

He was elected as a Fellow of the American Society of Echocardiography (FASE) in 2009.

Acertara Labs
Correspondence: Dave Dallaire
1900 South Sunset Street, Suite F, Longmont, CO 80501, USA
Email: [email protected]
www.acertaralabs.com

March 11, 2019 Newsletter