Viewpoint: How to tell patients AI is part of their care

As artificial intelligence use expands in healthcare, physicians must know how to properly explain the role of AI to patients and address the ethical concerns that arise, according to a commentary published in the American Medical Association's Journal of Ethics.

The commentary was written by Daniel Schiff, a PhD student at the Georgia Institute of Technology in Atlanta who studies AI and its intersection with social policy, and Jason Borenstein, PhD, director of graduate research ethics programs at the institute.

Here's how physicians can address several ethical concerns around telling patients AI is involved in their care:

1. Informed consent. One ethical challenge stemming from AI in healthcare is the difficulty of obtaining a patient's informed consent to use a novel AI device.

"The novelty and technical sophistication of an AI device places additional demands on the informed consent process," the authors said. "When an AI device is used, the presentation of information can be complicated by possible patient and physician fears, overconfidence or confusion."

Additionally, physicians must be able to effectively explain to patients how an AI device works for an informed consent process to proceed appropriately, the authors said.

Assuming the physician is informed about the AI technology, he or she should explain the basic nature of the technology to the patient and distinguish between the roles human caregivers will play during each part of the procedure and the roles the AI/robotic system or device will play, the authors said.

2. Patient perceptions of AI. Patients and providers hold varying perceptions of AI, including concern about potential medical errors.

"In addition to delineating the role of the AI system, the physician can address the patient's fears or overconfidence by describing the risks and potential novel benefits attributable to the AI system," the authors said.

For example, beyond sharing that they have performed a procedure with an AI system in the past, the physician should describe studies comparing the system's performance to that of human surgeons.

"In this way, the patient's inaccurate perceptions of AI can be countered with a professional assessment of the benefits and risks involved in a specific procedure," the authors said.

3. Potential medical errors and AI. Identifying who is morally responsible and perhaps legally liable for a medical error that involves AI technology is often complicated by the "problem of many hands," the authors said. This problem refers to the challenge of attributing moral responsibility when the cause of patient harm is distributed among several persons or organizations. 

"A first step toward assigning responsibility for medical errors (thus hopefully minimizing them in the future) is to disentangle which people and professional responsibilities might have been involved in committing or preventing the errors," the authors said.

These actors may include the coders and designers who created the technology; the physicians responsible for understanding the technology; the medical device companies that sell the technology; and the hospitals responsible for ensuring best practices when using AI systems.

The February issue of the AMA Journal of Ethics includes several perspectives on AI in healthcare.

