Nurses vs. AI: What happens when nurses disagree with the AI's assessment

As artificial intelligence becomes more integrated into hospitals and patient care, questions are arising about when and how often nurses can override the algorithm, and whether they should face disciplinary action for wrong decisions, The Wall Street Journal reported June 15.

Melissa Beebe, RN, an oncology nurse at Sacramento-based UC Davis Medical Center, described how an alert told her one of her patients had sepsis. She was sure the alert was wrong and that the algorithm wasn't taking the patient's leukemia into account; however, hospital rules require nurses to follow protocols when a patient is flagged for sepsis, she said. To override the AI, she would have to get physician approval, and she said she could face disciplinary action if she turned out to be wrong.

Ms. Beebe followed the AI prompt and drew blood from the patient, even though the draw could expose him to infection and increase his costs.

"When an algorithm says, 'Your patient looks septic,' I can't know why. I just have to do it," Ms. Beebe told the Journal. "I'm not demonizing technology, but I feel moral distress when I know the right thing to do and I can't do it."

The test came back negative for sepsis.

UC Davis told the Journal that algorithms are meant as a starting point for clinical assessment, and that protocols such as drawing blood after a sepsis alert are recommended but not required. Nurses do not face disciplinary action for overriding an algorithm "unless it is something that is blatantly against standards of care."

"If a nurse feels strongly this does not make sense for their patient they should use their clinical judgment" and contact the physician, the medical center told the Journal. "The ultimate decision-making authority resides with the human physicians and nurses."

Technology like artificial intelligence can be a powerful tool in medicine when used alongside humans to help assess, diagnose and treat patients, experts told the Journal. However, it is sometimes implemented without adequate training or flexibility, nurses and clinicians said. Some said they feel pressure from hospital administrators to defer to the algorithm.

A National Nurses United survey of 1,042 registered nurses found 24 percent said they had been prompted by AI to make choices about patient care and staffing that they believed "were not in the best interest of patients based on their clinical judgment and scope of practice." Of those, 17 percent reported they were permitted to override the algorithm, while 31 percent weren't allowed and 34 percent said they needed a physician's or supervisor's permission.

While some nurses worry about their ability to exercise their own judgment in treating patients, nurse trainers are finding that others rely too heavily on the AI to tell them what steps to take.

Jeff Breslin, a registered nurse at Sparrow Hospital in Lansing, Mich., has been working at the Level 1 trauma center since 1995 and training nurses and students for years. He said he's noticed newer, digitally native nurses often trust the algorithm over their own observations.

Clinicians warned that nurses who are penalized for overriding the AI will also begin to rely on it more heavily. Some technology developers are trying to defuse the nurse-versus-AI conflict by capitalizing on nurse behavior to improve patient outcomes.

A team at New York City-based Columbia University developed a predictive model that quantifies a nurse's intuition and uses it as an early warning system. The AI tracks the frequency and type of surveillance a nurse performs on a patient. Researchers found a strong correlation between patient deterioration and increased nurse activity accompanied by notes such as "seems off" or "patient not as alert." The algorithm, called Concern, scores patients as high risk, at risk or no risk based on that nurse activity.
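To make the idea concrete, here is a minimal sketch of how such a nurse-activity risk score could work. This is purely illustrative: the actual Concern model is not described in detail in the article, and every name, feature, weight and threshold below is a hypothetical assumption, not the Columbia team's method. The only premise taken from the source is that unusually frequent nurse surveillance, especially with worried free-text notes, signals rising risk.

```python
# Hypothetical sketch of a Concern-style early warning score.
# All field names, weights and thresholds are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SurveillanceEvent:
    timestamp: datetime
    kind: str  # e.g. "vitals_check", "assessment", "comment"

def risk_tier(events: list[SurveillanceEvent],
              now: datetime,
              window: timedelta = timedelta(hours=12),
              baseline_per_window: float = 4.0) -> str:
    """Return "high risk", "at risk" or "no risk" from recent nurse activity.

    Counts surveillance events in the trailing window and compares the
    volume to an assumed baseline for the unit; free-text comments
    (where notes like "seems off" would appear) are weighted extra.
    """
    recent = [e for e in events if now - e.timestamp <= window]
    activity_ratio = len(recent) / baseline_per_window
    worry_notes = sum(1 for e in recent if e.kind == "comment")
    score = activity_ratio + 0.5 * worry_notes
    if score >= 3.0:
        return "high risk"
    if score >= 1.5:
        return "at risk"
    return "no risk"
```

The design point, as the article frames it, is that the input is nurse behavior rather than a replacement for it: the more often a nurse checks on a patient and documents unease, the higher the score climbs, turning clinical intuition into a quantifiable signal.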

The AI tool has been implemented at Boston-based Brigham & Women's Hospital. Although nurses were hesitant to use it, hospital leaders assured them it was intended to support nurses' clinical judgment. When the algorithm scores a patient at risk, it prompts nurses to use critical thinking to figure out what's going on, Patricia Dykes, PhD, RN, Brigham & Women's program director of research for the Center for Patient Safety Research and Practice, told the Journal. Implemented as an early warning system, AI tools can increase nurse autonomy, Dr. Dykes said.

Initial results show the model warns of deterioration five to 26 hours earlier than traditional methods.
