AMA: IT experts must enlist medical ethicists, clinicians to strike potential biases in AI health tech

While enhancing health technologies such as artificial intelligence tools has the potential to improve the patient experience and care outcomes, it can also intensify longstanding inequities in health systems across the U.S., according to James Madara, MD.

In an Aug. 20 viewpoint article published by the American Medical Association, Dr. Madara, who serves as CEO and executive vice president of the association, highlighted the threats AI poses to healthcare equity across the country.

There's a threat "within AI to amplify existing biases when this evolving technology is used to support clinical decision-making, diagnostic results, predictive analytics and similar functions," he wrote, adding that biases can occur in almost every aspect of healthcare AI, from collecting and interpreting training data sets to creating machine learning algorithms and modeling conclusions for physicians diagnosing disorders and implementing treatment courses.

Regarding COVID-19 and AI, Dr. Madara said the severely disproportionate impact the virus has on marginalized communities, including minorities, the elderly and the chronically ill, must not be exacerbated by AI models that amplify bias. Moving forward, he wrote, tech experts must work with medical ethicists and clinicians to identify all potential algorithmic bias based on factors including race or ethnicity, age, gender, socioeconomic status and location.

"Developers and clinicians should have a shared understanding of not only what an algorithm does and to whom it applies; they must also know what it cannot do and to whom it should not apply," Dr. Madara wrote.

Copyright © 2021 Becker's Healthcare. All Rights Reserved.