In an Aug. 20 viewpoint article published by the American Medical Association, Dr. James Madara, the association's CEO and executive vice president, highlighted the threats AI poses to healthcare equity across the country.
There’s a threat “within AI to amplify existing biases when this evolving technology is used to support clinical decision-making, diagnostic results, predictive analytics and similar functions,” he wrote. He added that biases can arise in nearly every aspect of healthcare AI, from collecting and interpreting training data sets to building machine learning algorithms and modeling conclusions for physicians as they diagnose disorders and implement treatment courses.
Regarding COVID-19 and AI, Dr. Madara said the severely disproportionate impact the virus has on marginalized communities, including minorities, the elderly and the chronically ill, must not be exacerbated by AI models that amplify bias. Moving forward, he wrote, tech experts must work with medical ethicists and clinicians to identify all potential algorithmic bias based on factors such as race or ethnicity, age, gender, socioeconomic status and location.
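For illustration only, and not drawn from the viewpoint article: one common way teams check for the kind of subgroup bias Dr. Madara describes is to compare a model's error rates across demographic groups. The sketch below assumes a table of predictions with hypothetical column names ("race_ethnicity", "y_true", "y_pred") and compares false negative rates by group.

```python
# Minimal, illustrative subgroup audit. Column names are hypothetical
# placeholders; a real audit would cover race or ethnicity, age, gender,
# socioeconomic status and location, and be reviewed with clinicians
# and ethicists.
import pandas as pd

def false_negative_rate(group: pd.DataFrame) -> float:
    """Share of true positives the model missed within one subgroup."""
    positives = group[group["y_true"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["y_pred"] == 0).mean())

def audit_by_subgroup(df: pd.DataFrame, factor: str) -> pd.Series:
    """False negative rate for each level of a demographic factor."""
    return df.groupby(factor).apply(false_negative_rate)

# Toy example: group B's positive cases are missed far more often than group A's.
df = pd.DataFrame({
    "race_ethnicity": ["A", "A", "B", "B", "B", "A"],
    "y_true":         [1,   1,   1,   1,   0,   0],
    "y_pred":         [1,   0,   0,   0,   1,   0],
})
print(audit_by_subgroup(df, "race_ethnicity"))
```

A large gap between groups in a check like this would not prove bias on its own, but it flags where developers and clinicians should look more closely at the training data and the model's intended population.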
“Developers and clinicians should have a shared understanding of not only what an algorithm does and to whom it applies; they must also know what it cannot do and to whom it should not apply,” Dr. Madara wrote.