With an approaching federal deadline, healthcare and legal experts have developed a framework for evaluating the use of AI-powered algorithms.
As AI, clinical algorithms and predictive analytics become more prevalent in healthcare, HHS finalized a rule April 26 to ensure that these tools do not discriminate "on the basis of race, color, national origin, sex, age and disability."
CMS-funded entities must comply with the rule by May 6.
Concerned about legal liability, hospitals and other healthcare organizations may discontinue sex-inclusive algorithms, potentially causing harm. Faculty members from the University of Maryland's law and medical schools created a resource to guide healthcare organizations through the legal complexities of using these tools.
For example, the framework recommends analyzing whether a tool would be less accurate if the patient's sex were excluded.
"Deciding the legality and appropriateness of sex's inclusion requires going beyond the math to probe the 'why' for inclusion," according to Katherine Goodman, PhD, professor at the University of Maryland School of Medicine in Baltimore and lead author of the framework.
The use of sex in clinical algorithms is appropriate and lawful when it is based on biological factors, Dr. Goodman said in a Jan. 15 news release from the university. "But if risk differs between men and women for nonbiologic reasons, such as sex-based stereotypes or unconscious biases in medical treatment, that can make adjusting algorithmic predictions for sex unfair and likely unlawful."
The framework was published in The New England Journal of Medicine.