Viewpoint: 5 steps to battle AI bias

Andrea Park

Of the many challenges that arise from introducing artificial intelligence into healthcare, perhaps the most pressing is algorithmic bias, in which discriminatory tendencies become baked into an AI system's decision-making.

In an op-ed for the journal Nature, Matt Kusner, PhD, an associate professor in the department of computer science at University College London, and Joshua Loftus, PhD, an assistant professor in the department of technology, operations, and statistics at New York University, outlined a methodology for developing fairer AI models.

"Only by unearthing the true causes of discrimination can we build algorithms that correct for these," they wrote, positing that "causal models" are the key, as they make explicit the underlying processes — discriminatory or otherwise — that drive a model's outputs.
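To illustrate the idea behind causal models of discrimination, here is a minimal sketch in Python. It assumes a toy structural causal model (the variable names and coefficients are hypothetical, not from the op-ed): a protected attribute and a latent ability jointly cause an observed feature, and we check whether a predictor's decision would change in the counterfactual world where only the protected attribute is flipped.

```python
import random

def generate_individual(rng):
    # Toy structural causal model (illustrative assumption, not from the op-ed):
    # protected attribute A and latent ability U jointly cause feature X.
    a = rng.randint(0, 1)          # protected attribute (e.g. group membership)
    u = rng.gauss(0.0, 1.0)        # latent ability, generated independently of A
    x = u + 0.8 * a                # observed feature inherits bias from A
    return a, u, x

def counterfactual_feature(a, u):
    # Re-run the same structural equation with A flipped, holding U fixed.
    return u + 0.8 * (1 - a)

def biased_score(x):
    # Naive predictor that relies on the biased feature alone.
    return 1 if x > 0.4 else 0

def fair_score(u):
    # Predictor that uses only a variable not caused by A; its decision
    # cannot change under a counterfactual flip of A.
    return 1 if u > 0.0 else 0

rng = random.Random(0)
flips_biased = flips_fair = 0
n = 10_000
for _ in range(n):
    a, u, x = generate_individual(rng)
    x_cf = counterfactual_feature(a, u)
    flips_biased += biased_score(x) != biased_score(x_cf)
    flips_fair += fair_score(u) != fair_score(u)  # U is unchanged by the flip

print(f"biased predictor flips on {flips_biased / n:.1%} of counterfactuals")
print(f"fair predictor flips on {flips_fair / n:.1%} of counterfactuals")
```

The point of the sketch is that only a causal model — the explicit structural equations — lets us pose the counterfactual question at all; a purely correlational view of the data offers no way to "flip" the protected attribute while holding everything else fixed.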

The op-ed offers five guidelines for using causal models responsibly and ensuring that they achieve the unbiased results they were designed to pursue:

1. Collaborate across fields.

2. Partner with stakeholders.

3. Make the workforce equitable.

4. Identify when algorithms are inappropriate.

5. Foment criticism.

"Algorithms are increasingly used to make potentially life-changing decisions about people. By using causal models to formalize our understanding of discrimination, we must build these algorithms to respect the ethical standards required of human decision-makers," the authors concluded.

