IBM takes steps to prevent algorithmic bias in AI

Julie Spitzer

IBM scientists have developed a new safeguard aimed at reducing algorithmic bias in artificial intelligence, Futurism reports.

Under its Trusted AI effort, IBM Research published a paper describing a transparency document, dubbed the "supplier's declaration of conformity" (SDoC), that outlines the safety and product testing an AI algorithm and the data underlying it have undergone.

The researchers suggest an AI SDoC would answer questions to help evaluate an algorithm's performance, such as: "Was the dataset and model checked for biases?" and "Was the service checked for robustness against adversarial attacks?"
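The paper does not prescribe how such checks are carried out, but as a rough illustration of the kind of dataset bias check the first question refers to, the sketch below computes a simple disparate-impact ratio (the positive-outcome rate for one group divided by that of another) on toy data. The records, group labels, and 0.8 threshold are hypothetical examples, not anything specified by IBM.

    # Illustrative only: a minimal disparate-impact check of the sort an SDoC
    # question like "Was the dataset and model checked for biases?" might refer to.
    # The records, group labels, and 0.8 rule-of-thumb threshold are hypothetical.

    def positive_rate(records, group):
        """Share of records in `group` that received a positive outcome."""
        in_group = [r for r in records if r["group"] == group]
        return sum(r["outcome"] for r in in_group) / len(in_group)

    def disparate_impact(records, protected="B", reference="A"):
        """Ratio of positive-outcome rates; values far below 1.0 suggest bias."""
        return positive_rate(records, protected) / positive_rate(records, reference)

    if __name__ == "__main__":
        data = [
            {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
            {"group": "A", "outcome": 0}, {"group": "A", "outcome": 1},
            {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
            {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
        ]
        ratio = disparate_impact(data)
        print(f"Disparate impact ratio: {ratio:.2f}")
        # A common rule of thumb flags ratios below 0.8 for further review.
        print("Flag for review" if ratio < 0.8 else "Within the 0.8 rule of thumb")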

"An SDoC for AI services will contain sections on performance, safety and security," the paper reads. "Moreover, it will list how the service was created, trained and deployed along with what scenarios it was tested on, how it will respond to non-tested scenarios, and guidelines that specify what tasks it should and should not be used for."

In several other industries, SDoCs are issued on a voluntary basis, and IBM suggests the same approach for AI. The researchers also note the idea could help boost consumer trust in AI.
