IBM takes steps to prevent algorithmic bias in AI

IBM scientists have developed a new safeguard aimed at reducing algorithmic bias in artificial intelligence, Futurism reports.

Under its Trusted AI effort, IBM Research published a paper that describes a transparency document, dubbed the "supplier's declaration of conformity" (SDoC), that outlines the safety and product testing an AI algorithm and its underlying data have undergone.

The researchers suggest an AI SDoC would answer questions to help evaluate an algorithm's performance, such as: "Was the dataset and model checked for biases?" and "Was the service checked for robustness against adversarial attacks?"

"An SDoC for AI services will contain sections on performance, safety and security," the paper reads. "Moreover, it will list how the service was created, trained and deployed along with what scenarios it was tested on, how it will respond to non-tested scenarios, and guidelines that specify what tasks it should and should not be used for."

SDoCs are voluntary in several other industries, and IBM suggests the same approach for AI. The researchers also note their idea could help boost consumer trust in AI.


© Copyright ASC COMMUNICATIONS 2020.
