The software service, which runs on the IBM Cloud, helps businesses manage AI models built with IBM's own tools as well as with frameworks from competitors such as Amazon and Microsoft, explaining how the models reach decisions as the algorithms run.
Through a visual dashboard, the service outlines which factors swayed an AI's decision in one direction versus another, reports the model's confidence in its recommendation, and suggests additional data that could help mitigate any bias the service detects.
IBM Research also plans to release a set of algorithms and code for AI bias detection and mitigation, dubbed the AI Fairness 360 toolkit, to the open-source community. IBM's goal for the release is to encourage researchers to integrate bias detection as they build AI models.
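To illustrate the kind of check such a toolkit performs, the sketch below computes disparate impact, a standard fairness metric: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The function name, the toy loan-approval data, and the group labels are illustrative assumptions, not IBM's actual API.

```python
# Sketch of a fairness metric of the kind a bias-detection toolkit computes.
# All names and data here are illustrative, not the AI Fairness 360 API.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes   -- list of 1 (favorable) / 0 (unfavorable) decisions
    groups     -- parallel list of group labels, one per decision
    privileged -- the group label treated as privileged
    """
    counts = {True: [0, 0], False: [0, 0]}  # [favorable, total] per side
    for y, g in zip(outcomes, groups):
        side = (g == privileged)
        counts[side][0] += y
        counts[side][1] += 1
    priv_rate = counts[True][0] / counts[True][1]
    unpriv_rate = counts[False][0] / counts[False][1]
    return unpriv_rate / priv_rate

# Toy loan-approval decisions: 1 = approved.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups, privileged="A")
print(round(di, 2))  # prints 0.33
```

A common rule of thumb (the "80 percent rule") flags a model for review when this ratio falls below 0.8; here group B is approved at one third the rate of group A, so the toy model would be flagged.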
“It’s time to translate principles into practice,” Beth Smith, general manager of Watson AI at IBM, said in a news release. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”