Six things to know:
- Demand for enterprise AI is surging: 32 percent of global organizations have AI initiatives in production, up from 11 percent in 2019, according to tech industry research firm International Data Corp.
- AI models may harm disadvantaged communities, which are often underrepresented in training data and already subject to health inequities. Preexisting biases and discrimination embedded in data distributions risk skewing results.
- Facial recognition systems, for example, have been found to identify the faces of people with dark skin less accurately. AI has also given female credit card applicants lower credit limits than their husbands.
- AI ethics teams watch for bias both in algorithms under development and in those already deployed.
- Google is doubling its AI ethics team after high-profile staff departures strained its relationships with outside AI groups. Twitter, Salesforce and Accenture are also accelerating hiring for their ethics teams.
- Ted Kwartler, vice president of AI trust at enterprise AI platform DataRobot, which is also expanding its AI ethics team, said, “I think customers are looking at [AI ethics] and realizing there’s a desire to do good and not amplify systemic problems.”