AI ethics teams increasing in size to avoid compounding racism, discrimination: 6 things to know

Artificial intelligence has the potential to automate tedious tasks in healthcare, detect diseases sooner and alleviate burnout by supporting clinicians’ work. Yet, it has been found to perpetuate existing health and data disparities among minority communities. Now, AI ethics teams are increasing in size to catch these risk factors, according to a May 27 Wall Street Journal article.

Six things to know:

  1. Demand for AI at companies is surging, with 32 percent of global organizations having AI initiatives in production, up from 11 percent in 2019, according to tech industry research firm International Data Corp.
  2. The use of AI models may negatively affect disadvantaged communities, which are underrepresented in training databases and already subject to health inequities. Preexisting biases and discrimination embedded in the data risk skewing results.
  3. For example, AI has been found to less accurately identify the faces of people with dark skin in facial recognition systems. It has also given female credit card applicants lower credit limits than their husbands.
  4. AI ethics teams vet algorithms, both in development and already in use, for the potential role biases may play in their outputs.
  5. Google is doubling its AI ethics team after staff departures strained its relationships with outside AI research groups. Twitter, Salesforce and Accenture are among the companies accelerating hiring for their ethics teams.
  6. Ted Kwartler, vice president of AI trust at AI enterprise platform DataRobot, which is also increasing its AI ethics team, said, “I think customers are looking at [AI ethics] and realizing there’s a desire to do good and not amplify systemic problems.”
