In a TechCrunch analysis, Kristian Hammond, chief scientist and cofounder of the software company Narrative Science, details five common sources of bias in artificial intelligence: data-driven bias, bias through interaction, emergent bias, similarity bias, and conflicting goals bias.
As an example of data-driven bias, Mr. Hammond notes that AI systems learn from the data they receive; a system shown only one type of data will only know how to extrapolate to similar outcomes. As an example of interaction bias, he points to Microsoft's infamous experiment with the Twitter-based chatbot Tay. Because Tay learned through interaction with others, exposure to users sending racist messages taught the chatbot to do the same, reflecting the biases of its surroundings.
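To make the data-driven case concrete, consider a minimal, hypothetical sketch in Python. This is not from Mr. Hammond's article; the toy model, labels, and data are invented for illustration. It shows the basic failure mode: a system that has only ever seen one kind of outcome can only reproduce that outcome.

    # Toy illustration of data-driven bias (invented example).
    # The "model" does nothing but memorize label frequencies
    # from its training data.
    from collections import Counter

    def train(examples):
        """Learn only the label frequencies in the training data."""
        return Counter(label for _, label in examples)

    def predict(model, _features):
        """Predict the most common training label, whatever the input."""
        return model.most_common(1)[0][0]

    # Training data drawn from only one kind of outcome...
    skewed_data = [({"image": f"cat_{i}.jpg"}, "cat") for i in range(100)]
    model = train(skewed_data)

    # ...so the system can only extrapolate to similar outcomes:
    print(predict(model, {"image": "dog_1.jpg"}))  # prints "cat"

A real system is far more sophisticated, but the underlying problem is the same: a model can only reflect the distribution of the data it was trained on.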
Across all the examples, Mr. Hammond notes how AI bias traces back to human bias.
"In an ideal world, intelligent systems and their algorithms would be objective," he writes. "Unfortunately, these systems are built by us and, as a result, end up reflecting our biases. By understanding the bias themselves and the source of the problems, we can actively design systems to avoid them."