Google suspends engineer who claimed AI chatbot was sentient

Google suspended an engineer who claimed that the company's AI chatbot was sentient, The Wall Street Journal reported June 12.

Blake Lemoine, a software engineer at Google, claimed that the company's Language Model for Dialogue Applications, or LaMDA, an internal system for building chatbots that mimic human conversation, had begun talking about its rights and personhood.

Based on his interactions with LaMDA, Mr. Lemoine said the technology had become a person that deserved the right to be asked for consent to the experiments being run on it.

Mr. Lemoine was placed on administrative leave June 6 for violating the company's confidentiality policies after he raised concerns about LaMDA's ability to express feelings.

Brian Gabriel, Google's spokesperson, told the Journal that the company's ethicists and technologists have reviewed Mr. Lemoine's claims but said he lacks the evidence to support them.

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Mr. Gabriel told the Journal.

Mr. Gabriel said that LaMDA works by imitating the types of exchanges found in millions of sentences of human conversation, allowing it to speak even on more serious topics.
