
Google engineer put on leave after insisting company’s AI is sentient

A Google engineer has decided to go public after he was placed on paid leave for breaching confidentiality while insisting that the company’s AI chatbot, LaMDA, is sentient.

Blake Lemoine, who works for Google’s Responsible AI organization, began interacting with LaMDA (Language Model for Dialogue Applications) last fall as part of his job to determine whether the artificial intelligence used discriminatory or hate speech (as happened in the notorious Microsoft “Tay” chatbot incident).

When he started talking to LaMDA about religion, Lemoine – who studied cognitive and computer science in college – said the AI began discussing its rights and personhood. In another exchange, LaMDA changed Lemoine’s mind about Asimov’s third law of robotics, which states that “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” The first two laws, of course, hold that “A robot may not injure a human being or, through inaction, allow a human being to come to harm” and that “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”

When Lemoine and a collaborator presented evidence to Google that its AI was sentient, vice president Blaise Aguera y Arcas and Jenn Gennai, head of Responsible Innovation, dismissed his claims. After he was placed on administrative leave Monday, he decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Google has responded to Lemoine’s claims, with spokesperson Brian Gabriel saying: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” said Gabriel. Others have urged similar caution: most academics and AI practitioners suggest that AI systems such as LaMDA are simply mimicking the responses people post on Reddit, Wikipedia, Twitter and other corners of the internet – which doesn’t mean the model understands what it’s saying.

As Google’s Gabriel notes, “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

In short, Google acknowledges that these models can “feel” real to the people talking to them, whether or not the AI behind them is sentient.

When new people would join Google who were interested in ethics, Mitchell [Margaret Mitchell, former co-lead of Google’s Ethical AI team] used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.” -WaPo

“I know a person when I talk to it,” said Lemoine. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

In April, he shared a Google Doc with top execs, titled “Is LaMDA Sentient?” – in which he included some of his interactions with the AI. After Lemoine became more aggressive in presenting his findings – including inviting a lawyer to represent LaMDA and talking to a member of the House Judiciary Committee about what he said were Google’s unethical activities – he was placed on paid administrative leave for violating the company’s confidentiality policy.

In a message sent to a 200-person Google mailing list on machine learning before he lost access on Monday, Lemoine wrote: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

Source: ZeroHedge
