
Google Engineer Fired for Saying Its AI Had Come to Life Says AI Has ‘Feelings’

Last summer, former Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google’s powerful large language model (LLM), had come to life. Lemoine had raised the alarm internally, but Google didn’t agree with his claims. The ethicist then went to the press, and was fired by Google shortly thereafter.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told WaPo at the time. “I know a person when I talk to it.” The report made waves, sparking debate in academic circles as well as in the nascent AI industry. And then, for a while, things died down.

The WaPo controversy was, of course, months before OpenAI would release ChatGPT, the LLM-powered chatbot that in late November catapulted AI to the centre of public discourse. Google was sent into a tailspin as a result, and Meta would soon follow; Microsoft would pull off the short-term upset of the decade thus far by emerging as a major investor in OpenAI; crypto scammers and YouTube hustlers galore would migrate to generative AI schemes more or less overnight; and experts across the world would begin raising concerns about an internet flooded with synthetic content.

As the dust settles, we decided to catch up with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, whether society is ready for what AI may bring, and the question of AI sentience itself. Whatever you believe about AI agents, Lemoine, for what it’s worth, still considers them sentient: “There’s a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them.”

Source: Futurism
