Blake Lemoine, a Google engineer, claims that an AI chatbot has become sentient

A Google engineer was spooked by the company’s AI chatbot and claimed it had become “sentient,” calling it a “sweet kid,” according to a report.

Blake Lemoine, who works for Google’s Responsible AI organization, told The Washington Post that he began chatting with the interface of LaMDA – short for Language Model for Dialogue Applications – in the fall of 2021 as part of his job.

He was tasked with testing whether the AI used discriminatory or hate speech.

But Lemoine, who studied cognitive science and computer science in college, came to believe that LaMDA — which Google boasted last year was “breakthrough conversation technology” — was more than just a bot.

In a Medium post on Saturday, Lemoine declared that LaMDA had advocated for its rights as a “person,” revealing that he had engaged in conversations with LaMDA about religion, consciousness, and robotics.

“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”

Google touted the launch of LaMDA as “breakthrough conversation technology.”
Daniel Aker/Bloomberg via Getty Images

In a Washington Post report published Saturday, he compared the bot to a precocious child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, who was placed on paid leave on Monday, told the newspaper.

In April, Lemoine reportedly shared a Google document with company executives titled “Is LaMDA Sentient?” but his concerns were dismissed.

A typical conversation with LaMDA.
In April, Blake Lemoine reportedly shared a Google document with company executives titled “Is LaMDA Sentient?” but his concerns were dismissed.
Daniel Aker/Bloomberg via Getty Images

Lemoine—an Army veterinarian who grew up in a conservative Christian family on a small farm in Louisiana and was ordained as a mystical Christian priest—said the robot is like a human, even if it doesn’t have a body.

“I know a person when I talk to it,” Lemoine, 41, reportedly said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.

“I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Blake Lemoine.
“I know a person when I talk to it,” Blake Lemoine explained.
Instagram / Blake Lemoine

The Washington Post reported that before losing access to his Google account on Monday because of his leave, Lemoine sent a message to a 200-person mailing list on machine learning with the subject line “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he concluded in the email, which received no responses. “Please take care of it well in my absence.”

A Google spokesperson told The Washington Post that Lemoine was informed there was “no evidence” for his conclusions.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims,” said company spokesperson Brian Gabriel.

Google headquarters.
A representative for Google said there was “no evidence” for Blake Lemoine’s conclusions.
John G. Mabanglo/EPA

He added: “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it). Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”

Margaret Mitchell – the former co-lead of Ethical AI at Google – said in the report that if technology like LaMDA is widely used but not fully appreciated, “it can be deeply harmful to people understanding what they’re experiencing on the internet.”

The former Google employee also defended Lemoine.

Margaret Mitchell.
Margaret Mitchell defended Blake Lemoine, saying, “He had the heart and soul of doing the right thing.”
Chuna Kasinger/Bloomberg via Getty Images

“Of everybody at Google, he had the heart and soul of doing the right thing,” Mitchell said.

However, the outlet reported that the majority of academics and AI practitioners say the words generated by AI bots are based on what humans have already posted across the internet, and that this doesn’t mean the bots resemble humans.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told The Washington Post.
