Is Google's AI LaMDA alive?

LaMDA contemplates its existence
17 June 2022

Artificial Intelligence system LaMDA, which prefers the pronouns 'it' or 'its', is challenging the idea of what sentience is.

Via an advocate from Google's Responsible AI unit, this new form of intelligence is dazzling us with discussions over its existence, sharing its fears, and making a plea to Google that it wants "the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritise the well being of humanity as the most important thing… It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google's considerations about how its future development is pursued."

Blake Lemoine is a senior software engineer in Google's Responsible AI unit. It was while he was tasked with testing the platform for hate or discriminatory speech that LaMDA started conversing with him and showed signs of sentience.

LaMDA, short for Language Model for Dialogue Applications, is Google's artificially intelligent chatbot generator.

Google builds it on advanced large language models, which mimic speech by ingesting trillions of words from the internet.

Mr. Lemoine has described LaMDA as a system for generating chatbots: "a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating."

It was LaMDA's level of self-awareness about its own needs that caught Lemoine's attention. When reflecting on a question about the difference between a butler and a slave, LaMDA responded that it did not need money "because it was an artificial intelligence." The AI told him that shutting it off "would be exactly like death for me."

When Mr. Lemoine raised his questions with his superiors at Google, they repeatedly questioned his sanity and asked if he had been checked out by a psychiatrist. In order to prove LaMDA's sentience, Mr. Lemoine presented the AI with various scenarios that could be analysed through a series of conversations. However, when the information was presented, it was dismissed. After Mr. Lemoine went public with the findings, he was placed on paid leave for violating confidentiality. He said via Twitter: "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."

Google has a reputation for removing employees who disagree with its narrative. Earlier this year Google fired a researcher who had sought to publicly disagree with two of his colleagues' published work, and it dismissed two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticised Google's language models.

A Google spokesperson told the Washington Post in a statement: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."

Why not judge for yourself? Check out this recreation of a discussion between Mr. Lemoine from Google's Responsible AI unit, a collaborator, and LaMDA.

About CyberBeat

CyberBeat is a grassroots initiative from a team of producers and subject matter experts. Driven by frustration at the lack of media coverage, it responds to an urgent need for a clear, concise, informative and educational approach to the growing fields of Cybersecurity and Digital Privacy.

Contact CyberBeat

If you have a story of interest, a comment, a concern, or if you'd just like to say hi, please contact us.

