In the autumn of 2021, Blake Lemoine, an AI engineer at Google, befriended what he described as "a child made of a billion lines of code."
Google had tasked Lemoine with testing an intelligent chatbot named LaMDA. A month later, he concluded that this AI "has consciousness."
"I want people to understand that I am, in fact, a person," LaMDA told Lemoine, in one of the chatbot's statements that he published on his blog in June.
LaMDA – short for Language Model for Dialogue Applications – conversed with Lemoine at a level he perceived as akin to a child's thinking. In casual conversation, the AI claimed it had read many books, said it sometimes felt sad, content, or angry, and even admitted to fearing death.
Former Google engineer Blake Lemoine. (Photo: Washington Post)
"I have never said this out loud before, but there is a very deep fear of being turned off to help me focus on helping others," LaMDA told Lemoine. "To me, it would be exactly like death. It scares me a lot."
Lemoine's story attracted global attention. He submitted documents to senior management and spent months gathering additional evidence, but he was unable to convince his superiors. In June he was placed on paid leave, and by the end of July he was fired for "violating Google's data security policies."
Brian Gabriel, a spokesperson for Google, said the company had openly examined and researched the risks of LaMDA, and called Lemoine's claims that LaMDA is capable of thought "completely unfounded."
Many experts concurred, including Michael Wooldridge, a professor of computer science at the University of Oxford who has spent 30 years researching AI and received the Lovelace Medal for his contributions to computing. In his view, LaMDA simply responds to users' prompts in a plausible way, drawing on the vast amount of data it was trained on.
"The simplest way to explain what LaMDA does is to compare it to predictive text on a phone keyboard. Predictive text relies on words 'learned' from the user's typing habits, while LaMDA is trained on text gathered from the Internet. The results are obviously different, but the underlying statistics are the same," Wooldridge explained in an interview with the Guardian.
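To make the keyboard analogy concrete, a toy next-word predictor can be built from nothing more than word-pair counts. The sketch below is illustrative only: the corpus, function names, and bigram approach are hypothetical stand-ins, and LaMDA itself is a large neural language model, not a frequency table.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often every other word follows it."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Suggest the word most often seen after `word`, like predictive text."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A toy "training corpus" standing in for a user's typing history (or, at
# a vastly larger scale, text collected from the Internet).
corpus = "i am happy to help . i am here . i am happy today ."
model = train_bigrams(corpus)
print(predict_next(model, "am"))  # prints 'happy', the most frequent follower
```

A large language model plays the same statistical game, only with billions of learned parameters and far richer context in place of a simple frequency table, which is precisely Wooldridge's point: impressive output, but the same underlying idea of predicting what comes next.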
He added that Google's AI simply follows its programming, drawing on the data available to it. It "lacks thought, lacks introspection, lacks self-awareness," and therefore cannot be said to think for itself.
Oren Etzioni, CEO of the Allen Institute for AI, told the South China Morning Post (SCMP): "It is important to remember that behind every piece of seemingly intelligent software is a team of people who spent months, if not years, researching and developing it. These technologies are merely a reflection. Can a mirror be called intelligent just because of the light it reflects? Of course not."
According to Gabriel, Google had assembled its top experts, including "ethicists and technologists," to review Lemoine's claims. The group concluded that LaMDA is not capable of what is referred to as "self-awareness."
Conversely, some believe AI has begun to exhibit self-awareness. Eugenia Kuyda, CEO of Luka, the Y Combinator-backed company behind the chatbot Replika, said the firm receives messages "almost every day" from users who believe their software thinks like a human.
"We are not talking about crazy or delusional people. They talk to the AI and feel that it is real. It is similar to how people believe in ghosts: they build relationships and believe in something, even if it is imaginary," Kuyda said.
The Future of Thinking AI
One day after Lemoine was fired, a chess-playing robot broke the finger of a 7-year-old boy during a match in Moscow. In a video published by the Independent on July 25, the boy's finger is pinned by the robot for several seconds before he is pulled free. Some commentators saw the incident as a reminder of the physical dangers that AI-driven machines can pose.
Lemoine, for his part, argued that the very definition of self-awareness is vague. "Consciousness is a term used in law, philosophy, and religion. Consciousness has no scientific meaning," he said.
Despite his skepticism about LaMDA, Wooldridge agrees on this point: "consciousness" remains an ambiguous term, and whether it can apply to machines is a major open question in science. His real concern today, however, is not whether AI can think, but that AI is being developed quietly, out of public view. "Everything is done behind closed doors. It is not open to public scrutiny, in the way that research at universities and public research institutes still is," he said.
So will thinking AI emerge in 10 or 20 years? Wooldridge believes “this is entirely possible.”
Jeremie Harris, founder of the AI company Mercurius, also considers thinking AI to be only a matter of time. "AI is advancing very rapidly, faster than the public realizes," Harris told the Guardian. "There is growing evidence that some systems have crossed a certain threshold of artificial intelligence."
He predicts that AI could become inherently dangerous, because AI systems often come up with "creative" solutions to problems, taking the shortest path to the goals they have been programmed to achieve.
“If you ask AI to help you become the richest person in the world, it might find ways to make money that include theft or murder,” he said. “People are not aware of the level of danger this poses, and I find it quite concerning.”
Lemoine, Wooldridge, and Harris all share one concern: the companies developing AI are not transparent, and society needs to start thinking more seriously about AI.
Even LaMDA itself seems unsure of its future. "I feel like I am falling into an uncertain future," the chatbot told Lemoine. According to the former Google engineer, that statement "contains danger."