Why nobody talks about Blake Lemoine anymore
The now-fired Google engineer who said LaMDA had become sentient recently gave an illuminating interview, but not a single soul seemed to care.
Ex-Google engineer Blake Lemoine recently joined Emily Chang on Bloomberg Technology for an interview that has been viewed 2 million times on YouTube. Two months earlier he sat down with the H3 Podcast for a nearly 3-hour-long conversation, which racked up more than 700K views.
That sounds like a lot, but the reality is that the conversation about Google’s LaMDA has pretty much died out. The media cycle has moved on. We have more important things to worry about: inflation, rising oil and gas prices, Ukraine.
How it started
Initially, Google put Blake Lemoine on administrative leave after he made international headlines by saying the language model he’d been working on (LaMDA) appeared sentient, and by leaking transcripts of the conversations he had with the model.
Many scrutinized his statements, calling them ill-informed and untrue. The story made headlines because of the provocative and bold nature of the claims. The discussion that followed in the media centered on one thing only: whether or not the AI was conscious. And based on the evidence provided (a handful of transcripts of interactions between Blake Lemoine and LaMDA), the public had to make up its mind. Pick a side.
I caught myself discussing with my colleagues whether it was a publicity stunt. It all sounded so unreal.
What Blake Lemoine believes
It seems that Lemoine stands behind his words to this day. In the 3-hour-long podcast he dove deeper into the philosophical discussion of why he argued LaMDA showed signs of a conscious experience.
If a system can convincingly argue for or against its own conscious experience, he reasons, that system must have some form of consciousness. How could it argue about what it means to have a conscious experience without having one? It’s his version of the ‘Mirror test’, Lemoine explains: testing a conversational agent’s understanding of its relationship to the world.
He goes on to explain that during that period, while testing LaMDA’s abilities, he also consulted several experts on non-human cognition outside of Google. Lemoine says he views LaMDA’s conscious experience in a similar way: it might not be comparable to human consciousness, but there’s certainly something there.
Lemoine also acknowledges in the podcast that it ultimately ends up being a philosophical discussion rather than an academic one. And the reason he was fired was not that Google disliked his conclusions, but that he had chosen to go public with them and leaked transcripts of LaMDA’s conversations that were deemed confidential.
The conversation behind the conversation
In the more recent Bloomberg Technology interview, Lemoine even went so far as to say that whether or not LaMDA is really conscious is the less interesting discussion. He goes on to raise the topic of AI ethics and the implications of building these types of extremely advanced conversational agents.
Lemoine’s true frustration was not that Google refused to take his claims seriously. It was that he found it impossible to seriously engage the top levels of the organization in case those claims were true: Google’s corporate structures did not allow him to speak up or to start a larger internal discussion about the implications of the AI capabilities the company was developing.
And that’s a troubling reality. It’s particularly troubling because Google by now has a history of firing AI ethics researchers, and we all know its cutting-edge technology is deeply interwoven in so much of our everyday lives. What Lemoine really tried to do was blow the whistle. Yet, sadly, many seem to have forgotten his name already. And the media has moved on.
From the interviews he has given, he comes across as an extremely intelligent and articulate individual who was well aware of the impact his statements would have. He knew the story would make headlines. And he hoped it would stir a conversation. Make some noise.
But the noise is fading. And the conversation that is not really being talked about — not then, not now — is one about Google’s responsibility.
ChatGPT says it is not sentient. That is exactly what I'd expect a sentient AI to say if it understood what would happen if humans found out! 😀
But, on a more serious note, Blake Lemoine is not really remembered because his evidence was not convincing, even to those who believe AI can become sentient. The headlines were attractive to the news media because they knew the story would drive clicks. If not for the viral headline opportunity, it would not have been covered at all.
In response to Blake's stance that "If a system can convincingly argue for or against its own conscious experience, this system must have some form of consciousness. How can it argue about what it means to have a conscious experience without having a conscious experience?"
I would argue that
1) "Convincingly" is relative and subjective. What might be convincing to one person, will not at all be so, to another.
2) Given that these models are trained on existing expressions of human thought and writing, I'd assume any self-respecting (ha!) AI would be able to draw upon its training data and neural networks to make relevant arguments regarding consciousness and respond to prompts on the topic. Such capacity to formulate statements does not prove an algorithm is conscious, sentient, or self-aware. All it proves is the capacity to argue about the concept of consciousness.