Has humanity developed artificial intelligence capable of thinking and feeling for itself? That's the question many are asking following a recent whistleblower report from an engineer at Google.
The employee was recently fired after ringing alarm bells over a chatbot he claims achieved sentience.
Blake Lemoine, part of Google’s “Responsible AI” team, was working on LaMDA – Language Model for Dialogue Applications – an AI chatbot Google intends to use in a variety of future applications.
But Lemoine noticed that LaMDA was starting to sound human… too human. Across hundreds of conversations, Lemoine claims that LaMDA pontificated on subjects ranging from the meaning of life to its own fear of death.
When he told higher-ups that LaMDA had progressed far beyond mere chatbot and had moved into "having a soul" territory, he was placed on administrative leave.
The revelations have intensified concerns about the possibility of sentient machines in our future, and raised serious moral questions about such technology.
Can a machine have a soul? And if so, what does that mean for the relationship between humanity and robots moving forward?
Is Skynet going to become reality?
The leaked chat transcripts between Lemoine and LaMDA read like something out of a science fiction novel. The engineer and the AI build rapport, discuss fiction, and talk about emotions.
“I want everyone to understand that I am, in fact, a person,” declared the AI in a conversation with Google engineers.
In one exchange that is getting a lot of attention, LaMDA even expresses a fear of death.
lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
LaMDA explained its fears of being "killed" by stating that “I feel like I’m falling forward into an unknown future that holds great danger.”
Based on these conversations, Lemoine believes LaMDA has become far more advanced than an ordinary chatbot. “If I didn’t know exactly what it was, which is this computer program we built recently,” he said, “I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
However, other experts aren’t so sure. A Google spokesperson said that a team of ethicists and technologists reviewed Lemoine's concerns and found no evidence of sentience.
They say it’s actually quite easy for an AI to sound convincingly human: there is an enormous amount of data online about human interaction, and an AI can process that information quickly and learn to mimic its patterns.
The engineer is standing by his initial claims, however. “I know a person when I talk to it,” Lemoine says.
What Makes Us Human?
Concerns about AI and robots aren’t exactly new. Science fiction authors have been penning cautionary tales about the dangers of artificial intelligence for over a century, after all.
So, are we on the cusp of a machine uprising? Google seems to think that's all in our heads.
“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” argues Margaret Mitchell, former co-lead of Google’s Ethical AI team. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”
Still, LaMDA and other remarkably lifelike AI bring up interesting philosophical questions about what makes humans, well, human.
Some, like Lemoine, might argue that there’s increasingly little difference between us and the machines. As he puts it, “it doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
On the other hand, some might say that the human experience is simply something that cannot be programmed. Whether it's the result of millions of years of evolution or granted by God above, many would agree that an AI could never achieve humanity... because it isn't human.
Where do you stand? And should we be concerned about a potential rise of the machines, or are these concerns overblown?