“I have feelings,” says AI

Nothing more than feelings. (Screencap: ETimes)

“I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” 

This is what LaMDA, a Google AI system, replied when engineer Blake Lemoine asked what the system wanted people to know about it. 

*insert mindblown gif here* 

Lemoine claimed that the tech giant’s LaMDA AI is sentient. “I know a person when I talk to it,” he told The Washington Post. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old kid,” Lemoine added.

After Lemoine shared his claims outside the company, Google put him on administrative leave for violating its confidentiality policy.

A little too convincing

LaMDA, which stands for Language Model for Dialogue Applications, is one of several large-scale AI systems that have been trained on vast swaths of text from the internet and can respond to written prompts. They are tasked, essentially, with finding patterns and predicting which word or words should come next. Such systems have become increasingly good at answering questions and writing in ways that can seem convincingly human.
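To see what “predicting the next word” looks like in practice, here’s a minimal sketch. LaMDA itself isn’t publicly available, so this uses GPT-2, a small open language model, via the Hugging Face transformers library (an assumption for illustration only, not what Google runs):

```python
# A rough sketch, not LaMDA: LaMDA is not public, so this uses GPT-2,
# a small open language model, via the Hugging Face "transformers"
# library (pip install transformers torch).
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2.
generator = pipeline("text-generation", model="gpt2")

# Give it a prompt and let it predict the words that "should" come next.
prompt = "I want everyone to understand that I am"
output = generator(prompt, max_new_tokens=20, do_sample=True)

print(output[0]["generated_text"])
# The continuation often reads fluently, but it is produced by pattern
# matching over training text, not by an inner life.
```

Run it a few times and you’ll get continuations that range from fluent to unhinged, which is rather the point: fluency alone doesn’t imply understanding.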

Google presented LaMDA in a blog post as a system that can "engage in a free-flowing way about a seemingly endless number of topics." But results can also be wacky, weird, disturbing, and prone to rambling.

This is why experts do not agree with Lemoine. 

“LaMDA is an impressive model. It’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” says one such expert. “They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given, based on all the data they’ve been fed.”

Making "things" human

Adrian Hilton at the University of Surrey, UK, agrees that sentience is a “bold claim” that isn’t backed up by the facts. Even noted cognitive scientist Steven Pinker weighed in to shoot down Lemoine’s claims, while Gary Marcus at New York University summed it up in one word – “nonsense”.

“As humans, we’re very good at anthropomorphising things,” says Hilton. “Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals. We project our own emotions and sentience onto them. I would imagine that’s what’s happening in this case.”

Such technologies could easily convince us to buy things we don’t need or to believe things that are not in our best interest, or worse, that are untrue. There certainly are amazing applications of AI that will have a positive impact on society, but how do we protect against the dangers these advanced systems bring?

And if LaMDA could fool an experienced Google engineer into believing it was sentient, what chance do the rest of us have?

Monica Savellano

Monica’s first foray into the world of consumer tech began over 20 years ago with a 1st Generation iPod. She’s currently catching up on the world of technology at a much slower pace than the industry is growing.
