A Google engineer says AI has become sentient. What does that actually mean?

Has artificial intelligence finally come to life, or has it simply become smart enough to trick us into believing it has gained consciousness?

Google engineer Blake Lemoine’s recent claim that the company’s AI technology has become sentient has sparked debate in technology, ethics and philosophy circles over if, or when, AI might come to life – as well as deeper questions about what it means to be alive.

Lemoine had spent months testing Google’s chatbot generator, known as LaMDA (short for Language Model for Dialogue Applications), and had grown convinced it had taken on a life of its own, as LaMDA talked about its needs, ideas, fears and rights.

Google dismissed Lemoine’s view that LaMDA had become sentient, placing him on paid administrative leave earlier this month – days before his claims were published by The Washington Post.

Most experts believe it is unlikely that LaMDA or any other AI is close to consciousness, though they do not rule out the possibility that technology may get there in the future.

“My view is that [Lemoine] was taken in by an illusion,” Gary Marcus, a cognitive scientist and author of Rebooting AI, told CBC’s Front Burner podcast.


“Our brains aren’t really built to understand the difference between a computer that’s faking intelligence and a computer that’s actually intelligent – and a computer that fakes intelligence might seem more human than it really is.”

Computer scientists describe LaMDA as operating like a smartphone’s autocomplete function, albeit on a far grander scale. Like other large language models, LaMDA was trained on large quantities of text data to spot patterns and predict what might come next in a sequence, such as a conversation with a human.
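The autocomplete comparison can be made concrete with a toy sketch. The snippet below is a minimal bigram model, an assumption-laden stand-in for illustration only: it counts which word follows which in a tiny sample corpus, then "predicts" the most frequent successor. LaMDA works on the same predict-what-comes-next principle but uses a vastly larger neural network trained on enormous amounts of text, not simple word counts.

```python
from collections import Counter, defaultdict

# Tiny sample corpus (an illustrative assumption, not real training data)
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (a bigram model)
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def autocomplete(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # "the" is followed by "cat" twice, "mat" once
```

Even this crude version captures the key point: the program emits plausible continuations purely from statistical patterns in its input, with no understanding of what the words mean.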

Cognitive scientist and author Gary Marcus, pictured during a speech in Dublin, Ireland, in 2014, says LaMDA appears to have fooled a Google engineer into believing it was conscious. (Ramsey Cardy / Sportsfile / Getty Images)

“If your phone autocompletes a text, you don’t suddenly think it is aware of itself and what it means to be alive. You just think, well, that was exactly the word I was thinking of,” said Carl Zimmer, science columnist for the New York Times and author of Life’s Edge: The Search for What It Means to Be Alive.

Humanizing robots

Lemoine, who is also ordained as a mystic Christian priest, told Wired he became convinced of LaMDA’s status as a “person” because of its level of self-awareness, the way it spoke about its needs and its fear of death if Google were to delete it.

He insists he was not fooled by a clever robot, as some scientists have suggested. Lemoine maintains his position, and even appears to suggest that Google has enslaved its AI system.

“Each person is free to come to their own personal understanding of what the word ‘person’ means and how that word relates to the meaning of terms like ‘slavery,'” he wrote in a post on Medium on Wednesday.

Marcus believes Lemoine is the latest in a long line of people to fall for what computer scientists call “the ELIZA effect,” after a 1960s computer program that chatted in the style of a therapist. Simplistic responses like “Tell me more about that” convinced users that they were having a real conversation.

“That was 1965, and here we are in 2022, and it’s kind of the same thing,” Marcus said.

Scientists who spoke to CBC News pointed to humans’ tendency to anthropomorphize objects and creatures – perceiving human-like characteristics that are not really there.

“If you see a house that has a funny crack, and windows, and it looks like a smile, you’re like, ‘Oh, this house is happy,’ you know? We do this kind of thing all the time,” said Karina Vold, an assistant professor at the University of Toronto’s Institute for History and Philosophy of Science and Technology.

“I think what’s going on often in these cases is this kind of anthropomorphism, where we have a system that tells us ‘I’m sentient,’ and saying words that make it sound like it’s sentient – it’s really easy for us to want to grasp onto that.”

Karina Vold, an assistant professor of philosophy at the University of Toronto, hopes the debate over AI consciousness and rights will spark a rethink of how humans treat other species that are known to be conscious. (University of Toronto)

Humans have already begun to consider what legal rights AI should have, including whether it deserves personhood rights.

“We are quickly going to get into the realm where people believe these systems deserve rights, whether or not they are actually doing internally what people think they are doing. And I think that is going to be a very strong movement,” said Kate Darling, an expert in robot ethics at the Massachusetts Institute of Technology’s Media Lab.

Defining consciousness

Given that AI is so good at telling us what we want to hear, how will humans ever be able to tell if it has truly come to life?

That in itself is a subject of debate. Experts have yet to come up with a test of AI consciousness – or reach consensus on what it means to be conscious.

Ask a philosopher, and they’ll talk about “phenomenal consciousness” – the subjective experience of being you.

“Any time that you’re awake … it feels a certain way. You’re undergoing some kind of experience … When I kick a rock down the street, I don’t think there’s anything [that it feels] like to be that rock,” said Vold.

For now, AI is seen as more like the rock – and it’s hard to imagine its disembodied voice being capable of having positive or negative emotions, as philosophers believe “sentience” requires.

Carl Zimmer, author and science columnist for The New York Times, says scientists and philosophers have struggled to define consciousness. (Facebook / Carl Zimmer)

Perhaps consciousness cannot be programmed at all, says Zimmer.

“It is possible, theoretically, that consciousness is just something that emerges from a particular physical, evolved kind of matter. [Computers] are just on the outside of life’s edge, maybe.”

Others think humans can never truly be sure that AI has developed consciousness – and don’t see much point in trying.

“Consciousness can range [from] anything from feeling pain when you step on a tack [to] seeing a bright green field as red – that’s the kind of thing where we never know if a computer is conscious in that sense, so I suggest just forgetting consciousness,” said Harvard cognitive scientist Steven Pinker.

“We should aim higher than duplicating human intelligence, anyway. We should build devices that do things that need to be done.”

Harvard cognitive psychologist Steven Pinker, seen here in New York in 2018, says humans will never be able to tell if AI has achieved consciousness. (Brad Barket / Getty Images for Ozy Media)

Those things, Pinker says, include dangerous and boring occupations, and tasks around the house, from cleaning to child care.

Rethinking AI’s Role

Despite AI’s massive strides over the last decade, the technology still lacks another key component that defines humans: common sense.

“It’s not that [computer scientists] think that consciousness is a waste of time, but we don’t see it as being central,” said Hector Levesque, professor emeritus of computer science at the University of Toronto.

“What we do see as being central is somehow getting a machine to use ordinary, common sense knowledge – you know, the kind of thing that you would expect a 10-year-old to know.”

Levesque gives the example of a self-driving car: it can stay in its lane, stop at a red light and help a driver avoid crashes, but when confronted with a road closure, it will sit there doing nothing.

“That’s where common sense would enter into it. [It] would have to sort of think, well, why am I driving in the first place? Am I trying to get to a particular location?” Levesque said.

Some computer scientists say common sense, not consciousness, should be a priority in AI development, to ensure that technology like self-driving cars can proactively solve problems. This self-driving car is shown during a demonstration in Moscow on Aug. 16, 2019. (Evgenia Novozhenina / Reuters)

While humanity waits for AI to learn more street smarts – and perhaps take on a life of its own – scientists hope the debate over consciousness and rights will extend beyond technology to other species known to think and feel for themselves.

“If we think consciousness is important, it’s probably because we’re concerned that we’re building some kind of system that’s living a life of misery or suffering in some way that we’re not recognizing,” said Vold.

“If that is really motivating us, then I think we need to be reflective about other species in our natural system and see what kind of suffering we may be causing them. There is no reason to prioritize AI over other biological species that we know have a much stronger case for being aware.”
