how do you test if a bot like LaMDA is sentient?

6 Answers

Anonymous

Consciousness, sapience, and sentience are pretty vague ideas. We use those terms to describe our own mind and sense of self as humans. And since we can talk to each other about how we think and feel, we can be reasonably sure that other people experience the same sense of self that we do — as sure as we can be about anything in a philosophical sense anyway.

But we just can’t measure the interior experience of something else any more than it can communicate that experience to us. The best we can really do is look at the behavior of, say, a dog and extrapolate that it doesn’t *seem* to have the same understanding of self (sapience) that we do. So when it comes to a machine, how do we know if it’s sentient, sapient, or conscious? Well, we don’t really. We can ask it and see what it says, and we can study its behavior and extrapolate, but we can’t know what it is *experiencing.*

The thought experiment we’ve used in computer science (and science fiction) is the Turing Test, named after Alan Turing, who proposed it. The Turing Test doesn’t actually test sentience; it only tests whether a machine could “pass” as a human: whether its responses are a close enough facsimile of sentience that a test subject could not tell if they were talking to a machine or not. At the end of the day, that’s the best we can prove. We can’t really know whether the machine is actually sentient, whether it is just programmed well enough to seem sentient, or whether there is even a difference between those two ideas. After all, who’s to say that your own experience of self is not just a programmatic result of your brain’s processes?
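To make the idea concrete, here is a minimal sketch (illustration only, not a real evaluation harness) of the blinded setup: a coin flip hides whether the judge is talking to the human or the machine, and if the judge’s accuracy over many trials stays near 50%, the machine “passes.” All of the names and the canned replies below are hypothetical.

```python
import random

def run_trial(judge, human_reply, bot_reply, questions, rng):
    """One blinded trial: a coin flip secretly picks the respondent;
    the judge sees only the question/answer transcript."""
    is_bot = rng.random() < 0.5
    respond = bot_reply if is_bot else human_reply
    transcript = [(q, respond(q)) for q in questions]
    guess = judge(transcript)  # judge returns "bot" or "human"
    return guess == ("bot" if is_bot else "human")

def judge_accuracy(judge, human_reply, bot_reply, questions,
                   trials=10_000, seed=42):
    """Fraction of trials where the judge identifies the respondent
    correctly. Accuracy near 0.5 means the judge is only guessing,
    i.e. the bot 'passes'."""
    rng = random.Random(seed)
    wins = sum(run_trial(judge, human_reply, bot_reply, questions, rng)
               for _ in range(trials))
    return wins / trials

# Toy stand-ins: both respondents give the same canned answer, so no
# judge can do better than chance here.
questions = ["How do you feel today?"]
human_reply = lambda q: "Fine, thanks."
bot_reply = lambda q: "Fine, thanks."
blind_judge = lambda transcript: "human"  # a judge with no real signal

print(judge_accuracy(blind_judge, human_reply, bot_reply, questions))
```

Running this prints a value close to 0.5, which is the point of the test: when the judge can’t do better than a coin flip, the machine has passed, whether or not anything is being experienced on the other side.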
