We don’t really have a conclusive test for that. (In this case, it absolutely isn’t, though.)
We can’t really tell if something is sentient, or if it just *acts as if it is*.
We also have no way of knowing whether other people are sentient. Perhaps you’re the only sentient person on the planet and everyone else is just a mindless automaton programmed to *look* like a sentient being. You can never know for certain.
Define “sentient”. The dictionary definition is the ability to feel or experience sensations. Does that mean a simple response to stimuli, or something much more than that?
Using the first reading, almost every robot and computer programme is “sentient”. Using the second, none is, nor ever can be, without a major new, currently unknown domain of physics being discovered.
Consciousness, sapience, and sentience are pretty vague ideas. We use those terms to describe our own mind and sense of self as humans. And since we can talk to each other about how we think and feel, we can be reasonably sure that other people experience the same sense of self that we do — as sure as we can be about anything in a philosophical sense anyway.
But we just can’t measure the interior experience of something else any more than it can communicate that experience to us. The best we can really do is look at the behavior of, say, a dog and extrapolate that they don’t *seem* to have the same understanding of self (sapience) that we do. So when it comes to a machine, how do we know if it’s sentient, sapient, or conscious? Well, we don’t really. We can ask it and see what it says, and we can study its behavior and extrapolate, but we can’t know what it is *experiencing.*
The thought experiment we’ve used in computer science (and science fiction) is the Turing Test (named after Alan Turing, who proposed it). The Turing Test doesn’t actually test sentience; it just tests whether a machine could “pass” as a human: whether its responses are a close enough facsimile of sentience that a test subject could not tell if they were talking to a machine or not. At the end of the day, that’s the best we can prove. We can’t really know if the machine is actually sentient, if it is just programmed well enough to seem sentient, or if there’s even actually a difference between those two ideas. After all, who’s to say that your own experience of self is not just a programmatic result of your brain’s processes?
“It talks like a human” (aka the Turing test) is not a useful criterion. There were chatbots running on 1990s PCs that could pass it. But they were just parrots with a really big memory, doing keyword searches on a large collection of chat logs and looking for a response to a similar phrase:
https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thought_experiment
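To make the “parrot with a big memory” point concrete, here’s a minimal sketch of that kind of keyword-matching chatbot, in the spirit of ELIZA-style programs. The canned chat log and the word-overlap scoring are invented for illustration, not taken from any real 1990s program:

```python
import re

# Minimal sketch of a keyword chatbot: no understanding at all, just
# picking the canned response whose stored phrase shares the most
# words with the input. The tiny "chat log" here is purely illustrative.
CHAT_LOG = {
    "how are you today": "Fine, thanks for asking!",
    "what is your name": "My name is Chatbot.",
    "are you sentient": "Of course I am. Aren't you?",
}

def words(text: str) -> set[str]:
    """Lowercase the text and strip punctuation, keeping only the words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(message: str) -> str:
    """Return the canned response for the phrase with the most shared words."""
    m = words(message)
    best = max(CHAT_LOG, key=lambda phrase: len(m & words(phrase)))
    return CHAT_LOG[best]

print(reply("So... are you sentient?"))  # -> "Of course I am. Aren't you?"
```

A bot like this can sound briefly plausible in chat while manipulating nothing but strings, which is exactly the Chinese room point the link above makes.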
I believe true intelligence means operating on abstract concepts rather than pure words. That could be tested by asking the AI questions that require “connecting the dots”, where the AI only has information about the dots, but not the connections.
There should also be programming that can handle concepts and connect the dots, not just run text searches.
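As a toy version of that test, here’s a sketch under assumptions of my own: the “dots” are individually stated is-a facts, and the tested question asks about a link that was never stated verbatim. A literal text lookup fails, while a few lines of chaining succeed. The facts and function name are invented for illustration:

```python
# The "dots": individual facts, each stated explicitly. The tested
# question ("is a whale an animal?") is deliberately NOT among them;
# answering it requires connecting two of the dots.
FACTS = {
    ("whale", "mammal"),
    ("mammal", "animal"),
    ("sparrow", "bird"),
}

def is_a(x: str, y: str) -> bool:
    """Follow chains of is-a facts: the 'connecting the dots' step."""
    if (x, y) in FACTS:
        return True
    return any(is_a(mid, y) for (a, mid) in FACTS if a == x)

# A pure text search over the facts fails: that exact pair was never stated.
print(("whale", "animal") in FACTS)  # False -- the link isn't in the "text"
print(is_a("whale", "animal"))       # True  -- found by chaining whale -> mammal -> animal
```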
PS: “sentient” = “a person” is how the term is used in science fiction. In real-life science, “sentient” means having senses, such as the ability to feel pain, so most animals with a brain qualify. Intelligence is called “sapience”.