Why does ChatGPT lie? And why can’t this be fixed easily?

730 views

I’ve tried asking it to write arguments and support them but the references are fake. It apologizes when confronted but does it over and over again even when I ask it not to provide fake references.

45 Answers

Anonymous 0 Comments

ChatGPT doesn’t lie. To lie, you have to understand what is true and it doesn’t understand anything. All it does is guess the most probable next word in a string of words.

And that is the danger, that people incorrectly assume its output has any connection to the truth or to facts.
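To make "guess the most probable next word" concrete, here is a toy sketch (not how ChatGPT is actually built — real models use neural networks over much longer context): a bigram model that counts which word follows which in a tiny made-up corpus, then always emits the most frequent follower. Notice that nothing in it checks whether the output is true.

```python
# Toy illustration of next-word prediction: count which word follows
# which in a corpus, then generate by repeatedly picking the most
# frequent follower. No step involves facts or truth, only frequency.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def next_word(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

# Generate a few words starting from "the": each step is just a
# frequency lookup, not a fact check.
out = ["the"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

Scale this idea up enormously and you get fluent text that sounds like its training data, with truth only ever arriving as a side effect.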

Anonymous 0 Comments

ChatGPT is not a general intelligence like Jarvis in Iron Man; we’re decades away from that.

It is a large language model. It learns which words usually show up with one another and uses statistics to predict the words that should come after the ones you provide. In other words, it’s similar to a fortune teller who reads your words and cues to pretend to have psychic powers.

If you feed it phrases the internet likes to lie and joke about, it’ll give you the kind of cringey answers people share on TikTok.

Anonymous 0 Comments

As others have said, ChatGPT was not designed to answer correctly. It was designed to answer in a way that sounds correct.

May I suggest reading the blog and/or book by [Janelle Shane](https://www.janelleshane.com/) on this topic? She is quite funny as she explains what AIs can and can’t do.

Anonymous 0 Comments

It doesn’t know what a real/fake reference is. It knows how to string words together into something that sounds like an argument, and it knows that when other people write arguments, they put a string of names/dates/titles/publishers at the end in APA format, but it doesn’t understand the connection between those things.
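That "shape without substance" point can be shown with a sketch: filling an APA-like template with plausible-sounding parts reproduces the look of a citation perfectly while citing nothing. (All names and journals below are invented for illustration; this is an analogy, not how the model internally works.)

```python
# Why fake references look real: mimic the *shape* of a citation
# without any real source behind it. Every part below is invented.
import random

surnames = ["Smith", "Nguyen", "Okafor", "Ivanova"]
topics = ["memory consolidation", "urban heat islands", "protein folding"]
journals = ["Journal of Applied Studies", "Annual Review of Examples"]

def fake_citation():
    """Return an APA-shaped string that cites nothing real."""
    return (f"{random.choice(surnames)}, A. ({random.randint(1995, 2022)}). "
            f"On {random.choice(topics)}. {random.choice(journals)}, "
            f"{random.randint(1, 40)}(2), 101-118.")

print(fake_citation())  # looks like a reference; corresponds to no paper
```

A model that has only learned what citations look like will happily do the equivalent of this, which is why asking it not to fabricate references doesn’t help: it can’t tell the difference.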

GPT is a baby. Beyond the superficial level, in a very real sense, it doesn’t know what it’s doing. It’s just good at mimicking people who do.

If I can anthropomorphize a bit, it doesn’t intend to lie or cheat here. In a few years it might understand those concepts, but right now it’s trying to play a game without knowing most of the rules. When my friend taught me basketball, I committed double-dribbling, up-and-down and travelling multiple times in the first minute. It’s not that I set out to cheat; I was just trying to do what I’d seen basketball players do, without knowing any of the nuances.