Why does ChatGPT lie? And why can’t this be fixed easily?

I’ve tried asking it to write arguments and support them but the references are fake. It apologizes when confronted but does it over and over again even when I ask it not to provide fake references.

45 Answers

Anonymous 0 Comments

It’s not googling those references or anything, because it’s not a search engine; it’s literally a very good making-up engine. It makes up words and sentences that seem relevant based on all the data it was trained on, which is why it holds a conversation so well. Don’t ask it for references.
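To make that concrete, here’s a tiny, made-up sketch in Python of the “making-up engine” idea. The training snippet and names below are invented for illustration, and real ChatGPT works on probabilities over billions of tokens rather than a little lookup table, but the spirit is the same: the toy model only learns which word tends to follow which, so it can happily stitch together something that looks like a reference it never actually saw.

```python
import random
from collections import defaultdict

# Tiny invented "training set" of reference-like text.
training_text = (
    "Smith et al 2019 studied memory in Nature . "
    "Jones et al 2021 studied attention in Science . "
    "Smith et al 2021 studied learning in Nature ."
)

# Learn which word tends to follow which -- that's all this toy model knows.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def make_up(start, length=8):
    """Stitch new text together by repeatedly picking a plausible next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Can print e.g. "Smith et al 2019 studied attention in Science ." --
# it reads like a real citation, but nothing ever checked whether it exists.
print(make_up("Smith"))
```

Nothing in there verifies a fact; it just recombines fragments that co-occurred in training, which is exactly why the references come out looking plausible and being fake.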

Anonymous 0 Comments

Think of it this way: if a child watched you dial numbers on a phone, that child would figure out that dialing numbers is how you call people. The child might even be able to imitate you perfectly and dial the same number.

But what happens if you ask the child to call, let’s say, the local library? They’re just gonna hit random numbers. They’ve learned to imitate the behavior, but they have no idea how it actually works. That’s more or less what these AIs are currently doing. They’re extremely good at imitating us.

Anonymous 0 Comments

It understands the concept of a reference, but not the meaning. Things like this are why it’s free to use at the moment; they need people doing this. It won’t be free forever.

Anonymous 0 Comments

ChatGPT does not lie; you can only lie if you have the intent to trick someone.

ChatGPT is simply wrong, and that somehow gets hyped up in the media as lying.

Anonymous 0 Comments

Great question! Think of ChatGPT like this: it’s a big collection of learned associations between words. So when I’ve browsed Reddit for far longer than intended (as usual, dangit), I might see that “abusive” and “relationship” (as well as gyms and lawyers, apparently) are often paired and together have a negative connotation, whereas “relationship” can also be a positive thing in combination with other words. “Abusive”, however, is almost never positive in any combination. Imagine a massive collection of combinations and connotations like that, ready to interpret sentences through that lens. That’s ChatGPT. Except, besides understanding language, it can also generate it.
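If you want to see that “collection of learned associations” idea in miniature, it’s roughly the sketch below. This is deliberately crude and the example sentences are invented; the real model learns numerical vectors from billions of sentences, not a little counting table.

```python
from collections import Counter
from itertools import combinations

# A handful of invented example sentences standing in for the training data.
sentences = [
    "abusive relationship lawyer",
    "healthy relationship gym",
    "abusive relationship therapy",
    "happy relationship vacation",
]

# Count how often each pair of words appears together.
pair_counts = Counter()
for sentence in sentences:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        pair_counts[(a, b)] += 1

# A word's "associations" are just the words it keeps company with most often.
for (a, b), count in pair_counts.most_common():
    if "relationship" in (a, b):
        print(a, b, count)
```

The model never learns what a relationship is; it only learns which words keep each other company, and that turns out to be enough to sound fluent.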

So how does one generate words as a model? Well, basically, you do the same trick as before when trying to understand language, except now you’ve been taught combinations of words in RESPONSE to sentences. You don’t just try to comprehend them: you’re trying to say meaningful things as a reaction to what users say or ask. The folks over at OpenAI trained ChatGPT to do just that, feeding it prompts and giving it feedback on the quality of its responses. Makes sense, right? If you want to learn German, someone has to tell you “Nein. Das sagen wir so nicht.” (“No, we don’t say it like that.”) for you to ponder blankly until you finally get the highly coveted “Gut. Jetzt lass mich in Ruhe.” (“Good. Now leave me alone.”) every now and then; the same goes for the model.
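The feedback part can be pictured something like the toy below. This is a hypothetical sketch with an invented prompt and replies; the real training uses a learned reward model and gradient updates, not a score table.

```python
# Toy sketch of learning from feedback: each candidate reply to a prompt
# gets a score, a trainer's thumbs-up or thumbs-down nudges that score,
# and the model ends up preferring the replies that earned praise.
candidate_replies = {
    "The capital of France is Paris.": 0.0,
    "The capital of France is Lyon.": 0.0,
}

def give_feedback(reply, good, step=1.0):
    """Nudge a reply's score up for praise, down for correction."""
    candidate_replies[reply] += step if good else -step

give_feedback("The capital of France is Paris.", good=True)
give_feedback("The capital of France is Lyon.", good=False)

# The model now "prefers" the praised reply -- without understanding
# anything about France or capitals.
best = max(candidate_replies, key=candidate_replies.get)
print(best)
```

Praised word patterns get reinforced and corrected ones get discouraged; no step in there checks whether a praised pattern is actually true in a new situation.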

What’s important to understand, then, is that the model doesn’t actually comprehend what it’s generating. Its outputs are just learned combinations of words that work well given the examples it has been shown. It can generate some VERY accurate things all on its own from all the patterns it has learned, but the further it moves away from what it has been taught, the less it has to go on for catching its own mistakes. After all, it can’t learn much beyond what it was taught before being deployed to you. It isn’t so much lying as grasping at straws, using words that should theoretically belong together to give you a very compelling story that is, well, nonsense, because it simply never learned a correct answer to your prompt.

I hope that helps it make a little more sense to you!

Anonymous 0 Comments

Not lying, just half-formed connections in a massive jumble of linked information. The apology is more of a hard-coded response than a genuine “thank you for providing direct negative feedback on this erroneous response, we will use it to improve in the future.”

It’s like if someone said, “I remember consuming media, probably a paper, from a scientist, Neil DeGrasse Tyson, or Einstein, and it was talking about finches in the Galapagos and how the birds prove evolution is what killed the dodo.” You might be able to piece together that they saw Neil on JRE talking about Darwin and the dodo’s lack of natural predators leaving it unequipped when humans started eating it. That person might even apologize for being wrong, but they didn’t lie to you.