why do models like ChatGPT forget things during conversations or make things up that are not true?

ChatGPT doesn’t actually “know” anything. What it’s doing is predicting which words should follow a previous set of words. It’s genuinely good at that, and what it writes often sounds quite natural. But at its heart, all it’s doing is saying “based on what I’ve seen, the words that should follow this input are as follows”. It might even tell you something true, if the body of text it was trained on happened to contain the right answer often enough that the truth is also the most likely prediction. But the key thing to understand is that the *only* thing it’s doing is predicting what text should come next. It has no understanding of facts in and of themselves, or of the semantic meaning of the questions you ask. That’s why it “makes things up”: from the model’s point of view, a fluent-sounding but false continuation is just as good an output as a true one.

As for forgetting things mid-conversation: the model can only take in a limited amount of recent text at once (its “context window”). Once a conversation grows past that limit, the earliest parts fall outside what the model can see when it predicts its next words, so it behaves as if they never happened.
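Here’s a rough sketch of the idea in Python. It is nothing like ChatGPT’s actual internals (real models use large neural networks trained on enormous corpora, not bigram counts), but it shows what “pure next-word prediction with no concept of truth” means. The tiny corpus is a made-up example:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus. Note it even contains a false sentence;
# the predictor below has no way to know or care which sentences are true.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of france is paris ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus.

    There is no fact-checking here: the output is whatever continuation
    was most common in the training text, true or not.
    """
    return follows[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word = "the"
output = [word]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # -> "the capital of france is paris ."
```

If the corpus had said “lyon” more often than “paris”, the same code would confidently print the false answer. That, in miniature, is a hallucination.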
