why do models like ChatGPT forget things during conversations or make things up that are not true?

23 Answers

Anonymous 0 Comments

The model doesn’t “understand” anything. It doesn’t think. It’s just really good at “these words look suitable when combined with those words”. There is a limit (the context window) to how many of “those words” it can take into account when generating a new response, so older parts of the conversation fall out and are effectively forgotten.
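As a toy sketch of that limit (the tiny window size and example words are made up here; real models count context in thousands of tokens), “forgetting” is just the oldest words falling outside what the model is shown:

```python
CONTEXT_LIMIT = 8  # hypothetical; real context windows hold thousands of tokens

def visible_context(conversation_tokens, limit=CONTEXT_LIMIT):
    """Return only the most recent tokens the model can 'see' when replying."""
    return conversation_tokens[-limit:]

# An early fact ("my name is Ada") is pushed out once the chat grows:
tokens = ["my", "name", "is", "Ada", ".", "later", "on", "...",
          "what", "is", "my", "name", "?"]
print(visible_context(tokens))  # the earliest tokens are no longer visible
```

Nothing is “erased” in any deliberate sense; the model simply never receives the older words as input.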

And since words are just words, the model doesn’t care about them being true. The better it is trained, the narrower (and closer to the truth) its sense of “this phrase looks good in this context” becomes for a specific topic, but it’s imperfect and doesn’t cover everything.
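A hedged toy illustration of that point (the candidate words and probabilities below are invented, not from any real model): the model always emits *some* plausible-looking word, and there is no truth check anywhere in the process:

```python
import random

# Invented probabilities: the model ranks candidate next words by how
# plausible they look after "The capital of France is ...", not by truth.
next_word_probs = {"Paris": 0.7, "Lyon": 0.2, "Atlantis": 0.1}

def sample_next_word(probs):
    """Pick a next word in proportion to its learned probability."""
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

# Usually "Paris", but nothing structurally prevents "Atlantis" from
# coming out — that's what a made-up answer looks like from the inside.
print(sample_next_word(next_word_probs))
```

Better training squeezes more of the probability onto the true answer, but it never becomes a lookup of facts.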
