why do models like ChatGPT forget things during conversations or make things up that are not true?


23 Answers

Anonymous 0 Comments

About forgetting: the model has a limit on how much recent text it can take into account when answering (its context window). So if that limit were 100 words and you told it a flower is red 101 words before asking about the flower, it would no longer “remember” that the flower is red. Real models count tokens rather than words and the limits are much larger, but the effect is the same: older parts of the conversation simply fall out of view.
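A minimal sketch of that idea, assuming a purely hypothetical 100-word window (this is not ChatGPT's actual code, just the truncation effect):

```python
# Toy illustration: a chat "model" that can only see its last N words of
# conversation. Facts mentioned earlier fall out of the window and are
# effectively forgotten.

CONTEXT_LIMIT = 100  # hypothetical limit, in words (real systems count tokens)

def visible_context(conversation_words):
    """Return only the most recent words that still fit in the context window."""
    return conversation_words[-CONTEXT_LIMIT:]

# "the flower is red" followed by 120 words of filler pushes the fact out.
conversation = ("the flower is red " + "filler word " * 60).split()

window = visible_context(conversation)
print("flower" in window)  # False: the fact is no longer visible to the model
```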

Anonymous 0 Comments

Why does your Scarlet Macaw seem to constantly lose the thread of your conversation? Because it’s just parroting back what it’s learned.

Language models have read an enormous number of human conversations. They know which words commonly go with which responses. They understand none of them.

Language models are trained parrots performing the trick of appearing to be human in their responses. They don’t care about truth, or accuracy, or meaning. They just want the cracker.
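To make the parroting concrete, here is a toy sketch of the statistical idea at the word level (real models are vastly more sophisticated, but the point stands: nothing in this process checks whether the output is true):

```python
# A toy "parrot": count which word tends to follow which in some training
# text, then always answer with the most common follow-up. The output only
# reflects word statistics, not truth.
from collections import Counter, defaultdict

training_text = "the sky is blue the sky is blue the sky is green".split()

follow_ups = defaultdict(Counter)
for word, next_word in zip(training_text, training_text[1:]):
    follow_ups[word][next_word] += 1

def predict_next(word):
    """Pick the statistically most common follow-up word, not the 'true' one."""
    return follow_ups[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue", simply because it appeared more often
```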

Anonymous 0 Comments

At heart, these models function like [Markov chains](https://en.wikipedia.org/wiki/Markov_chain). They have a massive statistical model, built by mining the internet, of which words are likely to occur in which order in response to a prompt. The prompt gets broken down into a structure the model can “understand”, and it keeps a fairly long memory of previous prompts and responses, but it doesn’t actually understand what the prompt says. If you refer back to earlier prompts and responses in a way the model can’t identify, it won’t make the connection. The Markovian nature of the process also means it has no real understanding of what it is saying; all it knows is which words are likely to occur in which order. For example, if you ask it for the web address of an article, it won’t actually search for that article; it will generate a web address that merely looks right according to its data.
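A toy sketch of that last point, with made-up domains and paths purely for illustration (the real model works on tokens and probabilities, not a hard-coded list like this):

```python
# Toy sketch of how a pattern-matcher can produce a plausible-looking but
# fake web address: it stitches together pieces it has seen before instead
# of looking anything up. All fragments below are invented examples.
import random

seen_domains = ["www.nytimes.com", "www.bbc.com", "en.wikipedia.org"]
seen_paths = ["2021/05/12/science", "news/technology", "wiki"]

def guess_url(topic):
    """Assemble something URL-shaped from familiar fragments; never verify it exists."""
    domain = random.choice(seen_domains)
    path = random.choice(seen_paths)
    slug = topic.lower().replace(" ", "-")
    return f"https://{domain}/{path}/{slug}"

print(guess_url("red flowers"))  # looks real, points nowhere in particular
```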