why do models like ChatGPT forget things during conversations or make things up that are not true?


23 Answers

Anonymous

At their heart, these models function like [Markov Chains](https://en.wikipedia.org/wiki/Markov_chain). They have a massive statistical model, built by mining the internet, that tells them which words are likely to follow one another in response to a prompt. The prompt gets broken down into a structure the model can "understand", and the model keeps a fairly long memory of previous prompts and responses, but it doesn't actually understand what the prompt says. If you refer back to earlier prompts and responses in a way the model can't identify, it won't make the connection. The Markovian nature of the chain also means the model has no real understanding of what it is saying; all it knows is which words are likely to occur in which order. For example, if you ask it for the web address of an article, it won't actually search for that article; it will generate a web address that looks right according to its data.
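To make the "which words are likely to follow one another" idea concrete, here is a minimal toy sketch of a first-order Markov chain text generator. The corpus and all names are invented for illustration, and real models like ChatGPT are neural networks rather than literal word-lookup tables, but the core intuition the answer describes (pick a plausible next word, never check whether the result is true) looks like this:

```python
import random
from collections import defaultdict

# Tiny "training" corpus, invented for this example.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the log "
    "the cat chased the dog"
).split()

# Record which words were observed to follow each word.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:  # dead end: this word was never followed by anything
            break
        # Sampling from the list weights choices by how often they occurred.
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Note that the generator happily produces sentences no one ever wrote, because each step only asks "what word plausibly comes next?", not "is this statement true?" — the same reason a model can emit a plausible-looking but nonexistent web address.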
