Why do models like ChatGPT forget things during conversations or make things up that are not true?

23 Answers

Anonymous

Very simply, these models don’t know anything about the meaning of the words they use. During training, they learned statistical relationships between words and phrases across millions of pieces of text.

When you ask them to respond to a prompt, they build the reply one word at a time, repeatedly appending the most probable next word given everything written so far. The result is largely grammatically correct, but it may be completely meaningless or entirely wrong.
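
To make the “most probable next word” idea concrete, here is a minimal sketch in Python. It is a toy bigram model, purely illustrative and nothing like ChatGPT’s actual neural-network architecture: it counts which word tends to follow which in a tiny made-up corpus, then generates text by repeatedly appending a likely next word.

```python
from collections import Counter, defaultdict
import random

# Illustrative toy example only: real models use neural networks over long
# contexts, not raw word counts, but the generation loop is the same in spirit.

# A tiny made-up "training corpus".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# "Training": count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start, length=8):
    """Build a reply by repeatedly appending a likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:  # no statistics for this word, so stop
            break
        candidates, weights = zip(*counts.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the dog sat on the mat . the cat chased"
# It looks grammatical, but nothing checks whether any of it is true.
```

Notice that the generator happily stitches together sentences that never appeared in its corpus and has no mechanism for checking them against reality; that is essentially what a “hallucination” is, just at a vastly larger scale.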
