Very simply, they don’t know anything about the meaning of the words they use. Instead, during training, these models learned statistical relationships between words and phrases across millions of pieces of text.
When you ask them to respond to a prompt, they glue the most probable next word onto the end of the response, one word at a time, producing text that is largely grammatically correct but may be completely meaningless or entirely wrong.
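To make the idea concrete, here is a minimal toy sketch of that "most probable next word" process. It uses a simple bigram model (counting which word follows which in a tiny made-up corpus), which is vastly cruder than a real language model, and the corpus, function names, and parameters are all invented for illustration:

```python
import random
from collections import defaultdict, Counter

# Toy "training" corpus: count which word follows which.
# Real models learn from millions of texts; this is just an illustration.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram counts: word -> how often each next word followed it in training.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(start, length=8):
    """Glue a statistically likely next word onto the text, one step at a time."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # never saw this word during "training"
        words, counts = zip(*candidates.items())
        # Pick the next word in proportion to how often it followed in training.
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Notice that nothing in this code knows what a cat or a mat *is*; it only knows which words tended to follow which, which is why the output can look fluent while saying nothing true.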