Because large language models don’t really understand what the “truth” is.
They know how to build human-readable sentences, drawing on patterns from the enormous amount of internet text they were trained on. When you ask them a question, they will attempt to build an appropriate human-readable answer, pulling specific details out of that training data (or from whatever search tools they happen to be hooked up to) to base the sentences around.
At no point in this process do they do any kind of checking that what they're saying is actually *true*.
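For the technically curious, here's a minimal sketch of what that generation loop looks like under the hood. It assumes the Hugging Face `transformers` library and the small public `gpt2` checkpoint, purely as an illustration: the model just keeps appending whatever word is statistically most likely to come next, and nothing in the loop ever verifies the resulting claim.

```python
# Minimal illustration (assumes: pip install transformers torch, and the public "gpt2" model).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: at each step the model appends the single most likely next token.
# There is no lookup, citation, or fact-check anywhere in this loop -- just
# "what word usually comes next in text like this?"
output_ids = model.generate(**inputs, max_new_tokens=15, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```

If the most statistically plausible continuation happens to be correct, you get a true answer; if it isn't, you get a confident-sounding false one, and the model can't tell the difference.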