These systems lack any inherent knowledge; all they do is try to predict the next word in a sentence.
They are reasoning engines, not databases. You can, to an extent, avoid hallucinations by feeding the model context-relevant information (e.g. the Wikipedia page on the topic you are asking about) together with your prompt. This is what many tools built on ChatGPT’s API do, but the model may still invent things even then.
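The idea of supplying context together with the prompt can be sketched as below. This is an illustrative example only; the template wording and the function name `build_grounded_prompt` are made up for this sketch, not taken from any particular tool or API.

```python
# A minimal sketch of grounding a prompt in supplied reference text,
# so the model answers from that text rather than from its own
# (unreliable) memorized knowledge. Hypothetical template, for illustration.

def build_grounded_prompt(context: str, question: str) -> str:
    """Combine retrieved reference text with the user's question."""
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Example: pass a Wikipedia excerpt as context before asking.
prompt = build_grounded_prompt(
    context="The Eiffel Tower was completed in 1889.",
    question="When was the Eiffel Tower completed?",
)
print(prompt)
```

The combined string would then be sent to the model in place of the bare question; note that, as the text above says, this reduces but does not eliminate hallucinations.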
Due to this risk, there always needs to be a human in the loop who validates the output of these models, and you should never trust anything these models claim unless you can verify it yourself.