Why can’t LLMs like ChatGPT calculate a confidence score when providing an answer to your question and simply reply “I don’t know” instead of hallucinating an answer?


It seems like they all happily make up a completely incorrect answer and never simply say “I don’t know”. Hallucinated answers seem to come up when there isn’t much information to train them on a topic. Why can’t the model recognize the low amount of training data and generate a confidence score to flag when it’s making stuff up?

EDIT: Many people rightly point out that the LLMs themselves can’t “understand” their own responses and therefore cannot determine whether their answers are made up. But the question also covers the fact that chat services like ChatGPT already have support services, like the Moderation API, that evaluate the content of your query and the model’s own responses for content moderation purposes, and intervene when the content violates their terms of use. So couldn’t you have another service that evaluates the LLM’s response and assigns a confidence score? Perhaps I should have said “LLM chat services” instead of just “LLMs”, but alas, I did not.
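To make the edit’s idea concrete, here is a minimal sketch of such a “confidence gate” wrapper. Everything in it is hypothetical: `generate_answer` and `score_confidence` are stand-ins, not real APIs, and the hard part — where the evaluator would get reliable ground truth from — is exactly what the sketch papers over.

```python
# Hypothetical sketch of a confidence-gated LLM chat service.
# Both helper functions below are stand-ins invented for illustration.

def generate_answer(question: str) -> str:
    # Stand-in for a call to the underlying LLM.
    return "A carbon atom has four valence electrons."

def score_confidence(question: str, answer: str) -> float:
    # Stand-in for a second evaluator service (analogous to the
    # Moderation API, but scoring factual confidence instead of
    # policy violations). A real version would need some source of
    # ground truth, which is the unsolved part of the problem.
    return 0.42

def answer_with_gate(question: str, threshold: float = 0.7) -> str:
    # Refuse to answer when the evaluator's score is below threshold.
    answer = generate_answer(question)
    if score_confidence(question, answer) < threshold:
        return "I don't know."
    return answer
```

With the placeholder score of 0.42 and a threshold of 0.7, the gate would reply “I don’t know.” — the structure is easy; producing a trustworthy score is not.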

In: Technology

25 Answers

Anonymous

In order to generate a confidence score, it’d have to understand your question, understand its own generated answer, and understand how to calculate probability. (To be more precise, the probability that its answer is going to be factually true.)

That’s not what ChatGPT does. What it does is figure out which sentence a person is most likely to say in response to your question.

If you ask ChatGPT “How are you?” it replies “I’m doing great, thank you!” This doesn’t mean that ChatGPT is doing great. It’s a mindless machine and can’t be doing great or poorly. All that this answer means is that, according to ChatGPT’s data, a person who’s asked “How are you?” is likely to speak the words “I’m doing great, thank you!”

So if you ask ChatGPT “How many valence electrons does a carbon atom have?” and it replies “A carbon atom has four valence electrons,” then you gotta understand that ChatGPT isn’t saying a carbon atom has four valence electrons.
All it’s actually saying is that a person that you ask that question is likely to speak the words “A carbon atom has four valence electrons” in response. It’s not saying that these words are true or false. (Well, technically it’s stating that, but my point is you should interpret it as a statement of what people will say.)
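The point can be sketched with a toy next-word distribution (hand-written here, standing in for a real trained model): the model does have per-token probabilities, but they measure how people usually continue a sentence, not how likely the statement is to be true.

```python
# Toy sketch of next-token prediction. The probability table is
# invented for illustration; a real model computes one like it over
# its whole vocabulary at every step.
next_word_probs = {
    "four": 0.90,   # "A carbon atom has ___ valence electrons"
    "six": 0.05,
    "two": 0.03,
    "eight": 0.02,
}

# The model emits the statistically most likely continuation...
best = max(next_word_probs, key=next_word_probs.get)

# ...and that 0.90 means "people usually say 'four' here", not
# "this statement is 90% likely to be factually true". For a topic
# with little or contradictory training data, the distribution is
# flatter, but the model still just picks the top word.
```

Here `best` comes out as `"four"` only because the training text makes that the common continuation; the same mechanism will just as happily emit a common-sounding but false continuation.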

**tl;dr: Whenever ChatGPT answers something you asked, you should imagine that its answer is followed by “…is what people are statistically likely to say if you ask them this.”**
