ELI5: Why does AI like ChatGPT or Llama 3 make things up and fabricate answers?


I asked it for a list of restaurants in my area using Google Maps, and it said there is a restaurant (Mug and Bean) in my area and even used a real address, but this restaurant is not in my town. It’s only in a neighboring town, with a different street address.

In: Technology

22 Answers

Anonymous 0 Comments

ChatGPT chooses the next word in a sentence by looking at how often different words come after the previous ones in the material that was used to train it. It doesn’t have the ability to evaluate whether the most probable word makes a true statement.
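
As a toy illustration of that idea (real models use neural networks over tokens, not simple word counts, but the principle is the same), here is a sketch of “pick the next word by how often it followed the previous one,” using made-up training text:

```python
# Toy sketch, NOT the real model: next-word prediction from bigram counts.
import random
from collections import Counter, defaultdict

training_text = "the cafe is open the cafe is closed the bar is open".split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a fluent-looking sentence; nothing here checks whether it is true.
word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The generator happily prints “the cafe is closed” or “the cafe is open” with no way to know which one is actually the case; it only knows which word sequences are common.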

Anonymous 0 Comments

Because AI like ChatGPT isn’t **thinking** about the response; it’s basically glorified autocomplete. It has a huge dataset of words and the probability that a word will come after another word. It doesn’t “understand” anything it’s outputting, only variables and probabilities.

Never ever trust information given by an AI chatbot.

Anonymous 0 Comments

They are not “intelligent”. They are fancy-shmancy autocompletes, just like the basic autocomplete on your phone.

They are designed to *generate* text which *looks* human-written. That’s it.

Anonymous 0 Comments

It’s not actually thinking. It’s probabilistically associating. That’s often fine for writing, but useless for technical questions without clear answers, or for ones with multiple plausible answers, like street addresses.

Anonymous 0 Comments

Basically it asks itself “How would a human answer this question?”, looking at its training data – which is largely text and conversations from the internet prior to 2022.

What that tells it is that a human would say something along the lines of “[male Italian name]’s Pizzeria” or “[Color] [Dragon, Tiger or Lotus] Restaurant”.

So it tells you that, because that’s what humans say when asked for restaurants.

Anonymous 0 Comments

It isn’t actually “making up” an answer, in that it isn’t some kind of deception or the like (that would require intent, and it does not have intent, it’s just a *very* fancy multiplication program).

It is assembling data into a grammatically correct sentence, based on the sentences you gave it. The internal calculations that figure out whether the sentence is grammatically correct have zero ability to know whether the statements it makes are *factual* or not.

The technical term, in “AI” design, for this sort of thing is a “hallucination.”

Anonymous 0 Comments

LLMs do not *know* anything and you should not use them to research or reference real facts. They simply predict what is likely to be the next word in a sentence.

Anonymous 0 Comments

Because large language models don’t really understand what the “truth” is.

They know how to build human-readable sentences from the patterns in the text they were trained on (and some chatbots are additionally hooked up to web search tools).

When you ask them a question, they will attempt to build an appropriate human-readable answer, filling in specific-looking details from those learned patterns (or from search results, if the tool provides them) to base the sentence(s) around.

At no point in this process does the model itself check that what it’s saying is actually *true*.

Anonymous 0 Comments

These systems lack any inherent knowledge; all they do is try to predict the next word in a sentence.

They are reasoning engines, not databases. You can, to an extent, avoid hallucinations by feeding them context-relevant information (e.g. the Wikipedia page on the topic you are asking about) together with your prompt. This is what many tools built on ChatGPT’s API do, but the model may still invent things even when this is done.
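
As a rough illustration of that context-feeding approach (often called retrieval-augmented generation), here is a minimal sketch using the OpenAI Python SDK. The model name, the helper function, and the reference snippet are placeholders, and real tools automate the retrieval step instead of pasting text in by hand:

```python
# Minimal sketch: ground the model's answer in text we supply ourselves.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_context(question: str, reference_text: str) -> str:
    """Ask the model a question, instructing it to answer only from the supplied text."""
    prompt = (
        "Answer the question using ONLY the reference text below. "
        "If the answer is not in the text, say you don't know.\n\n"
        f"Reference text:\n{reference_text}\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage: paste in a snippet (e.g. copied from a Wikipedia article) as context.
wiki_snippet = "...text copied from the relevant Wikipedia article..."
print(ask_with_context("Where is the nearest Mug and Bean located?", wiki_snippet))
```

Even with grounding like this, the model can still misread or embellish the supplied text, which is why the point below about keeping a human in the loop still applies.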

Due to this risk there always needs to be a human in the loop who validates the output of these models, and you should never trust anything these models claim unless you can validate it.