why do models like ChatGPT forget things during conversations or make things up that are not true?


23 Answers

Anonymous 0 Comments

Very simply, they don’t know anything about the meaning of the words they use. Instead, during training, the model learned statistical relationships between words and phrases used in millions of pieces of text.

When you ask them to respond to a prompt, they glue the most probable words to the end of a sentence to form a response that is largely grammatically correct, but may be completely meaningless or entirely wrong.
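
To make that idea concrete, here is a toy sketch in Python (the real model is a neural network working over word pieces and far more context, not simple word-pair counts; the training text here is invented):

```python
from collections import Counter, defaultdict

# Tiny made-up "training data"; the real model sees billions of words.
training_text = "the cat sat on the mat . the dog sat on the rug ."

# Learn the statistical relationships: count which word follows which.
bigram_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    bigram_counts[current_word][following_word] += 1

def most_probable_next(word):
    """Return the word that most often followed `word` in the training text."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_probable_next("sat"))  # "on" -- chosen purely from co-occurrence counts, with no idea what "sat" means
```

Gluing such guesses together produces fluent-looking text without any notion of whether it is true.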

Anonymous 0 Comments

Because ChatGPT is NOT A TRUTH MODEL. This has been explained from day 1. ChatGPT is not “intelligent” or “knowledgeable” in the sense of understanding human knowledge. It is “intelligent” only in that it knows how to take natural-language input and put together words that look like a response to that input. ChatGPT is a language model – it has NO ELEMENT IN IT that searches for “truth” or “fact” or “knowledge” – it simply regurgitates output patterns that it interprets from input word patterns.

Anonymous 0 Comments

It is only made to produce text that looks like it could have been written by a person. It is not evaluated on whether what it says is true, so truth is given no value.

Anonymous 0 Comments

The model doesn’t “understand” anything. It doesn’t think. It’s just really good at “these words look suitable when combined with those words”. There is a limit on how many of “those words” it can take into account when generating a new response, so older things get forgotten.

And since words are just words, the model doesn’t care about them being true. The better it is trained, the narrower (and closer to the truth) its sense of “this phrase looks good in this context” becomes for a specific topic, but it’s imperfect and doesn’t cover everything.

Anonymous 0 Comments

Because it’s not *artificial intelligence* despite mainstream media labeling it as such. There’s no actual intelligence involved.

They don’t think. They don’t rely on logic. They don’t remember. They just compare the text you’ve given them to what was in their training sample.

They take your input and use statistics to determine which string of words would make the best answer. They use huge mathematical functions to imitate speech, but they are not intelligent in any actual way.

Anonymous 0 Comments

All Machine Learning models (often called artificial intelligence) take a whole bunch of data and try to identify patterns or correlations in that data. ChatGPT does this with language. It’s been given a huge amount of text, so based on a particular input it guesses the most likely word to follow that prompt.

So if you ask ChatGPT to describe how to make pancakes, rather than actually knowing how pancakes are made, it’s using whatever correlation it learnt about pancakes in its training data to give you a recipe.

This recipe could be an actual working recipe that was in its training data, it could be an amalgamation of recipes from the training data, or it could get erroneous data and include cocoa powder because it also trained on a chocolate pancake recipe. But at each step, it’s just using a probability calculation for what the next word is most likely to be.
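
As a rough illustration of that last step (the probabilities and recipe words here are invented, and a real model scores tens of thousands of candidate tokens, not four):

```python
import random

# Hypothetical probabilities for the word that follows "add the ..." in a pancake recipe.
next_word_probs = {
    "flour": 0.55,
    "milk": 0.25,
    "eggs": 0.15,
    "cocoa": 0.05,  # picked up from chocolate-pancake recipes in the training data
}

candidates = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one next word in proportion to its probability:
# usually "flour", but occasionally "cocoa" sneaks into your plain pancakes.
print(random.choices(candidates, weights=weights, k=1)[0])
```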

Anonymous 0 Comments

It’s called a “Generative AI” for a reason: you ask it questions, and it generates reasonable-sounding answers. Yes, this literally means it’s *making it up*. The fact that it’s able to make things up which *sound* reasonable is exactly what’s being shown off, because this is a major achievement.

None of that means that the answers are real or correct…because they’re made up, and only built to *sound* reasonable.

Anonymous 0 Comments

ChatGPT doesn’t actually “know” anything. What it’s doing is predicting what words should follow a previous set of words. It’s really good at that, to be fair, and what it writes often sounds quite natural. But at its heart, all it’s doing is saying “based on what I’ve seen, the next words that should follow this input are as follows”. It might even tell you something true, if the body of text it was trained on happened to contain the right answer, such that that’s what it predicts. But the thing you need to understand is that the *only* thing it’s doing is predicting what text should come next. It has no understanding of facts, in and of themselves, or the semantic meaning of any questions you ask. The only thing it’s good at is generating new text to follow existing text in a way that sounds appropriate.
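
A toy sketch of that point, with an invented two-sentence “training corpus”: the same predict-what-comes-next rule yields a true answer only when the training text happened to contain it.

```python
# Made-up corpus for illustration only.
training_sentences = [
    "water boils at 100 degrees celsius at sea level",
    "the capital of france is paris",
]

def predicted_continuation(prompt):
    """Return whatever followed `prompt` in the toy corpus, if anything did."""
    for sentence in training_sentences:
        if sentence.startswith(prompt):
            return sentence[len(prompt):].strip()
    # Nothing matched: a real model would still generate something fluent here,
    # with no built-in way to flag that it is only guessing.
    return "<fluent but unfounded guess>"

print(predicted_continuation("the capital of france is"))    # "paris" -- true, because it was in the data
print(predicted_continuation("the capital of freedonia is")) # a guess, grounded in nothing
```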

Anonymous 0 Comments

ChatGPT is basically a text predictor: you feed it some words (the whole conversation, both the user’s words and what ChatGPT has responded previously) and it guesses one next word. Repeat that a few times until you have a full response, then send it to the user.
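
The loop itself is simple; all the complexity hides inside the next-word guess. A sketch, with the guessing faked by a random choice (a real model scores its whole vocabulary against the conversation):

```python
import random

def guess_next_word(conversation_so_far):
    # Placeholder for the actual language model.
    return random.choice(["pancakes", "are", "easy", "to", "make", "<end>"])

def generate_reply(conversation, max_words=50):
    reply = []
    for _ in range(max_words):
        word = guess_next_word(conversation + " " + " ".join(reply))
        if word == "<end>":  # the model also predicts when to stop
            break
        reply.append(word)
    return " ".join(reply)

print(generate_reply("User: how do I make pancakes?"))
```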

The goal of its guessing is to sound “natural” – more precisely, similar to what people write. “Truth” is not an explicit target here. Of course, to avoid speaking gibberish it learned and repeats many true facts, but if you wander outside its knowledge (or confuse it with your question), ChatGPT will make things up out of thin air – and they still sound kind of “natural” and fit into the conversation, which is the primary goal.

The second reason is the data it was trained on. ChatGPT is a Large Language Model, and those require a really *huge* amount of data for training. OpenAI (the company that makes ChatGPT) used everything they could get their hands on: millions of books, Wikipedia, text scraped from the internet, etc. Apparently Reddit comments were an important part! The data wasn’t fact-checked (there was way too much of it to check), so ChatGPT learned plenty of the silly things people write. It’s actually surprising it sounds reasonable most of the time.

The last thing to mention is the “context length”: there is a technical limit on the number of previous words in a conversation that can be fed in when predicting the next word. If you go above it, the earliest words are not taken into account at all, which looks as if ChatGPT has forgotten something. This limit is about 3000 words, but some of it (maybe a lot, we don’t know) is taken up by initial instructions (like “be helpful” or “respond succinctly” – again, a guess; the actual text is secret). Also, even below the context-length limit, the model probably pays more attention to recent words than to older ones.
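
In code, the “forgetting” caused by the context limit is nothing mysterious, just truncation. A sketch (the numbers are illustrative, the hidden instructions are guessed, and real models count tokens, i.e. word pieces, rather than whole words):

```python
CONTEXT_LIMIT = 3000  # roughly, in words
SYSTEM_PROMPT = "You are a helpful assistant. Respond succinctly."  # guessed hidden instructions

def build_model_input(conversation_words):
    budget = CONTEXT_LIMIT - len(SYSTEM_PROMPT.split())
    # Keep only the most recent words that fit; earlier ones are simply dropped,
    # which is why the model appears to forget the start of a long chat.
    visible = conversation_words[-budget:]
    return SYSTEM_PROMPT.split() + visible

long_chat = ["word"] * 5000
print(len(build_model_input(long_chat)))  # 3000 -- the first ~2000 words never reach the model
```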

Anonymous 0 Comments

They’re not actually intelligent. They’re kind of like a theoretical “Chinese Room” operating on a word or phrase basis.

The Chinese Room is a longstanding AI thought experiment in which someone who knows zero Chinese sits behind a door. You slide them Chinese characters and they respond with what should be the answer by looking it up in a chart. They have no idea what they’re reading or writing.
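
Reduced to code, the “chart” is just a lookup table: the operator matches symbols and copies out a response without understanding either side (the entries below are invented):

```python
# The operator cannot read Chinese; they only match shapes against the chart.
rule_chart = {
    "你好吗?": "我很好, 谢谢.",       # "How are you?" -> "I'm fine, thanks."
    "现在几点?": "对不起, 我不知道.",  # "What time is it?" -> "Sorry, I don't know."
}

def room_operator(characters_slid_under_door):
    return rule_chart.get(characters_slid_under_door, "请再说一遍.")  # "Please say that again."

print(room_operator("你好吗?"))
```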