Eli5, What is a Large Language Model, and does it mean that the computer is thinking like a human?

3 Answers

Anonymous 0 Comments

A large language model is a model that has been trained on a large dataset, a very, very, very large dataset. Think of a big part of the internet. Based on that, the model has learned what text should look like, because it has seen so much of it.

Then, models like ChatGPT are able to infer the next probable word in a sentence. So when you type your question to ChatGPT, the model doesn’t understand the question. It doesn’t even understand that it’s a question. All it does is predict the most probable word that should follow the question, then the next one, and the one after, and so on, until it has built the answer.
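Here’s a tiny Python sketch of that “predict the next word, then the next” loop. The word table and its probabilities are completely made up for illustration; a real model learns billions of such patterns and works at a vastly larger scale.

```python
# Toy "language model": for each word, the probability of each word that tends
# to follow it. These words and numbers are invented purely for illustration.
next_word_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down.": 1.0},
    "ran": {"away.": 1.0},
}

def generate(prompt_word, max_words=5):
    """Repeatedly predict the most probable next word, as described above."""
    words = [prompt_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no known continuation, so stop generating
        # pick the most probable next word (real models sample from the distribution)
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # "the cat sat down."
```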

Image-generating models work the same way, except they predict pixels instead of words.

In both cases, the models are “dumb” in the sense that they don’t analyze the question/prompt. They just base their answers on probabilities computed over an enormous amount of data.

Anonymous 0 Comments

A Large Language Model is basically a really advanced auto-complete. If I show you a hundred fairy tales, all of which begin with “Once upon a time”, and I ask you to give me the first word of the next fairy tale, you’ll probably guess “Once” and be right. When I tell you the first word is “Once” and ask you to predict the next word, “upon” is a virtual certainty, and so on. You can do that even if you don’t know what any of those words mean. You just know it’s the right next word because you’ve seen it many times.
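You could sketch that fairy-tale guessing game in a few lines of Python: count which word follows which in some training text, then always predict the most common follower. The three one-line “fairy tales” below are stand-ins for what would really be a huge training dataset.

```python
from collections import Counter, defaultdict

# Count, for each word, which words followed it in the training text.
fairy_tales = [
    "once upon a time there was a princess",
    "once upon a time there lived a dragon",
    "once upon a time there was a frog",
]

follower_counts = defaultdict(Counter)
for tale in fairy_tales:
    words = tale.split()
    for current, nxt in zip(words, words[1:]):
        follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    counts = follower_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("once"))   # "upon"
print(predict_next("upon"))   # "a"
print(predict_next("there"))  # "was" (seen twice, vs. "lived" once)
```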

LLMs are like that, but a million times more complex. They have enough knowledge based on internet content to recognize many different kinds of text—stories, questions and answers, essays, legal briefs—and so when you ask them to do their “auto-complete” task, they can do a very good job of writing things that look like things they’ve been trained on.

But LLMs don’t really *understand* what they’re saying. They’re just mimicking patterns they’ve been trained on. It’s just that they’re *so good at it* that it looks like (and usually *is*) knowledgeable and relevant information. Even when you ask one a basic math problem, it’s not actually doing the math; it just breaks the problem down into words and sentences, then uses auto-complete to end up at something that may or may not be the right answer. And often it’s wrong.

This also means it can’t really handle prompts it hasn’t seen on the internet before, like a “count how many words are in this sentence” task, or “write a paragraph about economics that doesn’t use the letter C”. There’s a lot of knowledge that comes out in its auto-completions, but there isn’t any *intelligence* giving it a goal in deciding what it should say.

Anonymous 0 Comments

An LLM is a very advanced “AI” that supposedly mimics really good speech by spitting out the most likely answers to an input.

It does not think. Just because it seems to doesn’t mean it does. It’d be like saying your autocorrect is sentient. No, it just strings together probable words.

I can understand the second question, though, because I’ve met humans so petty and predictable that I could emulate them with a bot.