ELI5: How does a program like ChatGPT actually “learn” something?

Is there a fundamental difference between how a program like ChatGPT learns and how a human learns?

18 Answers

Anonymous 0 Comments

At the basic level, different weightings in its giant neural network are adjusted until it produces the expected (or roughly expected) output.
Simplified a bit, its “guessing algorithm” is retuned so that it gives a specific output for a given input.
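The “retuning” idea above can be sketched in a few lines. This is a minimal, hypothetical illustration with a single weight adjusted by gradient descent so the model’s guess moves toward the expected output; the data and learning rate are made up for the example, and real networks do the same thing across billions of weights at once.

```python
# One weight, nudged repeatedly so that guess = w * x
# moves toward the expected output y for each training pair.

def train(inputs, targets, lr=0.1, steps=200):
    w = 0.0  # start with an arbitrary weight
    for _ in range(steps):
        for x, y in zip(inputs, targets):
            guess = w * x        # the model's current output
            error = guess - y    # how far off the guess is
            w -= lr * error * x  # adjust the weight to shrink the error
    return w

# Data follows y = 2x, so the weight should settle near 2.0.
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))
```

After enough passes over the data, the weight converges to the value that reproduces the expected outputs, which is the whole of “learning” at this level of abstraction.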

Anonymous 0 Comments

Machine-learning systems like GPT “learn” through a number of techniques, none of which is anything like human learning. It’s more like breeding animals.

Imagine an AI as a grid of numbers.

Start with 100,000 of these AI grids and fill them with random numbers.

Ask these grids to do something useful, like answer a question that you already know the answer to.

“Kill” all the AIs that get the answer wrong.

Take the ones that got the answer right and let them “breed”, creating new AIs with mixes of the numbers that got that answer right, plus some randomness added in.

Repeat that thousands or millions of times, with millions of questions that you already know the answers to.

The “descendants” (keeping with the animal metaphor) that survive can reliably answer questions correctly.

For ChatGPT, the “questions” are what various types of writing on a variety of topics look like, and the “answers” are text that looks like a valid sentence, short essay, white paper, etc. The text doesn’t have to be factual or correct, because correctness isn’t what they’re selecting for.
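The kill-and-breed loop described above is essentially a genetic algorithm, and it can be sketched directly. This is a toy illustration of the analogy, not how GPT is actually built: the “grid” is just a short list of numbers, the “question” is a target vector the grids must match, and the population sizes are scaled down to run quickly. All the names and numbers here are invented for the example.

```python
import random

random.seed(0)

TARGET = [3.0, -1.0, 2.5]  # the "answer we already know"

def fitness(grid):
    # Lower is better: total distance from the known answer.
    return sum(abs(g - t) for g, t in zip(grid, TARGET))

def breed(a, b):
    # Mix numbers from two surviving grids, plus a little randomness.
    return [random.choice(pair) + random.gauss(0, 0.05) for pair in zip(a, b)]

def evolve(pop_size=100, generations=200):
    # Start with grids full of random numbers.
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 5]  # "kill" the worst answers
        population = [breed(random.choice(survivors), random.choice(survivors))
                      for _ in range(pop_size)]
    return min(population, key=fitness)

best = evolve()
print([round(x, 1) for x in best])  # drifts close to TARGET
```

Worth noting as a design point: this evolutionary picture is a reasonable ELI5 analogy, but GPT-style models are in practice trained with gradient descent (adjusting one network’s weights directly) rather than by breeding populations of networks.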

Anonymous 0 Comments

ChatGPT was trained on a large snapshot of the internet as it existed up to 2021. You can search Google for the string “Is there a fundamental difference between…” Now, in several examples (all of them, actually), look at the text that comes after that phrase. Further, look at how that phrase ‘leads to’ the words ChatGPT and human. ChatGPT produces a summary of the text it finds related to the words you’re interested in, and returns that to you.

ChatGPT is Kim Peek meets Chauncey Gardner: it knows everything, and can relate and discuss it, but doesn’t know what it knows, or why.

Anonymous 0 Comments

It’s important to note that it’s dicey to talk about AI with the same words we apply to humans. “Learning” really isn’t the same thing for a computer program, but it’s analogous.

There are different kinds of AI, and ChatGPT is one type called a Large Language Model (LLM). Oversimplifying, it takes giant amounts of text, especially conversations, and builds a model of what an actual conversation looks like. It also draws on all that text like a database, kind of like Google does when it indexes pages to give you search results. Google doesn’t really “know” anything; it just has a good database and a good algorithm for returning results.
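The “model of what text looks like” idea can be made concrete with a toy next-word predictor: count which word tends to follow which, then generate by repeatedly picking a likely continuation. The corpus here is a few made-up sentences for illustration; real LLMs use neural networks over far longer contexts, but the next-word-prediction framing is the same.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "giant amounts of text".
corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat chased the dog .").split()

# Tally: after each word, how often does each next word appear?
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    # Greedily chain the most common continuation of each word.
    out = [word]
    for _ in range(length):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is fluent-looking but says nothing true or false about the world, which is exactly the distinction the answer above is drawing: the model captures what text looks like, not what’s accurate.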

It’s also important to note that these LLMs are a bit better at having a conversation that looks right than at providing accurate information. In fact, if the text they’re trained on is consistently wrong about something, they will be too. They also make up text that fits their model of what an answer should look like, and sometimes what they make up is nonsense.

It’s a cool advancement, and these models will keep getting refined and improved, but there are definite shortfalls.
