How does GPT solve logic and math problems?

My very limited understanding of GPT is that it’s basically a text generator. Why and how could it solve logic and math problems? Or is it just an emergent ability from LLM that nobody understands?

Answers

Anonymous 0 Comments

ChatGPT is a computer program that has been trained on a lot of data, including examples of logic and math problems. When you ask ChatGPT to solve a problem, it uses the patterns it has learned from that data to come up with a solution. It doesn’t actually “think” like a human does, but it can give you an answer based on the data it has learned from.
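
Here’s a toy sketch in Python of what “using learned patterns to come up with an answer” means at the smallest possible scale. This is nothing like the real GPT internals (which use a neural network with billions of parameters); it just makes the idea concrete:

```python
# Toy sketch of "predict the next word from learned patterns": count which
# word follows which in a tiny corpus, then predict the most common successor.
from collections import Counter, defaultdict

corpus = "two plus two is four . one plus two is three .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the training text.
    return follows[word].most_common(1)[0][0]

print(predict_next("plus"))  # 'two': the most common word after 'plus' here
```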

Anonymous 0 Comments

It doesn’t; that’s the trick.

It will do its best to convince you that it has, but it doesn’t work with precise data. Give the same math problem to ChatGPT and to Wolfram Alpha: unless the answer is already on the internet, ChatGPT is going to guess and be off by a fair amount, while Wolfram Alpha will actually solve it.

Similarly, most logic problems have been on the internet for a long time, so they were likely in the training set. But that isn’t ChatGPT solving the logic problem; it’s just rephrasing the Google search results.

Neural networks are probabilistic, not deterministic. Ask ChatGPT to divide some decimal numbers and you’ll get a different answer every time, because it’s just working out from its data set what the most probable next digit is, not actually executing division to get one firm answer.
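
You can see why sampling from probabilities gives varying output with a tiny made-up example. The probability table below is invented purely for illustration (a real model computes a fresh distribution for every token):

```python
# Sketch of sampling a "next digit" from a probability distribution vs.
# exact arithmetic. The probabilities are made up for illustration.
import random

# Hypothetical model belief about the next digit of 1/7 = 0.142857...
next_digit_probs = {"4": 0.6, "5": 0.2, "3": 0.1, "8": 0.1}

def sample_digit(probs):
    # Draw one digit at random, weighted by the model's probabilities.
    return random.choices(list(probs), weights=list(probs.values()))[0]

for _ in range(3):
    print("sampled digit:", sample_digit(next_digit_probs))  # can differ each run

print("exact division:", 1 / 7)  # deterministic: 0.14285714285714285 every time
```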

Anonymous 0 Comments

It doesn’t solve a logic or math problem any more than your brain solves the calculus behind the trajectory of a baseball.

LLMs are pattern-matching algorithms. They take a huge set of initial states and a huge set of ending states and build a statistical model they can use to extrapolate an answer from a given initial state. You feed a new initial state to the LLM and it looks for the ending state that is most likely.
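
Here is a minimal sketch of that “initial state → most likely ending state” idea, using made-up training data. A real LLM learns a smooth statistical model rather than a hard lookup table, but the flavour is similar:

```python
# Sketch of "initial state -> most likely ending state" as a bare statistical
# lookup. Training data is made up; a real LLM generalises far more smoothly.
from collections import Counter, defaultdict
from difflib import get_close_matches

training_pairs = [
    ("2+2=", "4"), ("2+2=", "4"), ("2+2=", "five"),  # web data is noisy
    ("3+3=", "6"), ("2+3=", "5"),
]

counts = defaultdict(Counter)
for prompt, ending in training_pairs:
    counts[prompt][ending] += 1
best_ending = {p: c.most_common(1)[0][0] for p, c in counts.items()}

def respond(prompt):
    # Answer with the most likely ending of the closest prompt seen in training.
    match = get_close_matches(prompt, best_ending, n=1, cutoff=0.3)
    return best_ending[match[0]] if match else "no idea"

print(respond("2+2="))  # '4': the statistically dominant ending
print(respond("4+4="))  # extrapolates from the nearest seen prompt (may be wrong!)
```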

All of these AI/ML abilities are emergent results of a relatively “simple” process. Modern AI/ML tools just run it at such a massive scale that it is impractical (not impossible) for a human to pick apart, because doing so means the tedious work of examining how millions or billions of data points each change the statistical model.

Anonymous 0 Comments

Because someone else has shared data related to the math question you’re asking somewhere on the web.

Math is fundamentally a process of arriving at a result using many different methods.

ChatGPT cannot do that unless the result is already out there on the web.

ChatGPT doesn’t understand math; it imitates understanding by making links in a semantic web.

In short, doing math is not the same as understanding math. Math is a beautiful subject, just too complex for a machine to truly understand with our current tech.

Anonymous 0 Comments

Yes, GPT is an advanced generator designed to predict the next “symbol” (token), but that doesn’t mean it can’t “learn” underlying principles.

It is certainly able to answer simple math problems it hasn’t seen before, so in some sense it has “figured out” the basic principles of, say, addition, because it has seen enough examples to generalise.

This doesn’t mean it will get everything correct, though. I watched a talk by one of the founders where he said it had “learned” to add any two 40-digit numbers together, but if you give it a 35-digit and a 40-digit number it will sometimes (confidently) get it wrong.
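
For intuition on how a pattern-learner could generalise addition at all: the task decomposes into one local rule (digit, digit, carry) repeated at every position, so a model that induces the rule isn’t limited to number lengths it has seen. A sketch of that rule; note the explicit zero-padding, which is exactly the alignment step that a mismatched 35-digit/40-digit pair could plausibly trip up:

```python
# Addition decomposes into one local rule repeated at every digit position:
# (digit, digit, carry-in) -> (digit out, carry-out). A learner that induces
# this rule isn't limited to the number lengths it has seen in training.
def add_digit_strings(a, b):
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad the shorter number with zeros
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry    # the same local rule, every position
        out.append(str(total % 10))
        carry = total // 10
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(add_digit_strings("9" * 40, "1"))                            # 1 then 40 zeros
print(add_digit_strings("9" * 40, "1") == str(int("9" * 40) + 1))  # True
```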

Of course, a human might know they are bad at that sort of thing (we make mistakes too), but knows enough to use a calculator. With the Wolfram Alpha and similar plug-ins, this is exactly what future versions will do.
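
A rough sketch of that plug-in pattern. The `model_generate` function below is a hypothetical stub standing in for a real LLM call, and the `CALC(...)` tool-call format is invented for illustration; the point is that arithmetic gets delegated to an exact evaluator instead of being guessed digit by digit:

```python
# Sketch of the calculator/plug-in pattern: the language model emits a tool
# call, and exact code computes the answer it would otherwise have to guess.
import ast, operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr):
    # Safely evaluate a plain arithmetic expression without eval().
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def model_generate(question):
    # Hypothetical stub: a real LLM would be prompted to emit tool calls
    # like "CALC(<expression>)" when it spots arithmetic it can't do reliably.
    return "CALC(123456789*987654321)"

def answer(question):
    # If the "model" emitted a tool call, run the exact evaluator
    # instead of letting it guess the digits.
    output = model_generate(question)
    if output.startswith("CALC(") and output.endswith(")"):
        return str(calc(output[5:-1]))
    return output

print(answer("What is 123456789 * 987654321?"))  # exact: 121932631112635269
```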

“ChatGPT can add two 40-digit numbers, so now it ‘mostly’ understands how to add, but if you try a 40-digit number plus a 35-digit number, sometimes it gets it wrong. So it’s still deriving how math works.” Greg Brockman, co-founder of #OpenAI, at #TED2023

EDIT: Added quote
