Why can’t we use the most powerful computers to solve the hardest math problems?



So there are currently tons of unsolved math problems such as The Collatz Conjecture, The Riemann Hypothesis, Goldbach’s conjecture and so on… I get that they are so hard that being good at mathematics isn’t enough, but why can’t computers solve them? Or at least solve some parts of the problem, getting a chunk of the work done for the mathematicians that work on them?
Will computers eventually be able to solve these problems as we develop better technology?

In: Mathematics

They aren’t problems that can be solved by just crunching numbers. For a mathematical proof to be valid, it has to be valid for all numbers and you cannot possibly check all numbers because there are an infinite number of them!

It is true that we have programs that can generate proofs for some kinds of problems, but they aren’t necessarily better than the best people (though they are certainly faster in some cases).

We do that when we can. A computer can grind through a huge set of calculations, and if the set of cases is finite and complete, it can brute-force the problem. This actually happened with proving that the minimum number of clues needed for a uniquely solvable sudoku is 17: the researchers reduced the search to a few billion essentially distinct grids, set a computer running for a couple of months, and checked them all.

But let’s say you’re trying to find a counterexample to the Riemann hypothesis. You plug in .1+.00000000000001i, then .1+.00000000000002i, and so on. That’s literally an infinite number of calculations, which will not do at all. You need a systematic approach that doesn’t go through every possible value. And it’s equally hard to write a program that can find such a solution itself, because not only do you have to prove that the solution is true, you also have to prove that your program validly connects that solution to the rest of mathematics.

First of all, many of the fastest supercomputers have been used to solve math problems; that’s what a computer does! But they are the problems that the people who own the computers are interested in solving. That may not be these “unsolved” math problems; they may be more interested in near-term real-world applications.

They may be working on wild stuff such as weather modeling, molecular modeling and analysis, nuclear weapons simulation, aerodynamics, cryptography, and more.

Answer: computers are really good at doing one thing over and over, with minor variations, like adding actual numbers. Some math problems are more about the structure of math than about adding up numbers. And “just adding up numbers” is no guarantee that a statement is true for all numbers, everywhere, out to infinity. For something like finding prime numbers, you can set a computer to dividing every number by all the numbers that came before it and eventually get an answer, but it might take longer than the lifetime of the universe, which isn’t helpful. Figuring out how to predict whether a number will be prime is much more useful, but it only takes one number you predicted to be prime turning out not to be, and you’re back to square one. And what if there’s a prime you didn’t predict? So theoretical math spends a lot of time in places where computing isn’t as helpful as it could be.
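The “dividing every number by all the numbers that came before it” approach from the answer above is just trial division, which takes only a few lines. A minimal sketch (function and variable names are mine, not from the thread):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: test every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# A computer can grind through small cases almost instantly...
primes_under_50 = [n for n in range(50) if is_prime(n)]
print(primes_under_50)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
# ...but no amount of checking finite cases proves a statement
# about *all* numbers, and the cost per check grows with n.
```

This settles primality for any one number you feed it; it says nothing about infinitely many numbers at once, which is the answer’s whole point.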

This is actually being done, and a number of famous math problems have been solved by computers. One approach that can work is to use computers to search for numbers that prove or disprove the conjecture. But that only helps if a deciding example exists within the range our computers can actually search, and many of these problems might require searching more numbers than is physically possible in our universe.

Another approach, which has been researched a bit, is to have computers do the logical steps involved in these long proofs. A computer can apply logical reasoning much faster than a human and can spot logical fallacies much faster. So you can program the rules and definitions of mathematics and logic into the computer and have it search until it finds a chain of reasoning that either proves or disproves a theorem. This technique has been used a few times and has produced extremely long chains of reasoning for the proofs it presents. A computer is able to write hundreds of pages of new lemmas and proofs that eventually arrive at a conclusion. The problem is that these are generally very trivial on their own, as the computer is not able to come up with completely new concepts and techniques. The algorithms keep getting better, but they still lack the critical thinking that humans have.
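The “program the rules of logic into the computer and let it search” idea can be illustrated in miniature with propositional logic, where the search space really is finite. This is a toy truth-table checker of my own, not how real theorem provers work, but it shows why exhaustive search succeeds here and fails for statements about all integers:

```python
from itertools import product

def is_tautology(formula, variables):
    """Exhaustively check a propositional formula over every truth
    assignment. This only terminates because the search space is
    finite (2**len(variables) cases) -- exactly the property that
    statements about all integers lack."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False, env  # counterexample found
    return True, None          # no assignment falsifies it

# (p -> q) or (q -> p) holds under every assignment:
ok, cex = is_tautology(
    lambda e: (not e["p"] or e["q"]) or (not e["q"] or e["p"]),
    ["p", "q"],
)
print(ok)  # True
```

Two variables mean four cases to check; forty variables already mean a trillion. Statements about the integers have no such ceiling at all.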

And although the techniques being developed for these computer-generated proofs can be very useful on their own (for example, being able to give a computer a program and ask it to prove that it works, or show how it can fail), they kind of miss the point of these math problems. The problems are usually not interesting on their own and have no real-world applications. However, in order to solve them we need to come up with new ideas and concepts, which themselves are very important. It is not the destination that is the goal but the friends we make along the way. For example, the discoveries about modular arithmetic, prime numbers and eventually elliptic curves made while trying to prove Fermat’s Last Theorem are the foundation of modern asymmetric encryption, which lets us communicate safely and securely today. A computer could not have come up with all these new concepts on its own.

You’re thinking of “Math problem” in similar terms as the problems you’d find in a high-school textbook: neat equations with a definite answer that can be found if you just rearrange and calculate them. Academic mathematics seldom actually works that way.

Mathematics asks abstract questions that often involve little to no actual calculation in the first place: questions where brute force simply doesn’t work. If I ask you “Is there a largest prime number, and if so, what is it?” you can’t just try to calculate the largest prime by brute force, because for all you know the answer is “no” and you’d never stop finding primes. But if you calculate as many as you can be bothered to find and then say “OK, I’ve found a lot, there must be an infinite number of them,” you could just as easily have stopped three primes short of the final one. We have examples of patterns that hold for millions of cases before abruptly failing.
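For what it’s worth, the “largest prime” question used as an example above is one that a short argument settles where no amount of computation could. This is Euclid’s classical proof, sketched in LaTeX (a standard result, not from this thread):

```latex
\begin{proof}[Euclid's argument, sketch]
Suppose the primes were finite: $p_1, p_2, \dots, p_k$.
Let $N = p_1 p_2 \cdots p_k + 1$. Dividing $N$ by any $p_i$
leaves remainder $1$, so no $p_i$ divides $N$. But $N > 1$
has some prime factor, which therefore lies outside our
list, contradicting the assumption that the list was
complete. Hence there is no largest prime.
\end{proof}
```

A few lines of reasoning cover infinitely many numbers at once; that is precisely what brute-force checking cannot do.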

Computers are good at one thing: following instructions really quickly. That’s great if you know what instructions to have it follow, but for things like the Collatz Conjecture we don’t even know that. We can’t write a program for a computer to find the answer because we don’t know what that program would look like or if it even exists.

Imagine you ask a computer: “Do purple swans exist?”

It can find the local park and check swans there – nope, they’re all white. But that’s not ALL swans.

It can check the local zoo, and find some white and black swans. But that’s still not all swans.

It can check every zoo on the planet, and every lake on the planet, but even if it doesn’t find purple swans, that doesn’t mean purple swans don’t exist. It just means it hasn’t found one yet.

A computer can check zoos a lot faster than a human can, but it still faces the same problem. Just because you check a billion lakes doesn’t mean you have checked ALL the locations where a purple swan could be. Maybe there’s a purple swan hiding on Mars?

A computer is powerful because it’s fast. It can do number operations really fast; imagine a human who can run between lakes much faster, or who can spot swans from much further away. But that doesn’t solve the problem of figuring out whether purple swans exist.

Some of the wildest math problems we would like to solve today involve irrational numbers: numbers that cannot be written as a ratio of two integers, so their decimal expansions never end or repeat, and they must be described by other means. Pi is a good example: “pi” is not so much a “number” as a representative of the relationship between a circle’s diameter and its circumference. A value rounded to some number of digits can, for almost all practical purposes, be substituted for “pi” to solve any number of problems. However, it’s not “pi”; it’s just “close enough to pi not to mess up the intended application.” Pi is not the only irrational number, merely a common and practical one that we all rely on many times every day, from the spinning parts of your car to the propagation of light and sound waves.

The problems we are talking about, however, are not looking for an exact answer rounded to so many digits. What we are looking for are *relationships* between different things that can be defined with an equation that will inevitably use irrational numbers. Often the question is not so much “how many pi are in this thing” as “why is pi in this thing,” or “why does this part of a phenomenon act so much like a circle, wave, etc.,” and just as in a dictionary, the definition shouldn’t include the word you’re trying to define.

Computers are great at crunching numbers. But even the best computers have only a finite number of transistors, great as that number may be, and a computer by itself does not reason; that kind of thinking is something humans still have the best grasp on. Yet even humans have a finite number of neurons, so we must find ways to describe the infinite in terms that finite beings can understand and use: to outsource our unanswered questions to representations, proofs and reasoning. In this way, the hardest math problems could be described as philosophy just as much as math. And computers don’t do philosophy.

Depends on the problem.

> The Collatz Conjecture, The Riemann Hypothesis, Goldbach’s conjecture

None of these can be solved by just crunching numbers. For example, let’s simply state the Collatz Conjecture (also known as the 3n+1 problem):

Start by picking a positive integer *n*. If *n* is even, divide it by 2, otherwise triple it and add 1. Does *n* **always** go to 1?

In this case, we *could* in theory use a computer to prove the Collatz Conjecture (that every number eventually collapses to 1) false by finding a number that breaks the pattern. The problem is that you can’t prove it true by sheer computation, because you’d have to check infinitely many numbers. Even though, using computers, we’ve verified that every number up to 2^68 does fall to 1, it’s still possible that the very next number breaks the pattern.
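The rule stated above takes only a few lines to check for any single starting number. A sketch (the function name is mine), with the catch spelled out in the comment:

```python
def collatz_steps(n: int) -> int:
    """Count the steps for n to reach 1 under the 3n+1 rule.
    Note: if the conjecture were false for this n, this loop
    would simply never terminate -- and we have no way to know
    that in advance."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- a famously slow starter
```

Running this for every number up to 2^68 is exactly the verification mentioned above; it rules nothing out for 2^68 + 1.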

Goldbach and Riemann are similar kinds of problems: a computer could prove them false by finding a single number that breaks the pattern, but it could never prove them true that way.
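For Goldbach (every even number greater than 2 is the sum of two primes), the counterexample search mentioned above is easy to sketch. A toy version of my own; a `None` result for some even n would disprove the conjecture, while millions of successes prove nothing about the next number:

```python
import math

def is_prime(n: int) -> bool:
    """Trial division, sufficient for a toy search."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def goldbach_witness(even_n: int):
    """Return a pair of primes (p, q) with p + q == even_n,
    or None if no such pair exists (which would disprove
    Goldbach's conjecture for this number)."""
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return p, even_n - p
    return None

print(goldbach_witness(100))  # (3, 97)
```

Searches along these lines have verified Goldbach far beyond 10^18; the conjecture itself remains open, which is exactly the asymmetry the answer describes.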

Computers are amazing machines. They can perform calculations many millions of times faster than any human could. If we want to prove that some mathematical statement is true for numbers from 0 to *n*, then a computer can certainly calculate up to *n* much more quickly. But if you want to check all the numbers to infinity – maybe even including numbers with an infinite number of decimals – it doesn’t matter how fast you compute that, it will still take an infinite amount of time. The brute force way doesn’t work.

Instead, “solving” for many maths problems means finding some way to prove it will be true no matter what number is fed into it. Doing this requires creative problem solving ability. Computers…compute, they follow instructions, they aren’t capable of being creative.

Computer-aided proofs are already real, but pretty basic, at least from what I’ve seen. If the human brain is a type of computer (I think it is), then yes, computers will eventually write math proofs. The more complicated problems you listed are about proofs (symbolic reasoning), not numerical techniques; you can’t plug and chug to solve them, except to verify finitely many cases, which most mathematicians won’t be satisfied with. What they want instead is a structural argument that proves something in the general case.