How does one “invent new maths”? Like Isaac Newton inventing Calculus, or John Napier logs. How does one answer a mathematical question that’s never been answered?





It’s about seeing things in a different way (and studying math of course). You want to solve a problem, or even just see things in a different way… so you define new things and apply them (as long as they are consistent with known math) to see if something interesting pops out.

For example, Newton was trying to describe the speed of a falling object. Since gravity changes the speed of a falling object, he *defined* the mathematical concept of the “rate of change” (derivative) of a function. He also found out that you can *apply* this concept if you want to compute the area under a curve.

You start with knowing pretty much all that’s known to that point, and then, when you have really thought about it long enough, and in depth enough, you suddenly have a light bulb light up over your head and you realize you came up with something new. (Well, except for the light bulb)

* Numbers are just representations of data. There’s 1 orange, or 2 oranges.
* Math is just the manipulation of that data. John has 2 oranges + 3 oranges
* Inventing new maths, is just figuring out **how to manipulate** that data to solve a **specific question, or type of question**. How many oranges does John have? 2+3=5!

You could say that the question has never been answered before, or you could also say the question has never been asked **properly** before. It’s never been broken down to smaller parts we can easily understand.

Like… how big is a triangle (for simplicity we’ll use a Right Angle Triangle)? How do I calculate all that area inside of it? Well I don’t know… but there must be some way to find out. So let’s see what makes up a triangle, break it down to simple parts we can already measure, parts we can use as data.

Well… we can measure the base of it, we can also measure the height, so we have that data. But now what?

Hm… well you know what’s interesting that **I** noticed? If I take another triangle of the exact same size, turn it, and put the two together, I’ve made a rectangle! That’s kind of neat, and it happens every time.

It’s easy to calculate the area of a rectangle… So to figure out how big a Right Angle Triangle is, I actually just need to make it a rectangle and chop it in half. I just invented new math!

So the math for how big a triangle is, is ((The **B**ase times the **H**eight) divided by **2**). So the question for how big a triangle is, more specifically is what is ((BxH)/2)?

You can now use this new math, we’ll call it **Remote-Waste’s Amazing Triangle Triumph**, and you can praise me as the smartest person ever.
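The triangle rule above fits in one line of code. A quick sketch (the function name is mine, not the commenter’s):

```python
def triangle_area(base, height):
    # Two copies of a right triangle fit together into a base-by-height
    # rectangle, so one triangle has half that rectangle's area.
    return (base * height) / 2

print(triangle_area(4, 3))  # → 6.0
```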

Mathematics is the process of coming up with new ideas, where each idea has a defined set of logical behaviours or rules, and then combining these ideas to see how the combined rules work out. Basically, inventing new mathematics is coming up with a new set of rules that you can apply to things, and seeing whether combining your new rules with everything already established gives any useful insight, or the ability to solve problems.

Take a really simple question: if I add two odd numbers together, is the result odd or even? It sounds like a hard problem, and you can’t settle it by trial and error, because there are infinitely many cases. What I can do is come up with a new way of describing odd and even numbers. I can say that any even number, a, can be represented as double some other number, n, so a=2n. Any odd number, b, is one more than an even number, so b=2n+1. I can then go back to the original question, and look at the result x of adding two odd numbers, 2n+1 and 2m+1:

x = (2n + 1) + (2m + 1)

I can then apply normal rules to the right hand side, and say

x = 2n + 2m + 2

and take a factor of two out:

x = 2(n + m + 1)

Because n, m and 1 are all integers, their sum n + m + 1 is some other integer, p, so

x = 2p

Because my original definition of an even number applies for any integer, I can see that x has the form of an even number, 2p, and conclude that the sum of two odd numbers is even, for any two odd numbers.
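The parity argument can be sketched in a few lines of Python (my own illustration; the original answer has no code):

```python
def odd(n):
    # an odd number written in the form 2n + 1
    return 2 * n + 1

def sum_of_two_odds(n, m):
    # (2n + 1) + (2m + 1) = 2n + 2m + 2 = 2(n + m + 1), which is even
    return odd(n) + odd(m)

# spot-check the conclusion over many cases
assert all(sum_of_two_odds(n, m) % 2 == 0
           for n in range(100) for m in range(100))
```

The `assert` is only a spot check, of course; the algebra above is what covers *all* odd numbers at once.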

This is a really simple result, and not one I came up with myself. The point is that I used a new definition of even and odd numbers that didn’t exist at the start of the question. The definition of an even number as 2n and an odd one as 2n+1 is something that someone invented, and by using that invention, it became possible to actually answer the question for all odd numbers.

This is the basic idea behind inventing things in mathematics. You come up with a new way to describe something, be it a kind of number, a kind of process, function, operator, or whatever, and then work with that new thing you have created using all the existing standard rules for mathematics to see if you can find a path to answering the question.

I am aware that algebra is a bit more advanced than your average 5 year old, but I hope this demonstrates the idea.

It’s basically teaching a technique. For example, in a world where people only knew addition, I could teach them multiplication: instead of doing 5+5+5, just do 5*3. This is much easier for more advanced calculations. It might not even be useful right away, but maybe in a few years some mathematician will find a use for it.
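The repeated-addition idea is easy to make concrete (a toy sketch of my own):

```python
def multiply(a, times):
    # multiplication defined purely as repeated addition:
    # a * times = a + a + ... + a  (times copies)
    total = 0
    for _ in range(times):
        total += a
    return total

print(multiply(5, 3))  # → 15, same as 5 + 5 + 5
```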

New mathematics comes about when someone has a new insight into a problem. A good example is topology. In the 18th century someone posed a question about the Prussian city of Königsberg. The city is on the river Pregel and it had seven bridges joining the two banks via a couple of islands. The question was, “Is there a path which goes over each bridge once and only once?”.

Nobody could devise such a route, but nobody could show that it was completely impossible. A mathematician named Leonhard Euler decided to try the problem. Euler’s great breakthrough was to realise that none of the usual measures made any difference to the problem: not the length of the bridges, or their distance apart, or their angles to each other. He realised that the key thing is how many paths into and out of each land mass there are. If you want to cross each bridge only once, then at every land mass you pass through there must be as many paths out as there are in. Another way of saying this is that the total of paths in and out of each land mass must be an even number (divisible by 2). Using this concept he showed that the Königsberg question had no solution – it was impossible. But he was able to go further and come up with a general rule for such problems, based on how many land masses have an even number of bridges touching them.

Euler had distilled a certain problem into new abstract concepts: edges (the bridges), nodes (land masses) and the degrees of the nodes (paths in + paths out). This new view of the world turned out to be a very powerful tool for many different problems and developed into whole new branches of maths such as graph theory and topology.
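Euler’s degree argument is simple enough to check in code. A minimal sketch (the labels A–D for the four land masses are my own; A and B are the river banks, C and D the islands):

```python
from collections import Counter

# The seven bridges of Königsberg as edges between four land masses.
bridges = [("A", "C"), ("A", "C"), ("A", "D"),
           ("B", "C"), ("B", "C"), ("B", "D"),
           ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's rule: a walk crossing every bridge exactly once exists only if
# the number of land masses with an odd bridge-count is 0 or 2.
odd_nodes = [n for n, d in degree.items() if d % 2 == 1]
has_euler_path = len(odd_nodes) in (0, 2)

print(dict(degree))    # every land mass has an odd count here
print(has_euler_path)  # → False: the walk is impossible
```

All four land masses have odd degree (3, 3, 5 and 3), so the walk can’t exist, which is exactly Euler’s conclusion.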

See [Wikipedia](https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg) for a better explanation of the problem and solution.

Here is a story from the 20th century. Hilbert, optimistic as usual, asked people to find an algorithm that could solve all logical problems. This was an interesting question that people wanted an answer to.

The issue is, that sounds impossible. Unfortunately, there was no way to prove it impossible, because the concept of an algorithm had not been invented yet. That doesn’t mean people didn’t know about algorithms at all; rather, the idea of an algorithm was too vague, the same way words in ordinary language are vague. If someone asks you “what is the precise definition of ‘porn’?” you might just answer “I don’t know, but I know it when I see it” (an actual quote from a judge, by the way; it’s a famous case).

For many other problems in the past, if people wanted an algorithm, you just gave one, and everyone could check and agree that it was an algorithm. But to show that an algorithm *doesn’t* exist, you need to know exactly what an algorithm is. So Hilbert’s question was not a complete question; you have to fill in the definition of an algorithm to make it one.

Turing and Church did this. They invented the concept of computability to answer the question of what an algorithm is. What they invented wasn’t a proof: they made their intuitive idea of an algorithm concise and precise, and offered it to other people. Whether people agree or not, the key thing is that there is at least *a* definition. With this, Turing and Church had finally formed a new question that nobody had asked before. Then they showed that the question has no answer: no such algorithm exists. This was the start of computer science.

It’s more like answering a question that no one asked before. Geniuses are on another level: they ask questions no one else would ever think to ask, and since they’re the only people intelligent enough to ask, they’re the only ones intelligent enough to answer.

They ask a question and answer it themselves.

Usually just looking at a problem from a different angle, or taking a path nobody explored before.

“I know you can only take the square root of a positive number but what would happen if we create a hypothetical number, let’s call it ‘i’, and claim that squaring it equals -1”.
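Python happens to ship with that very invention built in: `1j` is its name for i (a quick demo of my own, not part of the comment):

```python
import cmath

# Python's complex type: 1j is the imaginary unit i
i = 1j
print(i ** 2)          # (-1+0j): squaring i gives -1

# ...and square roots of negative numbers suddenly make sense:
print(cmath.sqrt(-1))  # 1j
```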

Discover isn’t really the word. Maybe “invent.” They basically came up with extensions to the language of math so that they could describe new problems.

Let’s say that maths is just a big Lego set.
Now a friend of yours wants to build a robo-mech from the parts, but it turns out there are no suitable parts for it.
So your friend modifies the existing parts and uses them to build the robo-mech.

We often think that math is a bunch of rules and we can only do the math that the rules dictate. But it develops the other way around: there is some kind of math that we want to be able to do, and we just need to create rules that allow us to do that math.

Newton wanted to do a specific kind of math related to physics, where he needed to divide things into infinitely small chunks. You can’t just do that without well-thought-out rules, or it goes crazy and incomprehensible. Those rules were not fully developed at the time (though there was already a lot of work done on calculus before Newton), so Newton worked out the rules needed to do what he wanted to do.

Largely it is about identifying patterns, and connecting what you see to what we already know.

Sometimes what you see doesn’t match up with what you know, so you analyze it. You study it. Sometimes this allows you to extend what we know to include the new information, and other times it allows you to create something entirely new.

When we get into the real abstract or theoretical stuff, it may be things we can’t observe. In this case, maybe you ask “what if this changed?” or some similar question, and then you explore how that will connect to what we know.

I know that wasn’t really an ELI5 answer, and was very vague, but it is a difficult concept to explain.

Also, Newton is not the sole inventor of calculus: credit is also due to Gottfried Leibniz, who discovered it independently at around the same time.

First, if you’ve ever solved a mathematical problem that was not one that you were specifically *taught* how to solve, then you experienced a very small taste of what mathematical research is like. Because even if that particular question has been answered before by other people, it has never been answered before *by you*, so for you it is a discovery, a small-scale version of discovering “new” math.

You might think that is nothing like creating calculus, but no mathematical discovery comes out of thin air. Studying math gives you access to a variety of tools and ideas, and when you hit a problem that your tools/ideas are unable to solve, you modify them slightly to create new ones. Big mathematical discoveries are just particularly large or creative modifications. In the paragraph above, whatever you did to solve the “new” problem can also be used to solve other similar problems. If you were to try to “formalize” whatever trick or idea you used, that’s where new math comes from.

Since you mentioned calculus, I’ll try to use it as an example (though obviously this will have to go a bit beyond age 5). Since the time of the ancient Greek Archimedes, we’ve been able to think of the area A inside a circle as follows: It has to be larger than the area L of any polygon inside the circle and smaller than the area U of any polygon that encloses the circle. That is, L < A < U, so L is a lower bound and U is an upper bound. By calculating areas of polygons with more and more sides, you can get L and U to be closer together, which means you can effectively compute A to any level of precision you want. This leads to a proof for why the standard formula for the area of a circle is correct.

This idea makes sense for any “curvy” shape, not just circles. One thing that Newton did was *formalize* these ideas and use the formalization to help him calculate the area A for an astounding variety of shapes. In particular, he came up with the idea that A is the “limit” of those lower estimates L as those estimates become more accurate. This means that computing areas of curvy shapes boils down to being able to compute these limits, which led him to develop lots of tricks for computing such limits. In retrospect, it’s such a natural outgrowth of ancient Greek math that imho the surprising thing should be that it took so *long* for calculus to be invented.
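You can watch the squeeze happen numerically. A quick sketch with inscribed regular polygons in a unit circle (the formula and the specific polygon sizes are mine, not the commenter’s):

```python
import math

# Area of a regular n-gon inscribed in a unit circle: (n/2) * sin(2*pi/n).
# As n grows, these lower estimates L climb toward the circle's true area, pi.
for n in (6, 24, 96, 384):
    L = (n / 2) * math.sin(2 * math.pi / n)
    print(n, L)
```

Archimedes famously worked with 96-sided polygons to pin down pi between 3 10/71 and 3 1/7.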

Just rip off some old math and put your name on it. Right Pythogoras?

Was mathematics invented or discovered?

Another angle most people don’t consider is that the very basis of mathematics is that if you have A) a new idea, and B) a body of already-accepted ideas, then what you need to do is 1) assume your new idea is correct and then 2) find where it either contradicts itself or contradicts what you already know to be true. That is how you might say somebody “invents” new math (sorry, not British). If your new idea violates what you already know, then that is generally a good sign that it *might* be incorrect. But if you can invent a theory of Calculus, and 1 + 1 still equals 2, then you might be on to something.

For example, something most people don’t know is that the reason division by zero is considered an invalid mathematical operation is that you can use it to get the result that any number equals any other number. The easiest place to see this is the graph of 1/x, where as x approaches 0, y seems to head to ∞ AND -∞ at the same time. Thus ∞=-∞. Clearly, this is infinitely incorrect. In fact, you can algebraically manipulate a division by zero to say anything you want, like 1=2. 1=2 doesn’t line up with what we already know to be true, so clearly there is something wrong with our new idea (unless there is something wrong with the old ideas, though the cumulative effect of this whole process is that we tend to suss out the right ones).
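You can watch the 1/x blow-up numerically (this only illustrates the graph part of the argument, not the algebraic 1=2 trick):

```python
# Approach x = 0 from the right and from the left:
# 1/x races toward +infinity on one side and -infinity on the other.
for x in (0.1, 0.001, 0.00001):
    print(f"1/{x} = {1 / x:g},   1/{-x} = {1 / -x:g}")

# Python itself treats the operation as invalid:
try:
    1 / 0
except ZeroDivisionError as err:
    print("1/0 raises:", err)
```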

Here’s one way you might discover logarithms on your own. (From now on, log = log base 10.)

Suppose you’re back in the olden days before calculators and computers and your job sometimes involves multiplying numbers with a lot of digits. One day your boss asks you to find the area of a rectangle with dimensions 15432.54 by 827361.37. The area is 15432.54 * 827361.37. Calculators haven’t been invented yet, and doing this with pencil and paper would take ages. Is there a way to simplify this?

You could make a multiplication table, like the one you memorized in grade school. Then you could just look up the product in the table! Except that such a multiplication table would be HUGE if you wanted one which would be actually useful. Is there a way to shrink the amount of information required to make this reasonable?

Then you remember a neat fact about exponents: b^(x+y) = b^x b^y. In a way, exponents turn addition (an easy operation) into multiplication (a very hard operation). What if we could do the opposite? What if we could turn multiplication into addition?

Enter logarithms, which are just reverse exponents. These have the neat property that log(xy) = log(x) + log(y). i.e. they turn multiplication into addition.

So let’s use this to find 15432.54 * 827361.37.

First, we will find log(15432.54 * 827361.37). Then at the end we can take 10 to the power of whatever the answer is to get the final result.

In scientific notation, this becomes log(1.543254 * 10^(4) * 8.2736137 * 10^(5)).

Using what we found earlier, we know that this is equal to log(1.543254) + log(10^4) + log(8.2736137) + log(10^5).

This is log base 10, so we now have log(1.543254) + 4 + log(8.2736137) + 5.

We can look up those logarithms in a logarithm look-up table, which is what people used back in the days before computers. By using logs and scientific notation, we only need to know the values of logs for numbers between 1 and 10. Much more feasible than a giant multiplication table.

If you calculate this, you would find that log(15432.54 * 827361.37) ≈ 10.10613. Therefore, 15432.54 * 827361.37 ≈ 10^(10.10613) = 10^(0.10613) * 10^(10), which you can solve using a different lookup table (an antilog table).
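The whole procedure can be replayed with `math.log10` standing in for the printed log table (my reconstruction of the worked example above):

```python
import math

a, b = 15432.54, 827361.37

# Step 1: logs turn the hard multiplication into an easy addition.
# With a real log table you would look up log(1.543254) and
# log(8.2736137) and add the exponents 4 and 5 by hand.
log_product = math.log10(a) + math.log10(b)

# Step 2: undo the log (the job of the antilog table).
product = 10 ** log_product

print(log_product)     # ≈ 10.10613..., matching the answer above
print(product, a * b)  # the two agree (up to float rounding)
```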

Y’all remember when the actor Terrence Howard “invented” his own math called Terryology? What a weird news story that was.