What do mathematicians mean when they say that the calculus that Newton and Leibniz invented was not rigorous?

Was it that Newton and Leibniz didn’t really know what they were doing? I presume that they had to create proofs for how their integrals and derivatives worked.

Were their calculations incorrect because they weren’t rigorous?

3 Answers

Anonymous 0 Comments

Newton and Leibniz knew what they were doing, but they were making some assumptions about how real numbers work.

For example, consider the following statement:

* Given any number x > 0, there exists some number y such that 0 < y < x.

In more everyday terms, this statement says “there is no smallest positive number”. This statement is true for real numbers and for rational numbers, but it isn’t true for integers (there is no integer y such that 0 < y < 1).
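
To make that concrete, here is a quick one-line witness (a sketch, not a formal proof): just halve x. Halving keeps you inside the reals or the rationals, but the same move fails for the integers.

```latex
% "No smallest positive number": halving any positive x yields a strictly
% smaller positive number, so no positive real or rational can be the smallest.
\forall x > 0:\quad 0 < \frac{x}{2} < x
% Over the integers the trick breaks down: for x = 1, the candidate 1/2 is not
% an integer, and no integer y satisfies 0 < y < 1.
```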

Now consider the following, trickier statement:

* Consider an infinite sequence of numbers x1, x2, x3, … with the following two properties. First, each number is larger than the previous one. And second, there is some single fixed number that every term of the sequence stays below. Then the sequence necessarily approaches a limit.

(For you real analysis students out there, this is, essentially, the [monotone convergence theorem](https://en.wikipedia.org/wiki/Monotone_convergence_theorem).)

In everyday-ish terms, this says “a sequence that moves in only one direction can have one of two behaviors. Either it runs off to infinity, or it approaches some number more and more closely.” For example, the sequence 3, 3.1, 3.14, 3.141, 3.1415, 3.14159, 3.141592… approaches the value pi, while the sequence 1, 2, 4, 8, 16 … runs off to infinity.

This statement is true for real numbers, but it is **not** true for rational numbers. The sequence 3, 3.1, 3.14… described above is a sequence of rational numbers that has the two properties given (every number is bigger than the last, and all the numbers are less than, say, 4), but it does *not* approach a rational number as a limit.
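
Here is a small Python sketch of exactly that sequence (just an illustration, with the first digits of pi hard-coded): every term is an honest fraction, each term is larger than the last, all of them stay below 4, and yet the value they close in on is irrational.

```python
# Illustrative sketch: the truncated decimal expansions of pi form an
# increasing, bounded sequence of *rational* numbers whose limit (pi)
# is not itself rational.
from fractions import Fraction
import math

digits = "31415926535"  # first digits of pi, hard-coded for the illustration
terms = [Fraction(int(digits[: k + 1]), 10**k) for k in range(len(digits))]

for t in terms:
    print(t, float(t))   # 3, 31/10, 157/50, ... each term is an exact fraction

print(math.pi)           # the value the terms approach is not a rational number
```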

Newton and Leibniz were implicitly assuming statements like this one in their development of calculus. But since they didn’t have a rigorous definition of the real numbers, they had no rigorous way to prove those statements, so their calculus rested on the assumption that they were true. It turns out that they are true in a proper formalization of the real numbers, but Newton and Leibniz could not have known that, because no such formalization existed in their time.

Anonymous 0 Comments

**TL;DR** It’s not that Newton was “wrong” *per se*, but that the playground in which he was “right” was too small and we needed new rules before we could expand the playground.

For the assumptions under which they were working, their stuff works just fine. And, in their time, there was no real reason to search for additional rigor: these new tools had just arrived, and there was a lot of work that could be done with them without running into trouble. It took about 200 years, and many great mathematicians using this “unrigorous” version of calculus, before the exceptions and pathologies built up to the point of needing better foundations for it.

For instance, there was just a general assumption that if a function is continuous then – outside of isolated points – it should also be differentiable. For the most part this is true for things like orbits of planets and trajectories of ballistics. But it’s just not a true statement thanks to the [Weierstrass Function](https://en.wikipedia.org/wiki/Weierstrass_function) which is continuous everywhere but differentiable nowhere.
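
Here is a rough numerical sketch of that (my own toy example, truncating the infinite series after a handful of terms, so it only suggests the behavior rather than proving it): the “slope estimates” you get from smaller and smaller steps never settle down toward a single number.

```python
# Suggestive sketch only: a truncated Weierstrass-type sum
#   W_N(x) = sum_{n=0}^{N-1} a**n * cos(b**n * pi * x)
# with a = 0.5, b = 13 (b odd, a*b > 1 + 3*pi/2, Weierstrass's original condition).
# The full infinite series is continuous everywhere but differentiable nowhere;
# numerically, its difference quotients refuse to home in on a single slope.
# (The truncated sum itself is smooth, so for far smaller h it would eventually
# settle; adding more terms pushes that point out indefinitely.)
import math

A, B, N = 0.5, 13, 12

def W(x: float) -> float:
    return sum(A**n * math.cos(B**n * math.pi * x) for n in range(N))

x0 = 0.3
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(h, (W(x0 + h) - W(x0)) / h)   # the estimates swing wildly from one h to the next
```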

But, eventually, we began to use Calculus in more sophisticated settings and the pathologies were too much for this broader use. This is when limits really began to make a difference because they are able to rigorously formulate a lot of the more wishy-washy parts of Newtonian/Leibnizian Calculus. This allows us to articulate more precisely what we can do and when we can do it. They’re also insanely practical for approximations.
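
For reference, the modern definition doing that work is remarkably short: a sequence converges to a limit L exactly when its terms eventually stay within any prescribed tolerance of L. That single quantified sentence is what replaced the hand-waving about quantities getting “closer and closer”.

```latex
% The epsilon-N definition of convergence of a sequence (x_n) to a limit L:
\lim_{n \to \infty} x_n = L
\;\iff\;
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N : \; |x_n - L| < \varepsilon
```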

As far as we know, limit calculus is pretty self consistent but it isn’t the end of the story. We have much more broadly expanded the context in which we want to do “Calculus stuff” to places where classical formulations of limits don’t make sense. We want to use that intuition, but it would be unrigorous to apply those rules. So we create even more nuanced formulations of stuff like continuity so that we can do Calculus in places where it shouldn’t make sense. This is not unlike expanding from simple Newton-esque functions to more general differentiable functions that we need limits for.

Anonymous 0 Comments

In short, Newton and Leibniz invented calculus for real numbers, but they didn’t have an actual definition of what a real number is. Unlike whole numbers and fractions, defining the real numbers rigorously is tricky, and it took until the late 19th century to get it right. Another issue is that Newton and Leibniz relied on ill-defined “infinitely small” numbers, which don’t actually exist in the standard real number system. Those were replaced by the theory of limits in the 19th century.
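
For example, where Leibniz would divide an “infinitely small” change in y by an “infinitely small” change in x, the modern approach defines the derivative as a limit of perfectly ordinary finite ratios:

```latex
% The derivative as a limit of finite difference quotients, replacing the
% ill-defined ratio of "infinitely small" quantities dy/dx:
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```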
