Limits in Mathematics?

I only ever encountered the limit while learning differentiation by first principles in calculus. I understood all the theory behind first principles, but we were never told what happens to the limit h -> 0. Our teacher just said that it goes away after we divide by h, and that’s all I got.

I understand that the limit h -> 0 represents the gap between x and (x + h) getting smaller and smaller. But how does this gap disappear at the end? From searching online I’ve learned that limits are not *equality*, h never *equals* zero, it just gets closer and closer to it. But then why does it equal zero at the end? How is h -> 0 no longer intrinsic to f'(x)? This might be a dumb question but it has stumped me for ages now.

This is a great question, and one that prompted much thought during the 18th and 19th centuries. Essentially you want to know: why should we believe in limits? It turns out that the real numbers are a special type of space called a complete metric space, which means that whenever there is a Cauchy sequence (a sequence whose terms get arbitrarily close together), the limit of that sequence exists as part of the space. When you talk about the limit of a sequence you’re not talking about any given point in the sequence – you’re talking about the point that you get arbitrarily close to. And since the reals are a complete metric space, that limit must itself be a real number.

In other words, the limit is not a process, it’s a target. A sequence that converges to the limit is not the limit itself.
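
To see completeness in action, here’s a small Python sketch (my own illustration; the Newton iteration for sqrt(2) is just a convenient example of a Cauchy sequence): no individual term equals the limit, yet the gaps between consecutive terms shrink to nothing, and completeness of the reals guarantees the limit exists as a real number.

```python
# Sketch: Newton's iteration x_{n+1} = (x_n + 2/x_n)/2 produces a Cauchy
# sequence -- the gaps between consecutive terms shrink toward 0 -- and
# completeness of the reals guarantees its limit (sqrt(2)) is a real number.

x = 1.0
for n in range(6):
    nxt = (x + 2 / x) / 2
    print(f"x_{n} = {x:.12f}   |x_{n + 1} - x_{n}| = {abs(nxt - x):.3e}")
    x = nxt

print("limit target:", x, "   sqrt(2) =", 2 ** 0.5)
```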

So when you look at a limit, if you were to “read it aloud” you’d say, for example, “The limit as x approaches 0 of….”

The key being “approaches”. The limit equals a value because the limit is telling you the value that the function is approaching. The function isn’t necessarily equal to that value at that point – it’s approaching it.

There are two ways to answer: one is handwavy, the other is how it’s done formally in maths.

Handwavy: first you need to convince yourself that the limit of an infinite sequence of steps can actually exist.

Consider the number 2. Well, it’s real, right? You can count it; there it is. Now consider the sequence 1.9, 1.99, 1.999, 1.9999, … This sequence has nth term a_n = 1 + sum_{k=1}^{n} 9×10^(-k).

The infinite sequence has a limit: it is 2. As n increases, the terms get closer and closer to 2, so we say the limit as n tends to infinity is 2. Asking what the limit of the chord’s gradient is as h gets smaller and smaller is similar: the chord has a gradient for each h, and as h -> 0 it approaches the tangent’s gradient in the same way that 1.99…9 (with n nines) approaches 2.
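
If it helps, here’s a quick numerical sketch (my own addition; f(x) = x^2 at x = 1 is just a hypothetical example) showing both kinds of “approaching” side by side:

```python
# Sketch: the sequence a_n nears 2, and the chord gradient of f(x) = x^2
# at x = 1 nears the tangent gradient 2 as h -> 0 -- the same phenomenon.

def a(n):
    # nth term 1 + sum_{k=1}^{n} 9*10^(-k), i.e. 1.99...9 with n nines
    return 1 + sum(9 * 10 ** -k for k in range(1, n + 1))

def chord_gradient(x, h):
    f = lambda t: t ** 2
    return (f(x + h) - f(x)) / h

for n in range(1, 6):
    h = 10.0 ** -n
    print(f"a_{n} = {a(n):<12.10f}   chord gradient (h = {h}) = {chord_gradient(1, h):.10f}")
```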

The reason you can effectively make h zero at the end of differentiation from first principles is that at the start (with h = 0) you have 0/0, which is an indeterminate form. At the end of the process you have the gradient function plus an expression multiplied by a power of h, and that can be evaluated when h is zero.

For instance, x^3 from first principles ends with 3x^2 + 3xh + h^2. If h is zero here, the last two terms disappear and you’re left with the gradient function 3x^2.
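
If you want to see that algebra mirrored symbolically, here’s a short sketch assuming sympy is available (the setup and variable names are my own, not part of the method itself):

```python
# Sketch: the raw quotient is 0/0 at h = 0, but after cancelling the
# common factor h, the result can simply be evaluated at h = 0.

from sympy import symbols, cancel, limit

x, h = symbols("x h")
quotient = ((x + h) ** 3 - x ** 3) / h   # 0/0 if you substitute h = 0 directly

simplified = cancel(quotient)            # the h in the denominator cancels
print(simplified)                        # 3*x**2 + 3*x*h + h**2 (up to term order)
print(simplified.subs(h, 0))             # 3*x**2 -- substitution is now fine
print(limit(quotient, h, 0))             # 3*x**2 -- sympy's limit agrees
```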

Now, the above is handwavy. The proper way is via real analysis, which provides rigorous epsilon-delta definitions of limits and so on. If you follow that logic it is clear why the limit is attained, but you do lose some intuition.
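
To give a taste of the formal version, here’s a toy Python spot-check (my own construction, and a numerical check rather than a proof) of what the epsilon-delta definition asserts for lim(h->0) 2x + h = 2x, where delta = eps happens to work:

```python
# Toy check of the epsilon-delta claim lim_{h->0} (2x + h) = 2x:
# for every eps > 0 there is a delta > 0 such that 0 < |h| < delta
# implies |(2x + h) - 2x| < eps. Here delta = eps works, since
# |(2x + h) - 2x| = |h|. (Helper name and grid check are my own.)

def implication_holds(x, eps, delta, samples=1000):
    hs = [delta * k / samples for k in range(1, samples)]  # 0 < h < delta
    hs += [-h for h in hs]                                 # and -delta < h < 0
    return all(abs((2 * x + h) - 2 * x) < eps for h in hs)

eps = 1e-3
print(implication_holds(x=5.0, eps=eps, delta=eps))  # True
```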

> I understand that the limit h -> 0 represents the gap between x and (x + h) getting smaller and smaller. But how does this gap disappear at the end?

It doesn’t. Let’s work an example and compute the derivative of f(x) = x^2 from first principles.

-----

By definition, f'(x) = lim(h->0) [f(x+h) – f(x)] / h

Since our function is f(x) = x^2, we can write this out explicitly: lim(h->0) (x+h)^2 – x^2 / h, where I’m writing without parens with the understanding that everything left of the slash is the numerator.

Expand out (x+h)^2 to get x^2 + 2xh + h^2. Then the numerator is x^2 + 2xh + h^2 – x^2, and the x^2’s cancel. So our numerator is 2xh + h^2, all over h.

Since we are considering values of h near, but not equal to, zero, h != 0. And that means it’s valid to cancel the common factor of *h* on both top and bottom. That gets us 2x + h.

So, what’s lim(h->0) 2x + h? Well, this is a continuous function in *h*, so **now** we are allowed to plug in h = 0 (because by definition, the limit of a continuous function at a point is just the value of the function there). We couldn’t do this before because it would have resulted in 0/0, an indeterminate form that doesn’t give us anything useful.

h is never actually 0 here. But the continuity of the function we got at the end makes the value of the limit equal to what the value *would be* if h *were* 0 (and if the original expression didn’t have a removable discontinuity at 0).
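
A quick numerical sketch of that distinction (my own; x = 3 is an arbitrary choice):

```python
# Sketch: the raw quotient ((x+h)^2 - x^2)/h is undefined at h = 0 (0/0),
# while the cancelled form 2x + h is continuous, so its value AT h = 0
# equals the limit, 2x = 6.

def raw(x, h):
    return ((x + h) ** 2 - x ** 2) / h

def cancelled(x, h):
    return 2 * x + h

for h in [0.1, 0.001, 1e-6]:
    print(f"h = {h:<8}  raw = {raw(3, h):<10}  cancelled = {cancelled(3, h)}")

print("cancelled form at h = 0:", cancelled(3, 0))  # 6 -- the limit
try:
    raw(3, 0)
except ZeroDivisionError:
    print("raw quotient at h = 0: undefined (0/0)")
```

The two expressions agree for every h != 0, which is exactly why they have the same limit; they differ only in whether h = 0 is in their domain.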

While this is not the actual definition of a limit, in most everyday cases of using limits with functions, you’re basically asking: “What is the right value to fill in for this function at this point so that it becomes continuous (i.e. so that its graph becomes connected)? And is there even such a value?”

Suppose you’re computing something like lim_(x→1) x+1. When you just plug in x=1 to get 2, the often unstated logic behind this is roughly as follows: x+1 is already a continuous function, and it already has a value at x=1, which is 2. Hence, the right fill-in to make it continuous is just the original value, 2.

Suppose you’re computing something like lim_(x→1) (x²-1)/(x-1). Now you originally have no value at x=1, because that gives you 0/0. But you notice that everywhere other than x=1, (x²-1)/(x-1) and x+1 are in fact exactly the same. So they should certainly approach the same value at x=1. And then the substitution of x=1 into x+1 is justified with the same logic as above: x+1 is already a continuous function, so the correct “fill-in” value it approaches as x approaches 1 is just its original value at x=1.
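
Here’s a tiny numerical sketch of that “hole-filling” (my own illustration):

```python
# Sketch: (x^2 - 1)/(x - 1) and x + 1 agree everywhere except x = 1,
# where the quotient is 0/0. The limit "fills in" the hole with the
# value the continuous function x + 1 already has there: 2.

def g(x):
    return (x ** 2 - 1) / (x - 1)

for x in [0.9, 0.99, 1.01, 1.001]:
    print(f"x = {x:<6}  (x^2-1)/(x-1) = {g(x):.6f}   x + 1 = {x + 1:.6f}")

try:
    g(1)
except ZeroDivisionError:
    print("at x = 1 the quotient is 0/0; the fill-in value is the limit, 2")
```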

(Again: if you start getting technical, the above explanation is not completely correct. It assumes, for instance, that your function is nice and continuous near the point you’re taking a limit at, and it applies somewhat circular logic by explaining limits through continuity while continuity is formally defined through limits. But it should at least somewhat demystify the precise logic behind these kinds of computational manipulations that are typically done with limits.)