Limits in Mathematics?

I only ever encountered the limit while learning differentiation from first principles in calculus. I understood all the theory behind first principles, but we were never told what happens to the limit h -> 0. Our teacher just said that it goes away after we divide by h, and that's all I got.

I understand that the limit h -> 0 represents the gap between x and (x + h) getting smaller and smaller. But how does this gap disappear at the end? From searching online I've learned that limits are not about *equality*: h never *equals* zero, it just gets closer and closer to it. But then why does it equal zero at the end? How is h -> 0 no longer intrinsic to f'(x)? This might be a dumb question but it has stumped me for ages now.

12 Answers

Anonymous 0 Comments

This is a great question and one that prompted much thought during the 18th and 19th centuries. Essentially you want to know: why should we believe in limits? It turns out that the real numbers are a special type of space called a complete metric space, which means that if there is a Cauchy sequence (a sequence whose terms get arbitrarily close together), then the limit of the sequence exists as part of the space. When you talk about the limit of a sequence you're not talking about any given point in the sequence – you're talking about the point that you get arbitrarily close to. And since it's a complete metric space, that limit must also be a real number.

In other words, the limit is not a process, it’s a target. A sequence that converges to the limit is not the limit itself.
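
If it helps to make "Cauchy sequence" concrete, here's a quick Python sketch of my own (the choice of sqrt(2) and the particular iteration are just illustrations, not part of the answer above). The iterates are all rational, consecutive terms get arbitrarily close together, and completeness of the reals is what guarantees that the number they close in on exists:

```python
# A rough numeric illustration, not a proof: the iterates
# x_{n+1} = (x_n + 2/x_n) / 2 are all rational, and consecutive terms
# get arbitrarily close together (a Cauchy sequence). The number they
# close in on, sqrt(2), exists because the reals are complete.
from fractions import Fraction

x = Fraction(2)
for n in range(6):
    nxt = (x + 2 / x) / 2
    print(f"step {n}: x = {float(x):.12f}, gap to next term = {float(abs(nxt - x)):.3e}")
    x = nxt
```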

Anonymous 0 Comments

So when you look at a limit, if you were to “read it aloud” you’d say, for example, “The limit as x approaches 0 of….”

The key word is "approaches". The limit equals a value because the limit is telling you the value that the function is approaching. The function is never necessarily equal to that value – it's approaching it.

Anonymous 0 Comments

There are two ways to answer: one is handwavy, the other is how it's done formally in maths.

Handwavy: first you need to convince yourself that a limit from an infinite sequence of steps actually can exist.

Consider the number 2. Well, it's real, right? You can count it; there it is. Now consider the sequence 1.9, 1.99, 1.999, 1.9999, etc. This sequence has nth term 1 + sum_{k=1}^{n} 9×10^-k.

The infinite sequence has a limit, and it is 2. As n increases the sequence gets closer and closer to 2; we say the limit as n tends to infinity is 2. Asking what the limit of the chord's gradient is as h gets smaller and smaller is similar. The chord has a gradient for each h, and it approaches the tangent's gradient (as h -> 0) in much the same way that the sequence 1.99...9 approaches 2.
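
To make that concrete, here's a small Python check I threw together (the cutoff at n = 7 is arbitrary):

```python
# A quick numeric check of the sequence above (my own sketch): the nth
# term 1 + sum_{k=1}^{n} 9*10^-k creeps toward 2 without ever reaching it.
for n in range(1, 8):
    term = 1 + sum(9 * 10**-k for k in range(1, n + 1))
    print(f"n={n}: term = {term:.7f}, distance to 2 = {2 - term:.1e}")
```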

The reason why you can effectively make h zero at the end of differentiation from first principles is that at the start (with h = 0) you have 0/0, which is an indeterminate form. At the end of the process you have the gradient function plus an expression multiplied by a power of h, and that can be evaluated at h = 0.

For instance, x^3 from first principles ends with 3x^2 + 3xh + h^2. If h is zero here, the last two terms disappear.
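
Here's a little Python sketch of mine that plays the same game numerically (the choice x = 2 is arbitrary): the quotient visibly settles toward 3x^2 = 12 as h shrinks, without h ever being zero.

```python
# My own numeric sketch of the x^3 example: the difference quotient
# ((x+h)^3 - x^3)/h simplifies to 3x^2 + 3xh + h^2, so as h shrinks it
# should settle on 3x^2. At x = 2 that target is 12.
f = lambda x: x**3
x = 2.0
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(f"h={h}: quotient = {(f(x + h) - f(x)) / h:.6f}")  # heads toward 12
```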

Now, all of the above is handwavy. The proper way is via real analysis, which provides rigorous epsilon-delta definitions of limits and so on. If you follow that logic it becomes clear why a limit is attained, but you do lose some intuition.

Anonymous 0 Comments

While it is not the correct definition of a limit, in most everyday cases of using limits to deal with functions, you’re basically asking “What is the right value of this function to fill in at this point so that it becomes continuous (i.e. its graph becomes connected)? Or is there even one?”.

Suppose you’re computing something like lim_(x→1) x+1. When you just plug in x=1 to get 2, the often unstated logic behind this is roughly as follows: x+1 is already a continuous function, and it already has a value at x=1, which is 2. Hence, the right fill-in to make it continuous is just the original value, 2.

Suppose you’re computing something like lim_(x→1) (x²-1)/(x-1). Now you originally have no value at x=1, because that gives you 0/0. But you notice everywhere other than x=1, (x²-1)/(x-1) and x+1 are in fact exactly the same. So they should certainly approach the same value at x=1. And then the substitution of x=1 into x+1 is justified with the same logic as above: x+1 is already a continuous function, so the correct “fill-in” it approaches as x approaches 1 is just the original value at x=1.

(Again: if you start getting technical, the above explanation is not completely correct. It assumes, for instance, that your function is nice and continuous near the point you're taking a limit at, and it applies somewhat circular logic by explaining limits through continuity while continuity is formally defined through limits. But it should at least somewhat demystify the precise logic behind these kinds of computational manipulations that are typically done with limits.)
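
For what it's worth, here's a short Python sketch of my own that makes the "same values away from x = 1" claim visible numerically:

```python
# A numeric sketch (mine) of the "fill-in" idea: away from x = 1,
# (x^2 - 1)/(x - 1) and x + 1 agree exactly, so both head toward the
# same fill-in value, 2, as x approaches 1.
g = lambda x: (x**2 - 1) / (x - 1)
for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(f"x={x}: (x^2-1)/(x-1) = {g(x):.6f}, x+1 = {x + 1:.6f}")
```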

Anonymous 0 Comments

> I understand that the limit h -> 0 represents the gap between x and (x + h) getting smaller and smaller. But how does this gap disappear at the end?

It doesn’t. Let’s work an example: let’s compute the derivative of f(x) = x^2 from first principles.

—–

By definition, f'(x) = lim(h->0) (f(x+h) - f(x))/h

Since our function is f(x) = x^2, we can write this out explicitly: lim(h->0) ((x+h)^2 - x^2)/h.

Expand out (x+h)^2 to get x^2 + 2xh + h^2. Then the numerator is x^2 + 2xh + h^2 - x^2, and the x^2's cancel. So our numerator is 2xh + h^2, all over h.

Since we are considering values near, but not equal to, zero, h != 0. And that means it’s valid to cancel the common factor of *h* on both top and bottom. That gets us 2x + h.

So, what’s lim(h->0) 2x + h? Well, this is a continuous function in *h*, so **now** we are allowed to plug in h = 0 (because by definition, the limit of a continuous function at a point is just the value of the function there). We couldn’t do this before because it would have resulted in 0/0, an indeterminate form that doesn’t give us anything useful.

h is never actually 0 here. But the continuity of the function we got at the end makes the value of the limit equal to what the value *would be* if h *were* 0 (and if the original expression didn't have a removable discontinuity at h = 0).
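
If you want to watch this happen numerically, here's a quick Python sketch of mine (x = 3 is an arbitrary choice):

```python
# My own numeric sketch of the worked example: ((x+h)^2 - x^2)/h equals
# 2x + h for every nonzero h, so it heads toward 2x as h -> 0. With
# x = 3 the target is 6; note that h itself never has to be 0.
x = 3.0
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(f"h={h}: ((x+h)^2 - x^2)/h = {((x + h)**2 - x**2) / h:.8f}")
```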

Anonymous 0 Comments

I appreciate the explanations by everyone so far but could someone eli stupid? Like, pretend I’m shit at math type deal.

This is not my field (seemingly obvious) but I like learning.

Anonymous 0 Comments

It’s mathematicians cheating. We can’t divide by zero so we divide by something really, really close to zero but never *actually* reaching zero. However, at the same time we say it’s so close to zero that we can pretend it is zero compared to the value of x.

Basically we’re having our cake and eating it, making something simultaneously zero and not zero as it suits us.

Anonymous 0 Comments

f'(x) = lim a->0 (f(x+a) – f(x-a))/(2a)

This is the definition of the derivative I assume you're talking about. You may have seen it written as

f'(x) = lim h->x (f(x) - f(h))/(x - h)

Both are essentially the same thing. You take a point on f(x) and you want the slope at that point. You simply do rise over run, m = Δy/Δx

That works just fine when calculating the slope between two points, but in this case we only have one.

It’s pretty easy to imagine what’s happening in the first one. We are taking the slope between two points, x+a and x-a. We just make the points infinitely close together to get the slope at x.

In the second, we do the same, but we take the points x and h and move h infinitely close to x to get the slope at x.

This is also how we get dy/dx to be the symbol for a derivative. It comes from Δy/Δx

As long as the function is continuous at x, this works fine, because the limit of the function is then equal to the value of the function. This is why you can't take the derivative of a function at a point of discontinuity.
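
Here's a Python sketch of my own comparing the two formulas numerically on f(x) = x^2 at x = 3 (both the function and the point are just illustrative choices):

```python
# Both definitions close in on the same value, the true slope 6.
f = lambda x: x**2
x = 3.0
for a in [0.1, 0.01, 0.001]:
    symmetric = (f(x + a) - f(x - a)) / (2 * a)    # slope between x-a and x+a
    one_sided = (f(x) - f(x - a)) / (x - (x - a))  # second form, with h = x - a
    print(f"a={a}: symmetric = {symmetric:.6f}, one-sided = {one_sided:.6f}")
```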

Anonymous 0 Comments

The limit of an expression like 5x+3-x^2 as x approaches a value, like 2, is just the value the expression goes toward as x goes toward 2.

If you plug in 1.9, then 1.99, then 1.999, etc., you will see the values that the expression takes get closer and closer to 9. It just so happens that you can get 9 by simply plugging in x=2 to the expression as well. This shortcut works for “nice” expressions, which is why you will often see the limit disappear as something is plugged in for a value.
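
Here's that exact computation as a tiny Python sketch of my own:

```python
# A small sketch of plugging in closer and closer values: 5x + 3 - x^2
# drifts toward 9 as x drifts toward 2, matching direct substitution.
expr = lambda x: 5 * x + 3 - x**2
for x in [1.9, 1.99, 1.999, 1.9999]:
    print(f"x={x}: value = {expr(x):.6f}")
print("direct substitution at x=2:", expr(2))
```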

If you take the limit as h goes to 0 of 3x+h, you can just plug in h=0 in order to compute the limit as 3x because 3x+h is one of those “nice” expressions. In reality computing the limit is done by seeing what 3x+h moves toward as you plug in values that get closer and closer to 0 for h.

There is a more technical definition for calculating the limit, which is not usually taught in calculus the first time around, which defines exactly what people mean by “gets closer.”

Anonymous 0 Comments

Yeah, limits tend to get hand-waved a bit in elementary calculus, especially in high school. If you get further into math, you'll come across a more rigorous definition, which I think helps clarify things.

Let’s consider the function [f(x) = sin(x) / x](https://www.wolframalpha.com/input/?i=plot+sin%28x%29%2Fx). Looking at the graph, it seems like f(0) = 1, but if you plug in 0 you get 0 / 0 which is undefined. And indeed, the function is defined for all x except 0, and skipping ahead a bit, lim[x->0] f(x) = 1. If you’ve learned L’Hôpital’s rule, a quick application of it confirms this.

But if we back up a bit, what’s a general way we could convince ourselves that this limit should be 1? The idea is to think about how close we can get to 1, and make a little game out of it. Let’s say I challenge you to find a value of x such that f(x) is within 0.01 of 1. Can you do it? It turns out, you can, x=0.000001 does it. What about getting f(x) within 0.0000001 of 1? Sure, just make x=0.000000000000000001. No matter how close I challenge you to get to 1, you can find an x that does it, and you have to get close to 0 to find it. This is the [epsilon-delta](https://en.wikipedia.org/wiki/Limit_of_a_function#Functions_of_a_single_variable) definition.
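
Here's the challenge game as a small Python sketch of my own (the starting point x = 1 and the factor-of-10 steps are arbitrary choices):

```python
# For each tolerance eps I'm challenged with, hunt for an x that puts
# sin(x)/x within eps of 1; tighter challenges force x closer to 0,
# which is the heart of the epsilon-delta definition.
import math

for eps in [0.01, 1e-7, 1e-12]:
    x = 1.0
    while abs(math.sin(x) / x - 1) >= eps:
        x /= 10                      # step closer to 0 until the challenge is met
    print(f"eps={eps}: x = {x:g} gives sin(x)/x = {math.sin(x)/x:.15f}")
```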

The idea works the same with lim[x->infinity] 1/x = 0. You can’t “plug in” infinity, but however close to 0 I challenge you to get, you can find an x that achieves it, and you have to get arbitrarily large to do so.