Limits in Mathematics?


I only ever encountered the limit while learning differentiation from first principles in calculus. I understood all the theory behind first principles, but we were never told what happens to the limit h -> 0. Our teacher just said that it goes away after we divide by h, and that’s all I got.

I understand that the limit h -> 0 represents the gap between x and (x + h) getting smaller and smaller. But how does this gap disappear at the end? From searching online I’ve learned that limits are not *equality*, h never *equals* zero, it just gets closer and closer to it. But then why does it equal zero at the end? How is h -> 0 no longer intrinsic to f'(x)? This might be a dumb question but it has stumped me for ages now.


12 Answers

Anonymous 0 Comments

The formal way of thinking about limits is the epsilon-delta definition. The idea is that, even if we can’t define the behaviour exactly at the point, we can look at the behaviour as we get closer and closer. More specifically, when we say that the limit of some function f(x) as x approaches some point c is some value L (written lim x->c f(x) = L), what we mean is that for any small positive value (which we call ε, or epsilon) we can find another small positive value (which we call δ, or delta) such that for all the x-values closer than δ to c (i.e. 0 < |x – c| < δ), f(x) is closer than ε to L (i.e. |f(x) – L| < ε). We require 0 < |x – c| because if it were *equal* to 0, we’d be at the point c itself, and that might not give a well-defined value for f(x).

The way you’d actually do this in practice is to pick δ to be some appropriate function of ε, then look at f(x) for 0 < |x – c| < δ and check whether those values are within ε of L. For example, let’s take the derivative of x^2. We know that this is lim h->0 ((x+h)^2 – x^2)/h, and we can expand the brackets and simplify to get lim h->0 (2xh + h^2)/h.

Now, we can’t just put in h = 0, because that would get us 0/0, which is undefined, and we also can’t cancel the h out and then set it to 0, because when you cancel out a variable you intrinsically assume it’s not 0, and if we cancelled h out and then set h to 0 we’d be contradicting ourselves. Let’s use the ε-δ approach. We know that the limit here should be 2x, so that’s our value of L, and we’re approaching 0, so that’s our value of c. So we want to show that for every small positive value ε we can find some positive value δ such that for all h with 0 < |h| < δ, we have |f(h) – 2x| < ε.

First, let’s look at what f(h) is. We have f(h) = (2xh + h^(2))/h. Now, and this is the clever bit, because we said that |h| is greater than 0, that means h is not 0, which means it’s totally ok to cancel it out. That gives us f(h) = 2x + h, so |f(h) – 2x| = |2x + h – 2x| = |h|. If |h| < δ, then we have |f(h) – 2x| < δ, and we want this to be less than ε, so we just have to choose something like δ = ε/2. ε/2 is always smaller than ε for positive ε, so we’ve successfully found a δ-value that guarantees we’ll be within ε of 2x. Therefore, the limit is 2x, and at no point did we do anything illegal like dividing by 0 or getting an undefined expression like 0/0.
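The δ = ε/2 argument above can be sanity-checked numerically. A Python sketch (the function name `diff_quotient` and the sample values are just for illustration):

```python
# Numerically check the epsilon-delta argument: with delta = epsilon / 2,
# every h with 0 < |h| < delta keeps the difference quotient of x^2
# within epsilon of 2x.

def diff_quotient(x, h):
    """The difference quotient ((x + h)**2 - x**2) / h, defined for h != 0."""
    return ((x + h) ** 2 - x ** 2) / h

x = 3.0
for epsilon in (0.1, 0.01, 0.001):
    delta = epsilon / 2
    # sample some h-values with 0 < |h| < delta
    for h in (0.9 * delta, 0.5 * delta, -0.5 * delta):
        assert abs(diff_quotient(x, h) - 2 * x) < epsilon
```

Of course, checking finitely many h-values proves nothing by itself; the algebra in the answer is what guarantees it for *every* h.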

Anonymous 0 Comments

ELI5: imagine you have a cup of hot chocolate and your mom says “when you finish your hot chocolate you have to go to bed”. Now, you are smart and you don’t want to go to bed, so every time you take a sip, you make sure you leave some hot chocolate in the cup. You can drag this out forever and never finish the hot chocolate, so the amount left is always more than nothing.

Now imagine that mom gets annoyed and starts measuring how much is left in the cup with a ruler. The longer you play, the closer the level of the remaining chocolate gets to the 0 mark on the ruler. She could pick any line on the ruler, or even draw a new one, anywhere above 0, and be sure that after you play this game long enough, the chocolate will stay below that line forever.

So in short: the hot chocolate will never be empty, because you always leave some in the cup. But you can pick any level, however close to empty, and know that after some sip the amount left will always be below it. So the limit of the hot chocolate is empty, but it will still never be empty.

Anonymous 0 Comments

Yeah, limits tend to get hand waved a bit in elementary calculus, especially in high school. If you get further into math, you’ll come across a more rigorous definition, which I think helps clarify things.

Let’s consider the function [f(x) = sin(x) / x](https://www.wolframalpha.com/input/?i=plot+sin%28x%29%2Fx). Looking at the graph, it seems like f(0) = 1, but if you plug in 0 you get 0 / 0 which is undefined. And indeed, the function is defined for all x except 0, and skipping ahead a bit, lim[x->0] f(x) = 1. If you’ve learned L’Hôpital’s rule, a quick application of it confirms this.

But if we back up a bit, what’s a general way we could convince ourselves that this limit should be 1? The idea is to think about how close we can get to 1, and make a little game out of it. Let’s say I challenge you to find a value of x such that f(x) is within 0.01 of 1. Can you do it? It turns out you can: x = 0.000001 does it. What about getting f(x) within 0.0000001 of 1? Sure, just make x = 0.000000000000000001. No matter how close I challenge you to get to 1, you can find an x that does it, and x has to get close to 0 to do it. This is the [epsilon-delta](https://en.wikipedia.org/wiki/Limit_of_a_function#Functions_of_a_single_variable) definition.
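A quick numeric version of this game, as a Python sketch (the tolerances and sample x are arbitrary):

```python
import math

# The challenge game, numerically: however close to 1 you ask me to get,
# a small enough nonzero x makes sin(x)/x land that close.
def f(x):
    return math.sin(x) / x   # undefined at x = 0 itself

for tolerance in (0.01, 1e-7, 1e-12):
    x = 1e-9   # small but nonzero
    assert abs(f(x) - 1) < tolerance
```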

The idea works the same with lim[x->infinity] 1/x = 0. You can’t “plug in” infinity, but however close to 0 I challenge you to get, you can find an x that achieves it, and you have to get arbitrarily large to do so.

Anonymous 0 Comments

The limit of an expression like 5x+3-x^2 as x approaches a value, like 2, is just the value the expression goes toward as x goes toward 2.

If you plug in 1.9, then 1.99, then 1.999, etc., you will see the values that the expression takes get closer and closer to 9. It just so happens that you can get 9 by simply plugging in x=2 to the expression as well. This shortcut works for “nice” expressions, which is why you will often see the limit disappear as something is plugged in for a value.

If you take the limit as h goes to 0 of 3x+h, you can just plug in h=0 in order to compute the limit as 3x because 3x+h is one of those “nice” expressions. In reality computing the limit is done by seeing what 3x+h moves toward as you plug in values that get closer and closer to 0 for h.
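That “plug in smaller and smaller h” process is easy to watch happen. A Python sketch with arbitrary sample values:

```python
# Watch 3x + h drift toward 3x as h shrinks -- the same value that
# direct substitution of h = 0 gives for this "nice" expression.
x = 5.0
samples = [3 * x + h for h in (0.1, 0.01, 0.001, 0.0001)]
# samples creep toward 3 * x = 15.0
assert all(abs(s - 15.0) <= 0.1 for s in samples)
assert abs(samples[-1] - 15.0) < 0.001
```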

There is a more technical definition for calculating the limit, which is not usually taught in calculus the first time around, which defines exactly what people mean by “gets closer.”

Anonymous 0 Comments

f'(x) = lim a->0 (f(x+a) – f(x-a))/(2a)

Being the definition of a derivative is what I assume you’re talking about. You may have seen it as

f'(x) = lim h->x (f(x) – f(h))/(x – h)

Both are essentially the same thing. You take a point on f(x) and you want the slope at that point. You simply do rise over run, m = Δy/Δx

That works just fine when calculating the slope between two points, but in this case we only have one.

It’s pretty easy to imagine what’s happening in the first one. We are taking the slope between two points, x+a and x-a. We just make the points infinitely close together to get the slope at x.

In the second, we do the same but we take the points x and h and we just move h infinitely close to x to get the slope at x

This is also how we get dy/dx to be the symbol for a derivative. It comes from Δy/Δx

For this to work the function has to be well-behaved at x; in particular it must be continuous there, which is why you can’t take the derivative of a function at a point of discontinuity. (Continuity alone isn’t quite enough, though: f(x) = |x| is continuous at 0 but has no derivative there, because the slope from the left disagrees with the slope from the right.)
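Both limit formulations above can be checked numerically; note that the symmetric version needs a 2a in the denominator to give the right slope. A Python sketch for f(x) = x^2 at x = 2, where the true derivative is 4:

```python
# Slopes from the two formulations, for f(x) = x**2 at x = 2.
def f(x):
    return x ** 2

x = 2.0
for a in (0.1, 0.001):
    symmetric = (f(x + a) - f(x - a)) / (2 * a)  # slope between x - a and x + a
    assert abs(symmetric - 4.0) < 1e-9           # exact for a quadratic
for h in (2.1, 2.001):
    two_point = (f(x) - f(h)) / (x - h)          # slope between x and h, h -> x
    assert abs(two_point - 4.0) < 0.2
```

The symmetric quotient lands on 4 exactly for a quadratic; the two-point version visibly closes in on 4 as h approaches x.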

Anonymous 0 Comments

It’s mathematicians cheating. We can’t divide by zero so we divide by something really, really close to zero but never *actually* reaching zero. However, at the same time we say it’s so close to zero that we can pretend it is zero compared to the value of x.

Basically we’re having our cake and eating it, making something simultaneously zero and not zero as it suits us.

Anonymous 0 Comments

I appreciate the explanations by everyone so far but could someone eli stupid? Like, pretend I’m shit at math type deal.

This is not my field (seemingly obvious) but I like learning.

Anonymous 0 Comments

While it is not the correct definition of a limit, in most everyday cases of using limits to deal with functions, you’re basically asking “What is the right value of this function to fill in at this point so that it becomes continuous (i.e. its graph becomes connected)? Or is there even one?”.

Suppose you’re computing something like lim_(x→1) x+1. When you just plug in x=1 to get 2, the often unstated logic behind this is roughly as follows: x+1 is already a continuous function, and it already has a value at x=1, which is 2. Hence, the right fill-in to make it continuous is just the original value, 2.

Suppose you’re computing something like lim_(x→1) (x²-1)/(x-1). Now you originally have no value at x=1, because that gives you 0/0. But you notice everywhere other than x=1, (x²-1)/(x-1) and x+1 are in fact exactly the same. So they should certainly approach the same value at x=1. And then the substitution of x=1 into x+1 is justified with the same logic as above: x+1 is already a continuous function, so the correct “fill-in” it approaches as x approaches 1 is just the original value at x=1.

(Again: if you start getting technical, the above explanation is not completely correct. It for instance assumes that your function is nice and continuous near the point you’re taking a limit at, and it applies somewhat circular logic in explaining limits through continuity while continuity is formally defined through limits. But it should at least somewhat demystify the precise logic behind these kinds of computational manipulations that are typically done with limits.)
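The “same function away from x = 1” claim is easy to check numerically. A Python sketch (the sample points are arbitrary):

```python
# Everywhere except x = 1, (x**2 - 1)/(x - 1) and x + 1 agree exactly,
# so both approach the same fill-in value, 2, as x -> 1.
def g(x):
    return (x ** 2 - 1) / (x - 1)   # 0/0 at x = 1 itself

for x in (0.9, 0.999, 1.001, 1.1):
    assert abs(g(x) - (x + 1)) < 1e-9   # the two expressions agree
    assert abs(g(x) - 2) <= 0.11        # and both sit near the fill-in value 2
```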

Anonymous 0 Comments

> I understand that the limit h -> 0 represents the gap between x and (x + h) getting smaller and smaller. But how does this gap disappear at the end?

It doesn’t. Let’s work an example: let’s compute the derivative of f(x) = x^2 from first principles.

—–

By definition, f'(x) = lim(h->0) (f(x+h) – f(x))/h

Since our function is f(x) = x^2, we can write this out explicitly: lim(h->0) ((x+h)^2 – x^2)/h.

Expand out (x+h)^2 to get x^2 + 2xh + h^2. Then the numerator is x^2 + 2xh + h^2 – x^2, and the x^2’s cancel. So our numerator is 2xh + h^2, all over h.

Since we are considering values near, but not equal to, zero, h != 0. And that means it’s valid to cancel the common factor of *h* on both top and bottom. That gets us 2x + h.

So, what’s lim(h->0) 2x + h? Well, this is a continuous function in *h*, so **now** we are allowed to plug in h = 0 (because by definition, the limit of a continuous function at a point is just the value of the function there). We couldn’t do this before because it would have resulted in 0/0, an indeterminate form that doesn’t give us anything useful.

h is never actually 0 here. But the continuity of the function we got at the end makes the value of the limit equal to what the value *would be* if h *were* 0 (and if the original expression didn’t have a removable discontinuity at 0).
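Here is the whole computation replayed numerically as a Python sketch: h never equals 0, yet the difference quotient visibly settles on 2x.

```python
# Difference quotient of f(x) = x**2 for shrinking nonzero h.
def quotient(x, h):
    return ((x + h) ** 2 - x ** 2) / h   # only legal because h != 0

x = 3.0
values = [quotient(x, h) for h in (0.1, 0.01, 0.001, 0.0001)]
# values head toward 2 * x = 6; plugging h = 0 directly would give 0/0
assert all(abs(v - 6.0) < 0.11 for v in values)
assert abs(values[-1] - 6.0) < 0.001
```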

Anonymous 0 Comments

There are two ways to answer this: one is handwavy, the other is how it’s done formally in maths.

Handwavy: first you need to convince yourself that the limit of an infinite sequence of steps actually can exist.

Consider the number 2. It’s real, right? You can count to it; there it is. Now consider the sequence 1.9, 1.99, 1.999, 1.9999, etc. This sequence has nth term 1 + (the sum from k = 1 to n of 9×10^-k).

The infinite sequence has a limit: it is 2. As n increases, the terms get closer and closer to 2, and we say the limit as n tends to infinity is 2. Asking what the limit of the chord’s gradient is as h gets smaller and smaller is similar: the chord has a gradient for each h, and it approaches the tangent’s gradient (as h -> 0) in the same way that 1.999… (n nines) approaches 2.
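The 1.9, 1.99, 1.999, … sequence is easy to generate and watch converge. A Python sketch using the nth-term formula above:

```python
# nth term: 1 + sum_{k=1}^{n} 9 * 10**(-k), i.e. 1.9, 1.99, 1.999, ...
def term(n):
    return 1 + sum(9 * 10 ** (-k) for k in range(1, n + 1))

for n in (1, 3, 6):
    t = term(n)
    assert t < 2                                # no term ever reaches 2
    assert abs((2 - t) - 10 ** (-n)) < 1e-12    # but the gap shrinks tenfold each step
```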

The reason you can effectively make h zero at the end of differentiation from first principles is that at the start (with h = 0) you have 0/0, which is an indeterminate form. At the end of the process you have the gradient function plus an expression multiplied by a power of h, and that can be evaluated at h = 0.

For instance, x^3 from first principles ends with 3x^2 + 3xh + h^2. If h is zero here, the last two terms disappear.
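Numerically, for x^3, as a Python sketch: after cancelling h, the quotient is exactly 3x^2 + 3xh + h^2, and the last two terms fade as h shrinks.

```python
# Difference quotient of x**3 versus its cancelled form 3x^2 + 3xh + h^2.
def quotient(x, h):
    return ((x + h) ** 3 - x ** 3) / h   # needs h != 0

x = 2.0
h = 1e-4
cancelled = 3 * x ** 2 + 3 * x * h + h ** 2
assert abs(quotient(x, h) - cancelled) < 1e-6   # same thing for nonzero h
assert abs(quotient(x, h) - 3 * x ** 2) < 1e-3  # the h-terms are already tiny
```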

Now, the above is handwavy. The proper way is via mathematical analysis, which provides rigorous epsilon-delta definitions of limits. If you follow that logic it is clear why a limit is attained, but you do lose some intuition.