Epsilon delta definition of limits


I mean, why would someone deliberately create such a confusing definition of limits? Why does the existence of delta depend on epsilon? Why, in a proof, can I start with basically any value of delta (usually 1) and then have to overwrite it with delta = min {1, something}? Also, the proof of any exercise that says "prove the limit of this f(x) is (insert number)" does not make any sense to me. Please explain.


3 Answers

Anonymous 0 Comments

> such confusing definition of limits

It’s only confusing because you aren’t familiar with it. In reality, it’s *rigorous* and provides a way for mathematicians to precisely describe what it means for a limit to exist.

The simplest way to wrap your head around it is like this: someone hands you a tolerance on the y-axis, called epsilon. Whatever that tolerance is, you need to be able to choose an interval on the x-axis, called delta, small enough that *for x values within delta of the point, all of the y-values are within epsilon of the limit*. When you set delta to the minimum of two things, it’s usually because e.g. 1 is “good enough” for large values of epsilon, but you need smaller x-intervals for smaller values of epsilon.

Without a concrete example to work through, it’s hard to give a better explanation than that, but there is a worked sketch below. The point is that you want to choose your delta so that for any x value within delta of the point, the value of f(x) is within epsilon of your limit.
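
To make the "delta = min {1, something}" move concrete, here is a minimal sketch in Python of a worked example that is *not* from the answer above: proving that the limit of x^2 as x -> 2 is 4. If |x - 2| < 1 then |x + 2| < 5, so |x^2 - 4| = |x - 2||x + 2| < 5|x - 2|, and delta = min(1, epsilon/5) does the job; the script just spot-checks that choice numerically.

```python
import random

# Claim: lim_{x -> 2} x^2 = 4.
# If |x - 2| < 1 then |x + 2| < 5, so |x^2 - 4| < 5 * |x - 2|,
# hence delta = min(1, eps / 5) works for every eps > 0.
def delta_for(eps):
    return min(1.0, eps / 5.0)

def f(x):
    return x * x

for eps in [10.0, 0.5, 1e-6]:
    d = delta_for(eps)
    # Spot check: sample x values within delta of 2 and confirm f(x) stays within eps of 4.
    samples = (2 + (random.random() * 2 - 1) * d for _ in range(10_000))
    ok = all(abs(f(x) - 4) < eps for x in samples)
    print(f"eps={eps:g}  delta={d:g}  all sampled x passed: {ok}")
```

Notice that for eps = 10 the answer is simply delta = 1 (“good enough”), while for eps = 0.000001 the eps/5 bound takes over, which is exactly why the final answer gets written as a minimum of the two.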

Anonymous 0 Comments

The formal epsilon-delta definition of a limit is a very sophisticated and subtle piece of technology which took hundreds of years to develop and is rarely done the justice it deserves in a typical Calculus class.

The biggest thing to keep in mind about limits is that they are, effectively, instructions on how to get arbitrarily precise approximations. For example, if you want a decent approximation for pi then you can use the classic limit sin(pi*x)/x -> pi as x goes to zero. The idea is that the smaller the number you plug in for x, the closer the value is going to be to pi.* But all you know from this description is that “When x is small, sin(pi*x)/x is close to pi.” That’s not particularly useful and is *very* imprecise. What does “small” mean? What does “close to pi” mean? Will it be close enough for your purposes?

If I’m making a hyper-precise construction, then I can’t necessarily know that the value I have for pi is a good enough approximation for my purposes just because I plugged in a “small-feeling” number. So I would go back to the mathematician and say: “I need an approximation for pi that is accurate to 5 decimal places.” And so the mathematician has to answer the following problem: “How small does my x need to be before I can *guarantee* that the resulting approximation is correct to 5 decimal places?” Or, more technically:

* How much of a bound do I need to put on the size of |x| in order to guarantee that |sin(pi*x)/x - pi| < 10^(-5)?

That is, given a pre-determined allowable error 𝜀 for my approximation, what is the biggest size (or wiggle room) 𝛿 that I have for the input value? This is the underlying idea of the 𝜀-𝛿 definition of a limit. The statement for a limit is a bit more powerful, though: the function sin(pi*x)/x can not only approximate pi to 5 decimal places, but to ANY precision we can think of. That is, for ANY error 𝜀, I know for a fact that I can find a number 𝛿 so that if |x| < 𝛿 then my approximation will be within error 𝜀. That’s what a limit is.
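
To make the “allowable error 𝜀 → wiggle room 𝛿” step concrete, here is a minimal sketch in Python. It leans on the standard bound |sin(t) - t| <= |t|^3/6 (my addition, not something stated above), which gives |sin(pi*x)/x - pi| <= pi^3 * x^2 / 6, so delta = sqrt(6*eps/pi^3) works for any requested error eps.

```python
import math

# Standard bound: |sin(t) - t| <= |t|^3 / 6.  With t = pi*x this gives
# |sin(pi*x)/x - pi| <= pi^3 * x^2 / 6, so solving pi^3 * delta^2 / 6 = eps
# yields a delta that works for ANY requested error eps.
def delta_for(eps):
    return math.sqrt(6.0 * eps / math.pi ** 3)

for eps in [1e-2, 1e-5, 1e-10]:
    d = delta_for(eps)
    x = 0.999 * d  # a point just inside the allowed wiggle room
    err = abs(math.sin(math.pi * x) / x - math.pi)
    print(f"eps={eps:g}  delta={d:.3e}  error at x={x:.3e}: {err:.3e}  within eps: {err < eps}")
```

For the 5-decimal-place request (eps = 10^(-5)) this says any |x| smaller than roughly 0.0014 is guaranteed to be good enough.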

Continuity is a slightly different thing. Intuitively, a continuous function has no sudden jumps and can be drawn with a pencil without lifting it off the page. What this means is that if you stop your pencil while drawing the graph, the values that follow can’t do anything crazy, since they have to connect to where your pencil is. What we might say, then, is that for a continuous function, the value of f(a) is *approximated* by the values of f(x) for x *near* x = a. That is, the function approximates itself. This brings us back to limits, and so we say that a function is continuous at a point when the limit at that point equals the value at that point.
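
Written with the same machinery, the standard statement is: f is continuous at x = a exactly when lim_{x -> a} f(x) = f(a), i.e. for every 𝜀 > 0 there is a 𝛿 > 0 such that |x - a| < 𝛿 forces |f(x) - f(a)| < 𝜀.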

Thinking of Calculus in terms of approximation makes things much more conceptually grounded than trying to wrap your brain around infinitesimals or whatever; it’s more formally correct, and it’s much more practical as well. But getting from Newton and Leibniz’s calculus, with all those vague notions of fluxions and infinitesimals, to today’s rigorous and useful understanding took a lot of work, and it’s a good story. If you want to read a bit of it, check out [this paper](https://maa.org/sites/default/files/pdf/upload_library/22/Ford/Grabiner185-194.pdf).

---

**Side note: this is related to what Archimedes did for his approximation of pi; he effectively plugged x = 1/96 into it (which he was able to compute directly using half-angle formulas for sine).**

Anonymous 0 Comments

Suppose I want to convince you that the limit of sin(x)/x as x -> 0 is 1.0 (when x is in radians). We can’t simply compute sin(0)/0 because 0/0 is meaningless. Instead, we play a game. You tell me how close you want to get, and I tell you how small x has to be to get that close. Perhaps after a few iterations of the game you are convinced. Or perhaps you want to know how I pick my answers, and I can give you a formula that always works. The latter becomes a rigorous definition of the limit.
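
Here is a minimal sketch of that game in Python, using sin(x)/x -> 1. The “formula that always works” I hand over is delta = sqrt(6*eps), which comes from the standard bound |sin(x) - x| <= |x|^3/6 (my choice of formula; it is not the only one that works).

```python
import math
import random

# The "game": you name a tolerance eps, I answer with a delta such that
# every nonzero x with |x| < delta gives |sin(x)/x - 1| < eps.
# From |sin(x) - x| <= |x|^3 / 6 we get |sin(x)/x - 1| <= x^2 / 6,
# so delta = sqrt(6 * eps) always works.
def my_answer(eps):
    return math.sqrt(6.0 * eps)

for eps in [0.1, 1e-3, 1e-8]:            # your moves
    d = my_answer(eps)                   # my replies
    xs = ((random.random() * 2 - 1) * d for _ in range(10_000))
    ok = all(abs(math.sin(x) / x - 1.0) < eps for x in xs if x != 0.0)
    print(f"you ask for eps={eps:g}, I answer delta={d:g}, all samples close enough: {ok}")
```

Playing a few rounds by hand is the “few iterations” version; handing over my_answer itself is the formula that always works, and writing that guarantee down for every possible eps is exactly the epsilon-delta definition.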