Yes, the difference is that 1/x² approaches zero faster.
If you make rectangles of width 1 under the curve of 1/x, you end up adding the areas 1/1 + 1/2 + 1/3 + … This sum diverges, but just barely. Pretty much any other curve that approaches zero any faster will have a small enough area that you can add up all the rectangles and get a finite number.
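A quick numeric way to see this (a minimal sketch; the cutoff of 10^5 terms is an arbitrary choice) is to compare the partial sums of those rectangle areas for 1/x and 1/x²:

```python
# Partial sums of the width-1 rectangle areas under 1/x and 1/x^2.
def partial_sum(num_terms, f):
    total = 0.0
    for n in range(1, num_terms + 1):
        total += f(n)
    return total

harmonic = partial_sum(10**5, lambda n: 1 / n)    # still creeping upward
squares = partial_sum(10**5, lambda n: 1 / n**2)  # levelling off near pi^2/6

print(f"sum of 1/n   up to 1e5: {harmonic:.3f}")
print(f"sum of 1/n^2 up to 1e5: {squares:.6f}")
```

The 1/n sum has only reached about 12.1 after a hundred thousand terms, but it never stops growing; the 1/n² sum is already flat to several decimal places (its limit happens to be π²/6 ≈ 1.645).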
> What’s the threshold for something to be considered finite or infinite?
It’s not a matter of opinion: things either are finite or they are not.
If you are asking about the integral from 1 to N of these functions as N tends to infinity, then the answer is, at an ELI5 level, just “because that’s how the functions behave”.
* The integral of 1/x is ln(x), and ln(N) grows without bound as N tends to infinity – i.e. the integral is not finite.
* On the other hand, the integral of 1/x^(2) is -1/x, so the integral from 1 to N is 1 - 1/N, and the limit of 1/N as N tends to infinity is zero. This means the integral converges to 1.
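Those two limits can be sketched in code using the antiderivatives above (the sample values of N are arbitrary):

```python
import math

# Evaluate both integrals from 1 to N via their antiderivatives:
#   integral of 1/x   from 1 to N is ln(N) - ln(1) = ln(N)
#   integral of 1/x^2 from 1 to N is (-1/N) - (-1/1) = 1 - 1/N
samples = [(N, math.log(N), 1 - 1 / N) for N in (10, 1000, 10**6, 10**9)]
for N, log_area, sq_area in samples:
    print(f"N = {N:>10}:  area under 1/x = {log_area:7.3f},  area under 1/x^2 = {sq_area:.9f}")
```

The first column of areas keeps climbing (about 2.3 more per factor of 10), while the second never reaches 1.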
It’s about how quickly the area gets added.
An example I’ve always liked is 1/(10^x). Add these up:
0.1
0.01
0.001
And so on. Add an infinite number of these, and it’s a finite number: 1/9. Because each next term is so much smaller than the previous one, adding an infinite number of these positive numbers yields a finite number, which seems counterintuitive.
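A small sketch with exact fractions (ten terms, chosen arbitrarily) shows the partial sums closing in on 1/9:

```python
from fractions import Fraction

# Partial sums of 1/10 + 1/100 + 1/1000 + ..., kept exact to avoid
# floating-point noise.
total = Fraction(0)
for n in range(1, 11):
    total += Fraction(1, 10**n)

print(total)                   # 1111111111/10000000000, i.e. 0.1111111111
print(Fraction(1, 9) - total)  # remaining gap: 1/90000000000
```

Each extra term closes nine tenths of the remaining gap to 1/9, which is why the total stays finite.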
The answer comes from calculus. We can find a function to represent the area under the curve, called the integral. We can look at the integral and see if it keeps getting bigger or if it eventually stops.
1/x integrates to ln|x|+c. We can see that ln|x| just keeps getting bigger as x gets bigger – there’s no limit where it stops.
1/x^2 integrates to -1/x + c, which approaches c as x increases. Measuring the area starting from x=1 to x=inf, we get (-1/inf + c) - (-1/1 + c), which simplifies down to 1.
> So what’s the threshold for something to be considered finite or infinite?
This was a big problem for pre-17th-century mathematicians: how to deal with quantities that may or may not be finite.
The trick turns out to be to use “limits”: the idea that even if we can never actually reach a number, we can get arbitrarily close to it.
Let’s take these integrals/areas.
Suppose we’re finding the area under these curves, “to the right” of x = 1.
For 1/x^2 our textbooks tell us we get an area of 1. For 1/x we get an undefined answer.
Let’s see how we could approach this using limits. We could change the “upper bound” of our area. Instead of going “all the way to the right” (whatever that means, given infinities are involved), what if we only go from x = 1 to x = 2?
For 1/x^2 we get an area of 0.5, for 1/x we get an area of log(2) = 0.69315…
We could do the same going up to x = 3, x = 4 and so on:
| Upper bound | Area under 1/x^2 | Area under 1/x |
| :--- | ---: | ---: |
| 2 | 0.5 | 0.69… |
| 3 | 0.66… | 1.09… |
| 4 | 0.75 | 1.38… |
| 10 | 0.9 | 2.30… |
| 20 | 0.95 | 2.99… |
| 100 | 0.99 | 4.60… |
| 1,000 | 0.999 | 6.90… |
and so on.
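These values can be regenerated directly from the two antiderivatives (a sketch: 1 - 1/b for 1/x² and ln(b) for 1/x, with the same upper bounds):

```python
import math

# Area from x = 1 up to each upper bound b:
#   under 1/x^2: 1 - 1/b      under 1/x: ln(b)
rows = [(b, 1 - 1 / b, math.log(b)) for b in (2, 3, 4, 10, 20, 100, 1000)]
for b, sq_area, log_area in rows:
    print(f"{b:>5}  {sq_area:7.4f}  {log_area:7.4f}")
```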
From what we can see so far it looks like the 1/x^2 area is heading towards 1, but the 1/x one is getting bigger and bigger.
If we put in “infinity” as our upper limit… we can’t, because infinity isn’t really a number. But what if we sneak up on it using limits? We can say we think the limit of this sequence of numbers (for 1/x^(2)) is 1, while there isn’t a limit for the 1/x sequence of numbers.
The way we prove limits usually is by a challenge or game. You give me some number (bigger than 0), as small as you like. I then have to find some point in our sequence so that all numbers from that point onwards are closer to my proposed limit than the number you’ve given me.
So for example, if you give me the number 0.0001, I have to find some value for the upper bound for which all subsequent areas will be within 0.0001 of 1. Which I can do for the 1/x^2 area. The formula we get for those areas is 1 – 1/x, so I just need to find an x for which:
> 1 – (1 – 1/x) < 0.0001
If we solve this we get:
> 1/x < 0.0001
> x > 1 / 0.0001
> x > 10,000
So I can pick x = 10,001 and I win.
No matter how small a target you give me I can always find a point in our sequence where that number, and every subsequent number, will give me an area closer to 1 than your target. Our sequence never actually gets to 1 but it can get *arbitrarily close to 1.*
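That game can be sketched in code (the helper name is my own, and the overshoot of +2 is a deliberate choice to dodge floating-point edge cases):

```python
# For any target eps > 0, find an upper bound x past which every area
# 1 - 1/x is within eps of 1. Any x > 1/eps works; we overshoot slightly
# so floating-point rounding can't cost us the win.
def winning_bound(eps):
    return int(1 / eps) + 2

for eps in (0.1, 0.0001, 1e-9):
    x = winning_bound(eps)
    assert 1 - (1 - 1 / x) < eps  # the area really is within eps of 1
    print(f"eps = {eps}: pick x = {x}")
```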
So we say that this sequence (1 – 1/x) converges to 1 as x tends to infinity. It has a finite value.
We cannot do that with the 1/x areas, as that gives us a sequence of log(x). log(x) does not converge to a value as x goes to infinity; playing a similar game, if you give me a target, no matter how high, I can find a value for x for which log(x) is bigger than your target.
> log (x) > n
> x > e^n
So try x = e^n + 1, and I win.
---
While in both cases the areas increase as we go further to the right, the amounts being added behave very differently.
For 1/x^2, the area we are “adding on” as we increase the upper bound from A to B is (B-A)/(AB), which shrinks towards zero as A and B get bigger, so the total stays bounded. But with 1/x the area added between A and B is log(B/A): doubling the upper bound always adds the same log(2), no matter how far out we are, so the total keeps growing without bound.
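A sketch of those two “add-on” formulas, doubling the upper bound each time (the specific bounds are arbitrary):

```python
import math

# Area added when the upper bound moves from a to b:
#   under 1/x^2: (b - a)/(a*b)      under 1/x: log(b/a)
def added_inv_sq(a, b):
    return (b - a) / (a * b)

def added_inv(a, b):
    return math.log(b / a)

for a, b in ((1, 2), (2, 4), (4, 8), (1024, 2048)):
    print(f"{a:>5} -> {b:>5}:  1/x^2 adds {added_inv_sq(a, b):.6f},  1/x adds {added_inv(a, b):.6f}")
```

Each doubling adds an ever-smaller slice under 1/x², but always the same log(2) ≈ 0.693 under 1/x.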
The area under 1/x is closely related to the sum of all numbers 1/n which you can prove goes to infinity as follows.
You can sum the numbers in chunks:
* 1/2
* 1/3 + 1/4
* 1/5 + 1/6 + 1/7 + 1/8
* …
1/3 > 1/4, so 1/3 + 1/4 > 1/4 + 1/4 = 1/2.
1/5, 1/6, 1/7 are each greater than 1/8. So 1/5 + 1/6 + 1/7 + 1/8 > 4 * 1/8 = 1/2.
Each chunk has twice as many numbers as the previous, and each chunk sums to some value greater than or equal to 1/2.
For the sum to be finite, there would have to be a number k such that, for all n, the sum of the first n terms is less than k.
So take any number k. If we sum up the first 2k chunks (about 2^(2k) numbers) of the sequence, we get at least 2k · (1/2) = k. So there is no limit to this sum.
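The chunking argument above can be sketched directly (the indexing convention is mine: chunk m covers n from 2^(m-1)+1 up to 2^m):

```python
# Chunk m holds the terms 1/n for n in (2^(m-1), 2^m]; it contains
# 2^(m-1) terms, each at least 1/2^m, so each chunk sums to >= 1/2.
def chunk_sum(m):
    return sum(1 / n for n in range(2**(m - 1) + 1, 2**m + 1))

for m in range(1, 11):
    s = chunk_sum(m)
    assert s >= 0.5
    print(f"chunk {m:>2}: sum = {s:.4f}")
```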
If you try to do the equivalent for 1/(2^(x)) you get the following sequence:
1. 1/2
2. 1/2 + 1/4 = 3/4
3. 3/4 + 1/8 = 7/8
4. 7/8 + 1/16 = 15/16
And you can see that the nth number is always 1 – 1/(2^(n)), which will always be less than 1.
I used a different sequence there, because it’s a very neat one, but you get the idea I think.
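A quick sketch with exact fractions (ten terms, arbitrarily) checks that pattern:

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ...; after n terms the total is
# exactly 1 - 1/2^n, so it can never reach 1.
total = Fraction(0)
for n in range(1, 11):
    total += Fraction(1, 2**n)
    assert total == 1 - Fraction(1, 2**n)

print(total)  # 1023/1024
```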
This is equivalent to the reason why the sum of the harmonic series 1 + 1/2 + 1/3 + 1/4 + 1/5 … diverges, but the sum of the geometric series 1 + 1/2 + 1/4 + 1/8 … converges to 2. The reason for this was explained to me very intuitively: with the harmonic series, you can always find a finite number of subsequent terms whose sum is larger than the current term. With the geometric series I mentioned, that is not true: all the remaining terms together only ever add up to the current term.
Because of how these integrals are *defined*.
First off, be careful. If you start integrating at 0, they’re both infinite. If you start at 1, only one is.
The integral from 1 to infinity is a limit: you integrate from 1 to x, which defines a function F(x) (the antiderivative, if defined), and if F(x) - F(1) has a finite limit as x goes to infinity, then that value becomes *by definition* the integral from 1 to infinity.
With 1/x, the antiderivative is the logarithm, which grows infinite with x, so no finite limit. With 1/x^2, you obtain a finite limit, hence the value of the integral.
Yes, the reason is that one approaches zero fast enough and the other doesn’t. There is a threshold in the sense that if you consider 1/x^a, then it has finite area for a > 1 and infinite area otherwise. On the other hand, they are not *considered* finite or infinite; they actually and factually are finite or infinite. Also, anything that approaches zero faster than 1/x^2 has finite area under the curve, but the converse is not generally true. You can even have functions with finite area that don’t approach 0 at all (for example, ones with ever-narrower spikes).
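That threshold can be sketched using the closed form of the integral of 1/x^a from 1 to N for a ≠ 1 (the sample exponents and values of N are arbitrary):

```python
# Integral of 1/x^a from 1 to N is (1 - N^(1 - a)) / (a - 1) for a != 1.
# For a > 1 this settles at 1/(a - 1); for a < 1 it grows without bound.
def area(a, N):
    return (1 - N ** (1 - a)) / (a - 1)

for a in (0.9, 1.1, 2.0, 3.0):
    areas = [round(area(a, N), 4) for N in (10**3, 10**6, 10**9)]
    print(f"a = {a}: {areas}")
```

Even a = 1.1, barely over the threshold, levels off (towards 1/(a-1) = 10), while a = 0.9 keeps climbing.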