eli5 how does the sum of infinite terms turn out to be a finite value


does this apply only to terms that are less than 1, e.g. 1 + 1/2 + 1/4 + …, or does it apply to all APs/GPs? I remember studying this, but it’s been so long that I only remember the gist.



Anonymous 0 Comments

Some infinite sums like the example you gave are relatively easy to wrap your head around in different ways.

Depending on how your brain is wired, one of the easiest ways is to simply graph the partial sums.

You can see that you get a curve that approaches 2 the longer you keep adding terms, and only reaches 2 in the limit.

You can also visualize it by stacking rectangles of the appropriate size. You have one square that is 1 by 1, one rectangle that is 1 by 0.5, one that is 0.5 by 0.5, and so on. As you draw them you will find that you fill up a rectangle that is 1 by 2, with a series of increasingly tiny boxes left over in whatever corner you stacked them towards.

You can also look at the fractions that come out at every step: 1/1, 3/2, 7/4, 15/8, 31/16, 63/32 … and see that they get ever closer to just being 2.
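That last way of looking at it is easy to play with in code. A minimal sketch in Python (my own illustration, standard library only):

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ... as exact fractions and as decimals.
total = Fraction(0)
for n in range(10):
    total += Fraction(1, 2**n)          # add the next term, 1/2^n
    print(n + 1, total, float(total))   # 1 1 1.0, 2 3/2 1.5, 3 7/4 1.75, ...
```

The printed fractions are exactly the 1/1, 3/2, 7/4, 15/8, … above, creeping up towards 2 without ever passing it.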

Other infinite sums are just weird and counterintuitive and add up in really unexpected ways that are hard to visualize. Some don’t add up to any finite value at all.

It can get pretty out there.

Anonymous 0 Comments

> does this apply only for terms that are less than 1 eg 1 + 1/2 + 1/4…. or does this apply to all ap/gp

The sum can only reach a finite value if your terms keep getting smaller. If they don’t, you just keep adding stuff until you overshoot any proposed limit.

If you say *“the limit of adding infinitely many 1s is 10”*, then once you reach 8 you still have to keep adding 1s (1, then 1, then 1), which is going to overshoot your target.

So a sum of infinite terms can only be finite if its terms trend towards zero.

Additionally, that condition is only necessary, not sufficient. `1/1 + 1/2 + 1/3 + 1/4 …` has terms that trend towards zero (for each ε > 0, you can find a natural number A so that 1/A < ε), but the infinite sum is not finite (for each L > 0, you can find a natural number N so that SUM(for i from 1 to N of 1/i) > L). A quick numerical check follows the examples below.

* E.g. for ε = 0.00002, for any natural A > 50000 you have 1/A < ε
* E.g. [for L = 10, for any natural N >= 30000](https://www.wolframalpha.com/input?i=sum+of+1%2Fn+for+n+in+%281..30000%29) you have SUM(for i from 1 to N of 1/i) > L
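Here is that check as a minimal Python sketch (my own illustration, plain standard library): the terms 1/n shrink towards zero, yet the running total still passes any target L you choose.

```python
# The harmonic series 1/1 + 1/2 + 1/3 + ... : terms go to zero,
# but the partial sums still pass any target you set (here L = 10).
L = 10.0
total, n = 0.0, 0
while total <= L:
    n += 1
    total += 1.0 / n
print(n, total)   # the first n where the total passes 10 (roughly 12 thousand terms in)
```

Pick a bigger L and the loop still finishes, it just takes (exponentially) more terms.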

Anonymous 0 Comments

It doesn’t turn into a finite value. We *assign* the expression 1 + 1/2 + 1/4 + … some finite value based on what we think makes sense. In your case, if you create a sequence out of the “partial sums” (the sum of the first 1, 2, 3 terms, etc.) we get 1, 1.5, 1.75, … and we can then say this approaches 2.

The important thing to remember is that there are also other methods of assigning finite values to expressions like this, not just the one I give above.

Anonymous 0 Comments

Many sums do not converge to a finite value.

For 1+1/2+1/4…, it works. But most famously, 1+1/2+1/3+1/4… does not. This one keeps increasing to infinity.

Anonymous 0 Comments

1 + 1/2 + 1/4 ….
You can build this sum out of paper: keep one whole sheet, then take a second sheet and cut it in half at every step, keeping one of the halves each time. All the pieces together are exactly 2 sheets, which may help you visualize why the sum ends up at 2.

Anonymous 0 Comments

you have a 2L bottle

you pour 1L of water – 1 to go

then half of that, 0.5L – 0.5 to go

then half of that, 0.25L – 0.25 to go

no matter how many times you repeat this, there will always be a bit left to go – and you never get past the 2L the bottle holds

Anonymous 0 Comments

Walk halfway across the room. Walk half of what’s left. Walk half of what’s left. Keep doing that. You’ll get close to the other side of the room, but you won’t travel an infinite distance.

If you view it in reverse, you can take a finite amount and slice it into an infinite number of parts using some rule or pattern. Adding up those parts just gives you the finite amount again.

But only some infinite series add up to something finite. If an infinite series “converges”, it adds up to a finite number. If it “diverges”, it doesn’t add up to any finite value. You can show a series of positive terms converges if each of its terms is smaller than the corresponding term of a series you know converges. Likewise, you can show it diverges if each term is larger than the corresponding term of a series you know diverges. (A rough numerical illustration of that comparison idea is below.)
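Here’s a minimal sketch of that comparison idea in Python. The specific pair of series is my choice for illustration: each term 1/n² is smaller than the matching term 1/(n(n−1)), and the latter series is known to add up to something finite (it telescopes), so Σ 1/n² is forced to stay bounded too.

```python
import math

# For n >= 2, 1/n^2 < 1/(n*(n-1)), and the sum of 1/(n*(n-1)) telescopes to 1,
# so the partial sums of 1/n^2 are trapped below 1 + 1 = 2.
sum_small = 1.0   # the n = 1 term of 1/n^2
sum_bound = 1.0   # same first term, so the two running totals stay comparable
for n in range(2, 100001):
    assert 1 / n**2 < 1 / (n * (n - 1))   # the term-by-term comparison
    sum_small += 1 / n**2
    sum_bound += 1 / (n * (n - 1))

print(sum_small)          # about 1.6449, and it never exceeds sum_bound
print(sum_bound)          # about 2.0
print(math.pi**2 / 6)     # the known value the smaller series converges to
```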

Anonymous 0 Comments

A better way to think of it is that you can divide a finite amount infinitely.

Take a square and cut it in half. Then cut one of those halves in half. Now cut the half of the half in half, and so on. You can keep cutting the remaining piece in half indefinitely, but the total amount of square never changes. What this means is that you have an infinite sum adding up to a finite value: if you start with something finite, you can find ways to divide it infinitely.

Conversely, if you can recognise an infinite sum as such a division of something finite, you can work out what it converges to.

Anonymous 0 Comments

Lots of good answers for the S1 = 1 + 1/2 + 1/4 + … series.

But I’ll offer a general answer:
– under what conditions does a series converge on a finite value?

There are several “tests” we use.

One is looking at the partial sums – does the running total of the series get closer and closer to a value without any back-stepping? S1 = 1 + 1/2 + 1/4 + … does this, as has been demonstrated by others here.

But what about S2 = 1 – 1/2 + 1/3 – 1/4 + 1/5 – … and so on? It goes back and forth. Does S2 converge? This oscillating partial sum needs to be treated differently. (It does converge, in fact – to ln 2 – but showing that takes a different test.)

And what about S3 = 1 + 1/2 + 1/3 + 1/4 + … – does it converge? Not this one! This one is quite special – we call it the harmonic series: https://en.wikipedia.org/wiki/Harmonic_series_(mathematics) We know it doesn’t converge because we can group its terms into blocks of 1, 1, 2, 4, 8, … terms, and each block after the first adds up to at least 1/2, so the total grows past any value you name.

Now what about S4 = 1 – 1 + 1 – 1 + 1 … ? Well, it never blows up to infinity, but it also never “settles” on one value. So no.
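If you want to see those three behaviours side by side, here’s a small sketch (my own illustration, plain Python):

```python
import math

def partial_sums(term, count):
    """First `count` partial sums of the series whose n-th term is term(n)."""
    total, sums = 0.0, []
    for n in range(1, count + 1):
        total += term(n)
        sums.append(total)
    return sums

s2 = partial_sums(lambda n: (-1) ** (n + 1) / n, 10000)   # 1 - 1/2 + 1/3 - ...
s3 = partial_sums(lambda n: 1 / n, 10000)                 # 1 + 1/2 + 1/3 + ...
s4 = partial_sums(lambda n: (-1) ** (n + 1), 10000)       # 1 - 1 + 1 - ...

print(s2[-1], math.log(2))   # S2 settles in on ln 2 ~ 0.6931
print(s3[-1])                # S3 just keeps climbing (about 9.79 here)
print(s4[-4:])               # S4 keeps bouncing between 1 and 0
```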

There is a deep pool of math exploring which series converge and which do not – and, for those that do not, what to do with them.

Anonymous 0 Comments

I’m a bit late here, but most of the answers seem to be missing the point.

The sums of some infinite series give finite values because we define them to do so. Which is how maths works.

The question we’re trying to ask is “what do we get when we add an infinite number of things together?”

We are asking what we even mean when we say “+ … = ”

Normal algebra and number theory cannot handle this. We cannot actually add together an infinite number of things. But geometry suggests there should be a way of doing this (we get Zeno’s paradoxes, and things like 1/2 + 1/4 + 1/8 +…). So we need to come up with a way! And the magic trick we come up with is to use **limits**.

At the risk of getting into the maths too much, if we have some sum-to-n-terms (or “partial sums”), Sn, we say our infinite sum S∞ converges to a limit L if:

> for any ε>0 there exists some N so large that n≥N implies that |Sn−L|<ε

Translating this from maths into real words, we are setting up a game here. You challenge me by giving me some ε – as small as you like (but bigger than 0), and I have to find some point in the sequence of partial sums (our Sn) for which *all subsequent terms* are within ε of the limit L.

Taking our classic 1/2 + 1/4 + 1/8 + … + 1/2^n + … = 1 result, Sn = 1/2 + 1/4 + … + 1/2^n

Let’s say you give me ε = 0.001. I have to find an N for which SN is somewhere between 0.999 and 1.001, and Sn will stay within that range for all subsequent terms.

In this case, with a bit of sneaky maths, I can pick N = 10.

> S10 = 1/2 + 1/4 + … + 1/2^10 = 0.9990234375

And we can show that Sn will always be within our target range for any larger n.

Generalising, it will turn out that given any ε, provided N > -ln(ε)/ln(2) we’re good.
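Here’s that game played out as a small Python sketch (my own illustration of the definition above, nothing official):

```python
import math

def n_needed(eps):
    """Smallest N with 1/2**N < eps, i.e. the first whole number above -ln(eps)/ln(2)."""
    return math.floor(-math.log(eps) / math.log(2)) + 1

def partial_sum(n):
    """S_n = 1/2 + 1/4 + ... + 1/2**n."""
    return sum(1 / 2**k for k in range(1, n + 1))

eps = 0.001
N = n_needed(eps)
print(N, partial_sum(N))   # 10 0.9990234375 -- within 0.001 of the limit 1
# ...and every later partial sum stays at least that close:
assert all(abs(partial_sum(n) - 1) < eps for n in range(N, N + 50))
```

Whatever ε you hand me, `n_needed` finds an N that wins the game, which is exactly what the definition asks for.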

We are basically saying that the limit of an infinite series or sequence exists if there is some value we can get arbitrarily close to, even if we never actually get there.

In practice we won’t use this formal definition for every sequence. Instead we’ll come up with all sorts of tests for convergence, which tend to rely on showing that our sequence is more converge-y than a sequence we know converges (like the one above), or less converge-y than one we know doesn’t converge.

————

With a bit of algebra we can see that the only **arithmetic progression** whose sum converges is the trivial one (0 + 0 + 0 + …). Our formula for the partial sum-to-n-terms of an arithmetic series with starting number a and common difference d (so a + (a+d) + (a+2d) + (a+3d) + …) is

> Sn = n/2 [2a + (n − 1) × d ]

As n goes to infinity, this will only be nicely behaved if a = d = 0.

For a **geometric progression**, with starting number a and common ratio r (so a + ar + ar^2 + ar^3 + …) we get a sum-to-n-terms of:

> Sn = a(1-r^(n))/(1-r)

Which will converge nicely to a/(1-r) if |r| < 1
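A quick sanity check of that formula, with an arbitrary a and r of my own choosing:

```python
# Checking Sn = a(1 - r^n)/(1 - r) against the predicted limit a/(1 - r).
a, r = 3.0, 0.4          # an arbitrary example with |r| < 1
limit = a / (1 - r)      # the sum-to-infinity formula says 5.0
for n in (5, 10, 20, 40):
    Sn = a * (1 - r**n) / (1 - r)
    print(n, Sn, limit - Sn)   # the gap, a*r^n/(1-r), shrinks geometrically
```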

For other series we’d have to mess around with stuff to figure it out. Or just look it up.

————

But what about series which *don’t* converge? A classic example is:

> 1 – 1 + 1 – 1 + … = ?

What does it mean to say “+ … =” for a series that doesn’t converge?

That series above will have partial sums of either 1 or 0. So it cannot converge to a particular value. But what if we still want an answer?

For this example we can use a geometric series (where a = 1, r = -1). We’re not supposed to put this value into our sum-to-infinity formula for geometric progressions, but what if we do?

> S∞ = a/(1-r) = 1/(1-(-1)) = 1/2

Which… is surprisingly reasonable. Sure, our partial sums will never actually be 1/2; they’ll either be 1 or 0. But they’ll bounce between 1 and 0 forever, which averages out to 1/2.
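That “averages out” idea can be made precise – it’s usually called Cesàro summation – and it’s easy to sketch numerically (my own illustration):

```python
# Averages of the partial sums of 1 - 1 + 1 - 1 + ...
def average_of_partial_sums(n):
    total, partial_sums = 0, []
    for k in range(1, n + 1):
        total += (-1) ** (k + 1)      # +1, -1, +1, -1, ...
        partial_sums.append(total)    # partial sums: 1, 0, 1, 0, ...
    return sum(partial_sums) / n

for n in (11, 101, 1001, 10001):
    print(n, average_of_partial_sums(n))   # 0.5454..., 0.5049..., 0.5004..., 0.50005... -> 1/2
```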

This definition:

> 1 – 1 + 1 – 1 + … = 1/2

doesn’t make sense in terms of numbers or algebra, and it doesn’t even work in terms of limits (our new fancy maths), but on some level it does still make sense. So it might be useful, and is definitely interesting.

And if we explore this, getting into complex analysis, we can get some neat results, such as:

> 1 + 1 + 1 + 1 + … = -1/2

> 1 + 2 + 3 + 4 + … = -1/12

and so on.
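For the curious, one way to check those particular values numerically: they match the Riemann zeta function extended beyond where its defining series converges. A quick check, assuming the third-party mpmath package is installed:

```python
# Those values agree with the Riemann zeta function zeta(s), which formally
# looks like 1**-s + 2**-s + 3**-s + ... : s = 0 resembles 1+1+1+... and
# s = -1 resembles 1+2+3+...  (needs: pip install mpmath)
from mpmath import zeta

print(zeta(0))    # -0.5
print(zeta(-1))   # -0.0833333..., which is -1/12
```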

These sums of infinite terms don’t make sense in normal number theory, or in algebra, or with limits. But that’s because, in those areas of maths, “+ … =” doesn’t mean anything for these series. We need a new definition and a new area of maths: a way of extending the idea of limits of infinite series to divergent series.

There are a whole bunch of ways of coming up with answers to the question “what does + … = mean for divergent series?” and what makes it really interesting is that for many of these series, and many methods, we end up getting the same results. Which suggests there is something meaningful to all of this.