What exactly is Analytic Continuation?


I’m basically a layman trying to understand the Riemann Hypothesis. In my journey so far I’ve made it past the zeta function, somewhat understand the part about using complex numbers as exponents, and am up to the part about ‘extending the domain of an infinite series using analytic continuation’.

How exactly are mathematicians finding values for diverging series using it? Are they basically winging it by using geometry or is there a specific way?


When you use complex numbers, things get very, very nice. Specifically, there are comparatively few functions with complex derivatives, and two distinct such functions can only agree at isolated places. This means that if you have such a function defined on one part of the domain, then there is basically exactly one way to extend it to larger parts of the complex plane. That is, you can extrapolate the data perfectly, as long as you have the right tools. This is what Analytic Continuation *is*.

But that’s not really your question. Your question is about *computing* these values, specifically for the Riemann Zeta Function. In general, there are two ways to do this. The first is pretty straightforward, as there are formulas that hold for EVERY complex value (except 1, where the function always diverges), such as [this integral formula for it](https://en.wikipedia.org/wiki/Riemann_zeta_function#Integral). These aren’t in terms of infinite sums but in terms of integrals, which should be understood as easy to work with because computers can compute pretty much any integral numerically.

The other way is a bit more useful theoretically; for it, we’ll assume that we can compute the infinite series in the places where it DOES converge. There are two important regions in the “divergent” range, and they have slightly different ways to compute them. The first is everything to the left of the imaginary axis (that is, Re(s)<0), and the second is the critical strip (0<=Re(s)<=1). In the first region, we use the [Functional Equation](https://en.wikipedia.org/wiki/Riemann_zeta_function#Riemann's_functional_equation) to compute things. If you know Zeta(s), then you can use it to directly compute Zeta(1-s), which is the value at the point you get by reflecting s about the vertical line Re(s)=1/2. For the second region, the critical strip, we can actually use the [Dirichlet Eta Function](https://en.wikipedia.org/wiki/Dirichlet_eta_function), which is an “alternating” version of the Riemann Zeta Function. In the places where the zeta function converges, we have the formula Eta(s)=(1-2^(1-s))Zeta(s); but the eta function converges throughout the critical strip, so this formula can be used to extrapolate what Zeta(s) should be there.
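
To make the eta trick concrete, here is a short Python sketch (my own illustration, not part of the answer; the function name `zeta_via_eta` and the term count are my choices): it approximates Zeta(s) inside the critical strip by summing the eta series and dividing by (1-2^(1-s)).

```python
import math

def zeta_via_eta(s, terms=200_000):
    """Approximate Zeta(s) for Re(s) > 0, s != 1, via the Dirichlet eta
    function: Eta(s) = sum of (-1)^(n+1) / n^s, which converges there.
    Then Zeta(s) = Eta(s) / (1 - 2^(1-s))."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

print(zeta_via_eta(2))                      # close to pi^2/6 = 1.6449...
print(abs(zeta_via_eta(0.5 + 14.134725j)))  # small: s is near a nontrivial zero
```

Note that the alternating series converges slowly near the critical line, so high precision there takes many more terms (or a convergence-acceleration trick).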


As for the Riemann Hypothesis (which is even less ELI5), these functions can initially be a distraction from what it is actually saying. The undergrad curriculum has a heavy focus on calculus and analysis (including Complex Analysis) but very little emphasis on number theory, so the number theory is often avoided when introducing it. It’s easy to say “All the zeros of the Riemann Zeta Function are on the critical line” to people who have taken complex analysis, but it’s harder to make clear why this matters if you haven’t taken number theory. It’s all at the same “level”; it’s just that undergrad number theory isn’t really number theory.

But it isn’t all that hard. The real place to start with the Riemann Hypothesis is NOT zeta functions or anything like that, but the [Prime Number Theorem](https://en.wikipedia.org/wiki/Prime_number_theorem), which introduces the Riemann Zeta Function as a natural object linked to prime numbers. Roughly, the Prime Number Theorem says that the Nth prime is located near the number N*ln(N). Intuitively, you can understand the Riemann Hypothesis as follows: the Prime Number Theorem says that the Nth prime is about N*ln(N) with moderately sized error bars, and the Riemann Hypothesis says that the Nth prime is about N*ln(N) with somewhat tighter error bars.
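
You can check the N*ln(N) approximation numerically with a quick sketch like this (my own illustration; `nth_prime` and the sieve bound are my choices, with the sieve sized using the known bound p_n < n(ln n + ln ln n) for n >= 6):

```python
import math

def nth_prime(n):
    """Return the n-th prime (1-indexed), sieving up to the
    PNT-based upper bound n * (ln n + ln ln n) valid for n >= 6."""
    m = max(n, 6)
    limit = int(m * (math.log(m) + math.log(math.log(m)))) + 1
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    primes = [i for i, is_p in enumerate(sieve) if is_p]
    return primes[n - 1]

for n in (100, 1_000, 10_000):
    # the ratio p_n / (n ln n) drifts slowly toward 1
    print(n, nth_prime(n), round(nth_prime(n) / (n * math.log(n)), 3))
```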

The Riemann Zeta Function helps with the Prime Number Theorem because it is [built from prime numbers](https://en.wikipedia.org/wiki/Riemann_zeta_function#Euler's_product_formula) and so carries information about how they are distributed, since their distribution controls how it converges. You can use it to get lots of info about prime numbers. For instance, that it diverges at s=1 means that there are infinitely many primes, and *how fast* it diverges is connected to how many prime numbers there are (it gives a much better approximation than Euclid’s original proof of infinitely many primes does).
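
To see the “built from primes” point in action, here is a small sketch (my own, for illustration) comparing a partial Euler product over the primes below 100 with the series for Zeta(2):

```python
import math

# Euler product: Zeta(s) = product over primes p of 1 / (1 - p^-s), for Re(s) > 1.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
          53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

product = 1.0
for p in primes:
    product *= 1 / (1 - p ** -2)

series = sum(1 / n ** 2 for n in range(1, 100_000))
# both approximate Zeta(2) = pi^2/6 = 1.6449...
print(product, series, math.pi ** 2 / 6)
```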

So, effectively, the more we know about the Riemann Zeta Function, the more we know about primes. Specifically, because zeros of analytic functions in a way determine the function, the zeros of the Riemann Zeta Function are explicitly tied to the distribution of prime numbers. Like, there *is* a formula for the prime numbers and it uses the zeros of the Riemann Zeta Function in it. Basically, the zeros control periodic frequencies in the distribution of the primes and the more spread out the zeros are the more wildly these frequencies behave and so the more uncontrolled the primes are allowed to be. The Prime Number Theorem is proved by putting literally the most minimal constraints on where the zeros can be, which means that they are not allowed to be as wild as possible – there is some level of tameness to them. The Riemann Hypothesis is merely the hypothesis that the zeros are as constrained as they can possibly be, meaning that the primes have minimal wildness in how they are distributed.

The spirit: extending the domain of an analytic function from R to C in a way that preserves the analyticity of the function on C.

Example: e^x to e^z

this preserves analyticity, because we have the famous identity:
e^iy = cos y + i sin y

Generalizing the idea, it’s an extension of the domain from X to Y that preserves analyticity.
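
A tiny numerical check of that identity (my own illustration), where Python’s `cmath.exp` plays the role of the extended e^z:

```python
import cmath
import math

z = 1 + 2j  # z = x + iy with x = 1, y = 2
lhs = cmath.exp(z)
rhs = math.exp(1) * (math.cos(2) + 1j * math.sin(2))  # e^x (cos y + i sin y)
print(lhs, rhs)  # the two agree
```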

When you have a function on part of the complex plane, there is at most **one** reasonable way to extend it to the entire plane. “Extend” means we make a new function that has the same values as the old function, except that for certain inputs where the old function says “undefined” the new function actually gives an output. And “reasonable” means the function is analytic, which, in simple but somewhat inaccurate terms, means you can describe it using one nice formula that works everywhere. The process of making that new function is called analytic continuation.

The fact that there is at most one way to extend has a crucially important consequence: the values of a function on **any** tiny piece of the plane contain **all** the information about the function everywhere else. This is what makes analytic continuation very special. You can study the new function anywhere and obtain useful information about the old function.

In particular, for the Riemann zeta function, the infinite sum is a partial function: it is only defined for inputs whose real part is >1. We extend the function by analytic continuation. The new function is NOT defined by this same infinite sum. However, all the information we could want about this infinite sum can be obtained from anywhere on the new function.

So the question is: which part of the new function is useful to study? By the reflection formula, everything that happens when the real part of the input is <0 is pretty boring and completely predictable: there is a direct formula that relates the real part >1 side and the real part <0 side, point-by-point, so that region is not that useful. The useful part is the critical strip, where the real part is between 0 and 1 (inclusive). We have a formula that relates the behavior of each zero of the function in this region to the aggregate behavior of the infinite sum across different inputs, which basically lets us transform a problem in one context into a problem in a completely different context. The aggregate behavior of the sum is related to the distribution of primes, so instead of studying this aggregate behavior directly, we study the zeros of the zeta function.

In fact, we have had some success with this. The Prime Number Theorem was proved by showing that no zeros occur on the boundary of the critical strip.

> extending the domain of an infinite series using analytic continuation

The phrase “using analytic continuation” is pretty misleading. There is not in fact a specific “method” of analytic continuation. Analyticity is simply a property that guarantees a function satisfying it has **at most one** extension to an analytic function on a larger region. It does *not* assure you that such an extension is possible, and it might not be.

A function defined on a subset of the complex plane is called analytic when it can be expressed around every point in that subset as a power series. It turns out that two analytic functions on a subset of C that are equal on some tiny disc in that subset must be equal everywhere in the subset. (For the experts, I am glossing over the issue of connectedness of the subset for simplicity.) This may not be true for functions that only satisfy some weaker conditions than being analytic, such as continuity: if two continuous functions on a subset of C are equal on a tiny disc in that subset, it does *not* mean they have to be equal on the whole subset. Mathematicians say the property of being analytic is much more “rigid” than the property of being continuous.

Now for “analytic continuation”… if I have a subset S of the complex plane containing a disc and a larger subset T, and I can extend an analytic function on S to an analytic function on T by two methods, then the two functions I get on T *must be the same* because they are equal everywhere on the original set S (which contains a tiny disc) and two analytic functions on T that are equal on a tiny disc must be equal everywhere on T. Have we shown analytic functions on S can always be extended to analytic functions on T? Not at all. We only showed each analytic function on S can have *at most one* extension to an analytic function on T. We did not show there must be an analytic extension of the function on S to T at all, and maybe there isn’t. The term “analytic continuation” just refers to this “at most one” property of the extended function on T. It does not tell you at all how to find the extended function.

The demonstration that a specific analytic function on S can be extended to an analytic function on T requires serious work. There’s no guarantee it can be done “by analytic continuation”. All that we can say ahead of time is that if we can extend an analytic function on S to an analytic function on T, then there is just one result: two approaches to doing this must lead to the same function on T. Actually building an analytic function on T that extends some analytic function on S may be easy or it may be hard. In Riemann’s paper on the zeta function, he gave two methods of extending the zeta function from the half plane Re(s) > 1 to C – {1}. His methods look quite different, but the two functions they lead to on C – {1} have to be the same function (so his methods lead to the same function on C – {1} given by two rather different-looking formulas) because of the “at most one extension” property of analytic functions being extended to larger subsets.

The extension of the zeta function beyond the half plane Re(s) > 1 does not use the infinite series expression for the zeta function, since that series doesn’t converge elsewhere. Instead we use other (more complicated-looking) formulas for the zeta function, typically in terms of integrals instead of infinite series, and those other formulas have the advantage that by massaging them in the right way we can see they make sense on C – {1}. We don’t have to worry about different formulas for the zeta function on Re(s) > 1 leading to different extensions of the function on C – {1}, even if the resulting extension formulas look different, because if we can check the extended functions are analytic then we are guaranteed that those different-looking formulas on C – {1} must be the same function by what I wrote about in previous paragraphs.

Verifying that a specific analytic function on a specific subset of C can be extended to an analytic function on some larger subset of C is often hard work (and in many cases is still an unsolved problem). We can’t wave some magic wand and say “we did it by analytic continuation”. The property of analyticity assures us that an extension of the function is unique if it exists, but it doesn’t tell us an extension of the function exists (while maintaining the property of analyticity).

Are you familiar with [Taylor series](https://en.wikipedia.org/wiki/Taylor_series)? An infinite polynomial, made from all derivatives of a function at one point, which converges to said function around that point?

Analytic continuation basically uses the Taylor series to “bunny-hop” around “bad points” (like `zeta(1)`, where it blows up) to define the function everywhere, except at those “bad” points.

You take a point where your function and its derivatives exist and construct a Taylor series there. The series gives you values around that point (including complex ones), but eventually diverges. You pick a point where it still converges and construct a second Taylor series, which defines the function at more points; you pick one of those and construct a third Taylor series… and so on, until you reach any point you want.

I recommend [this video](https://youtu.be/CjSKmcWRFzE?t=523) about analytic continuation. Actually, I recommend the whole channel: they have good videos on exactly the Riemann Hypothesis. They even [have a video](https://youtu.be/oVaSA_b938U) that explains the hypothesis without the zeta function.

Edit: The zeta function can be extended a bit even without analytic continuation; you just need to isolate the “bad point” at `zeta(1)`. See the [eta function](https://en.wikipedia.org/wiki/Dirichlet_eta_function): it is defined wherever the real part of the input is >0, and it has a formula that gives zeta.