Why do the same Conjugate Pairs appear in Fourier Transforms, Commutators, the Uncertainty Principle, and Noether’s Theorem?


When studying time series analysis, the answer you usually get is ‘the FT variables are defined such that they are duals of one another’, but this isn’t a very satisfying answer and doesn’t explain why the same pairs come up again and again.

Are the uncertainty principle pairs the same because the uncertainty principle relies on Fourier transforms in some way or vice versa?

Or is there some underlying mechanism or symmetry which causes them to pop up all over the place, and if so, how does it cause them to appear?

In: Mathematics

3 Answers

Anonymous 0 Comments

Damn. Is there an “ask like I’m five” sub I should subscribe to?

Anonymous 0 Comments

They’re all essentially equivalent. I’ll cover one way to see the relationship between them all, though it’s definitely not the most general.

Let’s say you have some state defined in the basis of some coordinate (for example, x position) and you want to find the operator that translates the state by s. In that coordinate basis, the translation operator ends up being T(s) = exp(-s∂/∂x) = exp((s/iħ)p), where p is a Hermitian operator called the *generator of translations in x*. In the case of x position, that’s x momentum. In the case of angle, it’s angular momentum, etc.
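
Here’s a small symbolic check of that operator-exponential claim (my own sketch, just sympy, not part of the original answer): truncating exp(-s∂/∂x) as a Taylor series and applying it to an arbitrary test function reproduces the translated function f(x − s).

```python
# Sketch: the truncated series sum_n (-s)^n/n! * d^n f/dx^n approximates f(x - s),
# i.e. exp(-s d/dx) really is translation by s. The test function is arbitrary.
import sympy as sp

x, s = sp.symbols('x s')
f = sp.sin(x) * sp.exp(-x**2 / 4)    # arbitrary smooth test function

N = 12                               # truncation order of the operator exponential
translated = sum((-s)**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(N + 1))

# Compare the truncated exp(-s d/dx) f with the exactly translated f(x - s)
approx = translated.subs({x: 0.7, s: 0.3}).evalf()
exact = f.subs(x, 0.7 - 0.3).evalf()
print(approx, exact)                 # agree to many digits for small s
```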

Now **Noether’s Theorem:** Assume the Hamiltonian of your system does not change under translation in your chosen coordinate. Then HT(s) = T(s)H, and so the commutator [H, T(s)] = 0. You can prove that this must imply [H, p] = 0 as well, and since the rate of change of the expectation value of an observable is proportional to its commutator with H (d⟨p⟩/dt = (i/ħ)⟨[H, p]⟩), it implies that ⟨p⟩ is constant.
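
A quick numerical illustration of that chain (my own toy model on a periodic lattice, ħ = 1, using numpy/scipy, not part of the original answer): a translation-invariant hopping Hamiltonian commutes with the shift operator T, hence with its generator p, and ⟨p⟩ comes out the same at every time.

```python
# Toy model: translation-invariant H on a ring commutes with the shift T,
# hence with the generator p defined by T = exp(-i p), and <p> is conserved.
import numpy as np
from scipy.linalg import expm, logm

n = 7
T = np.roll(np.eye(n), 1, axis=0)      # shift-by-one-site operator (cyclic)
H = -(T + T.conj().T)                  # hopping Hamiltonian; clearly [H, T] = 0
p = 1j * logm(T)                       # generator of translations: T = exp(-i p)

print(np.allclose(H @ p, p @ H))       # True: [H, p] = 0

psi = np.random.randn(n) + 1j * np.random.randn(n)
psi /= np.linalg.norm(psi)
for t in (0.0, 1.0, 5.0):
    phi = expm(-1j * H * t) @ psi      # time-evolved state
    print((phi.conj() @ p @ phi).real) # same value for every t: <p> is constant
```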

For the **commutator** relationship, it’s relatively simple to prove that the commutator of a coordinate with the generator of its translation is [x, p] = [x, -iħ∂/∂x] = iħ.
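
If you want to see that identity fall out mechanically, here’s a one-screen sympy check (my own, acting on an arbitrary function f(x)):

```python
# Check that ([x, p] f)(x) = i*hbar*f(x) for p = -i*hbar * d/dx.
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

xp_f = x * (-sp.I * hbar * sp.diff(f, x))   # x (p f)
px_f = -sp.I * hbar * sp.diff(x * f, x)     # p (x f)
print(sp.simplify(xp_f - px_f))             # -> I*hbar*f(x)
```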

The **uncertainty principle** is actually stated in terms of commutators. It’s not too hard to prove that for any two observables A and B, stdev(A)·stdev(B) ≥ |⟨[A, B]⟩|/2. Note that since [x, p] = iħ, this implies the usual uncertainty principle you’re probably used to, but it’s more general.
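
You can spot-check that inequality (the Robertson bound) numerically on random Hermitian matrices and a random state; this is just my own sanity check, not anything from the answer:

```python
# Spot-check of stdev(A)*stdev(B) >= |<[A, B]>|/2 for random observables.
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def expval(O):
    return psi.conj() @ O @ psi

def stdev(O):
    return np.sqrt(expval(O @ O).real - expval(O).real ** 2)

lhs = stdev(A) * stdev(B)
rhs = abs(expval(A @ B - B @ A)) / 2
print(lhs >= rhs, lhs, rhs)    # the bound holds for every state and pair
```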

Finally, for **Fourier transforms**: using the p = -iħ∂/∂x expression we found earlier, you can try to find eigenfunctions of this operator. If you do, you end up seeing that plane waves at different frequencies are each eigenfunctions of p. Pretty much the entire point of Fourier transforms is decomposing arbitrary functions into integrals of plane waves, so it’s quite straightforward to see why you swap back and forth between the x and p bases using Fourier transforms.
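
The eigenfunction claim is a one-liner to verify symbolically (again my own check):

```python
# A plane wave exp(i*k*x) is an eigenfunction of p = -i*hbar*d/dx with eigenvalue hbar*k.
import sympy as sp

x, k, hbar = sp.symbols('x k hbar', real=True)
plane_wave = sp.exp(sp.I * k * x)

p_on_wave = -sp.I * hbar * sp.diff(plane_wave, x)
print(sp.simplify(p_on_wave / plane_wave))   # -> hbar*k
```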

Btw, in the future, r/AskPhysics might be better for this kind of question!

Anonymous 0 Comments

The uncertainty principle works out the way it does because of the Cauchy-Schwarz inequality. Once you define the variance and expectation value of an operator, you can bound the product of the variances of two operators from below using the Cauchy-Schwarz inequality, and that lower bound turns out to involve the commutator of the two operators. If the commutator is 0, the physical quantities commute and there is no limit: both quantities can be measured simultaneously, and there exists a common eigenfunction system for them, so you can force the system into a simultaneous eigenstate of both operators because such an eigenstate exists.
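
For anyone who wants the missing algebra, here is the standard textbook Cauchy–Schwarz route written out (my paraphrase of the argument above, in LaTeX):

```latex
% Cauchy-Schwarz route to the commutator bound, for Hermitian A, B and state |psi>.
% (The anticommutator term has a real expectation value and the commutator term an
% imaginary one, so dropping the anticommutator can only decrease the modulus.)
\begin{align*}
  &\delta A = A - \langle A\rangle,\qquad \delta B = B - \langle B\rangle,\qquad
   |u\rangle = \delta A\,|\psi\rangle,\qquad |v\rangle = \delta B\,|\psi\rangle,\\[2pt]
  &\sigma_A^2\,\sigma_B^2
   = \langle u|u\rangle\,\langle v|v\rangle
   \;\ge\; |\langle u|v\rangle|^2
   = |\langle \delta A\,\delta B\rangle|^2
   \qquad\text{(Cauchy--Schwarz)},\\[2pt]
  &\delta A\,\delta B = \tfrac12\{\delta A,\delta B\} + \tfrac12[A,B]
   \;\Longrightarrow\;
   |\langle \delta A\,\delta B\rangle|^2 \;\ge\; \tfrac14\,|\langle[A,B]\rangle|^2
   \;\Longrightarrow\;
   \sigma_A\,\sigma_B \;\ge\; \tfrac12\,|\langle[A,B]\rangle|.
\end{align*}
```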

For example, with spin (or rotations in general), J² (think of it as the magnitude of the spin) commutes with all three components J1, J2, and J3. But J3 doesn’t commute with J1 or J2, and vice versa. This means there is a common eigenstate for J3 and J², but you can’t measure J3 and J2 simultaneously, as a system cannot be in both a J3 and a J2 eigenstate. In that case there is a limit on the variances. For example, x and p don’t commute, and the limit is ħ/2.
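
Here’s that commutation pattern made concrete for spin-1/2 (my own check, ħ = 1, using the Pauli matrices divided by two as J1, J2, J3):

```python
# Spin-1/2: J^2 commutes with every component, but the components don't commute
# with each other ([J3, J1] = i*J2, etc.).
import numpy as np

J1 = np.array([[0, 1], [1, 0]]) / 2
J2 = np.array([[0, -1j], [1j, 0]]) / 2
J3 = np.array([[1, 0], [0, -1]]) / 2
J_sq = J1 @ J1 + J2 @ J2 + J3 @ J3

def comm(A, B):
    return A @ B - B @ A

for i, Ji in enumerate((J1, J2, J3), start=1):
    print(f"[J^2, J{i}] = 0:", np.allclose(comm(J_sq, Ji), 0))   # all True
print("[J3, J1] = 0:", np.allclose(comm(J3, J1), 0))             # False
print("[J3, J1] == i*J2:", np.allclose(comm(J3, J1), 1j * J2))   # True
```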

Fourier transforms come in with the wave-packet solution for free particles. A plane wave is not normalizable, but what if we take a sum of plane waves with different “frequencies” to get a function that has a finite norm? You can do that, and the result is a good old wave packet. If I give you the position wavefunction at t = 0, you can do an FT to get the momentum-space function; it’s a function of the wavenumber vector k, which is practically momentum. Then you apply the time evolution to the k-space function and transform back to get the time-evolved wavefunction. And the reason you immediately discover the uncertainty principle here is that “the FT of a wide function is a narrow function and vice versa”. In the extreme, the momentum is a Dirac delta, so the position wavefunction is a constant, which is obviously not normalizable. So if two functions are FTs of each other, they can’t both be narrow, which means the product of their spreads has to be greater than something.
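
The “FT of a wide function is narrow and vice versa” claim is easy to see numerically with Gaussian wave packets (my own sketch, ħ = 1, so momentum is just the wavenumber k):

```python
# Wide Gaussian -> narrow FFT and vice versa; the product of the spreads stays ~1/2.
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)      # wavenumber grid of the FFT

for sigma in (0.5, 2.0, 8.0):
    psi = np.exp(-x**2 / (2 * sigma**2))          # position-space Gaussian packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    phi = np.fft.fft(psi)                         # momentum-space amplitude (up to phase)
    prob_x = np.abs(psi)**2 * dx
    prob_k = np.abs(phi)**2 / np.sum(np.abs(phi)**2)
    dx_spread = np.sqrt(np.sum(prob_x * x**2))    # <x> = 0 by symmetry
    dk_spread = np.sqrt(np.sum(prob_k * k**2))
    print(sigma, dx_spread * dk_spread)           # ~0.5 for every width sigma
```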

As far as conserved quantities go, in the Heisenberg picture, instead of time-evolving the wavefunction, you time-evolve the operators. Because all we can measure are absolute squares, all we need to keep invariant are scalar products (not even that: they can change by a complex number of unit absolute value, i.e. a phase). So you can move the time evolution onto the operators without changing the physics. Why is that good? Because we can see a neat result: if you take the (properly defined) time derivative of an operator and it is 0, the operator doesn’t change over time, so the physical quantity it describes is constant. The Hamiltonian itself doesn’t change over time, because time evolution is the exponential of the Hamiltonian and any operator commutes with a function of itself; and the same holds for any operator that commutes with the Hamiltonian. So if an operator commutes with the Hamiltonian, it describes a conserved quantity.
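
To make that last statement concrete, here is a tiny Heisenberg-picture example of my own (a random Hermitian H): an operator built as a function of H is left untouched by A(t) = U†AU, while a generic observable is not.

```python
# Heisenberg picture: A(t) = U(t)^dagger A U(t). If [H, A] = 0, then A(t) = A
# for all t, i.e. the quantity A describes is conserved.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 5

H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2                 # random Hermitian Hamiltonian

A = H @ H + 3 * H                        # a function of H, so [H, A] = 0
B = rng.normal(size=(n, n))
B = (B + B.T) / 2                        # generic observable, [H, B] != 0 in general

for t in (0.5, 2.0):
    U = expm(-1j * H * t)
    A_t = U.conj().T @ A @ U
    B_t = U.conj().T @ B @ U
    print("A conserved:", np.allclose(A_t, A), "| B conserved:", np.allclose(B_t, B))
```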