Take some linear operator in a finite dimensional space, say good old 3D Euclidean geometry: an operator is some transformation of that space. The 3D space is a vector space where each point is described by a vector pointing to it from the origin.
The operator transforms these vectors: it rotates, mirrors, stretches, squishes, projects, etc., or does any combination of these. The eigenvalues are special numbers. For every transformation there are sets of vectors that only get stretched or squished into a number multiple of themselves. This number multiple is the eigenvalue.
Av=sv
Where A is an operator that transforms a vector v in such a way that the effect of A on v is to take v into s×v. Such vectors are the eigenvectors.
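To make this concrete, here is a minimal numpy sketch (the matrix is just an illustrative example, not anything from the question): a matrix that stretches the x-direction by 2 and leaves y alone has (1, 0) as an eigenvector with eigenvalue 2.

```python
import numpy as np

# A stretches the x-direction by 2 and leaves the y-direction alone.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

v = np.array([1.0, 0.0])   # a vector pointing along x
print(A @ v)               # -> [2. 0.], i.e. 2*v, so v is an eigenvector with s = 2
```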
There is always a subspace that transforms like this, so for an eigenvalue s the eigenvectors form a line: one vector, and any number multiple of it is also an eigenvector. There are so-called degenerate cases where you find that multiple eigenvalues are equal. The 3D identity, which does nothing, has 3 eigenvalues, all equal to 1. In these cases the subspace of eigenvectors has dimension greater than 1, e.g. A takes a full plane of vectors to the eigenvalue multiple of themselves.
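The identity example can be checked directly; a quick sketch using numpy's generic eigenvalue routine, purely as an illustration:

```python
import numpy as np

# The 3D identity does nothing, so every vector is an eigenvector with eigenvalue 1:
# three equal ("degenerate") eigenvalues, and the eigenvectors span the whole space.
vals, vecs = np.linalg.eig(np.eye(3))
print(vals)   # -> [1. 1. 1.]
print(vecs)   # -> three independent eigenvectors (the standard basis)
```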
An operator can be represented as a matrix given some coordinate system (or rather a basis). In that representation we can subtract the eigenvalue from the main diagonal of the matrix. What this does is simple: the main diagonal tells you how the operator stretches the vectors, and if you take an eigenvalue out of it, one direction gets squished to 0. This means that your, let's say, 3D space gets flattened onto a 2D plane. The determinant of the operator tells you how much it morphs a volume. Look at a cube described by 3 perpendicular unit vectors: the cube has unit volume, and a determinant of 2 means the operator turns that unit volume into a 2 unit volume, and the same goes for any shape. If we squish a direction to 0, the determinant will be 0, since the operator flattens any volume. So for an eigenvalue s we get the equation det(A − id×s) = 0, called the characteristic polynomial, since for an N×N matrix det(A − id×s) is an N-th order polynomial in s.
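As a rough illustration of that step (the 3×3 matrix below is made up for the example), numpy can produce the coefficients of the characteristic polynomial, its roots are the eigenvalues, and plugging a root back in gives a determinant of (numerically) zero:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],      # an arbitrary example matrix
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

coeffs = np.poly(A)                  # coefficients of the characteristic polynomial (degree 3)
roots = np.roots(coeffs)             # its 3 roots are the eigenvalues
print(roots)
s = roots[0]
print(np.linalg.det(A - s * np.eye(3)))   # ~0: subtracting s flattens one direction
```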
So for the 3D case we will have 3 solutions for s: by the fundamental theorem of algebra any N-th order polynomial has N solutions, and sometimes these solutions end up being the same, that's the degenerate case. So we have 3 eigenvalues, and now we just have to actually subtract each one from the diagonal and solve the (A − id×s)v = 0 equation. The operator A with s subtracted takes some vector to 0, and this is a system of N equations of which only N−1 (here 2) are independent, since we have 1 degree of freedom: any number multiple of v is also a solution. The dependent equation holds no additional information. And now we have the eigenvectors.
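In practice a library routine does both steps at once. A small sketch with the same made-up matrix, verifying that (A − id×s)v = 0 holds for what it returns:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

vals, vecs = np.linalg.eig(A)        # eigenvalues and eigenvectors (one per column)
s, v = vals[0], vecs[:, 0]
print((A - s * np.eye(3)) @ v)       # ~[0 0 0]: (A - id*s) sends the eigenvector to zero
```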
So yeah, given a matrix the solution is algorithmic. But the problem comes up all the time. If you want to apply functions to operators you have to do a projector decomposition, which requires you to solve the eigenvalue problem. In physics you can solve a system of coupled springs using a matrix-vector formalism, and this requires you to solve an eigenvalue problem. Schrödinger's equation is the eigenvalue problem of the Hamilton operator, Hv = Ev, where v is the wavefunction and the energies are the eigenvalues of H. These problems also show up frequently in data analysis.
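For the "functions of operators" point, one common route (sketched here with numpy, under the assumption that the matrix is diagonalizable with positive eigenvalues) is to apply the function to the eigenvalues and rebuild the matrix from the eigenvectors:

```python
import numpy as np

# A symmetric matrix with positive eigenvalues, so a real square root exists
# (an assumption made just for this example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
sqrt_A = vecs @ np.diag(np.sqrt(vals)) @ np.linalg.inv(vecs)   # f(A) = V f(D) V^-1 with f = sqrt
print(sqrt_A @ sqrt_A)               # ~A: the function really was applied to the operator
```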
You can think of eigenvectors as directions that are natural for that operator. So the way to look at its applicability is that if you have an operator, its eigenvalue problem will be of interest to you more often than not.