That is not generally true.

A matrix equation is a compact way to set up a mathematical problem. It can lead to a fast way of solving the problem because modern computer hardware often has units that perform many calculations in parallel, so multiple operations happen at the same time, which makes the overall computation faster.

There are problems that can't be formulated this way, and for those a matrix formulation doesn't help.
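As a minimal sketch of the idea, here is the same matrix-vector product computed two ways in NumPy: once as a single matrix expression that the library can hand off to vectorized hardware, and once as an element-by-element loop. The specific matrix and vector are just made-up illustrative values.

```python
import numpy as np

# A linear problem written as a matrix equation: y = A @ x.
# Phrasing it this way lets the library dispatch all the
# multiply-adds to parallel/vectorized hardware in one call.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([4.0, 5.0])

# One matrix operation does all the work at once...
y_fast = A @ x

# ...instead of spelling out each multiply and add in a loop.
y_slow = np.array([sum(A[i, j] * x[j] for j in range(2))
                   for i in range(2)])

print(y_fast)  # [13. 15.]
```

Both forms compute the same numbers; the difference is that the first one expresses the whole problem as a single operation the hardware can parallelize.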
Matrix calculations involve several independent multiplications followed by a quick addition at the end. These are operations that can easily be implemented in hardware. Firstly, you can use several hardware multiplier circuits, and secondly those circuits can share much of their logic because they share one of their input numbers. So a hardware matrix multiplier can perform all of the calculations in the same time an ordinary multiplier takes to do one.

The other trick is that you often want to apply several matrix operations to many vectors. Instead of applying one matrix operation to all the vectors before moving on to the next operation, you can multiply the matrices together first in one small operation, and then apply that single combined matrix to all the vectors.
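This combining trick can be sketched in a few lines of NumPy. The matrices and the batch of vectors here are random placeholders; the point is that fusing the two small matrices first gives the same answer as applying them one after the other to every vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two transformations and a large batch of vectors (one per column).
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 10_000))

# Step-by-step: apply B to every vector, then A to every result.
step_by_step = A @ (B @ V)

# Fused: multiply the two matrices once (a single small 3x3 product),
# then apply the combined matrix to all the vectors in one pass.
C = A @ B
fused = C @ V

# Same result, but the fused version does half the per-vector work.
assert np.allclose(step_by_step, fused)
```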
If you have to apply a linear transformation repeatedly, you can represent it as a matrix. That in itself doesn't make the math any faster, but the next thing you can (usually) do is diagonalize the matrix. Now you can apply the transformation as many times as you want for roughly the same amount of computation as applying and inverting the diagonalization, which is just two matrix multiplications (once you know the diagonalization).
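The diagonalization shortcut can be sketched as follows, using a small made-up symmetric matrix (chosen because it is guaranteed to be diagonalizable). Writing A = P D P⁻¹ turns A raised to the n-th power into P Dⁿ P⁻¹, and raising a diagonal matrix to a power just means raising each diagonal entry to that power.

```python
import numpy as np

# A diagonalizable (here symmetric) example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Diagonalize: columns of P are eigenvectors, eigvals the diagonal of D.
eigvals, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

# Applying A ten times is A^10 = P @ D^10 @ P_inv, and powering the
# diagonal matrix D is just powering its entries elementwise.
n = 10
A_n = P @ np.diag(eigvals ** n) @ P_inv

# Matches the result of actually multiplying A by itself ten times.
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
```

So after the one-time cost of finding the diagonalization, each repeated application costs essentially two matrix multiplications, no matter how large n is.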