Why is backpropagation preferred for optimising neural networks?

I have been messing with neural networks for a long time now, and as far as I know the way to optimize a neural net is backpropagation.

But why does everyone use this method? Do other ways of doing this exist? If so, why does nobody mention them?

Backpropagation is the cornerstone on which the entire success of artificial neural networks rests. There is no real substitute for it.

It was long suspected that with enough elements in a representation, even very simple elements, you can in principle represent any desired problem-solver. But the number of possible weight configurations is so utterly, unimaginably enormous that it is inconceivable you would ever stumble on a good weight vector by blind search.
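To put a rough number on that (my own toy illustration, not from the original answer): a small network with just 1,000 weights, each limited to only 10 distinct values, already has 10^1000 possible weight vectors, while the observable universe contains only around 10^80 atoms. Exhaustive or random search is hopeless at that scale.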

The backpropagation algorithm is a method of computing these weights iteratively, starting from scratch. It is reasonably efficient, and it relies on the network being differentiable (that is, at any point you can tell which way to adjust each weight, even across many, many dimensions). Without this algorithm, the entire deep learning revolution could not have happened.
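To make that loop concrete, here is a minimal sketch of backpropagation training a tiny one-hidden-layer network on XOR. The architecture, learning rate, loss, and data are all illustrative choices of mine, not anything specified in the answer above:

```python
import numpy as np

# Toy XOR problem: 4 inputs, 4 targets (an illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0                       # learning rate (arbitrary)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the prediction layer by layer.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    loss = np.mean((out - y) ** 2)    # mean squared error

    # Backward pass: the chain rule gives the gradient of the loss
    # with respect to every weight, working backwards layer by layer.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # grad at output pre-activation
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)                # grad at hidden pre-activation
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient descent step: nudge every weight a little downhill.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# For most seeds this lands close to the targets [0, 1, 1, 0].
print(loss, out.round(2).ravel())
```

The key point is the backward pass: one forward evaluation plus the chain rule yields exact gradients for every weight at once, which is what makes searching that astronomically large weight space tractable.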