Back Propagation in the context of Neural Networks

3 Answers

Anonymous

Imagine the layers of a neural network as just simple math operations, but many of them. At the start the network has learned nothing, so you initialize its weights and biases at random. Then you begin the first learning iteration: you show the network a piece of training data and it makes a prediction. You have both the prediction and the real value at hand and compare the two, which tells the network how wrong it is and in which direction it needs to adjust. Now the network does back propagation: it works its way backwards through the layers and adjusts every weight and bias according to what it learned from comparing the predicted value with the real value. The size of each adjustment is scaled by a learning rate you define, which is usually small so that the prediction error shrinks in small, stable steps. The network repeats this process until it has reduced the "loss" (the prediction error) to a satisfactory level.
Now the weights and biases are fitted to the training data you showed it. The network has therefore "learned".
This is also why it is a form of supervised learning: the training data is labeled, meaning the true value is known.
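To make that loop concrete, here is a minimal sketch in Python with NumPy: a tiny two-layer network with sigmoid activations trained on the XOR problem using mean squared error. All the specifics here (the XOR dataset, the layer sizes, the learning rate of 0.5, the choice of sigmoid) are illustrative assumptions, not anything prescribed by the answer above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data (supervised learning: inputs plus true values).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

# Initialize the network's weights and biases at random.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

learning_rate = 0.5  # small step size for each adjustment

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Compare prediction with the real value: mean squared error loss.
    loss = np.mean((pred - y) ** 2)

    # Back propagation: walk backwards through the layers with the
    # chain rule to get the gradient of the loss with respect to
    # every weight and bias.
    d_pred = 2 * (pred - y) / len(X)   # dLoss/dpred
    d_z2 = d_pred * pred * (1 - pred)  # through the output sigmoid
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * h * (1 - h)           # through the hidden sigmoid
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Adjust every weight and bias a small step against its gradient,
    # scaled by the learning rate.
    W2 -= learning_rate * dW2; b2 -= learning_rate * db2
    W1 -= learning_rate * dW1; b1 -= learning_rate * db1

print(f"final loss: {loss:.4f}")   # shrinks toward 0 as the network fits
print(np.round(pred, 2).ravel())   # predictions approach [0, 1, 1, 0]
```

Running it, the loss falls over the iterations and the predictions move toward the labeled targets, which is exactly the "compare, propagate back, adjust, repeat" loop described above.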
