Copyright © 2023 AnswerCult

Imagine the layers of a neural network as just simple math operations, but many of them. At first the network has learned nothing, so you initialize its weights and biases at random. Then the first training iteration begins: you show the network the first piece of training data, and it tries to make a prediction. You now have both the prediction and the real value at hand, so you can compare them. That comparison tells the network how wrong it is and where it needs to adjust.

Now the network does backpropagation: it works its way backwards through the layers and adjusts every weight and bias according to what it learned from comparing the predicted value with the real value. Each adjustment is scaled by a learning rate you define, which is usually very small so the prediction error shrinks in small, stable steps. The network repeats this process until it has minimized the "loss" (the prediction error) to a satisfactory level.
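The loop described above can be sketched in a few lines of NumPy. This is a minimal toy version (a one-weight "network", made-up data, and an arbitrary learning rate), not a real framework, but every step from the explanation shows up: random initialization, prediction, comparison, backpropagation, and the learning-rate update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training data: inputs X with true values y = 2x + 1.
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X + 1.0

# Initialize the weight and bias at random — the network knows nothing yet.
w = rng.normal(size=(1, 1))
b = np.zeros(1)

lr = 0.1  # learning rate: how big each adjustment step is

for step in range(200):
    # Forward pass: the network's prediction for the training inputs.
    pred = X @ w + b
    # Compare prediction and real value: mean squared error is the "loss".
    err = pred - y
    loss = np.mean(err ** 2)
    # Backpropagation: gradient of the loss w.r.t. the weight and the bias.
    grad_w = 2 * X.T @ err / len(X)
    grad_b = 2 * err.mean(axis=0)
    # Adjust every weight and bias, scaled by the learning rate.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # after training: close to the true values 2 and 1
```

After enough iterations the loss is tiny and the weight and bias have been pulled toward the values that actually generated the data. A real network has many layers of such weights, but the loop is the same.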

Now the weights and biases fit the training data you showed the network. It has therefore "learned".

This is also why it counts as supervised learning: the training data is labeled, meaning you have the true value for every input.
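"Labeled" just means every input comes paired with its known true value, so the error is always measurable. A hypothetical toy dataset (hours studied paired with exam scores, numbers invented for illustration):

```python
# Supervised learning: each input is paired with its true value (the label).
labeled_data = [
    (1.0, 52.0),  # (hours studied, actual exam score)
    (2.0, 61.0),
    (3.0, 70.0),
]

for x, y_true in labeled_data:
    y_pred = 10.0 * x + 40.0        # some model's current prediction
    error = y_pred - y_true          # measurable only because labels exist
    print(x, y_pred, error)          # errors here: -2.0, -1.0, 0.0
```

Without the labels (the second number in each pair) there would be nothing to compare the predictions against, and backpropagation would have no error signal to work with.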