Back-propagation in Neural Networks

I had a conversation recently with one of my friends who is a Sci-fi enthusiast, especially when it comes to artificial intelligence, but has no background in AI/ML. I attempted to explain how basic neural networks work but struggled to make the back-propagation method intuitive. I wonder if anyone here can describe it without going into the details of probability distributions, activation functions, gradient descent and the like.

3 Answers

Anonymous

Back-propagation is the process of passing ("propagating") error signals backwards through the neural network based on its current performance, and it is the core technique used to work out the weight updates in gradient-descent learning.
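To make that concrete, here is a minimal sketch of my own (not anything from the original answer): a toy two-weight "network" where the error is passed backwards with the chain rule and each weight is nudged by its share of the blame. The numbers, the squared-error loss, and the lack of activation functions are all simplifying assumptions for the example.

```python
# A minimal back-propagation sketch: two layers, one weight each,
# no nonlinearity, squared-error loss. Everything here is illustrative.

def forward(x, w1, w2):
    h = w1 * x          # hidden value
    y = w2 * h          # network output
    return h, y

def backward(x, h, y, target, w2):
    dy = 2.0 * (y - target)   # d(loss)/dy for loss = (y - target)**2
    # Propagate the error backwards through each layer via the chain rule.
    dw2 = dy * h        # how much the output-layer weight is to blame
    dh = dy * w2        # error signal handed back to the hidden layer
    dw1 = dh * x        # how much the input-layer weight is to blame
    return dw1, dw2

# One training loop: forward pass, backward pass, small weight updates.
x, target = 1.0, 2.0
w1, w2 = 0.5, 0.5
lr = 0.1                # learning rate (step size)
for step in range(20):
    h, y = forward(x, w1, w2)
    dw1, dw2 = backward(x, h, y, target, w2)
    w1 -= lr * dw1      # gradient *descent*: step against the gradient
    w2 -= lr * dw2
print(round(forward(x, w1, w2)[1], 3))  # output approaches the target 2.0
```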

Imagine you are trying to find the lowest point in a mountain valley, but there is a very heavy fog, so you can only see one metre in front of you. In addition to your eyes, you could also use your ears to listen for echoes, or your phone's compass combined with an old map, to work out the best way to go. Gradient descent is the process of taking one step at a time in the "best" direction you can identify at that moment. Back-propagation is using the direction of that "best step" to calibrate your ears, eyes and compass, allowing you to slowly realise that your compass is consistently off by 45 degrees, or that an echo lasting five seconds means you are going in the wrong direction, and to build that into your next calculation of the "best step".
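If you want to see the fog part of the analogy in code, here is a small sketch under assumed conditions (the terrain function, starting point, and step size are all made up): you can only "feel" the slope right where you stand, and you repeatedly take a step in the best direction that local slope suggests.

```python
# Gradient descent in the fog: probe the slope just around you,
# then step downhill. The terrain function below is a made-up valley.

def altitude(x):
    return (x - 3.0) ** 2 + 1.0   # assumed valley, lowest point at x = 3

def local_slope(f, x, eps=1e-5):
    # Like seeing only one metre ahead: sample the ground just nearby.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.0                            # starting position on the mountainside
step_size = 0.1
for _ in range(100):
    x -= step_size * local_slope(altitude, x)   # step in the downhill direction
print(round(x, 3))                 # ends up near 3.0, the valley floor
```

The key point the analogy captures: at no step do you see the whole landscape; every move is based purely on local information, and the calibration (back-propagation) is what makes each local reading trustworthy.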
