Back-propagation in Neural Networks


I had a conversation recently with one of my friends who is a Sci-fi enthusiast, especially when it comes to artificial intelligence, but has no background in AI/ML. I attempted to explain how basic neural networks work but struggled to make the back-propagation method intuitive. I wonder if anyone here can describe it without going into the details of probability distributions, activation functions, gradient descent and the like.


3 Answers

Anonymous

Draw a curve on a graph – y = x^2 + 5.
Or even a 3rd- or 4th-order curve. Doesn’t matter.

Pick a spot on the curve. An ML neural net is being “trained” when you move that dot to a lower and lower spot on the curve. Once it stops moving, the training is done. That’s gradient descent (GD) in one dimension.
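Here’s a toy Python sketch of that “move the dot downhill” idea on y = x^2 + 5 (the starting point and step size here are arbitrary picks, just for illustration):

```python
def f(x):
    return x**2 + 5

def slope(x):
    return 2 * x  # derivative of x^2 + 5

x = 4.0      # pick a spot on the curve
step = 0.1   # how far to nudge it each time

for _ in range(50):
    x -= step * slope(x)  # always step toward lower ground

print(x, f(x))  # x ends up near 0, the bottom of the curve
```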

2D would be like finding the lowest spot on a thick, lumpy comforter.

3D is trickier – find the lowest-temperature spot in a room.

All the same process. Pick a spot, move to where it’s lower. Repeat. (Obviously you’d do better and avoid local minima if you picked 1000s of starting points and added some randomness to the descent … but that’s an optimization.)
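A rough sketch of that multi-start-plus-randomness trick, using a made-up lumpy 2D surface standing in for the comforter (the surface, step size, and noise level are all arbitrary choices):

```python
import math
import random

def lumpy(x, y):
    # a big bowl with ripples, so it has local dips to get stuck in
    return x**2 + y**2 + 3 * math.sin(3 * x) * math.sin(3 * y)

def descend(x, y, step=0.01, iters=2000):
    h = 1e-5
    for _ in range(iters):
        # estimate the slope numerically in each direction
        gx = (lumpy(x + h, y) - lumpy(x - h, y)) / (2 * h)
        gy = (lumpy(x, y + h) - lumpy(x, y - h)) / (2 * h)
        # step downhill, plus a little noise to shake out of shallow dips
        x -= step * gx + random.gauss(0, 0.01)
        y -= step * gy + random.gauss(0, 0.01)
    return x, y, lumpy(x, y)

# try many random starting spots and keep whichever ended up lowest
best = min((descend(random.uniform(-3, 3), random.uniform(-3, 3))
            for _ in range(100)),
           key=lambda r: r[2])
print(best)
```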

ML nets work in the 1000s or even millions of dimensions, but they all do the same thing: feed in enough training examples and find the array of values with the lowest error, the one that gives the most correct answers.
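And to tie it back to the actual back-propagation question: here’s the same downhill idea shrunk to a one-weight “network” predicting y from x, with made-up data points near y = 2x + 1. The chain-rule nudges inside the loop are what back-propagation computes; a real net just repeats them layer by layer:

```python
# made-up training data: points on the line y = 2x + 1
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # start at an arbitrary spot in "weight space"
step = 0.02

for _ in range(2000):
    for x, y in data:
        y_hat = w * x + b   # forward pass: make a prediction
        err = y_hat - y     # how wrong was it?
        # backward pass: the chain rule gives the slope of the
        # squared error with respect to each weight
        w -= step * 2 * err * x
        b -= step * 2 * err

print(w, b)  # ends up near w=2, b=1, the lowest-error values
```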

It’s not HAL 9000 / CyberDyne black magic.
