How does the ‘black box’ in a neural network work?


Trying to understand Deep Learning but all the resources I’m finding are like: “and inside this black box is where the magic little goblin twists his dials and out comes your probability!”

Ugh.


7 Answers

Anonymous

Neural networks are driven by linear algebra. The black box is divided into several layers, and each layer is a collection of nodes. The value of each node is produced by multiplying the value of each node in the previous layer by a different constant and summing the results. This creates a chain of dependencies that maps the values of the inputs to the values of the outputs. Each layer's constants can be represented as a matrix, and multiplying through those matrices performs the mapping.
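To make that concrete, here's a minimal sketch in plain Python of a tiny made-up network (2 inputs, 3 hidden nodes, 1 output). The weight values are invented constants purely for illustration; a real network would have far more nodes, and the layers would typically be done with matrix libraries.

```python
# A toy "black box": 2 inputs -> 3 hidden nodes -> 1 output.
# The constants (weights) are made-up numbers just for illustration.
W1 = [[0.2, -0.5],      # each row holds the constants for one hidden node
      [0.8,  0.1],
      [-0.3, 0.4]]
W2 = [[0.5, -1.0, 0.7]]  # one row for the single output node

def layer(weights, values):
    # Each node's value = sum of (previous node's value * its constant).
    # This per-row multiply-and-sum is exactly a matrix-vector product.
    return [sum(w * v for w, v in zip(row, values)) for row in weights]

def forward(inputs):
    hidden = layer(W1, inputs)   # first layer of the chain
    return layer(W2, hidden)     # second layer maps hidden -> output

print(forward([1.0, 2.0]))  # -> approximately [-1.05]
```

Note there's no magic inside: the output is fully determined by the inputs and the constants. (Real networks also squash each node's value with a nonlinear function between layers, but the multiply-and-sum chain is the core.)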

Now, the “little goblin stuff” refers to how the constants in the layers get determined, and the actual math involved is well beyond an ELI5. The basic process goes like this: You have a set of training data for which you know what the output should be for those inputs. You feed the training data into your model and compare the outputs it produces to what they should be. Based on the differences, you adjust the values in the layers. Then you test it again and adjust the layers again. You keep repeating this iterative process until the model produces the results you want.
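The "compare and adjust" loop can be sketched with the simplest possible model: one input, one output, one constant. The training data and learning rate below are made up for illustration (the hidden rule happens to be output = 3 × input); real training adjusts millions of constants using calculus, but the loop has the same shape.

```python
# Made-up training data: (input, correct output) pairs following output = 3 * input.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0      # initial guess for the model's single constant
rate = 0.01  # how big each adjustment is (an assumed value)

for step in range(1000):
    for x, target in data:
        out = w * x              # feed the input through the model
        error = out - target     # compare to what it should be
        w -= rate * error * x    # nudge the constant based on the difference

print(round(w, 3))  # -> 3.0, the loop discovered the constant on its own
```

The key point: nobody told the loop that the answer was 3. It found that value purely by repeatedly comparing outputs to the training data and adjusting.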

The reason this is called a black box is that the whole process can be entirely automated, and the resulting values usually won't make any intuitive sense to a human being.
