The learning algorithms that power neural networks are procedures for finding complex mathematical functions that capture the relationship between paired data. So if you have a bunch of pictures of letters and a bunch of labels (annotations saying which letter each picture shows), a learning algorithm can automatically find a mathematical function that correctly predicts, from the pixel values, what the label should be. It does this by iteratively refining the function/network to reduce its error on the data.
The techniques used to refine the function (the learning algorithm itself) are well understood, and not very complicated. The functions these procedures produce, however, are extremely complex and don't come with any guide to interpreting them. They work well in practice, but figuring out *how* they work is very hard, and often simply not possible with any real fidelity.
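To make "iteratively refining the function to reduce error" concrete, here's a minimal sketch of that refinement loop (gradient descent) on a toy problem. It's not a letter classifier, just a stand-in: it learns a simple function from examples the same way, only with two parameters instead of millions.

```python
import numpy as np

# Toy data: examples generated by y = 2x + 1 (a stand-in for
# "pixels in, label out" -- real image models work the same way,
# just with vastly more parameters).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0

# Start with an arbitrary guess at the function's parameters.
w, b = 0.0, 0.0

for step in range(500):
    pred = w * x + b    # what the current function predicts
    error = pred - y    # how wrong it is on the data
    # Nudge each parameter in the direction that reduces the error:
    # this is the "refinement" step, repeated until error is tiny.
    w -= 0.1 * np.mean(error * x)
    b -= 0.1 * np.mean(error)

print(round(w, 2), round(b, 2))  # ends up close to the true 2 and 1
```

The loop itself is short and easy to follow; what it *finds* (the final parameter values) is discovered, not designed. Scale this up to millions of parameters and the gap between "simple procedure" and "inscrutable result" is exactly the situation described above.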
Think of it like evolution. Evolution is very simple: try random variations on what you've got, keep the versions that work for the next generation, and discard the ones that don't. Anyone can understand evolution. The *products* of evolution are staggeringly complex, and nobody fully understands them. Trying to work out what that simple procedure has done is the entire field of biology.