Neural networks are a machine learning method loosely inspired by how a brain works. A network is made up of nodes. A node takes one or more inputs, applies some function to them, and produces an output. These nodes are arranged in layers: a layer can take several inputs and produce several outputs, and those outputs can then be fed in as the inputs to the next layer, so you end up with many connected layers of nodes forming a network.
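Here’s a rough sketch in Python of what a single node and a layer might look like; the specific weights, biases, and numbers are just made up for illustration, not anything a real trained network would use:

```python
import math

def node(inputs, weights, bias):
    # A node: weighted sum of its inputs plus a bias...
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...passed through a simple "squashing" function (sigmoid)
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is just several nodes looking at the same inputs,
    # each producing its own output
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Feed the outputs of one layer in as the inputs of the next
hidden = layer([0.5, -1.2], [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]], [0.0, 0.1, -0.2])
output = layer(hidden, [[0.5, -0.6, 0.9]], [0.05])
print(output)
```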
A neural network doesn’t start out being good at its assigned job. The settings in its nodes begin as essentially random, so the network has to be trained. Basically, you give it a task, a set of training data, and a way to score how well it did, and you let it adjust its own settings based on that score. It keeps trying different settings for the nodes, testing what works and what doesn’t, until you’re left with a network that can perform the task even though nobody wrote explicit rules for it. People call neural networks a “black box” because you can’t easily tell how one works: from the outside it’s just a bunch of seemingly arbitrary functions.
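Here’s a toy sketch of that trial-and-error idea in Python. Real networks are trained with a smarter method (gradient descent), but the principle of “nudge the settings and keep whatever improves the score” is the same; the task and all the numbers below are invented purely for illustration:

```python
import random

def score(weight, data):
    # How badly the "network" (here just one weight) does on the test data.
    # Lower is better. Toy task: learn to double the input.
    return sum((x * weight - y) ** 2 for x, y in data)

data = [(1, 2), (2, 4), (3, 6)]    # inputs paired with the answers we want
weight = random.uniform(-1, 1)     # start with a random setting

for _ in range(1000):
    # Try a slightly nudged setting and see if the score improves
    candidate = weight + random.uniform(-0.1, 0.1)
    if score(candidate, data) < score(weight, data):
        weight = candidate         # keep the change only if it helped

print(weight)  # ends up close to 2 without anyone "telling" it the rule
```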