The easiest way to explain it properly is to start with genetic algorithms.
Imagine you take a circuit board and throw electrical components onto it *completely randomly*. The result is a meaningless mess. You send electricity through it and watch the output on one of the wires – the circuit probably explodes or does nothing. This is fine.
You repeat the same process ten, a hundred, a thousand times. Eventually you find a random combination of components that produces some sort of output – say, 2 volts on the other end. You designate this circuit a success, even though your goal is a 10 V output.
You take the successful one and randomly replace a few components on it. You do this a million times with the same “parent”, thus creating a million slightly different “offspring”. You try all of them out.
Many burn out, some do nothing, but a few produce something closer to the 10 V you are looking for. One might give 6, another 3, another 90.
You pick the closest one (6), designate it as the new parent, and repeat the process.
You continue doing this until you get a circuit that outputs 10 V.
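The loop described above – mutate, test, keep the closest, repeat – can be sketched in a few lines of code. This is just a toy illustration under made-up assumptions: instead of real circuit components, each “circuit” here is a list of numbers, and its “output voltage” is simply their sum. All names (`output`, `mutate`, `evolve`) are invented for this sketch, not any real library’s API.

```python
import random

TARGET = 10.0  # the output voltage we want

def output(circuit):
    """The 'voltage' a candidate circuit produces (toy stand-in: a sum)."""
    return sum(circuit)

def mutate(parent):
    """Randomly replace a few 'components' on a copy of the parent."""
    child = parent[:]
    for i in range(len(child)):
        if random.random() < 0.3:  # each component has a 30% chance of being swapped
            child[i] = random.uniform(-5, 5)
    return child

def evolve(generations=200, offspring=100):
    # Start from a completely random mess.
    parent = [random.uniform(-5, 5) for _ in range(8)]
    for _ in range(generations):
        children = [mutate(parent) for _ in range(offspring)]
        # Pick the child whose output is closest to the target...
        best = min(children, key=lambda c: abs(output(c) - TARGET))
        # ...and make it the new parent if it improves on the old one.
        if abs(output(best) - TARGET) < abs(output(parent) - TARGET):
            parent = best
        if abs(output(parent) - TARGET) < 0.01:
            break  # close enough to 10 V
    return parent

random.seed(0)
winner = evolve()
print(round(output(winner), 2))  # prints a value very close to 10.0
```

Note that, just like the circuit in the story, the winning list of numbers has no meaningful structure – it is whatever random mess happened to sum closest to the target.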
Now, your circuit will be stupidly complicated: a weird mess of parts, paths that are isolated and never powered, connections that make no sense at all – but in the end it gives you the right answer.
This is the fundamental idea behind modern AI (neural networks). The details are very different – but fundamentally you end up with the same kind of garbled mess that somehow works and somehow produces the right answer.