It’s because the neural networks you’re seeing are the ones we’ve found to be useful. There are plenty of other types, but the optimizations that make networks practical to compute and train don’t mesh well with more exotic architectures.
I mean, for a long time the single-hidden-layer fully connected network was the only really practical option.
It’s not like they don’t exist, though. If you want to see more interesting stuff, you need to look into some of the deep neural network architectures.
Here’s a description of YOLOv3, one of the really popular object detectors:
https://bestinau.com.au/yolov3-architecture-best-model-in-object-detection/amp/
And here’s a variant of YOLOv3 specialized for pedestrian detection, I think:
https://www.spiedigitallibrary.org/ContentImages/Journals/JEIME5/29/5/053002/FigureImages/JEI_29_5_053002_f004.png
The key word here is “illustrate”. They want to be able to introduce the concept in a way that is easy to understand and doesn’t cause much confusion. Showing a network with more complex interconnections between layers would just make an already hard idea even harder to grasp.
Now, mechanically, most neural network layers are implemented as matrices, and communication between them happens through matrix multiplication, so direct communication across multiple layers isn’t really possible. Instead, you create a dummy node whose only job is to pass a value through unchanged from one layer to the next. From a technical standpoint, any design with communication between multiple layers can be implemented using only communication between adjacent layers.
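Here’s a rough NumPy sketch of that pass-through trick, if it helps. All the shapes and weights are made up for illustration, and nonlinearities are left out to keep it short:

```python
import numpy as np

# Suppose a later layer wants to see both this layer's output AND the
# original input x, i.e. a "skip connection" that jumps over a layer.

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input vector (3 features, arbitrary)

W2 = rng.normal(size=(4, 3))    # an ordinary layer: 3 inputs -> 4 units

# Emulating the skip with only layer-to-layer communication: widen the
# layer so its top rows compute the real units and its bottom rows are
# "dummy nodes" (identity rows) that just copy x through unchanged.
W2_aug = np.vstack([W2, np.eye(3)])   # (4 + 3) x 3

h_aug = W2_aug @ x                    # one matrix multiply, no skips needed
h, x_passed = h_aug[:4], h_aug[4:]

assert np.allclose(x_passed, x)       # the dummy nodes carried x forward intact
```

So the “skip” never leaves the ordinary layer-by-layer matrix pipeline; it just rides along as extra rows.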