At the back of our eye is the retina, which contains the cells that detect light and pass those signals to our brain. First, imagine that our eyes were just open sockets with the retina at the back. Light would hit the cells in the retina from all directions, so each light detector would receive light from everywhere. It would be impossible to tell which light was coming from which direction, so there would be no way to form an image.
Because the light instead passes through a tiny hole (our pupil), each light detector on our retina only receives light from one specific direction, and an image can form. This is exactly how pinhole cameras work, too. But a light ray coming from above us enters our eye traveling downward, so it hits the bottom of our retina, and vice versa for light rays coming from below us. So the image actually hits our retina upside-down (and mirrored left-right).
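To make that geometry concrete, here is a minimal sketch of the pinhole idea in Python. The function name and all the distances are made up for illustration; the real eye uses a lens rather than a bare pinhole, but the inversion works the same way:

```python
# A minimal sketch of pinhole projection. The pinhole sits at (0, 0);
# the "retina" is a plane some small distance behind it.

def project_through_pinhole(source_y, source_dist, retina_dist):
    """Where a ray from height `source_y`, at distance `source_dist`
    in front of the pinhole, lands on a retina `retina_dist` behind it.
    The ray travels in a straight line through the pinhole, so by
    similar triangles the landing height flips sign."""
    return -source_y * (retina_dist / source_dist)

# A point above us (y = +2.0) lands on the bottom half of the retina,
# and a point below us lands on the top half: the image is inverted.
print(project_through_pinhole(source_y=2.0, source_dist=10.0, retina_dist=0.02))   # -0.004
print(project_through_pinhole(source_y=-2.0, source_dist=10.0, retina_dist=0.02))  # +0.004
```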
The detectors in our retina are wired to our brain such that the bottom detectors are interpreted as the top of our visual field, the left as our right, and so on. I wouldn’t say that our brain actually sees things upside-down; the image hits our retina upside-down, but it is wired into our brain so that we interpret it correctly.
Because the brain evolved to be adaptive: it learns to read its input neurons and construct your perception of the world.
There have been simple experiments with inversion goggles that flipped the wearer’s view upside-down. The participants adapted after a few days, and after taking the goggles off they adapted back (and could probably make the switch again much faster).
It doesn’t. It’s a misunderstanding of the situation, really.
Because our eyes use a lens to focus incoming light into a clear image, the image that hits our retina is “upside-down”, exactly like what happens in a camera or telescope.
The misunderstanding is this idea that our brain takes this crazy upside-down image and works its magic to flip the image correctly.
It doesn’t need to. Simply put, the top part of our retina is just treated as the bottom of the visual field. It’s connected to the part of the visual cortex that expects the bottom of the image. It’s all handled by the “wiring”. No transformation is needed.
It’s just like a camera sensor. The top of the image just happens to be physically located on the bottom part of the sensor. There’s no reason to make the top of the sensor the top of the image and then require a microprocessor to flip it around. (Although we *can* do that, and in fact this is very common with phones, because we can hold them in any orientation and redefining the top or bottom is convenient.)
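As a toy illustration of that “wiring” point (the `sensor` array and its row values here are hypothetical, just to show that the read-out mapping makes an explicit flip unnecessary):

```python
# A toy 2x2 "sensor" stored the way a pinhole or lens projects it:
# the scene arrives upside-down.
sensor = [
    ["grass", "grass"],  # top of the sensor: light that arrived from below (the ground)
    ["sky",   "sky"],    # bottom of the sensor: light that arrived from above (the sky)
]

# Option 1: actively flip the image (the "microprocessor" approach).
flipped = sensor[::-1]

# Option 2: flip nothing; just define which row counts as "top" when
# reading out -- the software equivalent of the retina's wiring.
def read_row(display_row):
    return sensor[len(sensor) - 1 - display_row]

assert flipped[0] == read_row(0)  # both put the sky at the top
assert flipped[1] == read_row(1)  # and the grass at the bottom
```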
The way I imagine it, our brain is like a computer. It has the built-in graphics hardware necessary to see, but it writes and updates the software itself. Based on the information that comes in with the light, it builds up a logic connecting that information to our surroundings as we gain experience. And the software doesn’t care whether the image is upside-down or not, because a small tweak in that software can make us see it as needed.