Why is VR rendered twice

Why are VR frames rendered twice (once for each eye), instead of just once as a single wide frame (easier to run, and maybe room for better graphics), leaving it up to motion parallax for the user to interpret depth?

Because your two eyes need a view from [slightly different viewpoints](https://www.researchgate.net/profile/David_Mcallister4/publication/271489047/figure/fig4/AS:[email protected]/Camera-configuration-for-separate-Left-Right-stereo-rendering-shows-in-b.png)

[Here’s a 3D image in red/cyan format](https://blog.spoongraphics.co.uk/wp-content/uploads/2015/anaglyph/anaglyph-1.jpg). This makes it easy to see the difference between the two images.

Notice how the separation between the two images of an object varies based on how close the object is.
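
To put a rough number on that: in a simple pinhole stereo model (an assumption for this sketch, not how that anaglyph was actually produced), the on-screen separation, called disparity, falls off with distance:

```python
# Toy pinhole stereo model (illustrative assumptions, not a real renderer):
# disparity = focal_length * eye_separation / distance
FOCAL_PX = 800.0    # assumed focal length, in pixels
BASELINE_M = 0.064  # typical human eye separation, ~64 mm

for distance_m in (0.5, 1.0, 2.0, 10.0):
    disparity_px = FOCAL_PX * BASELINE_M / distance_m
    print(f"object at {distance_m:4.1f} m -> images {disparity_px:5.1f} px apart")
```

With these made-up numbers, an object half a metre away lands over a hundred pixels apart between the two images, while one ten metres away is only a few pixels apart, which is exactly the varying red/cyan separation in the picture.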

The simplest answer would just be “That’s how our eyes work”, but it’s also worth considering that a single render could be damaging to our eyes.

When your eyes focus on something in real life, both eyes point at the object. Because your eyes are a distance apart, this means they point slightly towards each other. For most objects this is comfortable, but if the object we’re focusing on is very close, we go cross-eyed, which becomes uncomfortable after a short while.
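
To put rough numbers on that, the total inward angle (called vergence) for an object straight ahead can be worked out from the eye separation; here is a small sketch, assuming a ~64 mm separation:

```python
import math

IPD_M = 0.064  # assumed interpupillary distance, ~64 mm

def vergence_deg(distance_m: float) -> float:
    """Total angle between the two lines of sight when both eyes
    fixate a point straight ahead at the given distance."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

for d in (10.0, 2.0, 0.5, 0.1):
    print(f"object at {d:4.1f} m -> vergence {vergence_deg(d):4.1f} degrees")
```

That works out to under a couple of degrees at ordinary distances, but around 35 degrees for something 10 cm from your face, which is the uncomfortable cross-eyed case.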

If VR used one render, every object in the game would appear at a single position on the screen, so both eyes would need to point at it to focus on it. With the screen very close to your face, this would make you cross-eyed while playing, which can be bad for your eyesight in the long run. One way around this would be to put the screen further away, but then you’d need a bigger screen to fill the player’s view, which would make the headset much heavier.

Using two renders, each object appears in two places (one for each eye), so to focus on it, each eye can point at its own version of the object, which will be somewhere in front of that eye, preventing you from going cross-eyed.

Part of rendering is taking information about objects and their locations and calculating what would be seen from the camera’s viewpoint. You’re taking 3D objects and converting them to a 2D image. That 2D image will be different if you move the camera, even if all the 3D objects stay in the same place.
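
Here is a toy version of that calculation, assuming a bare pinhole projection (just the perspective divide, not a full view matrix):

```python
def project(point, camera_x, focal=1.0):
    """Project a 3D point (x, y, z) onto a 2D image plane for a
    pinhole camera sitting at (camera_x, 0, 0) and looking down +z."""
    x, y, z = point
    return (focal * (x - camera_x) / z, focal * y / z)

point = (0.0, 0.0, 2.0)         # an object 2 m straight ahead
left  = project(point, -0.032)  # cameras offset by half of a ~64 mm IPD
right = project(point, +0.032)
print(left, right)              # the same 3D point, two different 2D positions
```

The 3D object never moved; shifting the camera 64 mm is enough to change the 2D image.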

VR has to consider the view from two cameras, one for each eye, so it has to do those calculations twice. An eye lined up with a gun’s sights is going to see straight down the barrel, while the other eye will see that barrel at an angle. If you just rendered one 2D image and panned over it to get the other eye’s view, both eyes would see straight down the barrel.
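
The same toy projection shows why panning a single image can’t work: a second camera shifts near objects much more than far ones, while a pan shifts everything by one constant amount. A minimal sketch (the distances are made up):

```python
def project_x(point, camera_x, focal=1.0):
    """Horizontal pinhole projection: camera at (camera_x, 0, 0), looking down +z."""
    x, y, z = point
    return focal * (x - camera_x) / z

scene = {"barrel tip": (0.0, 0.0, 0.5),   # near object, 0.5 m away
         "far target": (0.0, 0.0, 5.0)}   # far object, 5 m away

for name, p in scene.items():
    shift = project_x(p, -0.032) - project_x(p, +0.032)
    print(f"{name}: the two eyes' views differ by {shift:.4f}")
```

The near object’s offset between the two views is ten times the far object’s, so no single sideways pan of one flat image can produce both at once; the second eye’s view has to be computed from the 3D scene again.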

* In order to tell how far away something is, your brain compares the slight differences between what each of your eyes sees.
* In order for VR to trick your brain into thinking things on a flat screen are actually different distances away, it needs to create a slightly different image for each eye.