[ELI5] Why does the Depth of Field (DoF) setting in video games usually take a toll on performance?


By blurring the background, you’re basically lessening the need to render it and reducing resolution. So it would logically make sense for the game’s FPS to increase. However, turning it on oddly seems to do the opposite, and I want to know why that is.


16 Answers

Anonymous 0 Comments

DoF isn’t a blur.
It’s a circle of confusion.

Imagine a pinhole camera like this |X| … the left vertical line is the world… the middle of the X is the pinhole in a box… and the vertical line on the right is the film back…
The X represents the LIGHT … it travels into the camera in straight lines… and because the hole is so small, only light that’s traveling straight through that tiny hole passes into the box, and a faint, inverted image hits the film back (or the back of the box).

As the hole grows larger you get the desirable effect of letting in more light – but the light is no longer all lined up like an X!
Because a bunch of light can come in from different directions you get a bunch of Xs overlapping each other… like if you were looking at an X while drunk.

To fix the fuzziness and get the image back, we invented something called a lens!! This curved glass aligns the light again!!! …. But it can only align it for ONE plane in front of the lens…
This would take drawings to explain
But basically – only some things are at the right distance… so the curvature of the lens makes a sharp image on the film back.

https://i0.wp.com/farm8.staticflickr.com/7087/7155171948_927b343a45.jpg?resize=500%2C196

If you want to know more:

The round out-of-focus balls you see in the background of photos are called bokeh.
The shape and size of the bokeh depend on the shape and size of the iris (and the film back).

This is why there’s almost no DoF in a pinhole camera … the pinhole is the iris and it’s so small that there’s a very small circle of confusion. But as the iris grows, you get more light into the camera but also a larger circle of confusion…
The lens does its job and keeps the image of that one plane sharp, BUT everything else still looks drunk.
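
If you want to see that in numbers, here’s a minimal sketch of the standard thin-lens circle-of-confusion formula (the variable names and the specific numbers are just illustrative, not from this comment): the blur circle grows with the aperture and with how far a point sits from the focus plane, and collapses to almost nothing for a pinhole-sized aperture.

```python
# Rough sketch of the thin-lens circle-of-confusion formula (illustrative values only).
def circle_of_confusion(subject_dist, focus_dist, focal_length, aperture_diameter):
    # CoC diameter on the film: grows with the aperture and with how far the
    # point is from the plane the lens is focused on.
    return (aperture_diameter * focal_length * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_length)))

# 50 mm lens focused at 2 m, looking at a point 10 m away (all values in metres):
print(circle_of_confusion(10.0, 2.0, 0.05, 0.0001))  # pinhole-sized aperture -> ~2 µm, effectively sharp
print(circle_of_confusion(10.0, 2.0, 0.05, 0.05))    # wide-open aperture -> ~1 mm blur circle
```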

TO ANSWER YOUR QUESTION:
So when video games properly emulate the circle of confusion, they have to render the image many times! (Essentially acting as pinhole cameras from many points, then blending the results together)…
Blurs are just a 2D effect… DoF is a 3D effect.
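
Here’s a minimal sketch of that “many pinhole cameras blended together” idea (accumulation-style DoF; `render_pinhole` is a hypothetical placeholder, since a real scene render is far too long to show):

```python
import numpy as np

def render_pinhole(camera_pos, look_at):
    """Hypothetical placeholder: a full scene render from one pinhole position."""
    height, width = 4, 4
    return np.full((height, width, 3), 0.5)  # a real renderer would rasterize/ray-trace here

def render_dof(camera_pos, focus_point, aperture_radius, samples=32):
    rng = np.random.default_rng(0)
    accum = np.zeros((4, 4, 3))
    for _ in range(samples):
        # Jitter the camera across the aperture (a small square here for brevity,
        # a disc in practice), keeping it aimed at the focus point.
        offset = rng.uniform(-aperture_radius, aperture_radius, size=3)
        offset[2] = 0.0  # stay on the lens plane
        accum += render_pinhole(camera_pos + offset, focus_point)
    # Points on the focus plane land in the same place every time and stay sharp;
    # everything else lands in slightly different places and smears out.
    return accum / samples

print(render_dof(np.zeros(3), np.array([0.0, 0.0, -5.0]), aperture_radius=0.1).shape)  # (4, 4, 3)
```

Every one of those samples is a full scene render, which is why this faithful approach mostly lives in offline renderers; games usually approximate it with the cheaper post-process tricks described in the other answers.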

Anonymous 0 Comments

You are technically rendering more, not less. To do DoF you render the scene as normal and then apply some sort of multistage blurring algorithm to the final result. You also need a way to keep a portion of the image in focus. So yeah, long story short, it’s more, not less, and depending on the algorithm you use and how efficient it is, it can actually be quite a heavy post-process effect.
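
A minimal sketch of that idea, assuming a simple depth-based blend between the sharp frame and a blurred copy (real engines use fancier multi-stage kernels; none of these names come from a specific engine):

```python
import numpy as np

def box_blur(image, radius=2):
    # Very cheap wrap-around box blur; real games use multi-stage Gaussian or bokeh-shaped kernels.
    out = image.copy()
    for axis in (0, 1):
        out = np.mean([np.roll(out, s, axis=axis) for s in range(-radius, radius + 1)], axis=0)
    return out

def apply_dof(color, depth, focus_depth, focus_range):
    blurred = box_blur(color)  # extra work on top of the frame you already paid for
    # 0 = in focus (keep the sharp pixel), 1 = far from the focus depth (use the blurred pixel)
    blur_amount = np.clip(np.abs(depth - focus_depth) / focus_range, 0.0, 1.0)
    return color * (1.0 - blur_amount[..., None]) + blurred * blur_amount[..., None]

# Tiny 8x8 "frame": a color buffer plus the depth buffer the renderer already produced.
rng = np.random.default_rng(0)
color = rng.random((8, 8, 3))
depth = np.linspace(1.0, 20.0, 64).reshape(8, 8)
print(apply_dof(color, depth, focus_depth=5.0, focus_range=4.0).shape)  # (8, 8, 3)
```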

Anonymous 0 Comments

Usually it’s just a filter, and a filter adds work rather than removing it: everything is rendered as-is, and the filter is applied afterwards. More filters, lower FPS.

Anonymous 0 Comments

This will be somewhat “wrong” from a technical perspective, but to keep the answer high-level I think it’s useful as an explanation.

Video games build each frame by effectively “drawing” the world in a series of images; these images are then stitched together into the final result using some fairly advanced mathematics.

As a very simplified example: [Land] -> [Objects/Characters] -> [Particles] -> [Full screen effects] and then we merge all these images together.

DoF occurs in the above stage called “Full screen effects”; most of the game-world at this point has been drawn and a lot of fairly expensive lighting calculations have already been performed.
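
To make that ordering concrete, here is a minimal sketch of the staged pipeline above (the pass names are hypothetical placeholders, not any real engine’s API); the point is only that DoF runs after the expensive passes, so it can only add work on top of them:

```python
# Hypothetical pass names mirroring the simplified example above.
def draw_land(scene):
    return ["land"]  # geometry + expensive lighting already paid for here

def draw_objects_and_characters(scene, frame):
    return frame + ["objects/characters"]

def draw_particles(scene, frame):
    return frame + ["particles"]

def full_screen_effects(frame):
    return frame + ["full-screen effects (DoF lives here)"]  # extra work; nothing earlier gets cheaper

def render_frame(scene):
    frame = draw_land(scene)
    frame = draw_objects_and_characters(scene, frame)
    frame = draw_particles(scene, frame)
    return full_screen_effects(frame)

print(render_frame(scene={}))
```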

Computers can’t go back in time for free, so our only way to “speed” things up is to simply not do work. But because the work has largely already been done, there isn’t much left to cut that would make the effect “free” or allow it to reduce the overall workload.

This isn’t to say it can’t happen… we could apply “some” full-screen effects at an earlier stage, but then that result would be carried forward and could produce artifacts. For instance, if we applied the blur before we had drawn the characters, they might look like they were “cut out” and simply pasted onto the blurred image, leaving them sharp when they should be blurred… meaning we now perhaps have to do another pass (which means more work, possibly more than we saved) or live with the result and keep the time savings.

Alternatively, we could apply DoF every time we draw something to the scene; the issue with this is that we are usually looking at thousands of “should I blur?” checks, and every check is an instruction.

However, you are doing those calculations hoping to “short-circuit” and save time later; everything is an instruction, all instructions take time, and it’s a race to find out which instructions can prevent more instructions from becoming necessary.

If you are interested in a deep-dive into how “modern” games render, I suggest reading [https://www.adriancourreges.com/blog/2015/11/02/gta-v-graphics-study/](https://www.adriancourreges.com/blog/2015/11/02/gta-v-graphics-study/) by Adrian Courreges, where they break down GTA 5’s rendering pipeline.

Anonymous 0 Comments

Imagine an artist painting a landscape scene on a canvas who wants to convey focus, and knows objects too near or too far need to be blurrier than the ones at the optimal focus distance. But the only technique this artist knows for doing that is to paint everything with the same crisp detail first, then slightly blur some of those things later by lightly wiping across them with a dry brush while the paint is still not quite dry.

That’s *sort of* what the graphics card does when it’s making distant objects be “out of focus”.

The math it has to do to know where to draw a polygon on screen doesn’t go away just because that polygon will get made fuzzy in post-processing later.

If it did work like you say, where distant things are rendered at lower resolution than near things, it might speed things up, but that’s not really easy to do because the 3-D frame buffer is, well, a 3-D volume grid: it has to be the same dimensions all the way through.

That means if you are rendering on screen at 3840 x 2160 pixels, then ALL the slices of the volume of space that are used to calculate the positions of things have to be 3840 x 2160, whether they’re 1 meter from the eye or 1000 meters from the eye.

The one thing you could do to speed it up would be if the programmers decided to use lower-res textures to paint farther objects. They’re still being painted on the same 3840 x 2160 canvas, but since a distant object uses fewer pixels of that canvas by virtue of taking up less of the field of view, you don’t need as detailed a texture loaded onto it. But the thing is… this is already being done even if you didn’t turn on Depth of Field, so turning on Depth of Field doesn’t reduce things any further in this regard.
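
That “lower-res textures for farther objects” trick is standard mip-mapping, and a rough sketch of how the level gets picked looks like this (the numbers are illustrative; GPUs do this per pixel in hardware, with DoF on or off):

```python
import math

def mip_level(texture_width, pixels_covered_on_screen):
    # Roughly: how many texels would land on one screen pixel, as a power of two.
    texels_per_pixel = texture_width / max(pixels_covered_on_screen, 1)
    return max(0, math.floor(math.log2(max(texels_per_pixel, 1))))

# A 1024-texel-wide texture: an object filling 1024 pixels samples mip 0 (full res),
# while a distant object covering only 64 pixels samples mip 4 (a much smaller copy).
print(mip_level(1024, 1024), mip_level(1024, 64))  # 0 4
```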

Anonymous 0 Comments

Wow, so basically the game has to do math? No wonder it’s slowing down!