[ELI5] Why does the Depth of Field (DoF) setting in video games usually take a toll on performance?

By blurring the background, you're basically lessening the need to render detail and reducing resolution, so you'd logically expect the game's FPS to increase. However, turning it on oddly seems to do the opposite, and I want to know why that is.

Anonymous 0 Comments

You still have to render all the shapes and pixels, except now you also have to calculate which ones need to be blurred, by how much, and how that impacts neighbouring pixels.

It’s computationally much harder. At no point are you reducing resolution. That would just make things blocky.

Anonymous 0 Comments

Depth of field uses the original rendered data and calculates how much each part needs to be blurred in post-processing, probably with a Gaussian blur or something similar.
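To make that concrete, here's a rough sketch of that idea in Python/NumPy. It assumes we already have the sharp rendered colours plus a depth value per pixel, and the focus distance, strength and the cheap box blur are all made up for illustration; a real engine does this on the GPU, often with a Gaussian or bokeh-shaped filter.

```python
import numpy as np

def depth_of_field_blur(color, depth, focus_dist, strength, max_radius=4):
    """Blur each pixel more the farther its depth is from the focus distance.

    color: H x W x 3 array of sharp rendered colours
    depth: H x W array of per-pixel scene depth
    A naive CPU loop with a box blur; real engines do this on the GPU.
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            # How out of focus is this pixel? -> integer blur radius
            r = int(min(max_radius, strength * abs(depth[y, x] - focus_dist)))
            # Average the *unblurred* colours in a (2r+1) x (2r+1) neighbourhood
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = color[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

# Tiny fake "render": random colours, depth increasing from bottom to top
color = np.random.rand(64, 64, 3)
depth = np.linspace(1.0, 20.0, 64)[:, None].repeat(64, axis=1)
blurred = depth_of_field_blur(color, depth, focus_dist=5.0, strength=0.5)
```

Even this toy version reads a whole neighbourhood of pixels for every single output pixel, which is exactly the extra work the setting adds on top of the normal render.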

Anonymous 0 Comments

Yeah, it's counterintuitive, but blurring in real time is actually very hard to do, especially if you want a decent-quality blur. Lowering the resolution doesn't cut it and isn't usually how DoF blurring works in games; it's not as simple a problem as it seems at first glance.

Anonymous 0 Comments

If you look at a picture with depth of field, it's like a sharp picture with a blurry picture in front of it in spots and a blurry one behind it in spots. A game has to draw three pictures instead of one, which is (in some ways) three times as hard, and on top of drawing the front and back pictures it then has to blur them, which isn't as hard as drawing them but is still hard.
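Here's a hypothetical sketch of that "three pictures" idea in Python: split a sharp render into near/in-focus/far layers by depth, blur two of them, and stack them back together. Real engines composite far more carefully (the layer edges are where it gets genuinely hard), so treat this as the intuition only.

```python
import numpy as np

def box_blur(img, r):
    """Cheap blur: average each pixel with its (2r+1) x (2r+1) neighbourhood."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(0, y-r):y+r+1, max(0, x-r):x+r+1].mean(axis=(0, 1))
    return out

def composite_dof(color, depth, near_plane, far_plane, r=3):
    """Split the sharp render into three 'pictures' by depth, blur two of them,
    then stack them: background first, sharp middle over it, foreground on top."""
    near_mask = depth < near_plane          # stuff in front of the focus zone
    far_mask = depth > far_plane            # stuff behind the focus zone
    focus_mask = ~(near_mask | far_mask)    # the sharp middle zone

    far_layer = box_blur(color * far_mask[..., None], r)
    near_layer = box_blur(color * near_mask[..., None], r)
    focus_layer = color * focus_mask[..., None]

    out = far_layer.copy()
    out[focus_mask] = focus_layer[focus_mask]
    out[near_mask] = near_layer[near_mask]
    return out

color = np.random.rand(48, 48, 3)
depth = np.linspace(1.0, 30.0, 48)[None, :].repeat(48, axis=0)
result = composite_dof(color, depth, near_plane=8.0, far_plane=18.0)
```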

Anonymous 0 Comments

First, let's establish that games (at least ones with DoF settings) generally work by only drawing what you're looking at. Whatever is "behind" you (or even just out of view to the sides) is not drawn until you turn that way. By skipping most of the world, the computer can draw what you do see faster (higher FPS), since it doesn't have to constantly render everything and can focus only on what's in view.

With the DoF setting, the game does not have a "blurry" version of everything that it just switches to, because that would be way too complicated, especially if it's a good/true DoF with levels of blur that fall off into the distance. Instead, the computer still has to draw everything it normally would (i.e. the clear version) and then add the blur on top of that based very specifically on where you're standing, the direction and angle you're looking, the amount of light, etc. In almost all cases, especially in any first-person game, DoF being on adds work rather than removing it, which will always reduce FPS since there is by definition more work to do per frame.

Anonymous 0 Comments

Stuff isn't naturally blurry. Rendering an image can be thought of as taking a picture, except that unlike taking a picture there is no need to "focus".
Cameras and our eyes have lenses that focus light to a point, and anything that is blurry simply isn't properly focused. Since rendering an image involves no "focusing", you end up with a detailed image where everything is in focus. To achieve a depth-of-field effect, the image must then be blurred, which adds computational time.
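How much a real lens blurs a point at a given depth has a standard textbook approximation, the thin-lens "circle of confusion", and games aiming for a physically plausible DoF compute something like this per pixel. A small sketch with made-up camera numbers:

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture_diam):
    """Thin-lens approximation of the blur-disc diameter (same units as focal_len)
    for a point at obj_dist when the lens is focused at focus_dist."""
    return (aperture_diam
            * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

# Example: a 50 mm lens at f/2 (25 mm aperture), focused at 2 m; work in millimetres
for d_m in (0.5, 1.0, 2.0, 4.0, 10.0):
    c = circle_of_confusion(d_m * 1000, 2000, 50, 25)
    print(f"object at {d_m:>4} m -> blur disc ~{c:.2f} mm on the sensor")
```

Objects exactly at the focus distance come out with zero blur, and the disc grows the further away from that distance they sit, which is exactly the fall-off a good DoF tries to reproduce.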

Anonymous 0 Comments

You blur a pixel by mixing in colours from the surrounding pixels. Like smearing paints to blur a painting.

The process of rendering goes object by object through the virtual world, transforming its 3D information into pixels. This happens in parallel for thousands of pixels at a time, in no set order. For this reason, the code that calculates the colour of a pixel does not know the colour of its neighbouring pixels.

Even if it could know them, what would that mean for blurring? I need the unblurred colour from the neighbouring pixel, but if my program is producing blurred colours then I'm stuck: my neighbouring pixel is already blurred. I need all the unblurred colours to compute a blur effect.

For this reason blurring is done by rendering to an image and then blurring the result, an expensive extra step.
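A toy illustration of that two-pass structure, with a made-up `shade_pixel` standing in for the real per-pixel shader: pass 1 fills the whole unblurred image, and only then can pass 2 blur, always reading from the pass-1 result rather than from its own output.

```python
import numpy as np

H, W = 32, 32

def shade_pixel(x, y):
    """Stand-in for the real pixel shader: knows nothing about its neighbours."""
    return np.array([x / W, y / H, 0.5])

# Pass 1: every pixel is computed independently (on a GPU, in parallel).
sharp = np.zeros((H, W, 3))
for y in range(H):
    for x in range(W):
        sharp[y, x] = shade_pixel(x, y)

# Pass 2: blur. Crucially we read from `sharp` (the finished, unblurred image),
# never from `blurred`, otherwise we'd be mixing already-blurred colours.
r = 2
blurred = np.zeros_like(sharp)
for y in range(H):
    for x in range(W):
        blurred[y, x] = sharp[max(0, y-r):y+r+1, max(0, x-r):x+r+1].mean(axis=(0, 1))
```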

That rendered image is a normal rectangular texture of a fixed resolution, so there is no way to have less resolution just in the areas I know I'm going to blur.

There is also variable rate shading, where the shader (the program that runs on your GPU) calculates a single colour that gets reused by multiple pixels. That is one solution that might help here, though I'm a bit out of touch with how common it is. It was a big promise for VR with eye tracking, so that more detail could be rendered where the eye is actually looking.
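Conceptually, variable rate shading just means doing the expensive per-pixel work once per small block of pixels in regions that will end up blurry (or barely looked at) anyway. A rough sketch of the idea; the real feature lives in the GPU and graphics API, not in application code like this:

```python
import numpy as np

H, W = 32, 32

def expensive_shade(x, y):
    """Stand-in for the costly per-pixel shading work."""
    return np.array([x / W, y / H, 0.5])

def shade_with_rate(rate):
    """Shade one sample per rate x rate block and reuse it for the whole block."""
    img = np.zeros((H, W, 3))
    for y in range(0, H, rate):
        for x in range(0, W, rate):
            c = expensive_shade(x, y)          # shaded once...
            img[y:y+rate, x:x+rate] = c        # ...reused by the whole block
    return img

full_rate = shade_with_rate(1)   # 1x1: normal shading, 1024 shader runs
coarse = shade_with_rate(2)      # 2x2: only a quarter of the shader runs
```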

Anonymous 0 Comments

Calculating a blurred image still requires the original image, but now you also need to sample each pixel from *multiple* source pixels in the original image, which is just additional calculations on top.

Also, there are many different types of blur. Some look good, most look bad. Cheap to compute blurs look terrible. Cinematic bokeh-producing blurs are really expensive to compute. Creating a “cinematic” DoF look requires the latter, but also needs to take depth into account at every step.

In short, blur is not just reduced resolution, it’s like turning each pixel into a translucent circle and having to blend them all together.
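Here's a literal (and deliberately slow) version of that "translucent circle" picture: every source pixel splats a disc whose radius depends on how far its depth is from a made-up focus distance, and overlapping discs are averaged. Real bokeh DoF uses much smarter gather/scatter tricks, but the cost intuition is the same.

```python
import numpy as np

def scatter_bokeh(color, depth, focus_dist, strength, max_radius=4):
    """Splat every pixel as a filled disc; radius grows with distance from focus."""
    h, w, _ = color.shape
    accum = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 1))
    for y in range(h):
        for x in range(w):
            r = int(min(max_radius, strength * abs(depth[y, x] - focus_dist)))
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dx * dx + dy * dy <= r * r:          # inside the disc
                        ty, tx = y + dy, x + dx
                        if 0 <= ty < h and 0 <= tx < w:
                            accum[ty, tx] += color[y, x]    # translucent splat
                            weight[ty, tx] += 1.0
    return accum / np.maximum(weight, 1.0)

color = np.random.rand(32, 32, 3)
depth = np.linspace(1.0, 10.0, 32)[None, :].repeat(32, axis=0)
bokeh = scatter_bokeh(color, depth, focus_dist=3.0, strength=1.0)
```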

Anonymous 0 Comments

I’ve never seen the point of depth of field. I feel like my eyes already do what DoF is trying to accomplish. That and motion blur.