Why are games rendered with a GPU while Blender, Cinebench and other programs use the CPU to render high quality 3d imagery? Why do some start rendering in the center and go outwards (e.g. Cinebench, Blender) and others first make a crappy image and then refine it (vRay Benchmark)?


Software engineer here. There’s a lot of wrong information in here guys… I cannot delve into all of it. But these are the big ones: (also, this is going to be more like an ELI15)

A lot of you are saying CPU rendering favors quality and GPU rendering does quick but dirty output. This is wrong. Both a CPU and a GPU are chips able to execute calculations at insane speeds. They are unaware of what they are calculating; they just calculate what the software asks them to.

**Quality is determined by the software.** A 3D image is built up from a 3D mesh, shaders and light. The mesh is the shape, and its quality is mostly expressed in polygon count: lots of polygons add lots of shape detail but make the shape a lot more complex to handle. A low poly rock shape can be anywhere from 500 to 2,000 polygons (the little facets the mesh is made of). A high poly rock can be as stupid as 2 to 20 million polygons.
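To make the polygon idea concrete, here is a rough sketch in plain Python (the shape and numbers are made up, purely for illustration): a mesh is nothing more than a list of points plus triangles connecting them, and the "poly count" is just the length of that triangle list.

```python
# Toy mesh: a tetrahedron. Real meshes are the same idea, just with far more entries.
vertices = [
    (0.0, 0.0, 0.0),  # each vertex is an (x, y, z) position in space
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
# Each face is three vertex indices -> one triangle (one "polygon").
faces = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]
print(f"{len(vertices)} vertices, {len(faces)} polygons")
# A low poly rock would have hundreds of these faces; a high poly one, millions.
```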

You may know this mesh as wireframe.

Games will use the lowest polygon count per object mesh that still makes it look good. Offline render projects will favor high poly counts for the detail, accepting longer calculation times as the cost.

That 3D mesh is just a “clay” shape though. It needs to be colored and textured. Meet shaders. A shader is a set of instructions on how to display a ‘surface’. The simplest shader is a flat color. Add to that a behavior with light reflectance: glossy? Matte? Transparent? All settings that add to the calculation. We can fake a lot of things in a shader, even a lot of things that seem like geometry.

We tell the shader to fake bumpiness and height in a surface (e.g. a brick wall) by giving it a bump map, which it uses to add fake depth to the surface. That way the mesh needs to be way less detailed. I can make a 4-point square look like a detailed wall with grit, shadows and height texture, all with a good shader.

Example: http://www.xperialize.com/nidal/Polycount/Substance/Brickwall.jpg
This is purely a shader with all its texture maps. Plug these maps into the right channels of a shader and your 4-point plane can look like a detailed mesh, all by virtue of the shader faking the geometry.
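Here is a very rough sketch of that trick in plain Python rather than a real shader language (the normal-map value below is made up): the shader ignores the plane's flat geometric normal and shades the pixel with a "pretend" normal read from the texture, so the flat surface picks up light and shade as if it were bumpy.

```python
import math

def lambert(normal, light_dir):
    """Basic diffuse shading: brightness = max(0, N . L)."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

flat_normal = (0.0, 0.0, 1.0)  # the plane's real normal: straight up, everywhere

# Made-up value read from a normal map: "pretend this pixel tilts toward a mortar line".
mapped = (0.3, 0.1, 0.95)
length = math.sqrt(sum(c * c for c in mapped))
mapped_normal = tuple(c / length for c in mapped)

light_dir = (0.0, 0.0, 1.0)  # light shining straight down onto the plane

print(lambert(flat_normal, light_dir))    # 1.0  -> every pixel equally bright, looks flat
print(lambert(mapped_normal, light_dir))  # ~0.95 -> this pixel darkens, a "bump" appears
```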

Some shaders can even mimic light passing through a material, like skin or candle wax (subsurface scattering). Some shaders emit light, like fire should.

The more complex the shader, the more time it takes to calculate. In a rendered frame, every mesh needs its own shader(s) or materials (configured shaders, reusable for a consistent look).

Let’s just say games have a 60 fps target, meaning 60 rendered images per second go to your screen. That means that every 60th of a second an image must be ready.
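That budget is simple arithmetic:

```python
target_fps = 60
frame_budget_ms = 1000 / target_fps  # ~16.67 ms to do *everything* for one image
print(f"Each frame must be finished in {frame_budget_ms:.2f} ms")
```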

For a game, we really need to watch our polygon count per frame and have a polygon budget. Never use high poly meshes and don’t go crazy with shaders.

The CPU calculates physics, networking, mesh points moving, shader data etc. per frame. Why the CPU? The simple explanation is that we have been programming CPUs for a long time and we are good at it. The CPU has more on its plate, but we know how to talk to it and our shaders are written in its language.

A GPU is just as dumb as a CPU, but it is more available, if that makes sense. It is also built to do major grunt work as an image rasterizer. In games, we let the GPU do just that: process the bulk data after the CPU and rasterize it to pixels. It’s more difficult to talk to though, so we tend not to instruct it directly. But more and more, we are offloading traditionally-CPU roles onto it, because thanks to some genius people we can talk to it better and better.
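A toy sketch of why the GPU is such a good fit for that grunt work (plain Python, not real GPU code, and the "shader" here is a dummy): rasterizing is the same tiny program run once per pixel, millions of times per frame, which is exactly the kind of massively parallel job GPUs are built for.

```python
def per_pixel_program(x, y):
    # Stand-in for a pixel shader: decide a color for this one pixel.
    return (x % 256, y % 256, 128)

width, height = 1920, 1080
framebuffer = [per_pixel_program(x, y) for y in range(height) for x in range(width)]
print(len(framebuffer), "pixels shaded for one frame")  # ~2 million, ideally 60 times a second
```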

Games use a technique called direct lighting, where light is mostly faked and calculated in one go, as a whole. Shadows and reflections can be baked into maps. It’s a fast way for a game, but it looks less real.
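A toy sketch of the baking idea (plain Python, made-up numbers): pay for an expensive lighting calculation once, store the result in a map, and at runtime just look it up instead of recalculating it every frame.

```python
def expensive_light_calculation(point):
    # Stand-in for bounced light, soft shadows, etc.
    x, y, z = point
    return 0.2 + 0.8 * max(0.0, z)

# Offline, while building the game: bake a value for every sample point of the lightmap.
sample_points = [(0.0, 0.0, 0.1), (0.0, 0.0, 0.9)]
lightmap = {p: expensive_light_calculation(p) for p in sample_points}

# At runtime, every frame: no recomputation, just a cheap lookup.
def shade(point):
    return lightmap[point]

print(shade((0.0, 0.0, 0.9)))  # 0.92, fetched instantly instead of recalculated
```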

Enter the third aspect of rendering time (mesh, shader, and now light). Games have to fake it, because light is what takes the most render time. The most accurate way we can simulate light rays hitting shaded meshes is ray tracing: a calculation of a light ray travelling across the scene and hitting everything it can, just like real light.
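The core test behind it is easy to sketch; here is a toy ray-versus-sphere hit check in plain Python (a real ray tracer fires at least one such ray per pixel, plus more for bounces, shadows and reflections, which is where the cost explodes):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the first sphere hit, or None if it misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c           # direction is assumed normalized, so a = 1
    if disc < 0:
        return None                  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One single ray, shot from the camera straight ahead along the z axis:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0: hit, 4 units away
```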

Ray tracing is very intensive, but it is vastly superior to direct lighting. Offline rendering for realism is done with ray tracing. In DirectX 12, Microsoft has given games a way to use a basic form of ray tracing, but it slams our current CPUs and GPUs because even this basic version is so heavy.

Things like Nvidia RTX use hardware dedicated to processing ray tracing, but it’s baby steps. Without RTX cores, ray tracing is too heavy to do in real time. Technically, RTX was made to accelerate DirectX raytracing and is not strictly required for it; it’s just too heavy to enable on older GPUs, so it doesn’t make sense there.

And even offline renderers are benefiting from the RTX cores. OctaneRender 2020 can render scenes up to 7x faster by using the RTX cores. So that’s really cool.

— edit

Just to compare, here is a mesh model with Octane shader materials, rendered offline with ray tracing, that I did recently: https://i.redd.it/d1dulaucg4g41.png (took just under an hour to render on my RTX 2080S)

And here is the same mesh model with game engine shaders in realtime non-RT rendering: https://imgur.com/a/zhrWPdu (took 1/140th of a second to render)
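Rough back-of-the-envelope math on that gap (rounding "just under an hour" up to a full hour):

```python
offline_seconds = 60 * 60    # one offline ray-traced frame
realtime_seconds = 1 / 140   # one game-engine frame
print(f"{offline_seconds / realtime_seconds:,.0f}x")  # ~504,000x more time per frame
```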

Different techniques using the hardware differently for, well, a different purpose 😉
