How does a GPU work, and how does it help with painting pixels on the screen? How do better GPUs paint these pixels faster?

No matter how stunning the content you’re rendering, isn’t it a set number of colored pixels per frame, per second? Or is it different than that?

3 Answers

Anonymous 0 Comments

To a certain extent, you’re right. On a 4K screen there are approximately 8 million pixels. One of the basic functions of a GPU is to draw those pixels to the screen. But the real hard work is deciding what color each pixel should be.
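
Just to put a number on that “approximately 8 million”: a quick back-of-envelope calculation (plain host-side code, nothing GPU-specific, and 60 fps is just a common refresh rate) of how many pixel colors have to be decided every second:

```
#include <cstdio>

int main() {
    // 4K UHD resolution and a common refresh rate.
    const long long width  = 3840;
    const long long height = 2160;
    const long long fps    = 60;

    const long long pixels_per_frame  = width * height;         // 8,294,400
    const long long pixels_per_second = pixels_per_frame * fps; // ~498 million

    std::printf("pixels per frame: %lld\n", pixels_per_frame);
    std::printf("pixel colors per second at %lld fps: %lld\n", fps, pixels_per_second);
    return 0;
}
```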

In a 3D game, there may be hundreds or thousands of objects inside the frame. Each of those objects is made of thousands or tens of thousands of triangles. The GPU figures out where all of those triangles are in relation to the camera for every frame.
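
As a rough illustration of what that per-triangle work looks like, here is a minimal CUDA-style sketch that transforms every vertex by the same 4x4 camera (view-projection) matrix. The kernel name, the row-major matrix layout, and the one-thread-per-vertex launch are illustrative assumptions, not a description of any specific GPU:

```
#include <cuda_runtime.h>

struct Vec4 { float x, y, z, w; };

// One thread per vertex: every triangle corner gets multiplied by the same
// 4x4 camera (view-projection) matrix, every single frame.
__global__ void transformVertices(const float* m,   // 4x4 matrix, row-major
                                  const Vec4* in, Vec4* out, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    Vec4 v = in[i];
    out[i].x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    out[i].y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    out[i].z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    out[i].w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
}

// Launched with enough 256-thread blocks to cover every vertex, e.g.:
//   transformVertices<<<(count + 255) / 256, 256>>>(d_matrix, d_in, d_out, count);
```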

Next, it draws a texture on each of those objects. This is a two-dimensional picture that is mapped onto a 3D surface. The size of the original texture is never exactly the size that it will be on screen. So, the GPU has to figure out how to best draw the texture onto the object so that it looks right.
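
One common way to handle that resizing is bilinear filtering: the sample point usually lands between texels, so you blend the four nearest ones by how close the point is to each. A minimal sketch (single-channel float texture; real GPUs do this in dedicated texture-sampling hardware, and the function name here is just illustrative):

```
__device__ float sampleBilinear(const float* tex, int texWidth, int texHeight,
                                float u, float v)   // u, v in [0, 1]
{
    // Map the (u, v) coordinate into texel space.
    float x = u * (texWidth  - 1);
    float y = v * (texHeight - 1);
    int x0 = (int)x,                y0 = (int)y;
    int x1 = (x0 + 1 < texWidth)  ? x0 + 1 : x0;    // clamp at the edges
    int y1 = (y0 + 1 < texHeight) ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;

    // Blend horizontally on the top and bottom rows, then blend vertically.
    float top    = tex[y0 * texWidth + x0] * (1.f - fx) + tex[y0 * texWidth + x1] * fx;
    float bottom = tex[y1 * texWidth + x0] * (1.f - fx) + tex[y1 * texWidth + x1] * fx;
    return top * (1.f - fy) + bottom * fy;
}
```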

The last big thing the GPU has to do is figure out lighting. Without any lights the scene is simply black. If everything is at full brightness it all looks flat and ugly. Correctly figuring out how all of the lights in a scene illuminate every object, and sometimes how that light bounces off one object and onto another, requires a TON of calculation. This step used to take hours for every frame of animation for a movie like Toy Story. Today it can be done sixty times per second (or more)!
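
At its very simplest, “figuring out the lighting” for one pixel can look like Lambert diffuse shading: the surface gets brighter the more directly it faces each light. A minimal sketch that ignores shadows, speculars, and bounced light entirely (the struct and function names are just for illustration):

```
struct Vec3 { float x, y, z; };

__device__ float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Each light contributes more the more directly the surface faces it;
// facing sideways or away contributes nothing (the fmaxf clamp).
__device__ Vec3 shadeDiffuse(Vec3 albedo,              // surface color
                             Vec3 normal,              // unit surface normal
                             const Vec3* lightDirs,    // unit directions to lights
                             const Vec3* lightColors,
                             int numLights)
{
    Vec3 result = {0.f, 0.f, 0.f};
    for (int i = 0; i < numLights; ++i) {
        float facing = fmaxf(0.f, dot3(normal, lightDirs[i]));
        result.x += albedo.x * lightColors[i].x * facing;
        result.y += albedo.y * lightColors[i].y * facing;
        result.z += albedo.z * lightColors[i].z * facing;
    }
    return result;   // runs for every visible pixel, millions of times a frame
}
```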

Anonymous 0 Comments

I’m oversimplifying, but you can think of a regular CPU core that normally runs your computer as a processor that does one calculation at a time. If you need to apply the same multiplication to 100 pieces of data, you usually have to loop over each piece of data and do the multiplication over and over again. (Modern CPUs actually do allow some parallelization and operations on multiple pieces of data at the same time, but this is still useful for comparison.)
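
Roughly what that looks like in code, as a plain CPU-side loop (the array contents and the multiply-by-two are made up purely for illustration):

```
#include <cstdio>

int main() {
    // One core walking through the data one element at a time
    // (ignoring SIMD and multithreading for clarity).
    float data[100];
    for (int i = 0; i < 100; ++i) data[i] = (float)i;

    // The same multiplication, applied 100 times in a row.
    for (int i = 0; i < 100; ++i) data[i] *= 2.0f;

    std::printf("%.1f %.1f\n", data[0], data[99]);   // prints: 0.0 198.0
    return 0;
}
```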

GPUs, on the other hand, really excel at doing operations on many pieces of data at the same time. They typically have hundreds of cores, allowing for massive amounts of parallelization, and a single instruction running on one core can operate on many inputs at the same time.

When it comes to rendering scenes on your screen, you are usually breaking down an environment into triangles and performing the same operations on points or regions defined by those triangles. For instance, you might want to blur far away pixels and focus on nearer pixels, so you might have a blurring function that you need to run over every pixel on the screen. The GPU would allow you to do that more efficiently because it’s the same calculation just being spread across many different inputs.
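
Here’s a hedged sketch of that per-pixel idea as a CUDA kernel: one thread per pixel, each computing the average of a small neighborhood. A real depth-of-field blur would vary the blur radius based on depth; the 3x3 box blur, the single-channel image, and the kernel name are simplifications of my own:

```
__global__ void boxBlur(const float* in, float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Average this pixel with its immediate neighbors.
    float sum = 0.f;
    int count = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                sum += in[ny * width + nx];
                ++count;
            }
        }
    }
    out[y * width + x] = sum / count;
}

// Launched with a 2D grid so every pixel gets its own thread, e.g.:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   boxBlur<<<grid, block>>>(d_in, d_out, width, height);
```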

GPUs aren’t just for graphics, though. They are often also used for things like machine learning models and physics simulations. In these cases, the problem they’re trying to solve just looks similar: you have one calculation that you are trying to do across many different inputs. That makes GPUs much more attractive than traditional CPUs for those workloads.

Anonymous 0 Comments

The hard part is not getting the picture onto the screen. That’s the easy part. The hard part is literally creating the contents of that picture from scratch *every single frame*.

In the real world, we have light rays bouncing off of stuff in all directions all the time. All you need is some kind of camera that can capture those light rays and record them onto something like film. That recording tells you where things were and what was going on at the time the picture was taken.

In a video game, you kind of have the opposite problem. The computer knows exactly where everything is in the game world at any given time, including the camera. But there are no freebie light rays bouncing around to collect. So the camera can’t really “see” anything. There’s no “picture” to take.

The GPU’s job is to simulate those light rays in real time. In the real world, light rays are emitted from light sources, reflected by objects, and some of that light makes it to the camera. Calculating the paths of light rays that miss the camera completely would be wasted calculation time, so GPUs typically work in reverse. They shoot rays out from the camera to hit stuff, and then try to back-calculate what light sources they may have come from to determine what color and brightness that bounced-off surface should be.
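
A minimal sketch of the first step of that reverse process in a ray-traced renderer: each GPU thread builds the ray leaving the camera through its own pixel. The pinhole camera at the origin looking down -z and the roughly 90-degree field of view are assumptions for illustration; figuring out what each ray hits and tracing back to the lights comes after this step:

```
struct Ray { float ox, oy, oz; float dx, dy, dz; };

__global__ void generateCameraRays(Ray* rays, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Map the pixel to a point on an imaginary screen one unit in front of
    // the camera: -1..1 in x, scaled by the aspect ratio in y.
    float aspect = (float)height / (float)width;
    float px = ((x + 0.5f) / width) * 2.f - 1.f;
    float py = (((y + 0.5f) / height) * 2.f - 1.f) * aspect;

    Ray r;
    r.ox = 0.f; r.oy = 0.f; r.oz = 0.f;          // camera sits at the origin
    float len = sqrtf(px * px + py * py + 1.f);  // normalize the direction
    r.dx = px / len; r.dy = py / len; r.dz = -1.f / len;

    rays[y * width + x] = r;
}
```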

Simulating accurate optical physics to a realistic degree, 60 times every second, for enough pixels to cover a 4K display is no joke. That takes a horrendous number of math operations. If you assigned a general-purpose CPU to this task, it would be utterly swamped. So we design custom-built chips that do *only* this task, and do it extremely well, at the cost of sucking at almost everything else. That’s a GPU.
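
To see why a CPU gets swamped, here’s a back-of-envelope estimate. The ops-per-pixel figure is a made-up round number just to get a feel for the scale; the real cost depends entirely on the scene and the effects in use:

```
#include <cstdio>

int main() {
    const double pixels        = 3840.0 * 2160.0;  // ~8.3 million pixels at 4K
    const double fps           = 60.0;             // frames per second
    const double ops_per_pixel = 1000.0;           // assumed round number

    const double ops_per_second = pixels * fps * ops_per_pixel;
    std::printf("~%.0f math operations per second (%.1f trillion)\n",
                ops_per_second, ops_per_second / 1e12);
    return 0;
}
```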