Why do computers need GPUs (integrated or external)? What information is the CPU sending to the GPU that it can’t just send to a display?

41 Answers

Anonymous 0 Comments

The CPU is really good at task-switching, doing a bunch of things basically all at once.

The GPU is designed to configure itself for ONE task and then do the same thing bazillions of times per second.

It’s like comparing a tractor to a sports car. They’re fundamentally different machines.

Anonymous 0 Comments

Computers don't "need" GPUs. It's just that if you have the CPU doing all of the processing for images, there is a whole lot less CPU "time" available for all the general-purpose stuff a computer does, and everything would be slower, including the graphics. GPUs are designed to do mathematical processing very quickly and can handle graphics processing while the CPU is doing other general-purpose stuff. There are lots of chips on a motherboard doing special-purpose stuff so that the CPU doesn't have to (that's why phones now have SoCs: they put a bunch of special-purpose hardware on the same die as the CPU).

Anonymous 0 Comments

If we're talking about a 3D game, the information the CPU passes to the GPU is stuff like the shape of each object in a scene, what color or texture it has, and where it is located. The GPU turns that into a picture your monitor can display. Going from a bunch of shapes and colors to a picture involves a lot of matrix multiplication, which is something a GPU can do much faster than a CPU.
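Here's a rough sketch of that kind of math in Python with NumPy (the vertices and the transform matrix are invented for illustration; a real GPU runs this same multiply on millions of vertices per frame, in hardware):

```python
import numpy as np

# Three corners of a hypothetical triangle, in homogeneous coordinates
# (x, y, z, w). A real scene has millions of these.
vertices = np.array([
    [ 1.0,  1.0, 1.0, 1.0],
    [ 1.0, -1.0, 1.0, 1.0],
    [-1.0,  1.0, 1.0, 1.0],
])

# A 4x4 transform that slides everything 2 units along the x axis.
# Rotations, scaling, and camera projection are all matrices like this.
translate = np.array([
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# One matrix multiply moves every vertex at once.
moved = vertices @ translate.T
print(moved[:, :3])  # [[3. 1. 1.], [3. -1. 1.], [1. 1. 1.]]
```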

Anonymous 0 Comments

There’s no calculation a GPU does that a CPU cannot, and in the very old days the CPU just wrote to a particular location in memory when it wanted something to show up on screen. The reason you need a GPU is that displays have millions of pixels which need to get updated tens to hundreds of times per second, and GPUs are optimized to do a whole lot of the operations needed to render images all at the same time.
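To put rough numbers on that (assuming an ordinary 1080p display at 60 Hz):

```python
# Rough scale of the job for a 1920x1080 display refreshed 60 times a second.
pixels_per_frame = 1920 * 1080              # ~2.07 million pixels
updates_per_second = pixels_per_frame * 60  # pixel updates every second
print(f"{updates_per_second:,}")            # 124,416,000
```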

It’s sort of like asking why we need container ships when aircraft exist that can carry cargo, and the answer is that the container ship can move a whole lot more cargo at once, even if it has some restrictions on where it can take that cargo.

Anonymous 0 Comments

Think of it this way.

The CPU is like a chef in a restaurant. It sees an order coming in for a steak and potatoes and salad. It gets to work cooking those things. It starts the steak in a pan. It has to watch the steak carefully, and flip it at the right time. The potatoes have to be partially boiled in a pot, then finished in the same pan as the steak.

Meanwhile, the CPU delegates the salad to the GPU. The GPU is a guy who operates an entire table full of salad chopping machines. He can only do one thing: chop vegetables. But he can stuff carrots, lettuce, cucumbers, and everything else, into all the machines at once, press the button, and watch it spit out perfect results, far faster than a chef could do.

Back to the programming world.

The CPU excels at processing the main logic of a computer program. The result of one computation is often needed for the next step, so it can only do so many things at once.

The GPU excels at getting a ridiculous amount of data and then doing the same processing on ALL of it at the same time. It is particularly good at the kind of math that arranges thousands of little triangles in just the right way to look like a 3D object.

Anonymous 0 Comments

A GPU is essentially a second CPU for your computer. The difference is that while a CPU is good at any task you can throw at it, a GPU is *really* good at exactly one thing – performing the kind of complex mathematical calculations that are used to render graphics. A CPU could technically perform these operations, but it would just be a lot slower at it.

When you are playing a game that has fancy HD graphics and needs to run at a high FPS, the CPU can offload the rendering to the GPU and the GPU sends the final frames to the display directly, resulting in much faster performance.

Anonymous 0 Comments

One explanation I like is comparing the CPU to Superman and the GPU to 1,000 normal people. The CPU is powerful and can perform lots of instructions, just like Superman can lift heavy things easily. The GPU can perform simple calculations in parallel, just like 1,000 children who are taught to do math can outperform even Superman (or the CPU) if you can divide the task up. The pixels each need their own individual calculation. The CPU is slow at this because it's a single chip whose cores you can count on your hands. The GPU, on the other hand, is made up of a lot of small micro-CPUs with a huge core count; a 1050 Ti has about 768 CUDA cores.

Anonymous 0 Comments

A matrix is a bunch of numbers arranged in a rectangle that is X numbers wide and Y numbers tall.

So if X is 10 and Y is 10, you have a 10 by 10 square filled with random (doesn’t matter) numbers. A total of 100 numbers fill the matrix.

If you tell the CPU you want to add +1 to all of the numbers, it does them one by one, left to right, top to bottom. Let's say adding two numbers together takes 1 second, so this takes 100 seconds, one for each number in our square.

If you instead tell a GPU you want to add +1 to all of the numbers, it adds +1 to all the numbers simultaneously and you get your result in 1 second. How can it do that? Well, it has 100 baby-CPUs in it, of course!
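Here's a rough sketch of that difference in Python with NumPy (the explicit loop plays the one-at-a-time CPU, the single whole-array operation plays the 100-baby-CPU GPU; the one-second timings above are made up, but the shape of the work is real):

```python
import numpy as np

grid = np.random.randint(0, 100, size=(10, 10))  # the 10-by-10 square

# "CPU style": visit each of the 100 numbers one at a time.
slow = grid.copy()
for y in range(10):
    for x in range(10):
        slow[y, x] += 1

# "GPU style": one instruction applied to every element at once.
fast = grid + 1

assert (slow == fast).all()  # same answer, very different amounts of waiting
```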

So as others have said a CPU can do what a GPU can do, just slower. This crude example is accurate in the sense that a GPU is particularly well-suited for matrix operations… But otherwise it’s a very incomplete illustration.

You might wonder: why doesn't everything go through the GPU if it's so much faster? There are a lot of reasons for this, but the short answer is that the CPU can do anything the baby CPUs in a GPU can, while the opposite is not true.

Anonymous 0 Comments

The key difference is how the two processors function. A GPU is designed to do the same calculation lots of times at once, though with differing values, while a CPU is designed to do lots of different calculations quickly.

A simple way to think about this is that a single object on the screen in a game covers multiple pixels at once, and each of those pixels will generally need the exact same set of calculations, just with different input values (think a*b+c with differing values for a, b, and c). The actual rendering process applies the same idea at multiple levels: you typically position and rotate the points (vertices) of each object in the same way. It also turns out that this same style of calculation is useful for a lot of other stuff: physics calculations*, large math problems*, and artificial intelligence*, to name a few.
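Here's a hypothetical sketch of that "same formula, different inputs per pixel" pattern in Python with NumPy (the array names and the lighting formula are invented; a real shader would run the equivalent per pixel on the GPU):

```python
import numpy as np

# Invented per-pixel example: brightness = albedo * light + ambient,
# i.e. a*b+c with different a and b for each of ~2 million pixels.
h, w = 1080, 1920
albedo = np.random.rand(h, w)   # per-pixel surface reflectivity
light = np.random.rand(h, w)    # per-pixel incoming light
ambient = 0.1                   # constant fill light (the c term)

brightness = albedo * light + ambient  # same formula, every pixel at once
print(brightness.shape)  # (1080, 1920)
```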

However, for general program logic you aren't repeating the same calculations over and over with different data; instead, you need to vary the calculations constantly based on what the user is trying to do. This logic often takes the form of "if X do Y else do Z".

Now, modern CPUs have some hardware designed to function like a GPU (SIMD units), even if you discount any embedded GPU. Using it is great when you only need a small amount of that bulk processing, where the cost of sending the work to the GPU and receiving the result back would outweigh the benefit, but it's nowhere near as fast as the full capabilities of a GPU.

Beyond those design differences, which are shared between dedicated and embedded GPUs, a dedicated GPU has the benefit of having its own memory (RAM) and memory bus (the link between the processor and memory). This means the CPU and GPU can each access memory without stepping on each other and slowing each other down. Many uses of a GPU see massive benefits from this, especially games using what is known as "deferred rendering", which requires a ton of memory.

As a note, there is no reason you *couldn't* just do everything with one side, and, in fact, older games (e.g. Doom) did everything on the CPU. In modern computers, both the CPU and GPU are what is known as Turing complete, which means they can theoretically perform every possible calculation. It's just that each is optimized for certain types of calculations, at the expense of other kinds.

* As a note, artificial intelligence heavily relies on linear algebra, as does computer rendering. Many other math problems can be converted into a set of matrix operations, which is exactly what GPUs specialize in.

Anonymous 0 Comments

The CPU is a mathematician that sits in the attic working on a new theory.

The GPU is hundreds of thousands of 2nd graders working on 1+1 math all at the same time.

These days, the CPU is more like 8 mathematicians sitting in the attic, but you get the point.

They’re both suited for different jobs.

The CPU _could_ update the picture that you see on the display, but that’s grunt work.

Edit: I don't mean the cores in a GPU are stupid, just that their instruction set isn't as complex and versatile as a CPU's. That's what I meant.