Why do computers need GPUs (integrated or external)? What information is the CPU sending to the GPU that it can’t just send to a display?


Anonymous 0 Comments

Most modern CPUs can do exactly what you are describing, as they have graphics-processing capability built into the chip itself. This is enough for most office or school computers, as well as low-powered laptops. Dedicated GPUs are generally needed for more demanding tasks that go beyond what the CPU’s built-in graphics can handle. For example, a very detailed game or graphics-heavy design work will almost certainly require a separate GPU in the PC. However, some CPU models lack the “graphics chip” entirely, or have it disabled by the manufacturer; these are generally the cheaper models.

With that being said, if you are building a budget PC for very light gaming, you can just get a modern CPU with onboard graphics and accept low-end performance.

That said, I suspect your question is really about why a CPU would need *any form* of graphics unit at all, even one that is “on the chip” as with many modern CPUs. The CPU, strictly speaking, just does computations, very quickly. It does not have the hardware to generate the output signals required by display standards such as HDMI, VGA, or DisplayPort. It is the same reason the system needs RAM, a hard disk, and other peripherals attached to do what you want it to do.

Anonymous 0 Comments

A CPU does lots of simple but wildly different tasks; a GPU does the complex tasks that it is purpose-built for.

Anonymous 0 Comments

A computer doesn’t need a GPU.

What a GPU is good at is performing the same task on a bunch of pieces of data at the same time. You want to add 3.4 to a million numbers? The GPU will do it much faster than a CPU can. On the other hand, it can’t do a series of complex things as well as a CPU, or move stuff in and out of the computer’s memory or from storage. You can use the GPU’s special abilities for all sorts of things, but calculations involving 3D objects and geometry are a big one: it’s super useful in computer graphics (which is why it’s called a Graphics Processing Unit) and games. If you want stunning graphics for games, the GPU is going to be the best at doing that for you.

The CPU talks to a GPU using a piece of software called a “driver”. It uses that to hand data to the GPU, like 3D shapes and textures, and then it sends commands like “turn the view 5 degrees”, “move object 1 left 10 units”, and stuff like that. The GPU performs the necessary calculations and makes the picture available to send to the screen.

It’s also possible to program the GPU to solve math problems that involve doing the same thing to a lot of pieces of data at the same time.
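
To make that concrete, here is a rough Python sketch of the “same operation on a million values” pattern, using NumPy on the CPU as a stand-in; GPU libraries such as CuPy expose a nearly identical interface but spread the same work across thousands of GPU cores:

```python
import numpy as np

# A million numbers, and one operation applied to all of them.
data = np.random.rand(1_000_000)

# Vectorized form: a single "add 3.4 to everything" request,
# instead of a Python loop that visits each element in turn.
result = data + 3.4

# On a GPU the same idea goes much further, e.g. with CuPy:
#   import cupy as cp
#   result = cp.asarray(data) + 3.4   # thousands of cores share the work
```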

Anonymous 0 Comments

The CPU is really smart, but each “core” can only do one thing at once. 4 cores means you can process 4 things at the same time.

A GPU has thousands of cores, but each core is really dumb (basic math, and that’s about it) and is actually slower than a CPU core. Having thousands of them, though, means that *certain* operations that can be split into thousands of simple math problems get done much faster than on a CPU, for example the millions of calculations needed to work out every pixel on your screen.

It’s like having 4 college professors and 1000 second graders. If you need calculus done, you give it to the professors, but if you need a million simple addition problems done you give it to the army of second graders and even though each one does it slower than a professor, doing it 1000 at a time is faster in the long run.

Anonymous 0 Comments

To put this into perspective, a relatively low-resolution monitor is 1920×1080 pixels. That is over 2 million pixels, each of which potentially needs 3 numbers (red, green, and blue values) for every frame. One gigahertz is 1 billion operations per second. Rendering 60 frames per second means 60 frames * 3 color values * 2 million pixels = 360 million operations per second, roughly 1/3 of a 1 GHz budget. And that is just delivering the pixels; on top of it, graphics involve tons of other work like geometry, lighting, and antialiasing that has to happen for every frame that is displayed.
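
A quick back-of-the-envelope check of those figures (a sketch only, using the exact resolution rather than the rounded 2 million, which is why it lands a little above 360 million):

```python
pixels = 1920 * 1080            # 2,073,600 pixels ("over 2 million")
color_values = 3                # red, green and blue per pixel
frames_per_second = 60

values_per_second = pixels * color_values * frames_per_second
print(f"{values_per_second:,} color values per second")   # 373,248,000

# Roughly a third of a 1 GHz budget, before any of the actual rendering,
# lighting or antialiasing work has even started.
print(values_per_second / 1_000_000_000)                   # ~0.37
```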

It becomes clear that raw speed is not going to solve the problem. We like fast processors because they are more responsive, just like our eyes like higher frame rates because the motion looks smoother. To get smooth, high-frame-rate video, we need specialized processors that can render millions of pixels dozens of times a second. The trick with GPUs is parallelization.

GPUs have relatively low clock speeds (around 1 GHz) compared to CPUs (3-4 GHz), but they have thousands of cores. That’s right, thousands of cores. They also work on data in wide batches, applying one instruction to a whole group of values at once rather than one value at a time the way a typical CPU instruction does. What this all boils down to is boosting throughput. Computing values for those millions of pixels becomes a whole lot easier when you have 2,000 “slower” cores doing the work all together.

The typical follow-up question is “why don’t we just use GPUs for everything since they are so fast and have so many cores?” Primarily because GPUs are purpose-built for that one kind of work. General computing on GPUs is possible, but we humans like computers to be super snappy: whereas CPUs can juggle dozens of varied tasks without a hiccup, GPUs are powerhouses for churning through an incredible volume of repetitive calculations.

PS: Some software takes advantage of the GPU for churning through data. Lots of video and audio editing software can leverage your GPU. Also CAD programs will use the GPU for physics simulations for the same reason.

Anonymous 0 Comments

Many many years ago, there were only CPUs, and no GPUs.

Take the ancient [Atari 2600 games console](https://en.wikipedia.org/wiki/Atari_2600) as an example. It did not have a GPU. Instead, the CPU itself had to make sure the screen was drawn at exactly the right moment.

When the TV was ready to receive the video signal from the games console, the CPU had to stop processing the game so that it could generate the video signal being drawn on the screen. It then had to keep doing this for the entire frame’s worth of information. Only when the video signal reached the bottom corner of the screen could the CPU actually do any game-mechanics updates.

This meant that (from memory) the Atari 2600’s CPU could only spend about 30% of its time on game processing, with the remaining 70% entirely dedicated to video updates, as the [CPU would literally race the electron beam](https://www.youtube.com/watch?v=sJFnWZH5FXc) in the TV.

So later on, newer generations of computers and game consoles started shipping with dedicated circuitry to handle the video processing. These started out as modest microprocessors in their own right, eventually evolving into the massively parallel processing behemoths they are today.

Anonymous 0 Comments

The CPU is a handful of college math majors: they are skilled at handling a wide variety of problems and are, in general, much faster than most at doing those calculations.
The GPU is a gymnasium full of 5th graders: don’t ask them to handle advanced calculus, but give them a few thousand basic algebra questions and that mob of students is going to be done way faster than that handful of math majors.

Less ELI5: in general, the CPU decides what happens on the screen and the GPU is in charge of working out what that looks like. The one handles a lot of varied calculations, while the other is specialized at just drawing shapes and applying textures to them, but doing it with a ton of cores at once.

When it comes to games, the CPU runs the game itself, tracking which entity is where and where things are headed. The GPU is constantly trying to draw what the CPU says is there: it loads all the assets into its own memory, does all the calculations for how the scene is lit, and dumps the result onto the screen. Once it’s done with all of that, it asks the CPU where everything is again and starts all over.

The CPU contains only a handful of very powerful general-purpose cores to do the work; a modern GPU, on the other hand, has thousands of less flexible, dumber cores that can brute-force their way through all the work it takes to generate a frame in a modern game. Having much faster memory on board the card itself also helps, since the GPU is constantly referencing large texture files and storing information about the current frame it’s working on.
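
A very rough sketch of that division of labour, with every name made up for illustration (real engines and GPU APIs such as Direct3D, Vulkan or OpenGL are far more involved):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    position: tuple   # where the CPU says it is
    velocity: tuple   # where the CPU says it is headed

@dataclass
class Game:
    entities: list = field(default_factory=list)

    def update(self, dt):
        # CPU work: game logic and physics, deciding where everything is.
        for e in self.entities:
            e.position = tuple(p + v * dt for p, v in zip(e.position, e.velocity))

def submit_to_gpu(entities):
    # Stand-in for the driver calls that hand the scene to the GPU,
    # which then does the lighting and rasterising and puts a frame on screen.
    for e in entities:
        print(f"draw {e.name} at {e.position}")

game = Game([Entity("player", (0.0, 0.0), (1.0, 0.0))])
for frame in range(3):             # three trips around the loop
    game.update(dt=1 / 60)         # CPU: decide what is where
    submit_to_gpu(game.entities)   # GPU: draw what the CPU says is there
```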

Anonymous 0 Comments

It could, but then it would have to do everything the GPU does on top of its own work, and that would slow everything down. This is called software rendering.

Furthermore, a GPU and a CPU aren’t built the same way.
Sure, they are both made from millions of transistors (digital switches) that turn on and off billions of times a second, but a GPU is like lots of small houses in the suburbs, while a CPU is like a few tall skyscrapers in the city; they are built to do different things.

The GPU has a highly specialized set of processors and pipelines that are really good at doing almost the same thing to a whole set of data, really fast and in parallel, whereas the CPU is a more general processor, built to do much more than just shader, texture, and vertex calculations (the things that, when done properly, make 3D graphics amazing).

The CPU does everything else: running programs, handling user input, and communicating with all the other devices, like the sound card, the network card, disk storage, and memory.

Early on, “GPUs” were usually just integrated into the motherboard. They were called “framebuffers”, and they did mostly that: “buffer” (store) one or two “frames” of data long enough for the scanlines to “draw” the image in the buffer onto the screen.
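
To make “framebuffer” concrete, a minimal sketch: it is really just a block of memory holding one color value per pixel, which the display circuitry reads out line by line:

```python
WIDTH, HEIGHT = 320, 240

# One (r, g, b) triple per pixel: this whole 2D list is the "frame buffer".
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

# The CPU (or graphics chip) writes pixels into it...
framebuffer[120][160] = (255, 255, 255)   # a single white dot mid-screen

# ...and the display hardware scans it out, top to bottom, one line at a time.
for scanline in framebuffer:
    pass  # in real hardware: turn each pixel into part of the video signal
```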

But then people wanted more from video games. They wanted effects, like blending two colors together to create transparency for when half of your character was in the water.

Sure, you could do it in software, but that was slow, and as resolutions increased, rendering took longer and longer. So engineers thought: why not make a specialized card to do those effects for you? This meant the framebuffer now had its own processor to take care of things like transparency, and the CPU was freed up to handle more of what was going to be displayed on the screen.

Soon technology became fast enough that you could send vertices (3D points instead of 2D pixels) to the video card and tell it to fill in the shapes those points describe (usually triangles) with color (pixels) to be displayed on screen, instead of telling it to fill the screen with many flat (2D) pixels yourself. And it could do this 60 times per second.

Then, instead of just filling in triangles with a single color, you could upload a texture (an image, say of a barn wall) to the video card and tell it to use that image to paint in the triangles that make up the wall in your scene. And it could do this for hundreds of thousands, and then millions, of triangles.
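
The data being handed over is, at heart, quite plain. A sketch of what one textured triangle amounts to (the layout here is made up for illustration, but real vertex buffers are conceptually the same):

```python
# One triangle of the barn wall: for each corner, a 3D position (x, y, z)
# and a texture coordinate (u, v) saying which part of the image goes there.
wall_triangle = [
    #  x,    y,    z,    u,   v
    (0.0,  0.0,  0.0,  0.0, 0.0),
    (1.0,  0.0,  0.0,  1.0, 0.0),
    (1.0,  1.0,  0.0,  1.0, 1.0),
]

# The CPU uploads thousands of these (plus the barn-wall image itself) once,
# and after that it mostly just says "draw that lot from over here".
```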

All the while, the CPU is free to focus on other parts of the game, neither waiting for data to load into the video card (thanks to direct memory access, for example) nor doing any actual pixel filling. It just sends the data over, along with commands saying what to do with it.

Now imagine if the CPU had to do everything: it would be huge, complex, expensive, and hot, and if it broke you’d have to replace your GPU AND your CPU at the same time.

Anonymous 0 Comments

Screens have millions of pixels and need to be updated at least 50 times per second. It is possible to connect a CPU directly to an HDMI cable (I have done that) but that doesn’t really leave much time for the CPU to do any other work.

For that reason computers have had dedicated graphics chips for a very long time. In the early days those were fairly simple chips that just shared memory with the CPU. The CPU would put instructions like “blue 8×8 pixel square goes here”, “Pac-Man goes there” into memory and then the graphics chip would send the right amount of electricity to the monitor at the right time.

These graphics chips have become more and more advanced and about 25-ish years ago were rebranded as GPUs. Nowadays they are quite generic and can run complicated calculations at phenomenal speeds.

Anonymous 0 Comments

They don’t *need* a GPU. Until the 1990s, computers didn’t have one at all, unless you count as a GPU the fairly simple device that reads a chunk of RAM and interprets it as video data. Early 3D games like Quake ran perfectly fine on these systems and did all the work on the CPU.

What the CPU sends is a lot of textures, and a bunch of triangles, plus information on how to apply the textures to the triangles.

The CPU could do all of this itself, but the GPU is a lot faster at certain tasks: drawing triangles is one such task, and twisting the textures into place is another.

Early GPUs just did the texturing, which was the most CPU-intensive task.
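
To see why texturing was the expensive part, here is a toy version of the work that has to happen for every pixel of every textured triangle, every frame (nearest-neighbor lookup only; real hardware also filters, lights and blends):

```python
# A tiny 2x2 "texture": four colors.
texture = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 0)],
]
tex_h, tex_w = len(texture), len(texture[0])

def sample(u, v):
    # Map texture coordinates in [0, 1) to a texel (nearest neighbor).
    x = min(int(u * tex_w), tex_w - 1)
    y = min(int(v * tex_h), tex_h - 1)
    return texture[y][x]

# For every pixel a triangle covers, the renderer has to work out its (u, v)
# and fetch a color, millions of times per frame at screen resolution.
WIDTH, HEIGHT = 8, 4   # keep the demo tiny
for py in range(HEIGHT):
    row = [sample(px / WIDTH, py / HEIGHT) for px in range(WIDTH)]
    print(row)
```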