Why does a computer need a dedicated graphics card for high-graphics video games, but can play 4K quality video without one?


It's the difference between playing a song from a CD and playing the song live. 4K video is simply a recording; in a game, the graphics card is actually creating the video on the fly.

Video is just displaying images quickly from data.

Gaming is creating those images from scratch. It's a lot harder to make an image than to just flip up the next file.

There’s more math involved with games than videos. Many games have their own physics. So if you’re playing a game and your character falls off a cliff while shooting rockets, the game has to calculate the rate of fall, the graphics for the scenery and how they’d change, and ditto for the rockets. All of that takes processing power, which is part of why some games heat up your unit more than others.

A video is just playing a series of pixels/colors/images in time with an audio file. The system doesn’t have to process/think as much.

Imagine the images you’re watching as a sequence of meals that the computer is serving you.
With prerecorded video, you get no menu and the computer gets a series of pre-prepared meals that it only needs to heat and serve to you. The person at the computer has no say in what happens next, and the processing steps that the computer has to do are quite simple (decompression). Typically it has a dedicated hardware circuit to do them, even.

With an interactive video game, the person at the computer gets a new menu and chooses, and the computer has to cook that recipe from scratch – and that typically happens 60 times a second or more. Since the future content isn’t known in advance, it all has to be made locally, which demands a stronger computer.
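To put that "60 times a second" in perspective, here's a rough back-of-the-envelope sketch of the time budget a game has to finish each frame in (the frame rates are just common targets):

```python
# Rough frame-time budgets implied by common target frame rates.
# Every frame (game logic + rendering) must finish inside this window,
# or the game visibly stutters.
def frame_budget_ms(fps):
    """Milliseconds available to produce one frame at a given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 144):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 144 fps -> 6.9 ms
```

So at 60 fps, everything – input, physics, and drawing the whole scene – has to fit into under 17 milliseconds, every single time.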

A video is like a roller coaster track. It’s already built (rendered), so all that’s needed is a little power to move the cart through it.

A game is a roller coaster on an empty lot. When you sit in the cart, the track is being built in front of you as you move along! The track also responds to wherever you want to go. Doing this requires a lot more graphical power than just playing a video.

From the standpoint of what the computer has to do, running a game and playing a video are two very different things.

When you play a video (whether from a file or a streaming service), you’re receiving encoded data. That’s just the video data written in a specific format to be understood by any player that supports that format. As an analogy, if the raw video is an idea in your head, the encoded video is a sentence in spoken or written language (i.e., it’s understandable by other speakers of the language the sentence is in). In the streaming service example, Netflix sends you *encoded* video and your computer *decodes* this video so that it can show it to you through a screen (a similar process happens with audio). In this case, most of the work that your computer has to do is decode the incoming data.

When you’re playing a game, on the other hand, there’s no video to play back from. That means that your computer has to *draw* each individual frame – in other words, it has to calculate exactly what is going to be displayed by each pixel when taking into account the objects on the screen, the viewpoint of the player, lighting (and lighting tricks), shadows, and all other sorts of effects. This process is called *rendering*. That’s not super crazy – you’ve certainly seen this sort of thing in movies (that’s how CGI works), but what makes it hard is that for a game, all of that has to be done in real time *and* fast enough (think: at least 30, hopefully at least 60 times a second) to be displayed smoothly.

As another analogy, imagine that decoding is reading a sentence and extracting an idea from it – then imagine that rendering is drawing a picture from an idea in your head. The drawing bit just takes a lot more time and effort.

TLDR: playing back a video (decoding) is simply a translation process, whereas playing a game (rendering) involves a lot of heavy lifting (in terms of sheer number crunching) by the computer.
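To give a feel for the "sheer number crunching" involved in rendering, here's some illustrative arithmetic for how many per-pixel shading results a 4K game at 60 fps implies (resolution and frame rate are just the common figures, and real renderers shade each pixel more than once per frame):

```python
# How much per-pixel work "60 times a second" implies at 4K resolution.
width, height = 3840, 2160          # 4K UHD resolution
fps = 60                            # a common target frame rate

pixels_per_frame = width * height   # every one needs a computed color
pixels_per_second = pixels_per_frame * fps

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{pixels_per_second:,} pixel shading results per second")
# 8,294,400 pixels per frame -> 497,664,000 shading results per second
```

Roughly half a billion individually computed colors per second, before counting the multiple passes modern games do – which is exactly the kind of massively parallel workload a GPU is built for.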

Watching a video is like holding a flip book and flipping through the pages.

Playing a video game is drawing the flip book as you flip it.

Because playing back pre-made video data is incredibly easy. Even old computers can do that; for the most part, it’s just a case of having enough bandwidth to push the pixel data down. There is some decompression required, but on a modern computer that’s trivial.

However, a 3D world is rendered… it’s literally created on-the-fly from millions of tiny objects, each with a different texture, colour, pattern, interaction (e.g. transparency, reflection, “dullness”, etc.) and dozens of light sources all moving around.

It’s the difference between loading pre-made paintings into a van without ever stopping, and painting them from scratch fast enough that the van can still be loaded without ever stopping. And if you want those paintings to look comparable to a video of a real person or game, they have to be painted very, very well and extremely fast for it to work.

And that takes a whole lot more skill and effort than just loading a few pre-made paintings into a van.
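As a loose illustration of how mechanical the playback side is, here's a toy stand-in using Python's zlib module rather than a real video codec: the hard work happens once, when the data is produced and compressed, and playback just reverses a known encoding.

```python
import zlib

# Toy stand-in for video playback: the creative work happened when the
# data was produced and compressed; playing it back is just mechanically
# reversing the encoding.
original = b"frame pixel data " * 1000
compressed = zlib.compress(original)      # done once, by the producer
restored = zlib.decompress(compressed)    # done on every playback

assert restored == original
print(f"compressed {len(original)} bytes down to {len(compressed)}")
```

Real video codecs (H.264, H.265, AV1, etc.) are far more sophisticated, but the shape of the job is the same: undo a fixed, well-known transformation – which is why it can be baked into dedicated silicon.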

Video is pre-rendered. It’s just a matter of displaying images. Games have to be rendered in real time.

Game graphics are rendered in real time and benefit from special hardware built for parallelism and particular mathematical operations.

Most mainboards have a decent integrated graphics controller these days. It certainly does well at decompressing video data and piping it to the output, which is a more linear operation.

I don’t see it mentioned in other comments so I’ll bring it up.

Hardware acceleration.

Playing back 4K video is actually surprisingly hard work for the computer. But the algorithm used is so common that manufacturers like Intel, Apple, and Qualcomm (a common Android CPU maker) have built dedicated circuitry that is really, really good at playing video. By contrast, the algorithms used by games are much more generic and thus need the powerful graphics card.

If you want an example of this, watch the CPU usage on an old computer while playing a 4K YouTube video. Older computers still had a video decoder built in, but it only worked up to 1080p. I actually experienced this on a buddy’s laptop. It was reasonably powerful and had a 4K screen, but lacked a 4K video decoder. So when we watched a 4K YouTube video, the CPU spiked to 100% and the fans went full jet engine mode as it tried to keep up.

Why does my phone play 1080p games at 60fps and barely get warm??

There is a bunch of mathematics to take the 3D space and convert it to an image that goes to your monitor.

The most expensive process happens in the GPU: you pass in all the objects’ vertices (the mesh), textures, and camera, and transform the vertices using the MVP (model-view-projection) matrices in the GPU’s vertex shader.

Then the GPU does all that math and some magic with triangles, and returns the resulting frame pixels along with other relevant information. You take those pixels and calculate the lighting, shadows, and post effects in the pixel shader (also called the fragment shader), once per pixel.

This whole process in modern games runs several times to generate a single frame (we get 60fps and up nowadays). Every light that casts shadows has to be rendered and then mixed together in the final pass. Then you have post-process passes like bloom, anti-aliasing, ambient occlusion, color correction, etc.

That’s what the GPU is for: it does hard math for each vertex, and then for each pixel that goes to the screen, several times. That’s why a GPU has so many cores – it’s a parallel job.

A GPU isn’t good at conditional logic; it’s engineered to do parallel math. The logic of the game – object transformations, physics, etc. – is all handled on the CPU.
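Here's a minimal sketch of the vertex-transform step described above, written in plain Python for readability (real engines run this on the GPU in a vertex shader; the matrix layouts follow the common OpenGL convention, and the specific camera values are just illustrative):

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def perspective(fov_deg, aspect, near, far):
    """A standard OpenGL-style perspective projection matrix."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

def translate(x, y, z):
    """Translation matrix (used here as a toy model/view matrix)."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Model matrix: place the object 5 units in front of the camera (-z).
model = translate(0, 0, -5)
view = translate(0, 0, 0)       # camera at the origin (identity view)
proj = perspective(60, 16 / 9, 0.1, 100)

vertex = [1.0, 1.0, 0.0, 1.0]   # one mesh vertex in object space

# MVP: apply model, then view, then projection.
clip = mat_vec(proj, mat_vec(view, mat_vec(model, vertex)))

# Perspective divide gives normalized device coordinates (-1..1 on screen).
ndc = [c / clip[3] for c in clip[:3]]
print("clip space:", clip)
print("NDC:", ndc)
```

A GPU does this same handful of multiply-adds for every vertex of every mesh, every frame – millions of independent, identical calculations, which is why it's built with thousands of small parallel cores instead of a few big clever ones.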

There are some great ELI5 answers here already – I’ll add that for video, the ‘rendering’ has already been done. When you export a video in an editor, it takes a long time for the renderer to draw what will be in each frame of the “flip book”.

Many video editors use the video card to make that video rendering process faster. Complicated videos like fancy 3D animations take just as long, if not longer, to ‘render’ to video as complicated video games take to render each frame. But once that work is done, the information is stored in a ‘flip book’ format, so it’s quick to play back. Since a video is always the same each time it’s played, you can do all of the “drawing” work ahead of time.

You can’t do the work ahead of time in a video game, because a video game is different every time it’s played – you, the player, decide what will happen next. So the ‘next’ frame is unknown until you turn, shoot, jump, or the game changes the weather, lighting, etc.

Well, 4K playback is not all the same. Just because a computer can play 4K doesn’t mean it does so well. It may look good until you drive it with something that actually meets 4K standards.