Why does a computer need a dedicated graphics card for high-graphics video games, but can play 4K quality video without one?

In: Technology

16 Answers

Anonymous 0 Comments

Video is just displaying images quickly from data.

Gaming is creating those images from scratch. It's a lot harder to make an image than to just flip to the next one.

Anonymous 0 Comments

It is the difference between playing a song from a CD vs. playing the song live. 4K video is simply a recording; the graphics card is actually creating the video on the fly.

Anonymous 0 Comments

There’s more math involved with games than videos. Many games have their own physics. So if you’re playing a game and your character falls off a cliff while shooting rockets, the game has to calculate the rate of fall, the graphics for the scenery and how they’d change, and ditto for the rockets. All of that takes processing power, which is part of why some games heat up your machine more than others.

A video is just playing a series of pixels/colors/images in time with an audio file. The system doesn’t have to process/think as much.
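As a rough sketch of the kind of math a game redoes every single frame (plain Python with made-up numbers, not any particular engine):

```python
# Toy per-frame physics update: every frame the game recomputes positions and
# velocities before anything can even be drawn.
GRAVITY = -9.8          # m/s^2, pulling the character down
FRAME_TIME = 1 / 60     # one frame at 60 frames per second

def step(pos_y, vel_y, rocket_x, rocket_speed):
    """Advance the falling character and a fired rocket by one frame."""
    vel_y += GRAVITY * FRAME_TIME          # falling gets faster over time
    pos_y += vel_y * FRAME_TIME            # move the character down
    rocket_x += rocket_speed * FRAME_TIME  # move the rocket forward
    return pos_y, vel_y, rocket_x

pos_y, vel_y, rocket_x = 100.0, 0.0, 0.0
for _ in range(60):                        # simulate one second of falling
    pos_y, vel_y, rocket_x = step(pos_y, vel_y, rocket_x, rocket_speed=50.0)
print(round(pos_y, 1), round(rocket_x, 1))
```

A video never has to do any of this, because whoever made it already decided where everything ends up.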

Anonymous 0 Comments

A video is like a roller coaster track. It’s already built (rendered), so all that’s needed is a little power to push the cart along it.

A game is a roller coaster on an empty lot. When you sit in the cart, the track is being built in front of you as you move along! The track also responds to wherever you want to go. Doing this requires a lot more graphical power than just playing a video.

Anonymous 0 Comments

Imagine the images you’re watching as a sequence of meals that the computer is serving you.
With prerecorded video, you get no menu and the computer gets a series of pre-prepared meals that it only needs to heat and serve to you. The person at the computer has no say in what happens next, and the processing steps that the computer has to do are quite simple (decompression). Typically it has a dedicated hardware circuit to do them, even.

With an interactive video game, the person at the computer gets a new menu and chooses, and the computer has to cook that recipe from scratch – and that happens typically 60 times a second or more. Since the future content isn’t known in advance, it all has to be made locally, which demands a stronger computer.
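In computer terms, that constant "new menu" is the game loop: each frame the program reads the player's latest input and only then builds the next picture. A bare-bones sketch (plain Python, with made-up function names standing in for real engine work):

```python
import time

def read_player_input():
    """Stand-in for this frame's keyboard/mouse/controller state (hypothetical)."""
    return {"move": "forward"}

def update_world(world, player_input):
    """Stand-in for physics, AI, animation: the 'cooking from scratch'."""
    world["steps"] += 1
    return world

def render_frame(world):
    """Stand-in for drawing every pixel of the brand-new image."""
    pass

world = {"steps": 0}
FRAME_TIME = 1 / 60                     # aim for 60 freshly made frames per second
for _ in range(60):                     # run one second of the loop
    start = time.perf_counter()
    player_input = read_player_input()  # the future isn't known until right now
    world = update_world(world, player_input)
    render_frame(world)                 # only now can the picture be made
    # Sleep away whatever is left of this frame's 1/60th of a second.
    time.sleep(max(0.0, FRAME_TIME - (time.perf_counter() - start)))
```

A video player has no such loop for creating content; it only has to keep serving the pre-prepared meals on time.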

Anonymous 0 Comments

From the standpoint of what the computer has to do, running a game and playing a video are two very different things.

When you play a video (whether from a file or a streaming service), you’re receiving encoded data. That’s just the video data written in a specific format to be understood by any player that supports that format. As an analogy, if the raw video is an idea in your head, the encoded video is a sentence in spoken or written language (i.e., it’s understandable by other speakers of the language the sentence is in). In the streaming service example, Netflix sends you *encoded* video and your computer *decodes* this video so that it can show it to you through a screen (a similar process happens with audio). In this case, most of the work that your computer has to do is decode the incoming data.
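To give a feel for how little the computer has to invent during playback, here's a minimal sketch using the third-party PyAV library (assuming PyAV and NumPy are installed; the file name is made up):

```python
# Playback is mostly "decode the next pre-made frame, show it, wait":
# the hard creative work was already done when the video was encoded.
import av  # third-party PyAV wrapper around FFmpeg (assumed installed)

container = av.open("movie_4k.mp4")           # hypothetical file name
for frame in container.decode(video=0):       # the decoder hands back finished images
    image = frame.to_ndarray(format="rgb24")  # raw pixels, ready to display
    # display(image) would go here; the computer never invents new pixels,
    # it only unpacks the ones the encoder already stored.
```

Modern CPUs and GPUs even have dedicated decoding hardware, so this loop barely registers as work.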

When you’re playing a game, on the other hand, there’s no video to play back from. That means that your computer has to *draw* each individual frame – in other words, it has to calculate exactly what is going to be displayed by each pixel when taking into account the objects on the screen, the viewpoint of the player, lighting (and lighting tricks), shadows, and all other sorts of effects. This process is called *rendering*. That’s not super crazy – you’ve certainly seen this sort of thing in movies (that’s how CGI works), but what makes it hard is that for a game, all of that has to be done in real time *and* fast enough (think: at least 30, hopefully at least 60 times a second) to be displayed smoothly.
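To make "calculate exactly what is going to be displayed by each pixel" concrete, here's a toy sketch (plain Python, one sphere, one light, nothing like a real engine) of the per-pixel work a renderer repeats for every frame:

```python
import math

WIDTH, HEIGHT = 80, 40              # tiny "screen" so it runs instantly
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)  # normalized direction toward the light

def pixel_brightness(px, py):
    """For one pixel: shoot a ray from the viewpoint, test the object, shade it."""
    # Ray from the camera at the origin through this pixel on a virtual screen.
    x = (px - WIDTH / 2) / (WIDTH / 2)
    y = (py - HEIGHT / 2) / (HEIGHT / 2)
    dx, dy, dz = x, y, 1.0
    length = math.sqrt(dx*dx + dy*dy + dz*dz)
    dx, dy, dz = dx / length, dy / length, dz / length

    # Does the ray hit the sphere? (solve a quadratic for the intersection)
    cx, cy, cz = SPHERE_CENTER
    b = 2 * (dx * -cx + dy * -cy + dz * -cz)
    c = cx*cx + cy*cy + cz*cz - SPHERE_RADIUS**2
    disc = b*b - 4*c
    if disc < 0:
        return " "                  # ray misses the object: background
    t = (-b - math.sqrt(disc)) / 2
    # Surface normal at the hit point, then simple diffuse (Lambert) lighting.
    hx, hy, hz = dx*t - cx, dy*t - cy, dz*t - cz
    n = math.sqrt(hx*hx + hy*hy + hz*hz)
    shade = max(0.0, (hx*LIGHT_DIR[0] + hy*LIGHT_DIR[1] + hz*LIGHT_DIR[2]) / n)
    return " .:-=+*#%@"[min(9, int(shade * 10))]

# One "frame": every pixel gets this math. A game redoes it 60+ times a second,
# for millions of pixels and thousands of objects, which is what the GPU is for.
for py in range(HEIGHT):
    print("".join(pixel_brightness(px, py) for px in range(WIDTH)))
```

Scale that up to millions of pixels, thousands of objects, shadows and reflections, 60+ times a second, and you can see why a dedicated graphics card earns its keep.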

As another analogy, imagine that decoding is reading a sentence and extracting an idea from it – then imagine that rendering is drawing a picture from an idea in your head. The drawing bit just takes a lot more time and effort.

TLDR: playing back a video (decoding) is simply a translation process, whereas playing a game (rendering) involves a lot of heavy lifting (in terms of sheer number crunching) by the computer.

Anonymous 0 Comments

Watching a video is like holding a flip book and flipping through the pages.

Playing a video game is drawing the flip book as you flip it.

Anonymous 0 Comments

Because playing back pre-made video data is incredibly easy. Even old computers can do that; for the most part it’s just a case of having enough bandwidth to push the pixel data down. There is some decompression required, but on a modern computer that’s trivial.

However, a 3D world is rendered… it’s literally created on-the-fly from millions of tiny objects, each with a different texture, colour, pattern, interaction (e.g. transparency, reflection, “dullness”, etc.) and dozens of light sources all moving around.

It’s the difference between me loading pre-made paintings into a van without stopping, and me painting new ones fast enough that someone could keep loading the van without ever stopping. And if you want those paintings to look comparable to a video of a real person or game, they have to be painted very, very well and extremely fast for it to work.

And that takes a whole lot more skill and effort than just loading a few pre-made paintings into a van.

Anonymous 0 Comments

Video is pre-rendered. It’s just a matter of displaying images. Games have to be rendered in real time.