Why have video game graphics taken 37 years to look this way? Apart from purely technical (and power) limitations, what exactly gets better every three or four years (PS2 to PS3 to PS4 in game cutscenes for example)

5 Answers

Anonymous 0 Comments

Power is pretty much the whole of it.

The number of things you could possibly want a computer to do will always outstrip the things it can actually do. You can bring a modern PC to its knees today with very little effort just by opening too many Chrome tabs. Now imagine how constrained a game developer feels when they want to put really detailed things into really detailed environments. It’s far easier to hit the ceiling of what’s possible than to stay under it. That ceiling is higher now than it has ever been, and it keeps climbing, but it’s still easy to reach.

That said, it’s worth noting how big the video game market has become, and the role that has played. A bigger customer base means more opportunity for bigger profits, and bigger profit opportunities mean developers can justify bigger budgets, which pay for more hands on deck to cram more detail into a game. The gaming market was by no means “small” back in the era when video game graphics were much more primitive. But in today’s market, where video games can have bigger production budgets than mainstream Hollywood blockbusters and support simultaneous global releases, there’s definitely a lot more money to throw at graphics.

Anonymous 0 Comments

Aside from technical reasons I’d say it comes down to natural competition between games and time to evolve.

Games use “now with better graphics” as a selling point. Duck Hunt comes out, then someone makes Deer Hunt, but with better graphics, to get people to switch. Then comes Beaver Hunt, “way better than Deer Hunt,” and so on and so forth.

Give something 30 years of natural competition and the push to beat the last title, and it naturally evolves.

Same thing can be applied to pretty much every business model. Cars, movies, technology, whatever.

Anonymous 0 Comments

There’s not much beyond technical limitation. So perhaps you’re really asking how exactly technical capability limits visual fidelity?

Unlike real life, a surface in a video game is not one smooth, continuous piece, like your skin for example. Video games use small, flat triangles (polygons) connected to each other to make up a surface.

The thing with using flat surfaces to make curves is that the number of them you use dictates how curvy the result looks. If you’re only allowed to use 20 triangles to make a sphere, it will look like a D20 die. If you’re allowed more, you might make something like a D100, which, while still not a sphere, is much more spherical than a D20. So the smoother you want a surface to look, the more triangles you need to use.

This is where the technical limitation comes in. Each of those small triangles has to be calculated by the GPU in order to be rendered, so the power of the GPU dictates how many triangles you can use at any given time. That’s why FF7 characters have those blocky bodies, arms and legs. As GPU power increases, developers are able to use more and more triangles in their games, making surfaces look smoother and smoother.
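To make the triangle budget concrete, here’s a tiny sketch I’m adding for illustration (plain Python, not from any engine): it builds a sphere out of triangles at a few resolutions and measures how far the middle of the worst triangle “sags” below the true sphere. More triangles means smaller dents and a rounder-looking ball.

```python
# Illustrative sketch: approximate a sphere with a triangle mesh at a few
# resolutions and see how close it gets to being truly round.
import math

def sphere_mesh(n_lat, n_lon, radius=1.0):
    """Build a simple UV sphere: n_lat bands of latitude, n_lon of longitude.
    Returns (vertices, triangles) where triangles index into vertices."""
    verts = []
    for i in range(n_lat + 1):
        theta = math.pi * i / n_lat            # 0 at the north pole, pi at the south
        for j in range(n_lon):
            phi = 2 * math.pi * j / n_lon
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.sin(theta) * math.sin(phi),
                          radius * math.cos(theta)))
    tris = []
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * n_lon + j
            b = i * n_lon + (j + 1) % n_lon
            c = (i + 1) * n_lon + j
            d = (i + 1) * n_lon + (j + 1) % n_lon
            tris.append((a, b, c))
            tris.append((b, d, c))
    return verts, tris

def worst_dent(verts, tris, radius=1.0):
    """How far the centre of the worst triangle sits below the true sphere surface."""
    worst = 0.0
    for a, b, c in tris:
        centre = [(verts[a][k] + verts[b][k] + verts[c][k]) / 3 for k in range(3)]
        dist = math.sqrt(sum(x * x for x in centre))
        worst = max(worst, radius - dist)
    return worst

for n in (4, 8, 32):                            # D20-ish, better, better still
    v, t = sphere_mesh(n, 2 * n)
    print(f"{len(t):6d} triangles -> worst dent {worst_dent(v, t):.4f}")
```

The dent shrinks quickly as the triangle count climbs, which is exactly what extra GPU headroom buys you: the D20 becomes a D100, then becomes something your eye just reads as a sphere.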

Anonymous 0 Comments

In addition to power, people have also invented better techniques to make games look more realistic. We have algorithms for making hair move with the character. We put actors in suits covered in lights (motion capture) and map their movement onto characters, and we do the same with faces, making speech look much more realistic.
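To give one hedged example of what an “algorithm for hair” can look like (this is my own minimal sketch, not how any particular engine does it): a strand can be treated as a chain of points, each one dragged along behind the one above it, using Verlet integration plus a distance constraint.

```python
# Minimal sketch of physics-driven hair: one strand as a chain of points,
# moved with Verlet integration and kept together with a distance constraint.
import math

GRAVITY = (0.0, -9.8)
SEGMENT = 0.1            # rest length between neighbouring points
DT = 1.0 / 60.0          # one frame at 60 fps

def step(points, prev_points, root):
    """Advance the strand one frame; `root` is where it attaches to the head."""
    # 1. Verlet integration: keep moving the way you were moving, plus gravity.
    for i in range(len(points)):
        x, y = points[i]
        px, py = prev_points[i]
        vx, vy = x - px, y - py
        prev_points[i] = (x, y)
        points[i] = (x + vx + GRAVITY[0] * DT * DT,
                     y + vy + GRAVITY[1] * DT * DT)
    # 2. Constraints: pin the root to the head, keep neighbours SEGMENT apart.
    points[0] = root
    for i in range(1, len(points)):
        ax, ay = points[i - 1]
        bx, by = points[i]
        dx, dy = bx - ax, by - ay
        dist = math.hypot(dx, dy) or 1e-9
        points[i] = (ax + dx * SEGMENT / dist, ay + dy * SEGMENT / dist)

# A 10-point strand hanging from a "head" that sways side to side.
strand = [(0.0, -i * SEGMENT) for i in range(10)]
prev = [p for p in strand]
for frame in range(120):
    head = (0.3 * math.sin(frame * 0.1), 0.0)
    step(strand, prev, head)
print("after 2 seconds the tip is near", strand[-1])
```

Real systems add many strands, collisions with the head and shoulders, wind, and more sophisticated constraints, but this is the basic idea behind a lot of real-time hair and cloth.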

Anonymous 0 Comments

There are a lot of things, but the biggest would be:

* Computing power. Computers now are orders of magnitude more powerful than they were then, so they can process more information. Nobody back then could have imagined games running at resolutions as high as 3840 x 2160 (4K); I remember running games at 640 x 480 in the 90s (see the quick arithmetic after this list for how big that jump is). Devices now have gigabytes of memory to work with where they used to have megabytes.

This also carries over to pre-rendered cutscenes and movies. When making Toy Story, Pixar could only render about 30 seconds of footage for every 24 hours of processing on dedicated render farms; something of that complexity can now be rendered in real time on a decent modern PC at home.

Pre-rendered cutscenes, apart from the time it takes to render them, also require storage space. The PS2 used dual-layer DVDs, which hold around 8 GB; the PS4 uses dual-layer Blu-ray discs of up to 50 GB. Roughly 6x more storage means higher-quality textures and cutscenes can be shipped.

* Experience. You will have noticed that games released late in a console generation look better than the first releases on the same console. Developers have had time to optimise, get the most out of the hardware’s capabilities, or work around its restrictions. The PS3 was a good example: a powerful machine, but hard to code for, and it took developers time to really get the most out of it.

* Developer tools. Games may use engines such as Unreal Engine, which get upgraded over time, gain new features such as ray tracing for better lighting, and are generally better optimised.
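Just to put rough numbers on the resolution and storage jumps mentioned in the first bullet (my own quick back-of-the-envelope arithmetic, not figures from the answer above):

```python
# Back-of-the-envelope numbers for the "computing power" bullet above.
pixels_then = 640 * 480              # a typical 90s resolution
pixels_4k   = 3840 * 2160            # 4K UHD
print(pixels_then, pixels_4k, pixels_4k / pixels_then)
# 307200 8294400 27.0  -> every frame has ~27x more pixels to fill

dvd_gb, bluray_gb = 8, 50            # dual-layer DVD vs dual-layer Blu-ray
print(bluray_gb / dvd_gb)            # 6.25 -> the "roughly 6x more storage"
```

So each 4K frame carries about 27 times as many pixels as a 640 x 480 one, and the PS4-era disc has a bit over six times the room for the textures and video that fill them.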