It is common for modern video games to be badly optimised, partly due to inefficient usage of multiple cores on a CPU.
I’d assume that developers know this, but don’t have the time to rectify it.
So what are the difficulties in utilising the various cores of a CPU effectively? Why does it require so much focus, money, or time to implement?
So first I’ll need to explain why CPUs have multiple cores, and what advantage they actually provide.
You can think of any given program (including video games) as a set of instructions, progressing from step one to step two and so on. I’m going to break out the old PB&J sandwich metaphor. Consider the following instructions:
Step 1: place bread on table
Step 2: apply peanut butter
Step 3: apply jelly
Step 4: apply bread
That is four steps, and you do them in that order. But do you really have to apply the peanut butter before you apply the jelly? If you had an extra set of hands you could apply the peanut butter and the jelly at the same time. Consider this revised process, with two people called a & b:
Step 1ab: place bread on table (both of them do this)
Step 2a: apply peanut butter
Step 2b: apply jelly
Step 3a: combine jelly slice and peanut butter slice
Now, thanks to having a second set of hands, there are only three steps, because our original steps 2 and 3 have been split up and are being done in parallel. Multiple CPU cores are essentially those “extra hands” that let us do this. However, there are limitations. For example, we obviously can’t combine the slices at the same time the peanut butter and jelly are being applied to them. So no matter how many extra hands we have, we can’t really split this process up any more than it already is.
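To make that concrete, here’s a minimal sketch of the sandwich example in C++, with two threads playing the “extra hands”. The function names are made up for illustration; real games split work like physics and audio across cores in the same spirit.

```
// A minimal sketch of the sandwich example: two threads do the independent
// steps at the same time, then we combine the result. Function names are
// invented for illustration.
#include <iostream>
#include <thread>

void apply_peanut_butter() { std::cout << "peanut butter applied\n"; }
void apply_jelly()         { std::cout << "jelly applied\n"; }

int main() {
    // Step 1: both slices are on the table.
    // Steps 2a and 2b: run at the same time, potentially on different cores.
    std::thread a(apply_peanut_butter);
    std::thread b(apply_jelly);

    // Wait for both "hands" to finish before combining the slices.
    a.join();
    b.join();

    // Step 3: combine the slices -- this part can't be parallelised further.
    std::cout << "sandwich combined\n";
    return 0;
}
```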
Another pitfall with parallelism is what’s called a race condition. Say in the above metaphor that we end up applying the peanut butter (step 2a) faster than the jelly (step 2b). If we jump straight into step 3a before step 2b is done, we’ll have a mess on our hands. So we need person a to wait for person b to finish before continuing. You usually don’t get this wait behavior for free in programming, and if you forget to tell the program to wait, it won’t, and you can get some really weird bugs as a result.
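If you want to see what those “really weird bugs” look like in code, here’s a generic (not game-specific) race condition sketch: two threads bump the same counter without coordinating, so some updates get lost and the final number changes from run to run.

```
// A classic race condition: two threads update the same counter without
// coordinating, so increments can be lost. The commented-out std::atomic
// line is one way to make it safe.
#include <iostream>
#include <thread>
// #include <atomic>

int counter = 0;                 // shared between both threads
// std::atomic<int> counter{0};  // the "wait your turn" version

void add_one_million() {
    for (int i = 0; i < 1'000'000; ++i) {
        ++counter;               // read-modify-write: two threads can interleave here
    }
}

int main() {
    std::thread a(add_one_million);
    std::thread b(add_one_million);
    a.join();
    b.join();
    // With the plain int this often prints less than 2000000, and the exact
    // number changes from run to run -- one of those "really weird bugs".
    std::cout << counter << "\n";
    return 0;
}
```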
Parallel programming can improve the performance and efficiency of a program, but it has limitations and presents additional challenges.
How this all relates to video games is that there are a lot of processes video games rely on which can’t really be done in parallel, or at least can’t be done in parallel easily. There are still some things that can be, and they are becoming more common. For example, rendering is usually done in a separate thread in a lot of newer games. This allows the physics, logic, input, and networking parts of the game to be decoupled from rendering, so they won’t lag or slow down when the GPU is struggling. This helps resolve a lot of issues related to network lag and tunneling (stuff getting stuck inside of, or teleporting through, walls).
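Here’s a very rough sketch of that logic/render split. The structure (a logic thread that publishes a snapshot of game state, and a render loop that reads whatever snapshot is newest) is my own simplified assumption for illustration; real engines do something much more elaborate, with double or triple buffering.

```
// Simplified sketch: game logic runs in its own thread at a fixed rate and
// publishes a snapshot; the "render" loop reads the latest snapshot, so a
// slow GPU doesn't stall the simulation.
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

struct GameState {
    float player_x = 0.0f;
};

std::mutex state_mutex;
GameState shared_snapshot;          // latest state the renderer is allowed to see
std::atomic<bool> running{true};

void logic_thread() {
    GameState state;
    while (running) {
        state.player_x += 1.0f;     // stand-in for physics / input / networking
        {
            std::lock_guard<std::mutex> lock(state_mutex);
            shared_snapshot = state; // publish a snapshot for the renderer
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 ticks/s
    }
}

void render_loop() {
    for (int frame = 0; frame < 10; ++frame) {   // pretend we draw 10 frames
        GameState snapshot;
        {
            std::lock_guard<std::mutex> lock(state_mutex);
            snapshot = shared_snapshot;          // grab whatever is newest
        }
        std::cout << "drawing player at x=" << snapshot.player_x << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(33)); // "slow GPU"
    }
    running = false;
}

int main() {
    std::thread logic(logic_thread);
    render_loop();   // the render loop just runs on the main thread here
    logic.join();
    return 0;
}
```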
Another reason why multicore support isn’t as common is that many popular game engines are relatively old and have been around since before multicore CPUs were a thing; they’ve just been updated over time as new technology becomes available. The problem is that the execution model is one of the core components of a game engine, so it is very difficult to bolt multicore support onto an engine that wasn’t designed for it from the ground up.
Edit: fixed an inconsistency.