I don’t know if I’m explaining this right…
A computer runs logic at a speed that depends on how powerful its components are. So if it's performing the logic of something, for example movement in a game, how does it know how much work to do for its level of power, instead of essentially running in “fast-forward” or, conversely, in slow motion?
In the early days of computing, game logic and video playback were tied directly to the computer’s processor speed, so the same game would literally run faster on more powerful hardware. (This is why many older PCs had a “Turbo” button: its real job was to slow the CPU down so speed-sensitive older programs stayed playable.)
But programmers realized this was a problem, so they changed how games and video playback work under the hood. Here’s the key:
Modern games and video players update the logic and render the graphics in separate steps. The logic advances in discrete time steps, not continuously: for example, the game logic might update exactly 60 times per second, no matter how fast the computer is.

After each logic update, the graphics get rendered. A faster computer can render more frames per second, making the visuals smoother, but the underlying logic advances at the same fixed rate.
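Here’s a minimal sketch of that loop in Python, with placeholder `update` and `render` functions (the names aren’t from any particular engine). On each pass it measures how much real time has gone by, runs however many fixed-size logic steps that time calls for, then renders once:

```python
import time

TICK_RATE = 60               # logic updates per second, fixed on all hardware
DT = 1.0 / TICK_RATE         # length of one logic step, in seconds

def update(dt):
    # Placeholder: advance physics, AI, etc. by exactly dt seconds.
    pass

def render():
    # Placeholder: draw the current game state.
    pass

accumulator = 0.0
previous = time.perf_counter()

while True:
    now = time.perf_counter()
    accumulator += now - previous    # real time elapsed since last pass
    previous = now

    # Run as many fixed-size logic steps as the elapsed time demands.
    # A slow machine catches up by running several steps before drawing;
    # a fast machine often runs zero or one and spends the rest rendering.
    while accumulator >= DT:
        update(DT)
        accumulator -= DT

    render()    # frame rate varies with hardware; logic rate never does
```

This is the common “fixed timestep with accumulator” pattern: the logic always sees identical 1/60-second steps, so physics and AI behave the same on every machine.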
So while a faster computer can achieve higher frame rates and smoother visuals, the game logic itself – things like physics, AI, and video playback speed – stays fixed. This isolates the logic from the rendering performance.
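Video playback is kept steady in a similar way, but by syncing to a wall clock: each frame has a moment it is due on screen, and the player shows whichever frame the clock says is due, dropping frames if the machine falls behind. A toy sketch, again in Python, with a plain list standing in for decoded video frames and a print standing in for the screen:

```python
import time

FRAME_RATE = 24                               # frames per second of the video
frames = [f"frame {i}" for i in range(72)]    # stand-in for 3 seconds of frames

def display(frame):
    # Stand-in for drawing a frame to the screen.
    print(frame)

start = time.perf_counter()
next_frame = 0

while next_frame < len(frames):
    elapsed = time.perf_counter() - start
    # Which frame should be visible right now depends only on the clock,
    # never on how fast this loop happens to spin.
    due = int(elapsed * FRAME_RATE)
    if due >= next_frame:
        # If the machine fell behind, jump straight to the due frame
        # (dropping the late ones) instead of playing in slow motion.
        display(frames[min(due, len(frames) - 1)])
        next_frame = due + 1
    time.sleep(0.001)    # avoid pegging the CPU while waiting
```

Either way, how much happens per second is dictated by a clock, not by how many loop iterations the hardware can squeeze in.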
In summary, by separating the logic updates from the rendering, programmers ensure games, videos, and other software maintain a consistent speed across different hardware. The visual smoothness improves on faster hardware, but the functional speed stays the same. It’s like a digital metronome keeping the beat regardless of the instrumentation.