I don’t know if I’m explaining this right…
A computer runs logic at some speed based on how powerful its components are. So if it can perform the logic of something, for example movement in a game, how does it know how much should be done given its power, instead of essentially running in “fast-forward” or, conversely, in slow motion?
In general, the programmer tells it how fast or slow to run. I don’t know too much about this, but my guess is it’s somewhere in the CODEC description.
But essentially everything is done through code. In code, you can tell the computer “generate a frame (of video) and send it to the video card to pass to the monitor, then wait 10ms, then generate and send the next frame,” and it will do exactly that.
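A minimal sketch of that idea in Python (the names here, like render_frame, are made up for illustration, not from any real game or library): the loop does its work, then sleeps away whatever is left of a fixed 10ms budget, so it runs at the same pace on fast and slow hardware.

```python
import time

TARGET_FRAME_TIME = 0.010  # ~10 ms per frame, i.e. roughly 100 frames per second


def render_frame(frame_number):
    # Stand-in for "generate a frame and hand it to the video card";
    # a real program would issue draw calls here.
    pass


frame = 0
while frame < 300:  # a few seconds' worth of frames, just for the sketch
    start = time.monotonic()
    render_frame(frame)
    frame += 1

    # Sleep off whatever remains of the 10 ms budget, so the loop does not
    # simply run as fast as the hardware allows.
    elapsed = time.monotonic() - start
    if elapsed < TARGET_FRAME_TIME:
        time.sleep(TARGET_FRAME_TIME - elapsed)
```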
As for games, the idea is similar but slightly more complex. In addition to display-based tuning, you also have to do gameplay-based tuning. Let’s say you’re playing a game like Dark Souls, and the computer calculated how far you move based purely on how fast it can run its own update loop. Then, if you held the joystick forward for 1 second, you could move hundreds of digital kilometers in the game. That’s obviously not ideal. So in addition to making the screen update at a speed that humans can process (or that the screen itself can process; I’m not going to get into refresh rate), the game itself has to have additional controls to make sure the game plays smoothly.
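One common way games do that gameplay-based tuning is to measure how much real time passed since the last update and scale movement by it. A rough sketch, again in Python with made-up numbers (the 4 metres-per-second walk speed is just an example, not anything from Dark Souls):

```python
import time

WALK_SPEED = 4.0  # in-game metres per real second (a made-up number)

position = 0.0
start = time.monotonic()
last_tick = start

# Simulate holding the stick forward for one real second.
while time.monotonic() - start < 1.0:
    now = time.monotonic()
    dt = now - last_tick  # real seconds since the previous update
    last_tick = now

    # Scale movement by dt: a fast machine takes many tiny steps, a slow
    # machine takes fewer, bigger steps, but after one real second both
    # have moved about 4 metres instead of "hundreds of kilometers".
    position += WALK_SPEED * dt

print(f"Moved {position:.2f} metres after one second of holding forward")
```

The point is that the game ties movement to real elapsed time rather than to how many loop iterations the hardware can crank out.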