I don’t know if I’m explaining this right…
A computer can run logic at some speed based on how powerful its components are. So if it can perform the logic of something, for example movement in a game, how does it know how much should be done per unit of real time, instead of essentially running in “fast-forward” on powerful hardware or, conversely, in slow motion on weak hardware?
So there are two ways to handle this.
The first is by padding the loop with NOP instructions, which basically tell the CPU to “do nothing”. This is what early console games did: since every console had the same hardware and ran at the same speed, developers could write the main loop to do everything it needed to do, then put in enough NOPs to make it run at whatever speed they wanted. This actually does result in a “fast-forward” effect if you try to run those games on faster hardware.
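To make that concrete, here’s a minimal C sketch of the idea (not any real console’s code): the NOP count is a made-up number you’d tune by hand for one specific CPU, and the inline-assembly syntax assumes GCC or Clang.

```c
/* Fixed-hardware main loop padded with NOPs.
   PADDING_NOPS is a hypothetical count, hand-tuned so one
   iteration takes the frame time you want on ONE exact CPU. */

#define PADDING_NOPS 50000L /* made-up number; tune by hand */

int main(void) {
    for (;;) {
        /* ... read input, update game state, draw the frame ... */

        /* Burn a fixed number of cycles so every iteration takes
           the same wall-clock time. On a faster CPU these finish
           sooner, so the whole game runs in fast-forward. */
        for (volatile long i = 0; i < PADDING_NOPS; i++) {
            __asm__ volatile ("nop"); /* GCC/Clang inline asm */
        }
    }
    return 0;
}
```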
With newer games you first read the clock into a variable we’ll call T, do all the stuff you need to do, then just sit there and keep reading the clock until it reads T+10ms before you run the loop again. That means the loop will run once every 10ms.
In both cases, the stuff you do in the loop is: check inputs to see if a button is pressed, react accordingly, update the character’s position, then render the results on the screen.
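Here’s a minimal sketch of that clock-polling loop in C, assuming the POSIX clock_gettime() function is available; read_input(), update(), and render() are placeholder names for the steps above, not a real API.

```c
/* Clock-based loop: read the time T, do the work, then spin
   until the clock reads T + 10 ms before looping again. */
#include <stdint.h>
#include <time.h>

#define FRAME_NS 10000000LL /* 10 ms per loop iteration */

static int64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void) {
    for (;;) {
        int64_t t = now_ns(); /* T: read the clock */

        /* Do all the stuff you need to do this frame:
           read_input(); update(); render(); */

        /* Sit there reading the clock until T + 10 ms has passed. */
        while (now_ns() < t + FRAME_NS) {
            /* busy-wait */
        }
    }
    return 0;
}
```

A real engine would usually sleep here instead of spinning, so the CPU isn’t wasted, but the effect is the same: the loop runs once every 10 ms regardless of how fast the hardware is.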
Edit: since you’re reading from a real-time clock, all that matters is that the CPU can complete all tasks within the desired timeframe. The game will run at the same speed on any hardware that is at least fast enough to beat the clock.