I don’t know if I’m explaining this right…
A computer can run logic at some speed based on how powerful its components are. So if it can perform the logic of something, for example movement in a game, how does it know how much should be done based on its power, instead of essentially running in “fast-forward” or, conversely, in slow motion?
To add to the many comments here:
A single pass through the game loop is generally called a “tick”. Back in the day, a tick was unregulated: it ran as fast as the computer could execute the code.
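To make that concrete, here’s a minimal sketch (all names and numbers made up) of that old unregulated style of loop: the player moves a fixed amount per tick, so a faster computer literally means faster movement, which is exactly the “fast-forward” problem the question asks about.

```python
# Hypothetical, unregulated game loop: one pass = one tick,
# and nothing ties a tick to real time.

player_x = 0.0
SPEED_PER_TICK = 1.0              # units moved every tick, not every second

def update():
    global player_x
    player_x += SPEED_PER_TICK    # more ticks per second = faster movement

def game_over():
    return player_x >= 1000.0     # stand-in for a real end-of-game check

while not game_over():
    update()                      # loops as fast as the CPU can manage
```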
But game dev nerds are very smart and realized pretty quickly that computers would keep getting faster and that no two machines run at the same speed, so the tick then became regulated by the display refresh rate. To make drawing on the screen easier, 1 tick was made equal to 1 monitor refresh. Monitors were mostly 60 Hz (and many games targeted 30 fps) until about a decade ago, when 90, 120, and 144 Hz displays became normal.
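Here’s a rough sketch of that refresh-locked approach, assuming a 60 Hz display. Real engines let the graphics driver’s vsync do the waiting; a sleep call is just a stand-in to show the idea.

```python
# Tick locked to an assumed 60 Hz refresh: each tick waits out the
# remainder of its 1/60 s slot, so ticks happen at a known, fixed rate.
import time

REFRESH_HZ = 60
FRAME_TIME = 1.0 / REFRESH_HZ      # about 16.7 ms per tick

player_x = 0.0
frame_start = time.monotonic()

while player_x < 120.0:            # stand-in exit condition (~2 seconds)
    player_x += 1.0                # a fixed amount per tick is fine now:
                                   # ticks arrive 1/60 s apart on any machine
    elapsed = time.monotonic() - frame_start
    if elapsed < FRAME_TIME:
        time.sleep(FRAME_TIME - elapsed)   # crude stand-in for vsync
    frame_start = time.monotonic()
```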
So most game engines now use multiple “tick” rates, each based on real time and set to whatever speed makes its job easiest: the rendering tick follows the refresh rate, the network tick runs at whatever rate the connection can handle, and the core tick, which coordinates the others and handles input, generally runs as fast as it can. These often run on their own threads, which makes better use of your CPU cores. However, that’s hard to program, because threads don’t like to talk to one another, so many engines are still largely single-threaded.
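A sketch of the multiple-tick-rate idea (the rates and function names are illustrative, not from any particular engine): the core/simulation tick runs on a fixed timestep driven by real elapsed time, while the render step just runs as often as the loop spins and reads the latest state.

```python
# Fixed-timestep simulation tick plus a free-running render tick.
import time

SIM_DT = 1.0 / 50.0                # core tick: 50 simulation steps per second
player_x = 0.0

def simulate(dt):
    global player_x
    player_x += 10.0 * dt          # speed is 10 units per *second*, not per tick

def render():
    pass                           # a real engine would draw player_x here

previous = time.monotonic()
accumulator = 0.0

while player_x < 20.0:             # stand-in for the real game-over check
    now = time.monotonic()
    accumulator += now - previous  # real time owed to the simulation
    previous = now

    # run however many fixed-size core ticks that real time covers
    while accumulator >= SIM_DT:
        simulate(SIM_DT)
        accumulator -= SIM_DT

    render()                       # render tick: as often as the loop spins
    time.sleep(0.001)              # tiny nap so the sketch doesn't peg a core
```

That’s also why movement gets multiplied by a time delta in modern engines: the amount of game time advanced per tick is pinned to real seconds, so hardware speed only changes how smooth things look, not how fast the game plays.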