Why are physics in some games somewhat connected to running fps?


Take, for example, Geometry Dash and Celeste.

In Geometry Dash, playing on a 360 Hz monitor gives slightly different gameplay than on a 60 Hz one, and some user-created levels have frame-perfect inputs that are only possible at 240 Hz+.

Celeste has a 60 fps cap. If you use assist mode's built-in speedhack at 0.5x speed, the game still runs at 60 fps, so it's effectively doing 120 fps worth of calculation per second of game time. This also affects the physics. Why?


4 Answers

Anonymous 0 Comments

When you encounter a game where the physics seems tightly connected to the visual frame rate, it is because it was somewhat naively programmed that way. Many games with consistent physics regardless of visual fps use two different loops, one for visuals and one for physics. The visuals can go as fast as they want, but the physics runs on a fixed frame time.
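In code, that "fixed frame time" idea is usually an accumulator loop. Here's a toy sketch in Python (frame durations are fed in as a list so the result is reproducible; a real engine would read a clock, and the names here are made up for illustration):

```python
PHYSICS_DT = 1.0 / 60.0  # physics always advances in fixed 1/60 s slices
SPEED = 1.0              # units per second

def simulate(frame_times):
    """Fixed-timestep loop: each entry in frame_times is how long one
    rendered frame took. Physics only ever steps in PHYSICS_DT chunks,
    so the result is (nearly) identical no matter how time is sliced."""
    position = 0.0
    accumulator = 0.0
    for dt in frame_times:                 # one iteration = one rendered frame
        accumulator += dt
        while accumulator >= PHYSICS_DT:   # consume the time in fixed slices
            position += SPEED * PHYSICS_DT
            accumulator -= PHYSICS_DT
        # render(position) would go here, as often as the display allows
    return position

# The same one second of game time, rendered at 240 fps and at 30 fps:
fast = simulate([1 / 240] * 240)
slow = simulate([1 / 30] * 30)
# Both land within one physics step of 1.0 unit travelled.
```

The leftover time in `accumulator` simply waits for the next frame, which is why a fast monitor doesn't make the game world move faster.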

Anonymous 0 Comments

Physics game dev. here

*Note: I’ll explain this in very simple terms.*

Everything depends on FPS. In each frame, we calculate things and then send the result to the display device (a monitor, for example). A standard game runs at 60 FPS, which means we make 60 sets of calculations per second.

So now, imagine you want to push a box 1 cm to the right each frame. In one second at your base frame rate, that means your box moves 60 centimeters to the right. All is okay, no problems.

And then, you somehow alter the game’s capped FPS rate. Say you make it 140 FPS. This means there’ll be 140 operations per second, and your box moves 140 cm to the right. This is behaviour you don’t really want, as it breaks the basic flow of the game.

You can counter this by using **delta time**. This is essentially measuring the real time that passed since the last frame (okay, I’m getting out of ELI5, sorry) and scaling movement by it. Now, instead of always moving 1 cm to the right, you move less or more depending on whether your FPS is higher or lower. If your FPS is 30, you move 2 cm per frame (2 cm × 30 operations = 60 centimeters); if your FPS is 120, you move 0.5 cm (0.5 cm × 120 operations = 60 centimeters). In the end, you move your box 60 centimeters regardless. The per-frame amount is determined by the delta time you measure. There’s a lot more to add here, but that’s for another topic.
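A minimal Python sketch of the delta-time idea above (the numbers match the 60 cm example; the function is made up for illustration, not from any real engine):

```python
SPEED = 60.0  # centimeters per second, the "60 cm in one second" target

def move_box(fps, seconds=1.0):
    """Move the box using delta time: each frame moves SPEED * dt,
    so the per-frame step shrinks as the frame rate grows."""
    dt = 1.0 / fps                   # how long one frame lasts
    position = 0.0
    for _ in range(round(fps * seconds)):
        position += SPEED * dt       # 2 cm/frame at 30 fps, 0.5 cm at 120 fps
    return position

print(move_box(30), move_box(60), move_box(120))  # all three are ~60.0
```

Whatever the frame rate, the box ends up (essentially) 60 cm away after one second.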

I’m sorry that I’m not good at explaining this as ELI5. Hope it looks good 😅

Also, here’s [my game Physics! Fun](https://play.google.com/store/apps/details?id=com.foxtrio.physicsfun) that you can try(you can change FPS in the game as an option)

Anonymous 0 Comments

The chip running the game is running in binary, so it has a *clock cycle* that defines how often the 1s and 0s happen. Like how does the computer know the difference between four 0s in a row and five 0s? Because in the background there’s a clock that just goes 01010101 which is how the chip “counts” what everything else is doing.

Let’s say you want your game to check if the character is touching a spike that will kill you. Your chip can’t “look”, really, it can only compare values and those values only change with the clock cycle. So the *fastest* that the chip can check is with the next clock cycle and look to see if there’s a 1 where there should be a 0 which means you touched the spike.

But your graphics card has its own chip that has its own clock cycle. What you see on the screen is updating with that graphics cycle, regardless of what the cpu is doing. The cpu might check if you’re touching the spike 1000 times a second, but your graphics card is only processing what should be displayed at, oh, 700 times a second, and it’s only actually updating what’s on the screen 60 times per second. What happens if the **cpu** updates where your character is calculated to be faster than your **gpu** calculates how to *show* where your character is?

You might end up in a case where the cpu says you touched a spike and are dead, but that’s not what it looks like on the screen. That’s not a good gameplay experience.

Back in the day of cartridge games, there was no separate cycle. The only thing that mattered was what’s on the screen, and your console was optimized to figure that out and show it to you. All the gameplay logic was usually built on the screen refresh rate. Now, though, that would be way too slow to calculate everything you need to run the game. So we have separate cpus that are good at handling all kinds of logic and go fast as fuck, and gpus which are only good at showing what’s on screen.

Some games are optimized for performance. There is enough wiggle room between what the cpu and gpu say that you probably won’t notice. Like, if you’re playing an online shooter, the hitbox is probably already bigger than your character model *and* there’s internet lag, and a decent amount of computing power goes into just kind of fudging it in a way that feels fair. The cpu needs to go fast af to keep up with everything happening.

Some games need to be tied to the graphics cycle. A game like geometry dash has frame-perfect timing so you need to know with 100% certainty that what the screen is showing is what the cpu thinks is going on. *One pixel* of difference might be enough to kill you. And that’s fine, because the game is simple enough that you don’t need much cpu to make it run so you can lean more on the gpu, or tell the cpu to follow the gpu and not just go as fast as it can.

Anonymous 0 Comments

Short: Because the developers decided to make it that way.

Long:

A game (or any program for that matter) is one long loop of instructions that gets repeated over and over again. Get the user input, process the user input, move the enemies, move falling objects down a bit, etc.

But there’s a second loop: The one that paints each frame to be shown on the monitor. That one also runs all the time. Paint the background picture, paint the enemies, paint the player, paint the explosions, paint the health bar, etc.

But this now means that you have two endless loops you have to convince the computer to execute at the same time. And there are multiple ways to do that:

(a) Lazy. Just put both in the same loop. If your game loop and graphics loop can both run 60 times per second (what’s considered the default monitor display frequency anyway), you can just combine them. And as long as everything stays at 60 fps, all is fine. Otherwise, things can get whacky. At higher fps, things move faster, and at lower fps they move slower. You can compensate for that in the game loop by moving objects depending on the measured time since the last iteration of the game loop…but that still introduces rounding errors (I guess this is what you’re seeing in your game? 0.33+0.33+0.33 != 0.5+0.5) and delayed responses to fps changes.

(b) Interweave. You can run your game loop, and in the pauses between runs the graphics loop as often as there is time. Minecraft did that before multithreading, for example. The game loop ran 20 times a second, and after each run, the graphics loop ran as often as possible. This generally works well, but graphics can become a bit stuttery as every 1/20 second there’s a longer pause to run the game loop.

(c) Multithreading. Split your program into two, and run each part on a different core of the CPU. While this gives the best results (the loops don’t interfere with each other), you’ve just created a nightmare for the programmers, as they need to make 110% sure that code from the graphics loop doesn’t access game data while code from the game loop is changing it, and vice versa. Otherwise, you get crashes, objects that appear in two places on the screen at the same time, flicker because they’re sometimes missing, etc.
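A tiny Python sketch of the kind of guard (c) requires (the `GameState` class and its methods are invented for illustration): a lock ensures the render loop never reads a position while the game loop is halfway through changing it.

```python
import threading

class GameState:
    """Shared state touched by both the game loop and the render loop.
    The lock guarantees the renderer never sees a half-updated position
    (x already moved, y not yet) -- the 'object in two places' bug."""

    def __init__(self):
        self._lock = threading.Lock()
        self._x = 0.0
        self._y = 0.0

    def move(self, dx, dy):
        # Called from the game thread.
        with self._lock:
            self._x += dx
            self._y += dy

    def snapshot(self):
        # Called from the render thread; returns a consistent (x, y) pair.
        with self._lock:
            return (self._x, self._y)

state = GameState()
state.move(1.0, 2.0)
print(state.snapshot())  # (1.0, 2.0)
```

Every shared object in the game needs this kind of discipline, which is why the answer calls it a nightmare: one forgotten lock and the bugs are intermittent and nearly impossible to reproduce.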