What’s the difference between a “32-bit” and “64-bit” game? How does this affect the engine and how much RAM it can use?

This hit me today as I was prepping some pasta. I’ve got a relatively beefy gaming rig that has 12 gigs of VRAM and 48 gigs of normal RAM. However, older games will still have a tendency to drop frames when a lot of stuff is happening at once, even with these specs. From what I’ve read this is because there’s something in the games, or their engines, or whatever, that means they can’t use the full RAM capacity of my computer or draw from it as much as they need. I’ve even noticed this when configuring game settings for, as an example, *Total War: Rome II*, where even though it detects my graphics card it won’t draw on its full strength to get what it needs, always locking at around 3-4 gigs. By contrast, the more modern *Total War: Warhammer III* can use my rig’s full power, meaning I basically never drop frames when playing it.

Why is this? What inherently stops 32-bit games from using more VRAM?

Each byte of RAM is referred to by an address. In a 32-bit system, this address is a 32-bit integer. In a 64-bit system, it’s 64 bits. 2^32 is 4,294,967,296, so when you’re using 32-bit integers to refer to bytes in memory, you can only refer to a total of about 4 GB.
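
If you want to see those numbers fall out on your own machine, here’s a minimal C sketch (just an illustration, nothing game-specific): compiled as a 64-bit program it reports 8-byte pointers, compiled as 32-bit it reports 4-byte pointers, and the 4 GB ceiling is exactly 2^32 bytes.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A pointer holds a memory address; its size depends on the build target. */
    printf("pointer size: %zu bytes (%zu bits)\n",
           sizeof(void *), sizeof(void *) * 8);

    /* With 32-bit addresses there are 2^32 distinct byte addresses in total. */
    uint64_t addresses = 1ULL << 32;
    printf("32-bit address space: %llu bytes (%llu GB)\n",
           (unsigned long long)addresses,
           (unsigned long long)(addresses / (1024ULL * 1024 * 1024)));
    return 0;
}
```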

For the most part it means which CPU architecture the game was compiled for. All new CPUs have been 64-bit for a good while now.

32-bit CPUs could only form about 4 billion unique memory addresses. That’s why they can only use a maximum of 4 GB: they can’t produce addresses for anything beyond that.

64-bit computers won’t be running out of memory addresses for a long time; a 64-bit address space can cover over 16 exabytes.
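
For scale, 2^64 bytes works out to exactly 16 EiB (about 18.4 decimal exabytes). A quick back-of-the-envelope check in C:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* How big is a full 64-bit address space? */
    double bytes = pow(2.0, 64.0);           /* ~1.84e19 bytes          */
    double gib   = bytes / pow(2.0, 30.0);   /* in gibibytes (2^30)     */
    double eib   = bytes / pow(2.0, 60.0);   /* in exbibytes (2^60)     */

    printf("2^64 bytes = %.3e bytes\n", bytes);
    printf("           = %.0f GiB\n", gib);  /* about 17.2 billion GiB  */
    printf("           = %.0f EiB\n", eib);  /* exactly 16 EiB          */
    return 0;
}
```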

Most devices are 64-bit these days, but not everything takes advantage of that. The primary benefit is that the software can make use of larger chunks of memory. However, 64-bit CPUs also have newer sets of instructions unavailable to 32-bit software, so there are some advantages there too.

The bit size of the application determines how much memory it can see (2^32 = 4 GB), but because of the way Windows splits the address space, a 32-bit app can normally see only 2 GB of memory. There are some ways to force an app to see 3 GB.
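
You can see that ceiling directly by letting a program allocate until the allocator gives up. This is only a rough sketch (the exact total depends on the OS, fragmentation, and linker settings), and it’s meant to be built as a 32-bit binary, e.g. with gcc -m32; built as 64-bit it will happily eat your actual RAM instead.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (64u * 1024 * 1024)   /* allocate in 64 MB steps */

int main(void) {
    unsigned long long total = 0;

    /* Keep grabbing chunks until malloc refuses; as a 32-bit process
       this typically stops around the 2 GB user address-space limit. */
    for (;;) {
        void *p = malloc(CHUNK);
        if (p == NULL)
            break;
        memset(p, 0, CHUNK);        /* touch the pages so they are really backed */
        total += CHUNK;
    }

    printf("allocated roughly %llu MB before running out of address space\n",
           total / (1024 * 1024));
    return 0;
}
```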

Bit size also determines how much data the CPU can process in a single instruction, using specific CPU features like SIMD. 64-bit won’t be two times faster than 32-bit, but it will be better.
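
A rough way to feel the word-size difference (ignoring SIMD proper) is plain 64-bit arithmetic: on a 64-bit build each addition below is a single register-wide instruction, while a 32-bit build has to emulate every 64-bit add with two 32-bit adds plus a carry. Just a sketch to compile with and without -m32 and time.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Sum a range of 64-bit integers. A 64-bit target does each addition
       in one instruction; a 32-bit target splits every 64-bit add into
       two 32-bit operations plus carry handling. */
    uint64_t sum = 0;
    for (uint64_t i = 0; i < 100000000ULL; i++)
        sum += i;

    printf("sum = %llu\n", (unsigned long long)sum);
    return 0;
}
```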

Bits are like the number of places in a number. For example, the number 100 has three places; with three places you can represent any number between 0 and 999. The number 1,000 has four places, and four places can represent any value between 0 and 9,999.

All of the numbers we use on a daily basis are based on the number 10. Hence, we call it a base 10 number system. You count up: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, then you roll over to the next “place” to get 10.

If you need to keep track of 10 items, you only need 2 places. If you need to keep track of 100,000 items, you need 6 places.

Binary numbers are similar, but they are base 2 instead. So you count up: 0, 1, then you roll over to the next place to get 10. It helps to think in terms of quantity instead of numbers:

|Quantity|Base 10|Base 2|
|:-|:-|:-|
|zero|0|0|
|•|1|1|
|••|2|10|
|•••|3|11|
|••••|4|100|
|•••••|5|101|
|••••• •|6|110|
|••••• ••|7|111|
|••••• •••|8|1000|
|••••• ••••|9|1001|
|••••• •••••|10|1010|
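
If you want to regenerate that counting table, a short C loop does it; the binary column is built by hand since standard printf has no portable binary format specifier.

```c
#include <stdio.h>

/* Print n in binary without leading zeros (prints "0" for zero). */
static void print_binary(unsigned n) {
    if (n == 0) { putchar('0'); return; }
    char digits[32];
    int len = 0;
    while (n > 0) {
        digits[len++] = '0' + (n & 1);  /* lowest bit first */
        n >>= 1;
    }
    while (len > 0)
        putchar(digits[--len]);         /* print highest bit first */
}

int main(void) {
    printf("Base 10 | Base 2\n");
    for (unsigned i = 0; i <= 10; i++) {
        printf("%7u | ", i);
        print_binary(i);
        putchar('\n');
    }
    return 0;
}
```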

In order for your system to use memory, it has to keep track of the memory available. This is called addressing. For example, if you had 10 “slots” of memory, you’d need two places to keep track of the memory using base 10, but because computers store information using binary (base 2), you’d need four places in binary. We call these binary places “bits”.

So if you have 12 GB, you need more bits to keep track of it all. That’s where 64-bit architecture has an advantage. It can keep track of massive amounts of memory using a single number. 32-bit architecture has fewer places, so it either has to rely on tricks (like dividing up memory addressing into chunks), or simply cannot address any more.
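
Tying that back to the question, here’s a small sketch that works out how many address bits you need for a few memory sizes (4 GB, plus the 12 GB and 48 GB figures from the post):

```c
#include <stdio.h>
#include <stdint.h>

/* Smallest number of address bits needed to give every byte its own address. */
static unsigned bits_needed(uint64_t bytes) {
    unsigned bits = 0;
    while ((bytes - 1) >> bits)   /* addresses run 0 .. bytes-1 */
        bits++;
    return bits;
}

int main(void) {
    const uint64_t GB = 1024ULL * 1024 * 1024;
    uint64_t sizes[] = { 4 * GB, 12 * GB, 48 * GB };

    for (int i = 0; i < 3; i++)
        printf("%2llu GB of RAM needs %u address bits\n",
               (unsigned long long)(sizes[i] / GB), bits_needed(sizes[i]));
    return 0;
}
```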