What’s the difference between a “32-bit” and “64-bit” game? How does this affect the engine and how much RAM it can use?


This hit me today as I was prepping some pasta. I’ve got a relatively beefy gaming rig with 12 gigs of VRAM and 48 gigs of normal RAM. However, older games still have a tendency to drop frames when a lot of stuff is happening at once, even with those specs. From what I’ve read, this is because there’s something in the games, or their engines, or whatever, that means they can’t use the full RAM capacity of my computer or draw from it as much as they need. I’ve even noticed this when configuring game settings for, as an example, *Total War: Rome II*, where even though it detects my graphics card it won’t draw on its full strength to get what it needs, always locking at around 3-4 gigs. By contrast, the more modern *Total War: Warhammer III* can use my rig’s full power, meaning I basically never drop frames when playing it.

Why is this? What inherently stops 32-bit games from using more VRAM?


8 Answers

Anonymous 0 Comments

Each byte of RAM is referred to by an address. In a 32-bit system, this address is a 32-bit integer. In a 64-bit system, it’s 64 bits. 2^32 is 4294967296, so when you’re using 32-bit integers to refer to bytes in memory, you can only refer to a total of about 4 GB.
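
If you want to see that figure come out of the arithmetic yourself, here’s a minimal C sketch that just does the math in this answer (one byte per address, 2^32 addresses); nothing in it is specific to any particular game or engine:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    // A 32-bit address can take 2^32 distinct values.
    uint64_t addresses = 1ULL << 32;

    printf("2^32 addresses        = %llu\n", (unsigned long long)addresses);
    // One byte per address gives the familiar 4 GiB ceiling.
    printf("one byte per address  = %.1f GiB\n",
           addresses / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```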

Anonymous 0 Comments

For the most part it means which CPU architecture the game was compiled for. All new CPUs have been 64-bit for a good while now.

32-bit CPUs could only form about 4 billion unique memory addresses. This is why they can only use a maximum of 4 GB: they can’t produce addresses for anything beyond that.

64-bit computers won’t be running out of memory addresses for a long time; 64 bits can address over 16 exabytes.
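
A quick way to check which kind of build you’re actually running: a pointer holds an address, so its size tells you whether a program was compiled as 32-bit or 64-bit. A small C sketch, assuming nothing beyond standard C:

```c
#include <stdio.h>

int main(void) {
    // A pointer holds a memory address, so its size tells you how wide
    // the addresses are for this particular build of the program.
    unsigned bits = (unsigned)(sizeof(void *) * 8);

    printf("pointer size: %u bits\n", bits);
    if (bits == 32)
        printf("at most 2^32 addresses, i.e. about 4 GB of bytes\n");
    else
        printf("2^%u addresses, i.e. vastly more than any machine has installed\n", bits);
    return 0;
}
```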

Anonymous 0 Comments

Most devices are 64-bit these days, but not everything takes advantage of that. The primary benefit is that the software can make use of larger amounts of memory. However, 64-bit CPUs also have newer sets of instructions unavailable to 32-bit software, so there are some advantages there too.

Anonymous 0 Comments

Bit size of the application determines how much memory it can see (2^32 = 4 GB), but because of the way Windows manages memory, a 32-bit app can normally see only 2 GB of it. There are ways to force an app to see 3 GB.

Bit size also determines how much data the CPU can process in a single instruction, for example through wider registers and SIMD instruction sets. 64-bit won’t be two times faster than 32-bit, but it will generally be better.
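
As a rough illustration of that 2 GB ceiling, here’s a C sketch that deliberately grabs memory until the allocator gives up (the 100 MiB chunk size is an arbitrary choice for the demo, and this is a thought experiment rather than something to run casually). Built as a 32-bit Windows program it would typically stop somewhere under 2 GB, or closer to 3-4 GB with the options mentioned above; a 64-bit build keeps claiming virtual address space far longer.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Ask for 100 MiB chunks until the allocator refuses, then report how
    // much we managed to claim. The chunks are deliberately never freed;
    // the OS reclaims everything when the process exits.
    const size_t chunk = 100u * 1024u * 1024u;
    size_t total = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL)
            break;
        total += chunk;
    }

    printf("allocated about %.1f GiB before malloc gave up\n",
           total / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```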

Anonymous 0 Comments

Bits are like the number of places in a number. For example, the number 100 has three places, and three places are enough to represent any number from 0 up to 999. The number 1,000 has four places, and four places can represent any value from 0 up to 9,999.

All of the numbers we use on a daily basis are based on the number 10. Hence, we call it a base 10 number system. You count up: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, then you roll over to the next “place” to get 10.

If you need to keep track of 10 items, you only need 2 places. If you need to keep track of 100,000 items, you need 6 places.

Binary numbers are similar, but they are base 2 instead. So you count up: 0, 1, then you roll over to the next place to get 10. It helps to think in terms of quantity instead of numbers:

|Quantity|Base 10|Base 2|
|:-|:-|:-|
|zero|0|0|
|•|1|1|
|••|2|10|
|•••|3|11|
|••••|4|100|
|•••••|5|101|
|••••• •|6|110|
|••••• ••|7|111|
|••••• •••|8|1000|
|••••• ••••|9|1001|
|••••• •••••|10|1010|
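
If you’d rather have a machine produce that table, this small C sketch prints the same base-10 and base-2 columns (the binary-printing helper is just for the demo; standard C has no built-in binary format):

```c
#include <stdio.h>

// Print a small number in binary, most significant bit first,
// skipping leading zeros ("0" for zero).
static void print_binary(unsigned n) {
    if (n == 0) { putchar('0'); return; }
    int started = 0;
    for (int bit = 31; bit >= 0; bit--) {
        int digit = (n >> bit) & 1u;
        if (digit) started = 1;
        if (started) putchar('0' + digit);
    }
}

int main(void) {
    // Same quantities as the table: base 10 on the left, base 2 on the right.
    for (unsigned n = 0; n <= 10; n++) {
        printf("%2u = ", n);
        print_binary(n);
        putchar('\n');
    }
    return 0;
}
```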

In order for your system to use memory, it has to keep track of the memory available. This is called addressing. For example, if you had 10 “slots” of memory, you’d need two places to keep track of the memory using base 10, but because computers store information using binary (base 2), you’d need four places in binary. We call these binary places “bits”.

So if you have 12 GB, you need more bits to keep track of it all. That’s where 64-bit architecture has an advantage. It can keep track of massive amounts of memory using a single number. 32-bit architecture has fewer places, so it either has to rely on tricks (like dividing up memory addressing into chunks), or simply cannot address any more.
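
To put numbers on that, here’s a short C sketch that counts how many bits you need before every byte of a given amount of memory gets its own address; the 4, 12, and 48 GiB figures are the ones from the question:

```c
#include <stdio.h>
#include <stdint.h>

// How many bits are needed so that every byte of `bytes` gets its own
// address? Keep doubling the addressable range until it covers the total.
static unsigned bits_needed(uint64_t bytes) {
    unsigned bits = 0;
    uint64_t addressable = 1;
    while (addressable < bytes) {
        addressable *= 2;
        bits++;
    }
    return bits;
}

int main(void) {
    const uint64_t GiB = 1024ULL * 1024ULL * 1024ULL;

    printf("4 GiB needs  %u bits\n", bits_needed(4 * GiB));   // 32
    printf("12 GiB needs %u bits\n", bits_needed(12 * GiB));  // 34
    printf("48 GiB needs %u bits\n", bits_needed(48 * GiB));  // 36
    return 0;
}
```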

Anonymous 0 Comments

As a basic idea, computer memory is like a giant piece of paper. Each letter (byte) can be referred to by its position, so the first letter on the page is 1, the second is 2, and so forth. 0 is commonly used to mean “invalid”. Now, computers like working with fixed-length values, as it makes a lot of operations faster and simpler.

A 32-bit computer naturally works with 32-bit numbers, which allow values up to about 4 billion. This limits you to referring to about 4 GiB of memory. Now, the operating system (OS) needs to reserve some of that memory for its own operation, and the simplest way to do so is to say that all numbers starting with a 1 are reserved. This removes half of that range, limiting you to 2 GiB of actual memory usage in any given program. In the later days of 32-bit computing, it became possible to enable an option, which both the operating system and the program had to agree to use, that shrank the reserved region to a quarter of the range (the addresses starting with two 1s), allowing a program to use 3 GiB of memory.

By contrast, a 64-bit computer naturally works with 64-bit numbers, which raises the 4 GiB limit by squaring it, leaving us with about 16 exabytes, or roughly 16 billion billion bytes. As that is insanely more than any practical computer currently has, the “operating system takes half” rule still applies, and there is no common way (or any real need) to work around it.

It’s worth noting that most 64-bit processors don’t actually allow all 64 bits to be used for addressing currently. Generally, only 48 bits are actually usable by the processor, which provides a limit of about 256 terabytes, again with half of that taken by the operating system. Once we start to approach that limit, it’s likely processors will start allowing closer to the full 64 bits, but that is also likely a decade or two away still.

There are also tricks that 32-bit processors and operating systems started using to get past 32 bits right before 64-bit computers took off. While individual programs were still bound by the 32-bit limit, the OS and processor could use extended physical addressing (PAE on x86, which widens physical addresses to 36 bits). In such cases, the computer as a whole could have quite a bit more memory than any single program could use.
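
For a sense of scale across the limits mentioned in this answer, this C sketch prints how much memory each address width can name; the 31/32/36/48-bit widths correspond to the 2 GiB per-program limit, the full 32-bit range, PAE-style extended addressing, and today’s typical 64-bit implementations:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    // Address widths that come up in this answer:
    //   31 bits -> the 2 GiB a program normally gets on a 32-bit OS
    //   32 bits -> the full 4 GiB range of a 32-bit address
    //   36 bits -> PAE-style extended physical addressing (64 GiB)
    //   48 bits -> what current 64-bit CPUs typically implement (256 TiB)
    const unsigned widths[] = { 31, 32, 36, 48 };
    const uint64_t GiB = 1024ULL * 1024ULL * 1024ULL;

    for (size_t i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        uint64_t bytes = 1ULL << widths[i];
        printf("%2u bits -> %8llu GiB\n", widths[i],
               (unsigned long long)(bytes / GiB));
    }
    return 0;
}
```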

Anonymous 0 Comments

Imagine you live on a street and there’s a rule that states the address can’t be more than 2 characters. You could never have more than 99 homes on the street, and if somehow you could, you wouldn’t be able to give it an address, so how would anyone be able to send mail to those homes? But change the rule from 2 to 4, and suddenly you have the ability to have 9999 homes on the street.

The number of bits a system uses is this same kind of concept. If memory can only be addressed with 32 bits, there is a limit to how much memory can be accessed. Even if you have more physical memory than that limit, the software can’t get to it, because it can’t address any space beyond what 32 bits can name. By increasing it to 64, your addresses can be significantly larger, allowing access to more memory.
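
The analogy translates directly into arithmetic: each extra decimal digit multiplies the number of possible house addresses by 10, and each extra bit multiplies the number of possible memory addresses by 2. A small C sketch of just that counting:

```c
#include <stdio.h>

int main(void) {
    // House numbers: each extra decimal digit multiplies the possibilities by 10.
    unsigned long long homes = 1;
    for (int digits = 1; digits <= 4; digits++) {
        homes *= 10;
        printf("%d-digit addresses -> %llu possible house numbers\n",
               digits, homes);
    }

    // Memory addresses: each extra bit multiplies the possibilities by 2.
    printf("32-bit addresses   -> %llu possible bytes\n", 1ULL << 32);
    // 2^64 is one more than the largest unsigned long long,
    // so show it as a product of two halves instead.
    printf("64-bit addresses   -> %llu x %llu possible bytes\n",
           1ULL << 32, 1ULL << 32);
    return 0;
}
```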

Anonymous 0 Comments

Prepping pasta… is that what people call it these days?