How it is possible for 8GB GPUs to be okay while the game is using 11GB VRAM

[https://youtube.com/clip/UgkxWr7LKhCJj8IyZB6XVjUwPG8h0Gnwoxd-](https://youtube.com/clip/UgkxWr7LKhCJj8IyZB6XVjUwPG8h0Gnwoxd-)

In this clip, Hardware Unboxed says that 8GB GPUs are okay up to 11GB of VRAM usage, but that going beyond 12GB kills performance.

Can someone explain how this is possible? Does it have something to do with normal system RAM being used as a stand-in for VRAM to some degree?

3 Answers

Anonymous 0 Comments

I imagine it has to do with texture compression and its respective ratio, which is probably limited by the GPU's decoding speed? But I could be wrong.
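For context on that guess: GPUs really do use fixed-ratio block compression formats for textures (BC1 and BC7 on desktop), which shrink texture memory by a known factor, though whether that explains the clip is speculative. A quick illustration of the savings for a single 4K texture:

```cpp
#include <cstdint>
#include <iostream>

// Memory cost of one 4096x4096 texture under common GPU formats.
// RGBA8 is uncompressed at 4 bytes/texel; BC7 is 1 byte/texel (4:1);
// BC1 is 0.5 bytes/texel (8:1). These ratios are fixed by the format.
int main() {
    const std::uint64_t texels = 4096ULL * 4096ULL;
    std::cout << "RGBA8: " << texels * 4 / (1 << 20) << " MiB\n";  // 64 MiB
    std::cout << "BC7:   " << texels * 1 / (1 << 20) << " MiB\n";  // 16 MiB
    std::cout << "BC1:   " << texels / 2 / (1 << 20) << " MiB\n";  // 8 MiB
}
```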

Anonymous 0 Comments

It will use system RAM, yes. You still want plenty of on-GPU memory because system RAM is much slower. A bit of overflow won't be too bad, but too much will kill performance.

Anonymous 0 Comments

The driver will offload some of the data to system memory if the video memory completely fills up. This lets an application use much more video memory than the card actually has, at the cost of performance. The bus between main memory and the GPU is vastly slower, in both bandwidth and latency, than the bus between VRAM and the GPU, and that system RAM bus is also often under load from the parts of the game running on the CPU.

How much of a performance hit you take depends on what gets offloaded. If the offloaded data is only rarely used, the performance impact will be minimal, while if it's major data used many times per frame, the impact will be huge. The offloading process likely uses a least-recently-used scheme: when data is used, the driver ensures it's in VRAM, and if that requires kicking something else out, it evicts whatever has gone the longest since its last use.
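As a rough illustration of that least-recently-used idea, here is a toy C++ sketch of a VRAM cache that evicts the oldest-used resource when a new one doesn't fit. Everything here (the class, the sizes, the resource names) is made up for illustration; a real driver's residency manager is far more sophisticated.

```cpp
#include <cstdint>
#include <iostream>
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Toy model of least-recently-used VRAM residency. Resources are
// "touched" when the GPU uses them; if VRAM is full, the resource
// with the oldest last-use gets evicted to system RAM.
class VramLru {
public:
    explicit VramLru(std::uint64_t capacityBytes) : capacity_(capacityBytes) {}

    // Mark a resource as used this frame, loading it into VRAM if needed.
    void touch(const std::string& name, std::uint64_t sizeBytes) {
        auto it = index_.find(name);
        if (it != index_.end()) {
            // Already resident: move to the "most recently used" end.
            order_.splice(order_.end(), order_, it->second);
            return;
        }
        // Not resident: evict least-recently-used entries until it fits.
        while (used_ + sizeBytes > capacity_ && !order_.empty()) {
            const auto& victim = order_.front();
            std::cout << "evict to system RAM: " << victim.first << "\n";
            used_ -= victim.second;
            index_.erase(victim.first);
            order_.pop_front();
        }
        std::cout << "upload to VRAM: " << name << "\n";
        order_.emplace_back(name, sizeBytes);
        index_[name] = std::prev(order_.end());
        used_ += sizeBytes;
    }

private:
    using Entry = std::pair<std::string, std::uint64_t>;
    std::uint64_t capacity_;
    std::uint64_t used_ = 0;
    std::list<Entry> order_;  // front = least recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};

int main() {
    VramLru vram(8ULL << 30);                  // pretend 8 GB card
    vram.touch("level_textures", 6ULL << 30);  // 6 GB
    vram.touch("character_a", 1ULL << 30);     // 7 GB total
    vram.touch("character_b", 1ULL << 30);     // 8 GB total, full
    vram.touch("offscreen_npc", 1ULL << 30);   // forces an eviction
}
```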

Now, a lot of applications will preload data they do not immediately need. For example, a video game will generally have all nearby characters loaded, even if some are not actually on screen at any given time, whether due to the camera angle or due to walls. In that case, those off-screen characters are likely to be the data that gets offloaded. You may see slightly worse frame times when one of those characters actually comes on screen, but it's not frequent enough to have a noticeable performance impact.

Another source of this preloading is what are known as LODs (levels of detail). The idea is that you don't need as much resolution for objects farther away, so you can render a simpler version using smaller textures and get the same, sometimes even better, results for cheaper. The application is likely* to still load the full-resolution versions of the objects and textures, even though they are not actively being used. This simplifies the application's loading and helps avoid hitches as objects move around and more detail needs to be loaded.
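To make the LOD idea concrete, here is a hypothetical distance-based LOD table in C++. The thresholds, level count, and texture sizes are invented for illustration; real engines typically pick LODs from screen-space error metrics with per-asset tuning.

```cpp
#include <array>
#include <cstddef>
#include <initializer_list>
#include <iostream>

// Hypothetical distance-based LOD table: each level roughly halves
// texture resolution, and farther objects use cheaper levels.
struct LodLevel {
    float maxDistance;  // use this level while the object is closer than this
    int   textureSize;  // square texture resolution in texels
};

constexpr std::array<LodLevel, 4> kLods{{
    {10.0f, 4096},  // LOD 0: full detail, close up
    {30.0f, 2048},  // LOD 1
    {80.0f, 1024},  // LOD 2
    {1e9f,   512},  // LOD 3: everything farther away
}};

std::size_t selectLod(float distanceToCamera) {
    for (std::size_t i = 0; i < kLods.size(); ++i) {
        if (distanceToCamera < kLods[i].maxDistance) return i;
    }
    return kLods.size() - 1;
}

int main() {
    for (float d : {5.0f, 25.0f, 60.0f, 200.0f}) {
        const std::size_t lod = selectLod(d);
        std::cout << "distance " << d << "m -> LOD " << lod << " ("
                  << kLods[lod].textureSize << "x"
                  << kLods[lod].textureSize << " texture)\n";
    }
}
```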

If you go well beyond the VRAM capacity, however, the driver has to start offloading data that is actually being rendered. That means doing the transfer every frame, or even multiple times per frame, which will quickly kill performance. Exactly when the major problems start depends on how the application uses its memory.
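Some back-of-the-envelope numbers show why per-frame transfers are so punishing. Assuming a roughly 32 GB/s PCIe 4.0 x16 link (a theoretical figure, not a measurement) and a working set that overflows VRAM by 3 GB, say 11 GB of usage on an 8 GB card:

```cpp
#include <iostream>

// Back-of-the-envelope math for why per-frame transfers kill performance.
// The bandwidth figure is a rough theoretical number, not a measurement.
int main() {
    const double pcieGBps   = 32.0;  // assumed PCIe 4.0 x16 host-to-GPU link
    const double overflowGB = 3.0;   // e.g. 11 GB working set on an 8 GB card
    const double targetFps  = 60.0;

    // If the overflowed data must cross the bus every frame, the bus
    // alone caps the frame rate, no matter how fast the GPU is.
    std::cout << "bus-limited ceiling: " << pcieGBps / overflowGB
              << " fps\n";                               // ~10.7 fps
    std::cout << "bandwidth needed for 60 fps: " << overflowGB * targetFps
              << " GB/s\n";                              // 180 GB/s
}
```

Even in this idealized model, the bus alone caps the game near 10 fps, and hitting 60 fps would require about 180 GB/s of transfer bandwidth, which is why performance falls off a cliff rather than degrading gracefully.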

* Some games dynamically load only the LOD levels they actually need. This is much more common in an open-world, no-loading-screen game than in a closed-world or top-down game.
