The Xbox Series X has its CPU and GPU on a single chip, but the CPU runs at 3.8 GHz and the GPU runs at only 1.8 GHz. Why can’t the GPU run at 3.8 GHz too?


This also applies to the PS5: why can’t the GPU run as fast as the CPU?


4 Answers

Anonymous 0 Comments

Ultimately it doesn’t matter whether the CPU and GPU are on the same chip or are completely separate hardware components. Heat is the enemy. A typical consumer CPU has only 4-8 cores, but each core can handle complicated tasks, so its circuitry is far more complex. A GPU can have hundreds or thousands of cores, but each one does the kind of maths that is relatively simple (for a processor) and can be done in parallel.

More things doing work means more electricity, which means more heat. Push the clock frequency too high and you end up with more heat than current cooling solutions can get rid of. That’s why the GPU isn’t running at the same clock speed as the CPU.
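To put rough numbers on that, here is a back-of-the-envelope sketch in Python using the standard CMOS dynamic-power relation (power ≈ activity × capacitance × voltage² × frequency). All of the specific figures are invented for illustration, not real Series X numbers; the key point is that running faster usually also requires higher voltage, so power, and therefore heat, grows much faster than linearly with clock speed.

```python
# Back-of-the-envelope CMOS dynamic power: P ~ alpha * C * V^2 * f.
# Every number below is invented for illustration, not a real Series X figure.

def dynamic_power_watts(alpha, capacitance_farads, voltage, frequency_hz):
    """Classic switching-power estimate for a CMOS chip."""
    return alpha * capacitance_farads * voltage**2 * frequency_hz

ALPHA = 0.2         # hypothetical fraction of transistors switching per cycle
CAPACITANCE = 5e-7  # hypothetical effective switched capacitance, in farads

# Higher clocks usually need higher voltage, so power (and heat) climbs
# much faster than the frequency itself.
for freq_ghz, volts in [(1.8, 0.90), (2.8, 1.05), (3.8, 1.20)]:
    watts = dynamic_power_watts(ALPHA, CAPACITANCE, volts, freq_ghz * 1e9)
    print(f"{freq_ghz} GHz @ {volts} V -> ~{watts:.0f} W")
# 1.8 GHz @ 0.9 V -> ~146 W
# 3.8 GHz @ 1.2 V -> ~547 W
```

Doubling the clock in this toy model nearly quadruples the power, which is exactly the kind of heat wall that forces a wide chip like a GPU to run slower.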

Anonymous 0 Comments

The CPU is like a fast sports car. If you’re trying to get a vial of anti-venom from the lab to the hospital as quickly as possible, you want a vehicle that can go as fast as possible.

The GPU is like a big truck. If you’re trying to get a huge shipment of anti-venom from LA to NY, a truck is going to be far more practical. It has much more cargo space and it can go many more hours without needing a refill, even though it doesn’t travel as fast.

The CPU and GPU are good at different things, and they work together. The CPU needs to finish individual tasks as fast as possible; the GPU needs high throughput, not raw speed.

Anonymous 0 Comments

I don’t know if you are in the US, but US high schools have a thing called the PACER test in physical education.

You have to run a short distance to reach a “checkpoint” within a few seconds, and you keep running back and forth between two checkpoints with shorter and shorter time limits until you fail to reach the checkpoint in time.

This is kind of how cycle times work in computer chips. The limiting factor in CPU clock speeds is whether the electricity can make it to the “checkpoint” in time.

So clock speed is related to cycle time: 3.8 billion cycles per second means each cycle takes about 0.26 nanoseconds.
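A quick sanity check of that arithmetic in Python (3.8 GHz is the CPU clock and 1.8 GHz the GPU clock from the question):

```python
# Cycle time is just the reciprocal of the clock frequency.
for name, freq_hz in [("CPU @ 3.8 GHz", 3.8e9), ("GPU @ 1.8 GHz", 1.8e9)]:
    period_ns = 1 / freq_hz * 1e9  # convert seconds to nanoseconds
    print(f"{name}: one cycle = {period_ns:.3f} ns")
# CPU @ 3.8 GHz: one cycle = 0.263 ns
# GPU @ 1.8 GHz: one cycle = 0.556 ns
```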

Within those 0.26 nanoseconds, everything in the CPU has to “settle.” Chips are built from devices called transistors, and electricity flowing through them does the computation. A chip is designed to do a small amount of computation every cycle, with the transistors steering how electricity flows through the CPU.

Within those 0.26 nanoseconds, electricity has to travel through a chain of transistors and reach a “checkpoint” that saves what was just computed. If there are too many transistors in the chain, 0.26 nanoseconds isn’t enough: the computation isn’t done before the next computation step starts, and things go bad from there.

So the goal with clock speed is to make the cycle time as small as possible while every circuit on the CPU running at that clock can still make it in time for its “checkpoint.”
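This “slowest circuit sets the pace” rule is how a chip’s maximum clock falls out of its design. Here is a toy sketch in Python; the named paths and their delays are invented for illustration:

```python
# Toy timing model: each entry is the total delay, in nanoseconds, for
# electricity to propagate through one chain of transistors between two
# "checkpoints". The delays are invented for illustration.
path_delays_ns = {
    "adder":        0.21,
    "multiplier":   0.26,  # the slowest path: it sets the pace for everyone
    "load_address": 0.18,
    "branch_check": 0.15,
}

# The clock can tick no faster than the slowest ("critical") path allows.
critical_path_ns = max(path_delays_ns.values())
max_freq_ghz = 1 / critical_path_ns  # 1 / nanoseconds comes out in GHz

print(f"critical path: {critical_path_ns} ns -> max clock ~{max_freq_ghz:.2f} GHz")
# critical path: 0.26 ns -> max clock ~3.85 GHz
```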

There are a few conclusions from this.

1. A faster clock speed doesn’t necessarily mean faster compute speed when comparing different chips. One chip might have a slower clock but do more in a single cycle (see the sketch after this list). For the same chip, though, if you can increase the clock speed without bad effects, performance will improve.

2. You want all of the circuits in your chip to take about the same amount of time to hit their checkpoint. If one circuit takes too long, it drags down the speed of all the others, because that circuit becomes the limiting factor on how fast you can cycle. This is what CPU engineers painstakingly optimize. The big problem is that a circuit’s timing depends on its inputs: different inputs take different amounts of time, and to make things work correctly you have to set the checkpoint for the slowest case.
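To illustrate point 1 with invented numbers: a chip’s throughput is roughly its clock frequency times the work it completes per cycle, so a wide-but-slow design can far outrun a narrow-but-fast one.

```python
# Throughput ~ clock frequency x operations completed per cycle.
# Both per-cycle figures are invented purely for illustration.
def throughput_gops(freq_ghz, ops_per_cycle):
    """Billions of operations per second for a hypothetical chip."""
    return freq_ghz * ops_per_cycle

cpu_like = throughput_gops(3.8, ops_per_cycle=8)     # fast clock, narrow
gpu_like = throughput_gops(1.8, ops_per_cycle=2048)  # slow clock, very wide

print(f"CPU-ish chip: {cpu_like:,.0f} Gops/s")  # ~30 Gops/s
print(f"GPU-ish chip: {gpu_like:,.0f} Gops/s")  # ~3,686 Gops/s
```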

Point 1 is the bigger reason the Xbox’s graphics chip can run slower: it simply does more in one cycle. You could design a faster-clocking GPU by reducing the amount done per cycle, but graphics tasks don’t benefit much from that, so why bother?

Anonymous 0 Comments

A GPU runs very specific tasks. Each individual task is actually simpler than the “general” tasks a CPU performs, but the GPU runs an enormous number of them at the same time.

Because the GPU is doing so much work in parallel, it draws more power and generates more heat (modern GPUs typically dissipate significantly more heat than CPUs), so its clock is kept lower to stay within the cooling budget.
