Why does clock speed matter on a CPU, and why do some top-tier CPUs have lower clock speeds than some from nearly 10 generations ago?


I have a good understanding of what clock speed is, but why does it matter?

For the second question, I was wondering because, for example, the new i9-14900K has a base clock speed of 3.2 GHz, whereas my previous desktop CPU, the i7-4790K, had a base clock speed of 4.0 GHz. Why hasn’t this number steadily gone up through the years?


32 Answers

Anonymous 0 Comments

Increasingly, it is the number of cores that matters, which is essentially multiple processors at that clock speed packaged together. Other parts of the computer also limit speed, so a super fast chip may not have any impact on overall performance: bus speeds, graphics card speed, whether you’re using a solid state drive or one that spins, etc.

Anonymous 0 Comments

The clock speed is just how fast the “tick” inside the processor goes: it’s the number of smallest-possible operations the thing can do in a second.

It doesn’t actually tell you very much about how much work is getting done. A lot more goes into performance than just counting really fast. More CPU cores, more on-board cache, faster memory access, video adapters to take load off the CPU, faster disks, and more useful instruction sets all make a processor faster in practice, even if the clock speed looks lower on paper.

Anonymous 0 Comments

The higher the clock speed, the more power required and the more heat generated. At a certain point it gets too difficult – nigh impossible – to cool a processor effectively in a home machine.

Instead, manufacturers make multi-core processors. This means that instead of 1 processor running at a given speed, you effectively have multiple running simultaneously at a slightly lower speed. This means that each core can focus on different tasks at the same time. Imagine you have a pile of stuff to haul away. You could throw it all into one trailer attached to a Ferrari, or multiple trailers hitched to multiple jeeps. The jeeps may be slower, but they’re working at the same time.

The reality is of course more complicated but for an ELI5, that’s the best way to look at it.

The 14900K has 24 cores; the 4790K had 4. The 14900K can effectively do 6x the work of the 4790K, and its cooling requirements are lower than if you had one core running 6x faster, because at the lower clock speed each individual core only produces so much heat.
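If you want to see the jeep-vs-Ferrari tradeoff on your own machine, here is a minimal Python sketch: the same pile of busywork run one load at a time, then spread across your cores with multiprocessing. The workload is invented for illustration; exact timings will depend on your CPU.

```python
# Same pile of work, hauled serially vs. in parallel across cores.
import time
from multiprocessing import Pool

def haul(load):
    # Stand-in for one "trailer load" of CPU work: a pure busy loop.
    total = 0
    for i in range(5_000_000):
        total += i
    return total

if __name__ == "__main__":
    loads = list(range(8))

    start = time.perf_counter()
    for load in loads:              # one Ferrari: loads done back to back
        haul(load)
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool() as pool:            # several jeeps: loads spread across cores
        pool.map(haul, loads)
    print(f"parallel: {time.perf_counter() - start:.2f}s")
```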

Edit: there are of course even more things to consider, such as threads, cache size, memory bandwidth, etc. But that would overcomplicate matters too much for this sub.

Anonymous 0 Comments

There are CPUs that can clock faster, but as you increase clock speeds, you need more and more cooling and power to get them to run at those speeds. It makes much less sense to keep increasing speeds with that in mind, especially as a consumer product. Instead, you start distributing the CPU load across more and more cores in order to use resources more efficiently. They can each be processing different instructions at the same time, and as a whole they perform a series of operations much faster than an individual core could, even though the single core might have a higher theoretical clock speed.
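To put toy numbers on the power problem: dynamic power scales roughly with voltage squared times frequency, and reaching higher frequencies usually requires more voltage too. The voltage/frequency curve below is invented purely for illustration; real chips are tuned per part.

```python
# Toy model: dynamic power ~ V^2 * f, with voltage rising as frequency does.
def dynamic_power(freq_ghz, base_voltage=1.0):
    voltage = base_voltage * (freq_ghz / 4.0) ** 0.5  # made-up V/f curve
    return voltage ** 2 * freq_ghz                    # capacitance folded in

for f in (3.0, 4.0, 5.0, 6.0):
    print(f"{f} GHz -> {dynamic_power(f):.2f} (relative power)")

# Power climbs much faster than frequency, which is why several slower
# cores can beat one fast core inside the same power budget.
```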

Anonymous 0 Comments

Clock speed is hard to increase and takes a lot of extra wattage for each additional GHz.
For CPU performance we have single-threaded performance and multithreaded performance.

There are essentially 3 or 4 factors in CPU performance:

* Clock speed, which is analogous to an engine’s RPM.
* How many cores the CPU has (not every workload scales with more cores).
* IPC, or instructions per clock, which is analogous to an engine’s torque: how much “work” is done per “clock”, so a 4 GHz CPU with higher IPC will outperform a 4 GHz CPU with lower IPC.
* Cache, the memory inside the CPU (how much there is and how fast it is), although some would argue that cache is really a contributing factor to IPC.
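As a rough sketch of how those factors combine (the IPC values here are made up for illustration; real IPC varies per workload):

```python
# Back-of-the-envelope model: throughput ~ cores * clock * IPC.
def relative_perf(cores, clock_ghz, ipc):
    return cores * clock_ghz * ipc

old = relative_perf(cores=4, clock_ghz=4.0, ipc=1.0)    # 4790K-style chip
new = relative_perf(cores=24, clock_ghz=3.2, ipc=1.6)   # 14900K-style chip
print(f"{new / old:.1f}x")  # ~7.7x in this toy model, despite the lower clock
```

Real chips don’t scale that cleanly (the 14900K mixes fast and slow cores, and many workloads can’t keep 24 cores busy), but it shows why the clock number alone tells you very little.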

For something like Intel 14th gen, they list the TDP (PL1) as 125 W. You’ll notice that the 14700K has a higher base clock than the 14900K (which has 4 more cores): that is because the base clocks you’re seeing are what both CPUs will give you at the 125 W rated TDP (of course these CPUs can boost past 125 W, which is where the boost clock ratings come in). A 14900K, for example, can boost a core to 6 GHz.

To increase CPU performance, you can either increase IPC, increase cache, increase clock speed (all improve single threaded performance) or add more cores (strictly increases multi-core performance).

A new architecture is required to improve IPC. Intel (from 6th – 10th gen), for example, had the same fundamental architecture and IPC. They increased performance by adding more cores and clock speed because their next architecture was delayed.

Your 14900K at 3.2 GHz is going to be faster than a 4790K at 4 GHz due to more IPC, more cache, and more cores.

Anonymous 0 Comments

You say you have an understanding of clock speed so I’ll skip most of what I was going to say but…

In regard to increases in clock speed: you can think of the clock speed like the ticking of a real clock, where each tick lets the CPU do some kind of logical operation; think of them like small maths equations.
Many parts of the CPU rely on these equations, and to work properly each part must be in sync with the speed at which these equations happen – each tick.

If you tried to do two of these equations within a single tick, you couldn’t, because it would not be in sync (this is limited by the physics of electrical current in the transistors). Instead, Intel/AMD etc. have added another core that has its own clock, so now you can do both equations in the same time – a single tick – but they’re done in physically different processing units.

If you take a look at other CPUs like AMD’s X3D line, you’ll notice they clock lower but perform better in certain workloads like games. This is because different programs behave differently: some see little benefit from doing more calculations per second (maybe they simply aren’t doing that many), and instead want to access lots of data very quickly, which those chips’ much larger cache serves well. Now you get increased performance without increasing the clock speed.
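Here is a small numpy sketch of that effect: the same amount of arithmetic slows down when the data is scattered in memory, because far more of it has to move through the caches. Exact timings will vary by machine and cache size.

```python
# Sum the same number of elements, contiguous vs. spread out in memory.
import time
import numpy as np

data = np.ones(64_000_000, dtype=np.float32)  # ~256 MB, larger than any cache
contig = data[:4_000_000]                     # 4M adjacent elements
strided = data[::16]                          # 4M elements, 16 apart

for name, arr in (("contiguous", contig), ("strided", strided)):
    start = time.perf_counter()
    for _ in range(10):
        arr.sum()
    print(f"{name}: {time.perf_counter() - start:.3f}s")

# Same arithmetic, but the strided sum drags ~16x more memory through
# the caches and is usually several times slower.
```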

Having more cores or a higher clock speed is never strictly better or worse; it depends on the architecture, the program, and the instruction set of the CPU (ARM/x86).

Anonymous 0 Comments

Note that base clock is kinda irrelevant for newer chips. If the CPU is unloaded, it’ll drop to a low-power state around 800 MHz. For heavy single-core workloads, that i9 is going to be hitting 6 GHz provided some conditions are met, whereas the 4790K maxes out at 4.4 GHz. For multi-core workloads, the i9 is still going to be well above 5 GHz.

For another example, the i7 in my laptop has a base clock of 1.8 GHz but boosts to 4.8 GHz under load. The base clock really doesn’t tell me anything.
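You can watch this for yourself. On Linux, /proc/cpuinfo reports each core’s current clock, so a few lines of Python show the swing between idle and load (other operating systems need different tools):

```python
# Print the spread of current core clocks (Linux only).
def current_clocks():
    clocks = []
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("cpu MHz"):
                clocks.append(float(line.split(":")[1]))
    return clocks

clocks = current_clocks()
print(f"{min(clocks):.0f}-{max(clocks):.0f} MHz across {len(clocks)} cores")

# Run it while the machine is idle, then again under load: the readings
# swing far below the base clock and up toward the boost clock.
```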

Anonymous 0 Comments

Think of it like this: you have 10 things that need to be done, and each one takes a minute. You could have 1 person do the ten things and be done in 10 minutes, or you could have 10 people each do one thing at the same time and be done in 1 minute.

Transistors are like the people here, and different chips will have different numbers of transistors per core. If one chip has a million transistors and runs at 4 GHz, and another has 3 million and runs at 2 GHz, the one with triple the transistor count can still be faster than the one with fewer.

Anonymous 0 Comments

Clock speed determines how long it takes to execute a certain number of instructions (not always exactly, but pretty much). So if you can do more clock cycles in one unit of time, then all else being equal, you can do more work/calculations.
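In equation form: time = instructions / (IPC x clock). A tiny worked example with made-up numbers:

```python
# Execution time for a fixed chunk of work, all numbers illustrative.
instructions = 8_000_000_000  # 8 billion instructions to execute
ipc = 2.0                     # instructions completed per cycle (made up)
clock_hz = 4.0e9              # 4 GHz

seconds = instructions / (ipc * clock_hz)
print(f"{seconds:.2f}s")  # 1.00s; doubling either IPC or clock halves it
```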

Anonymous 0 Comments

Ok, virtually every other response in this thread is missing a big point. Unless you put the world’s worst cooler on a 14900K, it will literally never run at 3.2 GHz. Modern CPUs have what is called a boost clock, which, with proper cooling, is the speed they will actually run at. The boost clock on a 14900K is either 5.9 or 6 GHz, I don’t recall offhand. In reality it can’t run at that speed continuously, but it will run somewhere north of 5.5 GHz. CPUs will automatically increase their clocks until they hit a temperature or power limit. AMD CPUs do something very similar, albeit at slightly lower clock speeds, around 5.7-5.8 GHz, and at much lower power draw.
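That boost behavior is essentially a control loop: raise the clock step by step until a limit is reached. A deliberately crude Python sketch of the idea, with invented numbers (a real chip also backs off and juggles power, current, and per-core limits):

```python
# Toy boost loop: step the clock up until the temperature limit is hit.
clock, temp = 3.2, 40.0            # GHz, degrees C
TEMP_LIMIT, MAX_BOOST = 100.0, 6.0

while clock < MAX_BOOST and temp < TEMP_LIMIT:
    clock = round(clock + 0.1, 1)  # step the multiplier up
    temp += clock * 0.9            # made-up heating: faster runs hotter
print(f"settled at {clock} GHz, {temp:.0f}°C")
```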

On top of that, the IPC, or the amount of work the CPU does each clock cycle, is higher on modern CPUs.