Why does clock speed matter on a CPU, and why do some top-tier CPUs have lower clock speeds than some from nearly 10 generations ago?


I have a good understanding of what clock speed is, but why does it matter?

For the second question: the new i9-14900K, for example, has a base clock speed of 3.2 GHz, whereas my previous desktop CPU, the i7-4790K, had a base clock speed of 4.0 GHz. Why hasn’t this number steadily gone up through the years?


In the old days, clock speed was more closely tied to actual processor performance than it is today.

Clock speed is the timer for the chip. It isn’t strictly true that one tick of the clock = one operation on the chip, but it’s close enough for this level of discussion.

So back when CPUs were simple, the only way to make them work faster was to cram in more ticks per second by making the clock run faster.

But these days chips are a lot more complex, and you can’t use a single metric like that to decide which performs better. Among other things, we’ve found ways to get more processing power out of each clock tick (more instructions per cycle, more cores, smarter caches) without cranking up the clock speed. So clock speed isn’t exactly useless to know these days, but it’s dropped from being THE benchmark to being a fairly minor factor.
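
To make that concrete, here’s a rough back-of-the-envelope sketch in Python. The clock speeds are the ones from the question; the instructions-per-cycle (IPC) numbers are invented purely to show the shape of the argument, not real measurements of either chip.

```python
# Rough back-of-the-envelope sketch. Instructions per second is roughly
# clock speed (ticks per second) times IPC (useful work done per tick).

def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Crude throughput estimate: clock ticks/sec times work per tick."""
    return clock_hz * ipc

# Hypothetical figures for the two chips in the question. The IPC values
# are invented purely to illustrate the point, not real measurements.
old_cpu = instructions_per_second(4.0e9, ipc=2.0)  # i7-4790K-era clock
new_cpu = instructions_per_second(3.2e9, ipc=4.0)  # modern chip, slower clock

print(f"old: {old_cpu:.2e} instructions/s")  # 8.00e+09
print(f"new: {new_cpu:.2e} instructions/s")  # 1.28e+10, faster despite lower GHz
```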

This gets even worse when we get into the problem that “performs better” depends on the task at hand. For calculating 3D graphics, for example, what you need is the absolute maximum number of mathematical operations per second, and it doesn’t much matter how you get them. The operations aren’t synchronous; that is, it doesn’t matter if the calculation for one pixel finishes a few bazillionths of a second before the calculations for the rest. As long as they’re all finished in time for the next screen refresh, you’re good.
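
If you want to see the deadline math, here’s a toy calculation. The 60 Hz refresh rate and 1080p frame are just example numbers:

```python
# Toy illustration of the "finish before the next refresh" deadline.
refresh_hz = 60                             # assumed refresh rate
frame_budget_s = 1.0 / refresh_hz           # ~16.7 ms to compute every pixel

pixels = 1920 * 1080                        # a common 1080p frame
time_per_pixel_s = frame_budget_s / pixels  # per-pixel budget if done serially

print(f"frame budget: {frame_budget_s * 1e3:.2f} ms")
print(f"serial budget per pixel: {time_per_pixel_s * 1e9:.1f} ns")
# It doesn't matter which pixel finishes first, only that all of them land
# inside the ~16.7 ms window, which is why massively parallel hardware
# (lots of slow-ish cores) works so well here.
```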

FLOPS (“floating-point operations per second”) used to be another of the standard benchmarks, and by that measure a GPU doing the task described above will kick the ass of any CPU on the market. The GPU does that one thing very, very, very well, but it’s much less useful for general-purpose computing.
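
If you’re curious, you can get a crude, very unscientific FLOPS estimate on your own machine with something like this. It assumes NumPy is installed, and a real benchmark (LINPACK and friends) is far more careful than this:

```python
# Crude single-run FLOPS estimate. Assumes NumPy is installed; a real
# benchmark (LINPACK and friends) is far more careful than this.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = a * b + a          # one multiply + one add per element = 2 flops/element
elapsed = time.perf_counter() - start

flops = 2 * n / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS (single run, memory-bound, take with salt)")
```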

So… yeah. We’ve moved from simple (well, for microprocessor values of simple) chips made for general-purpose computing, with a nice, easy-to-understand metric, to a more complex situation where even general-purpose chips are conceptually harder to reason about and you have to solve all sorts of interesting problems to make them work.

So these days we tend to benchmark CPUs based on how rapidly they can perform a particular task. The task used most frequently is rendering a 3D scene, which might not really be the best measure for day-to-day use, since things that kind of benchmark doesn’t capture may matter more to how fast your computer runs regular programs.
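
The idea of a task-based benchmark is dead simple: time a fixed workload and report a score. Here’s a minimal sketch; the workload (naive prime counting) is arbitrary, which is exactly the problem, because the score only tells you about tasks that resemble the workload:

```python
# Sketch of a task-based benchmark: time a fixed workload, report a score.
import time

def count_primes(limit: int) -> int:
    """Deliberately naive, CPU-bound workload."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

start = time.perf_counter()
found = count_primes(50_000)
elapsed = time.perf_counter() - start
print(f"{found} primes in {elapsed:.2f} s; score: {50_000 / elapsed:.0f} n/s")
```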

I personally think we ought to standardize on the Dwarf Fortress benchmark, at least for single cores: how many updates per second it can process on a standardized test fortress. I’m not actually joking; I think it’d work pretty well for single-core measurements, though not so well for multi-core.
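
Something in that spirit, as a toy sketch: run a fixed, deterministic simulation loop and count updates per second. The world_update function here is a made-up placeholder, not anything from the actual game:

```python
# Toy stand-in for the idea: run a fixed, deterministic simulation and
# report updates per second. world_update is a made-up placeholder for
# one game tick, not anything from the actual game.
import time

def world_update(state: list[int]) -> list[int]:
    # Arbitrary single-threaded busywork standing in for one tick.
    return [(x * 31 + 7) % 1_000_003 for x in state]

state = list(range(10_000))  # the "standardized test fortress"
updates = 0
start = time.perf_counter()
while (elapsed := time.perf_counter() - start) < 2.0:  # measure ~2 seconds
    state = world_update(state)
    updates += 1

print(f"{updates / elapsed:.0f} updates/second on this core")
```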
