Why does clock speed matter on a CPU, and why do some top-tier CPUs have lower clock speeds than some from nearly 10 generations ago?


I have a good understanding of what clock speed is, but why does it matter?

For the second question, I was wondering because, for example, the new i9-14900K has a base clock speed of 3.2 GHz, whereas my previous desktop CPU, the i7-4790K, had a base clock speed of 4.0 GHz. Why hasn’t this number steadily gone up through the years?

In: Technology

32 Answers

Anonymous 0 Comments

Analogy time:

Imagine clock speed is how much cargo a vehicle can haul. Higher clock speed means more stuff gets hauled from A to B; lower clock speed means less.

When CPUs were first developed, they were attaining higher clock speeds with each generation, even speeds that are high by today’s standards, like the late Pentium 4 chips that pushed toward 4 GHz. The problem is, that’s just one “core,” which means it’s only one large vehicle that can go from A to B, then back to A.

Since then, they’ve designed in more cores (more vehicles). Starting out, those cores/vehicles were smaller, but having more of them meant that a 2.2 GHz dual-core was actually faster (in some applications) than a single 4.0 GHz core.

We now have CPUs with 32+ cores that max out above 4.0 GHz each.

So that’s like having 32+ vehicles hauling goods back and forth instead of just one.
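
To put rough numbers on that “in some applications” caveat, here’s a toy Amdahl’s-law calculation (all values illustrative, and it assumes both chips do identical work per cycle, which real chips never do):

```python
# Toy Amdahl's-law comparison: 2.2 GHz dual-core vs. 4.0 GHz single-core.
# Assumes identical work per cycle, which real chips never have.

def effective_speed(clock_ghz, cores, parallel_fraction):
    """Throughput relative to a 1 GHz single core, per Amdahl's law."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

for p in (0.0, 0.5, 0.9, 0.99):
    dual = effective_speed(2.2, cores=2, parallel_fraction=p)
    single = effective_speed(4.0, cores=1, parallel_fraction=p)
    print(f"parallel fraction {p:.2f}: dual 2.2 GHz = {dual:.2f}, "
          f"single 4.0 GHz = {single:.2f}")
```

At a 90% parallel workload the two tie; above that the slower dual-core pulls ahead, and below it the fast single core wins.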

Anonymous 0 Comments

Clock speed does matter for otherwise equal chips: the one with the higher clock rate completes more cycles per second, and therefore more work in a given amount of time.

Beyond that, there are many factors besides clock rate that improve or degrade a chip’s performance.

Ultimately, what you really want to look at is how a given chip will perform for a specific task.

Doing a bunch of scientific computation? You want high FLOPS (floating-point operations per second).

At the end of the day, focusing on the output a chip can produce matters more than simple stuff like clock rate and the like.
For example: if I, a hypothetical chip company, wanted to make a chip with a 10 GHz clock, I could. The approach would be to add many pipeline stages (which in effect splits the processing of each instruction into many small steps). There are unintended side effects of doing this (branch mispredictions being top of mind currently), but I could happily and accurately advertise a 10 GHz CPU.
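
To make that tradeoff concrete, here’s a back-of-the-envelope sketch. Every number in it (stage counts, branch frequency, misprediction rate) is invented for illustration:

```python
# Why deeper pipelines (and thus higher clocks) pay less than advertised.
# All numbers are made up for illustration.

def instr_per_second(clock_ghz, pipeline_depth,
                     branch_freq=0.2, mispredict_rate=0.05):
    # A mispredicted branch flushes roughly the whole pipeline,
    # wasting ~pipeline_depth cycles of work.
    cycles_per_instr = 1 + branch_freq * mispredict_rate * pipeline_depth
    return clock_ghz * 1e9 / cycles_per_instr

shallow = instr_per_second(clock_ghz=4.0, pipeline_depth=14)
deep = instr_per_second(clock_ghz=10.0, pipeline_depth=60)
print(f"4 GHz, 14 stages : {shallow:.2e} instructions/s")
print(f"10 GHz, 60 stages: {deep:.2e} instructions/s")
print(f"2.5x the clock, only {deep / shallow:.2f}x the throughput")
```

On these made-up numbers the 10 GHz chip is still faster, but nowhere near 2.5x faster; the advertised clock oversells the gain.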

All this is to say that computer architecture is an immensely fascinating field, and this question gets at the core of engineering: balancing tradeoffs in the design and implementation of engineered products.

Edit: I want to note that the explanation above is mostly true, but contains simplifications/untruths made for convenience of explanation.

Edit part 2, the sequel: I didn’t realize this was ELI5. The ELI5 explanation can be summed up as “Clock rate is only part of the story of what makes a CPU fast.”

Anonymous 0 Comments

In the old days, clock speed tracked actual processor performance much more closely than it does today.

Clock speed is the timer for the chip. It isn’t strictly true that one tick of the clock = one operation on the chip, but it’s close enough for this level of discussion.

So back when CPUs were simple the only way to make them work faster was to cram in more ticks per second by making the clock run faster.

But these days chips are a lot more complex and you can’t easily use a single metric like that to determine which performs better. Among other things we’ve found ways to get more processing power out of a chip without cranking up the clock speed. So clock speed isn’t exactly useless to know these days but it’s dropped from being THE benchmark to being a fairly minor factor.

This gets even worse once you factor in that “performs better” depends on the task at hand. For calculating 3D graphics, for example, what you need is the absolute maximum number of mathematical operations per second, and it doesn’t much matter how you get them. The operations aren’t synchronous; that is, it doesn’t matter if the calculation for one pixel finishes a few bazillionths of a second before the calculations for the rest. As long as they’re all finished in time for the next screen refresh, you’re good.

FLOPS used to be another of the standard benchmarks (that’s “floating-point operations per second”), and by that standard a GPU doing the task described above will kick the ass of any CPU on the market. The GPU does one thing very, very, very well, but it’s much less useful for general-purpose computing.
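
That GPU-vs-CPU gap falls straight out of the usual peak-FLOPS arithmetic: peak = cores x clock x FLOPs per cycle. The figures below are invented and don’t describe any real chip:

```python
# Peak FLOPS = cores x clock x FLOPs per cycle. Invented figures,
# not the specs of any real chip.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

cpu_ish = peak_gflops(cores=16, clock_ghz=4.0, flops_per_cycle=32)
gpu_ish = peak_gflops(cores=8192, clock_ghz=1.8, flops_per_cycle=2)

print(f"CPU-ish (few fast cores) : {cpu_ish:>8,.0f} GFLOPS")
print(f"GPU-ish (many slow cores): {gpu_ish:>8,.0f} GFLOPS")
```

The lower-clocked chip wins by an order of magnitude on this one metric, which is the whole GPU trade.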

So… yeah. We’ve moved from simple (well, for microprocessor values of simple) chips made for general-purpose computing, with a nice easy-to-understand metric, to a more complex situation where even general-purpose chips are conceptually more complex and you have to solve all sorts of interesting problems to make them work.

So these days we tend to benchmark CPUs based on how rapidly they can perform a particular task, and the task used most frequently is rendering a 3D scene. That might not be the best measure for day-to-day operation, since things that sort of benchmark doesn’t capture may matter more to how fast your computer runs regular programs.

I personally think we ought to standardize on the Dwarf Fortress benchmark, at least for single cores: how many updates per second can it process on a standardized test fortress? I’m not actually joking; I think it’d work pretty well for single-core measurements, though not so well for multi-core.
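
For what it’s worth, an updates-per-second benchmark is easy to sketch. The world_update function below is a stand-in workload I made up, not anything from Dwarf Fortress:

```python
import time

def world_update(state):
    # Stand-in for one simulation tick: branchy, memory-touching work.
    return [(x * 31 + 7) % 1009 for x in state]

state = list(range(100_000))
start = time.perf_counter()
ticks = 0
while time.perf_counter() - start < 1.0:  # run for one second
    state = world_update(state)
    ticks += 1
print(f"{ticks} updates/second on this machine")
```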

Anonymous 0 Comments

You can cook burgers really fast to feed an army, but someone needs to make the patties for you. So you can only go so fast.

What if you had two burger makers? Now you are the bottleneck. What if I replaced you with a cook who cooks exactly as fast as the patties are made? What if you then had two of those makers and two of those cooks? What if those two cooks got faster but the makers didn’t? Add another maker…

See the point? Clock speed is only good if the rest of the system can keep up. At some point it’s better to add more, slower cooks and makers.

Sure, you can learn to go really fast and the maker can go really fast to feed that army, but that requires you to never rest and to stay cool.
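
The kitchen boils down to a one-line throughput model: the whole line moves at the rate of its slowest stage. A tiny sketch, with invented burgers-per-minute rates:

```python
# Throughput of the whole kitchen = rate of the slowest stage.
# Rates are burgers per minute; all numbers invented.

def kitchen_rate(makers, make_rate, cooks, cook_rate):
    return min(makers * make_rate, cooks * cook_rate)

print(kitchen_rate(makers=1, make_rate=4, cooks=1, cook_rate=10))  # 4: maker-bound
print(kitchen_rate(makers=1, make_rate=4, cooks=1, cook_rate=20))  # still 4!
print(kitchen_rate(makers=3, make_rate=4, cooks=1, cook_rate=10))  # 10: now cook-bound
```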

Anonymous 0 Comments

Okay, doing my best to honor the name of the subreddit.

Okay, you’ve got a Honda Civic. It doesn’t carry a lot and doesn’t go very fast, so it takes a long time to move a given number of packages from city A to city B, with a whole bunch of trips back and forth.

Then you buy a Tacoma, which also doesn’t go too fast, but can carry a whole lot more, so you can increase the number of packages you can deliver in a given time. Many fewer trips back and forth to deliver all your packages.

Then you go all out and buy a Ford Raptor. It increases your payload a bit further, but you can also drive that sucker 120 mph, so you can deliver even more packages a lot faster. Multiple trips from A to B seem to happen in the blink of an eye!

Then you change up technologies and find out that a long-haul semi truck may only go 75 mph, but it can handle so much more per trip that it is night-and-day faster at delivering a lot of packages, even though its speed is much lower. By cutting your number of necessary trips from, say, 100 down to 20, it is still a huge improvement over the faster Ford Raptor.

Then you change up further and discover how much more you can move with a freight train than a semi truck, even though the pure speed drops yet another level, down to 50-60 mph. Even though it is slower, it now takes only one trip from A to B to deliver even more packages.

Hopefully that makes sense to your five year old self! 😉

The more instructions you can process and the more data you can move in a single clock tick, the more you can potentially get done with a slower clock than a simpler computer that does less per clock tick but has a faster clock.
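
A loose software-level echo of the same idea, for anyone who wants to feel it firsthand: doing a lot of work per operation beats doing many small operations quickly. (Exact timings vary by machine; the gap is the point.)

```python
# Semi-truck vs. Raptor, in software: one big bulk operation vs.
# millions of tiny fast ones. Requires numpy.

import time
import numpy as np

data = np.arange(5_000_000, dtype=np.float64)

t0 = time.perf_counter()
slow = [x * 2.0 + 1.0 for x in data]   # one element per "trip"
t1 = time.perf_counter()
fast = data * 2.0 + 1.0                # the whole array per "trip"
t2 = time.perf_counter()

print(f"element at a time: {t1 - t0:.3f} s")
print(f"bulk (vectorized): {t2 - t1:.3f} s")
```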

Anonymous 0 Comments

Increasing clock speeds can have diminishing returns when compared to other options. This is because there are other bottlenecks within a computer that will make any speed increases useless by themselves.

One bottleneck is only being able to handle one process/calculation at a time. So rather than increasing speeds to get through each process slightly faster, you increase the core count to let the processor handle multiple tasks at once.

Another major bottleneck is the really fast processor sitting idle while information comes along the bus from the slower memory or hard drive. So designers add more cache right onto the processor, which acts like really fast memory; now the processor can grab information straight from the cache, which operates at nearly the speed of the processor itself.
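
There’s a standard formula for the effect: average memory access time, AMAT = hit time + miss rate x miss penalty. A sketch with invented but ballpark-plausible latencies:

```python
# Average memory access time (AMAT) shows why on-chip cache can matter
# more than a faster core. Latencies in nanoseconds, invented but in a
# realistic ballpark.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

no_cache = amat(hit_time_ns=0.0, miss_rate=1.00, miss_penalty_ns=80)
small_cache = amat(hit_time_ns=1.0, miss_rate=0.10, miss_penalty_ns=80)
big_cache = amat(hit_time_ns=1.0, miss_rate=0.02, miss_penalty_ns=80)

print(f"no cache:    {no_cache:.1f} ns per access")  # 80.0
print(f"small cache: {small_cache:.1f} ns")          # 9.0
print(f"big cache:   {big_cache:.1f} ns")            # 2.6
```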

Anonymous 0 Comments

A CPU is like a counting machine. Clock speed is how fast the counting is done. The faster the counting, the faster you get the result.

It is important, but there are some other things to consider too.

One of them is memory. Imagine you are counting a lot of numbers out of a book. One book has one number per page; another has a hundred. You will be able to count faster with the hundred-number book because you don’t need to flip the pages so often. This is the memory cache. Newer CPUs have bigger and faster caches.

Now imagine that instead of one person doing the counting, you have four. You can split your counting task between the four of you, so even if each person is slower, four people together will probably be faster. These are cores (one core = one counting person). Newer processors tend to have more cores.

Also, imagine a person can drink coffee to count faster for a while. You can’t drink coffee all the time (your bladder gets full and you need to pee more often), but if you time it right, you can drink coffee when you have a lot of counting to do and stop when you don’t. This is boosting the processor speed: coffee is electricity, pee is excess heat.
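
The “four counting people” part maps directly onto code: split one big count across worker processes and add up their answers. A minimal sketch:

```python
# Splitting a big counting job across cores; the speedup depends on how
# many cores you have and on the cost of farming the work out.

from multiprocessing import Pool

def count_chunk(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 40_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(count_chunk, chunks))
    print(total)  # same answer as sum(range(n)), computed by 4 "people"
```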

Anonymous 0 Comments

Another thing to account for when trying to understand CPU speed is IPC, instructions per clock. A modern processor executes more instructions per clock cycle than a legacy one, which explains why a modern single core at 2.4 GHz can do far more work than a legacy single core at 3.5 GHz.
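
As rough arithmetic, useful speed is roughly IPC x clock. The IPC values below are illustrative only; real IPC varies wildly by workload:

```python
# Effective throughput = IPC x clock, with illustrative IPC values.

def giga_instructions_per_sec(ipc, clock_ghz):
    return ipc * clock_ghz

legacy = giga_instructions_per_sec(ipc=1.0, clock_ghz=3.5)  # narrow old core
modern = giga_instructions_per_sec(ipc=4.0, clock_ghz=2.4)  # wide modern core

print(f"legacy 3.5 GHz core: {legacy:.1f}G instructions/s")
print(f"modern 2.4 GHz core: {modern:.1f}G instructions/s")
```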

Then you have cache, cache optimization, an integrated memory controller, and a bunch of other tricks to make the CPU faster. Back in the day, processors did exactly what they were told to; nowadays they have very advanced schedulers that run code “just in case” and then scrap the results if they turn out not to be needed.

Processors are really complex these days. I suggest researching older processors and the optimizations that followed; a good starting point might be the Intel 486. Ask ChatGPT and validate the information against Wikipedia and the official Intel and AMD sites.

Anonymous 0 Comments

Not sure if these analogies work, but it’s like an adult and a small kid each carrying a container of water from the car into the house. The adult can bring in a five-gallon jug while the kid can only bring in a 500 ml bottle.

A lawnmower engine can turn at 3000 RPM just like a car’s engine can, but the car is putting out far more power due to its greater displacement.

A swimmer can cross the length of a pool in, say, 10 strokes. A younger swimmer needs 15 strokes.

Anonymous 0 Comments

Your CPU is like the wheels on a vehicle. The clock speed is the RPM of the wheel. The higher the clock speed, the farther it will travel in the same time period.

However, it’s not the only factor. If your vehicle is a tricycle, a high clock speed doesn’t move you very far. If you have a monster truck, even a lower clock speed will move you farther than the tricycle.