Would a 2.5GHz processor from a decade ago be the same speed as a processor now, at the same frequency?


So would two processors running at 2.5GHz (just a random frequency I’ve chosen), one from 10 years ago and one from this year, be equivalent in processing power? Does this measurement mean they are the same in raw power?

###### 2.5GHz (Old Processor) = 2.5GHz (New Processor)?

Or is it because of newer architecture that the newer one is faster? And in that case, is newer always better due to more cores etc.?

In that case, doesn’t it make it confusing to use gigahertz to measure the speed of newer processors in comparison to previous generations?

Sorry for the long question, it’s just always bothered me and confused me on how to choose the best processor.


It may be close in clock speed, but the old chip’s socket only fits older motherboards, and motherboards (and everything around the CPU) have definitely improved.

No, efficiency in terms of “operations per clock cycle” has drastically improved over the years, and so has power consumption per work done.

Processor manufacturers realized well over a decade ago that using clock speeds as model numbers or performance benchmarks was a bad idea, for exactly that reason.

> Does this measurement mean they are the same in raw power?

No. The frequency is only really useful for comparing the performance of the same processor, or maybe closely related ones. You can think of it like the RPM of an engine: it tells you how many cycles it performs per second, but not how big the cylinders are or how many there are.

Modern processors get a lot more done per cycle so they are not at all comparable.

The short answer is: no, as [this video shows](https://www.youtube.com/watch?v=8QOoQWvrQ-Y).

The longer answer, for those who don’t wanna watch: in your computer, there’s a tiny little clock. Every so often, it says “do something” – that’s a clock cycle. Gigahertz measures how often that happens – or, mathematically, how many times per second the clock says to do something.
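To make the numbers concrete, here’s a minimal sketch of what a clock frequency means in time-per-cycle terms (the 2.5 GHz figure is just the example frequency from the question):

```python
# A cycle's duration is the inverse of the clock frequency.
freq_hz = 2.5e9              # 2.5 GHz = 2.5 billion cycles per second
cycle_time_s = 1 / freq_hz   # duration of one clock cycle, in seconds
print(cycle_time_s)          # 4e-10 s, i.e. 0.4 nanoseconds per cycle
```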

Now, on early CPUs, that “something” was exactly one “instruction” at most. Instructions are the building blocks of all code – stuff like “fetch this byte of memory” or “add these two numbers together”. Complete more instructions per second, and your CPU performs better.

Now, there are two ways to accomplish that goal. The first is to complete more cycles per second – increasing gigahertz. The second, though, is to complete more than one instruction with every cycle. There are several techniques for doing this, but what’s important is that instructions per cycle (IPC) is not fixed.

A modern CPU will tend to have better IPC than an old one. Better IPC with the same cycles per second means more instructions per second – which is what actually matters.
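As a rough sketch of that arithmetic (the IPC values below are made-up illustrative numbers, not measurements of any real chip):

```python
def instructions_per_second(ipc, freq_ghz):
    """Effective throughput: instructions per cycle times cycles per second."""
    return ipc * freq_ghz * 1e9

# Hypothetical figures: same 2.5 GHz clock, very different IPC.
old_cpu = instructions_per_second(ipc=1.0, freq_ghz=2.5)
new_cpu = instructions_per_second(ipc=4.0, freq_ghz=2.5)
print(new_cpu / old_cpu)  # 4.0 – same clock, four times the throughput
```

This is why two chips at the same gigahertz can differ enormously: the frequency term is identical, but the IPC term is not.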

____

The reason we still use gigahertz is because it’s far, far easier to measure – and harder to manipulate. Let’s say one manufacturer’s CPUs perform really well in benchmark x, but less well in benchmark y – while the other manufacturer’s CPUs perform really well in y, but worse in x. Both manufacturers can say “we ran real benchmarks that show our CPU is 20% better than the competition at the same speed” – and conveniently ignore when they’re worse. The best route is to look at independent, reputable third party reviewers, because gigahertz – and all the other stats – isn’t a good indication of performance.

> In that case, doesn’t it make it confusing to use gigahertz to measure the speed of newer processors in comparison to previous generations?

Yup, sure is. People still do it though.

The frequency only tells you how many clock cycles occur in a second. As long as the same amount of work gets done in one clock cycle, then a higher frequency will give you a faster processor.

But the amount of work per clock cycle is not fixed. A modern processor will typically be able to do more work per cycle than a processor from 10 years ago.

Just want to add that GHz only measures clock cycles per second, not instructions completed. There are other very important factors, such as caching layers, which avoid having to go look in RAM for data, which takes time.
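The caching idea can be sketched in a few lines – this is a loose software analogy, not how hardware caches are actually wired, and the millisecond “RAM latency” is wildly exaggerated for illustration:

```python
import time

# Pretend main memory: slow to access.
slow_memory = {addr: addr * 2 for addr in range(100)}

def slow_fetch(addr):
    time.sleep(0.001)          # exaggerated stand-in for RAM latency
    return slow_memory[addr]

cache = {}

def cached_fetch(addr):
    if addr not in cache:      # cache miss: pay the full memory latency once
        cache[addr] = slow_fetch(addr)
    return cache[addr]         # cache hit: answered without touching "RAM"
```

After the first access to an address, repeat accesses skip the slow fetch entirely, which is exactly the time a real CPU cache saves.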

You’ve answered your own question. Newer processors do more PER CLOCK TICK than the old ones do. So even if you get 2.5 Billion ticks per second, each tick does more in the new processor.

There’s too much clever stuff to mention, but as an example, say you had a piece of code (microcode, really) that said “If X = Y, then do this, otherwise do this other thing.” The newer processor might actually start doing BOTH things while it is simultaneously retrieving the values of X and Y. On the first tick, it might start fetching X, fetching Y, the address of the instructions to run if true, and the address of the instructions to run if false. On the next tick it compares X with Y while continuing down both branches. Then, on the third tick, based on whether X = Y, it switches code flow to the branch it already started and discards the other path.
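The “do both, keep one” idea above can be sketched like this – a toy software analogy of eager branch execution, where `eager_branch` and its arguments are made-up names for illustration:

```python
# Toy analogy: evaluate both branches before the condition is resolved,
# then keep the winner and discard the other result.
def eager_branch(x, y, if_true, if_false):
    result_true = if_true()    # both paths run "speculatively"
    result_false = if_false()
    return result_true if x == y else result_false  # discard the loser

print(eager_branch(3, 3, lambda: "equal path", lambda: "unequal path"))
```

A real CPU does this in parallel hardware rather than one after the other, which is why the discarded work costs it almost no time.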