Why do CPUs/CPU cores seem to have been capped at 3–4 GHz for almost a decade?


The reason I learned in my computer science classes is that signals need to travel through these components during each clock cycle. At 4 GHz, a clock cycle lasts 0.25 nanoseconds. Since a signal cannot travel faster than light (3 × 10^8 m/s), the maximum distance it can cover during such a short cycle is 0.25 × 10^(-9) s × 3 × 10^8 m/s = 0.075 m = **7.5 cm**.
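As a rough sanity check, here is a small Python sketch that redoes that arithmetic for a few clock speeds (purely illustrative; it just multiplies the cycle time by the speed of light, which is an upper bound on how fast a real on-chip signal can move):

```python
# Order-of-magnitude check: how far can a signal travel in one clock cycle?
SPEED_OF_LIGHT = 3e8  # m/s, upper bound; real on-chip signals are slower

for freq_ghz in (1, 4, 40):
    cycle_time_s = 1 / (freq_ghz * 1e9)                 # one clock period in seconds
    max_distance_cm = SPEED_OF_LIGHT * cycle_time_s * 100
    print(f"{freq_ghz:>3} GHz -> cycle = {cycle_time_s * 1e9:.3f} ns, "
          f"max signal travel = {max_distance_cm:.2f} cm")
```

At 4 GHz this prints the same 7.5 cm figure as above, and at ten times the frequency it drops to under a centimetre.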

Which means your component has to be small enough that the electrical signals never have to cover more distance than that within a single clock cycle.

Hence why we don’t make CPUs/GPUs with a frequency ten times higher: it would require making chips roughly ten times smaller in every linear dimension (so about a hundred times smaller in area) than they are now.

That reasoning is mostly about orders of magnitude. In practice, these signals may not travel at the speed of light but at maybe 60–80% of it, I’m not sure. They may also need to cross the chip two or three times per cycle, I don’t know either. But the general idea is that we can’t keep making CPUs and GPUs smaller and smaller, and thus we can’t keep increasing the frequency far past a few GHz, even though CPU frequencies increased by a factor of about 1000 between the 1970s and today.

There’s also the heat issue. You can’t let your computer generate more heat than you can evacuate, otherwise you have to throttle the CPU/GPU temporarily to keep it from overheating. Heat generation grows roughly with the square of the frequency (dynamic power scales with voltage squared times frequency, and the voltage itself has to rise with the clock speed). Hence two cores at 1 GHz produce about half the heat of one core at 2 GHz, even though the two options are roughly equivalent in raw computational power. That’s why we multiply CPU cores instead of building architectures around a single really fast core.
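Here is a tiny sketch of that comparison, assuming heat simply scales with frequency squared (a deliberate simplification of the real power model):

```python
# Illustrative only: assume dynamic power grows roughly with frequency squared
# (in reality P ~ C * V^2 * f, and V itself has to rise with f).
def relative_power(freq_ghz, n_cores=1):
    return n_cores * freq_ghz ** 2

print(relative_power(2.0))              # one core at 2 GHz  -> 4.0 units of heat
print(relative_power(1.0, n_cores=2))   # two cores at 1 GHz -> 2.0 units of heat
```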

The challenge, however, is to take maximum advantage of these parallelized architectures (i.e. architectures with several CPU cores instead of one single fast core), and that lies in the developers’ hands. If you write a program that has to run sequentially (where every calculation depends on the result of the previous one), it can’t take advantage of more than one core at a time; Minecraft is an example of that. Conversely, some tasks involve calculations that require no particular order, so the work can be shared between several cores. A good example is image rendering: each pixel can be computed independently of the others, so if you split the calculations among several cores, each one can just do its work without having to coordinate with the others. This is partly why GPUs exist in the first place, by the way: they are highly parallelized architectures suited for image rendering.
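Here is a minimal Python sketch of that idea, using the standard multiprocessing module; the image size and the shade_pixel function are made up purely for illustration:

```python
# Toy example of an "embarrassingly parallel" workload: every pixel can be
# computed independently, so the work splits cleanly across CPU cores.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade_pixel(index):
    x, y = index % WIDTH, index // WIDTH
    # stand-in for a real per-pixel computation (ray tracing, shading, ...)
    return (x * y) % 256

if __name__ == "__main__":
    with Pool() as pool:                  # one worker per CPU core by default
        pixels = pool.map(shade_pixel, range(WIDTH * HEIGHT))
    print(len(pixels), "pixels rendered")
```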
