Why is a processor’s speed not the only important factor in a computer’s performance?

Hello, everyone! I’ve been doing some research into computer hardware lately, and one thing that I keep coming across is this idea that the speed of a processor, while important, isn’t the only thing that affects a computer’s overall performance. I’m having a bit of a hard time wrapping my head around this because I always thought that a faster processor meant a faster computer. Can anyone explain why this isn’t necessarily the case? I’m really interested to learn more about this!

A processor processes data. RAM stores the data to be processed. The RAM and the processor are linked by pathways (the ‘bus’) so the RAM can send the processor things to work on and the processor can send finished work back. Those pathways can only transfer so much data at a time.

If the processor can’t keep up with the work it’s being given, you’ll notice. If the RAM and motherboard can’t keep the processor supplied with data fast enough, you’ll notice. If you’re doing graphics-intensive work like 3D modelling or gaming, a lot of the processing work is being offloaded to a *graphics processing unit* (GPU). And just like the general processor, if it can’t keep up with the work or the rest of the system can’t keep it fed with stuff to work on, you’ll notice.

In discussions about PC performance, you’ll hear a lot of talk about “bottlenecks”. You have all this data flowing around and it’s getting hung up and slowed down somewhere. Common bottlenecks:

HDD too slow: the processor is asking the HDD to send data to RAM and the HDD can’t keep up with the demand. The “bottleneck” is your HDD.

Not enough RAM: The reason we put data into RAM for processing is because sending data to the processor from RAM is much, much faster than trying to send the processor data to work on directly from the HDD. If you don’t have enough RAM, the system has to constantly swap data in and out of RAM from the HDD. That swapping adds extra work and eats up bandwidth, and it can become a bottleneck.

GPU limitations: If the GPU doesn’t have the processing power to keep up, or if you can’t feed it the data it needs fast enough, it will cause a bottleneck.

Software engineering: Be very careful with this one. Among gaming circles, it’s quite trendy these days to complain about “poor optimization”. Most of the people who use that term have absolutely no idea what it actually means. They just heard someone else say it and they started using it because it sounded cool.

When you write software, you’re constantly making decisions about how the data you’re working with is stored and managed. If you make poor choices, the software won’t perform as well as it could. The process of reviewing those choices with the intention of improving the performance of the app is called ‘optimizing’. It’s considered a best practice in optimizing to use something called a ‘profiler’, which helps visualize what your program is doing while it’s running. The profiler monitors the memory and processor usage of the running program to see where things are running smoothly and where they’re getting bottlenecked.
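Just to picture what a profiler is telling you, here’s a crude hand-rolled sketch in C of the underlying idea: measure where the time actually goes before deciding what to “optimize”. A real profiler does this automatically for every function; the two stages below are made-up stand-ins for real work.

```c
/* Crude manual version of what a profiler automates: time each stage
 * of the program and see which one actually dominates. */
#include <stdio.h>
#include <time.h>

/* Hypothetical stages -- stand-ins for real work in a real program. */
static void load_data(void)    { for (volatile long i = 0; i < 50000000;  i++) {} }
static void process_data(void) { for (volatile long i = 0; i < 200000000; i++) {} }

static double seconds_since(clock_t start) {
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    clock_t t;

    t = clock();
    load_data();
    printf("load_data:    %.2f s\n", seconds_since(t));

    t = clock();
    process_data();
    printf("process_data: %.2f s\n", seconds_since(t));

    /* Whichever number is bigger is where optimization effort pays off. */
    return 0;
}
```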

Consider this example: Let’s say I’m making a program that has a bunch of configuration settings the user can control, and those settings influence what the program does and how. The idea is that you make those settings available to the program whenever it might need to look something up. And because there are a lot of options, let’s say that configuration information takes up 100 KB in RAM.

So we’ve got the configuration data, and we’ve got functions within the program that need that data to determine what to do. One option is to send a copy of that data to each function that needs it, every time it needs it. That duplicates the data in memory, and the actual work of copying the information to send it also puts demand on the processor and the bus.

If we can eliminate the need to copy all of that data every time we need it, we can save a lot of data movement and messing around. Instead of copying 100 KB of data, we send 4 bytes of data that contain the address in memory of the data we want to make available to our program’s functions.

100,000 bytes of data to move, or 4 bytes of data to move. Both options ultimately lead to the same end result, but one of them moves only 0.004% as much data as the other. That’s a decision made on the logic side that makes better use of the available options than the alternative, and the benefit shows up regardless of what hardware you’re running it on.
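If you’ve seen a little C, here’s a tiny sketch of that exact trade-off (the Config struct and function names are invented for illustration): one function takes the whole ~100 KB blob by value, so it gets copied on every call, and the other takes just its address.

```c
#include <stdio.h>

typedef struct {
    char settings[100000];   /* stand-in for ~100 KB of configuration data */
} Config;

/* Every call copies the entire 100 KB struct. */
int lookup_by_value(Config cfg, int index) {
    return cfg.settings[index];
}

/* Every call copies only the address of the struct. */
int lookup_by_pointer(const Config *cfg, int index) {
    return cfg->settings[index];
}

int main(void) {
    static Config cfg = {0};   /* static: 100 KB is too big for many stacks */

    printf("size of the data:    %zu bytes\n", sizeof(cfg));
    printf("size of its address: %zu bytes\n", sizeof(&cfg));

    /* Same result either way; only the amount of data moved differs. */
    printf("%d %d\n", lookup_by_value(cfg, 42), lookup_by_pointer(&cfg, 42));
    return 0;
}
```

On a 64-bit system the address is 8 bytes rather than 4, but the point stands: it’s tiny compared to the data it refers to.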

It doesn’t matter if your McLaren can do 300 mph on a clean straightaway if the road is curving and covered in loose rock. How fast you can drive without killing yourself isn’t just a function of the car. It’s also the road conditions and the driver. It’s the same for a PC. Processor power is important, but it’s wasted if the rest of the system can’t keep the processor fed.
