Why is a processor’s speed not the only important factor in a computer’s performance?

Hello, everyone! I’ve been doing some research into computer hardware lately, and one thing that I keep coming across is this idea that the speed of a processor, while important, isn’t the only thing that affects a computer’s overall performance. I’m having a bit of a hard time wrapping my head around this because I always thought that a faster processor meant a faster computer. Can anyone explain why this isn’t necessarily the case? I’m really interested to learn more about this!

45 Answers

Anonymous 0 Comments

Imagine you have three track athletes:

– “A” can run 10 laps in 5 minutes

– “B” can run 5 laps in 5 minutes

– “C” can run 4 laps in 5 minutes

One would imagine “A” is a better athlete because he runs more laps in the same time, but what if I told you this:

– “B”’s track is 4 times longer than the other two

– “C” always runs at the same speed, but he can run with a 50kg backpack on without getting slower

Now you’ll realize that while “A” runs more laps, the overall work he does is less, because his track is shorter and he can’t carry as much weight.

It’s the same with processors: some have a higher clock frequency but do less work per cycle, so they end up with less overall computing power.

Anonymous 0 Comments

There’s always a bottleneck, and usually the bottleneck has to do with processor speed. But not always.

There was a period when a dirt-cheap Celeron 300A could be overclocked to outperform a Pentium II at twice the price, because the Celeron had a tiny cache built onto the chip itself while the PII had a large cache that sat off the chip in two banks on either side of it.

That little bit of speed-of-light latency created by having to carry data to and from points an inch away slowed the Pentium II enough for it to lose in testing. But the only way to get the Celeron up to that level was to overclock it from 300 MHz to 450 MHz, so processor speed still counted for a lot.

Anonymous 0 Comments

I always think of it like this: your processor is a really smart, really fast college professor who can do complex math and calculations. The more cores you have, the more professors you have who can run calculations in parallel. The caveat is that you can only give each of them about one page’s worth of math problems and textbook reference material at a time. That’s why there’s a runner who has to grab a problem from the stack, put it on a piece of paper, and hand it to a professor. If your runners are slow and the task requires multiple pages to be transferred, it’s going to take a long time, because the runners will limit you.

Another thing to consider is the kind of task you’re asking of the professors. If the CPU is your professors, then a GPU is like 20,000 classrooms full of middle and high school algebra students. The kids can’t do complex calculus or anything like that, but if you have heaps of basic math problems to solve, the hundreds of thousands of students are going to finish significantly faster than the 4-32 college professors, even if the runners are moving at light speed to keep the professors topped up with data.

So, tl;dr: the professors are extremely important, but other things in the process can slow them down, or they may simply not be the right tool for the job.
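If you want to see the professor idea outside the analogy, here’s a minimal Python sketch. The prime-counting task, the numbers, and the four-worker split are all made up for illustration; it just runs the same CPU-bound job once on a single core and once split across several:

```python
# Minimal sketch: the same CPU-bound job done by one "professor" (one core)
# versus several working in parallel. The task and the worker count are
# illustrative assumptions, not tied to any particular hardware.
import time
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [start, end) by naive trial division."""
    start, end = bounds
    count = 0
    for n in range(max(start, 2), end):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit = 200_000

    # One core: a single professor works through the whole stack.
    t0 = time.perf_counter()
    single = count_primes((0, limit))
    t1 = time.perf_counter()

    # Four cores: split the stack into four piles, one per professor.
    chunks = [(i * limit // 4, (i + 1) * limit // 4) for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = sum(pool.map(count_primes, chunks))
    t2 = time.perf_counter()

    print(f"single worker: {single} primes in {t1 - t0:.2f}s")
    print(f"four workers:  {parallel} primes in {t2 - t1:.2f}s")
```

The split only pays off because this job is genuinely compute-bound and easy to divide; if the runners (moving data around) were the limit, the extra professors would mostly sit idle.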

Anonymous 0 Comments

No offence to the other comments, but I think they are bad.

Pretend you have a well. The speed of a well is measured in how many times you can pull up a bucket of water in an hour. 20 buckets an hour will give you more water than 10 buckets an hour, right?

Well, there is also the size of the bucket. If you double the size of the bucket (the IPC, or instructions per cycle), you’ll get more water without increasing the speed, which is more accurately called frequency and is measured in GHz.

Now we understand what makes a good well (processor), but there is a whole town to think about too.

Pretend two towns have identical wells, but one town is better about getting people to the well on time to fill the bucket and take the water where it needs to go. For the other town, a new, faster well won’t help at all; what they really need is a better way to organize people so the well doesn’t sit idle.
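To put some rough, invented numbers on the bucket idea: the water you get per hour is bucket size times pulls per hour, just like the work a core does per second is roughly IPC times clock frequency. Both CPUs in this sketch are hypothetical:

```python
# Back-of-the-envelope sketch with invented numbers: roughly,
# work per second = bucket size (IPC) x pulls per hour (clock frequency).
def instructions_per_second(ipc, clock_ghz):
    # instructions per cycle * cycles per second
    return ipc * clock_ghz * 1e9

cpu_a = instructions_per_second(ipc=2, clock_ghz=5.0)  # fast clock, small bucket
cpu_b = instructions_per_second(ipc=6, clock_ghz=3.5)  # slower clock, big bucket

print(f"CPU A: {cpu_a:.1e} instructions/s")  # 1.0e+10
print(f"CPU B: {cpu_b:.1e} instructions/s")  # 2.1e+10 -- wins despite the lower GHz
```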

Anonymous 0 Comments

A processor processes data. RAM stores the data to be processed. The RAM and the processor are linked by pathways (the ‘bus’) so the RAM can send the processor things to work on and the processor can send finished work back. Those pathways can only transfer so much data at a time.

If the processor can’t keep up with the work it’s being given, you’ll notice. If RAM and motherboard can’t keep the processor supplied with data fast enough, you’ll notice. If you’re doing graphics intensive work like 3D modelling or gaming, a lot of the processing work is being offloaded to a *graphics processing unit* (GPU). And just like the general processor, if it can’t keep up with the work or the rest of the system can’t keep it fed with stuff to work on, you’ll notice.

In discussions about PC performance, you’ll hear a lot of talk about “bottlenecks”. You have all this data flowing around and it’s getting hung up and slowed down somewhere. Common bottlenecks:

HDD too slow: the processor instructs the HDD to send data to RAM and the HDD can’t keep up with the demand. The “bottleneck” is your HDD.

Not enough RAM: The reason we put data into RAM for processing is that sending data to the processor from RAM is much, much faster than trying to feed the processor directly from the HDD. If you don’t have enough RAM, the system has to constantly swap data in and out of RAM from the HDD. That adds extra work and eats bandwidth, which can become a bottleneck.

GPU limitations: If the GPU doesn’t have the processing power to keep up, or if you can’t feed it the data it needs fast enough, it will cause a bottleneck.

Software engineering: Be very careful with this one. In gaming circles, it’s quite trendy these days to complain about “poor optimization”. Most of the people who use that term have absolutely no idea what it actually means. They just heard someone else say it and started using it because it sounded cool.

When you write software, you’re constantly making decisions about how the data you’re working with is stored and managed. If you make poor choices, the software won’t perform as well as it could. The process of reviewing those choices with the intention of improving the performance of the app is called ‘optimizing’. It’s considered a best practice in optimizing to use something called a ‘profiler’, which helps visualize what your program is doing while it runs. The profiler monitors memory and processor usage in the running program to see where things are running smoothly and where they’re getting bottlenecked.
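If you’re curious what that looks like in practice, here’s a minimal sketch using Python’s built-in cProfile. The list-versus-set lookup is an invented example of a data-layout choice, just so the profiler has an obvious hot spot to show:

```python
# Minimal profiling sketch: cProfile reports where the time actually goes,
# so you can find the real bottleneck instead of guessing.
import cProfile

def count_hits_list(needles, haystack_list):
    # Each "in" check scans the list from the start: slow for big lists.
    return sum(1 for n in needles if n in haystack_list)

def count_hits_set(needles, haystack_set):
    # Same answer, but the set makes each "in" check a quick hash lookup.
    return sum(1 for n in needles if n in haystack_set)

def main():
    data = list(range(20_000))
    needles = list(range(0, 40_000, 7))
    count_hits_list(needles, data)
    count_hits_set(needles, set(data))

cProfile.run("main()", sort="cumulative")  # prints time spent per function
```

Running it prints a table of calls and timings, and the list version should dominate it, which is exactly the kind of evidence ‘optimizing’ is supposed to be based on.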

Consider this example: Let’s say I have a program I’m making that has a bunch of configuration settings that the user can control and that influence what the program does and how. The idea is that you make those settings available to the program whenever it might need to look something up. And because it’s a lot of options, let’s say that configuration information uses 100 KB in RAM.

So we’ve got the configuration data, and we’ve got functions within the program that need that data to determine what to do. I can send a copy of that data to each function that needs it, every time that function needs it. That means making a copy of the data to send, which duplicates the space requirements, and the work of duplicating the information puts extra demand on the processor and the bus.

If we can eliminate the need to copy all of that data every time we need it, we save a lot of data movement and messing around. Instead of copying 100 KB of data, we send 4 bytes containing the address in memory of the data we want to make available to our program’s functions.

100,000 bytes of data to move, or 4 bytes of data to move. Both options ultimately lead to the same result, but one of them moves only 0.004% of the data the other does. That’s a decision made on the logic side that makes better use of the available options, and the benefit is obvious regardless of what hardware you’re running it on.
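Here’s that trade-off as a rough Python sketch. Python already passes references by default, so the “copy” version has to be forced with deepcopy; the config contents, sizes, and loop counts are all invented for illustration:

```python
# Minimal sketch of "copy the config" vs. "pass a reference to it".
import copy
import time

# Roughly the "100 KB of configuration settings" from the example above.
CONFIG = {f"setting_{i}": "some value " * 10 for i in range(1_000)}

def lookup_setting(cfg):
    # The function only needs to read one value, however the config arrives.
    return cfg["setting_42"]

t0 = time.perf_counter()
for _ in range(1_000):
    # Hand the function its own private copy: deepcopy walks and rebuilds
    # the whole structure on every single call.
    lookup_setting(copy.deepcopy(CONFIG))
t1 = time.perf_counter()

for _ in range(1_000):
    # Hand the function a reference: nothing gets duplicated, the call just
    # tells the function where the existing config lives.
    lookup_setting(CONFIG)
t2 = time.perf_counter()

print(f"copy every call: {t1 - t0:.3f}s")
print(f"pass reference:  {t2 - t1:.3f}s")
```

Both loops produce the same answers; the only difference is how much data gets shuffled around to produce them.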

It doesn’t matter if your McLaren can do 300 mph on a clean straightaway if the road is curving and covered in loose rock. How fast you can drive without killing yourself isn’t just a function of the car. It’s also the road conditions and the driver. It’s the same for a PC. Processor power is important, but it’s wasted if the rest of the system can’t keep the processor fed.

Anonymous 0 Comments

Say you need to move a lot of people through a revolving door. You could spin it twice as fast to get more people through, but alternatively you could also make the door big enough to fit twice as many people per turn.

Anonymous 0 Comments

I have seen people using the highway analogy, but I think it could be used better. Think of the CPU like a highway: the CPU speed is the speed limit on the highway, and the number of cores and the ability to hyperthread those cores are the lanes. Just setting a higher speed limit doesn’t mean more traffic can get through. A single-lane highway won’t move as much traffic, even with a 90 mph speed limit, as an 8-lane highway with a 60 mph speed limit. Then there are also factors like the GPU, which essentially adds HOV/express lanes for that type of work, and the L1-L3 caches, which provide the on/off ramps for the highway. The bigger the cache, the more lanes on those on/off ramps.

If you just want to run a single task as fast as possible, a one- or two-lane highway with a high speed limit can do that well. But most modern applications and games try to multithread everything, so having more cores generally gives better performance, because more traffic can move down the highway at once. Being able to keep that traffic flowing efficiently on and off the highway with bigger caches also helps. And having dedicated express lanes in the form of a GPU means that the calculations that would otherwise be like semis with huge loads clogging the regular lanes can be offloaded there, making it easier for the normal traffic to keep flowing.

Anonymous 0 Comments

Just because your processor can do a billion more “instructions” per second than some other processor doesn’t mean it gets more work done. It’s all about efficiency.

Anonymous 0 Comments

You are the type of guy to ask why two dots can be connected into a straight line and why the fundamental laws of our universe are the way they are. I’m feeling dumber with each question on this sub, ffs.

Anonymous 0 Comments

If you have a bottle full of water and turn it upside down, it pours out at a given speed/rate. If the bottle opening is smaller, it pours out slower. If the opening is larger, it pours out faster.

Now, imagine that you have 4 bottles. Three have small openings (GPU, RAM, chipset) and one has a large opening (a fast CPU). Turn them all over and the CPU pours out fast. But it doesn’t matter how fast it pours out. You still have to wait until the other three pour out.

This concept is the “bottleneck”. A slow GPU and RAM will “bottleneck” a fast CPU. Likewise, a slow CPU will “bottleneck” a fast GPU.

One component can only go as fast as the others allow it to.

This is a similar concept to “a chain is only as strong as its weakest link”. If you have a chain rated to 30,000 pounds but put one link in it that is rated to 10,000 pounds – the chain is only rated to 10,000 pounds.
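If you want the weakest-link idea as plain arithmetic, here’s a toy sketch with made-up frames-per-second numbers; the only point is that the overall rate tracks the slowest part:

```python
# Toy model of the bottleneck/chain idea with invented numbers:
# each component could handle some number of frames per second on its own,
# and the system as a whole runs at roughly the slowest one's pace.
def system_fps(parts):
    return min(parts.values())

pc = {"CPU": 240, "GPU": 90, "RAM": 300}
print(system_fps(pc))   # 90 -- the GPU is the bottleneck

pc["CPU"] = 400         # upgrade a part that was not the bottleneck
print(system_fps(pc))   # still 90 -- nothing visibly changed

pc["GPU"] = 180         # strengthen the actual weakest link
print(system_fps(pc))   # 180 -- now the upgrade shows
```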

In an ideal world, you would have a strong GPU, a strong CPU, adequate RAM, and a decent motherboard that can support it all. If you had the latest and greatest CPU and good RAM but paired them with an old 1060 GPU, your ability to play games would suffer badly. But you’d still be able to do simpler stuff like spreadsheets, Word, and surfing the internet without much issue.