Most modern CPUs have a certain number of cores, let’s say six. Then they have threads, which you could think of as each core’s ability to think. Humans can usually maintain one or two thoughts at the same time cleanly, kind of like we have one or two threads.
Hyperthreading is where each core has a second thread, giving us 6 × 2 in our example of six cores, so 12 threads.
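You can see that 6 × 2 count from software. A minimal sketch in Python: the standard library’s `os.cpu_count()` reports *logical* CPUs, i.e. physical cores times threads per core (the exact number depends on your hardware).

```python
import os

# os.cpu_count() reports logical CPUs: physical cores times threads
# per core. On a 6-core CPU with hyperthreading this would typically
# be 12; on your machine it depends on the hardware (it may even be
# None if the count can't be determined).
logical_cpus = os.cpu_count()
print(f"Logical CPUs (hardware threads): {logical_cpus}")
```

Note that this is the logical count; getting the physical core count portably usually requires a third-party library such as `psutil`.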
Many games lean heavily on a single core, a single thread. For those, a single 5 GHz core would in theory be better than 50 cores at 2 GHz.
However, tasks that utilize multiple threads, such as encoding video on the CPU, benefit massively from more threads.
In short, cores are the brains. Threads are each brain’s thinking lanes. More cores means more lanes. Each core has at least one lane, but most modern systems have two per core, letting each core do more at the same time.
Cores are mostly physical: a core is a separate module on your chip that can do independent computations.
Threads are mostly logical: a thread is, loosely, a way for the computer to organize streams of work. Each core runs at least one thread, and a given thread runs on only one core at a time (multiple cores can’t simultaneously work on the same thread).
A simple analogy: cores are the number of human workers, and each worker can act independently. Threads are workstreams – e.g. phone reception can be a workstream, watching the front door can be a workstream, greeting a customer can be a workstream. Each worker can multitask but can only focus on one task at any given time.
A thread is a list of instructions, like a recipe.
A core is like someone cooking. If you have multiple dishes to make, you might switch between different recipes while waiting on something.
If a core is advertised as having two threads, that usually means it can switch between two threads quickly: for example, while waiting on a memory access that the first thread needs in order to progress, it will work on the second.
Having multiple cores is like having multiple cooks and kitchens all able to work at the same time. But if you only want one cake, there’s only so much time you can save by splitting that task among multiple cooks – it still needs the same amount of time in the oven.
Most games have multiple threads, but single thread performance always matters, because not everything can be run in parallel.
A CPU “core” is a physical part of your computer that executes instructions. A “brain” of sorts. When a computer program wants to, say, add two numbers, one of the cores actually does it.
If your computer has 16 cores, then that means that there are 16 individual physical “brains” that can all do completely independent things. One of them might add two numbers as part of one program while another multiplies two numbers as part of another.
A “thread” in *software* is a “stream of execution” – a sequence of instructions. A particular process may have one or many threads. If you’re running two different programs, that’s at least two threads – one for each program. If you have two cores, these two threads can be executed at the same time, one on each core. Note that it is the job of the operating system (Windows, Linux, macOS, whatever) to “schedule” these threads to run on particular cores.
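A minimal sketch of two software threads inside one process, using Python’s standard `threading` module. The OS schedules each thread onto a core; on a multi-core machine they may run at the same time (though in CPython specifically, the GIL prevents two threads from executing Python bytecode truly in parallel – the scheduling idea is the same regardless).

```python
import threading

def count_up(name: str, results: dict) -> None:
    # Some independent work: each thread computes its own sum.
    results[name] = sum(range(100_000))

results = {}
t1 = threading.Thread(target=count_up, args=("thread-1", results))
t2 = threading.Thread(target=count_up, args=("thread-2", results))
t1.start()  # the OS is now free to run t1 on any core
t2.start()  # ...and t2 on another core, possibly simultaneously
t1.join()
t2.join()
print(results)  # both threads finished: {'thread-1': 4999950000, 'thread-2': 4999950000}
```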
So far so good. But then why do cpus advertise threads if threads are software and cpus are hardware? And why is this number larger than the number of cores?
Well, it turns out that a CPU can “pretend” that it has more cores than it does. It might have 8 physical cores, but advertise 16 “logical cores”. This means that the operating system acts as though there are more cores than there are, and might tell two threads to run on the same core at the same time.
Why do this? If a core is a physical unit that is only capable of executing one instruction at a time, what is the point of telling it to do multiple things at once?
Well, it turns out that CPU cores themselves have several components. A core might have, say, circuitry for addition and circuitry for multiplication that are completely independent.
So if the same core is told to work on two different threads, then there will (almost certainly) be times when one thread wants to add while the other wants to multiply. In this case, even a single core can do two things at once.
To summarize: CPU cores are physical processing units that can operate independently. More of these are good, because you can do more at once. But sometimes even a single core can do multiple things at once, as long as they can be done by different parts of the core. Operating systems, which decide what work goes where, schedule in terms of cores, not parts of cores (as of the last time I read about this; I don’t know if that’s changing). So a core will sometimes pretend to be multiple “logical cores” so the operating system will put multiple threads (streams of instructions) on the same core, taking advantage of the parallel capabilities within a core. This is sometimes advertised as “multiple threads”.
The extent to which games can take advantage of any of this varies. Games have a lot of things that can’t be calculated until you calculate other things, which makes calculating things at the same time and so taking advantage of multiple cores harder. But not impossible, and it is my understanding that most games will take advantage of multiple cores to at least some extent.
As an aside, the whole “for tasks like gaming it’s done through 1 thread only” part isn’t *entirely* true any more.
Every program is going to have a list of things to do in a specific order. A simple example might be:
– Fetch an enemy’s health and position from memory.
– Calculate how much damage the enemy takes due to an explosion based on its position.
– If the damage is less than the enemy’s health, subtract the damage from the enemy’s health and write the new health to memory.
– If the damage is greater than or equal to the enemy’s health, run some more code to handle the enemy dying.
Each step here needs to follow this order. I can’t run the “enemy dies” code until I’ve checked the two numbers. I can’t check the two numbers until I’ve calculated the damage. I can’t calculate the damage until I’ve fetched the position. Since everything has to wait for the line before it, there’s no benefit to spreading it out across multiple threads – I can’t do two lines at the same time anyway!
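The steps above can be sketched as a single function where each line depends on the one before it. Everything here (`Enemy`, `damage_from_explosion`, `on_death`) is an illustrative name, not from any real game engine:

```python
from dataclasses import dataclass

@dataclass
class Enemy:
    health: int
    position: float  # distance from the explosion, simplified to 1D

def damage_from_explosion(position: float) -> int:
    # Closer to the blast (position 0) means more damage.
    return max(0, 100 - int(position * 10))

def on_death(enemy: Enemy) -> None:
    enemy.health = 0
    print("enemy died")

def apply_explosion(enemy: Enemy) -> None:
    dmg = damage_from_explosion(enemy.position)  # needs the fetched position
    if dmg < enemy.health:                       # needs the calculated damage
        enemy.health -= dmg                      # write the new health back
    else:
        on_death(enemy)                          # also needs the damage check

e = Enemy(health=50, position=7.0)
apply_explosion(e)
print(e.health)  # 100 - 70 = 30 damage, so 50 - 30 = 20 health left
```

Because `dmg` feeds the comparison and the comparison decides the branch, no two of these steps can run at the same time.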
But what if I had to do this for, say, 100 enemies? Well, the second enemy doesn’t depend on the first, so I can do it at the same time as the first one. And I can do the third enemy at the same time as the other ones. I can do as many as I want, based on how many threads I have.
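A minimal sketch of that fan-out, using Python’s standard `concurrent.futures` thread pool. Each enemy’s update is independent, so the pool is free to hand them to different threads (in CPython the GIL limits the real speedup for CPU-bound work like this – games written in C++ get the genuine parallelism – but the structure is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def take_damage(health: int) -> int:
    # One enemy's update; it doesn't depend on any other enemy.
    dmg = 30  # fixed damage, for illustration
    return max(0, health - dmg)

healths = [50] * 100  # 100 enemies, 50 health each
with ThreadPoolExecutor(max_workers=8) as pool:
    # pool.map spreads the 100 independent updates across 8 threads.
    new_healths = list(pool.map(take_damage, healths))
print(new_healths[:5])  # [20, 20, 20, 20, 20]
```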
It doesn’t come for free, of course. It takes some active work to split your code up and take advantage of having multiple threads – both from the CPU and from the developers. For a long time, developers didn’t really bother using multiple threads. If you wanted more than 4 threads, you’d need to shell out for something really expensive. It really wasn’t worth it, so devs didn’t spend the time to make their programs multi-threaded.
Here’s the thing: the number of threads in mainstream, gamer CPUs has steadily climbed. It used to be the case that you needed to shell out some serious money for more than 4 threads. Today, however? About 80% of gamers in Steam’s hardware survey have 6 or more **physical** cores, with most of those having 2 threads per core. 16.79% have 4 physical cores, but many of those 4-core players will have 2 threads per core, so 8 threads (for the record, 88.55% have 2 threads per core). Everyone has multiple threads… so devs are using multiple threads! A lot of new, modern games are multi-threaded, because multi-threaded performance has grown a **lot**.
A core is almost like a separate self-contained processor, which can take on a particular task. The more cores there are, the more tasks can be done simultaneously – for example, converting multiple files to another format in parallel. If there are more tasks than cores, then the operating system will pause some of them and switch to others according to their priority, and they will take longer. Jobs are usually dispatched to any available core, so the cores appear to share the load.
Each core can be further broken down into logical units, and sometimes they are not all fully loaded. Therefore each core can often keep track of 2 execution threads, to give work to the idle units. The term for this is hyperthreading. These also appear as separate processors, but they share some resources and are not as fast as separate cores.
Each program usually starts several threads, where one of them does the bulk of the work. Let’s say, one draws the picture, another plays the sound, and another listens to the network. The system still has some housekeeping tasks in the background. So while the main thread/task can only run as fast as one of the cores, the remainder of them can be spread out over the remaining processors.
There are cores, processes, and threads.
Cores = physical “sub-CPUs” within the CPU
Processes = One instance of a running program. If you have two instances of Notepad open, that’s two processes. Each process has its own assigned chunk of memory and can’t directly access memory belonging to other processes.
Threads = multiple things a process is doing. Having multiple threads can sometimes allow a program to get work done faster because the CPU can divide the threads between cores (e.g. “ok, core 1, you take thread 1, core 2, you take thread 2”).
Programs can be “single threaded”, “multi-threaded”, or “multi-process”.
Single threaded program = the program is not written to split its work up. This is the default: unless you deliberately write a program to use multiple threads or processes, it’s a single threaded program.
Multi-threaded = the program is able to divide at least some of its work into tasks that can be done separately at the same time. This takes extra work because the program must be carefully written so different threads don’t interfere with each other.
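A minimal sketch of what “carefully written so threads don’t interfere” means, using Python’s standard `threading` module: two threads update a shared counter, and a lock makes each read-modify-write step atomic. Without the lock, `counter += 1` is not atomic, so two threads could read the same old value and one update would be lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # without this, interleaved updates could be lost
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 — guaranteed only because of the lock
```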
Multi-process / multiprocessing = the program runs additional copies of itself and they cooperate. More common in scientific computing and big data / cloud computing. Has the disadvantage that processes can’t pass information between them as easily, but the advantages that the program may be easier to write (if each process is single-threaded), one process crashing is less likely to bring down the whole program, and (this is a big one) you can have processes running on multiple computers.
For something like a game, it’s likely the program is using multiple threads, but because a lot of the program’s work is dependent on the previous step, it can only split a limited amount of work off to additional threads.