CPU cores and threads are two different but related concepts, not two types of processors.
Cores are physical processing units that can run one task at a time, while threads are virtual sequences of instructions given to a CPU.
Cores increase the amount of work that can be accomplished in a given amount of time, while threads help organize and optimize that workload.
Multithreading allows for better utilization of available system resources by dividing tasks into separate threads and running them in parallel. Hyperthreading further increases performance by allowing processors to execute two threads concurrently.
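As a rough sketch of dividing a task into threads, here is a hypothetical Python example (the function name `partial_sum` and the fifty/fifty split are made up for illustration; note that for pure number-crunching, CPython threads take turns on one interpreter rather than truly running in parallel):

```python
import threading

# Hypothetical task: sum one half of a list in each thread, storing
# partial results so the main thread can combine them afterwards.
def partial_sum(numbers, results, index):
    results[index] = sum(numbers)

data = list(range(1, 101))
results = [0, 0]

# Divide the work into two threads and run them at the same time.
t1 = threading.Thread(target=partial_sum, args=(data[:50], results, 0))
t2 = threading.Thread(target=partial_sum, args=(data[50:], results, 1))
t1.start()
t2.start()
t1.join()  # wait for both threads to finish
t2.join()

total = results[0] + results[1]
print(total)  # 5050, the sum of 1..100
```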
When multiple cores are present on the same chip, communication with cache and RAM takes less time, data access is faster, and the printed circuit board needs less space than it would with several separate processors.
A core is a kitchen. It has a sink, a cutting board, a refrigerator, a stove with one or two burners depending on the model of CPU, an oven, and all the other things that a kitchen has. But only 1, maybe 2, of each thing. In a real CPU these would be access to RAM/cache, an arithmetic unit, a floating point unit, a multiplication/division unit, vector math unit, and so on.
A thread is a cook/chef. The person actually doing the work with the tools, following the instructions in the recipe to make what’s ordered. In the CPU it executes instructions and has a limited memory of its own, used by software for very short term storage.
More than one cook in the kitchen does speed up cooking, but as I said there’s only 1 of each type of machine or resource in the kitchen. They must share, so the speed of cooking doesn’t literally double, but it does improve drastically. However, it also means each cook slows down slightly compared to working alone, without having to worry about sharing their tools.
And in the world of computer security, people have discovered this can be abused for learning information. If you can order one cook to make you something that requires constant access to the frying pan, and you notice the job took about twice as long as normal, you can infer the *other* cook needed the frying pan a lot as well, which may be information you can use.
Each core must have at least 1 thread. For the same number of cores, more threads is generally considered better, since more work can get done. But if you already have more than enough cores for what you need, turning off hyperthreading and allocating just 1 thread per core may speed things up a bit.
This here is a 4 core CPU [https://www.techspot.com/articles-info/2363/images/2021-12-19-image-6-j_1100.webp](https://www.techspot.com/articles-info/2363/images/2021-12-19-image-6-j_1100.webp)
Core is the physical bit of hardware that does the computing.
Thread is a logical execution of a program: a series of machine commands that need to be executed, one after another, to run the program.
A simple CPU only has one core, which can only run one thread at a time.
As CPUs developed, it became apparent that a single thread spends a lot of its time waiting on memory operations, wasting CPU time. To better use the CPU’s resources, multithreading was developed, letting one core run more than one thread at a time. The core still does only one operation at a time, but it flips between threads: one operation from one thread, then one from the second thread, and so on. Because the different threads usually need to wait on memory at different times, both memory resources and CPU resources end up busier on average.
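That flipping is easiest to see when threads spend their time waiting. In this sketch (a toy example; `time.sleep` stands in for a memory or disk wait), four waits overlap instead of adding up:

```python
import threading
import time

def wait_for_data(delay):
    # time.sleep stands in for waiting on memory or disk; while one
    # thread waits, the CPU is free to run another thread.
    time.sleep(delay)

start = time.perf_counter()
threads = [threading.Thread(target=wait_for_data, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four 0.2-second waits overlap, so this finishes in roughly
# 0.2 seconds rather than 0.8.
print(f"elapsed: {elapsed:.2f} s")
```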
The development of multicore processors was simply putting more CPUs inside your CPU, essentially copy-pasting the processing hardware. It’s simpler to have two processors than to make a single processor twice as fast.
Some comments mention improved performance from using multiple threads on a single core, which is not true; quite the opposite.
A processor core is something that performs work, whereas a thread is something that keeps track of the state of the work.
A program has at least one thread. In order to be fair, the core does a bit of work for one thread, then takes note of the state of that work, then switches to another thread. This switching takes a little bit of time.
Your program could choose to use two threads, but as long as both threads are run on the same core, it’s just going to add more switching overhead. The core can only execute one thread at a time.
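For what it’s worth, CPython exposes how often it considers switching between threads on a core; the knob exists precisely because each switch costs a little time (this shows the interpreter’s setting, not the hardware’s):

```python
import sys

# How often CPython offers to switch between running threads, in seconds.
# Each switch means saving one thread's state and loading another's.
interval = sys.getswitchinterval()
print(interval)  # 0.005 (5 ms) by default
```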
Where you could gain performance is when you have multiple cores, and you create one thread for every core. Then every core can do some work for you at the same time.
This only works if you are able to split your work into mostly equal parts that can be executed independently. This is harder than it sounds to get right.
There are many other use cases for threads also, but no need to go into them here.
As an analogy, say you need to solve two sudokus. Solving one after the other is probably faster than trying to solve one number on one, then one number on the other etc (two threads one core). However, if you could offload one of them to a friend (two threads two cores), you’d be done about twice as fast.
To add to what others have said, there’s physical cores and logical (virtual) cores.
Physical cores are what is meant when you hear “i7, 6 core processor”. Note that the i-number (i3/i5/i7/i9) is a brand tier, not a core count; the actual number of cores varies by product line and generation.
Usually modern processors will have double the logical cores, which means they can host twice as many threads as they have physical cores, though take this with a huge grain of salt because it’s heavily dependent on the brand, line, and generation of the processor.
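For reference, you can ask the operating system how many logical CPUs it sees; counting *physical* cores needs platform-specific tools and isn’t shown here:

```python
import os

# os.cpu_count() reports logical CPUs: with hyperthreading enabled this
# is typically double the number of physical cores.
logical = os.cpu_count()
print(f"Logical CPUs visible to the OS: {logical}")
```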
Cores are calculators, individual workers who take a set of instructions to do computations in a certain order and send the result to different slots (called registers)
Threads are those lists of instructions. Each core can only read one list at a time, but it can hold many inactive threads and even pass its lists off to other cores.
Those threads are like jobs, dispatched by a common manager (the scheduler) who assigns work and decides how much time is spent on each job. A job doesn’t have to be finished in one go, and usually it isn’t; jobs pause to share time with other applications and to wait for stuff like IO (disk reads, or pinging the internet in the background).
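That manager can be sketched as a toy round-robin scheduler, with each job modeled as a Python generator that pauses after every step (all names here are invented for illustration):

```python
from collections import deque

def job(name, steps):
    # A job is a list of instructions; yielding after each one lets
    # the scheduler pause it and give another job a turn.
    for i in range(steps):
        yield f"{name} step {i}"

def run(jobs):
    trace = []
    queue = deque(jobs)
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # run one step of this job
            queue.append(current)        # not finished: back of the line
        except StopIteration:
            pass                         # job finished, drop it
    return trace

trace = run([job("A", 2), job("B", 2)])
print(trace)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```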
Imagine you have a bowl of candy in front of you. You can use one hand to grab a piece of candy and put it in your mouth. You eat the candy and then grab a new piece and put it in your mouth. This way your mouth has to wait for your hand to fetch the next piece of candy, inefficient! It would be faster if you used both of your hands to continuously grab one piece of candy and bring it to your mouth. That way your mouth never has to wait for the next piece.
In this analogy mouth is the core and then hands are the threads.