Some comments claim that using multiple threads on a single core improves performance. That is not true; if anything, the opposite is.
A processor core is something that performs work, whereas a thread is something that keeps track of the state of the work.
A program has at least one thread. In order to be fair, the core does a bit of work for one thread, takes note of the state of that work, then switches to another thread. This switching (a context switch) takes a little bit of time.
Your program could choose to use two threads, but as long as both threads are run on the same core, it’s just going to add more switching overhead. The core can only execute one thread at a time.
Where you could gain performance is when you have multiple cores, and you create one thread for every core. Then every core can do some work for you at the same time.
This only works if you are able to split your work into mostly equal parts that can be executed independently. This is harder than it sounds to get right.
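As a sketch of what "mostly equal, independent parts" looks like in practice, here is a toy example that splits a big sum into chunks and gives one worker to each core. (The names `partial_sum` and `parallel_sum` are made up for this illustration. Python is used, and since Python threads can't run CPU work on separate cores at the same time, the sketch uses processes instead; the splitting idea is the same either way.)

```python
import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Each worker handles its own chunk, independent of the others.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=None):
    # One worker per core, as described above.
    workers = workers or os.cpu_count() or 1
    step = n // workers
    # Mostly equal chunks; the last one absorbs the remainder.
    chunks = [(i * step, (i + 1) * step) for i in range(workers - 1)]
    chunks.append(((workers - 1) * step, n))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Combine the independent partial results at the end.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same result as sum(range(1_000_000))
```

The hard part the paragraph above hints at is exactly the chunking step: if the chunks aren't roughly equal, one worker finishes late and the others sit idle waiting for it.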
There are many other use cases for threads also, but no need to go into them here.
As an analogy, say you need to solve two sudokus. Solving one after the other is probably faster than trying to solve one number on one, then one number on the other, and so on (two threads, one core). However, if you could offload one of them to a friend (two threads, two cores), you'd be done about twice as fast.