In simplified terms, a CPU core can only do one thing at a time. A thread is essentially one such line of execution: a single task the CPU can work on. A CPU can have multiple threads going and swap quickly between them, working on one for a bit, then working on another for a bit.
Imagine it like someone reading a book. Most people can only read one book at a time, but if they wanted to, they could read many books “at the same time” by reading a bit of one book, putting it down, reading a bit of another book, going back to the first book, and so on and so forth.
In computers, as a general rule, every program running is its own thread, and sometimes several. Your CPU is just really quickly swapping between them one after the other to make it look like all the programs are running at the same time with no breaks to you, the human.
A CPU core can only perform one instruction at a time. It takes time to get those instructions from your hard drive to RAM, and then into the CPU’s local caches. Some instructions also take longer to complete, or sit waiting for some other peripheral to return data. CPU cores optimize around this by supporting multiple threads: the core keeps another set of instructions queued up and can swap over to it when necessary or when it makes sense. It’s a way to let two separate processes/sets of instructions share a core without the overhead of duplicating the whole large, complicated core itself; you just have to duplicate the storage mechanism for each instruction stream and allow safe swapping between them.
Note: Some oversimplification of course for ELI5, but the basis is there.
A computer program is a bit like a combination of a cookbook and a Choose Your Own Adventure book. It has a sequence of simple instructions (1. crack an egg, 2. put it into a bowl), and also can jump around (if the person wants sunny side up eggs, go to page 5).
A CPU _core_ is like a cook. The cook can only do one instruction at a time. A thread is like a little bookmark that tells you where you are in the book. You can use this to allow a single cook to work on multiple dishes, jumping between bookmarks in several recipes (_multitasking_) or to allow multiple cooks to work on different parts of a complex recipe (_multiprocessing_).
CPUs like in your computer have several cores, but CPUs in simpler devices might only have one core.
A thread is basically a separate program. When a CPU runs a program, it goes through it line by line, in the order listed in the program code.
Many OSs are designed to allow multiple programs to run on one CPU by periodically interrupting the CPU and reconfiguring it to work on a different program. This is “multi-tasking”. If a computer has more than one CPU available, then the OS can configure individual CPUs to run separate programs simultaneously, instead of periodically stepping in and telling the CPU to do something different.
So what happens when a CPU runs a program? It has a memory control circuit which retrieves a line of program code. It then activates a decoder circuit which works out what the code is meant to do – for example, if it is a retrieve-from-memory instruction, it engages the memory control circuit to retrieve the information; if it is a multiply instruction, the decoder engages the multiply circuit. Once the instruction is done, the cycle repeats and the next line of code is retrieved, decoded and executed.
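If it helps to see that cycle written out, here is a toy Python sketch of a fetch/decode/execute loop (purely illustrative – real hardware is not built this way, and the instruction names are made up): memory holds a little program, and the loop fetches one line at a time, decodes it, and engages the matching “circuit”.

```python
# Toy model of the fetch/decode/execute cycle described above.
program = [
    ("LOAD", "a", 6),      # put the value 6 into register a
    ("LOAD", "b", 7),      # put the value 7 into register b
    ("MUL",  "a", "b"),    # a = a * b  (engage the "multiply circuit")
    ("ADD",  "a", 1),      # a = a + 1  (engage the "add circuit")
    ("PRINT", "a"),        # show the result
]

registers = {}
pc = 0                     # program counter: which line to fetch next

while pc < len(program):
    instruction = program[pc]          # fetch the next line of code
    op = instruction[0]                # decode: what is it meant to do?
    if op == "LOAD":                   # execute: pick the right "circuit"
        _, reg, value = instruction
        registers[reg] = value
    elif op == "MUL":
        _, dst, src = instruction
        registers[dst] *= registers[src]
    elif op == "ADD":
        _, dst, value = instruction
        registers[dst] += value
    elif op == "PRINT":
        print(registers[instruction[1]])   # prints 43
    pc += 1                            # move on to the next line
```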
You will notice that because a CPU has a ton of different components (multiply circuits, add circuits, compare circuits, memory control circuits), most of them sit there doing nothing most of the time. The circuits are only engaged when needed.
A trick that CPU makers have used is to put two decoder circuits in one CPU. Each decoder runs one program, so a two-decoder CPU acts a bit like having 2 separate CPUs. So let’s say decoder 1 is running program 1, and the next line is a multiply. Decoder 1 engages the multiply circuit. Simultaneously, decoder 2 gets the next line of program 2; this is an addition, the addition circuit is free, so it engages it. So both the multiplier and adder circuits are simultaneously engaged.
This allows two programs to share a CPU and make more efficient use of the circuits, but instead of having 2 separate CPUs costing money, taking up space and using power, you just have 1 CPU with an extra decoder circuit. It’s not as good as 2 separate CPUs (if 2 additions come in, one will have to wait) but it’s a lot cheaper.
Modern PCs often have multiple CPUs – 4 to 8 are common in modern computers and phones. In PCs these CPUs can also each run two threads as described above, so a 4-CPU system could potentially run 8 threads (programs) simultaneously.
A CPU executes the instructions that comprise your software. A core is the section of the CPU that contains the circuitry that performs these mathematical and logical operations (add, multiply, compare etc). CPUs do other things that aren’t part of the core, for example all the memory controller circuitry that allows it to interface with system RAM, or the I/O controller that allows the CPU to talk to other hardware like storage drives or graphics cards.
CPUs don’t have threads, applications do. CPUs execute them. A thread is a discrete unit of work that can be performed in parallel with other threads. Every application has at least one thread. A very basic application might have two threads: one for the program logic, one for the GUI. This allows the GUI to be responsive even while the application logic is busy doing its thing, crunching numbers or whatever. Some applications might have dozens of threads, if they are performing the kinds of tasks that can be split up that way.
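As a rough illustration of that logic-thread/GUI-thread split, here is a minimal Python sketch using the standard threading module (not taken from any particular application): a worker thread crunches numbers while the main thread stays free to respond, here just by printing.

```python
import threading
import time

def crunch_numbers():
    # The "application logic" thread: a long, boring calculation.
    total = 0
    for i in range(10_000_000):
        total += i
    print("worker: finished crunching, total =", total)

worker = threading.Thread(target=crunch_numbers)
worker.start()

# The main thread plays the role of the GUI: it keeps responding
# even while the worker is busy.
while worker.is_alive():
    print("main: still responsive...")
    time.sleep(0.2)

worker.join()
```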
Operating Systems do this thing with threads called scheduling. Basically, a CPU can only do so many operations at one time. The OS is responsible for giving each active thread some execution time with the CPU (on the order of milliseconds) before switching to the next thread. This switching happens so fast that even if the CPU isn’t physically capable of running all the threads at once, it gives the appearance of doing so – multitasking.
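To see that scheduling at work, here is another small illustrative Python sketch that deliberately starts more threads than the machine has cores; the OS still lets every one of them make progress by rapidly switching between them, which shows up as interleaved output.

```python
import os
import threading
import time

def worker(name):
    for step in range(3):
        print(f"{name}: step {step}")
        time.sleep(0.1)   # give the scheduler an obvious chance to switch

cores = os.cpu_count() or 1
# Deliberately create twice as many threads as there are cores.
threads = [threading.Thread(target=worker, args=(f"thread-{i}",))
           for i in range(cores * 2)]

for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(threads)} threads all finished on a machine with {cores} core(s)")
```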
A CPU with one core can more or less only do one operation at a time*. A CPU with multiple cores, or a system with multiple CPUs, can potentially do multiple operations in parallel. So the more cores a system has available, the more threads it can execute simultaneously. But applications need to be designed with multi-threading in mind to make use of that parallelisation.
*For example, Hyper-threading is a feature of Intel CPUs that allows a single core to execute multiple operations at once, as long as those operations use different parts of the CPU core.
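To make the “applications need to be designed with multi-threading in mind” point concrete, here is a rough Python sketch that splits a big sum into independent chunks and hands them to a pool of workers. It uses separate processes rather than threads only because standard CPython threads can’t run pure-Python code on two cores at once; the design idea of dividing the work into independent pieces is the same.

```python
from concurrent.futures import ProcessPoolExecutor

def sum_chunk(bounds):
    # Each worker sums one independent slice of the range.
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    # Split 0..10,000,000 into four independent chunks.
    chunks = [(i, i + 2_500_000) for i in range(0, 10_000_000, 2_500_000)]

    # By default the pool creates roughly one worker per available core.
    with ProcessPoolExecutor() as pool:
        partial_sums = list(pool.map(sum_chunk, chunks))

    print("total:", sum(partial_sums))
```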