My understanding: a core is the brain of the CPU and has a certain number of threads. So a quad core could have, say, 8 threads or 16 threads. If it has sixteen threads, that means the 16-thread quad core has 4 threads per core.
Think of the core like a brain. Everyone has a certain limited number of threads for multitasking. Having more cores is better for multitasking on a CPU, for video encoding, and for having many apps open.
Gaming benefits from multi-core, multi-threaded CPUs because of AI and physics; for graphics, however, the GPU matters more, and having a CPU that isn't bottlenecking the graphics card helps. Some games are CPU-bound for certain tasks: RTS games, for example, tend to lean on the CPU more than the GPU. It all depends on how the game was coded and on the game engine, but usually number crunching is done efficiently on CPUs while GPUs handle the 3D work.
More cores are better because each core counts as a processor, while threads are like lanes.
Tl;dr – Cores are processors, threads are lanes. More cores are better.
A thread is a sequence of instructions that needs to be processed.
A core is a brain that can process a task.
The more cores a processor has, the more tasks it can perform simultaneously.
Applications can be single-threaded, meaning all of their instructions have to be processed by one core, or multi-threaded, meaning their instructions can be broken up and run by multiple cores simultaneously.
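A minimal sketch of that difference in Python (the job here, counting primes over made-up ranges, is purely illustrative). In CPython the GIL keeps pure-Python threads from doing CPU work truly in parallel, so this sketch uses a process pool to spread the chunks across cores, but the idea of breaking one job into pieces that run simultaneously is the same:

```python
# Hypothetical CPU-bound job: count primes in a range (illustrative only).
from concurrent.futures import ProcessPoolExecutor

def count_primes(start, stop):
    """Count primes in [start, stop) the slow, obvious way."""
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(start, stop) if is_prime(n))

if __name__ == "__main__":
    # "Single-threaded": one core works through the whole range.
    total_single = count_primes(0, 200_000)

    # "Multi-threaded" idea: split the range into chunks and let several
    # cores work on them at the same time, then combine the results.
    chunks = [(0, 50_000), (50_000, 100_000), (100_000, 150_000), (150_000, 200_000)]
    starts, stops = zip(*chunks)
    with ProcessPoolExecutor() as pool:
        total_parallel = sum(pool.map(count_primes, starts, stops))

    assert total_single == total_parallel
    print(total_parallel, "primes found either way")
```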
Within modern CPUs a “core” is a physical processor that actually exists on the CPU die. A thread is one code execution process on a CPU. In the past it was 1 core, 1 thread.
Modern CPUs support a technology called “simultaneous multi-threading” which means the CPU can run two different processes at the same time on the same core. So, 1 core, 2 threads.
A core is a physical thing–it’s essentially a CPU, within the larger computer component that we call a “CPU.” Modern CPUs contain multiple processors… think, like a bunch of eggs in a carton.
A “thread” is a logical thing. It’s essentially a separate stream that the computer is processing. An analogy here might be a kitchen (the CPU), which is making multiple dishes (threads) independently of each other on shared hardware (stoves, counters, etc).
A CPU is a Central Processing Unit, a piece of hardware that, among other things, performs calculations. It has a number of cores to execute instructions.
A thread is a programmatic task which has the responsibility of executing specific instructions. Threads are used for parallelism/simultaneous operations.
Threads run on the CPU. You can have several hundred threads running simultaneously on your computer, and they fight amongst each other to get to use the CPU's cores.
When a thread is spawned it is queued to run on the CPU cores.
ELI5 example: if you have an app that has to calculate something or grab external data from, e.g., a server, that work runs on threads. If the app used only one thread, the graphical user interface would freeze completely while it was grabbing data from the external server. To avoid that, a programmer will usually spawn a separate thread to grab the data while the user can still navigate the app, which uses another thread, simultaneously.
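A rough sketch of that pattern in Python, with a made-up fetch_data() standing in for the slow network call (the sleep is just a stand-in for latency):

```python
# Minimal sketch of the "don't freeze the UI" pattern.
import threading
import time

result = {}

def fetch_data():
    """Pretend to grab data from a remote server (sleep stands in for latency)."""
    time.sleep(2)
    result["data"] = "payload from server"

# Spawn a separate thread for the slow work...
worker = threading.Thread(target=fetch_data)
worker.start()

# ...while the "UI" thread keeps responding to the user.
while worker.is_alive():
    print("UI still responsive, waiting for data...")
    time.sleep(0.5)

worker.join()
print("Got:", result["data"])
```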
Cores are the workers doing things on the CPU. Each worker can do one thing at a time (for example add two numbers).
A thread is like a task. A worker can work on that task, which consists of multiple operations done one after the other. A thread can also split into more threads (follow-up tasks, sub-tasks). Such a thread can be worked on, but doesn't always have to be. On a single core, you would classically have one thread running “at a time”. Scheduling etc. still happens, and threads can share resources between cores depending on the architecture.
But one worker can work on one task, then another, then go back to the first, and keep going back and forth between them. At first glance this seems just as fast as running one after the other. But if everything around it is set up for it, it can improve the speed of both (say each would have a wait time for data to arrive). So instead of waiting for his colleague to bring him parts for his task, the worker works on another one for a moment. So while a 4-core CPU doing 8 threads isn't necessarily faster, it can be. Intel calls their version of this Hyper-Threading; AMD's equivalent is simply called SMT (simultaneous multithreading).
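Hyperthreading happens in hardware, but the same “do something else while waiting” idea can be sketched at the software level. In this toy Python example, two tasks that mostly wait take about 2 seconds back-to-back but about 1 second when interleaved on threads (the sleep stands in for waiting on data):

```python
# Software-level analogy for "work on something else while waiting".
import threading
import time

def task(name):
    time.sleep(1)          # waiting for "parts" (data) to arrive
    print(name, "done")

start = time.perf_counter()
task("first")
task("second")             # one after the other: roughly 2 seconds
print("sequential:", round(time.perf_counter() - start, 2), "s")

start = time.perf_counter()
threads = [threading.Thread(target=task, args=(n,)) for n in ("first", "second")]
for t in threads:
    t.start()
for t in threads:
    t.join()               # interleaved: roughly 1 second
print("interleaved:", round(time.perf_counter() - start, 2), "s")
```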
Cores are individual processing units inside the CPU.
A thread is a queue of instructions.
Individual cores have many different parts: units that read and decode instructions, units that prepare data, an arithmetic unit, a floating-point unit, and so on.
Originally, cores were all single-threaded. That meant only a single instruction could be finished at any given time.
This turned out to be very inefficient, because no instruction ever needed to use all those parts at once, so you had perfectly good parts just doing nothing, waiting for work.
So the idea became: “if we just make sure the two threads don't overlap in which parts they need, there is no need to wait.”
That's why modern CPUs usually have 2 threads per core.
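If you want to see this on your own machine, a quick check in Python (assuming the third-party psutil package is installed) compares physical cores with hardware threads:

```python
# Quick way to see "2 threads per core" on your own machine.
import os
import psutil  # third-party: pip install psutil

logical = os.cpu_count()                    # hardware threads (logical CPUs)
physical = psutil.cpu_count(logical=False)  # physical cores (may be None on some platforms)

print("physical cores:  ", physical)
print("hardware threads:", logical)
# On a typical SMT-capable chip, hardware threads = 2 x physical cores.
```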
There are two types of threads: hardware threads and software threads. A CPU core which is hyperthreaded can run multiple independent instruction streams per CPU tick. So instead of one tick meaning one instruction, one tick means *n* instructions, where *n* is the number of threads per core.
A software thread simulates this phenomenon but can't replicate the CPU tick. It can split a process into multiple simultaneous jobs, but the hardware can still only do *n* × *c* instructions per tick (*n* threads per core times *c* cores). Generally, though, the OS will keep one process on one hardware thread even if it's multithreaded, unless it's very resource-intensive.
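A small sketch of the difference: software threads are cheap to create, and you can have far more of them than you have hardware threads; the OS just time-slices them onto the cores that exist. (The 100-thread count and the sleep are arbitrary.)

```python
# Sketch: software threads vs hardware threads.
import os
import threading
import time

def idle_worker():
    time.sleep(1)   # each software thread mostly just waits here

threads = [threading.Thread(target=idle_worker) for _ in range(100)]
for t in threads:
    t.start()

print("hardware threads available:", os.cpu_count())
print("software threads alive:    ", threading.active_count())

for t in threads:
    t.join()
```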
A core is like a brain in the CPU chip.
A thread is like a train of thought.
A CPU with lots of cores is like having multiple people (and their brains) working on something.
Multi-threading is like multitasking, switching between two trains of thought depending on context.
(Computers are of course different to human brains, so the pros and cons of having more cores/brains or changing how many threads/thoughts you work on at once is different.)
Imagine you’re a parent doing laundry. There is one application (a load of laundry) which creates different threads of operations (sports, colored, white, … laundry).
Let’s say there are a few steps in the laundry process: preparing the laundry, going in the washing machine, the dryer, ironing and folding.
One CPU core is one set of resources: someone to prepare, a washing machine, a dryer, an iron and someone to fold. **The most straightforward way to do more laundry is to just have more resources**: more washing machines, more dryers, more irons, and more personnel to prepare and fold everything. This means having multiple cores in your CPU.
But **depending on the workload, not all resources need to be used.** Some laundry does not need to go in the dryer, does not need ironing, or is just way easier to fold. With only a second washing machine, we can already improve our efficiency a lot. We can then put two loads in the washing machines: one that does need drying and ironing, and one that does not. This way we can achieve almost double the performance with only a minor increase in resources.
**Summarized:**
A 2 core CPU will have twice the resources of a single core CPU, and can literally perform twice as fast. (Side note: the loads need to be independent of course)
A 2-threaded CPU will have only a few added resources, but with careful scheduling it can approach almost the same performance as if it had fully duplicated resources.