What Exactly are Cores and Threads in CPU?


As an Electrical Engineering major, I have taken several introductory computer engineering courses, and I have studied ARMv8 for a long time now. I know about registers, instruction fetch, arithmetic instructions, branch instructions, pipelining, and data forwarding, though I realize some of these details are ARM-specific. ARMv8 is the only architecture that I know.

However, I am curious: what exactly are cores and threads? Specifically for cores, how are instructions distributed to each core? And if a dependency exists between one core's instructions and another's, is there such a thing as data forwarding from one core to another?

Lastly, kinda unrelated, but what is a graphics card, and how does GPU architecture differ from the ARMv8 architecture that I have studied?

If someone could please answer these three questions, I would greatly appreciate it.


5 Answers

Anonymous

A CPU with multiple cores is, effectively, multiple CPUs all living on the same chip that can all function more or less independently from one another. If you have a quad-core PC what you really have, in essence, is a PC with 4 separate processors that can multitask 4 different things at the same time. There’s some nuance involved with cores possibly being able to share memory space but that’s the high-level overview of it.

A thread is essentially “a task that needs to be done”. When you run a program on a typical computer, that program will have at least one thread created that represents everything the program needs the CPU to do. In general, one CPU core can be working on one thread at any given time. So a quad-core computer can, in essence, be thinking about 4 separate threads at any given time.
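As a quick aside (not from the answer itself): if you want to see how many threads your own machine can run at once, Python's standard library can report it. A minimal sketch — note this counts *logical* CPUs, which may be double the physical core count if hyper-threading is enabled:

```python
import os

# Number of logical CPUs the operating system exposes. With
# simultaneous multithreading (hyper-threading) this can be
# twice the number of physical cores. May return None if the
# count cannot be determined.
logical_cpus = os.cpu_count()
print(f"This machine can work on {logical_cpus} threads at once")
```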

Some programs can even be *multithreaded*, spawning several independent threads that each handle a different part of the program’s logic that can be run independently. A common use case for this would be creating one thread to run the graphical UI logic, and another one to handle reading and writing data. Splitting the two up like this means that even if the reading and writing part hits a snag, the UI will never hang, because the threads are separate. Or, if you’re doing something really calculation intensive like rendering video, the work can be divided into as many threads as you have cores and your CPU can divide and conquer the workload.
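To make the divide-and-conquer idea concrete, here is a minimal Python sketch that splits a large summation across four worker threads. (One caveat I should flag: in CPython the global interpreter lock means pure-Python arithmetic won't actually run in parallel across cores — the *pattern* of splitting work into threads is the point here, not a real speedup.)

```python
import threading

def partial_sum(chunk, results, idx):
    # Each worker thread independently sums its own slice of the data
    # and writes its result into a dedicated slot.
    results[idx] = sum(chunk)

nums = list(range(1_000_000))
n_threads = 4
chunk_size = len(nums) // n_threads

results = [0] * n_threads
threads = []
for i in range(n_threads):
    t = threading.Thread(
        target=partial_sum,
        args=(nums[i * chunk_size:(i + 1) * chunk_size], results, i),
    )
    threads.append(t)
    t.start()

for t in threads:
    t.join()  # wait for every worker to finish

total = sum(results)  # combine the partial results
```

For truly CPU-bound work in Python you would reach for `multiprocessing` instead, which sidesteps the lock by using separate processes — one per core, much like the quad-core picture above.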

GPUs are special kinds of processors that are designed for a very specific kind of job. A typical CPU is very flexible and can do just about anything you want it to do, but each core works through its instructions more or less one at a time. A GPU, by contrast, is only able to do fairly simple calculations, but it can process hundreds of thousands of those calculations all in one go.
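A rough illustration of "one at a time" versus "all in one go": the NumPy sketch below stands in for the batch style. NumPy runs on the CPU, not a GPU, but the programming model — one operation expressed over a whole array rather than a loop over elements — is the same idea:

```python
import numpy as np

# CPU-style: loop over the elements, one calculation at a time.
serial = [i * 0.5 + 1.0 for i in range(8)]

# GPU-style: express a single operation over the whole batch;
# the hardware applies it to every element together.
batch = (np.arange(8) * 0.5 + 1.0).tolist()

assert serial == batch  # same answers, very different execution model
```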

The main task of most GPUs is rendering display graphics (hence the name, Graphics Processing Unit). Graphics calculations aren’t hard, but you generally need to be able to calculate the color and lighting values for every single pixel on a screen. On a massive 8K screen, that’s a LOT of pixels to consider. And the refresh rate might be 144 times *every second*. A CPU is never going to be fast enough to do all of that calculation. But a strong GPU can crank out those really simple calculations in big batches.
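The back-of-the-envelope numbers for that 8K example (the resolution figures are the standard 8K UHD spec, not from the answer above):

```python
width, height = 7680, 4320            # 8K UHD resolution
pixels_per_frame = width * height     # 33,177,600 pixels per frame
refresh_hz = 144                      # frames per second
pixels_per_second = pixels_per_frame * refresh_hz
print(f"{pixels_per_second:,} pixel calculations per second")  # ~4.8 billion
```

Nearly five billion per-pixel calculations every second — and each one may involve several arithmetic operations — which is exactly the kind of massively repetitive, independent workload a GPU is built for.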

If you’re still confused about GPUs, Adam and Jamie of the MythBusters put together [a pretty elegant demonstration of the difference between a CPU and a GPU](https://www.youtube.com/watch?v=-P28LKWTzrI) that I think will make this click for you.
