The Atari and the NES are built around the same CPU core, the MOS 6502.
The NES just has extra parts sitting around that CPU that let it draw faster and respond faster.
Your day-to-day computer has those extra abilities too.
On Windows, CTRL+SHIFT+ESC brings up Task Manager. There you can watch your stats as you ask the computer to do different tasks.
Consider how a parent (the CPU) is in charge of the household (the computer system).
A parent can tell the child to go grab a book off the bookshelf and put it on the table (grabbing a file / data from the internet and putting it on the local drive).
The parent has only given one command, but the amount of information in that one command is a lot. The parent isn't reading every single line of the book; it's packaging up the whole book and saying what to do with it.
You have friends over at your house and you want to put out 100 cookies in 5 trips to your kitchen. You can do that as long as you carry 20 cookies (or more) per trip.
If a chip (processor, direct memory access engine, ASIC, potato) can do 5 billion things in a second, and one of those things it can do is move 20 bits of data into memory (likely more; really high-speed memory like graphics card RAM moves hundreds of bits in one clock cycle), then it can support transfers of 100 billion bits a second.
A processor does not transfer one bit every clock cycle. In general it performs one step of a task every clock cycle. That step could be, "transfer these 100,000 bits from the internet adapter chip into the memory chip", as long as the internet adapter chip, the memory chip, the CPU, and the motherboard have the bandwidth needed. In many cases, one of those components does not have the bandwidth needed, in which case the 100Gbit internet connection is wasted.
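To put rough numbers on that, here is a minimal sketch; every figure in it is made up purely for illustration, not a measurement of any real hardware:

```python
# Rough back-of-the-envelope throughput math (illustrative numbers only).

CLOCK_HZ = 5_000_000_000        # 5 billion operations per second
BITS_PER_TRANSFER = 20          # bits moved per operation (hypothetical)

peak_bits_per_second = CLOCK_HZ * BITS_PER_TRANSFER
print(f"Peak transfer rate: {peak_bits_per_second / 1e9:.0f} Gbit/s")  # 100 Gbit/s

# End-to-end speed is capped by the slowest link in the chain.
# These component figures are invented for the example.
component_bandwidth_gbps = {
    "internet adapter": 100,
    "motherboard bus": 64,
    "memory chip": 400,
}
bottleneck = min(component_bandwidth_gbps, key=component_bandwidth_gbps.get)
print(f"Effective rate is capped by the {bottleneck}: "
      f"{component_bandwidth_gbps[bottleneck]} Gbit/s")
```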
What you should be asking about is how the internet adapter chip works. After all, that's what's plugged into your internet cable or WiFi antenna and converts the 1s and 0s from the internet provider into a format that the CPU can work with. And in that case, the answer is much more complicated, but it has to do with how the signal is encoded (modulated) on the internet cable (WiFi can't get to 100Gbit), which allows it to send more than one bit of data for each clock cycle. The actual clock speed is lower than 5GHz.
It is easy to create a very high-speed clock signal in one small area on a chip, e.g. at 100 GHz, even faster than you ask about. However, making that clock signal work reliably over a larger area of the chip, through more logic circuits, is very hard. As a result, a chip will often have high-speed areas or islands and slower-speed areas.
Another factor is 1 data bit at a time [in serial form] versus 8, 16, 32, 64, or 128 bits in parallel [all at the same time].
Likewise, at high speed it is easy to line up 1 signal correctly, but really hard to get all 8, 16, 32, 64, or 128 bits to line up correctly at the same time.
This is why you see/hear of interfaces like PCIe x2, x4, x8, etc.: rather than 1 bit at a time, they use 2 bits, or 4, etc. The clock rate is the same, but they transfer 2 bits, or 4 bits, etc. per clock signal.
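A quick sketch of that lane idea; the clock rate and per-lane width below are placeholders, not the real figures for any particular PCIe generation:

```python
# How adding parallel lanes multiplies throughput at a fixed clock rate.
# The per-lane rate and clock are placeholders, not real PCIe spec numbers.

PER_LANE_BITS_PER_CLOCK = 1       # each lane carries one bit per clock
CLOCK_HZ = 2_500_000_000          # hypothetical 2.5 GHz link clock

for lanes in (1, 2, 4, 8, 16):
    gbps = PER_LANE_BITS_PER_CLOCK * CLOCK_HZ * lanes / 1e9
    print(f"x{lanes:<2} link: {gbps:5.1f} Gbit/s")
```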
It's not like the CPU can only do a single mathematical operation per clock cycle: there are multiple CPU cores, and each core executes several instructions in parallel and out of order. The instructions are also broken into smaller operations, so you might have hundreds or thousands of instructions "in flight" at any given moment.
A gigabit is a measure of data: a billion binary digits (bits). "Per second" makes this a stream of serial data.
A CPU also runs using binary, but in parallel. Simplistically speaking, a single-core, single-thread 32-bit processor "works" with 32 bits at one time. It looks into its in-tray and pulls 32 of those bits at once as a "word". Whilst inaccurate, 32 bits at 5GHz is 32 x 5 billion, or 160 billion bits per second.
Also, 100Gbit internet (ethernet?) is a statement of capacity and rarely seen as a sustained transfer for long periods of time. It *can* run up to 100Gbit.
This is a bit of an oversimplification, but it illustrates the numbers. In reality, the CPU is not working hands-on with the network data. That is managed separately. Consider the CPU the boss, whilst the network interface is a delegated expert on network stuff, including turning serial data into usable "words" or bytes.
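As a loose illustration of that serial-to-"word" step (this is not what a real NIC or driver does internally; it just shows bytes arriving one at a time being grouped into 32-bit words):

```python
import struct

# A made-up stream of bytes "arriving" serially, e.g. from a network interface.
serial_bytes = bytes([0xDE, 0xAD, 0xBE, 0xEF, 0x12, 0x34, 0x56, 0x78])

# Group the stream into 32-bit (4-byte) words, the unit a 32-bit CPU "works" with.
words = [struct.unpack(">I", serial_bytes[i:i + 4])[0]
         for i in range(0, len(serial_bytes), 4)]

for w in words:
    print(f"word: 0x{w:08X}")   # 0xDEADBEEF, 0x12345678
```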
Even though there are a lot of great answers explaining how CPUs process more than 1 bit per cycle and why the processor’s clock speed is not a huge factor, I think most are missing the point.
Your question is still interesting for other reasons and it’s related to how digital communications work. The ‘real’ question would be: how can a 250 MHz wave/signal (for example, Cat 6 Ethernet cable) be capable of transferring 10 Gbit/s Internet?
Apart from having 4 cable pairs, the answer relies on something called __modulation__. If you only send a '1' as, let's say, a pulse of 1V (volt) and a '0' as 0V, those 10G speeds wouldn't be possible, since you'd be limited to, at most, 250 Mbit/s per cable pair, as your intuition told you. BUT there's a trick: if your electronics are capable of differentiating more than 2 amplitude levels (let's say 0V, 1V, 2V, 3V to keep it simple), then you can associate each of those levels with a pair of bits, i.e. 0V = '00', 1V = '01', 2V = '10', 3V = '11'. This way, if you send one pulse of 2V and then one of 3V, you are transmitting '1011' with only two pulses. That means you can send DOUBLE the amount of information using the same frequency (250 MHz). This is called PAM (pulse amplitude modulation).
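Here is a toy sketch of that mapping in code, using the idealized voltage levels from above (no noise, no real Ethernet line coding):

```python
# Toy PAM-4 example: 2 bits per symbol, using the idealized levels above.
# Real Ethernet signaling is far more involved; this only shows the mapping idea.

BITS_TO_LEVEL = {"00": 0.0, "01": 1.0, "10": 2.0, "11": 3.0}  # volts
LEVEL_TO_BITS = {v: k for k, v in BITS_TO_LEVEL.items()}

def encode(bits: str) -> list[float]:
    """Turn a bit string into one voltage pulse per 2-bit pair."""
    return [BITS_TO_LEVEL[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def decode(pulses: list[float]) -> str:
    """Turn voltage pulses back into the original bit string."""
    return "".join(LEVEL_TO_BITS[p] for p in pulses)

pulses = encode("1011")
print(pulses)          # [2.0, 3.0] -> two pulses carry four bits
print(decode(pulses))  # '1011'
```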
More complex types of modulation can extend this concept up to 12 bits / Hz. (Look up QAM on Wikipedia).
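To see how the numbers can add up, here is a simplified capacity calculation; it ignores the coding overhead and exact symbol rates that real standards use, and just multiplies the figures from the example above:

```python
# Simplified link-capacity math: symbol rate x bits per symbol x cable pairs.
# Real 10 Gbit Ethernet uses different symbol rates and coding overhead.

SYMBOL_RATE_HZ = 250_000_000   # the 250 MHz signal from the example above
CABLE_PAIRS = 4

for bits_per_symbol in (1, 2, 10):
    gbps = SYMBOL_RATE_HZ * bits_per_symbol * CABLE_PAIRS / 1e9
    print(f"{bits_per_symbol:2} bits/symbol -> {gbps:4.1f} Gbit/s over {CABLE_PAIRS} pairs")
```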