If computer clocks max out somewhere around 5GHz, how is it possible for 100Gbit internet to exist? How does the computer possibly transfer that much data per second?


31 Answers

Anonymous

First, I have to mention that the only 100Gbit standards currently in use run over fiber. So I'll explain how 10Gbit over copper works first, since it's a better fit for an ELI5. But the short version is that you don't actually need a faster clock to get more bandwidth.

So there are two general tricks to make this work.

First, you have more than one “pair”. Each pair is a separate communication line, and the pairs work in parallel. Ethernet cable has 4 pairs (8 wires), and at 10Gbit all 4 pairs carry data at the same time. That brings your needed clock speed down from 10GHz to 2.5GHz, because you have 4 lines you can use simultaneously.

Second is that the data is encoded in a very specific way. I won’t go into detail on exactly how, because that is definitely not an ELI5 topic. But the bottom line is that you get to transmit roughly 3 bits per pair per clock cycle. And that brings the needed frequency down to 833.33 MHz.
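You can check that arithmetic with a quick sketch (using the simplified 3-bits-per-pair-per-cycle figure from above; the real encoding is a bit more involved):

```python
# 10 Gbit/s split across 4 parallel pairs,
# with roughly 3 bits carried per pair per clock cycle.
line_rate = 10e9      # bits per second
pairs = 4             # parallel wire pairs in the cable
bits_per_cycle = 3    # bits encoded per pair per clock

clock_hz = line_rate / pairs / bits_per_cycle
print(clock_hz / 1e6)  # ≈ 833.33 MHz
```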

Now for 100Gbit, you’d actually need a clock frequency of 8.33GHz if you went over the standard 4-pair copper ethernet connection using the same 3-bit encoding. 100Gbit connections use fiber optics instead, which changes the math. With fiber, you have “lanes” instead of “pairs”. A lane is a specific wavelength of light that’s used to transfer the data, and you can put multiple lanes over a single fiber (10 lanes in the 100Gbit specification). Fiber is usually packaged as a single pair though (2 fibers, 1 in, 1 out, 10 lanes in each direction). Fiber also uses a 64-bit encoding instead of the 3-bit encoding you get with copper. So the math is actually 100Gbit/s ÷ 10 lanes ÷ 64 bits, and that comes out to 156.25 MHz, which is, oddly enough, a lower clock speed than 10Gbit over copper. But you’re sending 10×64=640 bits over the wire every clock cycle instead of 4×3=12.
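Running the same division for the fiber case (again using the simplified 10-lane, 64-bits-per-cycle model from this answer):

```python
# 100 Gbit/s split across 10 wavelength lanes,
# with 64 bits carried per lane per clock cycle.
line_rate = 100e9     # bits per second
lanes = 10            # lanes in each direction
bits_per_cycle = 64   # bits per lane per clock

clock_hz = line_rate / lanes / bits_per_cycle
print(clock_hz / 1e6)  # 156.25 MHz
```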

Now to wrap this up: ultimately bandwidth is a product of how many channels you have, how many bits you can send over each channel per clock cycle, and how fast the clock is. You can either have fewer channels and a fast clock (serial), or a lot of channels and a slower clock (parallel).

How this relates to the CPU is that the bandwidth between your CPU and the network interface is *way* higher than the bandwidth over the network connection itself. For comparison, a typical modern CPU data bus is 64 bits wide and uses a dirt simple 1 bit encoding. At a 5GHz clock, that gives 320Gbit bandwidth over the bus.
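The wrap-up formula (channels × bits per channel per cycle × clock) reproduces all three numbers from this answer; the helper function name here is just made up for illustration:

```python
def bandwidth_gbps(channels, bits_per_cycle, clock_hz):
    """Rough aggregate bandwidth: channels x bits-per-channel-per-cycle x clock."""
    return channels * bits_per_cycle * clock_hz / 1e9

print(bandwidth_gbps(4, 3, 833.33e6))    # ≈ 10   (10Gbit copper)
print(bandwidth_gbps(10, 64, 156.25e6))  # 100.0  (100Gbit fiber)
print(bandwidth_gbps(1, 64, 5e9))        # 320.0  (64-bit CPU bus at 5GHz)
```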
