ELI5: How does one cable have higher bandwidth than another?

I get the difference between something like USB 2 and USB 3, where USB 3 has more wires, but how does one USB C cable manage to transfer data faster than another USB C cable that has the exact same number of paths?

Anonymous 0 Comments

> how does one USB C cable manage to transfer data faster than another USB C cable that has the exact same number of paths?

It rarely actually does.
As others have mentioned, the more likely explanation is that the two cables have different wires inside.
USB C cables that look identical (and have the same connectors) very commonly contain different numbers of wires.

But it *is* possible for two USB C cables with the same number of wires to be certified for different speeds.
Mostly this comes down to cable length and shielding.

In the early days of USB C (back when speeds were slower, in the USB 3.0 days), cables were allowed to be up to 2m long and were allowed to have relatively poor shielding.
“Shielding” here is kind of like a sleeve that protects the wires from outside radio interference (like your microwave oven).
If cables are poorly shielded and/or long, they pick up interference more readily.
And the faster the signal on the wire pairs, the more likely it is that interference will cause bits to be read incorrectly.

When we moved from USB 3.0 to USB 3.1, speeds increased, which meant the standards for cables got stricter.
Cables could be no longer than 1m.

If you want the fastest speeds with USB4, we’re still using the same USB C cables, but again the standards have tightened up.
Cables can be no longer than 80cm.
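
If it helps to see the pattern, here’s the rough trade-off from this answer as a tiny lookup table (a sketch only; the figures are the approximate ones quoted above, not official USB-IF spec values):

```python
# Rule of thumb: the faster the rated speed, the shorter the allowed
# passive cable. Approximate figures from the answer above, not
# quotes from the USB-IF specifications.
max_passive_length_m = {
    "USB 3.0 era (5 Gbit/s)": 2.0,
    "USB 3.1 (10 Gbit/s)": 1.0,
    "USB4 (40 Gbit/s)": 0.8,
}

for generation, length in max_passive_length_m.items():
    print(f"{generation}: passive cables up to {length} m")
```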

Anonymous 0 Comments

Imagine that each part of a signal on a cable is someone talking. Higher bandwidth means more people talking. If you put everyone in a hallway talking at once, it gets really noisy and you can’t hear anyone. A low-quality cable is like a small hallway: very quickly you can’t make out the conversations because it’s too noisy.

A really high-quality cable design lets everyone (the multiple signals) talk without the noise drowning anyone out. An even better one lets people whisper and still be heard, making it possible for even more people to talk.
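
There’s a textbook formula behind the hallway intuition: the Shannon capacity, C = B·log2(1 + S/N), which says a quieter channel (better signal-to-noise ratio) can carry more bits per second over the same bandwidth. A minimal sketch with made-up numbers (the 1 GHz bandwidth is purely illustrative, not a USB figure):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr: float) -> float:
    """Shannon limit: C = B * log2(1 + S/N), with S/N as a plain ratio."""
    return bandwidth_hz * math.log2(1 + snr)

B = 1e9  # 1 GHz of usable bandwidth on the wire (illustrative only)

# A noisy hallway vs. a quiet one: same bandwidth, very different capacity.
print(shannon_capacity_bps(B, 10))     # ~3.5e9  bits/s
print(shannon_capacity_bps(B, 1_000))  # ~1.0e10 bits/s
```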

Anonymous 0 Comments

With a faster-switching signal. The problem is that the faster you go, the bigger a problem defects in the cable become: a pair of data lines with mismatched lengths, connections that cause internal reflections (because the connector has a different impedance than the cable), and crosstalk between different sets of data lines all start to matter.

This is also why the cables rated for the fastest data transfer speeds are generally pretty short. Signal integrity is harder to maintain over longer cables.
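
The reflection problem can be put into numbers: wherever the impedance changes (say, cable to connector), a fraction Γ = (Z₂ − Z₁)/(Z₂ + Z₁) of the signal voltage bounces back toward the sender. A quick sketch with plausible but made-up values:

```python
def reflection_coefficient(z1_ohms: float, z2_ohms: float) -> float:
    """Fraction of signal voltage reflected at an impedance boundary:
    gamma = (Z2 - Z1) / (Z2 + Z1)."""
    return (z2_ohms - z1_ohms) / (z2_ohms + z1_ohms)

# Illustrative numbers only: a 90-ohm differential pair meeting a
# slightly mismatched 100-ohm connector.
gamma = reflection_coefficient(90, 100)
print(f"{gamma:.3f}")  # ~0.053 -> about 5% of each signal edge bounces back
```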

Anonymous 0 Comments

tl;dr: It isn’t so much the cable (although that certainly helps) as it is the receiver on each end being better able to read and interpret the signal. We’ve gotten a *lot* better at reading and interpreting high-speed signals over channels with limited bandwidth and possible interference, and we’ve reduced the signaling overhead as well.

I’ll try to ELI5 this as best I can, but be ready for some geek speak.

Let’s go back to the days of the dial-up modem. Telephone lines back in the day used what was called a “voice grade channel,” which was about 4 kHz wide (it was actually 3.6 kHz, but let’s not worry about that). In order to send data over that channel, the data signal had to fit within that 4 kHz.

If you’re sending a simple binary signal, all you need is a way to tell a 1 from a 0. You also needed to keep a constant signal or else the receiving end would lose its place, so you couldn’t just turn the signal on and off. Instead, some of the earliest modems sent a tone at one frequency within the allowed bandwidth for a 1, and a tone at a different frequency for a 0. But thanks to the *lousy* quality of the phone lines (and the early state of signal processing), the faster you switched between these tones, the farther apart they needed to be for the receiver to tell them apart and recover the data. Since you were limited to a 4 kHz bandwidth, they could only get so far apart, so your speed was limited. The earliest modem (1958) ran at a breakneck 110 bits per second (we’re talking over 20 hours to transfer a single megabyte).
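
Here’s that two-frequency scheme (frequency-shift keying) as a minimal sketch; the tone frequencies and sample rate below are illustrative choices, not the exact assignments of any particular modem:

```python
import numpy as np

SAMPLE_RATE = 8_000            # audio samples per second
BAUD = 110                     # bits per second, as in the 1958 modem
F_ONE, F_ZERO = 1_270, 1_070   # Hz: one tone for a 1, another for a 0

def fsk_waveform(bits: list[int]) -> np.ndarray:
    """Emit a constant tone per bit, switching frequency between bits."""
    samples_per_bit = SAMPLE_RATE // BAUD
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    tone = {1: np.sin(2 * np.pi * F_ONE * t),
            0: np.sin(2 * np.pi * F_ZERO * t)}
    return np.concatenate([tone[b] for b in bits])

signal = fsk_waveform([1, 0, 1, 1, 0])
print(len(signal))  # 5 bits x ~72 samples each
```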

Four years later the receivers got better at signal processing and the limit was raised to 300 bits per second. Another 14 years went by before we managed to get it up to 1,200 bps, and we thought that was the upper limit.

But then we got clever. Back in the 1930s, somebody had figured out that we could send a constant signal at a single frequency and instead alter a different property called phase, but nobody really saw a use for it until now. You know how a radio signal is drawn as a wave in diagrams and cartoons? That wave traces the phase. An electrical signal “wiggles” its voltage up and down, and every time the voltage completes a full cycle, it has swept through a full 360 degrees of phase. Phase is measured in degrees like a circle, and we found we could alter it at discrete points.

So, instead of shifting a signal between two different frequencies, we placed a signal smack in the middle of the band, left it there, and altered the phase instead. We weren’t taking up anywhere near as much bandwidth at that point, and by shifting (or “keying”) the phase at 90 and 180 degrees we could send the data we needed. The faster we sent the data, the wider the signal became, so we were still limited, but we managed to get up to 2,400 bits per second with this technology.

But that wasn’t the end of it. As we developed better phase discriminators, we found we could alter the phase at *four* discrete points instead of two. That allowed us to send twice as much data at the same keying rate. Instead of 90 and 180, we were now using 0, 90, 180, and 270, and instead of sending a 1 or a 0 we were sending 00, 01, 11, or 10 with each shift. Double the data rate, same keying rate, same bandwidth.
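
That four-point scheme is what’s now called QPSK. Here it is as a sketch; the bit-pair-to-phase mapping follows the 00/01/11/10 ordering above (a Gray code: adjacent phases differ by a single bit, so a small phase error corrupts at most one bit):

```python
# Each pair of bits selects one of four phases (degrees).
PHASE_OF = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}

def qpsk_phases(bits: list[int]) -> list[int]:
    """Group bits into pairs and map each pair to a phase."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PHASE_OF[pair] for pair in pairs]

print(qpsk_phases([0, 0, 1, 1, 1, 0]))  # -> [0, 180, 270]
```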

They eventually moved up to *8* phase points, so each shift carried three bits instead of one. They tried 16, but the signal quality over most telephone lines was too low to sort the phase shifts out, so instead we started varying the power level as well, creating a sort of inner ring and outer ring of signal points (again, measured like a circle), provided the receiver was sensitive enough to tell them apart.
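
That inner-ring/outer-ring trick is the seed of what’s now called QAM: vary amplitude as well as phase to get more distinguishable points. A sketch of the general shape (two rings of eight phases giving 16 symbols, i.e. 4 bits per shift; this is a generic constellation, not any specific standard’s):

```python
import numpy as np

phases = np.deg2rad(np.arange(0, 360, 45))  # 8 phase points, 45 deg apart
constellation = [amp * np.exp(1j * ph)       # complex I/Q signal points
                 for amp in (0.5, 1.0)       # inner ring, outer ring
                 for ph in phases]
print(len(constellation))  # 16 symbols -> log2(16) = 4 bits per symbol
```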

We eventually did hit an upper limit on telephone lines (well, the old twisted-pair analog lines, at least), which is where DSL entered the mix, but that’s another story.

Hopefully this wasn’t *too* complicated an answer, but the advances that brought us to USB-C are along the same lines. Better signal detection and processing allows more data to be sent over a limited bandwidth, and cleaner signals mean less overhead (like forward error correction) has to be sent alongside the data, which shows up as faster effective speeds.
