ELI5: How does one cable have higher bandwidth than another?

I get the difference between something like USB 2 and USB 3, where USB 3 has more wires, but how does one USB-C cable manage to transfer data faster than another USB-C cable that has the exact same number of paths?

Anonymous 0 Comments

USB cables have specifications that need to be met in terms of materials used, connections, and so on. Some cable manufacturers do not adhere to those specifications and use subpar materials, which causes signal degradation or increased crosstalk.

Anonymous 0 Comments

First of all, USB-C has a lot of pins and wires in it, and not every cable connects all of them; it depends on the purpose. USB-C can carry a USB 3 or higher connection, it can carry Thunderbolt (basically PCIe over the cable), it can carry large amounts of power on several wires, and so on. A cable intended to charge a phone may only carry basic USB data plus the charging wires the phone is expected to use.

Second, the quality of the wires does matter. Signals degrade over copper with distance, even over a few inches or feet. Building the cable in certain ways can make it work reliably over longer distances: making the wires thicker, shielding the overall cable with a grounded shield, running wires that serve the same purpose together (e.g. the + and − of the same communication line), and so on.
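
As a toy illustration of why running the + and − of a line together helps (made-up numbers, nothing from a USB spec): noise tends to couple into both wires equally, and the receiver subtracts one from the other, so the noise cancels.

```python
import numpy as np

# Toy model: a differential receiver reads the *difference* between the
# + and - wires, so interference that hits both wires equally cancels.
rng = np.random.default_rng(0)
data = np.repeat([1.0, -1.0, -1.0, 1.0], 100)   # the bits we want to send
noise = rng.normal(0, 2.0, data.size)           # interference on the cable

wire_plus = data + noise         # both wires pick up the same interference
wire_minus = -data + noise
received = (wire_plus - wire_minus) / 2          # subtract: noise cancels

print(np.allclose(received, data))               # True: data survives intact
```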

Anonymous 0 Comments

A cable does not exactly have a speed, but it does have electrical properties that are frequency dependent. Depending on the materials and the manufacturing tolerances, those properties change. Better materials and tighter tolerances can give a larger usable frequency range and less noise and other distortion, all of which allow higher bandwidth.

It is also not just the wires: the electronics at the endpoints matter too, where more complex filters and signal processing can extract a higher maximum capacity from the same cable.
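
There is a classic formula for this relationship, the Shannon-Hartley channel capacity; the numbers below are purely illustrative, not measurements of any real cable.

```python
import math

# Shannon-Hartley: capacity = bandwidth * log2(1 + signal/noise).
# More usable bandwidth OR a cleaner (higher-SNR) cable both raise the
# maximum data rate, with the same number of wires.
def capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)            # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr)

print(f"{capacity_bps(16e6, 30) / 1e6:7.0f} Mbit/s")   # narrow band, clean
print(f"{capacity_bps(500e6, 30) / 1e6:7.0f} Mbit/s")  # wide band, clean
print(f"{capacity_bps(500e6, 15) / 1e6:7.0f} Mbit/s")  # wide band, noisy
```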

If you look at the common Ethernet-over-copper connections, the non-coaxial variants all use an 8P8C connector, a design from the 1970s if not earlier. With Cat 3 cables the usable bandwidth is around 16 MHz, and with two pairs you could achieve 100 Mbit/s.

The same connector with Cat 5 cables, made to tighter tolerances and with slightly different twisting, could manage 1000 Mbit/s. It also requires a more complex receiver circuit to filter and interpret the signal.

With Cat 6A, 400 MHz of bandwidth is possible, enough for 10 Gbit/s. With Cat 8 and 1600 MHz of bandwidth, you can manage 40 Gbit/s. This is all with the same connector and the same number of wires; what differs is exactly how the cable is made and to what tolerances.
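
As a rough check on those figures (using only the numbers quoted above, and glossing over details like how many pairs are in use), divide bit rate by analog bandwidth to see how many bits each hertz is made to carry:

```python
# bit rate / analog bandwidth = spectral efficiency (bit/s per Hz).
examples = {
    "Cat 3  -> 100 Mbit/s": (100e6, 16e6),
    "Cat 6A -> 10 Gbit/s":  (10e9,  400e6),
    "Cat 8  -> 40 Gbit/s":  (40e9,  1600e6),
}
for name, (bit_rate, bandwidth) in examples.items():
    print(f"{name}: {bit_rate / bandwidth:5.2f} bit/s per Hz")
```

Later standards both widen the usable band and pack more bits into each hertz, which is exactly what tighter cable tolerances and smarter receivers buy you.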

USB 3.0 added two differential pairs to the A connector and created a larger B connector to carry them.

The USB-C connector adds two more differential pairs on top of that, so the A and B connectors do not have the same number of differential pairs as C.

USB-C is a connector, not a protocol specification. The protocol specifications most commonly used over a USB-C cable are USB 3.1 and USB 3.2.

If you look at USB 3.0, SuperSpeed had a signaling rate of 5 Gbit/s. USB 3.1 later doubled it to 10 Gbit/s with SuperSpeed+.

USB 3.2 then added the ability to use two lanes at the same time in each direction; with the same 10 Gbit/s signaling on two lanes, the result is 20 Gbit/s.

It is this dual-lane mode that requires USB-C, because it has four high-speed differential pairs compared to just two pairs for A and B.

USB4, which is now starting to be introduced, manages speeds of 20 Gbit/s and 40 Gbit/s.
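
If it helps, the lane math above can be summed up in a few lines. The lane counts and raw rates are the ones described in this answer; the encoding efficiencies are the standard 8b/10b (USB 3.0) and 128b/132b (USB 3.1/3.2) figures:

```python
# lanes x per-lane rate = raw rate; line encoding takes a cut of it.
generations = {
    #  name               (lanes, Gbit/s per lane, encoding efficiency)
    "USB 3.0 (Gen 1x1)": (1, 5.0, 8 / 10),      # 8b/10b encoding
    "USB 3.1 (Gen 2x1)": (1, 10.0, 128 / 132),  # 128b/132b encoding
    "USB 3.2 (Gen 2x2)": (2, 10.0, 128 / 132),  # dual lane: needs USB-C
}
for name, (lanes, rate, eff) in generations.items():
    raw = lanes * rate
    print(f"{name}: {raw:4.0f} Gbit/s raw, ~{raw * eff:.1f} Gbit/s of data")
```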

Anonymous 0 Comments

Not all wires are created equal.

Say you’re driving a car. It has a top speed of 200 km/h. In this fantasy realm, there are no legally binding speed limits. You can freely drive as fast as you feel safe doing so.

The first road you are on is a long, straight, wide, flat, paved highway. No turns. No bumps. No hills. No obstacles. How fast are you gonna go? Perhaps as fast as you can, right?

A second road is a narrow, bumpy, winding gravel road that rolls over hills and twists and turns through dense woods. Are you going to floor it on this one, too? I hope not. You run a real risk of losing control and crashing.

It’s difficult to ELI5 exactly what makes cables differ without throwing out electrical jargon, but suffice it to say that some USB cables have little smooth-paved highways inside them, while other, cheaper ones have gravel roads inside them. Same number of paths, different travel experiences. And when computers start communicating over the cable, they can feel the difference. When computers detect a crummy cable between them, they make a gentleman’s agreement to speak at a slower speed, even if they could theoretically talk faster, because speaking slowly and being heard correctly the first time is faster than speaking quickly, being misheard repeatedly, and constantly error-correcting.
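
If you want to picture that agreement as code, here is a toy model. The rates are real USB rates, but the error threshold and the cable-test function are invented for illustration; this is not the actual link-training procedure.

```python
# Try the fastest rate first; fall back until this particular cable
# passes the error test.
RATES_GBPS = [10.0, 5.0, 0.48]   # SuperSpeed+ / SuperSpeed / Hi-Speed
MAX_ERROR_RATE = 1e-12           # illustrative threshold, not from a spec

def negotiate(error_rate_at):
    """Return the fastest rate whose test pattern comes through cleanly."""
    for rate in RATES_GBPS:
        if error_rate_at(rate) <= MAX_ERROR_RATE:
            return rate
    raise RuntimeError("no usable rate over this cable")

# A crummy cable: fine at 5 Gbit/s, hopeless at 10.
crummy_cable = lambda rate: 1e-3 if rate > 5.0 else 1e-15
print(negotiate(crummy_cable), "Gbit/s")   # -> 5.0 Gbit/s
```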

Also, one should be careful when talking about USB standards. The ones with letters (USB-A, USB-B, USB-C) *only* describe the shape of the plug, while the ones with numbers (USB 1.0, USB 2.0, USB 3.0) refer to the data protocol, which mostly affects the internal “shape” and build quality of the cable. They are somewhat mix-and-matchable. You did specify in your question that we are assuming the “number of paths” is the same as what USB-C can use, but I want to make abundantly clear that this isn’t always the case. Just because your fancy plug has 24 spots to attach wires to does not mean all of them are connected to something. Many cables with USB-C plugs have only four wires in them, making them functionally equivalent to the USB cables we’ve been dealing with for nearly two decades.

Anonymous 0 Comments

tl;dr: It isn’t so much the cable (although that certainly helps) as it is the receiver on each end being better able to read and interpret the signal. We’ve gotten a *lot* better at reading and interpreting high-speed signals over channels with limited bandwidth and possible interference, and at reducing signaling overhead.

I’ll try to ELI5 this as best I can, but be ready for some geek speak.

Let’s go back to the days of the dial-up modem. Telephone lines back in the day used what was called a “voice grade channel,” which was about 4 kHz wide (it was actually 3.6 kHz, but let’s not worry about that). In order to send data over that channel, the data signal had to fit within that 4 kHz.

If you’re sending a simple binary signal, all you need is a way to discriminate a 1 from a 0. You also needed to keep a constant signal or else the receiving end would lose its place, so you couldn’t just turn the signal on and off. Instead, some of the earliest modems would send a tone at one frequency within the allowed bandwidth for a 1, and a tone at a different frequency for a 0. But thanks to the *lousy* quality of the phone lines (and the early days of signal processing), the faster you switched between these tones, the farther apart they needed to be for the receiver to tell them apart and recover the data. Since you were limited to a 4 kHz bandwidth, they could only get so far apart, so your speed was limited. The earliest modem (1958) had a breakneck speed of 110 bits per second (we’re talking 23.3 hours to transfer a single megabyte).
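
That two-tone trick is frequency-shift keying, and a minimal sketch of it fits in a few lines. The tone pair here is the commonly cited Bell 103 answer-side pair (2,025 and 2,225 Hz), both comfortably inside the 4 kHz voice channel; everything else is simplified.

```python
import numpy as np

# Binary FSK: one tone means 1 ("mark"), another tone means 0 ("space").
FS = 8000                        # audio sample rate, Hz
BAUD = 300                       # bits per second
F_MARK, F_SPACE = 2225, 2025     # tone frequencies, Hz (Bell 103 answer side)

def fsk_modulate(bits):
    """One burst of the matching tone per bit (phase continuity ignored)."""
    samples_per_bit = FS // BAUD
    t = np.arange(samples_per_bit) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F_MARK if b else F_SPACE) * t) for b in bits]
    )

audio = fsk_modulate([1, 0, 1, 1, 0])
print(f"{len(audio) / FS:.3f} s of audio for 5 bits")
```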

Four years later the receivers got better at signal processing and the limit was raised to 300 bits per second. Fourteen years go by and we managed to get it up to 1,200 bps, and we thought that was the upper limit.

But then we got clever. Somebody had figured out back in the 1930s that we could send a constant signal at a single frequency and alter a different property called phase, but nobody really saw a use for it until now. You know how a radio signal is drawn as a wave in diagrams and cartoons? An electrical signal “wiggles” its voltage up and down, and each complete wiggle is one full cycle: 360 degrees of phase, measured like a circle. We found we could shift that phase at discrete points in time.

So, instead of shifting a signal between two different frequencies, we placed a signal smack in the middle of the band, left it there, and altered its phase instead. We weren’t taking up anywhere near as much bandwidth at that point, and by shifting (or keying) the phase at 90 and 180 degrees we could send the data we needed. The faster we sent the data, the wider the signal became, so we were still limited, but we managed to get it up to 2,400 bits per second with this technology.

But that wasn’t the end of it. As we developed better phase discriminators, we found we could alter the phase at *four* discrete points instead of two. That allowed us to send twice as much data at the same keying rate. Instead of 90 and 180, we were now using 0, 90, 180, and 270 degrees, and instead of sending a 1 or a 0 we were now sending 00, 01, 11, or 10 with each shift. Double the data rate, same keying rate, same bandwidth.
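
Here is that four-point mapping written out, using the same Gray-coded bit-pair ordering described above (00, 01, 11, 10 around the circle):

```python
import numpy as np

# Quadrature phase-shift keying: each pair of bits selects one of four
# phase angles, so the bit rate doubles while the keying rate stays put.
PHASES_DEG = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}

def qpsk_symbols(bits):
    """Map bit pairs to points on the unit circle (the constellation)."""
    assert len(bits) % 2 == 0
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([np.exp(1j * np.deg2rad(PHASES_DEG[p])) for p in pairs])

print(np.round(qpsk_symbols([0, 0, 0, 1, 1, 1, 1, 0]), 3))
# 8 bits -> 4 symbols. Adding amplitude levels on top of phase (the
# inner-ring/outer-ring trick described below) is what turns PSK into QAM.
```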

They eventually moved up to *8* phase points, tripling the data rate. They tried 16, but the signal quality was too low over most telephone lines to sort the phase shifts out, so instead we started varying the power levels as well, giving a sort of inner ring and outer ring of points (again, measured like a circle), provided the receiver was sensitive enough to read them.

We eventually did hit an upper limit on telephone lines (well, the old twisted-pair analog lines, at least), which is where DSL entered the mix, but that’s another story.

Hopefully this wasn’t *too* complicated an answer, but the advancement to USB-C is along the same lines. Better signal detection and processing allows more data to be sent over a limited bandwidth, and cleaner signals mean less overhead (like forward error correction) has to be sent along with the data, which gives the appearance of faster data speeds.
