What's the relationship between bandwidth and throughput in Ethernet cabling?

This feels like one of those questions that should be supremely simple, yet I cannot seem to figure it out.
As a quick example, Cat 6a has a data throughput of 10 Gbps and a bandwidth of 500 MHz. Data throughput is simple enough to understand: it can send 10 Gbps and no more. But bandwidth? Can you use it to calculate the latency of a cable of a given length? Can you use it to calculate the maximum number of endpoints?

3 Answers

Anonymous 0 Comments

A funnel or faucet has a spout diameter (bandwidth) of 500 MHz. Because of this, it can handle up to 10 gallons per second (10 Gbps).

Anonymous 0 Comments

Bandwidth has two commonly used definitions.

One is that it means the same thing as throughput; this is the sense most people use.

The other definition is the available frequency range, usually given in megahertz; for Cat 6a it is 500 MHz. That is a physical measurement of the frequency range over which signals can travel along the wire without the attenuation becoming too high for the distance you use. The link is usable when the signal-to-noise ratio at the receiving end is high enough for your application. 10GBASE-T allows 50 m for unshielded wire and 100 m for shielded.

The amount of data you can transmit with a specific bandwidth is theoretically limited by that bandwidth and the signal-to-noise ratio. The output signal is never identical to the input, for various reasons.
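
(That theoretical limit is the Shannon-Hartley theorem. Here is a minimal sketch of it in Python; the 30 dB SNR is just an assumed example figure, not something from the 10GBASE-T spec.)

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate for a given
    bandwidth and signal-to-noise ratio (SNR as a linear ratio, not dB)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: 400 MHz of usable bandwidth with an assumed SNR of 30 dB (1000x).
snr_db = 30                        # assumed value, purely for illustration
snr_linear = 10 ** (snr_db / 10)
print(shannon_capacity(400e6, snr_linear) / 1e9, "Gbit/s")  # ~3.99 Gbit/s per pair
```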

The practical speed depends on how you send the signal. Early standards encoded data a bit like Morse code; today's encodings are more complex and more efficient.

Exactly how it is done today is quite complex; what 10GBASE-T uses is called 64B/65B PAM-16 128-DSQ. The practical result is a data rate efficiency of 6.25 bit/(s·Hz).

That means a bandwidth of 1 MHz could be used to transmit 6.25 million bits per second, 10 MHz 62.5 million bits per second, and so on.

The encoding only uses 400 MHz, even though the cable is rated for 500 MHz.

A single pair of wires can therefore transmit 6.25 × 400 = 2500 million bits per second. A Cat cable has 4 pairs, so the total data rate is 10 billion bits per second.

So throughput = used bandwidth × data rate efficiency × number of channels.
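
As a quick sanity check, here is that formula written out in a few lines of Python, using only the 10GBASE-T numbers from above:

```python
# Throughput = used bandwidth * data rate efficiency * number of channels,
# using the 10GBASE-T figures from above.
used_bandwidth_hz = 400e6        # Hz actually used by the encoding
efficiency_bits_per_hz = 6.25    # bit/(s*Hz) for 64B/65B PAM-16 128-DSQ
pairs = 4                        # a Cat cable has 4 wire pairs

per_pair = used_bandwidth_hz * efficiency_bits_per_hz  # 2.5e9 bit/s
total = per_pair * pairs                                # 1e10 bit/s = 10 Gbit/s
print(per_pair / 1e9, "Gbit/s per pair,", total / 1e9, "Gbit/s total")
```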

Latency depends on the speed at which a signal travels through the wire and the wire's length, plus the delay of processing the signal on each end. The bandwidth cannot be used to determine that.

The signal speed in the wire is given by its https://en.wikipedia.org/wiki/Velocity_factor, which for Cat 6a is about 65% of the speed of light in a vacuum (close to 300 million m/s). For a maximum cable length of 100 m the delay is 100 / (300,000,000 × 0.65) ≈ 5 × 10^-7 s, i.e. 0.5 microseconds or 0.0005 milliseconds.

At that distance, delays in other parts of the network connection are what dominate.

This starts to become a significant limitation over long distances. Light in optical fiber also travels at roughly 65% of c, so at 100 km the delay is 500 microseconds = 0.5 milliseconds. London to New York is 5570 km along the shortest path, so the delay is at least 5,570,000 / (300,000,000 × 0.65) ≈ 28 milliseconds.
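
Here is the same velocity-factor arithmetic written out in Python, for the 100 m cable and the London-New York fibre path:

```python
C = 300e6  # speed of light in vacuum, m/s (rounded, as above)

def propagation_delay_s(length_m: float, velocity_factor: float = 0.65) -> float:
    """One-way delay for a signal travelling at velocity_factor * c."""
    return length_m / (C * velocity_factor)

print(propagation_delay_s(100) * 1e6, "microseconds")        # ~0.51 us for 100 m of Cat 6a
print(propagation_delay_s(5_570_000) * 1e3, "milliseconds")  # ~28.6 ms London-New York
```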

Anonymous 0 Comments

In electrical communications, bandwidth is the difference between the upper and lower corner frequencies of a channel. The corner frequency is the point at which frequency components begin to be attenuated, reflected, or filtered out. This is in the frequency domain, not the time domain. When that happens, the signal as received is distorted compared to the signal as the transmitter sent it. If it is too distorted, it becomes unintelligible.

Higher bandwidth on the transmission medium allows for higher-resolution transmission symbols, a greater rate of transmission symbols, or both. This has the effect of increasing the maximum data rate over the transmission medium.
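
Roughly speaking, data rate = symbol rate × bits per symbol, so improving either factor raises it. A tiny Python sketch with made-up numbers just to show the relationship:

```python
def data_rate_bps(symbol_rate_baud: float, bits_per_symbol: float) -> float:
    """Raw data rate: symbols per second times bits carried by each symbol."""
    return symbol_rate_baud * bits_per_symbol

# Arbitrary illustrative numbers: higher bandwidth lets you raise either factor.
print(data_rate_bps(100e6, 2) / 1e6, "Mbit/s")  # 200 Mbit/s
print(data_rate_bps(100e6, 4) / 1e6, "Mbit/s")  # 400 Mbit/s: finer symbols
print(data_rate_bps(200e6, 2) / 1e6, "Mbit/s")  # 400 Mbit/s: faster symbols
```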