What’s the difference between Bandwidth, Data Transfer Rate, Latency, and Throughput and what’s their job for internet speed?


Bandwidth is bits sent per second, without regard to errors.

Data transfer rate is bits per second from end to end, including retransmissions to correct errors.

Throughput could be either, depending on the motives of the source of the data.

Latency is completely different: it’s the round-trip communication time, divided by 2.

Bandwidth and DTR are the same on a link with no errors and infinitely large buffers, but no real link is like that.

Latency impacts DTR because, when there is an error, it takes two times the latency to send the replacement data, so long latency has a substantial negative impact on DTR.
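The effect above can be sketched with a toy model (illustrative numbers, not a real protocol): each garbled chunk costs one round trip, i.e. two times the one-way latency, plus the resend itself.

```python
def effective_dtr(bandwidth_bps, chunk_bits, error_rate, latency_s):
    """Rough effective transfer rate over a lossy link.

    bandwidth_bps: raw link speed in bits/second
    chunk_bits:    size of each chunk in bits
    error_rate:    fraction of chunks that arrive garbled (0..1)
    latency_s:     one-way latency in seconds
    """
    send_time = chunk_bits / bandwidth_bps          # time to push one chunk
    # A garbled chunk costs a round trip (the "huh, say again?") plus a resend.
    expected_time = send_time + error_rate * (2 * latency_s + send_time)
    return chunk_bits / expected_time               # useful bits per second

# Same 1% error rate, two different latencies: the slower round trip
# drags the effective rate down much further.
fast = effective_dtr(100e6, 12_000, 0.01, 0.005)   # 5 ms one-way latency
slow = effective_dtr(100e6, 12_000, 0.01, 0.150)   # 150 ms one-way latency
print(f"{fast / 1e6:.1f} Mbit/s vs {slow / 1e6:.1f} Mbit/s")
```

With these made-up numbers, the high-latency link ends up an order of magnitude slower even though both links have the same bandwidth and the same error rate.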

**Bandwidth:** The theoretical max flow rate of information. It gets its name from radio, where information is sent over specific radio frequencies. If you’ve ever seen one of those wavelength charts with long waves on one side and short waves on the other, you’d pick a small chunk of the diagram and operate on those frequencies. The *width* of this *band* you’ve chosen determines your information throughput, where a wider band has more capacity. This is why it’s called *bandwidth*. This doesn’t really apply to wired communication like the Internet uses, at least not in the same way, but the term has stuck.

**Data transfer rate:** Measures flow rate of data, just like bandwidth, but while bandwidth is the theoretical max, transfer rate is the actual rate you measure. Distortions to the message while it’s in transit could cause them to be garbled on the other end. When computers detect this, they ask the other computer to send that chunk of the message over again. Basically, one computer misheard the other and went, “Huh? Didn’t quite hear that, can you say that again?” Your computers are still communicating at or near their highest speed, the bandwidth speed, but since they may be repeating themselves, the actual rate of transfer of useful information may be much lower.

**Latency:** It’s the time it takes for the message to get across your medium to your target. The travel time, basically. Ideally, messages on electrical or optical cable move at the speed of light, but even that is still a finite speed. It takes time for messages to travel. Add on top of that the extra time it takes for all of the routers along the way to play post office with your message and shunt it down the correct wires and the delay can noticeably build up. As another commenter pointed out, we usually measure this as the two-way or round-trip speed cut in half. Rarely do you want to send a message one-way and get no response back, so knowing the round-trip speed is usually what we care about. (And for our fastest messages, knowing the one-way speed [might even be impossible](https://www.youtube.com/watch?v=pTn6Ewhb27k).)
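The “finite speed of light” part is easy to put numbers on. A common rule of thumb (assumed here) is that light in optical fiber travels at roughly two-thirds of c, about 200,000 km/s:

```python
# Back-of-the-envelope propagation delay in optical fiber.
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light; a rule of thumb

def one_way_delay_ms(distance_km):
    """Minimum one-way travel time over fiber, ignoring router delays."""
    return distance_km / SPEED_IN_FIBER_KM_S * 1000

# A ~4,000 km cross-country fiber run:
print(one_way_delay_ms(4_000))      # -> 20.0 ms one way
print(2 * one_way_delay_ms(4_000))  # -> 40.0 ms round trip
```

That’s a hard floor: every router playing post office along the way only adds to it.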

**Throughput:** Again, a data flow rate. This term is way more generic and can refer to just about anything you want it to mean depending on context. If you’re talking about maximum throughput of different cable designs, it would probably be referring to bandwidth. If you’re talking about designing networking solutions that promise certain real-world average speeds, it would probably be referring to data transfer rate.

For you, an individual end customer who just wants to use a computer now and again without having to think of all the details, the thing you’re probably most worried about with Internet speed is the *data transfer rate*. That’s the direct measure of how much useful data your computer can get from (or send to) the Internet at any given time. This is hard-capped by your bandwidth, and on a good day might be close to equal your bandwidth, but is almost always lower than it. Latency might be negligible if the only thing you want to do is download a bunch of data one-way, like watching high quality video, but if you’re doing something where you have to be constantly communicating back and forth in real time, like playing a video game, latency becomes extremely important.

Kind of wonky from a technical perspective, but hey, it’s for a five year old:

Bandwidth: how fast can data go through your connection? This is measured in “amount of data per second”

Transfer rate: how fast is data currently going? (Also “data per second”)

Latency: how long do you have to wait for some data to start arriving? (Measured in “milliseconds”)

Throughput: I would say it is the same as transfer rate, but I may be wrong.

Depending on what you try to do with your Internet, you need different properties. Do you want to download a game? You need a high transfer rate. Even if the download takes 2 seconds to start (that would be a horrible latency), if it reaches a high data rate then it will be finished soon.
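The download example works out to simple arithmetic: total time is the startup latency plus size divided by rate. A quick sketch with hypothetical numbers:

```python
def download_time_s(size_bytes, rate_bytes_per_s, latency_s):
    """Total time for a one-way bulk download: startup delay + transfer."""
    return latency_s + size_bytes / rate_bytes_per_s

game = 50e9  # a hypothetical 50 GB game download

# Horrible 2 s latency, but a fast 25 MB/s transfer rate:
print(download_time_s(game, 25e6, 2.0))   # ~2002 s -- the latency is noise
# Zero latency, but a slow 2.5 MB/s transfer rate:
print(download_time_s(game, 2.5e6, 0.0))  # ~20000 s -- the rate dominates
```

For bulk downloads, transfer rate is almost all that matters; the 2-second head start is lost in the rounding.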

Do you want to do voice chat? Voice does not need high data rates, but it does need low latency. If everything anyone said took 2 seconds to reach the other person, you would constantly talk over each other (I think it starts to get annoying once the latency rises above 100 ms for voice chat).

Do you want to surf reddit? You need a bit of both.

Take all of the above, and realize that an SUV loaded with hard drives driving cross country has an amazingly high bandwidth, just extremely bad latency.

That’s modernizing a very old joke, “Never underestimate the bandwidth of a station wagon full of tapes.”
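The joke holds up to arithmetic. With made-up but plausible numbers (the drive count, drive size, and trip length are all assumptions for illustration):

```python
# Sneakernet throughput: an SUV full of hard drives, driven cross-country.
drives = 1_000
bytes_per_drive = 20e12           # 20 TB drives (assumed)
trip_seconds = 3 * 24 * 3600      # a 3-day cross-country drive (assumed)

bits_per_second = drives * bytes_per_drive * 8 / trip_seconds
print(f"{bits_per_second / 1e9:.0f} Gbit/s")  # hundreds of Gbit/s of "bandwidth"
# ...but the "latency" is three days before the first bit arrives,
# and asking for a retransmission costs another round trip of the SUV.
```

Hundreds of gigabits per second beats most fiber links handily; it’s just useless for a video call.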