What’s the difference between Bandwidth, Data Transfer Rate, Latency, and Throughput and what’s their job for internet speed?


**Bandwidth:** The theoretical max flow rate of information. It gets its name from radio, where information is sent over specific radio frequencies. If you’ve ever seen one of those wavelength charts with long waves on one side and short waves on the other, you’d pick a small chunk of the diagram and operate on those frequencies. The *width* of this *band* you’ve chosen determines your information capacity: a wider band carries more. This is why it’s called *bandwidth*. The term doesn’t really apply in the same way to wired communication like most of the Internet, but it has stuck.
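To put a rough number on the "wider band has more capacity" idea, here's a small sketch using the Shannon–Hartley formula, which relates a channel's width to its maximum bit rate. The 20 MHz channel and signal-to-noise ratio below are made-up example values:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley: max error-free bit rate for a channel.

    Capacity grows linearly with bandwidth_hz, which is the sense in
    which the *width* of the *band* sets the ceiling.
    """
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 20 MHz radio channel at a signal-to-noise ratio of 100:
capacity = shannon_capacity_bps(20e6, 100)
print(f"{capacity / 1e6:.1f} Mbit/s")  # roughly 133 Mbit/s
```

Doubling the width of the band doubles the capacity, all else equal, which is why wider channels are the first thing engineers reach for.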

**Data transfer rate:** Measures the flow rate of data, just like bandwidth, but while bandwidth is the theoretical max, the transfer rate is the actual rate you measure. Distortion while a message is in transit can garble it on the other end. When computers detect this, they ask the other computer to send that chunk of the message over again. Basically, one computer misheard the other and went, “Huh? Didn’t quite catch that, can you say it again?” Your computers are still communicating at or near their highest speed, the bandwidth, but since they may be repeating themselves, the actual rate of transfer of useful information can be much lower.
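The cost of all that repeating can be sketched with a toy model (not any real protocol): if some fraction of chunks arrive garbled and have to be re-sent, only the remaining fraction of the link's capacity carries new data.

```python
def goodput_bps(link_rate_bps, loss_fraction):
    """Useful data rate when a fraction of chunks arrive garbled and must
    be re-sent. Each chunk needs on average 1 / (1 - loss) transmissions,
    so only (1 - loss) of the link's capacity carries new information."""
    assert 0 <= loss_fraction < 1
    return link_rate_bps * (1 - loss_fraction)

# A hypothetical 100 Mbit/s link where 5% of chunks get garbled:
print(f"{goodput_bps(100e6, 0.05) / 1e6:.1f} Mbit/s")  # 95.0 Mbit/s
```

The wire is still busy at 100 Mbit/s the whole time; it's the *useful* rate that drops.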

**Latency:** The time it takes for a message to cross the medium to its target. The travel time, basically. Ideally, messages on electrical or optical cable move at close to the speed of light, but even that is a finite speed. It takes time for messages to travel. Add on top of that the extra time it takes for all of the routers along the way to play post office with your message and shunt it down the correct wires, and the delay can noticeably build up. As another commenter pointed out, one-way latency is usually estimated as the round-trip time cut in half. Rarely do you want to send a message one way and get no response back, so the round-trip time is usually what we care about. (And for our fastest messages, knowing the one-way time [might even be impossible](https://www.youtube.com/watch?v=pTn6Ewhb27k).)
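To get a feel for the physical floor on latency, here's a back-of-the-envelope sketch. Light in fiber travels at roughly two-thirds its vacuum speed; the ~5,600 km trans-Atlantic distance is an illustrative number, and real routes (plus routers) add more on top:

```python
SPEED_OF_LIGHT_KM_S = 299_792              # in a vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S / 1.5  # light slows down in glass

def one_way_latency_ms(distance_km):
    """Best-case propagation delay over fiber; routers add more on top."""
    return distance_km / FIBER_SPEED_KM_S * 1000

# A hypothetical ~5,600 km trans-Atlantic link:
one_way = one_way_latency_ms(5600)
print(f"{one_way:.0f} ms one way, {2 * one_way:.0f} ms round trip")
```

No amount of extra bandwidth shrinks that number; only a shorter path does.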

**Throughput:** Again, a data flow rate. This term is more generic and can refer to different things depending on context. If you’re talking about the maximum throughput of different cable designs, it probably means bandwidth. If you’re talking about designing networking solutions that promise certain real-world average speeds, it probably means data transfer rate.

For you, an individual end customer who just wants to use a computer now and again without thinking about all the details, the thing you’re probably most concerned with for Internet speed is the *data transfer rate*. That’s the direct measure of how much useful data your computer can get from (or send to) the Internet at any given time. It’s hard-capped by your bandwidth, and on a good day might come close to it, but is almost always lower. Latency might be negligible if all you want to do is pull a bunch of data one way, like watching high-quality video, but if you’re doing something that requires constant back-and-forth in real time, like playing a video game, latency becomes extremely important.
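The video-versus-game trade-off can be shown with a crude model (one round trip to ask, then the data streams at the transfer rate; handshakes and protocol overhead ignored, and the sizes below are made-up examples):

```python
def transfer_time_s(payload_bits, rate_bps, rtt_s):
    """Crude model: one round trip to request, then data streams at the
    transfer rate. Ignores handshakes, slow start, and other overhead."""
    return rtt_s + payload_bits / rate_bps

# 1 GB of video at 100 Mbit/s with 50 ms round-trip latency:
big = transfer_time_s(8e9, 100e6, 0.05)    # rate dominates (~80 s)
# A tiny 1 kB game update at the same speeds:
small = transfer_time_s(8e3, 100e6, 0.05)  # latency dominates (~50 ms)
print(f"{big:.2f} s vs {small * 1000:.1f} ms")
```

For the big download the 50 ms barely registers; for the tiny real-time message it's essentially the whole wait, which is why gamers watch ping, not megabits.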
