Why are latency and bandwidth separate? If latency is the delay of data, and bandwidth is data over time, shouldn't one affect the other? If it takes 5 s for a car to start and 10 s to reach its destination, then the average speed would be spread out across the whole 15 s. Shouldn't this be the same?


Consider conventional mail (via a post office). This is a high-latency, high-bandwidth type of communication. It takes a few days to get a shipment somewhere, but that shipment can contain an almost unlimited amount of data.

Contrast this with SMS (text messaging). This is fairly low latency – your text message arrives almost immediately – but low bandwidth (you can’t send all that much data).

Another way to consider this:

Latency is *responsiveness*. It’s how long it takes to get a reply once I send a message.

Bandwidth is *volume*. It’s how much information I can send over time.

Sending data is actually a two-way communication. Computer 1 sends a packet of data to computer 2. Computer 2 then sends an acknowledgement back to computer 1 to say that it's received the packet and is ready for the next one. Latency is a measure of how fast this round trip happens. Bandwidth is a measure of how much data can physically be shoved down a cable in a set amount of time.

High latency can mean computer 1 can't send data as fast as it's capable of, because it spends a long time waiting for acknowledgements from computer 2.
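As a rough sketch of that waiting effect (all numbers invented for illustration): if the sender pushes one packet and then idles until the acknowledgement comes back, the round-trip time caps its throughput no matter how fast the cable is.

```python
# Sketch: why waiting for acknowledgements caps throughput.
# All numbers are made up for illustration.

link_bandwidth = 100e6 / 8     # a 100 Mbit/s link, in bytes per second
packet_size = 1500             # bytes per packet
rtt = 0.050                    # 50 ms round trip (packet out + ack back)

# If computer 1 sends one packet and then waits for the acknowledgement,
# it moves exactly one packet per round trip, however fat the cable is.
effective = packet_size / rtt  # bytes per second actually achieved

print(f"link can carry {link_bandwidth:,.0f} B/s")
print(f"ack-limited at {effective:,.0f} B/s")  # ~30,000 B/s, far below the link
```

This is why real protocols keep several packets "in flight" before the first acknowledgement arrives: it hides the latency so the bandwidth can actually be used.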

You can also think of it with a river analogy. Measure the average speed of the water droplets: that gives you the latency from one point to another. Now count how many droplets pass a "line" across the river in a certain amount of time: that's the bandwidth.

Let's look at two extremes here: Morse code flashed with lights vs. IPoAC.

If you've got two ships near each other, you can send a message by flashing Morse code with lights. The time it takes each flash to reach the other ship is extraordinarily short, nanoseconds or microseconds at most, but because you have to flash the light on and off by hand, your bandwidth is quite low, so it can take minutes to send a sentence. This is ultra-low latency but also ultra-low bandwidth.

The other end of the spectrum is IPoAC, or Internet Protocol over Avian Carrier (my favorite implementation of IP). It involves loading all your data onto an SD card, strapping it to the leg of a carrier pigeon, and sending it on its way. It'll take hours or even days for your data to arrive, but you can send a terabyte this way pretty easily (but send it twice, because predators = packet loss). This is ultra-high latency and ultra-high bandwidth.

If you're sending a short message, you can do it way faster by blinking the lights. But for a long message, the low bandwidth of the blinking lights means the carrier pigeon overtakes them, even though it needs a couple of hours to arrive.
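The crossover can be sketched with one formula: total delivery time is the fixed delay plus the data size divided by the bandwidth. The numbers below (2 bytes/second by hand, a 2-hour pigeon flight) are invented for illustration.

```python
# Sketch: total delivery time for the two extremes above.
# All numbers are rough assumptions, not measurements.

def delivery_time(latency_s, bandwidth_bps, size_bytes):
    """Total time = fixed delay + time to push all the data through."""
    return latency_s + size_bytes / bandwidth_bps

# Blinking lights: essentially zero latency, maybe 2 bytes/second by hand.
lights = lambda size: delivery_time(0, 2, size)
# Carrier pigeon: ~2 hours of flight, then the whole SD card at once
# (modelled here as an enormous bandwidth of 1 GB/s).
pigeon = lambda size: delivery_time(2 * 3600, 1e9, size)

print(lights(100))   # a 100-byte note: ~50 s by light, the pigeon is way behind
print(pigeon(100))   # ~7200 s by pigeon
print(lights(1e12))  # a terabyte by light: ~5e11 s, millions of days
print(pigeon(1e12))  # ~8200 s by pigeon
```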

To take your car analogy: 5 seconds for a car to start and 10 seconds to reach its destination is fine for sending a single car. But what if I want to move a bunch of them? Then I can spend 10 seconds starting a truck that takes 20 seconds to reach the destination but carries 8 cars. That's a much faster way to move cars from A to B even though the latency is a lot higher; if you're only moving a single car, though, it'll be slower.
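The arithmetic above can be sketched directly, under the (assumed) simplification that trips run back to back, one after another:

```python
# Sketch of the truck-vs-car trade-off, using the numbers from the analogy.
# Assumes trips happen sequentially, one after another.

def car_time(n_cars):
    # 5 s to start + 10 s to drive, one car per trip
    return n_cars * (5 + 10)

def truck_time(n_cars):
    # 10 s to start + 20 s to drive, 8 cars per trip
    trips = -(-n_cars // 8)  # ceiling division
    return trips * (10 + 20)

print(car_time(1), truck_time(1))  # 15 vs 30: the car wins for one car
print(car_time(8), truck_time(8))  # 120 vs 30: the truck wins in bulk
```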

The connection you want depends on your needs.

It depends on what you need. If you're designing a car to drive 100 miles, then the speed of the car is what matters. If you're designing a car that only moves 5 feet, then it'll spend most of its time starting up, so if you need it to make that trip faster, the startup time is what you need to improve.