How does a route impact latency, and who determines a connection route?


Let’s plan a road trip. There are three logical paths on the map to get to your destination.

Path A – A two-lane highway. It is the shortest distance to your destination, but there are 4 towns on this route.

Path B – A four-lane highway. This route is not as direct, and there is a major city you will need to pass through.

Path C – Your friend has a small plane and a pilot’s license. They offer to fly your family over on your trip. For this analogy, the airports are going to represent a wireless microwave network.

In this analogy, the towns are routers and the city is an Internet exchange. Just as when passing through a town, every time your traffic goes through a router there is a slowdown. In towns the speed limit usually drops and there may be traffic control mechanisms, like stop signs or traffic lights. The same is true for routers: in most cases a router on the Internet just needs to determine where your traffic should be sent next, but in times of congestion your traffic may be queued in memory buffers as QoS is applied.

The amount of time it takes the router to determine how to forward traffic is called **Processing Latency**; this is how long you stop in each town looking at your map and deciding which road takes you to the next stop on your journey. Any time stuck in a queue is **Queuing Latency**, which is how long you sit in traffic waiting for everyone to drive around the carpet someone forgot to tie down. The packet is then transmitted onto the link, which takes **Transmission Latency**; you can think of this as the time it takes to get through town. Lastly there is **Propagation Latency**, the time you actually spend on the road. Even though your data moves at a fraction of the speed of light, it still takes time to travel between points, and the longer the span, the more propagation latency is added.

Add all that up and you get the **total latency**, which you can then tell your kids, who don’t know how to tell time yet and will ask you 5 minutes into your trip if you are there yet.
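The four components above can be added up in a few lines of code. This is a rough sketch with made-up illustrative numbers, not measurements; the ~200,000 km/s figure is the approximate signal speed in fiber (about two-thirds the speed of light).

```python
# Sketch: total latency for one hop = processing + queuing + transmission + propagation.
# All inputs here are invented illustrative values.

def hop_latency_ms(processing_ms, queuing_ms, packet_bits, link_bps, distance_km):
    """Sum the four latency components for a single router hop, in milliseconds."""
    transmission_ms = packet_bits / link_bps * 1000   # time to put the packet on the wire
    propagation_ms = distance_km / 200_000 * 1000     # ~200,000 km/s signal speed in fiber
    return processing_ms + queuing_ms + transmission_ms + propagation_ms

# A 1500-byte packet on a 100 Mbps link spanning 1000 km:
total = hop_latency_ms(processing_ms=0.05, queuing_ms=0.2,
                       packet_bits=1500 * 8, link_bps=100e6, distance_km=1000)
print(f"{total:.2f} ms")  # propagation (5 ms) dominates on this long hop
```

Note how, over long distances, propagation latency usually dwarfs the other three components; that is why physical route distance matters so much.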

Path A – Shorter, but more stops along the way. It also doesn’t have as many lanes (bandwidth), so there is an increased possibility of congestion on this path.

Path B – A little longer, but with more lanes and fewer stops. However, the city is popular, so traffic may be heavier, leading to congestion; on the other hand, the city has more infrastructure, so it may be better equipped to handle it. Might be better than Path A.

Path C – Forget that traffic noise. It’s a 30-minute plane ride. It’s quicker, but because it’s a small plane you are limited on luggage. So it’s probably fine for a vacation, quick visits, or business (high-speed, low-latency stock trading, for example), but not for moving house. This method also restricts where your traffic gets dropped off, at which point you have to switch to a different medium and add its delays. For example, let’s say your destination didn’t have an airport; you would have to fly into the city and then rent a car.
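The three paths can be compared numerically. Every hop count, overhead, and distance below is invented to mirror the analogy; the one real detail is that microwave through air travels close to the full speed of light, while light in fiber moves at roughly two-thirds of it, which is exactly why trading firms pay for microwave links.

```python
# Sketch: comparing the three paths with invented numbers.
FIBER_KM_PER_MS = 200.0  # ~2/3 the speed of light, in glass
AIR_KM_PER_MS = 300.0    # microwave through air is close to full light speed

def path_latency_ms(hops, per_hop_overhead_ms, distance_km, km_per_ms):
    """Per-router overhead (processing + queuing + transmission) plus propagation."""
    return hops * per_hop_overhead_ms + distance_km / km_per_ms

path_a = path_latency_ms(4, 0.5, 800, FIBER_KM_PER_MS)   # short road, four towns
path_b = path_latency_ms(2, 0.5, 1200, FIBER_KM_PER_MS)  # longer road, through the city
path_c = path_latency_ms(1, 0.5, 750, AIR_KM_PER_MS)     # small plane / microwave hop

print(f"A: {path_a:.1f} ms, B: {path_b:.1f} ms, C: {path_c:.1f} ms")
```

With these made-up numbers Path C wins despite the airport limitations, and whether A beats B comes down to the trade-off between extra stops and extra distance.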

As for who decides a route, I think u/raddpuppyguest did a good job of describing the mechanics of route path selection. I can’t really fit path selection into my analogy: in the real world we choose our route as we travel, but in the analogy we represent the data. Data doesn’t determine its path through a network; it is in fact clueless as to how it gets from point A to point B.
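To make "the network decides, not the data" concrete: link-state routing protocols such as OSPF have each router build a map of the network and run a shortest-path computation over link costs. Here is a minimal sketch of that idea using Dijkstra's algorithm; the graph and its costs are entirely invented, and real routers learn this map from protocol messages rather than hardcoding it.

```python
# Sketch: lowest-cost path selection, as a link-state protocol (e.g. OSPF) computes it.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over link costs: returns (total_cost, list of hops)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Towns (routers) connected by roads (links) with invented costs (metrics):
graph = {
    "home":  {"town1": 1, "city": 3},
    "town1": {"town2": 1},
    "town2": {"dest": 1},
    "city":  {"dest": 2},
}
cost, route = shortest_path(graph, "home", "dest")
print(cost, route)
```

The packet never sees this computation; each router just forwards it along the winning next hop. (Between networks, path selection is done differently, by BGP policy rather than a simple cost sum.)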
