How does a route impact latency, and who determines a connection route?



The Internet is just a bunch of computers connected together.

The Internet has a very basic pathfinding mechanism that boils down to “how many service providers do you have to go through to reach your website” by default.

Each service provider owns all of the routing within their network (referred to as an “autonomous system”). Ultimately, they have individual control over which links the traffic flowing through their network goes over, but practically, they don’t typically mess with individual routes without a good reason. The answer to “who determines a connection route” is each individual ISP determines which routes to install/use for a given destination.

Notice that the pathfinding is based on how many “autonomous systems (ISPs)” you have to hop through to reach a destination, but the Internet has no idea about the speed of the links to reach a given destination. That means, if Autonomous system 12345 has a very slow link, and your AS Path is 2329 12345 5467, you will encounter a lot of latency. Let’s suppose that 12345 has a 10Gig link to 5467 and that 10Gig link always has way too much traffic on it, so your traffic is getting dropped/experiencing delays.

There are two things that can be done to avoid latency in this case. First, AS 12345 could fix the slow link (either by adding more links to 5467, fixing errors on the link, or routing around the bad link). Second, you could try to use a VPN to go around that link.

Let’s say you use a VPN, and after using the VPN your path is 2329 44687 4236 5467. This “looks” like a longer path for your route (4 networks instead of 3), but 2329, 44687, 4236, 5467 all have amazing networks with 400Gig links. Even though you are going through more providers now, you have avoided the bad link in 12345, so you get significantly less latency.
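The key point above can be shown in a tiny sketch (not a real BGP implementation; the AS numbers come from the example): default path selection compares AS-path length only, so the shorter-but-congested path wins even when a longer path would have lower latency.

```python
# Two candidate AS paths to the same destination, from the example above.
paths = {
    "via_12345": ["2329", "12345", "5467"],         # 3 ASes, but 12345 has a congested link
    "via_vpn":   ["2329", "44687", "4236", "5467"], # 4 ASes, all with fast links
}

# BGP-style default tie-breaker: fewest ASes wins.
# Link speed and congestion are invisible to this comparison.
best = min(paths, key=lambda name: len(paths[name]))
print(best)  # → via_12345
```

That is why the VPN trick works: it forces your traffic onto the "longer" path that the default selection would never pick on its own.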


EDIT: If you want to see the AS-PATH for yourself, you can go to Hurricane Electric’s looking glass, select the “BGP Route” radio button, type your destination IP address in the box, and click Probe. The AS Path (list of ISPs) will show up in the “Path” column (this is from Hurricane Electric’s perspective). If you want to check the path from your own ISP, you can google “YOUR-ISP-NAME looking glass” and see if your ISP shares its routing tables for lookups.

Let’s plan a road trip. There are three logical paths on the map to get to your destination.

Path A – A two lane highway. It is the shortest distance to your destination but there are 4 towns on this route.

Path B – A four lane highway. This route is not as direct and there is a major city you will need to pass through.

Path C – Your friend has a small plane and a pilot’s license. They offer to fly your family over on your trip. For this analogy, the airports are going to represent a wireless microwave network.

In this analogy, the towns are routers and the city is an Internet exchange. Just like when passing through a town, every time your traffic goes through a router there is a slowdown. In towns the speed limit usually drops and there may be traffic control mechanisms, like stop signs or traffic lights. The same is true for routers: in most cases a router on the Internet just needs to determine where your traffic should be sent next, but in times of congestion your traffic may be queued in memory buffers as QoS is applied.

The amount of time it takes for the router to determine how to forward traffic is called **Processing Latency**; this is how long you stop in each town, looking at your map and determining which road to take to the next stop in your journey. Any time stuck in a queue is **Queuing Latency**, which is how long you sit in traffic waiting for cars to go around the carpet someone forgot to tie down. The time it takes to put the packet onto the link is **Transmission Latency**; you can think of this as the time it takes to get through town. Lastly there is **Propagation Latency**, the time your data actually spends on the road between points. Even though it is moving at a fraction of the speed of light, your data still takes time to travel, and the longer the span, the more propagation latency is added.

Add all that up and you get the **total latency**, which you can then tell your kids who don’t know how to tell time yet and will ask you 5 minutes into your trip if you are there yet.
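Adding the four components up is simple arithmetic. Here is a sketch for a single hop; every number below (packet size, link speed, distance, queue time) is a hypothetical value chosen for illustration, not measured data.

```python
# Per-hop latency = processing + queuing + transmission + propagation.
processing_ms = 0.02   # time to look up where the packet goes next (assumed)
queuing_ms    = 1.5    # time waiting in the router's buffer (assumed congestion)

# Transmission latency: time to push all the packet's bits onto the wire.
packet_bits = 1500 * 8          # a typical 1500-byte packet
link_bps    = 1_000_000_000     # a 1 Gbit/s link (assumed)
transmission_ms = packet_bits / link_bps * 1000

# Propagation latency: distance divided by signal speed in fiber (~2e8 m/s).
distance_m = 500_000            # a 500 km span (assumed)
propagation_ms = distance_m / 2e8 * 1000

total_ms = processing_ms + queuing_ms + transmission_ms + propagation_ms
print(round(total_ms, 3))  # → 4.032
```

Notice that on this made-up hop, propagation (distance) and queuing (congestion) dominate; the lookup and transmission times are tiny by comparison.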

Path A – Shorter, but more stops in between. Also not as many lanes (bandwidth), so increased possibility of congestion on this path.

Path B – A little longer, but more lanes and fewer stops. However, the city is popular, so traffic may be higher leading to congestion, but the city also has more infrastructure so it might have capabilities to handle it. Might be better than path A.

Path C – Forget that traffic noise. It’s a 30 minute plane ride. It’s quicker, but because it’s a small plane you are limited on your luggage. So it’s probably fine for a vacation, quick visits, or business (high speed, low latency stock trading, for example), but not for moving house. This method also restricts where your traffic gets dropped off, at which point you have to switch to a different medium and add its delays. For example, let’s say your destination didn’t have an airport; you would have to fly into the city and then rent a car.

As for who decides a route, I think u/raddpuppyguest did a good job of describing the mechanics of route path selection. I can’t really fit path selection into my analogy: in the real world we decide our route while moving around, but in the analogy we represent the data, and data doesn’t determine the path in a network. It is in fact clueless as to how it gets from point A to point B.


No single router knows ALL the IPs and the routes to get to them. They’re just not capable of that.
Instead, each router along the route knows its neighbors, and they tell each other about what connections they have.

Let’s say you have 3 routers connected in a line: A – B – C

Say Router A has devices with 111 and 112 addresses. If Router A gets data for 111, it knows exactly where to send it. It’s connected to it! But if it gets data for 333, it doesn’t have that connected to it, so it sends it out to Router B. Router B sees that it doesn’t have 333 either, so it sends it out to Router C. C has 333 connected to it, so it sends it out!
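The A–B–C example can be sketched as tiny forwarding tables. The addresses and the "send everything else to my neighbor" default route are taken from the example above; this is a toy, not how real routing tables are stored.

```python
# Each router knows only its directly connected addresses,
# plus a default next hop for everything else.
tables = {
    "A": {"111": "local", "112": "local", "default": "B"},
    "B": {"default": "C"},
    "C": {"333": "local"},
}

def route(dest, start):
    """Follow default routes until some router has dest directly connected."""
    hops = [start]
    router = start
    while tables[router].get(dest) != "local":
        router = tables[router]["default"]  # hand the packet to the next router
        hops.append(router)
    return hops

print(route("333", "A"))  # → ['A', 'B', 'C']
print(route("111", "A"))  # → ['A']
```

Each entry in the returned list is one "hop", and every hop adds a little latency, which is exactly the point made below.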

Each hop takes time: the data has to travel across a line, and then a router has to read it and figure out where to send it. So in theory, you want the shortest trip. Instead of going A to B to C, what if we connect A directly to C? Then it has fewer “hops” and the data gets there faster.

But there are BILLIONS of routers in the world, we obviously can’t connect them ALL directly to each other. So instead, we build networks, or a big web of routers. Router A connects to BCDEF
Router B connects to ACDEFG
Router C connects to ABDEGH
And so on, until virtually every router in the world gets connected to each other through a series of hops.
Now let’s say A wants to talk to H. But A isn’t connected to H. The data has to go through at least one other router first.

So what happens is, the routers talk to each other. A shouts out to its neighbors, “Hey, does anyone connect to H?” B comes back and says “I don’t!” C says “I do!” So A knows to send the data to C, who will send it to H. A remembers: “Anything going to H, I’ll send to C.”
It gets much more complex when you have billions of routers and lots of hops, but it uses the same basic idea and usually just groups stuff up. For example, all addresses that start 155 might all belong to one ISP, so each router learns “Anything that starts with 155, send it this way” instead of memorizing each address.
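The "anything that starts with 155" idea is prefix aggregation, and when blocks overlap, routers pick the most specific (longest) matching prefix. Here is a sketch using plain string prefixes; real routers match on binary CIDR prefixes, and both prefixes and port names below are made up.

```python
# One entry per address block instead of one per address.
routes = {
    "155":   "port-to-ISP-X",   # everything starting 155 goes this way...
    "155.1": "port-to-ISP-Y",   # ...except the more specific 155.1 block
}

def next_hop(dest):
    """Longest-prefix match: the most specific matching entry wins."""
    matches = [p for p in routes if dest.startswith(p)]
    return routes[max(matches, key=len)] if matches else None

print(next_hop("155.2.3.4"))  # → port-to-ISP-X
print(next_hop("155.1.9.9"))  # → port-to-ISP-Y
```

This is why routers can cope with billions of addresses: the table holds a modest number of blocks, not an entry per device.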

Another way latency is impacted is by the speed of the connection between routers and the time it takes each router to examine the data and figure out where it goes. So if A has a very slow connection to C, then it may go slower than if A sent it to B who then sent it to C.
Or if C is getting TONS and TONS of Data, C might take longer to process the data and that adds latency. So shortest distance, or least amount of hops, may not always be the fastest.
One of the things modern routing protocols do is “rate” a connection and calculate the rating of a route to decide which way is the fastest. This can be VERY complex and isn’t always used though.
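That "rate a connection" idea can be sketched as a weighted shortest-path search: give each link a cost (latency here, with assumed numbers) and pick the path with the lowest total cost rather than the fewest hops. This is the textbook Dijkstra algorithm, not any specific routing protocol.

```python
import heapq

# Link costs in ms (assumed). A direct A-C link exists but is slow,
# so the two-hop A-B-C path should win.
links = {
    "A": {"B": 5, "C": 50},
    "B": {"A": 5, "C": 5},
    "C": {"A": 50, "B": 5},
}

def lowest_latency(src, dst):
    """Dijkstra over link costs: returns the cheapest total latency."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in links[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None  # unreachable

print(lowest_latency("A", "C"))  # → 10
```

The direct hop costs 50 ms, but going A → B → C costs only 10 ms: fewest hops is not always fastest, which is exactly the trade-off described above.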