I always wonder how a server like Google’s receives thousands of requests from people at once! I understand the signals are passed through fiber-optic cable, but won’t those signals collide? Or how can some small wire handle so many requests at once? In my mind a wire can only take one request at a time. I know this happens close to light speed, but still! It’s very hard to understand.
It uses buffers. Buffers everywhere.
You don’t have an unbroken wire connected to a Google server. If you have fiber internet service, your ONT (Optical Network Terminal) is assigned timeslots during which it is permitted to transmit upstream, and it has to stay quiet the rest of the time to avoid colliding with upstream transmissions from other customers on the network. This strategy is called Time Division Multiple Access (TDMA). All this data is decoded by the ISP’s equipment, where it reaches a router (just a computer, really) that forwards your packets of data to another router, and so on, until they reach a Google datacenter.
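To make the timeslot idea concrete, here’s a toy sketch in Python. The slot length, round-robin assignment, and ONT names are all invented for illustration; real PONs grant upstream slots dynamically from the ISP side rather than with a fixed rotation.

```python
# Toy TDMA upstream scheduling on a shared fiber (illustrative only).

SLOT_DURATION_US = 125                        # length of one upstream slot
ONTS = ["ont-A", "ont-B", "ont-C", "ont-D"]   # customers sharing the fiber

def owner_of_slot(slot_index: int) -> str:
    """Round-robin: each ONT may transmit only during its own slot."""
    return ONTS[slot_index % len(ONTS)]

def may_transmit(ont: str, time_us: int) -> bool:
    """An ONT checks whether the current instant falls in its slot."""
    return owner_of_slot(time_us // SLOT_DURATION_US) == ont

# Each ONT stays quiet outside its assigned slots, so upstream bursts
# from different customers never overlap on the shared fiber.
for t in range(0, 1000, SLOT_DURATION_US):
    print(f"t={t}us: {owner_of_slot(t // SLOT_DURATION_US)} may transmit")
```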
The rates of incoming and outgoing traffic at each router do not necessarily match, especially on short timescales. Your ONT has a data buffer. Every router on the path has a buffer (multiple buffers, even, as the data packets are copied and their headers modified internally). In fact, after propagation delay (the unavoidable delay introduced by the physical carrier over finite distances), queuing delay (the time your packets spend sitting in a buffer, going nowhere) is a major source of latency on the public internet and in consumer routers. By contrast, the transmit delay of the ONT waiting for a transmission opportunity is a relatively minor source of latency.
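As a rough illustration of how queuing delay can dwarf propagation delay, here’s a back-of-the-envelope sketch. The distance, buffer occupancy, and link rate are made-up numbers chosen only to show the comparison.

```python
# Where one-way latency comes from (illustrative numbers).

SPEED_IN_FIBER_KM_PER_MS = 200  # light in fiber travels ~2/3 of c

def propagation_delay_ms(distance_km: float) -> float:
    """Unavoidable delay from the physical distance alone."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

def queuing_delay_ms(queued_bytes: int, link_rate_mbps: float) -> float:
    """Time for everything already in the buffer to drain ahead of you."""
    return (queued_bytes * 8) / (link_rate_mbps * 1000)  # bits / (bits per ms)

# 2,000 km of fiber: 10 ms of propagation delay you can never remove...
print(propagation_delay_ms(2000))      # -> 10.0
# ...but 250 KB waiting in one congested 10 Mbps buffer adds 200 ms.
print(queuing_delay_ms(250_000, 10))   # -> 200.0
```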
A router has finite buffer size. If those buffers are getting too full, a router will simply drop packets. In this way the internet is said to provide “best-effort” service; there is no guarantee of packet delivery. In fact, it is usually desirable for routers to drop packets _before_ it is strictly necessary in order to minimize latency on the link, and internet protocols like TCP are designed with this knowledge in mind: they use packet loss as a signal of congestion on the link. Packet loss is a _feature_ of the internet, not a bug.
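A minimal sketch of that behavior, with an assumed capacity and a crude random early-drop rule in the spirit of AQM schemes like RED (real routers use more sophisticated algorithms and much larger buffers):

```python
import random
from collections import deque

class RouterQueue:
    """Tail-drop queue plus a crude early-drop rule: once the buffer is
    more than half full, some arrivals are dropped at random so senders
    back off before the queue (and latency) grows huge."""

    def __init__(self, capacity: int, early_drop_prob: float = 0.1):
        self.buffer = deque()
        self.capacity = capacity
        self.early_drop_prob = early_drop_prob
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        full = len(self.buffer) >= self.capacity
        early = (len(self.buffer) > self.capacity // 2
                 and random.random() < self.early_drop_prob)
        if full or early:
            self.dropped += 1   # packet loss is the congestion signal
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None
```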
At the other end, the server has its own socket and application buffers. It will handle requests as fast as it is able, up to some finite limit of outstanding requests, at which point it will also start to refuse them. Depending on the service, there may be various load balancers along the way that help it make these decisions fairly and quickly at large scale.
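Here’s a bare-bones sketch of that pattern, assuming a simple bounded queue and illustrative status codes; real servers use far more elaborate admission control than this.

```python
import queue
import threading

MAX_OUTSTANDING = 100                       # assumed limit, for illustration
pending = queue.Queue(maxsize=MAX_OUTSTANDING)

def handle(request):
    pass  # placeholder for the real application work

def accept(request) -> str:
    """Queue the request if there's room; otherwise shed load immediately
    rather than letting every client wait forever."""
    try:
        pending.put_nowait(request)
        return "202 Accepted"
    except queue.Full:
        return "503 Service Unavailable"

def worker():
    while True:
        handle(pending.get())
        pending.task_done()

threading.Thread(target=worker, daemon=True).start()
```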
But, essentially, you’re right that multiple transmissions at the same time are problematic. TDMA is only one of many strategies communication technologies use to divide access to a shared medium. If you use Wi-Fi on your LAN, clients can and do accidentally shout over each other, rendering both transmissions indecipherable. In that case clients are expected to detect the problem themselves, sense when the airwaves are quiet, and wait a random amount of time before trying again, in the hope that they won’t transmit at the same time twice in a row. This strategy is called Carrier Sense Multiple Access (CSMA); Wi-Fi specifically uses the collision-avoidance variant, CSMA/CA.
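A toy sketch of carrier sensing with exponential backoff. The timing constants are illustrative rather than the exact 802.11 parameters, and `channel_is_busy`/`send` are hypothetical callbacks standing in for the radio hardware.

```python
import random
import time

SLOT_TIME_US = 9  # illustrative slot length, not exact 802.11 timing

def backoff_us(attempt: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Random backoff; the contention window doubles after each collision."""
    cw = min(cw_max, (cw_min + 1) * (2 ** attempt) - 1)
    return random.randint(0, cw) * SLOT_TIME_US

def transmit(channel_is_busy, send) -> None:
    """Carrier sense, send, and back off a random (growing) time on collision."""
    attempt = 0
    while True:
        while channel_is_busy():              # listen until the air is quiet
            time.sleep(0.000001)
        if send():                            # returns True if no collision
            return
        time.sleep(backoff_us(attempt) / 1_000_000)  # random wait, then retry
        attempt += 1
```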
Later Wi-Fi standards (802.11ax / Wi-Fi 6) also use what they call OFDMA (Orthogonal Frequency Division Multiple Access), where the channel bandwidth (typically 80 MHz in total) is divided into roughly a thousand smaller subcarriers, which are grouped into resource units and assigned to different clients, enabling multiple clients to speak at once without collisions. Notice that a packet from your phone on your home Wi-Fi is transmitted to your wireless access point, which is connected to or built into your home router, which is connected to or built into the ONT, and so on. So one packet might traverse several physical connections that use various multiplexing strategies.
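To picture the OFDMA split, here’s a toy allocation. The client names are invented, the subcarrier count is loosely based on an 80 MHz 802.11ax channel, and real access points assign resource units dynamically per frame rather than in fixed equal shares.

```python
# Toy OFDMA allocation: one wide channel split into subcarriers that
# are handed out in contiguous blocks so several clients can transmit
# simultaneously without colliding.

TOTAL_SUBCARRIERS = 996   # usable data subcarriers in an 80 MHz channel
clients = ["phone", "laptop", "tv", "thermostat"]

per_client = TOTAL_SUBCARRIERS // len(clients)
allocation = {
    client: range(i * per_client, (i + 1) * per_client)
    for i, client in enumerate(clients)
}

for client, subcarriers in allocation.items():
    print(f"{client}: subcarriers {subcarriers.start}-{subcarriers.stop - 1}")
```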