ELI5: How does a server receive thousands of requests at a time?


I always wonder how a server like Google's receives thousands of requests from people at once! I understand the signals are passed through fiber-optic cable, but won't those signals collide? Or how can some small wire handle so many requests at once? In my mind a wire can only take one request at a time. I know all of this happens close to light speed, but still! It's very hard to understand.


13 Answers

Anonymous 0 Comments

Packets can collide: yup, that's a thing! (Don't ask me for every technical detail, but roughly: each sender listens to the line while it transmits, and if it hears a collision it backs off for a random delay and retries; see the sketch below.)
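Here's a toy illustration of that retry-with-random-backoff idea (roughly what classic shared-medium Ethernet does). Everything is simulated: `collision_detected()` just flips a weighted coin, and the slot time and probabilities are stand-in numbers, not real hardware behavior.

```python
import random
import time

def collision_detected():
    """Stand-in for the hardware collision signal: pretend 30% of
    transmissions collide with someone else's."""
    return random.random() < 0.3

def send_with_backoff(frame, max_attempts=16, slot_time=51.2e-6):
    """Toy version of retry-with-random-backoff. Numbers are illustrative."""
    for attempt in range(max_attempts):
        # "Transmit" the frame (here we just pretend).
        if not collision_detected():
            return True   # frame got through cleanly
        # Collision: wait a random number of slots; the range doubles each
        # attempt so colliding senders quickly stop picking the same moment.
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(slots * slot_time)
    return False  # gave up after too many collisions

print("delivered:", send_with_backoff(b"hello"))
```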

How can a wire transfer so much data?

One part is how fast you can switch that light on and off. If the light takes a full second to turn on, that limits your bandwidth; if you can flip it a billion times per second, you can push on the order of a billion bits per second.

The other is finding ways to send multiple pieces of data at once. For example, what if you send a green signal and I send a red one? Our signals share the same fiber, but the receiver can look at the color to split them back into separate signals.
I think what they actually do is send the light in at different angles: you would typically send light straight down the fiber, but instead one stream goes in at, say, 45 degrees and another at 15 degrees (warning: those were random numbers). In practice the common trick really is the color one: each stream gets its own wavelength of light.
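For a rough sense of scale, here's some back-of-the-envelope arithmetic. Every number below (80 colors, 25 billion symbols per second, 2 bits per symbol) is a ballpark assumption, not a spec:

```python
# Rough, illustrative math for a modern fiber link: many colors of light
# share one strand, and each color is switched/modulated billions of times
# per second. All numbers are ballpark assumptions.
wavelengths = 80                 # separate "colors" on one fiber strand
symbols_per_second = 25e9        # how fast each color can be modulated
bits_per_symbol = 2              # fancier modulation packs in extra bits

bits_per_second = wavelengths * symbols_per_second * bits_per_symbol
print(f"{bits_per_second / 1e12:.1f} Tbit/s on a single strand")  # ~4 Tbit/s
```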

On the computer side (a network switch, or the server itself) they use tricks like DMA (direct memory access): the network card can write incoming data into memory (RAM) directly, so the CPU just grabs whatever it needs when it needs it.

Then DNS can give your computer a list of IPs, and your computer picks one more or less at random, which helps balance the traffic across multiple servers (see the example below).
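You can see this from any machine with Python installed. What you get back depends on your resolver and where you are; `google.com` and port 443 are just convenient examples:

```python
import random
import socket

# Ask DNS for all the addresses behind a name. Big sites often publish
# several; which ones (and how many) you get depends on your resolver.
infos = socket.getaddrinfo("google.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print("addresses:", addresses)

# A client typically just connects to one of them - effectively a crude
# form of load balancing done before a single packet reaches the server.
chosen = random.choice(addresses)
print("connecting to:", chosen)
```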

You also have load balancers that do similar things internally (though I'm still curious how that works at really big scale; it seems like no single machine could handle that much traffic, even if it only proxies it).

Each step further in can then use its own load balancer/DNS for internal resources, including dedicated sub-servers. The sketch below shows the basic round-robin idea a load balancer uses.
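A minimal sketch of that round-robin idea. The backend addresses are made up, and real load balancers add health checks, weighting, and smarter policies on top:

```python
import itertools

# Hypothetical pool of identical web servers sitting behind one public IP.
backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

# Round-robin: hand each new connection to the next server in the list,
# wrapping around forever.
next_backend = itertools.cycle(backends)

for request_id in range(7):          # pretend 7 requests arrive
    target = next(next_backend)
    print(f"request {request_id} -> forward to {target}")
```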

On a side note about servers: consumer CPUs go up to around a dozen cores, while server CPUs (for your typical web server, at least) can have 64 cores or more. So that's 64 requests being worked on truly simultaneously.

Keep in mind that a lot of "processing" is actually waiting: waiting for data to come back from RAM, from hardware, from the network (a database, for example), and so on.

So you don't necessarily need a lot of raw processing power (for your typical web server); while one request is waiting, the CPU can make progress on others, as the sketch below shows.
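A minimal asyncio sketch of that idea: one thread on one core "handles" a thousand requests at once because each request spends almost all its time waiting (the fake database call here is just a sleep):

```python
import asyncio
import random
import time

async def handle_request(request_id: int) -> None:
    # Pretend the database/network takes 100-300 ms to answer.
    await asyncio.sleep(random.uniform(0.1, 0.3))
    # The actual CPU work per request is tiny by comparison.
    print(f"request {request_id} done")

async def main() -> None:
    start = time.perf_counter()
    # 1000 "simultaneous" requests, all handled by one thread on one core.
    await asyncio.gather(*(handle_request(i) for i in range(1000)))
    print(f"finished 1000 requests in {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```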

Anonymous 0 Comments

Well, "received at once" is a bit of a stretch. "In quick succession, before any of the previous requests had finished being served" would be more like it.

Anonymous 0 Comments

Every time you combine data streams from multiple wires into one (or vice versa), you need a device to handle that.

For example, your router combines communications from all your devices onto the one wire that goes to your ISP (by having your devices essentially take turns using the wire), and your ISP combines multiple customers' data onto a single link that goes toward the server.

As for how these data streams are combined, there are two basic approaches: time division and frequency division. Time division just means the signals take turns being sent; usually a "turn" lasts literally microseconds (see the sketch below).
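A toy sketch of time division: several devices share one uplink by taking turns, one packet per turn. The device names and packet labels are made up:

```python
from collections import deque

# Each device has its own queue of packets waiting to go out.
queues = {
    "laptop": deque(["L1", "L2", "L3"]),
    "phone":  deque(["P1", "P2"]),
    "tv":     deque(["T1", "T2", "T3", "T4"]),
}

wire = []  # what actually goes down the single shared cable, in order
while any(queues.values()):
    for device, q in queues.items():
        if q:
            wire.append((device, q.popleft()))  # this device's turn

print(wire)
# e.g. [('laptop', 'L1'), ('phone', 'P1'), ('tv', 'T1'), ('laptop', 'L2'), ...]
```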

Frequency division is more complex, and works as follows:

1) A signal is encoded by varying the frequency of a wave over time. For example, if 100 MHz corresponds to a 0 and 101 MHz corresponds to a 1, then when we observe 100 MHz for 1 microsecond (an arbitrary amount of time I chose) followed by 101 MHz for 2 microseconds, we just received '011'.

2) When multiple frequencies are added together, we are able to decompose them back into the individual frequencies through a technique called the Fourier transform.

This allows for multiple signals to be sent at the same time through a medium, as long as each signal receives its own unique frequency band to operate in.
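A minimal numpy sketch of that idea: two channels at different (arbitrarily chosen) carrier frequencies share the same "wire", and the receiver uses a Fourier transform to see both of them:

```python
import numpy as np

sample_rate = 10_000            # samples per second
t = np.arange(0, 1.0, 1 / sample_rate)

channel_a = np.sin(2 * np.pi * 100 * t)   # channel A lives around 100 Hz
channel_b = np.sin(2 * np.pi * 250 * t)   # channel B lives around 250 Hz
wire = channel_a + channel_b              # both ride the same medium at once

# Receiver: take the Fourier transform of what arrived and see which
# frequency bands carry energy.
spectrum = np.abs(np.fft.rfft(wire))
freqs = np.fft.rfftfreq(len(wire), d=1 / sample_rate)

peaks = freqs[spectrum > spectrum.max() / 2]
print("frequencies present on the wire:", peaks)   # ~[100. 250.]
```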