How do we have so much space to store and share internet content? Wouldn’t we run out of space at some point because you can only build so many data or cloud servers?


With literally billions of internet users sharing billions of bytes of data every day, there must be a point where the servers can no longer handle the traffic. I see this all the time when websites crash because too many users try to access the same site at once.


5 Answers

Anonymous

Please don’t confuse storage space with bandwidth. We’re nowhere near maxing out on either, but it’s important to remember that they are two separate things. Data sits in storage; data gets transmitted over wires, and the wires (up to a point) are where the bottleneck is. Storage capacity has been increasing for decades and is on track to keep doing so for the foreseeable future. There are server farms whose capacities got so large that the prefix scale had to be extended, because the languages that gave us kilo-, mega-, and giga- didn’t have terms that went that far. The largest figure I have heard of (which is almost certainly not the largest that exists) is 20 **yottabytes**. I had to look up what a yottabyte was, it’s that big.

But consider that a microSD card the size of your fingernail can hold 512 GB, and it’s not hard to imagine an entire room filled with those. Say each card is 0.5 mm thick and you build one stack from floor to ceiling in a standard 2.5 m tall room (yes, the exact measurements are a little off; it’s okay, I’m just making a point). That’s 5,000 cards at 512 GB each, or about 2,500 TB in a single stack. Now, with a 4 m wide wall and each card being 10 mm across, you can fit 400 of those stacks side by side, for roughly 1,000,000 TB of raw storage. No storage system will ever be that efficient, so lop off 20%, bringing it down to about 800,000 TB for one wall of one room in one house. Now imagine an entire shopping mall full of those stacks, and the number quickly begins to boggle the mind. But that’s storage.
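To make that back-of-the-envelope math concrete, here’s a rough sketch in Python using the same made-up dimensions as above (a 512 GB card, 0.5 mm thick, 10 mm across, a 2.5 m by 4 m wall). None of these are real-world specs, just the round numbers from the thought experiment:

```python
# Back-of-the-envelope version of the "wall of microSD cards" estimate above.
# All numbers are the rough assumptions from the text, not real-world specs.

CARD_CAPACITY_GB = 512       # one microSD card
CARD_THICKNESS_MM = 0.5      # assumed thickness of a card lying flat
CARD_WIDTH_MM = 10           # assumed footprint across the wall
ROOM_HEIGHT_MM = 2500        # a 2.5 m ceiling
WALL_WIDTH_MM = 4000         # a 4 m wall
OVERHEAD = 0.20              # assume 20% lost to packaging, controllers, etc.

cards_per_stack = ROOM_HEIGHT_MM / CARD_THICKNESS_MM   # 5,000 cards
stacks_per_wall = WALL_WIDTH_MM / CARD_WIDTH_MM        # 400 stacks

raw_gb = cards_per_stack * stacks_per_wall * CARD_CAPACITY_GB
usable_gb = raw_gb * (1 - OVERHEAD)

print(f"Per stack:  {cards_per_stack * CARD_CAPACITY_GB / 1e3:,.0f} TB")
print(f"Whole wall: {raw_gb / 1e3:,.0f} TB raw, {usable_gb / 1e3:,.0f} TB usable")
# Per stack:  2,560 TB
# Whole wall: 1,024,000 TB raw, 819,200 TB usable (roughly an exabyte)
```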
As for transmission, that’s been improving every year too, as we figure out better ways to pack information into data streams and to stack those streams on top of one another, but it’s still the biggest bottleneck in the process. Thirty-five years ago we struggled along at 14,400 bits per second over old-fashioned copper phone lines. We outgrew that, and modern copper (coaxial) lines can handle as much as 150 Gbps; internally, specialty copper wiring can handle 1 Tbps or better. And that doesn’t even begin to cross over into what fiber optics can handle.

That being said, companies that provide website hosting have to divvy up their transmission resources among all the sites they host, and some sites get less than others. It also costs them money to transmit the data. So when you see a website go down from too much traffic, it’s usually not because the infrastructure can’t handle it; it’s because that site’s hosting plan didn’t allocate enough transmission capacity. If they’d paid a little more, they’d have more bandwidth available and would be far less likely to go down. Look at YouTube: it hasn’t gone down for lack of capacity in years, if not decades. Why? Because they own their own transmission lines (among a few other reasons, like pre-positioning data closer to where it’s going to be requested, but that’s a topic for another rant).
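To put those transmission speeds in perspective, here’s a quick sketch of how long one 5 GB video takes to move over links of different speeds. The file size and the link speeds are just round illustrative numbers, not figures from any particular provider:

```python
# How long does one 5 GB video take to move over links of different speeds?
# File size and link speeds are round illustrative numbers.

FILE_SIZE_BYTES = 5 * 10**9            # a 5 GB video
links_bps = {
    "1990s dial-up modem (14.4 kbps)": 14_400,
    "typical fiber plan (1 Gbps)":     10**9,
    "high-end coax figure (150 Gbps)": 150 * 10**9,
}

for name, bits_per_second in links_bps.items():
    seconds = FILE_SIZE_BYTES * 8 / bits_per_second    # 8 bits per byte
    if seconds >= 86_400:                              # longer than a day
        print(f"{name}: about {seconds / 86_400:.1f} days")
    else:
        print(f"{name}: about {seconds:.1f} seconds")
# 1990s dial-up modem (14.4 kbps): about 32.1 days
# typical fiber plan (1 Gbps): about 40.0 seconds
# high-end coax figure (150 Gbps): about 0.3 seconds
```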
Hope this helps…
