How do we have so much space to store and share internet content? Wouldn’t we run out of space at some point because you can only build so many data or cloud servers?


With literally billions of internet users sharing billions of bytes of data every day, there must be a point where the servers can no longer handle the traffic. I see this all the time when websites crash because too many users try to access the same site at once.


Data storage density (amount of data per given volume) has doubled roughly every three years since magnetic storage was developed in the mid-1950s. On top of that, technologies have been developed to split the load of transporting data across the Internet. Plus, large fiber optic networks have been installed around the world, providing enormous bandwidth for data transfer.
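To get a feel for how fast that compounds, here is a tiny sketch. The 1956 start date (IBM's first magnetic hard drive) and the three-year doubling period are the figures from the answer above, not measurements:

```python
# Rough illustration of the "density doubles every three years" claim.
# Start year 1956 (IBM's first magnetic hard drive) and the 3-year
# doubling period are the answer's figures, assumed here as given.
def density_growth(start_year, end_year, doubling_period=3):
    """Density multiplier after (end - start) / period doublings."""
    doublings = (end_year - start_year) / doubling_period
    return 2 ** doublings

# 60 years at one doubling per 3 years = 20 doublings,
# i.e. roughly a million-fold increase in density.
print(f"{density_growth(1956, 2016):,.0f}x denser")
```

Twenty doublings is about a factor of a million, which is why a fingernail-sized card today out-stores a room-sized drive from the 1950s.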

As for websites crashing under load, that’s usually an issue with the code of the site, rather than particular hardware limitations like you’d see in the late 90s. Cloud computing services offer rapid scaling and elasticity, where additional virtual machines are provisioned on a temporary basis to handle spikes in load.
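The scaling idea above can be sketched in a few lines. This is a toy model, not any real cloud provider's API; the per-VM capacity number is made up for illustration:

```python
import math

# Toy model of cloud elasticity: each virtual machine serves up to
# CAPACITY requests per second, and we provision just enough VMs for
# the current load, with a floor of one always-on instance.
CAPACITY = 1000  # requests/sec one VM can handle (assumed figure)

def vms_needed(requests_per_sec, capacity=CAPACITY):
    """Number of VMs to provision for the given load."""
    return max(1, math.ceil(requests_per_sec / capacity))

for load in [200, 1500, 50000, 800]:
    print(f"{load} req/s -> {vms_needed(load)} VM(s)")
```

Real autoscalers add hysteresis and warm-up delays so instances are not constantly spun up and torn down, but the core idea is this simple: capacity follows load instead of being fixed.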

There is a limit to the number of servers that can be built on Earth, because there is a finite amount of material on Earth. But that does not mean we are close to maxing out storage any time soon.

A billion bytes is not a lot of data at all; a gigabyte is a billion bytes. There are 22-terabyte hard drives on the market today. 1 terabyte = 1000 gigabytes, so each one can store 22,000 billion bytes.

The mass of a drive like that is around 2/3 kg, and most of that is aluminum. Earth's crust has a mass of around 2×10^19 tonnes, and it is around 8% aluminum. We extract around 50 million tonnes of aluminum every year.

That means the crust holds roughly 1.6×10^18 tonnes of aluminum, enough at today's extraction rate to last tens of billions of years. That is far longer than the billion or so years left before the sun brightens enough that the surface gets too hot for any life. So if we continue to use aluminum like today, we have enough for the rest of the life of the earth.
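Running the numbers from this answer (crust mass ~2×10^19 tonnes, ~8% aluminum, ~50 million tonnes extracted per year) as a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the aluminum argument above.
# All inputs are the rough figures stated in the answer.
crust_tonnes = 2e19          # approximate mass of Earth's crust
aluminum_fraction = 0.08     # crust is roughly 8% aluminum
extraction_per_year = 50e6   # tonnes of aluminum mined per year

aluminum_tonnes = crust_tonnes * aluminum_fraction   # ~1.6e18 tonnes
years = aluminum_tonnes / extraction_per_year
print(f"{years:.1e} years of extraction at today's rate")  # ~3.2e10
```

Thirty-odd billion years of supply against roughly one billion years of remaining habitability: the material budget is not the constraint.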

There is a limit technically, but we're nowhere near it, not even close. And the thing is, as time goes on data storage gets more efficient. Today thousands of gigabytes can be stored in devices smaller than your palm; in ten years, smaller than your fingertip.

Theoretically yes, practically not really. Old hard drives took up the space of an entire car for only a few megabytes. Now there are drives the size of a fingernail with 1 terabyte of capacity. That's roughly 280,000 times as much storage in something that is a million times smaller.

Please don’t confuse storage space with bandwidth. We’re nowhere near maxing out on either, but it’s important to remember that they are two separate things. Data is stored in storage space; data is transmitted over wires (to a point, but that’s the bottleneck).

Storage space has been increasing forever and is on track to continue to do so for the foreseeable future. There are server farms with storage capacities so large that we had to extend the scale for talking about how much storage they hold, because the language that gave us prefixes like kilo-, mega-, and giga- didn’t have terms that went that far. The largest that I have heard of (which is almost certainly not the largest that exists) is 20 **YOTTAbytes**. I had to look up what a yottabyte was, it’s that big.

But, if you consider that you can have a microSD card that holds 512 GB and is the size of your fingernail, it’s not hard to imagine an entire room filled with those. Let’s say each one is 0.5 mm thick and build one stack in a standard 2.5 m tall room in an ordinary house (yes, the exact measurements are a little off. It’s okay, I’m just making a point). That’s 5,000 cards, or 5,000 × 512 GB of data, about 2,500 TB. And that’s just one stack. Now, with a 4 m wide room and each microSD card being 10 mm across, that’s 400 of those stacks, for a total of about 1,000,000 TB of raw storage. No storage system will ever be that efficient, so lop off 20%, bringing it down to roughly 800,000 TB for one wall of one room in one house. Now imagine an entire shopping mall full of those stacks, and the number quickly begins to boggle the mind. But that’s storage.
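The microSD-wall arithmetic is easy to check from the stated assumptions (512 GB cards, 0.5 mm thick, 10 mm wide, a 2.5 m by 4 m wall, 20% lost to the storage system):

```python
# Checking the microSD-wall arithmetic from the stated assumptions.
CARD_GB = 512                          # capacity per card
cards_per_stack = round(2.5 / 0.0005)  # 2.5 m tall / 0.5 mm thick = 5000
stacks = round(4.0 / 0.010)            # 4 m wide / 10 mm per card = 400

raw_tb = cards_per_stack * stacks * CARD_GB / 1000  # GB -> TB
usable_tb = raw_tb * 0.8                            # lop off 20% overhead
print(f"raw: {raw_tb:,.0f} TB, usable: {usable_tb:,.0f} TB")
```

So one wall of one room comes out to roughly a million terabytes raw, around 800,000 TB after overhead.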
As for transmission, that’s been improving every year too, as we figure out more ways to pack information into data streams and better ways to stack those streams on top of one another, but it’s still the biggest bottleneck in the process. 35 years ago, we struggled with 14,400 bits per second over old-fashioned copper wires. We outgrew that, and modern copper (coaxial) wires can handle as much as 150 Gbps. Internally, specialty copper wiring can handle 1 Tbps or better. And that doesn’t even begin to cross over into what fiber optics can handle.

That being said, companies that provide website hosting services have to divvy up their transmission resources among all the sites they host, and some sites get less than others. And it costs them money to transmit the data. So when you see a website go down from too much traffic, it’s not because the infrastructure can’t handle it. It’s because that site’s payment plan didn’t allocate enough transmission resources for it. If they’d paid a little more, they’d have more bandwidth available for their use, and they’d never go down. Look at YouTube. YouTube hasn’t gone down for lack of capacity in years (if not decades). Why? Because they own their own transmission lines (and a few other reasons, like pre-positioning data closer to where it’s going to be requested, but that’s a topic for another rant).
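To put those two link speeds side by side, here is a quick comparison using the figures quoted above (14,400 bps dial-up versus a 150 Gbps coaxial link), ignoring protocol overhead:

```python
# Rough download-time comparison for a 1 GB file over the two link
# speeds mentioned above. Overhead and latency are ignored.
FILE_BITS = 8 * 1e9  # 1 GB = 8 billion bits

def seconds_to_download(bits, bits_per_sec):
    return bits / bits_per_sec

dialup = seconds_to_download(FILE_BITS, 14_400)   # ~6.4 days
modern = seconds_to_download(FILE_BITS, 150e9)    # ~53 ms
print(f"dial-up: {dialup / 86400:.1f} days, modern: {modern * 1000:.1f} ms")
```

Days versus milliseconds for the same file: that is the scale of improvement packed into "we outgrew that."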
Hope this helps…