A lot of hard drives, distributed around the world and aggregated in large storage pools.
Evidently, the free space displayed in every customer’s account is not the real free space. This is called thin provisioning: you let the software think it has more space than is physically available, but you have mechanisms to plug in more hard drives when you reach certain thresholds.
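In software terms, thin provisioning just means enforcing the per-user quota while tracking real usage against real capacity. Here’s a minimal Python sketch, where all the numbers, names, and the 80% threshold are invented for illustration:

```python
# A minimal sketch of thin provisioning; every number and name here is
# made up. Users are promised far more than the pool physically holds;
# we watch real usage and order more drives near a threshold.

PROMISED_PER_USER_GB = 100          # what every account sees as "free space"
PHYSICAL_CAPACITY_GB = 1_000_000    # what we actually bought
EXPANSION_THRESHOLD = 0.8           # when to rack more drives

class StoragePool:
    def __init__(self) -> None:
        self.used_gb = 0.0

    def write(self, user_usage_gb: float, size_gb: float) -> bool:
        # Only the per-user quota is enforced; the pool itself is
        # massively overcommitted.
        if user_usage_gb + size_gb > PROMISED_PER_USER_GB:
            return False  # the user hit their personal quota
        self.used_gb += size_gb
        if self.used_gb / PHYSICAL_CAPACITY_GB > EXPANSION_THRESHOLD:
            print("Pool utilization high: plug in more hard drives")
        return True
```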
They also use techniques to reduce the space taken up by data, like deduplication and compression, where they store only one copy of identical “parts” (blocks) shared by several files.
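A toy Python sketch of block-level deduplication (the 4 KiB block size and all names are assumptions, not anyone’s real implementation): split every file into blocks, hash each block, and store a given block only once no matter how many files contain it.

```python
import hashlib

BLOCK_SIZE = 4096                    # assumed block size, 4 KiB
block_store: dict[str, bytes] = {}   # content hash -> block data, stored once

def store_file(data: bytes) -> list[str]:
    """Store a file as deduplicated blocks and return the 'recipe'
    (list of block hashes) needed to rebuild it later."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # no-op if block already exists
        recipe.append(digest)
    return recipe

def read_file(recipe: list[str]) -> bytes:
    """Rebuild a file from its block hashes."""
    return b"".join(block_store[digest] for digest in recipe)
```

Two users uploading the same file only cost the pool one copy of its blocks.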
In practice, physically available storage is several orders of magnitude smaller than the sum total of promised storage, and actually used storage is a fraction of that again.
There are lots of different strategies they employ to make this possible, but a lot of it boils down to just hoping that you won’t need most of what’s *technically* available to you. Here’s what some of those strategies might look like:
1) While *theoretically* everybody on earth could sign up and flood Google’s servers with data in a single day, realistically there is a preexisting user base, and a more-or-less steady rate of growth of that user base. So why not just buy enough storage capacity for that much, and then a bit more as a buffer?
2) Promise everybody 100 units of storage. Median usage is 1 unit, and only about 1 in 10,000 users actually crosses 50 units, so you only need to stock a small fraction of what you’ve promised (there’s a back-of-the-envelope version of this after the list). But seeing 100 units makes all our users happy.
3) More often than not, the longer it’s been since a file was last touched by the user, the less likely it is that they’ll need it any time soon. At that point, it might actually be worth the cost in time to squish it with some time-consuming but storage-efficient compression algorithm. Think zip.
4) Along similar lines to (3), why waste good, fast, expensive equipment on files that aren’t likely to be accessed any time soon? Move them over to cheap, high-capacity storage drives. Again, we’ve determined that we’re willing to eat the cost in time to access this stuff in the (very unlikely) scenario that you’ll want this file in the future. The point is, we can use the expensive stuff to store something that somebody else will want to use *now*. (A toy version of (3) and (4) together is sketched after this list.)
5) So while it’s fairly unlikely that required storage will exceed what Google’s data centers have available, it could still happen. It’s quite likely that they’ve got contracts with other data-center-owning types so that in such an event, spillover data goes to their data centers until Google can figure out how to bring things back under control. That might look like buying more storage, or waiting for some data to be compressed.
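To put rough numbers on (2), here’s the back-of-the-envelope version; every figure below is invented:

```python
# Back-of-the-envelope math for strategy (2); all figures are invented.
USERS = 1_000_000
PROMISED_PER_USER = 100           # units promised to every account

# Crude usage model: almost everyone sits at the median (1 unit),
# roughly 1 in 10,000 is a heavy user (assume a worst case of 100 units).
heavy_users = USERS // 10_000
median_users = USERS - heavy_users

promised = USERS * PROMISED_PER_USER
needed = median_users * 1 + heavy_users * 100

print(f"Promised: {promised:,} units")                 # 100,000,000
print(f"Actually needed: {needed:,} units")            # 1,009,900
print(f"Overcommit ratio: {promised / needed:.0f}x")   # ~99x
```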
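And a toy sketch of (3) and (4) combined: files that haven’t been touched for a while get compressed and demoted to a cheaper tier, and reading one pays the cost of decompressing and promoting it back. The 90-day threshold, tier names, and `File` class are all made up for illustration:

```python
import time
import zlib

COLD_AFTER_SECONDS = 90 * 24 * 3600  # assumed: untouched for ~90 days = cold

class File:
    def __init__(self, data: bytes):
        self.data = data
        self.last_access = time.time()
        self.tier = "ssd"        # fast, expensive tier
        self.compressed = False

def tiering_pass(files: list[File]) -> None:
    """Periodic background job: compress cold files and demote them."""
    now = time.time()
    for f in files:
        if f.tier == "ssd" and now - f.last_access > COLD_AFTER_SECONDS:
            # Eat the CPU cost now; it's unlikely anyone reads this soon.
            f.data = zlib.compress(f.data, level=9)
            f.compressed = True
            f.tier = "hdd"       # slow, cheap, high-capacity tier

def read(f: File) -> bytes:
    """Reading a cold file is slower: decompress and promote it back."""
    f.last_access = time.time()
    if f.compressed:
        f.data = zlib.decompress(f.data)
        f.compressed = False
        f.tier = "ssd"
    return f.data
```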
There used to be cloud services offering “unlimited” storage. Then somebody decided to test it: they screen-grabbed the streams of hundreds of cam models and uploaded the recordings 24/7. Soon those services weren’t “unlimited” anymore.
The rest has been answered by others – they build datacenters faster than users fill them.