When you provide a service you typically have multiple environments. You have a development (dev) environment where you can tinker around with prototypes and hacks. You have a test environment which is set up the way it should be, without any of the manual tinkering, so that you have somewhere to try things out before going to production. And then you have the production environment, which is what your paying customers actually use. A typical workflow is to tinker with some new functionality in dev; once you have figured out how to build it, you write down the steps and perform the same changes in test; and once you see that test works fine and that none of your changes depended on something you missed in dev, you apply the changes to production, where mistakes can affect paying customers.
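As a rough sketch of what that looks like in code (the names and connection strings here are made up), a service often picks its database from a per-environment setting, so the same code can run against dev, test, or production:

```python
import os

# Hypothetical connection strings -- real ones would hold credentials and hosts.
DATABASES = {
    "dev": "postgresql://db-dev.internal/app",
    "test": "postgresql://db-test.internal/app",
    "prod": "postgresql://db-prod.internal/app",
}

# The running service picks its database from an environment variable,
# so the same code can be deployed to every environment.
DATABASE_URL = DATABASES[os.environ.get("APP_ENV", "dev")]
print(f"Connecting to {DATABASE_URL}")
```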
Each environment has its own database. The database is where everything permanent is stored: all data about all users and all content. This post is stored somewhere in a big database in Reddit's production environment. Deleting the database means all of that information is lost; without it, the system is back to the state it was in when the service first launched, before any users or content existed.
It is very common to delete the development database. When you are tinkering with something you often break things so badly that the only way forward is to start over from scratch, so you delete the database and begin again. Even the test database is regularly deleted, since a failed test might damage something. In some cases the test database is a copy of the production database so that tests can run on real data; you regularly delete the test database and copy production over to refresh its content. But sometimes people make mistakes, and instead of deleting the dev or test database they end up deleting the production database. You could easily argue that a system where this is an easy mistake to make is badly designed, so in practice it only happens once several errors have been made at various stages. It is rare, but not so rare that it never happens. I have only done it once in ten years, but I have recovered from situations where others have done it a few times.
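To give a feel for how that mistake happens, here is a simplified, made-up "reset" script of the kind people run against dev or test (the paths and environment variable are hypothetical). If the environment setting is accidentally pointed at production and the safety check is missing, the very same script wipes the production database instead:

```python
import os
import sys

# Hypothetical reset script meant for dev/test only.
env = os.environ.get("APP_ENV", "dev")

# Without this guard, running the script with APP_ENV=prod
# deletes the production database.
if env == "prod":
    sys.exit("Refusing to reset the production database.")

db_file = f"/var/data/app-{env}.sqlite3"   # hypothetical file-based database
if os.path.exists(db_file):
    os.remove(db_file)                      # "deleting the database"
    print(f"Deleted {db_file}; {env} now starts from scratch.")
```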
Usually it is not the end of the company, though. There are usually backups of the production database somewhere. The issue is that they tend to sit in an awkward location, so you need to transfer them to the database server and often process them a bit. Parts of the data might be spread across multiple locations, so you need to combine several backups to recover the database. And the backup might not be the latest version of the database but one taken last night, so you lose the changes made in the meantime. Recovering from such a mistake therefore means hours of downtime with lots of people working on the issue, and even then you might not recover fully.
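As a very simplified sketch (the paths and filenames are made up, and real recoveries use dedicated database tools), the recovery boils down to fetching last night's backup from wherever it lives, unpacking it, and putting it back where the database server expects it:

```python
import gzip
import shutil

# Hypothetical recovery sketch: last night's compressed backup sits on a
# separate backup server; it has to be fetched, unpacked, and put back
# in place. Anything written after the backup was taken is lost.
BACKUP = "/mnt/backup-server/nightly/app-prod-latest.sqlite3.gz"   # hypothetical
RESTORE_TO = "/var/data/app-prod.sqlite3"                          # hypothetical

with gzip.open(BACKUP, "rb") as src, open(RESTORE_TO, "wb") as dst:
    shutil.copyfileobj(src, dst)   # in practice this step alone can take hours
print("Restored from last night's backup; changes made since then are gone.")
```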