Why is permanently deleting a file, without any trace of it left on the system, considered to be such a complicated process?


I got the basics. Deleting a file simply tells the system that the sectors it occupied are now available for new data to be written on.

But if you want more secure deletion, why can't you just overwrite everything with zeroes? If the computer, at its core, only works by manipulating zeroes and ones, why is it said that physically destroying the drive the information is stored on is the only secure way to make sure nothing is left on it?

In: Technology

8 Answers

Anonymous 0 Comments

The fastest way to delete a file is to just delete it from the index. It's even less secure when the system goes even further and just marks the index entry as “deleted” but leaves the rest of the index data intact – and this optimization is actually very common. It should be fairly clear why neither of these is especially secure – none of the data itself was actually touched.
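To make that concrete, here's a tiny toy model in Python (nothing like a real filesystem, just the idea of an index sitting next to the data): "deleting" only forgets where the file lives, and the bytes stay put until something happens to reuse them.

```python
# Toy model only: a "disk" of raw bytes plus an index mapping names to
# the byte ranges they occupy. Real filesystems are far more involved.
disk = bytearray(64)        # the pretend storage medium
index = {}                  # file name -> (offset, length)

def write_file(name, data):
    offset = max((o + l for o, l in index.values()), default=0)
    disk[offset:offset + len(data)] = data
    index[name] = (offset, len(data))

def delete_file(name):
    # A typical "delete": forget where the file lives, touch nothing else.
    del index[name]

write_file("secret.txt", b"my password is hunter2")
delete_file("secret.txt")

print("secret.txt" in index)   # False -- the file looks gone
print(bytes(disk[:22]))        # b'my password is hunter2' -- data still there
```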

The more thorough way is to write new values over both the index entry and the actual data. However, even this is imperfect, for reasons that depend on the type of disk (a sketch of this naive overwrite follows the two cases below):

For a magnetic disk – most commonly HDDs, though floppies, magnetic tape, and similar storage have the same issues – there will always be some leakage between the tracks. Basically, the magnetic write head might not perfectly align with the track, and some of the magnetic data will get written slightly off to the side. Additionally, the head will magnetize a slightly wider area than you want. When you write back over the data, the offsets may be slightly different, leaving traces of the old data that could be recovered. This is further complicated if there is physical damage to the disk resulting in bad sectors, which may get remapped automatically – if that happens, the overwrite may not land in the same area of the disk where the data was originally written.

For solid state disks, typically called SSDs or flash, the drive uses a bunch of banks of memory (called sectors) and remaps them on write. This happens because each block of memory can only be written a finite number of times, though that number is quite large for modern memory. The drive's firmware therefore balances the writes (wear leveling) so the entire disk wears out roughly at once, rather than failing in bits and pieces. As such, just writing new data doesn't wipe the old data, and a TRIM/UNMAP command needs to be issued to tell the SSD to mark the old blocks as free. Even then, the SSD doesn't want to immediately erase that data, as doing so reduces the lifespan of the sector. This means that simple attempts to overwrite the data may result in the old data still existing elsewhere on the SSD.
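For illustration, here's roughly what the naive "overwrite it with zeroes" approach from the question looks like at the file level. It's a hedged Python sketch with a made-up function name: it only controls what the operating system sees, so for the reasons above an HDD can keep off-track traces and an SSD may quietly write the zeroes to a different physical block.

```python
import os

def naive_zero_overwrite(path):
    """Overwrite a file's contents with zeroes, then delete it.
    This only addresses the logical file: the drive underneath may still
    keep the original bits (off-track traces on an HDD, remapped blocks
    on an SSD)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())   # push the zeroes out of the OS cache
    os.remove(path)

# Hypothetical usage:
# naive_zero_overwrite("secret.txt")
```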

With each of those, there are various methods that can be used to bypass the problems and optimizations:

For an HDD, you generally overwrite the data a bunch of times, ideally with fresh random bits each time, so that any off-track remnants of the old data get overwritten as well (see the sketch after the SSD case below). This process is inherently statistical, and thus there is no way to fully guarantee a wipe without physically destroying the entire disk.

For an SSD, you can generally issue special erase commands that bypass the drive's avoidance of unneeded writes, but this comes at the cost of disk lifespan. To be safe, you need to run this not only where the data *was* saved, but likely on every free sector of the disk, which means a *lot* of extra writes. Again, there is always some chance that a bit of the data survives whatever method you use, which is why physical destruction is required if you want full security.
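Here's a minimal sketch of that multi-pass random overwrite for an HDD (hedged: file-level only, Python for illustration, function and path names made up; remapped bad sectors are still out of reach, which is part of why destruction remains the only absolute guarantee):

```python
import os

CHUNK = 1024 * 1024  # write in 1 MiB pieces rather than holding the whole file in memory

def multipass_overwrite(path, passes=3):
    """Overwrite a file with fresh random bytes several times, then delete it.
    New randomness on every pass makes it likely that slightly off-track
    remnants of earlier writes get covered too. Remapped bad sectors are
    still out of reach of this file-level approach."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(CHUNK, remaining)
                f.write(os.urandom(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())   # force each pass out to the drive
    os.remove(path)

# Hypothetical usage:
# multipass_overwrite("secret.txt", passes=3)
```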
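And a hedged sketch of the SSD side: rather than overwriting file contents, you ask the drive itself to discard or erase blocks. The example below just shells out to `blkdiscard` (part of util-linux on Linux); the device path is a placeholder, this acts on the whole drive rather than a single file, and the firmware still decides when the flash cells are actually erased.

```python
import subprocess

def discard_whole_device(device):
    """Ask an SSD to discard (TRIM) every block on a device, telling the
    firmware that all of it is free to be erased internally.
    WARNING: this targets the WHOLE device, not a single file, and pointing
    it at the wrong device node destroys that device's data."""
    subprocess.run(["blkdiscard", device], check=True)

# Hypothetical usage (the device name is a placeholder -- triple-check it):
# discard_whole_device("/dev/sdX")

# For a firmware-level wipe, many drives also support ATA Secure Erase
# (commonly issued via hdparm on Linux) or NVMe format/sanitize commands,
# which is closer to the "special erase commands" mentioned above.
```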
