Why is permanently deleting a file, without any trace of it left on the system, considered such a complicated process?


I got the basics. Deleting a file simply tells the system that the sectors it occupied are now available for new data to be written on.

But if you want more secure deletion, why can't you just overwrite everything with zeroes? If the computer system at its core only works by manipulating zeroes and ones, why is it said that physically destroying the drive the information is stored on is the only secure way to make sure nothing is left on it?

In: Technology

8 Answers

Anonymous 0 Comments

If you overwrite the file, it is gone as far as the computer is concerned.

However, the physical medium itself may tell a different story.

If you have magnetic media, a 1 written over a 0 may be distinguishable from a 1 written over a 1.

In extreme cases it might even be possible to look back more than one write cycle.

This is mostly theoretical and would be hard to pull off in practice, but to be safe, there are standards that overwrite disks multiple times with different patterns.

Other media have different properties, but it seems restoring overwritten data is in theory always a possibility.

This is why degaussers and shredders built specifically for destroying tapes and disks exist.

Anonymous 0 Comments

There are a lot of reasons. The easiest answer is that no matter what you do, it is possible that old information exists on your drive without your awareness. The safest way to ensure information stays private is to destroy the drive, because it is foolproof and generally inexpensive.

If your information is valuable enough to keep secret, it’s more valuable than a $200 hard drive.

By simply overwriting the drive, there’s a lot that can go wrong.

For example, computer drives can develop “bad sectors” which can’t be read or overwritten. When this happens, the computer will generally just save information somewhere else. It is possible that your drive still has old information in these bad sectors that could be recovered by a specialist.

For non-solid-state drives, magnetic fields are used to arrange bits (on spinning platters or tape). Even if you overwrite the contents, information may still be partially recoverable from the state of the magnetic field itself.

For solid-state drives, your computer is programmed to spread activity out over the entire drive to prevent wear. This means you might *think* you wrote over bits, but in reality your computer shifted the information around to extend the life of the drive. There’d be no way for you to know without doing a forensic analysis of your drive.

Anonymous 0 Comments

Imagine you have a wall painted white, and you paint a message on it in black. Even if you put on a new coat of white paint, it's still possible to see where the black parts were, because the coat of white paint is not thick enough to hide everything.

With a hard drive (let's use the typical HDD), something similar happens. Zeroes and ones are a specific amount of “magnetic charge”, with some tolerance for variation. But if you overwrite everything with zeroes, the previous charge is not entirely lost: where there was a zero, the charge is low, and where there was a one, the charge is higher (still low enough to be read as a zero, but measurably higher). With the correct equipment, it's possible to retrieve the deleted information.

You would need to overwrite the deleted information with random digits several times.
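
Here's a rough sketch of that idea in Python (the shred_file helper and filename are made up for illustration, and on SSDs or copy-on-write filesystems these writes may never land on the original physical blocks):

```python
import os
import secrets

def shred_file(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # fresh random data each pass
            f.flush()
            os.fsync(f.fileno())                # force it out of the OS cache
    os.remove(path)

shred_file("secret.txt")   # placeholder filename
```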

Anonymous 0 Comments

Think of a hard drive as a hotel. When you delete a file, it's more like checking out of a hotel and having the key card deactivated. The room still exists in the state you left it; you just no longer have access to it, and the hotel knows it's available. Only when they need to use the room again is it actually cleaned and no longer as you'd left it. When the new guest checks in, that's like a new file using the disk space and rendering the old file permanently gone.

Anonymous 0 Comments

Back in the last century I built mil-spec disk storage systems. Some were used to store highly classified data, for fun customers like NSA and SAC. So I learned a lot about secure data destruction.

First, overwriting stored data with 0s is mostly adequate, unless your Bad Guy has serious technical resources. While your computer would read it all as 0, some residual traces of the earlier magnetization are still there, because the “track” where the magnetic data is recorded is a bit wobbly, so the overwrite might not completely cover every bit. Like a bad repaving job on a country road. Reading that leftover info varies from easy to extremely difficult, depending largely on how dense the data is on the disk platter itself.

Another issue is that modern OSes and storage systems often virtualize the disk in some way. This means that if you have a file that looks to the OS like it is in sector 3, it could actually be anywhere at the hardware level. Asking the OS to overwrite sector 3 might have no effect on the low-level hardware. Secure erase requires low-level hardware support.
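
On Linux, that hardware-level support is typically reached through the ATA Secure Erase command, e.g. via hdparm. Roughly like this (the device name and password are placeholders, drives are often “frozen” at boot and need unfreezing first, and this wipes the entire disk):

```python
import subprocess

DEV = "/dev/sdX"   # placeholder: the whole drive to erase
PWD = "p"          # temporary ATA security password, cleared by the erase

# Set a temporary security password, then issue SECURITY ERASE UNIT.
# The drive's own firmware overwrites every sector, including remapped ones.
subprocess.run(["hdparm", "--user-master", "u",
                "--security-set-pass", PWD, DEV], check=True)
subprocess.run(["hdparm", "--user-master", "u",
                "--security-erase", PWD, DEV], check=True)
```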

Back when 240 MB across four 8-inch platters was the norm, it was relatively easy to see the old data. At today's densities it might be impossible. But, while interesting, it's mostly irrelevant because…

All my customers who cared about data erasure specified multiple overwrites, with various bit patterns. Sometimes many, many overwrites, in an effort to remove any trace of the secret data. I wrote a data-erase routine for a 3-letter agency that took 45 minutes to complete, on a disk of less than 1 GB. But even that didn't really matter, because…

They installed a thermite incendiary destruction device in the rack, which took a whole 30 seconds to work. For a different customer, I asked how erase should be done in case the disk needed warranty repair work. They said, “Don't worry. If it breaks, we crush and shred it, then burn the scrap. Just sell us a new one.”

Your tax dollars at work, folks.

Anonymous 0 Comments

It depends on how the information is stored. In a traditional hard drive with spinning platters, the drive writes to the platters by changing the magnetic polarity of a small region, one polarity representing a 1 and the other a 0. It also records everything in a table of contents, so it can refer to that when looking for files. When it deletes something, it just removes the entry from that table of contents, showing the once-occupied space as free. The information itself is still there, not turned into all 0s, and it stays there until new information needs to be written where the old information resides.
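
A toy model of that table of contents in Python (completely made up, nothing like a real on-disk format, but it shows why deleting leaves the bytes behind):

```python
# A "disk" of raw sectors plus the table of contents that indexes them.
disk = [b""] * 8
toc = {"diary.txt": [2, 3]}          # filename -> sectors it occupies
disk[2], disk[3] = b"dear diary,", b"my secrets"

def delete(name: str) -> None:
    toc.pop(name)                    # only the index entry goes away;
                                     # the sectors themselves are untouched

delete("diary.txt")
print(toc)                           # {}  -- the file looks gone
print(disk[2], disk[3])              # the data is still sitting on the "disk"
```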

There is software that can scan every “free” section of the hard drive, rebuild the table of contents, and extract those files, as long as those sectors haven't been overwritten. There are also commands you can run to write all 0s, or random 1s and 0s, to the drive. Once either of these is done, extracting information from the drive becomes exponentially harder.

If the old information was 1001 and the overwritten random digits are 1010, you can painstakingly analyze every sector of the drive and measure how strong the 1s and 0s are to try to extrapolate what was originally there. If you read each sector and find it shows 1.0000, 0.0001, 0.990, and 0.0100, you could make some inferences as to what used to be there. Again, this is an extremely slow, tedious, and expensive process. No one is doing this to hard drives they bought on eBay to reconstruct your old hentai fanfiction. Governments and billion-dollar corporations do this to find secrets on drives that have been improperly disposed of.
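
To make that concrete, here's a sketch of that inference using the readings above (the 0.5 and residue thresholds are purely illustrative; real drives don't expose analog levels like this):

```python
# New data 1010 written over old data 1001, per the example above.
readings = [1.0000, 0.0001, 0.990, 0.0100]   # measured signal per bit

for r in readings:
    current = int(r > 0.5)       # the bit the drive actually reports
    if current == 1:
        prior = int(r > 0.995)   # a full-strength 1 was already a 1
    else:
        prior = int(r > 0.005)   # a 0 with leftover charge used to be a 1
    print(f"level {r:.4f}: reads as {current}, was probably {prior}")
# Recovers 1, 0, 0, 1 -- the old data.
```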

Anonymous 0 Comments

The fastest way to delete a file is to just delete it from the index. It's even less secure when systems go even further and just mark the index entry as “deleted” but leave the rest of the index data intact – and this optimization is actually very common. It should be fairly clear why neither of these is especially secure – none of the data itself was actually touched.

You can write values to the disk to overwrite the index entry and the actual data; however, this is imperfect, with the exact reasons depending on the type of disk:

For a magnetic disk – most commonly HDDs, though floppies, magnetic tape, and similar storage media have the exact same issues – there will always be some leakage between the tracks. Basically, the magnetic write head might not perfectly align with the track, and some of the magnetic data will get written slightly off. Additionally, the write will magnetize slightly more area than intended. When you write back over the data, the offsets may be slightly different, leaving traces of the old data that could be recovered. This is further complicated if there is physical damage to the disk resulting in bad sectors, which may get remapped automatically – if that happens, the overwrite may not land in the same area of the disk where the data was originally written.

For solid state disks, typically called SSDs or flash, the drive uses a bunch of banks of memory (called sectors) and remaps them on write. This happens because each sector can only be written a finite number of times, typically in the thousands for modern flash. The disk's firmware balances the writes so the entire disk fails roughly at once, rather than failing in bits and pieces. As such, just writing new data doesn't wipe the old data, and a TRIM/UNMAP command needs to be issued to tell the SSD to mark the old data as free. Even then, the SSD doesn't want to immediately erase that data, as doing so reduces the lifespan of the sector. This means that simple attempts to overwrite the data may leave it still existing elsewhere on the SSD.
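
A toy sketch of that remapping (an invented wear-leveling layer, far simpler than real firmware, but it shows how an “overwrite” leaves the old bytes in another physical sector):

```python
class ToySSD:
    """Invented wear-leveling sketch: every logical write goes to a
    fresh physical sector; the old sector is only left behind as stale."""
    def __init__(self, sectors: int = 8):
        self.phys = [b""] * sectors   # raw flash sectors
        self.map = {}                 # logical sector -> physical sector
        self.next_free = 0

    def write(self, logical: int, data: bytes) -> None:
        # Remap instead of erasing in place, to spread out wear.
        self.map[logical] = self.next_free
        self.phys[self.next_free] = data
        self.next_free += 1

ssd = ToySSD()
ssd.write(0, b"secret")
ssd.write(0, b"\x00" * 6)            # "overwrite" logical sector 0 with zeroes
print(ssd.phys[ssd.map[0]])          # b'\x00...' -- what the OS reads back
print(ssd.phys[0])                   # b'secret'  -- still physically present
```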

With each of those, there are various methods that can be used to bypass the problems and optimizations:

For an HDD, you generally overwrite the data a bunch of times, ideally with random bits each time, to ensure any off-track remnants get overwritten. This process is essentially probabilistic, and thus there is no true way to fully guarantee a wipe without physically destroying the entire disk.

For an SSD, you can generally issue special erase commands that bypass the avoidance of unneeded writes, but this comes at the cost of disk lifespan. To be safe, you need to run this not only where the data *was* saved, but likely on every free sector of the disk, which means a *lot* of unneeded writes. Again, there is always some chance a bit of the data survives any method you use, thus requiring physical destruction if you want full security.
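
On Linux, one such command is exposed by the blkdiscard utility, which asks the SSD to discard every sector it exports. A sketch (the device path is a placeholder, and whether the controller actually erases stale internal copies is up to its firmware):

```python
import subprocess

DEV = "/dev/sdX"   # placeholder: the SSD to discard

# Tell the SSD every sector it exports is unneeded; the firmware is then
# free to erase them, though stale internal copies may linger until it does.
subprocess.run(["blkdiscard", DEV], check=True)
```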

Anonymous 0 Comments

There are a lot of pieces to this in the responses above. None of them are really wrong, but a more modern answer is that computers take a lot of shortcuts to improve speed and prevent errors, so even the basic question of every possible location some data might be stored in is complex to answer.

As a very simple example, when a file is deleted, the data isn't actually deleted; the space is just marked as available for use, since actually setting the data to zero takes time and doesn't add any value. Your computer's hard drive contains scraps of long-ago deleted files, ready to be given new values.

The hardware does tricks to improve performance as well. SSDs try to spread the usage load out across all parts of their chips (it helps them last longer). If the drive sees one part being written to really regularly, it'll secretly swap out where it's writing so that area doesn't become too worn out, without the OS realizing it. Special tools could recover the values still stored in the old area.

These are just two out of dozens or hundreds of cases where knowing “all the places it's ever been written to” is hard. That's why people go dramatic: it's very difficult to read a melted blob of metal.