why is defragging not really a thing anymore?

I was born in 1973, got my first computer in 1994, defragging was part of regular maintenance. I can’t remember the last time I defragged anything, even though I have several devices with hard drives, including a Windows laptop. Has storage technology changed so much that defragging isn’t necessary anymore? Is it even possible to defrag a smart phone hard drive?

edit to add: I apologize for posting this same question several times, I was getting an error message every time I hit “post”… but from looking around, it seems I’m not the only one having this problem today.

40 Answers

Anonymous 0 Comments

I don’t see any complete answers here, so I’ll give it a try.

Hard drives basically just store a long string of ones and zeroes. To store files on a hard drive, we split this long string up into ranges and say “bytes 0 to 10 000 are file X”. By storing these ranges in a table, we end up with a file system.
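As a toy sketch of that table idea (the file names and sizes here are made up for illustration):

```python
# Toy model of a file table: each file maps to one contiguous byte range.
file_table = {
    "photo.jpg": (0, 10_000),       # bytes 0..9999
    "notes.txt": (10_000, 12_500),  # starts right where the previous file ends
}

def read_file(disk: bytes, name: str) -> bytes:
    """Look up a file's range in the table and slice it out of the disk."""
    start, end = file_table[name]
    return disk[start:end]

disk = bytes(20_000)  # a pretend disk, all zeroes
print(len(read_file(disk, "photo.jpg")))  # 10000
```

Real file systems are far more elaborate, but at their core they're exactly this: a table mapping names to ranges of raw storage.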

However, a problem occurs when we delete a file. If there are other files before and after the file we deleted, we end up with a “hole” in the string of ones and zeroes. Over time, our hard drive will look like a Swiss cheese of holes. At worst, this means we could have a large amount of free space on the drive, but all of it spread out over lots of small holes. In that case, we’d be unable to store a large file, because no single hole is large enough to fit it. This problem is called fragmentation, and it tends to get worse as the drive fills up.

File systems solve this by allowing a file to be split up into multiple ranges. That means a large file that can’t fit in a single hole can simply be split into as many parts as needed to fit the free holes. We just need to keep track of all of these separate ranges in the table.
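Here’s a rough sketch of how that splitting might work, using a simple first-fit strategy (real allocators are much smarter; the hole positions and sizes are invented for the example):

```python
# Split a file of `size` bytes across free "holes", first-fit style.
def allocate(holes, size):
    """Return a list of (start, length) extents covering `size` bytes."""
    extents = []
    for start, length in holes:
        if size == 0:
            break
        take = min(length, size)  # fill this hole as far as it goes
        extents.append((start, take))
        size -= take
    if size > 0:
        raise OSError("disk full")
    return extents

# Free space is fragmented: three holes of 4 KB, 2 KB and 6 KB.
free_holes = [(0, 4096), (8192, 2048), (16384, 6144)]
print(allocate(free_holes, 7000))
# → [(0, 4096), (8192, 2048), (16384, 856)] — the file ends up in three pieces
```

The file is stored fine, but reading it back now means visiting three separate places on the disk instead of one.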

Unfortunately, this can significantly slow down the performance of a hard drive due to additional seeking. While a mechanical hard drive can read hundreds of megabytes per second as long as it’s reading a single block from start to finish, whenever the read head has to move it can take anywhere from 5 to 20 milliseconds, depending on the hard drive model, the physical distance between the blocks on the disk and the disk’s rotation speed. In 10 milliseconds, our hard drive could have read more than a megabyte of data. This means that every time a file is split across different blocks, we pay that additional seek time each time we want to read it.
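The arithmetic behind that “more than a megabyte” claim, using ballpark figures (not from any specific drive model):

```python
# Rough numbers: ~100 MB/s sequential throughput, ~10 ms per head movement.
throughput = 100e6   # bytes per second
seek_time = 0.010    # seconds per seek

# How much data could have been read in the time one seek takes?
wasted_per_seek = throughput * seek_time
print(f"{wasted_per_seek / 1e6:.0f} MB of reading lost per seek")  # 1 MB
```

So every extra fragment costs roughly a megabyte’s worth of reading time, which adds up fast for a heavily fragmented small file.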

Defragmentation is the process of moving files around on the hard drive to eliminate those holes and pack the files more neatly into the available space. While it takes time and causes wear on the hard drive from the additional use, it keeps the drive’s performance up: the OS no longer needs to split files to fit them into holes when writing, and existing files are faster to read because they’re no longer split up either.
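Conceptually, a defragmenter does something like the following: pack every file’s pieces back to back so each file becomes one contiguous range again. (This is a toy sketch that only rewrites the table; a real defragmenter has to physically copy the data too.)

```python
# A toy "defragmenter": lay files out contiguously from the start of the disk.
def defragment(files):
    """files: {name: [(start, length), ...]} -> compacted file table."""
    cursor = 0
    packed = {}
    for name, extents in files.items():
        total = sum(length for _, length in extents)
        packed[name] = [(cursor, total)]  # one contiguous extent per file
        cursor += total
    return packed

fragmented = {"a.bin": [(0, 4096), (8192, 2048)], "b.bin": [(16384, 856)]}
print(defragment(fragmented))
# → {'a.bin': [(0, 6144)], 'b.bin': [(6144, 856)]}
```

After compaction, every file is a single extent and all the free space sits in one big hole at the end, which is exactly the state a defragmented drive is in.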

The reason defragmentation has “disappeared” is twofold. The first reason is that operating systems got better and smarter at doing it automatically in the background. Your OS will automatically defragment hard drives when they aren’t in use to make sure performance stays optimal. It also tries to avoid defragmenting files that are very large in the first place, as the additional seek time cost for a 200 MB file is negligible; if it takes 1000 ms to read 200 MB and a split adds 10 ms, it’s barely any difference. By only defragmenting small files, the defragmentation process becomes much faster.

The second reason is, like many others have mentioned, solid state drives. Solid state drives don’t have a single read head that has to find the data. Instead, it’s functionally as if each block of data had its own personal read head. While a mechanical hard drive can only seek to and read from one place at a time, an SSD can not only “seek” to different locations instantly, it can even read from many different locations simultaneously, with performance limited only by the controller chip and the transfer speed of the cable or slot the SSD is plugged into. As such, the “seek time” of an SSD (which is really just the time it takes to process the read command and start returning data to the OS) is generally less than 0.1 ms, at least a hundred times faster than a mechanical hard drive.

This changes the above equation significantly. If each split only adds 0.1 ms to reading a file, then it’s no longer worth defragmenting unless the file truly is tiny in the first place. And that pretty much never happens: tiny files easily fit into small holes, so they rarely need to be split up at all.
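Putting the two per-fragment penalties side by side, using the ballpark figures from above (~10 ms per extra seek on a mechanical drive, ~0.1 ms on an SSD, against the 1000 ms it takes to read a 200 MB file):

```python
# Per-split overhead as a fraction of a 1000 ms sequential read.
read_time_ms = 1000  # time to read a 200 MB file at ~200 MB/s

for label, seek_ms in [("HDD", 10.0), ("SSD", 0.1)]:
    overhead = seek_ms / read_time_ms * 100
    print(f"{label}: one split adds {seek_ms} ms ({overhead:.2f}% overhead)")
# HDD: one split adds 10.0 ms (1.00% overhead)
# SSD: one split adds 0.1 ms (0.01% overhead)
```

Even on the hard drive a single split in a large file barely registers, and on the SSD it’s a hundred times smaller still, which is why defragmenting an SSD buys you essentially nothing.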

Defragmentation looks even worse when we take the limited endurance of SSDs into account. Flash memory cells can only survive a limited number of writes before they’re no longer able to store data reliably. Defragmentation is a very copy-intensive operation, usually requiring files to be copied to temporary storage and then to their new location. This causes excessive writes for essentially no benefit, so unless fragmentation is truly extreme, defragmentation should be avoided on SSDs.
