Reading and writing compressed data is more complicated. Good compression algorithms remove duplicate data, so reading the data back requires extra work to reconstitute the original, and updating anything in the middle effectively means rewriting the entire file. Less aggressive compression schemes reduce that work, but at the cost of a worse compression ratio.
An application would prefer to just seek to byte 1000, read the next 5 bytes, and update them in place with a different value, rather than read the entire file, change the value, and rewrite the whole thing.
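A minimal sketch of the difference, using Python's standard `gzip` module: on a plain file, `seek()` jumps straight to the offset, while on a gzip stream the same `seek()` call only works by decompressing and discarding every byte before the target offset.

```python
import gzip
import os
import tempfile

# 4 KiB of sample data (contents are arbitrary, just for illustration).
data = bytes(range(256)) * 16

with tempfile.TemporaryDirectory() as tmp:
    plain_path = os.path.join(tmp, "data.bin")
    gz_path = os.path.join(tmp, "data.bin.gz")

    with open(plain_path, "wb") as f:
        f.write(data)
    with gzip.open(gz_path, "wb") as f:
        f.write(data)

    # Plain file: jump directly to byte 1000 and read 5 bytes.
    with open(plain_path, "rb") as f:
        f.seek(1000)
        chunk = f.read(5)

    # Compressed file: seek() is emulated -- Python decompresses
    # bytes 0..999 and throws them away before it can read these 5.
    with gzip.open(gz_path, "rb") as f:
        f.seek(1000)
        gz_chunk = f.read(5)

    assert chunk == gz_chunk == data[1000:1005]

    # Updating in place works on the plain file...
    with open(plain_path, "r+b") as f:
        f.seek(1000)
        f.write(b"\x00" * 5)
    # ...but gzip offers no equivalent: changing 5 bytes of the
    # original data means re-compressing and rewriting the stream.
```

The code above uses gzip only as a convenient example; the same limitation applies to most stream compressors unless the file is deliberately split into independently compressed blocks.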