I have actually implemented file compression technology, so I feel particularly qualified to answer this question.
Software needs to see files whose contents it understands. This is why software authors design file formats optimized for the particular needs of the problem their software solves, whether the files are written documents, audio/video recordings, spreadsheets, executable programs, scripts, etc.
These needs do not necessarily take file storage resources into consideration. So, when a user’s file storage space is filling up, it’s often in their interest to find ways to store **the exact same data** in a **smaller space**. That’s what a data compression file format does: it analyzes the content of a file, identifies *self-similar parts* of that file (that’s important), and recodes the file to reduce its internal redundancy, storing the content in its own, compressed file format that takes up less space, which is the whole point.

Disk storage is not the only place where data compression is useful. Transmitting data over a network takes less bandwidth and/or less time if the data is compressed at one end and decompressed at the other.
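To make the redundancy point concrete, here is a minimal sketch using Python’s standard zlib module (my choice of example; any general-purpose compressor would show the same effect). Highly self-similar input shrinks dramatically, while random bytes of the same length barely shrink at all:

```python
import os
import zlib

# A highly self-similar payload compresses well...
repetitive = b"the quick brown fox " * 1000
# ...while incompressible (random) bytes of the same length do not.
random_bytes = os.urandom(len(repetitive))

for label, data in [("repetitive", repetitive), ("random", random_bytes)]:
    compressed = zlib.compress(data)
    print(f"{label}: {len(data)} bytes -> {len(compressed)} bytes")
```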
Compressing a file, of course, renders the data into a file format that the software which originally understood the file’s contents no longer understands. This is why compressed files are given new filename extensions: even at the File System level, it becomes obvious that the contents of a file are compressed, so no one, human or software, makes the mistake of trying to operate upon that file’s contents as if they were still encoded in the original, inner file format.
Sometimes, this can be handled at the File System level: the software responsible for reading data from and writing data to the actual storage media takes responsibility for compressing the file’s contents on write and decompressing them on read. This has the benefit that the file is stored in its compressed state, consuming less space, while the original software is free to consume the file’s contents, seeing only the file format it expects.
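As a rough illustration of that transparency, here is a hypothetical sketch using Python’s gzip module: the application-level functions deal only in plain text, while compression happens invisibly at the storage boundary, much as a compressing file system would do it (the function names are mine, not any real API):

```python
import gzip

def save_document(path: str, text: str) -> None:
    # Stored compressed on disk; the caller never sees the gzip framing.
    with gzip.open(path, "wt", encoding="utf-8") as f:
        f.write(text)

def load_document(path: str) -> str:
    # Decompressed on read; the caller sees only the original text.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return f.read()

save_document("notes.txt.gz", "meeting notes " * 500)
print(len(load_document("notes.txt.gz")))  # the full original length
```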
Often, software will expect its files to be compressed by external programs, so it can be architected to allow itself to be configured to detect compressed input and transparently pass the file through the appropriate decompressor program before trying to use the file’s contents.
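One common way to detect compressed input is to sniff the file’s leading “magic” bytes. Here is a sketch of such a dispatcher (the helper name and the exact list of formats are my own assumptions; real programs often do something similar):

```python
import bz2
import gzip
import lzma

# Leading "magic" bytes that identify common compressed formats.
MAGIC = [
    (b"\x1f\x8b", gzip.open),        # gzip
    (b"BZh", bz2.open),              # bzip2
    (b"\xfd7zXZ\x00", lzma.open),    # xz
]

def open_maybe_compressed(path: str):
    with open(path, "rb") as f:
        header = f.read(6)
    for magic, opener in MAGIC:
        if header.startswith(magic):
            return opener(path, "rb")  # transparently decompress
    return open(path, "rb")            # plain file, pass through as-is
```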
Because one of the goals of compression is to reduce the redundancy of the encoded data, the compressed result has little redundancy left, so it’s not possible to compress already compressed data to make the file even smaller. In fact, trying to compress already compressed data will often result in a doubly compressed file that’s larger than the singly compressed one, due to the compression file format’s metadata overhead, among other factors. This is often true even when two different compression schemes are used in tandem, not just when the same scheme is reapplied multiple times.
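You can see the effect by compressing twice with zlib (again my assumed choice of compressor; the exact numbers will vary, but the pattern is typical):

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 500

once = zlib.compress(data)
twice = zlib.compress(once)  # compressing already-compressed bytes

print(len(data), len(once), len(twice))
# The second pass typically finds almost no redundancy left and can
# only add its own framing overhead, so `twice` comes out larger.
```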
Some file formats, for example audio/video recordings, are already encoded in some manner of compressed form. These are often “lossy” compression standards, such as JPEG or MP3, that explicitly throw away some data in order to make the image or video or audio appear identical when consumed by a human, while also rendering the data into a form that is more amenable to compression. It’s fine to recode a “lossless” audio file to a lossy one if the human ear cannot tell the difference between playback of the lossy and the lossless encodings. Other data types, for instance executable program code, are not amenable to lossy compression, since changing the details of the instructions in the program would likely be fatal to the execution of the resulting compressed-then-decompressed program.
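As a toy illustration of why discarding detail helps, consider coarsely quantizing samples by dropping their low-order bits. The quantized stream carries less entropy and compresses better, but the discarded bits are gone for good. (Real codecs like MP3 use psychoacoustic models and are vastly more sophisticated; this sketch, with zlib standing in for the final compression stage, only shows the principle.)

```python
import os
import zlib

samples = os.urandom(10_000)                  # stand-in for noisy audio
quantized = bytes(b & 0xF0 for b in samples)  # keep only the top 4 bits

print(len(zlib.compress(samples)))    # barely shrinks: full 8-bit entropy
print(len(zlib.compress(quantized)))  # noticeably smaller: 4 bits/sample
```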
For lossless data compression schemes, by contrast, it is paramount that the round trip of original data → (compression) → compressed data → (decompression) → uncompressed data guarantees that the original data and the uncompressed data are bit-for-bit identical.
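That guarantee is easy to state in code. With zlib (assumed again), the round trip must reproduce any input exactly:

```python
import zlib

original = bytes(range(256)) * 100  # any byte sequence at all

round_tripped = zlib.decompress(zlib.compress(original))
assert round_tripped == original  # bit-for-bit identical, always
```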
There are many different compression schemes at work in the world, each doing what it does in slightly different ways. It is impossible to create a single compression scheme that works equally well on all kinds of data. The compression scheme at work in MP3 files is so specialized that it was long covered by patents held by the Fraunhofer Institute. However, as adept as the MP3 compression scheme is at compressing audio data, it would not work nearly as well for spreadsheets or written documents. Likewise, the kinds of compression schemes that work well on written documents would work very poorly for video streams. The diverse needs of different types of data, and the continual research and development of computer algorithms, ensure that there will always be a new file compression extension to learn sooner rather than later.