Computers need data to be structured and aligned in fixed-size units (say 32 or 64 bits) and must be able to access it in any order (random access). Compression algorithms, by contrast, only compress effectively when they work over long chunks of data, and their output looks like random gibberish: the original structure is gone, and the layout of the compressed bytes depends entirely on the data being compressed. Because of that, there is no general rule a CPU could use to locate or interpret a particular value inside compressed data, so current computer architectures cannot random-access it.
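A minimal sketch of why that is, using Python's standard `zlib` (the record names and sizes are just an illustrative example): with fixed-length records, record *i* always starts at byte `i * RECORD_SIZE`, so you can jump straight to it; after compression, that formula no longer points anywhere meaningful, and you have to decompress from the start to find the record again.

```python
import zlib

# Fixed-length records: record i starts at byte i * RECORD_SIZE,
# so the CPU can jump directly to any record (random access).
RECORD_SIZE = 8
records = [f"rec{i:05d}".encode() for i in range(1000)]  # 8 bytes each
data = b"".join(records)
assert data[42 * RECORD_SIZE:43 * RECORD_SIZE] == b"rec00042"

# After compression, record boundaries are gone: there is no formula
# mapping "record 42" to an offset in the compressed bytes.
compressed = zlib.compress(data)
print(len(data), len(compressed))  # compressed output is shorter

# To read record 42, you must decompress from the beginning first:
plain = zlib.decompress(compressed)
assert plain[42 * RECORD_SIZE:43 * RECORD_SIZE] == b"rec00042"
```

The only general workaround is exactly this: decompress (all or in indexed blocks) back into a fixed-layout form before accessing it.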