The files you describe are usually compressed. Compression works by looking for patterns in the data and then storing descriptions of those patterns instead of the data itself, because the descriptions are smaller. So how much a file compresses depends on how many patterns can be found during compression. Some files have more patterns to find than others, some compressors are given more time and resources to hunt for those patterns than others, and sometimes the compressor is asked to keep the file mostly usable even if parts of it get damaged, which means it can't exploit every pattern it finds.
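To make that concrete, here's a minimal sketch using Python's built-in zlib compressor: the same amount of data shrinks dramatically when it's full of patterns and barely at all when it's random. The exact ratios are illustrative and depend on the compressor and its settings.

```python
import os
import zlib

repetitive = b"ABCD" * 25_000      # 100,000 bytes full of an obvious pattern
random_ish = os.urandom(100_000)   # 100,000 bytes with essentially no patterns

for name, data in [("repetitive", repetitive), ("random", random_ish)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data):,} -> {len(compressed):,} bytes "
          f"({len(compressed) / len(data):.1%} of original)")

# Typical result: the repetitive data shrinks to a fraction of a percent of
# its size, while the random data barely shrinks (it may even grow slightly).
```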
It depends on how much compression you can get out of the source material.
If I have an hour-long video of a black screen, I can compress it far more than most other videos.
There's also the question of how much loss is acceptable. Just because a video is 4K doesn't necessarily mean that every pixel was preserved perfectly. You could have algorithms that are 99% accurate in reconstructing the original from the compressed data but take a tenth of the space, so it's a loss you're willing to take.
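As a toy illustration of that tradeoff (not how any real codec works), here's a sketch that stores 8-bit brightness values at 4-bit precision: half the space, in exchange for a small rounding error on reconstruction. The quantization scheme is made up purely for illustration.

```python
# Toy lossy compression: keep only the top 4 bits of each 8-bit sample.
samples = list(range(0, 256, 3))               # pretend pixel brightness values

quantized = [s >> 4 for s in samples]          # lossy step: 8 bits -> 4 bits
restored = [(q << 4) + 8 for q in quantized]   # best guess at the original

max_error = max(abs(a - b) for a, b in zip(samples, restored))
print(f"worst-case error: {max_error} out of 255")   # prints 8, about 3%
```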
Media encoding includes various forms of compression. Media with more variation and fine detail needs more data to describe it. A simple (but reductive) example is a video file that shows a single frame for 30 seconds. You don't need 30 seconds of frames (1,800 at 60 fps) to describe that video; you need one frame, the one you want to see, plus an instruction that says "don't change this frame for 30 seconds."
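Here's a rough sketch of that idea in Python, essentially run-length encoding applied to whole frames. The names and structure are made up for illustration; real codecs signal repeated content far more compactly.

```python
def encode(frames):
    """Collapse runs of identical frames into [frame, count] pairs."""
    runs = []
    for frame in frames:
        if runs and runs[-1][0] == frame:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([frame, 1])   # start a new run
    return runs

def decode(runs):
    """Expand [frame, count] pairs back into the full frame list."""
    return [frame for frame, count in runs for _ in range(count)]

video = ["black_frame"] * 1800        # 30 seconds at 60 fps, all identical
encoded = encode(video)
print(encoded)                        # [['black_frame', 1800]] -- one entry
assert decode(encoded) == video       # lossless round trip
```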