Why do audio and video take so much storage?


So for example, video games are super heavy on disk space. And apparently most of that space is just the sounds, textures, models, etc. But the code takes very little space.

Why can something as complex as a physics system weigh less than a bunch of images?

Code takes very little space, media takes more. But an image is just code that tells the computer how to draw something (I think)

So how come some code gets to be so small in size and some code doesn’t?


18 Answers

Anonymous 0 Comments

I’ll start in reverse order:

Text (code) doesn’t take up a lot of space, compressed or uncompressed, because there isn’t a lot of detail that needs to be kept. Compressed text especially: during compression you can replace repeated patterns with single characters, say “wherever I see axy, change it to p,” and during decompression you know that p = axy. (Real compressors tend to use special codes rather than plain letters, but that’s the idea.)
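
If it helps to see it, here’s a toy sketch of that idea in Python. A real compressor (like the ones behind zip or gzip) finds the repeated patterns automatically and encodes them far more cleverly, but the principle is the same:

```
# Toy pattern substitution: swap a repeated pattern for a single unused
# character and remember the mapping so it can be reversed later.
def compress(text, pattern, placeholder):
    assert placeholder not in text            # the placeholder must be unused
    return text.replace(pattern, placeholder), {placeholder: pattern}

def decompress(text, mapping):
    for placeholder, pattern in mapping.items():
        text = text.replace(placeholder, pattern)
    return text

original = "axy hello axy world axy"
packed, mapping = compress(original, "axy", "p")   # "p hello p world p"
assert decompress(packed, mapping) == original
print(len(original), "->", len(packed))            # 23 -> 17 characters
```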

Before code can be executed it needs to be compiled down into instructions that the computer understands. Compilers do an amazing job of optimizing the number of instructions needed.
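
This is a big part of why a physics system can weigh so little: code is a short recipe, while media is the explicit data. Here’s a rough Python sketch of that contrast (the tiny “physics system” and the numbers are made up purely for illustration):

```
import inspect

# A minimal "physics system": a few lines describe how gravity updates
# an object's position on every frame.
def step(pos, vel, dt=1/60, gravity=-9.81):
    vel += gravity * dt
    pos += vel * dt
    return pos, vel

rule_size = len(inspect.getsource(step))   # the whole rule is only ~100 bytes of text

# One uncompressed 1920x1080 frame showing the result needs
# width * height * 3 colour bytes:
frame_size = 1920 * 1080 * 3               # ~6.2 million bytes

print(rule_size, "bytes of code vs", frame_size, "bytes for a single frame")
```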

So why do audio and video take up a lot of space? Well that depends.

For clarification: videos store information about frames, where each frame records what the individual pixels must look like to recreate the scene.

The higher the resolution, the more pixels are used for the same scene, meaning more data needs to be stored.
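
Some rough back-of-the-envelope numbers in Python, assuming completely uncompressed video with 3 bytes (8-bit RGB) per pixel at 30 frames per second (real files are far smaller thanks to compression):

```
# Raw video size = width * height * bytes per pixel * frames per second * seconds
def raw_video_bytes(width, height, fps=30, seconds=60, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps * seconds

for name, w, h in [("480p", 854, 480), ("1080p", 1920, 1080), ("4K", 3840, 2160)]:
    gb = raw_video_bytes(w, h) / 1e9
    print(f"{name}: ~{gb:.1f} GB per minute, uncompressed")
```

That works out to roughly 2 GB, 11 GB and 45 GB per minute respectively, which is why compression is not optional.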

Audio stores information about the signal that must be reproduced to create a sound. The higher the quality, the more samples of that signal are stored per second, OR the more levels each sample can take. (That’s a whole separate topic that would make this too long if I went into detail.)
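
Same back-of-the-envelope idea for audio, assuming uncompressed CD-style sound (44,100 samples per second, 16 bits per sample, 2 channels):

```
# Raw audio size = sample rate * bytes per sample * channels * seconds.
# "More samples" = a higher sample rate; "more levels" = a higher bit depth.
def raw_audio_bytes(sample_rate=44_100, bit_depth=16, channels=2, seconds=60):
    return sample_rate * (bit_depth // 8) * channels * seconds

print(f"~{raw_audio_bytes() / 1e6:.1f} MB per minute, uncompressed")   # ~10.6 MB
```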

The files used to store them are just large instruction sets that essentially say, “if you want to recreate me, these are the pixel/signal values you need to use.” The higher the quality, the more instructions each file contains. The program you use to open them is the one with the code to interpret those instructions.
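
A concrete (if old-fashioned) example of such a file is the plain PPM image format, which is literally a small header followed by the colour of every pixel written out one after another; any viewer that understands PPM reads those numbers back and draws them:

```
# Write a tiny 2x2 image in the plain PPM format: a short header, then
# one red-green-blue triple per pixel for the viewer to interpret.
width, height = 2, 2
pixels = [
    (255, 0, 0), (0, 255, 0),        # top row: red, green
    (0, 0, 255), (255, 255, 255),    # bottom row: blue, white
]

with open("tiny.ppm", "w") as f:
    f.write(f"P3\n{width} {height}\n255\n")   # magic number, size, max value
    for r, g, b in pixels:
        f.write(f"{r} {g} {b}\n")
```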

Now that we’ve talked about what the instructions say, we can discuss why low- and high-quality files differ so much in size. And it’s as simple as: the amount of detail.

High-quality files are big because they store a tremendous amount of data about what was recorded, and they tend to use lossless (or near-lossless) compression, meaning the difference between the original recording and the decompressed file should be minimal.

Low-quality images, on the other hand, tend to use lossy compression: they can sacrifice a good amount of data and still get their point across (i.e. the fine details don’t matter).
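
Here’s a crude sketch of that difference in Python, using zlib as a stand-in for a real codec (actual lossy codecs like JPEG or MP3 are far smarter about which detail to throw away):

```
import random
import zlib

# 10,000 brightness readings full of fine random detail.
random.seed(1)
detail = bytes(random.randint(0, 255) for _ in range(10_000))

# Lossless: the decompressed data is bit-for-bit identical to the original,
# so none of that detail can be thrown away to save space.
assert zlib.decompress(zlib.compress(detail)) == detail

# Lossy (very crudely): keep only the top 4 bits of each reading, then
# compress. The round trip is no longer exact, but the result is much smaller.
coarse = bytes(b & 0b11110000 for b in detail)
print("lossless:", len(zlib.compress(detail)), "bytes  vs  lossy:",
      len(zlib.compress(coarse)), "bytes")
```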

Uncompressed raw video takes up a ton of space because the amount of detail it initially records is staggering. Say you had a grey table that, to the naked eye, seems entirely uniform in color.

When recording this table, the camera may produce a file in which every individual pixel on the table is a slightly different shade of grey. That information may be technically accurate, but from a player-experience standpoint it’s entirely useless, because, as mentioned, the differences are completely indiscernible.

So rather than storing all those slightly different colors, the file that is sent to players just has all of those pixels set to the same color. In other words, we keep the detail that players can actually perceive, but minimize the data spent on details they can’t.
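
Here’s that grey-table example as a Python sketch, again with zlib standing in for a real video codec (made-up numbers: a 100x100 patch, one grey byte per pixel):

```
import random
import zlib

# The camera's version: every pixel is grey, but with tiny random
# variations no viewer could ever notice.
random.seed(0)
noisy_table = bytes(128 + random.randint(-2, 2) for _ in range(100 * 100))

# The flattened version that ships to players: every pixel set to
# the one grey colour people actually perceive.
flat_table = bytes([128]) * (100 * 100)

print("noisy grey compresses to:", len(zlib.compress(noisy_table)), "bytes")
print("flat grey compresses to: ", len(zlib.compress(flat_table)), "bytes")
```

The imperceptible noise is what keeps the first version big; throw it away and the same-looking image compresses down to almost nothing.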
