Aside from all the comparisons of file sizes, a video game’s code and an image aren’t even close to the same thing.
When you play a level of Mario, the level's data file is basically just a grid of identifiers saying which tile goes in which position. So the game loads the tiny brick sprite – built from 8×8-pixel tiles – once, and just stamps it over and over on the screen, hundreds of times.
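If you've never seen how a tile map works, here's a rough sketch in Python (the tile IDs, level layout, and sprite data are all made up for illustration, not anything from an actual Mario ROM):

```python
# A minimal sketch of tile-based rendering: the level is just a grid of tile
# IDs, and one small sprite gets stamped wherever its ID appears.

BRICK, SKY = 1, 0

# Hypothetical level data: each number says which tile goes in which position.
level = [
    [SKY,   SKY,   SKY,   SKY  ],
    [BRICK, BRICK, BRICK, BRICK],
]

# One tiny sprite per tile ID (just 2x2 "pixels" here to keep it readable).
sprites = {
    SKY:   [[0, 0], [0, 0]],
    BRICK: [[3, 1], [1, 3]],
}

def draw_level(level, sprites, tile_size=2):
    """Stamp the same small sprite over and over, once per tile in the grid."""
    screen = []
    for row in level:
        for y in range(tile_size):
            line = []
            for tile_id in row:
                line.extend(sprites[tile_id][y])
            screen.append(line)
    return screen

for line in draw_level(level, sprites):
    print(line)
```

The point is that the only per-level data is that little grid of numbers; the brick graphic itself is stored once and shared.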
When you go to a different level, it loads the exact same brick image and just palette-swaps it. The sprite itself only stores a couple of bits per pixel – an index into a palette – with no actual color data of its own, which is why you can have levels with red bricks, blue bricks, grey bricks, green bricks, etc. Same with the clouds and the bushes: same sprite, but the bush is placed lower down in the background and colored from the green palette.
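Here's a tiny sketch of what that palette swap amounts to – the sprite and palettes below are invented, but the idea is the same: the sprite holds small indices, and the palette decides what color each index becomes at draw time:

```python
# Palette swapping: the sprite stores only palette indices (a couple of bits
# per pixel). Swap the palette and the exact same sprite comes out red, blue,
# green, etc. All numbers below are made up for illustration.

brick_sprite = [
    [1, 2, 2, 1],
    [2, 3, 3, 2],
    [2, 3, 3, 2],
    [1, 2, 2, 1],
]

# Each palette maps index -> an (R, G, B) color. Index 0 is the background color.
red_palette  = {0: (0, 0, 0), 1: (60, 0, 0), 2: (160, 40, 20), 3: (230, 90, 60)}
blue_palette = {0: (0, 0, 0), 1: (0, 0, 60), 2: (20, 40, 160), 3: (60, 90, 230)}

def render(sprite, palette):
    """Turn palette indices into actual colors only when drawing."""
    return [[palette[i] for i in row] for row in sprite]

red_brick  = render(brick_sprite, red_palette)   # same sprite data...
blue_brick = render(brick_sprite, blue_palette)  # ...different colors on screen

print(red_brick[0])   # [(60, 0, 0), (160, 40, 20), (160, 40, 20), (60, 0, 0)]
```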
Consoles were built to render things this way, because shaders didn't exist yet and memory was scarce. So games leaned on tricks and heavy reuse of assets to work within what was available.
A JPEG, on the other hand, is a full-sized image of the entire level (not just what's on screen at any moment while you play), and it stores color data – roughly 24 bits per pixel before compression – for every single pixel. So those thousands of bricks? The data may be nearly identical from brick to brick, but the image still has to describe every single pixel of every single one of them.
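A rough back-of-envelope comparison (with made-up but plausible numbers) shows how big that gap is – the tile map needs only one small number per tile, while a raw image needs three bytes for every pixel before a JPEG even starts compressing it:

```python
# Rough size comparison: a level stored as tile IDs vs. the same area stored
# as raw 24-bit pixels. Numbers are illustrative, not from a real game.

tile_size   = 8            # pixels per tile side
level_tiles = 256 * 30     # a long side-scrolling level: 256 tiles wide, 30 tall

# Tile-map approach: one byte per tile ID (sprite graphics shared across levels).
tilemap_bytes = level_tiles * 1

# Full-image approach: every pixel gets 3 bytes (24-bit color), before compression.
pixel_count  = level_tiles * tile_size * tile_size
bitmap_bytes = pixel_count * 3

print(f"tile map : {tilemap_bytes:>9,} bytes")   # ~7,680 bytes
print(f"raw image: {bitmap_bytes:>9,} bytes")    # ~1,474,560 bytes
```

JPEG compression shrinks that raw image a lot, but it still has to start from the color of every pixel, which is why it ends up so much bigger than the level data.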
To put it another way, think of the game’s code as letters of the alphabet. You can construct ANYTHING out of them, but there are only 26 of them. The JPEG is an encyclopedia.