How is it that two different images, but same resolution, can have different storage size?

5 Answers

Anonymous 0 Comments

Compression is crucial, but don’t forget bit depth: the number of bits used to represent the color of each pixel. Bit-depth differences come up most often in the post-production and photography worlds, before an image is compressed into a deliverable product.

An uncompressed 8-bit image takes one third the space of an uncompressed 24-bit image (8 bits each for R, G, and B) at the same resolution. Some image formats, such as TIFF, also support an alpha (mask) channel, which adds another 8 bits of data per pixel.

So your file size can vary greatly depending on the number of bits used to describe the color of each pixel, well before you ever compress the image (see the other replies).

Many modern cameras can deliver images or video frames with 10 bits per channel, which works out to 30 bits per pixel uncompressed.
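
As a rough illustration of that arithmetic, here is a minimal sketch (it assumes raw pixel data with no compression, metadata, or row padding):

```python
def uncompressed_size_bytes(width, height, bits_per_pixel):
    """Raw image size is just resolution times bit depth, converted to bytes."""
    return width * height * bits_per_pixel // 8

# The same 1920x1080 resolution at different bit depths:
print(uncompressed_size_bytes(1920, 1080, 8))   # 8-bit grayscale:      2,073,600 bytes (~2.1 MB)
print(uncompressed_size_bytes(1920, 1080, 24))  # 24-bit RGB:           6,220,800 bytes (~6.2 MB)
print(uncompressed_size_bytes(1920, 1080, 32))  # RGB plus 8-bit alpha: 8,294,400 bytes (~8.3 MB)
print(uncompressed_size_bytes(1920, 1080, 30))  # 10 bits per channel:  7,776,000 bytes (~7.8 MB)
```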

Anonymous 0 Comments

Images are usually stored using a compression method so that they take up less space. Methods such as JPEG work, at heart, by exploiting the fact that neighbouring pixels are often the same or very similar. So say that you have 100 neighbouring red pixels that are all identical: the file then stores something like “100 × red” instead of “red, red, red, red, red, red, red, …”.

This means that images with large areas that look the same compress better, and so can be stored at a smaller size.
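
A toy version of that idea in code is run-length encoding (a minimal sketch of the principle, not of JPEG’s actual internals):

```python
def run_length_encode(pixels):
    """Collapse runs of identical neighbouring pixels into (count, value) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1        # same as the previous pixel: extend the run
        else:
            runs.append([1, p])     # different pixel: start a new run
    return [(count, value) for count, value in runs]

row = ["red"] * 100 + ["blue"] * 3
print(run_length_encode(row))  # [(100, 'red'), (3, 'blue')]: 2 entries instead of 103
```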

Anonymous 0 Comments

Compression.

If an image is, for instance, pure white, you don’t need a huge file that goes:

pixel 1: white
pixel 2: white
…
pixel 1000000: white

You just write an image that effectively says:

pixels 1 to 1000000: white

In practice, compression gets a good deal more complicated than that, but that’s the basic principle. And of course the savings depend on the content: with the scheme above, if every pixel were a different color, the compression would be ineffective.
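
You can watch this play out with any general-purpose compressor. Here is a quick sketch using Python’s built-in zlib (exact byte counts vary by compressor and settings, so treat them as illustrative):

```python
import random
import zlib

# Two "images" at the same resolution: one million one-byte pixels each.
pure_white = bytes([255]) * 1_000_000
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1_000_000))

print(len(zlib.compress(pure_white)))  # around 1 KB: "pixels 1 to 1000000: white"
print(len(zlib.compress(noise)))       # around 1 MB: every pixel differs, almost no savings
```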

Anonymous 0 Comments

Digital compression algorithms!
There are a bunch of different ways to do it. A common method is to store only the differences.

Let’s say you take a picture of a cloud in the sky. There is a lot of white and a lot of blue, and only a handful of distinct colors appear at all, so the file doesn’t need to describe each pixel’s full color individually.

Additionally, the same blue and the same white are reused, so we can just record where the blue is and where the white is, rather than: blue here, white here, blue here, blue here, …
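
Here is a minimal sketch of that “store the differences” idea (delta coding) on a made-up row of brightness values. Neighbouring sky pixels are close in value, so most stored differences are tiny numbers, which a later compression stage can pack very cheaply:

```python
def delta_encode(values):
    """Keep the first value, then store only the change from each pixel to the next."""
    deltas = [values[0]]
    for prev, cur in zip(values, values[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Rebuild the original values by accumulating the differences."""
    values = [deltas[0]]
    for d in deltas[1:]:
        values.append(values[-1] + d)
    return values

sky = [200, 200, 201, 201, 202, 230, 230, 230]  # smooth blue, then a cloud edge
print(delta_encode(sky))                        # [200, 0, 1, 0, 1, 28, 0, 0]
assert delta_decode(delta_encode(sky)) == sky   # lossless round trip
```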

Anonymous 0 Comments

Because images are compressed. If you convert two same-resolution images to an uncompressed format such as PPM, they will occupy almost the same space (I’ll get to that small difference later).

For example, if an image has 20 blue pixels in a row, it’s better to store “20 × blue” than “blue” 20 times, so most image formats do something similar to this (along with other, more complex methods) to store the image. Videos do this too: most frames only store the changes from the previous frame. That’s why, when you’re watching a video, there will sometimes suddenly be a lot of green or another funny color that gets corrected as people walk around the scene; a reference frame was lost, so the stored changes are being applied to the wrong starting picture.

Why don’t the PPM files have the exact same size? Simply because, in text (ASCII) mode, a 0 occupies less space (1 character) than a 255 (3 characters); in binary mode they are the same, at 1 byte each.
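
A quick sketch of that difference in the ASCII (P3) flavour of PPM, where every channel value is written out as decimal text (the counts below ignore the header and pixel separators, so they are rough):

```python
# In ASCII PPM, a pixel is three decimal numbers written as text.
black_pixel = "0 0 0"        # 5 characters
white_pixel = "255 255 255"  # 11 characters

width, height = 1000, 1000
print(len(black_pixel) * width * height)  # 5,000,000 chars for an all-black image
print(len(white_pixel) * width * height)  # 11,000,000 chars for an all-white one
print(3 * width * height)                 # binary (P6) PPM: 3 bytes per pixel either way
```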