ELI5: How do some pictures have such massive file sizes?


Regular pictures taken with phones range from about 10 KB to a couple hundred MB, but today I saw a post about a picture of brain tissue with a file size in the petabytes. I also know that some NASA pictures are very large (dunno why), but even that makes more sense than a zoomed-in pic of brain tissue. Obviously the instruments used are different, but why such a large difference?

In: Technology

6 Answers

Anonymous 0 Comments

The post:
https://www.reddit.com/r/Damnthatsinteresting/s/GD4ryXSe4w

Anonymous 0 Comments

In this case it’s not a *picture* that is 1.4 petabytes. It’s a bunch of **3d data** at insane resolution.
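For a sense of scale, here's a rough back-of-envelope in Python. The 1.4 petabyte figure is from the post above; the 1-byte-per-voxel assumption is purely mine for illustration (the real dataset may store more or less per voxel, plus compression):

```python
# Back-of-envelope: how many voxels could 1.4 petabytes hold,
# assuming (hypothetically) just 1 byte of data per voxel?
total_bytes = 1.4 * 10**15           # 1.4 PB

voxels = total_bytes                 # at 1 byte per voxel
side = round(voxels ** (1 / 3))      # side length of an equivalent cube

print(f"{voxels:.2e} voxels")        # 1.40e+15 voxels
print(f"~{side:,} voxels per side")  # a cube roughly ~111,869 voxels on a side
```

Even under that generous assumption, you'd need a cube over a hundred thousand elements on a side, which no single camera exposure comes close to.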

Anonymous 0 Comments

A comment from that post's OP explains the process behind it:
https://www.reddit.com/r/Damnthatsinteresting/s/rPLxVf99RQ

Anonymous 0 Comments

You're lumping a bunch of different things together as "picture". It's like asking why some houses are so large while counting the Empire State Building as a "house" because there are some apartments in it.

Pictures taken by cameras are constrained by how many pixel circuits you can fit on a sensor chip. But images can be composited from many separate camera exposures; the "panorama" feature of your smartphone is a familiar example.

The brain image and the NASA images are examples of this: many images were collected, combined, colored in an interesting way, and that's the giant file you're looking at.

Anonymous 0 Comments

At its very simplest, an image needs to store the three color values that make up each pixel. In a standard 8-bit-per-channel color space, that takes 24 bits, i.e. 3 bytes, per pixel. A 4K image has 3840 × 2160 = 8,294,400 pixels, so uncompressed it would require 24,883,200 bytes ≈ 24.9 megabytes. There are various compression techniques to reduce the size, and some kinds of images compress much better than others, but that's your basic math.
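A minimal sketch of that arithmetic (standard 4K UHD dimensions, 3 bytes per pixel, no compression):

```python
# Raw (uncompressed) size of an image: pixel count x bytes per pixel.
width, height = 3840, 2160        # 4K UHD
bytes_per_pixel = 3               # 8-bit R, G, B channels

pixels = width * height                 # 8,294,400
raw_bytes = pixels * bytes_per_pixel    # 24,883,200

print(f"{pixels:,} pixels -> {raw_bytes / 1e6:.2f} MB uncompressed")
# 8,294,400 pixels -> 24.88 MB uncompressed
```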

Anonymous 0 Comments

1. It depends on the image format: compression and the amount of data represented by each pixel matter a lot, and that per-pixel size is your base unit. With no compression, an RGB pixel without transparency, at 256 possible values per channel, requires at least 24 bits (3 bytes).

2. A 2d image's size scales as the product of its dimensions. 50×50 = 2,500 pixels. 100×100 = 10,000 pixels. 1920×780 = 1,497,600 pixels (>4 MB uncompressed).

3. If the example is a 3d image, that element count gets *another* multiplier. 50×50×50 = 125,000 voxels (3d pixels). 100×100×100 = 1,000,000 voxels. 1920×780×780 = 1,168,128,000 voxels (>3 GB; see the sketch after this list).

4. The above is an example of a more normal value of an image that isn’t even in the high end of what consumer-facing cameras are capable of (many can hit 3000+px in a dimension), and often for images of this level of detail, something like 1920px is *extremely* low. In order to get well resolved details of fine structures, you will want *multiples* higher, especially since they are looking at fine structures of cells across an area as comparatively massive as a millimeter (cells are at the micrometer scale). Once again, this is without compression and assuming 3-byte RGB (it’s possible they use other formats), but you get the idea of how this is scaling I’m sure.