Eli5: why when you take pictures with your phone (for example, of a beautiful sunset) the colours and proportions don’t look as good as with the human eye?


In: 159
The lens in your phone is different from the lens in your eye, and the eye is far more sophisticated than most cameras, especially the small ones in phones. Contrast and color captured through a phone camera are never going to look as good as what you see with your own eyes.

Your organic perception is more or less analog. You can see and perceive a continuous and smooth spectrum of light and color.

Computer vision is digital, and each color is made up of three 8-bit numbers. These are stored as either RGB (red, green, blue) or HSL (hue, saturation, lightness). That puts a hard limit on the colors a computer can display. HSL works more intuitively for us humans because it maps the RGB values a display actually uses onto a wheel of hues, which we can then adjust for lightness (how we get black or white in HSL) and saturation (from gray to full color). In contrast, changing one value in RGB just a little can shift the color in significant and often counterintuitive ways.
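A quick sketch of this with Python's standard `colorsys` module (which calls the model HLS rather than HSL, same idea). The specific orange value is just an illustrative example:

```python
import colorsys

# A color on a screen is three 8-bit numbers (0-255 per channel).
r, g, b = 255, 128, 0  # an illustrative sunset orange

# colorsys works on 0.0-1.0 floats; HLS = hue, lightness, saturation
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(f"hue={h*360:.0f} deg, lightness={l:.2f}, saturation={s:.2f}")

# Nudging a single RGB channel shifts hue, lightness AND saturation
# all at once, which is why editing raw RGB values feels unintuitive.
h2, l2, s2 = colorsys.rgb_to_hls(r / 255, (g + 30) / 255, b / 255)
print(f"hue={h2*360:.0f} deg, lightness={l2:.2f}, saturation={s2:.2f}")
```

In HLS you can turn one knob (say, saturation) and change exactly one perceptual property; in RGB every knob moves all three at once.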

The other thing at play here is the way these values are stored in memory. Image data is gamma-encoded: roughly the square root of the linear light value read by the camera is what actually gets saved. That loses some precision on display, and it causes significant errors when you manipulate the image with a program that doesn't account for the encoding.

When you see out of your eyes you are seeing through two lenses and, therefore, in three dimensions: length, width and depth. A camera only has one lens and captures two dimensions: length and width.

The challenge of photography is to convey the idea of three dimensions using two dimensions.

TL;DR: most cameras aren’t designed to capture all colors. RGB color is an elaborate illusion, and the illusion doesn’t always hold up.

On colors specifically: digital cameras and screens define every color as a combination of some amount of red, green and blue. For most screens, the exact shade of “red” or “blue” dates back to CRT TVs – the reddest red the phosphors available in that day could produce. This choice of “what is 100% red, and what is 100% blue” is called a color space, and screens and cameras can only represent the colors inside their color space.

Some real-life “reds” are redder than CRT phosphor reds, and we can’t photograph them and then show them on the screen. Other pure colors are slightly outside of the red-green-blue triangle, and we can’t represent them with any combination of RGB light.
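You can sketch what happens to such a color in code. The numbers here are purely illustrative, not exact colorimetry; the point is that a color outside the triangle would need a negative amount of one primary, which no screen can emit:

```python
# A spectral red, expressed in screen-RGB coordinates, can require a
# NEGATIVE green component -- it lies outside the RGB triangle.
# (Illustrative numbers, not a real colorimetric conversion.)
out_of_gamut = (1.0, -0.15, 0.05)

# A screen can only emit 0-100% of each primary, so values get clamped:
clamped = tuple(min(1.0, max(0.0, c)) for c in out_of_gamut)
print(clamped)  # (1.0, 0.0, 0.05) -- a less saturated red than reality
```

The clamped color is the closest the screen can do, which is why a photographed sunset red can look duller than the real thing.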

HDR cameras and screens use more modern colorspaces (in addition to being able to distinguish more levels) – so a sunset looks better in HDR.

Another subtle thing is people have individual differences in color perception, so a mix of the same amount of red and green may look like one shade of yellow to me, and a slightly different yellow to you. If I make a camera that produces realistic colors to me, you might not agree at all.

There are 2 things to note here:

The first, field of view, is the maximum angle that the camera (or your eyes) can see. With a wider lens, objects get stretched more and more as you get closer to the edge of the picture. Phones apply software compensation to minimize this effect, but it is a natural consequence of the lens design. Phones have wide fields of view by default, and some add ultra-wide lenses as well, capturing more of the scene but with more distortion. You naturally see some of this distortion with your own eyes, and your brain considers it normal – that’s why a zoomed-in photo feels squished: the natural distortion is missing.
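The edge-stretching falls out of the rectilinear projection most lenses aim for: a point at angle θ from the optical axis lands at f·tan(θ) on the sensor, so equal slices of viewing angle take up more and more sensor toward the edge. A rough sketch:

```python
import math

def sensor_position(theta_deg, focal=1.0):
    # Rectilinear projection: image position = focal length * tan(angle)
    return focal * math.tan(math.radians(theta_deg))

# How much sensor does each 10-degree slice of the scene occupy?
for start in (0, 30, 50):
    width = sensor_position(start + 10) - sensor_position(start)
    print(f"{start}-{start + 10} deg -> {width:.3f}")

# The same 10 degrees of scene covers roughly 3x more sensor near the
# edge of a wide frame, which is why objects there look stretched.
```

On a narrow (zoomed-in) lens you only ever use the nearly linear part of tan(θ) around zero, so that stretching disappears and the photo can feel flatter than what you remember seeing.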

The second, color reproduction, comes from multiple things: the lens can have a subtle discoloration, the sensor may be more sensitive to certain color tones, and the compression that creates the JPEG file distorts color slightly. The main reason smartphone colors look unnatural, though, is that people in general prefer more vivid and brighter scenes, even when they are less realistic. Because of this preference, the easy-to-use automatic camera mode usually brightens and oversaturates the input from the camera. MKBHD’s blind camera test shows this perfectly; it’s worth watching. To get more realistic results, either take the time to set the camera up in manual mode, or have it save the uncompressed raw image along with the JPEG. These raw files look weird at first, but they can be edited to bring out detail that the JPEG would lose during compression, and to restore realistic colors.
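A toy version of what that automatic brightening/oversaturating step does, using the standard `colorsys` module (the gain values and the sample color are made-up illustrations, not any phone's actual tuning):

```python
import colorsys

def auto_punch(rgb, sat_gain=1.3, light_gain=1.08):
    # Simplified stand-in for an auto camera mode: convert to HLS,
    # push saturation and lightness up, convert back, clamp to 0-1.
    h, l, s = colorsys.rgb_to_hls(*rgb)
    l = min(1.0, l * light_gain)
    s = min(1.0, s * sat_gain)
    return colorsys.hls_to_rgb(h, l, s)

muted_sky = (0.55, 0.45, 0.40)   # a realistic, slightly hazy sunset tone
print(auto_punch(muted_sky))     # a punchier, "phone-camera" version
```

Run on a whole image, pixel by pixel (real pipelines use tone curves rather than flat gains), this is roughly how a muted but accurate scene turns into the vivid photo most people prefer.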

The tl;dr is that people usually prefer dynamic and bright photos, so phones are tuned to deliver that. The minority that prefers realism can achieve it, but it takes longer.