For the same reason that an old turn-of-the-20th-century photograph doesn’t look right: it’s a crude representation of what we actually see. Don’t fool yourself into thinking your phone is different; it’s still a crude representation, just a much better one. No photo your phone takes, and no image it displays, is correct; each is a crude attempt at it. Here are a few reasons why it’s still crude.
To start with, you’re viewing this on a screen. The screen can’t show all the colours you can see, nor all the contrasts you can see. [See here for various red-green-blue (RGB) colour gamuts](https://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/CIE1931xy_gamut_comparison.svg/1280px-CIE1931xy_gamut_comparison.svg.png). That funny shape is all the colours your eyes can see. The colours in the diagram are, of course, themselves shown on a screen, so they’re washed out compared with what they really could be, but it gives you an idea.
As you can see, RGB’s three primary colours only draw a triangle that captures part of the colours we can see. That whole thing you were taught in school about three very fundamental and fixed primary colours? A total lie. It’s good enough: most of the colours we name are in there, but not all their shades, not all their hues. Some RGB gamuts are better than others. Different, purer blue, green, and red pixels can cover more, but never everything. A display and image format built on four primary colours would be better still, but would also make displays much more expensive and files much larger.
You’ll note that one RGB triangle actually goes outside the visible gamut, trying to cover more of it. It relies on an impossibly red red, but that’s fine for the mathematical storage of the image; it just means you need to select values that are actually possible to display, and this imaginary extension gives you more of those. Assuming you have a display that can reproduce them, which you probably don’t. A very high quality printer, with a lot more than three primary-colour inks, might be able to.
The pure rainbow colours are actually the curved edge of that shape; not a single true rainbow colour can be shown on an RGB screen. They all look wrong, faded out. The red, green, and blue a screen can make aren’t as pure as a rainbow’s, and anything made by combining them won’t be as pure either. That means a small triangle within that full possibility of colours. Why do some displays look better? Because they use high quality pixels that push closer to a very pure red, green, or blue on the edge: a larger triangle.
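If you want to see that “no true rainbow colour on a screen” claim in numbers, here is a minimal sketch. It assumes the standard matrix for converting CIE XYZ to linear sRGB, and an approximate chromaticity for spectral light around 520 nm (a pure rainbow green); the figures are rounded for illustration, not measurements.

```python
# Rough check: can a pure spectral green (~520 nm) be shown in sRGB?

def xyY_to_XYZ(x, y, Y=1.0):
    # Convert CIE xy chromaticity (plus luminance Y) to XYZ tristimulus values
    return (x * Y / y, Y, (1 - x - y) * Y / y)

def XYZ_to_linear_sRGB(X, Y, Z):
    # Standard linear XYZ -> sRGB (D65) conversion matrix
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return (r, g, b)

# Approximate CIE 1931 chromaticity of spectral light at roughly 520 nm
X, Y, Z = xyY_to_XYZ(0.074, 0.834)
r, g, b = XYZ_to_linear_sRGB(X, Y, Z)

print(r, g, b)                                # r and b come out negative
print(all(0 <= c <= 1 for c in (r, g, b)))    # False: outside the sRGB triangle
```

Negative channel values mean the screen would need “less than no” red and blue light to reproduce that green, which it obviously can’t do, so it shows a duller green instead.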
And that’s assuming an analogue view of just colours. Digital information is stored as bits, so there are jumps in it. Not only are you confined to that triangle, you’re confined to a grid of points within it. More missed colours. Then there’s brightness and contrast. Again, it’s digital, so brightness comes in fixed steps too; not every brightness is possible. A screen is also limited in contrast: blacks are a dark grey, and whites are not as bright as they could be. A high dynamic range (HDR) display can improve this, both with more bits (so finer steps) and with more brightness from the actual display output.
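As a toy illustration of those fixed steps, assuming a simple linear 0-to-1 brightness scale (real image pipelines add gamma curves and more on top of this):

```python
# A stored brightness has to snap to the nearest step the format allows:
# 8 bits gives 256 steps, a 10-bit HDR format gives 1024 finer ones.

def quantize(value, bits):
    levels = 2 ** bits - 1                 # highest code value at this bit depth
    return round(value * levels) / levels  # snap to the nearest representable step

brightness = 0.1234                        # some brightness between 0 and 1

print(quantize(brightness, 8))             # 0.1216..., off by about 0.0018
print(quantize(brightness, 10))            # 0.1232..., off by about 0.0002
```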
And that’s just the display and the file being shown on it; all of that would apply even to a purely computer-generated image. Next is the camera. The same issues, doubled up now: it’s digital, and it has RGB-based sensors, so it has all the same limitations. On a phone, your display is probably better than your camera, so photos taken with real cameras (or ’shopped images) will look better than what your phone captures. Additionally, the RGB sensors are only a best approximation of human cone cells. They don’t have the same sensitivity to each colour, and they don’t even pick up the same spectrum. Aim your TV remote at your phone camera and hit a button: you’ll see a purple glow that your eyes definitely cannot see. There’s infrared light out there, especially from a sunset, and your camera is going to pick it up, and it’s going to distort the colours.
Then there are more photography-related tricks: things like white balance, focal length, field of view, and saturation. Adjusting all of these makes for better photos. You, with your phone, are a bad photographer. A professional photographer could get a better photo by playing with these settings, even without better equipment.
Under-expose the photo to get more accurate colors. You know how you can focus on a certain point by touching the screen? Focus on the clouds, but hold until your phone “locks” the focus. Then lift your finger, put it down on the right side (on iPhone but I believe it’s similar on Android) and move down. That changes the exposure, making it darker and making the colors truer.
This same trick can be used the other way when you have a backlit subject (bright light behind your subject) or an overcast sky. Focus on your subject, lock focus, and increase exposure.
Mostly white balance. The software behind your phone’s camera is designed to make pictures (especially of people) look good. One way it does this is by correcting for lighting much as our brains do automatically. Without that correction, a face might look reddish under incandescent light but greenish under fluorescent light.
Unfortunately, for this to work really well, you need a large part of the picture to be a recognizable color: a blue sky or an approximately white wall. When you take a picture of a purple sky, your camera assumes that there’s some funny lighting and that you want the color fixed.
You may be able to get around it by manually setting the white balance, or by pointing the camera at something neutral-colored and holding the shutter button half-pressed to keep that setting while you take the picture you want.
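As a very rough sketch of why a purple sky confuses automatic white balance, here is the simplest version of the idea, sometimes called “gray world”: assume the scene averages out to neutral gray, then scale each channel until it does. The pixel values are made up, and real cameras use much more sophisticated heuristics, but the failure mode is the same.

```python
# Three purplish "sky" pixels (R, G, B). A gray-world balancer assumes the
# scene should average to gray and scales each channel accordingly, so the
# genuine purple cast gets "corrected" away.

pixels = [(180, 90, 200), (150, 70, 180), (200, 110, 220)]

avg = [sum(p[c] for p in pixels) / len(pixels) for c in range(3)]
gray = sum(avg) / 3
gains = [gray / a for a in avg]       # per-channel correction factors

balanced = [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]
print(balanced)                       # pulled toward neutral gray, purple mostly gone
```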
Your eye sees colors because it has three types of cells (cone cells) that are sensitive to three different frequency ranges of light. The relative responses of the three types of cones in the same part of your eye determine the color you see in that part of your eye.
A camera is a sort of artificial eye that tries to mimic the way a human eye responds to light. Cameras have tiny light sensors called photosites that are sensitive to three different frequency ranges of light. For simplicity, I’ll call them red, green, and blue photosites, because that’s the color they appear if you look at them under a microscope. The red photosites are mostly sensitive to a range of light frequencies that appear red or orange. The green photosites are mostly sensitive to a range of light frequencies that appear bluish green, green, or yellow. The blue photosites are mostly sensitive to a range of light frequencies that appear bluish green, blue, or indigo. The photosites measure the intensity of the light that hits them in the frequency ranges that they are sensitive to. The camera takes the measurements of several nearby red, green, and blue photosites and applies some math to determine the color it will produce for that part of the image.
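To make “applies some math” a little more concrete, here is a heavily simplified sketch of the idea behind demosaicing on a Bayer-style sensor, where each photosite records only one of red, green, or blue, and the other two values are estimated from neighbors. The layout and readings here are invented for illustration; real cameras use far cleverer interpolation.

```python
# A tiny 2x2 patch of a Bayer-style sensor: which color each photosite
# measures, and the (made-up) intensity it recorded.
layout = [
    ["R", "G"],
    ["G", "B"],
]
readings = [
    [200, 140],
    [135, 60],
]

# Build a full RGB color for the top-left (red) photosite by borrowing the
# missing channels from its nearest neighbors.
red   = readings[0][0]
green = (readings[0][1] + readings[1][0]) / 2   # average the two green sites
blue  = readings[1][1]                          # nearest blue site
print((red, green, blue))                       # the color assigned to that pixel
```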
The bright orange of the clouds in a sunset will produce a very strong red photosite measurement, a somewhat weaker green photosite measurement, and a relatively weak blue photosite measurement.
The problem with phone cameras is that the photosites are very small, and thus can’t count very many light particles (photons) before they can’t count any higher. The measurement maxes out. We say the photosite has “saturated”. It’s sort of like a measuring cup overflowing when you overfill it. A photosite saturates when that part of the image is too bright in the range of light frequencies it is sensitive to. For the bright orange of clouds in a sunset, the red photosites will saturate before the green ones do, and thus the difference between the red and green photosite measurements will be recorded inaccurately. This causes the resulting color to shift away from orange towards yellow. If the green photosites fill up as well, the resulting color shifts towards a lighter yellow, and eventually to white.
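Here is a toy numerical version of that orange-to-yellow-to-white shift, assuming a sensor that simply clips each channel at 255. The starting color and exposure factors are invented, and real sensors and processing are more complicated, but the effect is the same.

```python
# Make an orange patch brighter and brighter, clipping each channel at the
# sensor's maximum of 255, and watch the recorded hue drift.

orange = (240, 140, 48)   # R, G, B

for exposure in (1.0, 1.5, 2.5, 7.0):
    clipped = tuple(min(255, round(c * exposure)) for c in orange)
    print(exposure, clipped)

# 1.0 -> (240, 140, 48)   orange, recorded faithfully
# 1.5 -> (255, 210, 72)   red has clipped: already looks more yellow
# 2.5 -> (255, 255, 120)  red and green clipped: yellow
# 7.0 -> (255, 255, 255)  everything clipped: white
```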
Even if you had a more professional camera with bigger photosites that didn’t saturate, the camera still needs to figure out how to map the very bright colors it saw to a color that your screen can display. Most phone and computer screens don’t get very bright, and so the camera is going to make some compromises in the color mapping. Yellow is a brighter color than orange on your screen, so it may pick yellow instead of orange in an effort to represent the brightness more accurately while sacrificing the accuracy of the hue. White on your screen is brighter still than yellow, and so it may shift a bright orange or yellow towards white.
I used bright orange as my example, but all bright saturated colors have problems with photosite saturation and color mapping. They will typically shift towards the nearest primary or secondary color, and then towards white, depending on how bright they are and how the camera does its color mapping.
Adjusting the color after the photo is taken doesn’t really work, because the color was recorded inaccurately by the camera. The information has been lost. You’ll never get back the original colors unless you paint them in manually or isolate that part of the image to make adjustments.
The solution is to change the way you take the photos. Reduce the exposure setting in your camera so that the entire photo gets darker. That solves the photosite saturation problem and also the color mapping problem. Of course the photo then may be too dark for your liking, but that’s just one of the many challenges in photography and reproducing colors the way you want. The only problem left to solve may be adjusting the white balance, but that can be done after the photo is taken with some photo editing software.
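Continuing the toy numbers from the clipping sketch above: if the cloud is bright enough that the unclipped readings would be (600, 350, 120), dialing the exposure down so the brightest channel just fits keeps every channel below the clipping point, and the ratios between channels (and therefore the hue) survive. The numbers are invented purely for illustration.

```python
# What the light "wants" to record, versus what fits in an 8-bit channel.
too_bright = (600, 350, 120)

scale = 255 / max(too_bright)         # darken just enough to avoid clipping
safe = tuple(round(c * scale) for c in too_bright)

print(tuple(min(255, c) for c in too_bright))  # (255, 255, 120): clipped, yellow
print(safe)                                    # (255, 149, 51): darker, still orange
```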