Screens can only produce so many colors, and cameras can only capture so many. Most screens and cameras have 3 channels (Red, Green and Blue, or RGB), and on most screens each channel can be any 8-bit value (from 0 to 255), which when you do the math adds up to about 16.8 million colors.
It’s different between devices because some are built to be more color accurate (and more expensive as a result), and to do that they can have a higher bit depth, usually 10 or 12 bits.
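If you want to see where those numbers come from, here's the arithmetic as a quick Python sketch (just the math from above, nothing device-specific):

```python
# Number of displayable colors for common per-channel bit depths.
for bits in (8, 10, 12):
    levels = 2 ** bits        # values per channel: 0 through 2**bits - 1
    total = levels ** 3       # three independent channels: R, G, B
    print(f"{bits}-bit: {levels:>4} levels/channel -> {total:,} colors")

# 8-bit:   256 levels/channel -> 16,777,216 colors    (~16.8 million)
# 10-bit: 1024 levels/channel -> 1,073,741,824 colors (~1.07 billion)
# 12-bit: 4096 levels/channel -> 68,719,476,736 colors
```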
And there are things called color spaces, which are essentially the map of colors you have. Even with the standard 3-channel, 8-bit setup, those bits can correspond to different colors depending on how you need the scene to look. Night/dark scenes especially use a lot more blue, so you can take the standard sRGB color space and shift it toward blue to get better color than normal.
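To make "the map of colors" concrete, here's a minimal sketch of what a color space actually pins down. It uses the published sRGB transfer curve and matrix to turn raw bytes into absolute CIE XYZ colors; a display using a different color space would map the same bytes somewhere else:

```python
def srgb_to_linear(c8):
    """Undo sRGB gamma: one 8-bit channel (0-255) to linear light (0.0-1.0)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r8, g8, b8):
    """Map an 8-bit sRGB triple to CIE XYZ via the standard sRGB matrix (D65 white)."""
    r, g, b = (srgb_to_linear(v) for v in (r8, g8, b8))
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

# The bytes (255, 128, 0) only mean "this orange" because sRGB says so;
# interpreted in a wider color space, the same bytes would be a different orange.
print(srgb_to_xyz(255, 128, 0))
```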
LEDs in screens vary in brightness a lot due to manufacturing variability and even temperature. These variations aren’t consistent across red, green, and blue. So you get color variation.
There are tools to calibrate screens, but calibration is an average: a calibrated screen doesn’t guarantee each pixel is correct, just that together they make the right colors and brightnesses. Cheap hardware isn’t even calibrated individually; there’s one factory calibration for every screen of that part number, which may or may not work well with your particular unit. Further, things like brightness and contrast are specific to the hardware, and backlights vary too, in both color and brightness. You can try to match screens of the same specs, but there can still be subtle differences due to different LEDs being used and different calibrations.
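As a toy version of what a calibration tool does, here's a sketch that corrects a panel whose white runs too blue. The measured numbers are invented for illustration; real tools measure with a colorimeter and typically build a full 3x3 matrix or lookup table, not just three gains:

```python
# Hypothetical measurement: relative channel output at full white.
target   = {"r": 1.00, "g": 1.00, "b": 1.00}   # what white should be
measured = {"r": 0.97, "g": 1.00, "b": 1.06}   # this panel: blue runs hot

# Channels can only be attenuated, so normalize the gains to <= 1.
raw = {ch: target[ch] / measured[ch] for ch in target}
peak = max(raw.values())
gains = {ch: g / peak for ch, g in raw.items()}

def calibrate(r, g, b):
    """Apply the per-channel correction to a linear RGB triple."""
    return tuple(v * gains[ch] for ch, v in zip("rgb", (r, g, b)))

print(gains)                     # blue is pulled down the most
print(calibrate(1.0, 1.0, 1.0))  # corrected white, slightly dimmer overall
```

Note the side effect: because you can only turn channels down, calibrating usually costs a little peak brightness, which is one more reason cheap hardware skips it.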
The same kind of thing applies to cameras. Their sensor pixels (CCD or CMOS) turn light into electric charge, but their sensitivity varies too, and calibration has similar issues.
It would be possible to do much better at matching real-life colours and keeping devices consistent. That would involve accurately calibrating both the cameras and the displays, which would be expensive, and hardly anyone cares enough to pay more. So we get cheap, good-enough colour accuracy. Worse, people often prefer vivid colours to natural ones, so there’s an incentive to be unrealistic.
There are some colours that our existing RGB displays can’t show. Even the best OLED TVs using the latest colour standards can only show about 60% of possible colours, though the missing ones are mostly vivid colours that don’t occur too often in real life. So the missing 40% is not as bad as it sounds.
To understand the problem, check out [this diagram](https://www.displaymate.com/Display_Color_Gamuts_2_files/image006.jpg). It represents all human-visible colours using the white shape. Since we use three-colour (RGB) displays, they can only show colours in a three-cornered (triangular) shape, and the corners have to be in the white shape. The further you push the red and blue corners to the limit of the white shape, the dimmer they get.
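Those triangles are easy to compare numerically. This sketch uses the published (x, y) primaries of sRGB/Rec.709 and Rec.2020; the white shape itself isn't a triangle, and area on the xy diagram isn't perceptually uniform, so treat the ratio as a rough comparison between displays rather than a share of everything we can see:

```python
# Compare two display gamuts as triangles on the CIE xy chromaticity diagram.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

srgb    = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B primaries
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

a_srgb, a_2020 = triangle_area(*srgb), triangle_area(*rec2020)
print(f"Rec.2020's triangle is ~{a_2020 / a_srgb:.1f}x the xy area of sRGB's")
```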
Color is a very rich experience: the response to a mixture of many wavelengths of light at different intensities. Pixel screens are made up of red, green, and blue subpixels. These produce only a (relatively) few wavelengths of light each, chosen to stimulate our eyes’ color receptors. So the screens don’t produce colors, they produce the illusion of colors.
So if aliens visit our planet, they will see our computer screens but won’t be able to see the color illusion at all, because their physiology will be different and their eyes will respond differently to the specific wavelengths used.
This illusion is delicate and small changes may be perceived as large differences. Tiny variations between screens can be easily detected by the human eye, because it’s an expert color processor and pixel displays are “cheating” by making the illusion of color.
In real life, there are countless possible wavelengths of visible light, countless more possible combinations of those wavelengths, and countless ways an object might absorb each one. For example, you might have an amber LED that shines a single wavelength of light, and a mixture of red and green light that looks about the same color to your eye. But hold a pink object and a grey object under each light, and they might look the same under the amber light yet different under the red-and-green mixture.
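Here's a toy simulation of that effect (it's called metamerism). The Gaussian "cone" curves and the light mixtures below are invented stand-ins, tuned so the two lights produce nearly the same cone response; real sensitivities and spectra are messier, but the mechanism is the same:

```python
import math

wavelengths = range(400, 701, 10)  # nm, coarse grid

def gaussian(w, mu, sigma):
    return math.exp(-((w - mu) ** 2) / (2 * sigma ** 2))

# Crude stand-ins for the S, M, L cone sensitivity curves.
cones = {
    "S": lambda w: gaussian(w, 445, 25),
    "M": lambda w: gaussian(w, 540, 35),
    "L": lambda w: gaussian(w, 565, 35),
}

def response(spectrum):
    """Integrate a light spectrum against each cone curve."""
    return {c: round(sum(spectrum(w) * f(w) for w in wavelengths), 3)
            for c, f in cones.items()}

amber = lambda w: 1.0 if w == 590 else 0.0           # single-wavelength LED
mix   = lambda w: {630: 3.3, 540: 0.24}.get(w, 0.0)  # red + green blend

print(response(amber))  # ~{'S': 0.0, 'M': 0.36, 'L': 0.77}
print(response(mix))    # nearly identical -> the two lights look the same
# But an object that reflects strongly at 590 nm and weakly at 540/630 nm
# looks bright under the amber LED and dark under the red/green mixture.
```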
Our eyes perceive color through three types of cones. Each is sensitive to a range of wavelengths, but the three types are most sensitive to (roughly) red, green, and blue respectively. If a camera had an identical spectral response to your eye, and your monitor could stimulate each type of cone cell in isolation, you might be able to accurately mimic any color. But the light-producing elements in a display are not perfectly saturated, so the range of colors they can display is limited (that range is called the color gamut), and different display technologies have different ranges of reproducible color. There are also a limited number of brightness steps for each color, and the range of brightness of a display is less than real life (less dynamic range). Also, the spectral response of a camera sensor is not identical to that of the cone cells in your eye.
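To put rough numbers on those brightness limits (the luminance figures below are typical ballpark values, not measurements of any particular display or scene):

```python
import math

steps = 2 ** 8                                     # 8 bits -> 256 steps per channel
display_black, display_white = 0.4, 400.0          # cd/m^2, a typical LCD
scene_shadow, scene_highlight = 10.0, 1_000_000.0  # cd/m^2, sunlit outdoor scene

display_dr = display_white / display_black         # contrast ratio ~1000:1
scene_dr = scene_highlight / scene_shadow          # ~100,000:1

print(f"{steps} brightness steps per channel")
print(f"display: ~{math.log2(display_dr):.0f} stops, scene: ~{math.log2(scene_dr):.0f} stops")
```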
All of this adds up to a complicated mess, and different color systems for computers (RGB) vs. print (CMYK, Pantone, etc.).