I’m asking this question as a hobby computer scientist, and as someone whose understanding of optics and light comes from middle school physics.
As I understand it, humans have 3 distinct cones for detecting light in the broad red, green, & blue wavelength ranges. The red cone is also able to detect wavelengths shorter than peak blue light (wavelengths corresponding to purple), so a mix of triggered red & blue cones is what lets us see purple. Here’s an image of the cone detection spectra as I understand them: [https://www.unm.edu/\~toolson/human\_cone\_action\_spectra.gif](https://www.unm.edu/~toolson/human_cone_action_spectra.gif)
When my screen displays an image containing purple, it isn’t emitting purple-wavelength light, but rather a mix of red light in the typical red wavelengths & blue light in the blue wavelengths, triggering both cones & tricking me into seeing purple. However, I can still take pictures of purple things, which originally trigger my red cones with wavelengths on the complete opposite end of the visible spectrum, & display them on my screen so that red light triggers my red cones instead. How does my digital camera translate purple light into the correct mix of red & blue for RGB image formats, when those formats have to trick my red cones into seeing purple with light from the opposite end of the visible spectrum?
In: Physics
Cameras see purple in much the same way that our eyes do. The sensors they use aren’t just sensitive to RGB; they’re sensitive to the whole visible spectrum (and infrared and ultraviolet, too, if there’s no filter blocking that light). The way the sensors are made, and then how the resulting data gets processed, is meant to emulate human vision as much as possible.
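A minimal sketch of that “processed to emulate human vision” step, assuming a made-up raw pixel value and a made-up 3×3 color correction matrix (real cameras calibrate such a matrix per sensor model):

```python
import numpy as np

# Hypothetical raw RGB response from one sensor pixel, normalized to [0, 1]
raw_rgb = np.array([0.40, 0.05, 0.55])

# Illustrative 3x3 color correction matrix (made up, not from a real camera);
# each row sums to 1 so a neutral gray stays neutral
ccm = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.4,  1.5],
])

# Nudge the sensor's filter responses toward standard display (sRGB) primaries
srgb_linear = np.clip(ccm @ raw_rgb, 0.0, 1.0)
print(srgb_linear)  # gamma encoding would follow in a real pipeline
```

The matrix is the part that pushes the camera’s particular filter responses toward what a standard human observer would see.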
Basically, it’s because the camera is doing the same thing.
The sensor in the camera (CCD or CMOS) is monochrome by its nature, so there is a mosaic of RGB filters embedded in the device (roughly analogous to the cones in your eye) to capture color information. Each pixel in the image file then records the amount of red, green, and blue light that fell on it. When your computer displays the image, it lights up the pixels in your monitor, mixing the light to make a color according to those levels.
You already know that if you took a spectrum of the computer screen, it would be different from that of the real object. That’s because the RGB channels only capture an approximation of the spectrum, but that approximation carries enough color information that your brain can’t tell the difference. It’s good enough.
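Here’s a toy numeric sketch of that idea: collapse a full spectrum to three channel sums (as the sensor does), then solve for a mix of narrow monitor primaries that produces the exact same three numbers. All the curves are stand-in Gaussians, not real filter or display data:

```python
import numpy as np

wl = np.arange(380, 701, 1.0)  # wavelength grid in nm

def peak(center, width):
    """Gaussian bump; a stand-in for a filter curve or a light source."""
    return np.exp(-((wl - center) ** 2) / (2 * width ** 2))

# Stand-in camera filter sensitivities as rows R, G, B (illustrative only)
cam = np.stack([peak(600, 40), peak(540, 40), peak(450, 30)])

def sense(spectrum):
    """Collapse a full spectrum to three channel sums, like the sensor does."""
    return cam @ spectrum

# A broad "purple object" spectrum: red and blue light together
obj = 0.6 * peak(610, 40) + 0.6 * peak(450, 30)
target = sense(obj)

# Narrow monitor primaries; solve for the mix giving the same three numbers
primaries = np.stack([peak(610, 14), peak(545, 14), peak(450, 14)])
weights = np.linalg.solve(cam @ primaries.T, target)
screen = weights @ primaries

print(target)         # what the camera recorded from the object
print(sense(screen))  # the same triple, from a very different spectrum
```

Two physically different spectra, one RGB triple: that’s why the screen can fool your cones even though a spectrometer would see right through it.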
EDIT: One other point of clarification: purple is not a spectral color; it’s a sensation constructed by the brain. If the camera captures the right mix of red/blue light to give the sensation of purple, and that information is conveyed to the monitor, your brain will perceive purple when you look at the screen, because it’s a good enough approximation.
OK, there are a couple of different things going on here.
The color purple *is* a mix of red and blue. A purple object reflects both red and blue light together. Your eye’s red and blue cone cells both respond, and your brain sees purple. Cameras also have separate red, green, and blue sensors, so they see red and blue too, and that’s what gets displayed as red and blue on your monitor.
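You can see this directly in any image file. For instance, the CSS named color “purple” is the hex value `#800080`:

```python
# A "purple" pixel is stored as high red and blue values with little green
hex_purple = "800080"  # the CSS named color "purple"
r, g, b = (int(hex_purple[i:i + 2], 16) for i in range(0, 6, 2))
print(r, g, b)  # 128 0 128: red and blue subpixels lit, green subpixel off
```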
Violet is a bit different. It sits beyond the blue end of the spectrum, so it contains no red at all. The reason we see it as purple is that [the red cone cells have a “flaw” where they also detect some light off the blue end of the spectrum](https://midimagic.sgc-hosting.com/huvision.htm). So violet should look deep blue, but our eyes wrongly see a little red in there.
Cameras don’t share this quirk, so if you try to photograph a UV light, what looks like deep violet to your eyes ends up as bright blue in the photo.
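A toy model of that difference, assuming a small secondary short-wavelength lobe on the human red response (the quirk described above; its size and position here are made up for illustration):

```python
import numpy as np

wl = np.arange(380, 701, 1.0)  # wavelength grid in nm

def peak(center, width):
    """Gaussian bump; a stand-in for a response curve or a light source."""
    return np.exp(-((wl - center) ** 2) / (2 * width ** 2))

# Toy "red" responses: the human one gets a small extra lobe near the
# violet end; the camera's filter does not
human_red  = peak(600, 40) + 0.25 * peak(430, 20)
camera_red = peak(600, 40)
blue       = peak(450, 30)  # shared toy blue response

violet = peak(410, 10)  # narrow violet light, off the blue end of the spectrum

for name, red in [("human", human_red), ("camera", camera_red)]:
    print(f"{name}: red={np.sum(red * violet):.1f}, "
          f"blue={np.sum(blue * violet):.1f}")
```

The human model registers some red alongside the blue, so the brain calls it purple; the camera model sees essentially pure blue.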