How do cameras capture the color purple?


I’m asking this question from the perspective of a hobby computer scientist, as well as someone with a base-level understanding of optics/light from middle-school physics.

As I understand it, humans have 3 distinct cone types for detecting light in the broad red, green, & blue wavelength ranges. The red cone is also able to detect wavelengths shorter than peak blue light (the violet wavelengths we perceive as purple), so purple light triggers both the red & the blue cones, and that mix is what lets us see purple. Here’s an image of the cone detection spectra as I understand them: [https://www.unm.edu/\~toolson/human\_cone\_action\_spectra.gif](https://www.unm.edu/~toolson/human_cone_action_spectra.gif)
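To make my mental model concrete, here is a toy sketch of that idea in Python. The Gaussian lobes & all the numbers are made up by me for illustration (they are not measured cone data); the one feature I care about is the small secondary lobe giving the L ("red") cone some sensitivity in the violet range:

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian bump centered at mu with width sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def cone_responses(wavelength_nm):
    """Toy cone sensitivities (made-up Gaussians, not measured data).
    The L ('red') cone gets a small secondary lobe near 430 nm,
    which is the feature my question is about."""
    s = gaussian(wavelength_nm, 445, 25)   # S ("blue") cone
    m = gaussian(wavelength_nm, 540, 35)   # M ("green") cone
    # L ("red") cone: main lobe plus a small violet lobe
    l = gaussian(wavelength_nm, 565, 40) + 0.3 * gaussian(wavelength_nm, 430, 20)
    return s, m, l

# Violet light (~420 nm) stimulates S strongly AND L noticeably, M barely:
s, m, l = cone_responses(420)
```

In this toy model, 420 nm light produces a large S response and a modest L response with almost no M, which (as I understand it) is the cone pattern the brain reads as purple.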

When my screen displays images containing purple, it isn’t emitting purple-wavelength light; rather, it mixes red light in the typical red wavelengths & blue light in the blue wavelengths to trigger both cones & trick me into seeing purple. However, I can still photograph purple things, whose short wavelengths at the opposite end of the visible spectrum originally trigger my red cones, & display them on my screen so that red light triggers those same cones instead. How does my digital camera translate purple light into the correct mix of red & blue for RGB image formats, when those formats have to trick my red cones into seeing purple with light from the opposite end of the visible spectrum?
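My current guess, sketched below, is that a camera’s color filters plus its color-correction step can do this with a plain linear matrix. Everything here is hypothetical: the filter curves are made-up Gaussians & the 3x3 matrix values are invented by me, not from any real camera. The point is only that a positive blue-to-red entry in such a matrix would turn "strong blue channel" into "some red output" for short wavelengths:

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian bump centered at mu with width sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def raw_channels(wavelength_nm):
    """Toy spectral sensitivities for a camera's R/G/B color filters
    (made-up numbers, purely illustrative)."""
    b = gaussian(wavelength_nm, 460, 30)
    g = gaussian(wavelength_nm, 540, 35)
    r = gaussian(wavelength_nm, 600, 40)
    return r, g, b

# A hypothetical 3x3 color-correction matrix (rows produce R, G, B out).
# The positive blue->red entry (0.25) is what maps "short wavelength"
# to "some red" in the output image.
CCM = [
    [1.60, -0.50, 0.25],
    [-0.30, 1.50, -0.20],
    [0.05, -0.40, 1.35],
]

def to_display_rgb(wavelength_nm):
    """Apply the toy color-correction matrix to the raw channel values."""
    raw = raw_channels(wavelength_nm)
    return [sum(CCM[i][j] * raw[j] for j in range(3)) for i in range(3)]

# Violet light (~420 nm): mostly blue output, but some red & almost no green
r_out, g_out, b_out = to_display_rgb(420)
```

In this sketch, 420 nm input comes out with a large B value, a smaller but positive R value, and a green value near zero (negative values would just be clipped to 0), i.e. the red-plus-blue mix my screen needs. I don’t know if this matrix approach is what real cameras actually do, which is the question.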

In: Physics