How do cameras capture the color purple?


I’m asking this question from the perspective of a hobby computer scientist, as well as someone with a basic understanding of optics/light from middle school physics.

As I understand it, humans have three distinct types of cones for detecting light in the broad red, green, & blue wavelength ranges. The red cone can also detect wavelengths shorter than peak blue light (wavelengths corresponding to purple), so purple light triggers a mix of red & blue cones, and that mix is what lets us see purple. Here’s an image of the cone detection spectra as I understand them: [https://www.unm.edu/~toolson/human_cone_action_spectra.gif](https://www.unm.edu/~toolson/human_cone_action_spectra.gif)
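To make that concrete for myself, here’s a minimal Python sketch of that idea. The cone curves below are made-up Gaussian lobes (the peaks, widths, and the 0.2 secondary lobe are illustrative guesses, not measured data), but they show how a single violet wavelength can trigger both the blue and red cones at once:

```python
import math

def gaussian(wl, peak, width):
    """Unnormalized Gaussian lobe centered at `peak` nm."""
    return math.exp(-((wl - peak) / width) ** 2)

def cone_responses(wl):
    """Toy S/M/L ("blue/green/red") cone responses at wavelength wl (nm).
    Peaks and widths are illustrative guesses, not measured data; the
    0.2 secondary lobe on L is the violet sensitivity in question."""
    s = gaussian(wl, 445, 30)
    m = gaussian(wl, 540, 40)
    l = gaussian(wl, 565, 45) + 0.2 * gaussian(wl, 420, 25)
    return s, m, l

for wl in (420, 460, 540, 630):  # violet, blue, green, red
    s, m, l = cone_responses(wl)
    print(f"{wl} nm -> S={s:.2f}  M={m:.2f}  L={l:.2f}")
```

Running it, 420 nm gives a strong S response plus a smaller L response with almost no M, which is the red-plus-blue mix described above.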

When my screen displays images containing purple, it isn’t emitting purple-wavelength light, but rather a mix of red light in the typical red wavelengths & blue light in the blue wavelengths, triggering both cones & tricking me into seeing purple. However, I can still take pictures of purple things (things that originally trigger my red cones with wavelengths on the complete opposite end of the visible spectrum) and display them on my screen, which triggers my red cones with red light instead. How does my digital camera translate purple light into the correct mix of red & blue for RGB image formats, when these formats have to trick my red cones into seeing purple by displaying light from the opposite end of the visible light spectrum?
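As a sanity check on the “tricking” part, here’s a least-squares solve (reusing the toy `cone_responses` from my sketch above, so again nothing here is real colorimetry) for how much display red and blue would reproduce roughly the same cone triple as genuine violet light:

```python
import numpy as np
# reuses cone_responses() from the toy sketch above

violet = np.array(cone_responses(420))             # cone triple for real violet
primaries = np.column_stack([cone_responses(630),  # display red primary
                             cone_responses(460)]) # display blue primary

# Least-squares: what mix of red + blue gives the closest cone triple?
weights, *_ = np.linalg.lstsq(primaries, violet, rcond=None)
print("red, blue drive levels:", np.round(weights, 2))
print("mix triggers    ->", np.round(primaries @ weights, 3))
print("violet triggers ->", np.round(violet, 3))
```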


5 Answers

Anonymous

Cameras see purple in much the same way that our eyes do. The sensors they use aren’t just sensitive to three narrow RGB bands; they’re sensitive to the whole visible spectrum (and to infrared and ultraviolet, too, if there’s no filter on the lens blocking that light). The way the sensors are made, and then how the resulting data gets processed, is meant to emulate human vision as closely as possible.
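As a rough sketch of what that means in practice (the filter curves and the identity color matrix below are placeholders, not any real sensor’s data): the red filter in a Bayer sensor is assumed here to pass a little violet light, much like the eye’s red cone, so violet light lands in both the red and blue raw channels, and the camera’s color pipeline then maps those raw values to display RGB:

```python
import numpy as np

def gaussian(wl, peak, width):
    """Unnormalized Gaussian lobe centered at `peak` nm."""
    return np.exp(-((wl - peak) / width) ** 2)

def sensor_raw(wl):
    """Toy Bayer-filter channel sensitivities at wavelength wl (nm).
    The red filter is assumed to leak a little at violet wavelengths,
    much like the L cone; real filter curves are measured per sensor."""
    b = gaussian(wl, 450, 35)
    g = gaussian(wl, 540, 40)
    r = gaussian(wl, 600, 45) + 0.15 * gaussian(wl, 420, 25)
    return np.array([r, g, b])

# Placeholder 3x3 color-correction matrix (identity here). Real cameras
# calibrate one per sensor and illuminant so that raw channel ratios
# land on perceptually correct sRGB values.
CCM = np.eye(3)

raw = sensor_raw(420.0)   # monochromatic violet light hits the sensor
rgb = CCM @ raw           # demosaiced raw -> display RGB
print("raw  R,G,B:", np.round(raw, 3))
print("sRGB R,G,B:", np.round(rgb, 3))
```

The violet leak puts energy in both the red and blue raw channels, so after processing the display ends up driving red + blue, which is the same cone mix the original violet light produced in your eye.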
