Why do colors appear different through a camera than in real life?

3 Answers

Anonymous 0 Comments

Dynamic range and color gamut account for some of the difference. Beyond that, there’s the calibration of how the incoming light is represented, then the camera’s post-processing, and then the display you view the image on (its color accuracy, bit depth, color gamut, and so on).
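
To make the dynamic-range and display parts concrete, here’s a minimal Python sketch. The scene values are made up purely for illustration: anything brighter than the sensor’s clipping point is lost at capture, and what survives is re-encoded with the standard sRGB transfer curve before a monitor shows it, so the image is already several transformations away from the light in the scene.

```python
import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer function: linear light -> display-encoded value."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

# Hypothetical scene intensities, relative to the sensor's clipping point.
# The highlight at 3.0 exceeds the sensor's range and clips to 1.0, so its
# relationship to the dimmer patches is lost before any processing even starts.
scene = np.array([3.0, 0.5, 0.18])      # bright highlight, midtone, dark patch
captured = np.clip(scene, 0.0, 1.0)     # limited dynamic range of the sensor
displayed = srgb_encode(captured)       # what the monitor is asked to show

print("scene   :", scene)
print("captured:", captured)
print("display :", np.round(displayed, 3))
```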


Anonymous 0 Comments

Colors may appear different through a camera than in real life due to differences in how the camera and the human eye perceive and process color.

The human eye can perceive a wide range of colors, and our brains interpret them based on the wavelengths of light reflected off objects and detected by the eye’s color-sensitive cells. Cameras work differently: their sensors measure the intensity of light at different wavelengths, and software then interprets those measurements to build an image.
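
Here’s a tiny Python sketch of that “measure light through a few sensitivity curves” idea. The bell-shaped curves below are placeholders, not the real response curves of any eye or camera; the point is just that each channel reduces a whole spectrum of light to a single number.

```python
import numpy as np

def gaussian(wl, center, width):
    """Toy bell-curve sensitivity; real cone and filter curves are measured, not Gaussian."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

wavelengths = np.arange(400, 701, 5)            # visible range, in nanometres

# Assumed sensitivity curves (illustrative centres and widths only).
channels = {"L (red-ish)":   gaussian(wavelengths, 565, 45),
            "M (green-ish)": gaussian(wavelengths, 540, 40),
            "S (blue-ish)":  gaussian(wavelengths, 445, 25)}

# A made-up spectrum for the light reaching the eye or sensor: roughly "orange".
spectrum = gaussian(wavelengths, 600, 20)

# Each channel reports one number: the spectrum weighted by its own sensitivity.
responses = {name: round(float(np.sum(spectrum * curve)), 2)
             for name, curve in channels.items()}
print(responses)
```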

Several factors can cause differences between what we see in real life and what a camera captures. One is the lighting: different light sources change how colors appear and how they are recorded, and a camera’s automatic adjustments may not compensate for them the way the human visual system does.
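
The usual automatic adjustment here is white balance. Below is a rough sketch of the simple “grey-world” heuristic (one of many possible approaches, and not what any particular camera actually ships): assume the scene averages out to neutral grey and scale the channels until that is true. When the assumption is wrong, say the scene really is mostly warm-coloured, the correction itself shifts the colors away from what you saw.

```python
import numpy as np

# Hypothetical raw RGB pixels from a scene lit by a warm, orange-tinted lamp.
raw = np.array([[0.80, 0.55, 0.30],    # white card, tinted by the lamp
                [0.40, 0.30, 0.18],    # grey floor
                [0.70, 0.25, 0.15]])   # red object

# Grey-world white balance: assume the scene average should be neutral,
# then scale each channel so the averages match.
avg = raw.mean(axis=0)
gains = avg.mean() / avg
balanced = raw * gains

print("per-channel gains:", np.round(gains, 3))
print("balanced pixels:\n", np.round(balanced, 3))
```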

Another factor is the color profile or picture style the camera applies. Cameras offer different color modes that change how colors are processed, and these can have a significant impact on the final image, so it may not accurately reflect the true colors of the scene.
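
As a toy illustration of what a “vivid” versus “neutral” picture style does, here’s a sketch that boosts saturation and steepens the tone curve. Real cameras use far more elaborate (and proprietary) processing, so treat this only as a sketch of the idea.

```python
import numpy as np

def apply_style(rgb, saturation=1.0, contrast=1.0):
    """Toy picture style: push colors away from grey, then steepen the tone curve."""
    rgb = np.asarray(rgb, dtype=float)
    grey = rgb.mean(axis=-1, keepdims=True)
    rgb = grey + saturation * (rgb - grey)   # more/less colorful
    rgb = 0.5 + contrast * (rgb - 0.5)       # more/less contrast around middle grey
    return np.clip(rgb, 0.0, 1.0)

pixel = [0.55, 0.40, 0.35]                   # a muted, skin-tone-ish color
print("neutral profile:", apply_style(pixel))
print("vivid profile  :", np.round(apply_style(pixel, saturation=1.6, contrast=1.2), 3))
```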

Finally, different cameras have different sensors and processing capabilities, which can affect how colors are captured and interpreted. High-end cameras and professional-grade equipment may be more accurate in capturing colors, while lower-end cameras or devices may have more limitations.

In summary, colors may appear different through a camera than in real life due to a combination of factors, including lighting conditions, camera settings, and the technical capabilities of the camera or device being used.


Anonymous 0 Comments

Human vision detects colours using three different types of “cone” cells in the eye, known as Long-, Medium-, and Short-wavelength-sensitive. These are loosely equivalent to “red”, “green”, “blue” … but we’ll come back to that.

Modern digital cameras and phone cameras also have separate pixels in their image-sensors which detect “red”, “green”, and “blue” light.

However, you have to think about what happens when you see spectral (“rainbow”) colours *between* the red/green/blue primaries. If you look at a spectral orange colour, it activates both the Long (“red”) and Medium (“green”) cones in some ratio that depends on the colour of the light *and* on how each cone’s sensitivity falls off as you move away from its wavelength of maximum sensitivity. The brain interprets that ratio of red and green signal from the eye as a particular shade of “orange”. The RGB filters in cameras respond differently to these in-between colours than the human eye does, and therefore do not “see” them in the same ratios.

A camera sensor’s RGB filters generally have less overlap than the colour sensors in the eye, which can result in the camera recording colours “pulled” more towards the nearest red, green, or blue primary than the original colour. (This is very apparent if you ever try to photograph the spectrum from a prism.) Real colours contain a whole mixture of wavelengths in different amounts, so the exact effects get quite complicated to predict.

Specifically, the Long (“red”) cone in the eye effectively has some extra sensitivity in the very deep violet, whereas the red channel of a camera doesn’t. This is why some violet colours look purple (similar to magenta) to people, but come out plain blue on a camera.
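
Here’s a rough Python sketch of the last few paragraphs, using made-up Gaussian curves in place of the real cone fundamentals and camera filter responses (the centres, widths, and the small violet lobe are illustrative assumptions, not measured data). It shows both effects: a narrow-band orange gives the eye and the camera very different red-to-green ratios, and a narrow-band violet still produces some “red” signal in the eye but essentially none in the camera.

```python
import numpy as np

def gaussian(wl, center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

wl = np.arange(380.0, 701.0, 1.0)    # visible wavelengths in nm

# Assumed, purely illustrative sensitivity curves.
eye_L = gaussian(wl, 565, 50) + 0.15 * gaussian(wl, 420, 20)  # small extra lobe in the violet
eye_M = gaussian(wl, 540, 45)
eye_S = gaussian(wl, 445, 25)
cam_R = gaussian(wl, 600, 30)        # narrower filters: less overlap than the cones
cam_G = gaussian(wl, 530, 30)
cam_B = gaussian(wl, 460, 30)

def respond(spectrum):
    """Integrate a light spectrum against each sensitivity curve."""
    return {name: float(np.sum(spectrum * curve))
            for name, curve in [("L", eye_L), ("M", eye_M), ("S", eye_S),
                                ("R", cam_R), ("G", cam_G), ("B", cam_B)]}

orange = gaussian(wl, 600, 5)        # near-monochromatic orange light
violet = gaussian(wl, 420, 5)        # near-monochromatic violet light

for name, spec in [("orange", orange), ("violet", violet)]:
    r = respond(spec)
    print(f"{name}: eye L/M = {r['L'] / r['M']:.2f}   camera R/G = {r['R'] / r['G']:.2f}   "
          f"eye L/S = {r['L'] / r['S']:.2f}   camera R/B = {r['R'] / r['B']:.2f}")
```

With these toy curves, the orange ratios come out very differently for the eye and the camera, and the violet case gives the eye a noticeable “red” (L) signal that the camera simply never records, which is the purple-versus-blue mismatch described above.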

You can get *very* specialist cameras which capture colour exactly the same way as human vision, but they’re very expensive (£50k+), bulky, and slow – they move special filters in front of the sensor sequentially to capture the colours, and those filters are designed to replicate the exact sensitivity of the eye.

This is a complicated area of *Colour Science*, well beyond ELI5 – including the subtle differences between *LMS sensitivities* and *colour matching functions* – so I’ve done my best to simplify.
