Why do colors from RAW images look different than “raw” colors in real life?


Colors from RAW images look different than “raw” colors in real life and somehow… “ugly”. Of course these files need post-processing so their colors look better and more life-like, but I think there is something about the camera, the sensor… that makes RAW files look like that.

I’m looking forward to reading interesting explanations from you guys!


Your eyes are so much better than a computer. When we see things, light reflects off an object into your eye, which turns it into electrical signals. These go to your brain and are interpreted as the things you see.

For cameras, it’s similar, but different. First problem: the camera’s sensor can only detect and record certain colours. Second problem: the screen you’re viewing something on can only display certain colours.

If you have a normal, non-HDR screen, your screen can display 256 shades (0–255) of red, 256 shades of blue, and 256 shades of green. That’s it. Every colour the screen displays is a mixture of those three, bundled very close together, which your eyes and brain basically mix into other colours.

But that means there’s a fundamental limit on what the screen can display. The reddest something can be is red 255, green 0, blue 0. It can’t ever be redder than that; the screen can’t display it. But things in real life CAN be redder, and your eyes and brain can perceive them as redder.
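To make that ceiling concrete, here’s a tiny sketch. The sensor values are hypothetical (assuming a 14-bit sensor, so readings from 0 to 16383), but it shows how anything brighter than the display’s maximum just gets clipped to 255:

```python
def to_8bit(value, max_sensor=16383):
    """Scale a linear sensor reading (hypothetical 14-bit, 0-16383)
    into the display's 0-255 range, clipping anything it can't show."""
    scaled = round(value / max_sensor * 255)
    return max(0, min(255, scaled))

# A very bright red flower might max out the red channel entirely:
print(to_8bit(16383))  # 255 -- the display's reddest red
print(to_8bit(20000))  # still 255: anything brighter is clipped away
```

Once two different real-world brightnesses both land on 255, the display can’t tell them apart, even though your eyes could.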

Here, it really depends on what kind of raw images we’re talking about: Video or Still.

For stills: raws are very similar to a database just noting how much light each pixel, or even subpixel (the red, green and blue values of each pixel), received. It is then the job of a raw converter to turn that into a normal picture. These pictures are very contrasty initially, but as you still have the raw data, you can manipulate them much more freely.
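A very rough sketch of what a raw converter does with one of those per-subpixel readings (the numbers and gains are made up; real converters also demosaic, denoise, and apply camera-specific colour matrices):

```python
def develop(raw_value, white_balance_gain, max_raw=16383):
    """Turn one linear sensor reading into a display value:
    1) apply white balance gain, 2) normalize to 0..1,
    3) apply a gamma curve to brighten midtones, 4) quantize to 8-bit."""
    balanced = min(raw_value * white_balance_gain, max_raw)
    linear = balanced / max_raw
    gamma = linear ** (1 / 2.2)   # simple display gamma, an assumption here
    return round(gamma * 255)

# The same red subpixel reading "developed" with two white balance choices:
daylight = develop(6000, 2.0)   # warmer rendering of the scene
tungsten = develop(6000, 1.2)   # cooler rendering of the same data
```

The point is that none of these decisions are baked into the raw file — you can re-run the conversion with different settings and get a different picture from the same data.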

Video raw works very differently. To simplify extremely: while each frame is initially captured like a photograph, the amount of data this generates is nearly impossible to save 24–60 times a second.
This is why each frame is processed before it’s stored. The camera edits the frame to lower its contrast and saturation, and the frame is then stored pretty much as a JPEG or another compressed file type. This reduces the amount of data to a manageable size.

Because the frame is now very “flat” (less contrast, less saturation), you still retain a huge chunk of the dynamic range, and this gives you the best possible options for post-processing.
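The “flattening” is roughly a log-style curve. This is only a toy version (the constants are assumptions, not any camera’s actual curve), but it shows how bright and dark areas get squeezed much closer together before the frame is compressed:

```python
import math

def log_encode(linear, black=0.01):
    """Map linear light (0..1) into a flat 0..1 'log' signal.
    'black' is an assumed noise floor below which detail is discarded."""
    linear = max(linear, black)
    return (math.log10(linear) - math.log10(black)) / -math.log10(black)

# In linear light the highlight is 16x brighter than the shadow;
# after log encoding they sit much closer together -- a flat, washed-out look:
shadow = log_encode(0.05)
highlight = log_encode(0.8)
```

That’s why ungraded log footage looks grey and muted: the contrast hasn’t been thrown away, it’s just been packed into a narrower range so cheap compression doesn’t destroy it.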

> I think there is something about the camera, the sensor… so that RAW files look like that.

Yeah, that’s kind of the point. RAW images are exactly the data that comes out of the sensor, which sees colors differently to your eye simply because *it’s not a human eye*. The whole purpose of the format is that you get to do the corrections the camera usually would automatically apply to make the image appear like what you’d see with your eyes.

Sensors try to do two things:

1) Give you a very plain and basic image, so that you can have more freedom when editing it

2) Try to capture more information about light (i.e. dynamic range, the difference between the brightest spot and the darkest spot in the image). To do this they use various schemes (it’s basically all maths), which result in a flatter image with muted colors, but with a lot of information in it that you can later take advantage of in editing software.

And of course it has to do with the quality of the sensor too.
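Dynamic range is usually quoted in “stops”, where each stop is a doubling of light. A quick back-of-the-envelope calculation (the bit depths are typical values, used here only for illustration) shows why a sensor can capture more range than a basic display can show:

```python
import math

def stops(brightest, darkest):
    """Dynamic range in stops between the brightest and darkest usable level."""
    return math.log2(brightest / darkest)

sensor_dr = stops(16383, 1)   # a 14-bit sensor: roughly 14 stops
display_dr = stops(255, 1)    # a plain 8-bit display: roughly 8 stops
```

So the flat, muted raw rendering is a way of squeezing ~14 stops of captured information through a pipeline built for far fewer.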

RAW images can include a greater range of colours and brightnesses than your monitor can display. Commonly they’re displayed by reducing their contrast and saturation enough so that your monitor can cover the range. While this means you can see a representation of the full range of colours captured, it also means they don’t look realistic and have a muted appearance.

The idea is that RAW images can be processed in a range of ways to convert them into images that use a colour gamut standard that’s understood by display devices. By delaying this processing until after the images have been taken, you can take full advantage of the camera’s image quality and make colour grading decisions at a more convenient time.
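That “decide later” idea can be sketched in a few lines. This is a toy contrast grade around a mid-grey pivot (the pivot and contrast values are assumptions, not any standard’s numbers), applied to the same stored linear value to produce two different finished looks:

```python
def grade(linear, contrast=1.0, pivot=0.18):
    """Simple contrast adjustment around a mid-grey pivot,
    then quantized to an 8-bit display value."""
    adjusted = pivot * (linear / pivot) ** contrast
    return max(0, min(255, round(adjusted * 255)))

pixel = 0.5                          # one linear RAW sample, normalized 0..1
flat = grade(pixel, contrast=0.8)    # muted, "raw-looking" rendering
punchy = grade(pixel, contrast=1.3)  # contrasty final grade of the same data
```

Because the RAW file keeps the original linear values, both renderings remain available forever — the grading decision was only ever a view of the data, not a change to it.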