As others have said, it’s “raw”, unprocessed sensor data. The missing processing is primarily color (white) balance, i.e. setting the gray point. Light sources have very different colors: some are “warmer” (more red/amber), others “colder” (more blue). A normally processed image compensates for the color of the light so that colors look natural; raw data does not. So the main difference is normalizing that color cast to look the way you’d expect it to look.
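As a rough illustration (not any camera’s actual pipeline), here’s one simple way to pick a gray point automatically, the “gray world” assumption: scale each channel so the image’s average color comes out neutral. The function name and values are made up for the example.

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale R, G, B so the image's average color becomes neutral gray.

    img: float array of shape (H, W, 3), values in [0, 1].
    This is the simple "gray world" heuristic, one common way
    to estimate the gray point without user input.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # average R, G, B over the image
    gains = means.mean() / means              # push each channel toward the overall mean
    return np.clip(img * gains, 0.0, 1.0)

# A "warm" (amber-tinted) flat image: too much red, too little blue.
warm = np.zeros((4, 4, 3))
warm[..., 0] = 0.6   # R
warm[..., 1] = 0.5   # G
warm[..., 2] = 0.3   # B

balanced = gray_world_white_balance(warm)
# After balancing, all three channel averages are equal (neutral gray).
```

Real raw converters use more sophisticated estimates (or the white-balance setting recorded by the camera), but the idea is the same: multiply the channels so gray things end up gray.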
The raw sensor data also has wildly varying hues, depending on the sensor and the filters used. (The filters are tiny colored elements sitting over individual pixels.) The sensor records each pixel through its own filter without adjusting for these colors, so in each 2x2 block of pixels you’ll have one pixel with a red filter, one with a blue filter, and two with green filters. This is the typical “Bayer” filter arrangement that most cameras use. To build the full-color image, each pixel’s missing colors are interpolated from adjacent pixels: even though one pixel captured only red data, its blue and green values are taken from its neighbors to produce a full-color pixel.
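That fill-in-from-neighbors step can be sketched in code. This is a toy bilinear demosaic for an RGGB Bayer pattern, not what any real camera or raw converter does internally; the function names are made up for the example.

```python
import numpy as np

def box3_sum(a):
    # Sum of each pixel's 3x3 neighborhood (zero-padded at the edges).
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """raw: (H, W) mosaic with even H, W and the repeating 2x2 pattern
         R G
         G B
    Returns an (H, W, 3) RGB image; each pixel's missing channels are
    the average of same-color samples in its 3x3 neighborhood."""
    H, W = raw.shape
    r_mask = np.zeros((H, W), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((H, W), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)          # the two green pixels per block
    rgb = np.empty((H, W, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        sampled = np.where(mask, raw, 0.0)           # keep only this color's samples
        rgb[..., c] = box3_sum(sampled) / box3_sum(mask.astype(float))
    return rgb

# A uniform gray scene: every sensor pixel reads 0.5 regardless of filter.
mosaic = np.full((4, 4), 0.5)
img = demosaic_bilinear(mosaic)
# Every reconstructed pixel comes out neutral gray: (0.5, 0.5, 0.5).
```

Real demosaicing algorithms are much cleverer about edges and color fringing, but the core idea is exactly the one described above: borrow the missing colors from nearby pixels.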