What is the difference between RAW images and another uncompressed format like BMP?



10 Answers

Anonymous 0 Comments

RAW images are image sensor data without (or with minimal) post-processing.

BMP can come from any source, and is either uncompressed, or RLE-compressed (a very simple algorithm that essentially replaces rows of identical pixels with “put x pixels of color y here” instructions).
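Those "put x pixels of color y here" instructions can be sketched in a few lines of Python (a simplified run-length encoder for illustration, not the exact BMP RLE8 byte layout):

```python
def rle_encode(row):
    """Collapse runs of identical pixels into (count, value) pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1
        else:
            runs.append([1, value])
    return [(count, value) for count, value in runs]

def rle_decode(runs):
    """Expand (count, value) pairs back into a row of pixels."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out
```

A row like `[7, 7, 7, 7, 2]` becomes `[(4, 7), (1, 2)]`, and decoding restores the original row exactly, which is why this kind of compression is lossless.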

Anonymous 0 Comments

RAW contains the exact data recorded by the sensor as well as some metadata required to interpret it.

The sensors themselves record some number of bits of information that doesn’t match up with the typical bit depth of a BMP, e.g. 12 or 14 bits in the sensor vs. 8/16/32 bits in a BMP.

Also, there’s usually an extra CCD pixel for green, or there might be something like an infrared pixel. Additionally, the color filters used don’t correspond exactly to the RGB color space a BMP would typically use.

The end result is that converting from RAW to RGB will generally lose data. But if you have the raw data, you can tweak the conversion to do things like preserve shadows and highlights that might otherwise be lost.
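The loss from dropping bit depth is easy to see in a toy example (assuming a 12-bit sensor value scaled to 8 bits by discarding the four low bits):

```python
def to_8bit(sample_12bit):
    """Map a 12-bit sensor value (0..4095) to 8 bits (0..255)
    by dropping the four least significant bits."""
    return sample_12bit >> 4

# Sixteen distinct 12-bit values all collapse into one 8-bit value,
# so subtle shadow and highlight gradations are gone after conversion.
```

For instance, 992 through 1007 all map to the same 8-bit value (62), so any distinction the sensor recorded within that range is unrecoverable from the converted image.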

Anonymous 0 Comments

Raw image data is useful in science, where you can tweak the LUT (look-up table) for the color/grayscale image. This is important in space and medical imaging because a lot of the time the image you get will be mostly junk (too black/dark), but with a bit of tweaking you can highlight important areas without compromising the integrity/validity of the data.

Anonymous 0 Comments

A raw image is the image captured by the sensor. If you display that image directly it looks very different from what the human eye would see.

Raw images essentially have significantly more colour data than a BMP. That’s why it’s good to keep them if you want to adjust the lighting and colour in the final image.

The conversion from a raw image to a BMP isn’t compression, it’s more like a “translation” to make the colours suitable for humans.

Anonymous 0 Comments

RAW files aren’t even exactly images; they are data from the camera’s sensor, and are created by most mid- to high-end cameras if selected. They aren’t displayable as-is: they need to be converted to a color space (the set of colors a monitor can display), and white balance needs to be decided before they can really become a true image. RAW is also a different format for pretty much every camera vendor, so it’s more of a family of formats.

The benefit is that there is more data, so you can do more advanced post-processing than with a bitmap, which has a lot less data than the raw file but is made to be displayed directly.

Anonymous 0 Comments

RAW is what it says on the tin, really: raw output from the sensor, with no (or minimal) processing. Sensors differ, and a RAW file need not carry proper metadata telling what it is, so most image viewers have no idea how to display them; they are meant for further processing, not direct display.

BMP however is meant for displaying directly, it’s the simplest standardized format to display an image.

Anonymous 0 Comments

In other words: RAW is what the camera actually “sees”; BMP is the dumbed-down version that human eyes can interpret. Because camera sensors can (and usually do) capture significantly more visual range than human eyes, a lot of information is lost when translating from RAW to BMP. That loss happens either through compression or clipping (smashing whole ranges of RAW values into a single number), or by pushing out-of-range frequencies into ones our eyes can see (if the camera sees infrared, it gets smoothed down into red; if it sees UV, it gets smooshed up into violet). For example, the camera (RAW) might distinguish 440, 441, 442, 443, 444 and 445 nm, while the BMP gets only 440 and 445 nm; the others are floored or ceilinged to the closest value, depending on the process. Not at all unlike audio processing from WAV to MP3.
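The 440–445 nm example amounts to snapping fine measurements onto a coarser grid, something like this (the numbers are illustrative, not real colour science):

```python
def quantize(value, step=5):
    """Snap a value to the nearest multiple of `step`,
    collapsing fine distinctions the output format can't hold."""
    return round(value / step) * step
```

Both 441 and 442 end up at 440, while 443 and 444 end up at 445, so the original fine-grained readings can no longer be told apart.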

Anonymous 0 Comments

As others have said, it’s “raw”, unprocessed sensor data. The processing is primarily color balance (gray point). Light sources have very different colors, some “warmer” (more red/amber) and some “colder” (more blue). Normal images adjust for this colored light to make colors look normal; raw data does not. So the main difference is normalizing that color to look like you’d expect it to look.

The raw sensor data has crazy, varying hues, based on the sensor and filters used. (Filters are tiny colored lenses in front of a block of pixels.) When this sensor data is recorded, it doesn’t adjust for these colors, so you’ll have a block with one pixel behind a red filter, one behind a blue, and two behind green. This is the typical “Bayer” filter that most cameras use. To make the full image, the missing colors for each pixel are taken from adjacent pixels. So even though one pixel has only red data, the blue and green are taken from adjacent pixels to get a full-color pixel.
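A single RGGB block can be turned into one full-colour pixel like this (a deliberately naive sketch; real demosaicing interpolates across neighbouring blocks rather than collapsing each one):

```python
def demosaic_tile(tile):
    """Combine one 2x2 RGGB Bayer tile [[R, G], [G, B]] into a
    single (r, g, b) pixel, averaging the two green photosites."""
    (r, g1), (g2, b) = tile
    return (r, (g1 + g2) // 2, b)
```

For example, the tile `[[200, 100], [120, 50]]` yields the pixel `(200, 110, 50)`: the lone red and blue photosites pass through, and the two greens are averaged.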

Anonymous 0 Comments

Well, think about it this way. Let’s say you have a book. The story has not yet been written down; it exists only in your mind. It’s hard for me to peer into your mind and turn that into words, even with something like an MRI. You have to first translate those thoughts into text, then turn that text into sentences, and finally put all those sentences together into a story. There’s some editing that will take place, just so that you can actually make it a coherent story.

That’s what happens when you convert a raw file to an image. That raw file contains lots of data that is simply unusable by image formats. For example, the sensor may pick up colors that are not actually present in any color gamut. In the future you could possibly convert that same raw file into a compatible format for some future technology that has better color. However, even an uncompressed format like bitmap is going to have much of the original data removed in order to fit the image format.

Bitmap itself is uncompressed, but it is not necessarily lossless compared to a raw file. It throws away all the pieces that don’t fit the bitmap format just as a writer throws away all the pieces of the draft that don’t fit a finished story. However, if they keep that original draft they might take those pieces in the future and make another better story.

Tldr RAW just has more data than can fit into a standard image format because of color space limitations. Many formats are uncompressed, but not lossless compared to raw.

Anonymous 0 Comments

The big difference with raw images is that they offer more dynamic range – the difference between the darkest value and the lightest value.

Typical image formats use 8 bits per color per pixel, so the red values can range from 0 to 255.

The actual sensor in the camera has 13, 14, or even more bits per pixel, so it records far more distinct intensities for each of the colors. The RAW format stores all of that information.

If you use a program like Adobe’s Lightroom, you can then choose how to map those 14 bits of red (and the other colors) to the 8 bits of the final image.
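That mapping choice can be sketched as a simple scale-and-clip (the `exposure` multiplier here is a hypothetical knob, loosely mimicking a raw developer’s exposure slider, not Lightroom’s actual algorithm):

```python
def map_to_8bit(sample, bits=14, exposure=1.0):
    """Scale a high-bit-depth sensor value into 0..255, applying an
    exposure multiplier before clipping the highlights at 255."""
    max_in = (1 << bits) - 1
    value = sample / max_in * exposure * 255
    return min(255, round(value))
```

Raising the exposure spreads dark values across more output levels, recovering shadow detail, at the cost of clipping the brightest values to 255 — the same trade-off the rocket-exhaust example in this answer illustrates.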

Here’s a wonderful example using an image captured by space photographer John Kraus:


The left image shows the unprocessed version. He is capturing the detail in the exhaust of the rocket, which is extremely bright, so everything else is just black in the original 8-bit image. This is a daytime launch image.

But the camera he uses recorded a lot of detail in the black areas, and he is able to increase the brightness of that part of the image; it turns out it captured a ton of useful detail.

Working with raw images in a program like Lightroom is magic.