What are RAW images and how do they differ from normal images?


I’ve listened to many tech reviewers and photo editors saying that RAW images are better and have been intrigued. Also, what are its merits and demerits?

In: Technology

RAW format usually includes everything the camera picked up at the moment of taking the photo, so you have a lot of data to work with if you want to make a good picture out of it.

RAW images basically have no compression or image processing: straight-up raw byte values for each of the colours of each pixel, direct from the sensor, along with metadata and (if your camera has a built-in screen) usually a thumbnail.

The downside is they are massive. If you have a 30-megapixel DSLR, each picture is going to be some multiple of that, depending on how many bytes per pixel it stores and how the sensor encodes its data. At the most basic, you’d have 3 bytes per pixel, so 30 megapixels ≈ 90 MB per image. And once you get to massive image sizes, not only is storage space an issue (less so now that 100+ GB flash storage is a thing), the storage _speed_ becomes an issue. Unless you have lots of onboard RAM that can hold dozens of sequential exposures, how fast you can write the data out becomes a bottleneck. This is what separates a consumer DSLR from a pro DSLR capable of 15 frames per second in a burst for sports photography: a memory cache and data-handling speeds that can keep up with such a ludicrous data rate.
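The back-of-the-envelope arithmetic above can be sketched like this (the 3-bytes-per-pixel figure is the simplification from the paragraph, not what any specific camera actually writes; real RAW formats often pack 12- or 14-bit sensor values):

```python
# Rough uncompressed RAW size, assuming 3 bytes per pixel as above.
def raw_size_mb(megapixels, bytes_per_pixel=3):
    # 1 megapixel at 1 byte/pixel is ~1 MB, so this is just a product.
    return megapixels * bytes_per_pixel

per_image = raw_size_mb(30)   # ~90 MB for a 30 MP sensor
burst_rate = per_image * 15   # ~1350 MB/s sustained at 15 frames per second
```

That burst figure is why the camera’s write speed and cache, not just the card capacity, end up being the limit.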

The upside is it has absolutely no processing, smoothing, image correction or anything. It’s straight from the sensor, so it really is a true representation of what you viewed through the viewfinder.

RAW images = what your camera sensor sees, normal images = what your screen shows.

There are a few differences: a sensor doesn’t really record colored pixels. Color is achieved by placing a filter in front of it – so some pixels will record only red light, some only green and some only blue. While your monitor works in a similar way, there those “subpixels” are generally grouped together into full-color pixel units. This translation has to be done before an image can be shown.
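A toy sketch of that grouping idea (assuming an RGGB Bayer layout, which is one common filter pattern; real cameras “demosaic” by interpolating so they keep full resolution, rather than shrinking the image like this):

```python
# Toy "demosaic": group each 2x2 RGGB block of single-colour sensor
# readings into one full-colour pixel. Real demosaicing interpolates
# missing colours per pixel instead; this only shows the grouping idea.
def group_rggb(mosaic):
    out = []
    for y in range(0, len(mosaic), 2):
        row = []
        for x in range(0, len(mosaic[0]), 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # two green sites
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

sensor = [
    [200,  90, 200,  90],   # R G R G
    [ 90,  50,  90,  50],   # G B G B
    [200,  90, 200,  90],
    [ 90,  50,  90,  50],
]
pixels = group_rggb(sensor)   # a 2x2 grid of (R, G, B) pixels
```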

And there are other things that have to be done between the sensor recording an image and a monitor displaying it. A camera would typically remove image noise, correct white balance (basically, remove any tinting that the lighting of your scene might have), and a lot of other things. RAW images don’t include any of that and are taken directly from the sensor data, including more detail in over- or underexposed areas that would just appear “completely black” or “completely white” in a processed image.
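One of those in-between steps, white balance, can be sketched with the simple “grey world” trick: assume the scene averages out to grey and scale each channel until the three channel averages match. (Real cameras use much smarter heuristics; this is just to show the kind of correction involved.)

```python
# Toy grey-world white balance: scale each channel so that the red,
# green and blue averages all become equal, removing an overall tint.
def grey_world(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    return [tuple(p[c] * grey / means[c] for c in range(3)) for p in pixels]

# A warm, orange-tinted image: red average too high, blue too low.
tinted = [(220, 150, 80), (180, 130, 70)]
balanced = grey_world(tinted)   # channel averages are now equal
```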

That means that RAW images aren’t really “better”; in fact, by themselves they are much worse pictures than a processed JPEG. They just allow you the freedom of doing those processing steps yourself (and adjusting the parameters to your liking) to get an even better end result – assuming you and your software are more capable than your camera (which might make some wrong guesses about how you want your photo to look).


Raw Image is every bit of data the camera sensor captured. Huge files, but the maximum amount of information to play around with when editing.

Normal images (typically this means JPGs) are compressed, so they are much smaller and easier to share, but a lot of data has been lost, so they can’t be edited as much.
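The “data has been lost” point can be illustrated with highlight clipping (a toy sketch, assuming a 12-bit sensor with values 0–4095 and a naive conversion down to 8-bit, 0–255):

```python
# Three distinct bright values the 12-bit RAW file still tells apart.
raw = [3000, 3500, 4095]

# A too-bright 8-bit conversion clips all of them to the same pure white.
jpeg = [min(v // 8, 255) for v in raw]   # -> [255, 255, 255]

# "Re-developing" the RAW darker recovers the differences...
darker_raw = [v // 16 for v in raw]      # -> [187, 218, 255]

# ...but darkening the clipped JPEG cannot: the detail is gone.
darker_jpeg = [v // 2 for v in jpeg]     # -> [127, 127, 127]
```

This is the editing headroom people mean when they say RAW gives you more to “play around with”.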