With HDR-capable TVs becoming commonplace, why can’t we have an HDR format for pictures that takes advantage of this HDR capability?

Please don’t confuse this with the single HDR image created by post-processing. Displaying that image on an ‘HDR’ TV will not take advantage of the TV’s HDR capability.


There are lots of image formats with higher than 8-bit color depth. You have been able to get such images as scanner output for decades, often in TIFF format.

Common PNG images support 8 and 16 bits per channel. The RAW files you can get out of cameras have more than 8 bits in most cases, but the format is camera-specific.

So there are plenty of image formats with bit depths higher than 8; there just isn’t a common one that cameras output and that most programs can handle. Most images are JPEG, and that format only supports 8-bit color.
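To put some numbers on that gap: each extra bit of depth doubles the distinct tonal levels a channel can hold. A quick sketch in plain Python (the function name is just for illustration):

```python
def levels(bits):
    """Distinct values one color channel can hold at a given bit depth."""
    return 2 ** bits

# 8-bit JPEG vs the deeper depths TIFF/PNG/RAW can carry:
for bits in (8, 10, 12, 16):
    print(f"{bits}-bit: {levels(bits)} levels per channel")
```

An 8-bit channel tops out at 256 levels, while a 16-bit channel has 65,536 — room for the much wider brightness range an HDR display can actually show.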

Such a file format is possible, but not as desirable as you’d think. Using two TIFF files for HDR lets all the code that works on TIFF images be reused. Since TIFF is a well-established standard, the idea of making a new standard doesn’t draw many volunteers.

HDR in pictures is usually achieved by exposure bracketing – taking three separate photos: one overexposed, one normal, and one underexposed – and then merging them so that detail from the highlights, midtones, and shadows all ends up in the final image.
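That merge step can be sketched in a few lines. This is a toy, one-row example of exposure fusion — the triangular weighting and the function names here are illustrative assumptions, not how any particular camera or editor does it:

```python
def weight(v):
    """Favor well-exposed pixels: weight peaks at mid-gray, drops to zero
    at pure black (0) and pure white (255), where detail is clipped."""
    return max(0.0, 1.0 - abs(v - 127.5) / 127.5)

def fuse(dark, normal, bright):
    """Per-pixel weighted average of three 8-bit exposures."""
    fused = []
    for d, n, b in zip(dark, normal, bright):
        ws = [weight(d), weight(n), weight(b)]
        total = sum(ws) or 1.0  # avoid divide-by-zero when all three clip
        fused.append(sum(w * v for w, v in zip(ws, (d, n, b))) / total)
    return fused

# Three one-row "images" of the same scene: shadows clip in the dark
# shot, highlights clip in the bright and normal shots.
dark   = [0,   40, 120]
normal = [10, 128, 255]
bright = [60, 230, 255]
print(fuse(dark, normal, bright))
# The last pixel clips at 255 in two shots, so the merge takes its
# value entirely from the one well-exposed (dark) shot.
```

Real tools do this per channel across megapixel images, align the frames first, and then tone-map the result, but the core idea is the same: each output pixel borrows from whichever exposure captured it best.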

HDR doesn’t actually refer to taking three pictures. All HDR means is that whatever you’re using can show very deep blacks and very bright whites. It’s basically a high-contrast picture/video.

HDR video content contains just a single image for each frame to be displayed. Why would HDR still images be different? Sure, the limitations of cameras mean that both still and video HDR may be captured as multiple images and then merged, but that merging is not done by display devices.

Are you asking about a format for use by display devices or about a format for use between the camera and post-processing?