There are lots of image formats with more than 8 bits of color depth. You have been able to get them as scanner output for decades, often in TIFF format.
Common PNG images support 8 or 16 bits per channel. The RAW files you can get out of cameras have more than 8 bits in most cases, but the format is camera-specific.
So there are plenty of image formats with bit depths higher than 8; there just isn't a common one that cameras output and most programs can handle. Most images are JPEGs, and that format supports only 8-bit images.
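To see why bit depth matters, here is a minimal sketch of what "8-bit vs 16-bit" means in practice: the same light intensity gets snapped to one of 256 levels at 8 bits, but one of 65,536 levels at 16 bits. The `quantize` helper below is an illustrative name, not from any real library.

```python
# Sketch: mapping a normalized light intensity onto the integer codes
# available at a given bit depth. More bits = finer tonal steps.
def quantize(value: float, bits: int) -> int:
    """Map a float in [0, 1] to the nearest integer code at this bit depth."""
    max_code = (1 << bits) - 1  # 255 for 8-bit, 65535 for 16-bit
    return round(value * max_code)

print(quantize(0.5, 8))   # mid-gray in an 8-bit image
print(quantize(0.5, 16))  # mid-gray in a 16-bit image, far finer scale
```

The gap between neighboring codes is where banding comes from: an 8-bit gradient has only 256 distinct shades per channel to work with, which is why higher-bit-depth formats are preferred for editing and HDR.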
HDR video content contains just a single image for each frame to be displayed. Why would HDR still images be different? Sure, the limitations of cameras mean that both still and video HDR may be captured as multiple images and then merged, but that merging is not done by display devices.
Are you asking about a format for use by display devices or about a format for use between the camera and post-processing?
HDR in pictures is usually achieved by taking three separate photos – a bright one, a regular one, and a dark one – and merging them so the highlights, shadows, and mid-tones each come from the exposure that captured them best.
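That merge can be sketched per pixel: each exposure's value is converted back to an estimate of scene brightness (value divided by exposure time), and the estimates are averaged with more weight on mid-tones, since near-black and near-white pixels carry the least information. This is a simplified illustration, not any camera's actual algorithm, and `weight`/`merge_pixel` are made-up names for the sketch.

```python
# Minimal sketch of merging three exposures of one pixel into a single
# high-dynamic-range radiance estimate. Pixel values are assumed to be
# normalized to [0, 1]; exposure times are relative.

def weight(v: float) -> float:
    """Trust mid-tones most; clipped blacks and whites count for little."""
    return 1.0 - abs(2.0 * v - 1.0)

def merge_pixel(values: list[float], times: list[float]) -> float:
    """Weighted average of per-exposure radiance estimates (value / time)."""
    num = sum(weight(v) * (v / t) for v, t in zip(values, times))
    den = sum(weight(v) for v in values)
    if den == 0.0:  # every exposure clipped; fall back to the middle shot
        mid = len(values) // 2
        return values[mid] / times[mid]
    return num / den

# Same pixel captured dark, normal, and bright (relative exposures 1/4x, 1x, 4x):
radiance = merge_pixel([0.06, 0.25, 0.95], [0.25, 1.0, 4.0])
```

The three estimates here agree roughly (0.24, 0.25, 0.24), so the merged value lands near 0.25; the point of the weighting is that the nearly clipped bright shot barely influences the result.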
HDR doesn’t actually refer to taking three pictures. All HDR means is that whatever you’re using can show very deep blacks and very bright whites. It’s basically a high-contrast picture/video.