How can Dolby Vision and the like provide more saturation and more brightness vs SDR? Why can’t SDR just tell the monitor to go to its maximum brightness or use the saturated colors that it can?

In all “demos” they show a slightly desaturated / dull / dark image as the SDR and the opposite as Dolby Vision (or whatever).

While I understand that DV can provide more resolution (e.g. finer / more gradations between shades or colors), I don’t understand what stops SDR from simply asking for maximum pure red (as in 255:0:0 in RGB). I understand that the equivalent in DV would be something like 1023:0:0, but if both are mapped to the maximum red a monitor can produce, I don’t get how DV can be brighter or more saturated.

7 Answers

Anonymous 0 Comments

The problem here is the video format. Many SDR displays are physically capable of HDR-like brightness, but cannot process HDR video signals. The processor in the TV cannot tell the panel to output that much light automatically, even if you could reach it by turning up the brightness controls manually.

The way SDR digital video handles brightness is with something called a gamma curve. This curve is basically a power function that says “when the video file says this pixel is X, you output Y amount of light”. This allows video to have bright scenes without the dark scenes being a black mess, and vice versa. Now, digital SDR video doesn’t allow for setting the gamma function in the video file, which means the gamma curve is the same for ALL video files in existence, so it needs to be generic enough to reasonably fit all kinds of video content.
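As a rough sketch of what that curve does (the gamma value of 2.4 and the 100-nit peak are assumptions roughly in line with SDR practice, not any standard’s exact formula):

```python
# Rough sketch of an SDR-style gamma curve: code value -> relative light output.
# Assumes a simple power law with gamma = 2.4 and a 100-nit reference peak;
# real displays and standards differ in detail, so treat this as illustration.

GAMMA = 2.4          # assumed display gamma
PEAK_NITS = 100.0    # assumed SDR reference peak brightness

def sdr_code_to_nits(code: int, bit_depth: int = 8) -> float:
    """Map an integer code value (0..2^bit_depth - 1) to light output in nits."""
    max_code = (1 << bit_depth) - 1
    normalized = code / max_code          # 0.0 .. 1.0
    return PEAK_NITS * normalized ** GAMMA

# Dark codes land very close together in light output, bright codes far apart:
for code in (16, 64, 128, 255):
    print(code, round(sdr_code_to_nits(code), 3), "nits")
```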

HDR, however, uses different functions that allow for brighter images without crushing the black detail in dark scenes. Two of these functions are hybrid log-gamma (HLG, used in an HDR format of the same name) and perceptual quantization (PQ, used in HDR10 and Dolby Vision), which behave better at both extremes.

So in your example, the limit is what 255 corresponds to under the standard gamma curve, which is “bright, but not that bright”. In HDR, with the other functions, the TV knows that the top code value means “really, *really* bright”.
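To make that concrete, here is a sketch of the PQ curve (the SMPTE ST 2084 EOTF used by HDR10 and Dolby Vision) next to the gamma sketch above. The PQ constants come from its published definition; the 100-nit, gamma-2.4 SDR curve it is compared against is an assumption for illustration:

```python
# Sketch of the PQ (SMPTE ST 2084) EOTF used by HDR10 and Dolby Vision,
# compared against an assumed 100-nit, gamma-2.4 SDR curve.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_to_nits(normalized_code: float) -> float:
    """Map a normalized PQ code value (0.0..1.0) to absolute luminance in nits."""
    e = normalized_code ** (1 / M2)
    y = (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)
    return 10000.0 * y

def sdr_gamma_to_nits(normalized_code: float, gamma: float = 2.4,
                      peak_nits: float = 100.0) -> float:
    """Assumed SDR curve: simple power law topping out at a 100-nit reference."""
    return peak_nits * normalized_code ** gamma

# The *top* code value means very different things under the two curves:
print("SDR max code:", sdr_gamma_to_nits(1.0), "nits")   # ~100 nits
print("PQ  max code:", pq_to_nits(1.0), "nits")          # 10000 nits
```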

Now, trying to add detail to the bright areas without removing it from the dark ones with the same amount of information is hard, so most HDR video uses 10 bits per color per pixel (instead of the 8 bits in normal SDR) to prevent color banding. This is separate from any HDR standard; you could in theory have 10-bit SDR video or 8-bit HDR video, but they would be overkill and look kinda bad, respectively.
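A quick way to see why the extra bits matter: quantize the same smooth brightness ramp to 8 and to 10 bits and count how many distinct steps land inside a narrow band. The ramp and the 20%-30% window below are arbitrary choices for illustration:

```python
# Sketch of why bit depth matters: quantize a smooth ramp to 8 and 10 bits
# and count how many distinct code values survive in a narrow brightness band.

def distinct_steps(bit_depth: int, lo: float = 0.20, hi: float = 0.30,
                   samples: int = 10000) -> int:
    max_code = (1 << bit_depth) - 1
    codes = set()
    for i in range(samples):
        value = lo + (hi - lo) * i / (samples - 1)   # smooth ramp in [lo, hi]
        codes.add(round(value * max_code))           # quantize to integer code
    return len(codes)

print("8-bit steps in the band :", distinct_steps(8))    # ~26 levels
print("10-bit steps in the band:", distinct_steps(10))   # ~103 levels
```

Four times as many steps across the same range is the difference between a visible band and a smooth gradient.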

Now, you might have heard of “HDR metadata”; all that is just a bunch of extra information at the beginning of the file (for HDR10) or alongside the video data (for Dolby Vision) telling the display how to present the image in case it doesn’t have the required brightness (since these standards are mastered at 1,000 or even 10,000 nits, which very few TVs can achieve). A normal SDR-only TV cannot use this information to present the video because it does not know how to read it.
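Very roughly, the display uses that metadata to tone-map: squeeze the mastered brightness range into whatever it can actually show. The helper below is a hypothetical, heavily simplified tone mapper (pass-through up to a knee, linear roll-off above it), not what any real TV or the Dolby Vision engine actually does:

```python
# Hypothetical, simplified tone mapping: squeeze content mastered for a bright
# display (e.g. 1000 nits, taken from the metadata) into a dimmer panel
# (e.g. 500 nits). Real TVs and Dolby Vision use far more sophisticated curves.

def tone_map(scene_nits: float, mastering_peak: float, display_peak: float,
             knee: float = 0.75) -> float:
    """Pass dark/mid tones through unchanged, roll off highlights above the knee."""
    knee_nits = knee * display_peak
    if scene_nits <= knee_nits:
        return scene_nits                      # below the knee: unchanged
    # Above the knee: compress the remaining range so mastering_peak lands
    # exactly at display_peak instead of clipping hard.
    excess = scene_nits - knee_nits
    headroom_in = mastering_peak - knee_nits
    headroom_out = display_peak - knee_nits
    return knee_nits + headroom_out * (excess / headroom_in)

for nits in (100, 400, 800, 1000):
    print(nits, "->", round(tone_map(nits, mastering_peak=1000, display_peak=500), 1))
```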

Also, color saturation has more to do with a larger color space, or gamut. Normal video uses BT.709 color, while HDR video typically uses BT.2020, which covers more colors. So when an HDR video says “green: 255”, it means not only a brighter green (thanks to the modified brightness curves) but also a deeper green (thanks to the wider color gamut). Again, like 10-bit depth, these are not technically related (you *could* use one without the other), but they often go hand in hand.
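For a sense of scale, these are the published chromaticity coordinates of the BT.709 and BT.2020 primaries. The “distance from white” number below is just a crude, hypothetical proxy for how saturated the purest green each gamut can describe is:

```python
# CIE xy chromaticity coordinates of the red/green/blue primaries for BT.709
# (SDR) and BT.2020 (common in HDR), plus the shared D65 white point.
# The distance-from-white figure is only a crude proxy for saturation.

import math

PRIMARIES = {
    "BT.709":  {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)},
    "BT.2020": {"R": (0.708, 0.292), "G": (0.170, 0.797), "B": (0.131, 0.046)},
}
D65_WHITE = (0.3127, 0.3290)

def distance_from_white(xy):
    return math.hypot(xy[0] - D65_WHITE[0], xy[1] - D65_WHITE[1])

# The same "maximum green" code value points at a much more saturated color
# on a BT.2020 display than on a BT.709 one:
for gamut, prims in PRIMARIES.items():
    print(gamut, "green distance from white:", round(distance_from_white(prims["G"]), 3))
```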
