How can Dolby Vision and the like provide more saturation and more brightness vs SDR? Why can’t SDR just tell the monitor to go to its maximum brightness or use the most saturated colors it can produce?

In all “demos” they show a slightly desaturated / dull / dark image as the SDR and the opposite as Dolby Vision (or whatever).

While I understand that DV can provide more resolution (e.g. finer / more gradations between shades or colors), I don’t understand what stops SDR from simply asking for maximum pure red (as in 255:0:0 in RGB). I understand that the equivalent in DV would be something like 1023:0:0, but if both are mapped to the maximum red a monitor can produce, I don’t get how DV can be brighter or more saturated.

7 Answers

Anonymous 0 Comments

So it depends on the type of panel. If it’s OLED, it’s very easy to get that high contrast, because each pixel emits its own light and is controlled independently of the others.

If we are talking about LED panels (actually just an LCD screen over an LED backlight), there are three backlight types: direct backlight, edge-lit, and full-array local dimming. The LCD pixels themselves cannot provide sufficient contrast; they are colored filters that filter the backlight (the LEDs) to display the colors.

To enable higher contrast, the intensity of the backlight needs to vary. Edge-lit panels have individual LEDs around the edge of the display, so they can adjust the brightness depending on what is being displayed: if an area is black, the nearby LEDs can be shut off to make that zone darker, compared to a single backlight that is always lit.

Full-array local dimming takes this a step further by placing a 2D array of LEDs behind the LCD, so the localized contrast is even more dramatic; obviously, the more zones it has, the better the local contrast will be.
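
To make the idea concrete, here is a minimal sketch of a local-dimming policy, assuming the simplest possible rule of driving each zone’s backlight by the brightest pixel it covers (real TVs use far more sophisticated algorithms and also compensate in the LCD layer):

```python
import numpy as np

def local_dimming_zones(luma, zones_y=8, zones_x=12):
    """Toy full-array local dimming: split the frame into a grid of zones
    and set each zone's backlight to the brightest pixel it covers.
    `luma` is a 2D array of pixel brightness in the range 0.0..1.0."""
    h, w = luma.shape
    backlight = np.zeros((zones_y, zones_x))
    for zy in range(zones_y):
        for zx in range(zones_x):
            tile = luma[zy * h // zones_y:(zy + 1) * h // zones_y,
                        zx * w // zones_x:(zx + 1) * w // zones_x]
            backlight[zy, zx] = tile.max()  # dark zones get a dim (or off) backlight
    return backlight

# A frame that is black except for one small bright highlight:
frame = np.zeros((1080, 1920))
frame[100:200, 1600:1700] = 1.0
print(local_dimming_zones(frame))  # only the zones under the highlight light up
```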

Dolby Vision is just a certification saying that a particular monitor meets certain color/contrast criteria defined by Dolby Vision’s parameters. Some companies might choose not to pay for such certification.

Anonymous 0 Comments

It doesn’t. Not necessarily.

You need all the pieces. Brightness and contrast constraints are only part of the requirements for HDR / Dolby Vision certification.

You could have a regular SDR monitor that produces colors just as bright and dark as HDR, of course. Earlier OLED sets would probably fit the bill there brightness-wise even though they don’t support all the other specifications.

For me the appeal of HDR was always in the color detail – finer steps between colors. I watch a lot of scifi and the terrible color banding in space shots has always bothered me. I was so excited to switch to an HDR tv. Sadly, convenience won out in my household and my switch to primarily consuming streaming video happened at about the same time. So I’ll be forever plagued by video compression that negates much of the benefit of the improved physical device.

Anonymous 0 Comments

Sounds like you have a pretty good understanding of the increase in steps of colour information.

I believe that stretching those 256 “steps” across the much wider darkest-to-lightest range that the 1024 HDR “steps” cover would greatly increase colour banding (blocking).
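
A rough back-of-the-envelope sketch of why, assuming a plain 2.2 gamma, a ~100 nit SDR peak, and a naive stretch of the same 8-bit codes to a 1,000 nit peak (the numbers are illustrative, not taken from any standard):

```python
# Brightness jump between two adjacent 8-bit codes near white,
# assuming a simple 2.2 gamma curve (illustrative numbers only).
def luminance(code, peak_nits, gamma=2.2, max_code=255):
    return peak_nits * (code / max_code) ** gamma

for peak in (100, 1000):  # SDR-ish peak vs. the same 8-bit codes stretched to 1000 nits
    step = luminance(255, peak) - luminance(254, peak)
    print(f"peak {peak} nits: last step is about {step:.2f} nits")
# With only 256 codes, stretching the same steps over 10x the brightness
# makes every step 10x bigger, which is exactly where visible banding comes from.
```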

Anonymous 0 Comments

Two things here:

1. The TV’s ability to actually produce very bright highlights and inky blacks *at the same time*. Traditional backlit LCD/LED screens are very bad at this because the solid backlight is always on and “bleeds” through the screen in the dark areas. This means that if the brights get brighter, the darks also get brighter. Newer OLED and other HDR-capable screens are much better at handling this.
2. The processing side of things. Almost everything outside professional video/photo work is done in “8-bit color,” i.e. three separate 0–255 numbers are sent for the red, green, and blue components of each pixel. This content *already exists* and wouldn’t look correct if you tried to map that 0–255 range to an HDR 0–2000 or whatever: you’d get things like a white piece of drywall in a movie suddenly shining like the sun (see the sketch after this list). Therefore you actually need processing and software to handle the different video formats and differentiate between SDR and HDR content, while also allowing some user-level control over it.
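
A hedged illustration of that drywall problem, assuming a ~100 nit SDR reference white, a hypothetical 1,000 nit HDR display, and a simple 2.2 gamma (the exact numbers are assumptions for the example):

```python
# Why you can't just stretch SDR codes onto an HDR brightness range.
GAMMA = 2.2

def sdr_nits(code):        # how an SDR display shows this 8-bit code
    return 100 * (code / 255) ** GAMMA

def naive_hdr_nits(code):  # the same code blindly stretched to a 1000 nit peak
    return 1000 * (code / 255) ** GAMMA

drywall = 220              # a bright-but-ordinary white surface in the scene
print(round(sdr_nits(drywall)))        # ~72 nits: looks like normal painted drywall
print(round(naive_hdr_nits(drywall)))  # ~723 nits: now it glares like a light source
# This is why the player/TV must know whether content is SDR or HDR and
# apply the matching transfer curve instead of reusing the SDR mapping.
```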

Anonymous 0 Comments

This will be slightly oversimplified for ELI5 purposes.

This involves not just SDR vs HDR but also the question of color spaces. When we talk about SDR for TV, we usually assume a particular color space, like BT.601, which was used for SD TV, or BT.709, which is used for HD. Computer images usually use sRGB or Adobe RGB. These color spaces define the actual light you should get for a particular RGB value; in other words, when a pixel is R=255, exactly which red does that refer to? HDR uses yet another color space called Rec. 2100, which defines a much wider range of values, so a larger gamut of colors can be represented. That’s why HDR can show a “deeper red” than SDR can. Here’s a comparison:

https://en.wikipedia.org/wiki/Rec._2100

Edit:
I previously included this link which was the wrong spec, but it has a better chart with the other color spaces for comparison:
https://en.wikipedia.org/wiki/Rec._2020
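
To make the “same numbers, different colors” point concrete, here is a minimal sketch converting a fully saturated Rec.709 red into Rec.2020 coordinates, using the commonly published Rec.709-to-Rec.2020 linear-light matrix (treat the coefficients as approximate):

```python
import numpy as np

# Approximate linear-light conversion matrix from Rec.709 primaries to
# Rec.2020 primaries (coefficients as commonly published, e.g. in BT.2087).
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

pure_709_red = np.array([1.0, 0.0, 0.0])   # "255:0:0" in SDR, expressed in linear light
in_2020 = M_709_TO_2020 @ pure_709_red
print(in_2020)   # ~[0.63, 0.07, 0.02] -- well inside the Rec.2020 gamut
# A Rec.2020 [1, 0, 0], by contrast, is a deeper, more saturated red than
# Rec.709 can even request, which is why the same "maximum red" numbers
# mean different colors.
```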

Anonymous 0 Comments

> While I understand that DV can provide more resolution (e.g. finer / more gradations between shades or colors), I don’t understand what stops SDR from simply asking for maximum pure red (as in 255:0:0 in RGB).

Because if the video file is SDR, the monitor sees that and knows the color space is Rec.709. When the monitor sees an HDR video file, it switches modes to Rec.2020. It also knows that the luminance mapping changes: pure white is 100 nits for SDR, whereas HDR goes up to 10,000 nits.

You could theoretically take a 1080p SDR video and alter the data to be HDR, but it would take a lot of effort for it not to look terribly inaccurate.

Anonymous 0 Comments

The problem here is the video format. Many SDR displays are physically capable of HDR-like brightness, but cannot process HDR video signals. The processor in the TV cannot tell the panel to output that much light automatically, even if you could get there by turning the brightness controls up manually.

The way brightness works in SDR digital video is with something called a gamma curve. This curve is basically an exponential function that says “when the video file says this pixel is X, you output Y amount of light.” This allows video to have bright scenes without the dark scenes becoming a black mess, and vice versa. Now, digital SDR video doesn’t allow the gamma function to be set in the video file, which means the gamma curve is the same for ALL video files in existence, so it needs to be general enough to reasonably fit all kinds of video content.
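
A minimal sketch of what that fixed curve does, assuming a plain 2.4 display gamma (real SDR uses the BT.709/BT.1886 curves, which are close to but not exactly this):

```python
# With a gamma curve, code values are spent where the eye needs them:
# the bottom half of the 8-bit range covers only a small slice of the
# light output (illustrative 2.4 gamma, light normalized to 0..1).
GAMMA = 2.4

def light_at_code(code):
    return (code / 255) ** GAMMA

print(round(light_at_code(128), 3))   # ~0.191 -> half the codes cover under 20% of the light
print(light_at_code(255))             # 1.0
# This non-linear allocation is what keeps dark scenes from turning into a
# blocky mess while the same 256 codes still reach full brightness.
```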

HDR, however, uses different functions that allow for brighter images without crushing the shadow detail in dark scenes. Two such functions are hybrid log-gamma (HLG, used in the HDR format of the same name) and perceptual quantization (PQ, used in HDR10 and Dolby Vision), which behave better at both extremes.
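
For the curious, here is a small sketch of the PQ curve (the SMPTE ST 2084 EOTF used by HDR10 and Dolby Vision). The constants are the published ones, but treat this as an illustration rather than a reference implementation:

```python
# PQ (perceptual quantizer) EOTF: maps a normalized signal value (0..1)
# to absolute light output in nits, up to 10,000 nits (SMPTE ST 2084).
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(signal):
    p = signal ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

print(round(pq_to_nits(0.5)))   # ~92 nits: half the signal range is still "SDR-ish" brightness
print(round(pq_to_nits(1.0)))   # 10000 nits: the very top is reserved for extreme highlights
```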

So in your example, the limit is what 255 means under the standard gamma curve, which is “bright, but not that bright.” In HDR, the TV knows that under the other functions the top code value means “really, *really* bright.”

Now, trying to add detail to the bright areas without removing it from the dark ones with the same amount of information is hard, so most HDR video uses 10 bits per color per pixel (instead of the 8 bits in normal SDR) to prevent color banding. This is separate from any HDR standard; you could in theory have 10-bit SDR video or 8-bit HDR video, but they would be overkill and look kinda bad, respectively.
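
Just to put numbers on that, a quick sketch of how finely each bit depth slices the same signal range (pure counting, independent of which curve is applied on top):

```python
# How finely each bit depth divides the same 0..100% signal range.
for bits in (8, 10, 12):
    codes = 2 ** bits
    print(f"{bits}-bit: {codes} codes, each step is {100 / (codes - 1):.3f}% of the range")
# 8-bit: ~0.392% per step; 10-bit: ~0.098% per step. Spread across HDR's much
# larger brightness range, the coarser 8-bit steps become big enough to see as
# banding, which is why HDR content is usually mastered at 10 bits or more.
```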

Now, you might have heard of “HDR metadata”; all that is just a bunch of extra information at the beginning of the file (for HDR10) or alongside the video data (for Dolby Vision) telling the display how to present the image in case it doesn’t have the required brightness (since these standards are mastered for 1,000 or even 10,000 nits, which very few TVs can achieve). A normal SDR-only TV cannot use this information to present the video because it does not know how to read it.
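
As a loose illustration of what a display might do with that metadata, here is a toy tone-mapping sketch; the roll-off rule and the way the content peak is used are simplified assumptions for the example, not how any particular TV or the Dolby Vision spec actually does it:

```python
# Toy tone mapping: squeeze content mastered for a very bright display onto a
# dimmer panel, using the content's peak brightness reported in the metadata.
def tone_map(nits, content_peak_nits, panel_peak_nits):
    """Pass dark and midtone values through unchanged and compress the
    highlights so nothing exceeds what the panel can physically output."""
    knee = 0.75 * panel_peak_nits            # below this, leave the image alone
    if nits <= knee:
        return nits
    # Squeeze everything between the knee and the content peak into the
    # remaining headroom of the panel.
    fraction = (nits - knee) / (content_peak_nits - knee)
    return knee + fraction * (panel_peak_nits - knee)

# Content mastered at 4000 nits, shown on an 800 nit panel:
for scene_nits in (100, 500, 1000, 4000):
    print(scene_nits, "->", round(tone_map(scene_nits, 4000, 800)))
```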

Also, color saturation has more to do with a larger color space, or gamut. Normal video uses BT.709 color, while some HDR video uses BT.2020, which covers more colors. [This] is a comparison of what I’m talking about; so when an HDR video says “green: 255”, it not only means a brighter green (thanks to the modified brightness curves) but also a deeper green (thanks to the wider color gamut). Again, like 10-bit depth, these are not technically tied together (you *could* use one without the other), but they often go hand in hand.