how is HDR different than simply adding more bits for finer steps between the same existing colors?

7 Answers

Anonymous 0 Comments

It allows you to store values beyond 1.0 in each colour, too. Depending on your software and hardware, this can be used in a few ways, but it basically boils down to storing… well, a higher range, not simply more values within the same range.
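
A minimal sketch of that distinction, with made-up pixel values:

```python
# SDR: channel values live in [0.0, 1.0]; anything brighter clips to 1.0.
sdr_pixel = (1.0, 0.95, 0.8)      # red is already at the ceiling

# Adding bits only subdivides that same [0.0, 1.0] range more finely:
step_8bit = 1 / 255               # ~0.0039 per step
step_10bit = 1 / 1023             # ~0.0010 per step, same ceiling

# HDR (e.g. floating-point, scene-referred data) keeps values above 1.0,
# so a highlight can be stored as genuinely "brighter than white":
hdr_pixel = (4.7, 1.3, 0.9)       # red is 4.7x reference white

print(max(hdr_pixel) > 1.0)       # True: more range, not just more steps
```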

Anonymous 0 Comments

I assume you mean video HDR and not photography HDR via editing.

HDR includes a few things:

**10-Bit color**: Normal (standard) is 8-bit, meaning 2^8 = 256, so red/green/blue each have 256 brightness levels, giving you nearly 17M color combos. 10-bit is 2^10 = 1024 levels per channel, which allows for over 1B combos (the arithmetic is worked out in the sketch after this list). This leads to finer color accuracy.

**Wider Color Gamut**: normal tv is rec.709, whereas HDR is rec.2020 (though most televisions, and many cameras go only up to P3). [Here is the comparison](https://image.benq.com/is/image/benqco/color-gamut-8?$ResponsivePreset$&fmt=png-alpha), biggest difference is in the greens.

**Higher nit count**: nits are a measurement of brightness, like lumens. For normal content the reference maximum is 100 nits. For Dolby Vision it is 10,000 (though many televisions/projectors obviously cannot get that bright).
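
For reference, the arithmetic behind the bit-depth numbers above (plain Python, nothing HDR-specific):

```python
levels_8bit = 2 ** 8              # 256 levels per channel
levels_10bit = 2 ** 10            # 1024 levels per channel

print(levels_8bit ** 3)           # 16777216   (~17M color combos)
print(levels_10bit ** 3)          # 1073741824 (~1.07B color combos)
```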

Anonymous 0 Comments

From a photography perspective, which is the root of HDR, it’s about capturing color/detail where the sensor would’ve otherwise only seen black or white. Without getting too bogged down in details, the camera sensor has a ‘range’ of brightness where it perceives color and detail. Anything brighter than that shows as pure white; anything darker, pure black. The job of the photographer is to pick the right exposure so that the maximum color/detail is captured and the least information is clipped (pure black/white). What HDR does is recover the color and detail in those clipped areas.

Before HDR tech, we would achieve this by ‘bracketing’ exposures: set the camera on a tripod and take one picture at the correct exposure, then take several more frames that are over- and underexposed to capture the maximum information, then fill in the clipped areas of the first frame with the detail/color from the under/overexposed frames in Photoshop.
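
For the curious, here is a toy version of that merge, assuming NumPy; the hat-shaped weighting is a simplified, Debevec-style choice, not Photoshop’s actual algorithm:

```python
import numpy as np

def merge_brackets(images, exposure_times):
    """Toy HDR merge from bracketed exposures: a hat-shaped weight trusts
    mid-tone pixels and down-weights clipped ones, then each frame is
    normalized by its exposure time.  A simplified sketch only."""
    images = [np.asarray(im, dtype=np.float64) for im in images]  # values in [0, 1]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)   # 0 at pure black/white, 1 at mid-grey
        num += w * (im / t)                # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-8)     # HDR radiance map (relative units)

# Three brackets of the same 2x2 scene: underexposed, correct, overexposed.
dark   = np.array([[0.02, 0.10], [0.50, 0.90]])
mid    = np.array([[0.08, 0.40], [1.00, 1.00]])   # highlights clipped
bright = np.array([[0.30, 1.00], [1.00, 1.00]])   # only shadows usable
print(merge_brackets([dark, mid, bright], [0.25, 1.0, 4.0]))
```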

Anonymous 0 Comments

A lot of technical answers, because in the end it’s a technical issue and the exact answer will depend on which HDR standard you’re referring to.

But ignoring those, your question almost answers itself: at the level of the bits themselves, there isn’t one. The difference is in how the extra values are interpreted.

Let’s say you only had 1 bit, and assigned its two values to black and white, where you presume white to be 300 nits on common displays.

Now you add another bit, for four values total. Suddenly you can have, say, 33% grey and 67% grey as well, and you have increased your color fidelity.

But what if, instead, you kept the old black and white, and the two new values were interpreted as being ‘brighter than white’, to be displayed on displays that can go up to 600 nits?

You just invented an HDR standard. A crappy one with only 4 levels, and between drivers, GPUs, and displays, those 300-nit displays might still just show it as 4 levels of grey within their capabilities, or do tone mapping on the fly.
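
Spelled out as a sketch, with all numbers taken from the thought experiment above (not any real spec):

```python
# The made-up 2-bit "standard": codes 0-1 are the old SDR black and white,
# codes 2-3 are "brighter than white".  All nit values are invented here.
CODE_TO_NITS = {0: 0, 1: 300, 2: 450, 3: 600}

def display(code, panel_peak_nits):
    target = CODE_TO_NITS[code]
    if target <= panel_peak_nits:
        return target                # panel can show it faithfully
    # On-the-fly tone mapping: squeeze out-of-range levels into what the
    # panel can do (here, a crude linear scale).
    return target * panel_peak_nits / max(CODE_TO_NITS.values())

print(display(3, 600))   # 600    -- HDR panel shows full brightness
print(display(3, 300))   # 300.0  -- SDR panel tone-maps it down
print(display(2, 300))   # 225.0  -- mid-HDR level becomes a dimmer grey
```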

Anonymous 0 Comments

You basically answered your question.

The difference is that they’re not only adding more steps (higher bit depth) but also extending the color “range” itself. So (1,1,1) in some HDR standard would be brighter than (1,1,1) in, say, sRGB.

Also, adding more steps in between isn’t “required” for HDR per se; it’s just that the typical 8 bits per channel (256 levels) is already stretched thin even in sRGB (you get banding because the steps are too coarse). With a higher dynamic range to express, that only gets worse, so almost all HDR standards use at least 10 bits per channel (1024 levels) or more.
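
To make “brighter (1,1,1)” concrete: HDR10 and Dolby Vision use the PQ curve from SMPTE ST 2084, which assigns each code value an absolute luminance. A quick sketch with the spec’s constants (the 0.508 test value is approximate):

```python
def pq_eotf_nits(signal):
    """SMPTE ST 2084 (PQ) EOTF: maps a [0, 1] HDR code value to absolute
    luminance in nits.  Constants are taken from the spec."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

# The same "maximum" code value means very different brightness:
print(pq_eotf_nits(1.0))     # 10000.0 nits under PQ
print(pq_eotf_nits(0.508))   # ~100 nits, roughly where SDR white lands
# ...whereas (1,1,1) in SDR video is defined as ~100-nit reference white.
```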

Anonymous 0 Comments

HDR refers to the range between “darkest” and “brightest” being higher than in older technologies. The “dynamic range” is that range of brightness.

The problem with having a higher range (mainly achieved by brighter whites) is that you need finer gradations between individual steps so that you don’t get visible stripes (banding). The solution is to add more bits to get those finer steps.

A similar issue occurs with modern wide colour gamuts. The range of colours recordable is greater, so the steps between individual levels are larger, and you may get colour banding unless you have more bits to define finer steps.

One problem with HDR is what you do when an image has a higher dynamic range than the screen it is to be displayed on. There has to be some type of “tone mapping” process to convert the levels. Different tone-mapping techniques have different effects, so modern HDR data formats include hints as to what type of tone mapping is likely to work best according to the creator’s intent.
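
A minimal sketch of what one such tone-mapping choice can look like (a simple global Reinhard-style curve; the 300-nit display peak is just an example, and real TVs pick among many different curves):

```python
def tone_map_nits(scene_nits, display_peak_nits):
    """Map an HDR luminance (nits) onto a display with a lower peak using
    a simple Reinhard curve, L / (1 + L).  One of many possible choices;
    the metadata hints exist precisely because the choice matters."""
    l = scene_nits / display_peak_nits          # luminance relative to panel peak
    return display_peak_nits * l / (1.0 + l)    # compresses highlights, never clips

for nits in (50, 100, 300, 1000, 4000):
    print(nits, "->", round(tone_map_nits(nits, 300), 1))
# 50 -> 42.9, 100 -> 75.0, 300 -> 150.0, 1000 -> 230.8, 4000 -> 279.1
```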

Anonymous 0 Comments

“Just adding bits” costs memory, size, power, and speed, and makes it more difficult to recover a clean signal. As others have said, it’s more about adding range than resolution; HDR allows you to “hack” more bits out of the system.