Computer monitors use the R,G,B system to generate colors. They do not generate spectrally pure light, but provide a mixture that stimulates your eyes the same way a more complicated spectrum of light would. Since there are maybe 100,000 meaningfully distinct wavelengths in the visual range, and 3 primary colors, a *lot* of information gets lost. If you google “sRGB” and head over to Wikipedia, you will see a nice representation of the so-called “color space” of the human eye.
The thing is, an R,G,B system can only represent colors inside a triangle on that color-space plane. That is because you can’t have negative brightness of any of the three primary colors. The triangle formed by the three primary colors is called the “gamut” of an R,G,B system, and your monitor can normally only reproduce colors inside that gamut.
If you look on Wikipedia for “chimerical colors” you will see some interesting demonstrations of how you can exploit a peculiarity of your visual system to see colors outside the monitor’s gamut.
Colors are a continuous phenomenon. The exact wavelength (or combination of wavelengths) coming from the thing you are looking at registers as a particular color. Even a slight change in the wavelength means you’re looking at a slightly different color.
But computers do everything in discrete intervals. Say you measured the brightness of the red pixel on a scale from 1 to 10. The pixel cannot have a brightness of “5.5.” If you want more than 10 values, you can create a larger scale, but eventually you have to stop somewhere. The most common system in digital color uses 256 values for each color.
As you tick from one brightness setting to the next, you’re theoretically skipping over an infinite range of actual colors that occupy the space in between. In practice, this is rarely an actual problem. The RGB system can represent about 17 million colors, and one of them will be very close to the “true” color of interest. It’s mostly interesting as a piece of trivia or to people who are *really* interested in color fidelity across a range of digital and non-digital applications (i.e. Pantone and their corporate clients).
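As a rough sketch of that discretization (the function name and scale here are just for illustration, not from any particular library), here is how a continuous brightness gets snapped to one of 256 levels, and where the ~17 million figure comes from:

```python
# Sketch: snapping a continuous brightness in [0, 1] to one of 256 discrete levels.
def quantize(intensity: float, bits: int = 8) -> int:
    """Map a continuous intensity in [0, 1] to the nearest integer level."""
    levels = 2 ** bits                      # 256 levels for 8-bit color
    return round(intensity * (levels - 1))  # 0.0 -> 0, 1.0 -> 255

print(quantize(0.5))    # 128 -- and 0.501 lands on 128 as well: nearby
print(quantize(0.501))  # "true" colors collapse onto the same step
print((2 ** 8) ** 3)    # 16,777,216 -- the ~17 million representable RGB colors
```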
Because what is “every colour”? It’s not about brightness; it’s about the number of steps, from 0 to its maximum, that each R, G, B subpixel can operate over.
8-bit (16.7 million colours) is the norm because it’s fine for 99% of the stuff we watch. But if you need more, you can get monitors that do 10-bit (1.07 billion) or even 12-bit (68 billion) colours. Mostly we don’t, because it’s expensive and most people would not be able to tell the difference. It would also make streaming more bandwidth-heavy (much more costly) for no real benefit.
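Those counts follow directly from the bit depth: each channel has 2^bits steps, and a pixel mixes three channels. A quick check (plain arithmetic, nothing assumed):

```python
# Total displayable colors = (steps per channel) ** 3 channels.
for bits in (8, 10, 12):
    steps = 2 ** bits
    print(f"{bits}-bit: {steps} steps/channel, {steps ** 3:,} colors")
# 8-bit:  256 steps/channel,  16,777,216 colors  (~16.7m)
# 10-bit: 1024 steps/channel, 1,073,741,824 colors (~1.07b)
# 12-bit: 4096 steps/channel, 68,719,476,736 colors (~68b)
```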
Because despite what you may have been told, you cannot generate every visible colour from a finite set of primaries. Saying that red, green and blue are the primary colours of light means that they’re the optimal choice for generating as large a spread of colours as possible (assuming you’re limiting yourself to three, which you don’t have to: Sharp’s Quattron LCD screens use four primaries, red, green, blue and yellow). It doesn’t mean that they are fundamental in the way that elements are the fundamental components of chemicals.
there are a few parts to your question.
a) even if it were “just” a matter of making pixels “bright” (or more saturated), it’s easier said than done. tvs and monitors are made to specific standards and to understand certain formats; they literally wouldn’t understand a signal that somehow made them more “bright” (assuming you mean color saturation). it’s a chicken-and-egg problem that necessarily limits progress.
b) the colors you see on a monitor are “simulated” by red, green, and blue light coming from pixels. the pixels are not “actually” producing the colors, your eyes are essentially being “tricked” by having the red, green, and blue cones in your eyes excited. this is important to remember, because the more light you add from pixels, the closer you actually get to white (which is when all your cones are excited equally). it becomes pretty hard to produce colors at certain parts of the color gamut, because on a display it starts becoming white.
c) even if the tech practically existed, it likely isn’t cost-effective to mass-produce, especially for colors we hardly ever see. uptake on HDR/UHD vs SDR is a lot slower than HD vs SD and even slower than color vs black and white. very few people would pay huge amounts of money for a screen that replicates colors that are hardly ever used in media (precisely because they aren’t represented very easily on screens), which would make it even more expensive to produce.
There’s simply only so many colors you can represent with three mono-color emitters of limited intensity. (Red, Green, and Blue.) With pigments, it’s a similar story. The CMYK system (Cyan, Magenta, Yellow, and Black) used in printing has similar limitations; in fact, the color gamut of standard printing processes is even more limited than that of monitors. (This is why your local paint store has a heck of a lot more than four pigments in their machine; it’s necessary to cover a wider range of colors.)
A pretty short and funny summary of how computer colors and printed colors work, in the context of the Pantone system for describing colors, is [here.](https://youtu.be/_b78gAbGwVI?si=AJcllnLu7PFsRPvf)
Let’s call the two problem regions “violet” and “cyan”.
-----
**Violet** is a color that is even further from red and green than blue is. So to make it by mixing red, green, and blue light together, you’d need your red and green pixels to have *negative* brightness, which isn’t possible.
We could build computer screens that mixed red, green, and violet light together instead of red, green, and blue light, but our eyes aren’t very sensitive to violet light and so the violet pixels would have to be *very* bright. And violet is already the highest-energy color. You don’t want your computer screen to be able to sunburn or blind you.
-----
Meanwhile, **cyan** is a color in between green and blue. You might think: no problem, just create it by mixing green light and blue light. The trouble is that the color green is actually very close to the color red, so as you’re adding green light into the mixture, a bunch of redness is going to get in there too. So when we try to display cyan, instead of getting a mixture of green and blue (which would appear to our eyes as the color cyan), we end up with a mixture of green, blue, and red (which therefore looks closer to the color white than the color cyan would).
If we built computer screens that mixed red, cyan, and blue pixels together instead of red, green, and blue pixels, we’d just shift the problem over to the green range: we’d be trying to represent green as a mixture of red and cyan, but as we added the cyan some blueness would get in there too.
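A small numerical sketch of both cases: using the published sRGB primary chromaticities, and rough spectral-locus points for violet (~400 nm) and cyan (~490 nm) that are approximations assumed here, we can solve for the mix of R, G, B that would land exactly on each target. In both cases one of the weights comes out negative, which no physical pixel can deliver.

```python
# Sketch: why violet and cyan fall outside an RGB triangle.
# Primaries are the sRGB chromaticities; the violet/cyan points are rough
# approximations of the spectral locus (~400 nm and ~490 nm respectively).
import numpy as np

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # sRGB primaries (x, y)

def mix_weights(target):
    """Solve w_R*R + w_G*G + w_B*B = target with w_R + w_G + w_B = 1."""
    A = np.array([[R[0], G[0], B[0]],
                  [R[1], G[1], B[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(A, [target[0], target[1], 1.0])

print(mix_weights((0.17, 0.005)))   # violet: green weight ~ -0.14 (negative!)
print(mix_weights((0.045, 0.295)))  # cyan:   red weight   ~ -0.41 (negative!)
```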
-----
The problem *could* be fixed by having more than three colors of pixel. For example, the “hexachrome” printing system uses both green ink *and* cyan ink. But on computer screens (unlike in printing), extra pixel colors come at the expense of resolution: if you can fit 1000 tricolor dots per inch, you’ll only be able to fit 500 hexacolor dots per inch. So it’s generally not worth it.
None of the answers so far seem to be answering the actual premise of the question.
>What’s stopping them from just having red, green, and blue pixels, and then just making those go as bright (and as dark) as possible?
In short, the thing stopping us is that we cannot make red pixels that sit at the very outer limit of what is red, and green pixels that sit at the very outer limit of what is green.
Longer answer: you’ve probably seen a color space chart ([CIE 1931 color space – Wikipedia](https://en.wikipedia.org/wiki/CIE_1931_color_space)), which maps all the colors of the visible spectrum (the full color gamut). Along the chart are usually numbers that correspond to the wavelengths of all visible colors from about 380nm to about 750nm.
If you take a red, a green, and a blue pixel, plot them on that chart, and connect them with lines to make a triangle, then all the colors you can make by mixing those pixels and adjusting their brightnesses fall inside that triangle.
Thus, the more of that chart the triangle fills, the more colors you can make by mixing your pixels. However, if your red pixel is not totally red, but more orangeish, there is no amount of mixing you can do to make it redder. As you can see, to maximize how much of the color space you can make, you need that triangle to be as big as possible, which means you need to make red, green, and blue pixels on the very outer limits of what is red, green, and blue. More specifically, the perfect red pixel would emit only ~650nm, the perfect green pixel would emit only ~520nm, and the perfect blue pixel would emit only ~450nm. If we could do that, then our TVs and monitors would be able to show *most* of the colors in the colorspace no problem.
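To put rough numbers on “bigger triangle = more colors” (the spectral-locus coordinates below are approximate values assumed for illustration), you can compare the area of today’s sRGB triangle with a triangle built from those near-perfect primaries using the shoelace formula:

```python
# Sketch: comparing gamut triangle areas on the xy chromaticity chart.
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle from three (x, y) points."""
    return 0.5 * abs(p1[0] * (p2[1] - p3[1])
                   + p2[0] * (p3[1] - p1[1])
                   + p3[0] * (p1[1] - p2[1]))

srgb     = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]        # today's primaries
spectral = [(0.726, 0.274), (0.074, 0.834), (0.157, 0.018)]  # ~650/520/450 nm (approx.)

print(triangle_area(*srgb))      # ~0.112
print(triangle_area(*spectral))  # ~0.243 -- roughly twice the coverage
```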
The good news is we can make nearly perfect blue LEDs. That part is (relatively) easy. Red and green are tricky. We can’t make anything cheaply and effectively that emits only those perfect red and green wavelengths. That’s why, right now, consumer TVs and monitors can’t display all the visible colors.
With LEDs, typically we use various phosphors or quantum dots to “convert” blue wavelengths to another color wavelength. But the conversions are not perfect. Each method of “converting” has benefits and drawbacks, and none of them produces only the perfect reds and greens that we need. For example, some methods produce a wide range of wavelengths, which requires us to filter out the unwanted ones and thereby sacrifice brightness. Some methods produce a wavelength close to, but not exactly, where we want it. But if someone invents the perfect red and green LEDs, then we would in theory be able to show all (or nearly all) the visible colors.