They are separate individual pixels, and each one only captures R, G, or B, not all three.
When you take a picture the camera has to convert the raw sensor output into an image, so it does a bunch of math called [debayering/demosaicing](https://en.m.wikipedia.org/wiki/Demosaicing), which basically involves comparing the values of the pixels around each pixel so that every pixel in the final JPEG ends up with R, G, and B data instead of just one. If you have ever heard of raw photos, that is the data before debayering occurs.
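For the curious, here is roughly what the simplest version of that math (bilinear interpolation) looks like. This is just a sketch assuming an RGGB Bayer layout and using NumPy/SciPy, not what any particular camera actually runs; real demosaicing algorithms are much smarter about edges and noise.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Bilinear demosaic of a single-channel RGGB Bayer mosaic.

    raw: 2-D float array, one filtered value per pixel.
    Returns an (H, W, 3) RGB image.
    """
    h, w = raw.shape

    # Masks marking where each colour was actually sampled (RGGB layout assumed).
    r_mask = np.zeros((h, w), dtype=bool)
    g_mask = np.zeros((h, w), dtype=bool)
    b_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True   # red on even rows, even columns
    g_mask[0::2, 1::2] = True   # green fills the other two spots in each 2x2 tile
    g_mask[1::2, 0::2] = True
    b_mask[1::2, 1::2] = True   # blue on odd rows, odd columns

    # Classic bilinear interpolation kernels.
    k_g  = np.array([[0, 1, 0],
                     [1, 4, 1],
                     [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 4.0

    # Zero out pixels that did not sample a colour, then fill the gaps
    # by averaging the known neighbours with a convolution.
    r = convolve2d(raw * r_mask, k_rb, mode="same", boundary="symm")
    g = convolve2d(raw * g_mask, k_g,  mode="same", boundary="symm")
    b = convolve2d(raw * b_mask, k_rb, mode="same", boundary="symm")

    return np.dstack([r, g, b])
```

Calling `demosaic_bilinear(raw)` on a 2-D array of sensor values gives you a rough full-color image; the fancier algorithms mostly differ in how cleverly they fill in those missing values.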
There are different ways the pixels can be arranged. The [bayer filter pattern](https://en.m.wikipedia.org/wiki/Bayer_filter) is the most common, but Fuji notably uses [X-Trans](https://en.m.wikipedia.org/wiki/Fujifilm_X-Trans_sensor) and phones are starting to use [quad bayer](https://m.dpreview.com/articles/4088675984/quad-bayer-vs-quad-pixel-af-what-they-are-how-they-work-and-how-they-differ). Each has its own pros and cons, and each affects how the final image will look.
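To make that difference concrete, here is a tiny illustration of the repeating tiles (the common RGGB variants, shown purely as an example): standard Bayer repeats a 2x2 tile, while quad Bayer repeats a 4x4 tile where each color covers a 2x2 block. X-Trans uses a more irregular 6x6 tile, left out here for brevity.

```python
import numpy as np

# Standard Bayer: a 2x2 tile repeated across the whole sensor.
bayer_tile = np.array([["R", "G"],
                       ["G", "B"]])

# Quad Bayer: a 4x4 tile where each colour covers a 2x2 block of pixels.
quad_bayer_tile = np.array([["R", "R", "G", "G"],
                            ["R", "R", "G", "G"],
                            ["G", "G", "B", "B"],
                            ["G", "G", "B", "B"]])

# Tile either pattern across a grid to see which colour each photosite measures.
print(np.tile(bayer_tile, (3, 3)))   # a 6x6 patch of a Bayer sensor
```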
Interestingly, green screens became popular for digital cameras over the blue screens used on film in part because the bayer pattern has more green pixels than R or B.