I’ve heard that megapixels are the number of pixels, in millions, that a camera captures, but I don’t understand how a Canon EOS R6 II can take better photos zoomed in than an iPhone 15 PM with optical zoom despite having a lower megapixel count.
I don’t get how the megapixel count correlates with resolution, and how significant it is to the quality of the image.
In: Technology
TL;DR: The sensor and lenses of an EOS R6 II are larger and of higher quality, so they can collect more light and have to make fewer concessions to the laws of optics.
The sensor is composed of tiny light-sensitive elements arranged in groups. Each element responds to light hitting it with an electric output, which the camera’s processor interprets as “red”, “green” or “blue” information of a certain intensity.
Each group of those elements represents one pixel, which is why the individual elements are called subpixels. Depending on the mixture of red, green and blue information, the pixel returns a certain color (look up “additive color theory” to understand how the resulting colors are formed).
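To make that concrete, here is a toy sketch of turning one red/green/green/blue group of subpixel readings into a single pixel color. Real cameras do this far more cleverly (demosaicing interpolates across many neighbouring groups), and the numbers here are made up purely for illustration:

```python
# Toy example: combine one 2x2 "Bayer" group (R, G / G, B) into a single RGB pixel.
# Real demosaicing interpolates across neighbouring groups; this simply averages
# the two green subpixels and passes red and blue through.

def bayer_group_to_pixel(r, g1, g2, b):
    """Return an (R, G, B) tuple from four subpixel readings in the 0-255 range."""
    return (r, (g1 + g2) // 2, b)

# Made-up readings: lots of red light, some green, little blue -> an orange-ish pixel.
print(bayer_group_to_pixel(r=220, g1=130, g2=126, b=40))  # (220, 128, 40)
```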
The sensor is divided into many rows and columns of those subpixel groups, and the megapixel count is simply how many of those groups fit on the sensor, in millions.
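For a rough sense of scale, here is that arithmetic with the commonly quoted full-resolution image sizes of the two cameras from the question (actual output can vary by shooting mode, so treat the figures as approximate):

```python
# Megapixels are just (columns x rows) / 1,000,000.
# The image sizes below are commonly quoted full-resolution figures and may
# differ slightly depending on shooting mode.

def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(6000, 4000))  # ~24.0 -> Canon EOS R6 II
print(megapixels(8064, 6048))  # ~48.8 -> iPhone 15 PM main camera in 48 MP mode
```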
A bigger sensor divided into fewer megapixels means each pixel gets more light to convert into electric charge. The resulting pixel color then comes with higher certainty from actual light hitting the sensor, rather than from the random electrical “noise” that the workings of any electronic chip produce. This is especially noticeable in low-light photography, where comparatively few photons hit each subpixel.
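Here is a back-of-the-envelope comparison, assuming a 36 x 24 mm full-frame sensor at 24 MP versus a roughly 9.8 x 7.3 mm phone main sensor at 48 MP (both sets of dimensions are approximations). The point is the ratio of light-collecting area per pixel, not the exact numbers:

```python
# Rough light-per-pixel comparison. Sensor dimensions are approximate; the
# takeaway is the ratio of area per pixel, not the exact values.

def area_per_pixel_um2(width_mm, height_mm, megapixels):
    """Sensor area divided by pixel count, in square micrometres per pixel."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1_000_000)

full_frame = area_per_pixel_um2(36.0, 24.0, 24)  # ~36 um^2 per pixel
phone      = area_per_pixel_um2(9.8, 7.3, 48)    # ~1.5 um^2 per pixel

print(full_frame / phone)  # each full-frame pixel collects roughly 24x more light
```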
More megapixels means whatever is in the field of view of the lens is divided into more pieces of information, which, if all of those pieces are “usable”, results in an image with more detail. But small sensors with tiny pixels often deliver fewer usable pieces of information in total. A camera that delivers fewer pieces of information overall, but a higher percentage of usable ones, will produce the better image and preserve more of what you saw with your own eyes.
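To make “usable” a bit more concrete: photon arrival is random (“shot noise”), so a pixel’s signal-to-noise ratio grows roughly with the square root of the light it collects. A tiny simulation, with photon counts invented to match the roughly 24x area ratio from the sketch above:

```python
# Photon arrival follows a Poisson distribution, so a pixel's signal-to-noise
# ratio is roughly sqrt(mean photon count). The photon counts below are
# invented, scaled by the ~24x per-pixel area ratio from the previous sketch.
import numpy as np

rng = np.random.default_rng(0)

big_pixel   = rng.poisson(lam=360, size=100_000)  # dim scene, large pixel
small_pixel = rng.poisson(lam=15,  size=100_000)  # same scene, much smaller pixel

for name, samples in [("big", big_pixel), ("small", small_pixel)]:
    snr = samples.mean() / samples.std()
    print(f"{name} pixel SNR ~ {snr:.1f}")  # big ~19, small ~3.9
```

The big pixel’s reading is dominated by the actual light, while the small pixel’s reading is much closer to random guesswork, which is exactly the “usable vs. not usable” distinction above.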
Newer cameras and smartphone camera apps do a LOT of processing on those subpixel values in order to make an image out of them. That is mostly because the image that forms on the sensor is very different from what our eyes see and what our brain then makes of that information. Camera makers want their devices to produce images that look the way humans perceive the scene, so a great deal of effort goes into making this processing result in something our own vision would produce.
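One small, real piece of that processing: raw sensor values are proportional to the amount of light, but human brightness perception is not, so cameras apply a tone curve before saving the image. A minimal sketch, using a simple 1/2.2 gamma as a stand-in for the much more elaborate curves real cameras use:

```python
# Raw sensor values scale linearly with light; our eyes compress brightness.
# A simple gamma of 1/2.2 stands in here for a real camera's tone curve.

def tone_map(linear_value, gamma=2.2):
    """Map a linear sensor value in [0, 1] to a display value in [0, 1]."""
    return linear_value ** (1.0 / gamma)

# A pixel that caught 20% of full-scale light is stored noticeably brighter
# than 20% grey, which is closer to how we perceive the scene.
print(round(tone_map(0.2), 3))  # ~0.481
```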
Especially new phones sometimes go beyond that and really make things up when you take photos. For example, when you photograph the moon with a recent phone, some camera apps will take the milky blob in a puddle of total black that its tiny lenses and sensor can actually resolve, process the information, conclude “oh, it’s a night sky, so this is the moon”, and just copy & paste NASA image data from high-end telescope photos of the moon into that spot.