What is a megapixel actually, and how does it correlate to the maximum resolution and picture quality?

I’ve heard that megapixels are the number of pixels, in millions, that a camera captures, but I don’t understand how a Canon EOS R6 II can take better photos zoomed in than an iPhone 15 Pro Max with optical zoom, despite having a lower megapixel count.

I don’t get how a megapixel count correlates to the resolution, and how significant it is to the quality of the image.

In: Technology

18 Answers

Anonymous 0 Comments

“Mega” is just the prefix for “million” (or sometimes 2^20 in some contexts, but that’s so close to a million that the difference doesn’t matter). A megapixel is a million pixels.

An image of 1000 pixels by 1000 pixels has one megapixel. An image of 2000 pixels by 1000 has two.

>how a Canon EOS R6 II can take better photos zoomed in than an iPhone 15 PM with optical zoom

The number of pixels says very little about the quality of a photo. You can have a big blur of several million pixels. A very high pixel resolution is only useful if the photo stored in those pixels is sharp enough that you can zoom in and still see good detail. In general, the number of pixels in an image stops being relevant past a certain point (and most phones nowadays are way above that limit). Beyond it, extra pixels just make your photo take up more storage space for no reason, especially when you are going to display that photo on a screen with a lower resolution anyway.

Also, the optical zoom is not changing the number of pixels at all.

Anonymous 0 Comments

> don’t get how a megapixel count correlates to the resolution

They are essentially the same thing.

> how significant it is to the quality of the image. 

It’s one of several hard gates to a good picture. It doesn’t matter if a phone has a 600 megapixel sensor if the optics are from the nearest Shenzhen market and the sensor only has pixel count going for it rather than pixel quality. The sensor will just end up capturing a shitty image stretched over more pixels. Meanwhile, a professional camera will get close to the maximum quality you can pack into 15 megapixels by having great optics that project a good image onto a sensor with 15 million large, high-quality pixels rather than 200 million small, noisy garbage ones.

Anonymous 0 Comments

The resolution determines the maximum amount of detail the image can contain. However, it doesn’t determine the quality of the camera or the photos it will take. The Canon takes better photos because it has a larger lens and sensor, meaning it can take in more light and have less noise in the photo (among other things).

> I don’t get how a megapixel count correlates to the resolution

It’s simply the number of pixels. If the picture has a resolution of 3000×2000 then it has 6,000,000 pixels, i.e. 6 megapixels.
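
If it helps to see that arithmetic spelled out, here’s a minimal Python sketch (the 8064×6048 figure is just an illustrative frame size for a nominal “48 MP” sensor, not an exact spec):

```python
# A megapixel count is just width x height, divided by one million.
def megapixels(width: int, height: int) -> float:
    return width * height / 1_000_000

print(megapixels(3000, 2000))  # 6.0       -> the 6 MP example above
print(megapixels(8064, 6048))  # 48.771072 -> roughly a "48 MP" phone sensor
```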

Anonymous 0 Comments

1 Megapixel is 1,000,000 pixels. It represents the maximum “image information” the camera is capable of capturing. It’s like the page count of a book: you just can’t tell a story like Lord of the Rings if you’ve only got 10 pages to do it.

If the camera doesn’t have enough megapixels, then nothing else matters, and you can’t make images that look good.

But the QUALITY of the image is entirely different. If you can’t write, having 1,000,000 pages at your disposal doesn’t really help.

There are lots of different factors like lenses, photo sensor size, and fancy algorithms that make pictures better or worse. Here’s a primer: [https://www.alanranger.com/blogs/beyond-a-point-and-shoot-camera](https://www.alanranger.com/blogs/beyond-a-point-and-shoot-camera)

And here’s the easy chart of sensor sizes that make a HUGE difference: [https://images.squarespace-cdn.com/content/v1/5013f4b2c4aaa4752ac69b17/f1daf258-4822-465e-83b2-963217b2528a/camera+sensor+size?format=2500w](https://images.squarespace-cdn.com/content/v1/5013f4b2c4aaa4752ac69b17/f1daf258-4822-465e-83b2-963217b2528a/camera+sensor+size?format=2500w)

Anonymous 0 Comments

If you zoom in on a picture, it’s made of tiny dots. Each of these dots is a single “pixel”. A megapixel is one million pixels.

The more pixels in a picture, the more detail you can show. A one pixel picture of the Earth will be a square blue dot. A 16 pixel picture might have some green and white and be vaguely roundish.

A 121 million pixel (121 megapixel) picture of the Earth will look like this: https://www.cnet.com/science/stunning-high-resolution-photo-shows-earths-many-hues/

More pixels means more details.

Anonymous 0 Comments

TL;DR: The sensor and lenses of an EOS R6 II are larger and of higher quality, so they can collect more light and have to make fewer concessions to the optical laws of physics.

The sensor is composed of tiny elements, arranged in groups, that react to light hitting them with an electrical output. The camera’s processor interprets the output of some elements as “red” information, others as “green”, and others as “blue”, each of a certain intensity.

Each group of those elements represents one pixel, which is why the individual elements are called subpixels. Depending on the mixture of red, green, and blue information, the pixel comes out as a certain color (look up “additive color theory” to understand the resulting colors).

The sensor is divided into many rows and columns of those subpixel groups; the megapixel count is the number of groups on the sensor.

A bigger sensor divided into fewer megapixels means each pixel has more light converting into electric charge, so there is higher certainty that the resulting pixel color came from actual light hitting it, rather than from the “noise” that the workings of the electronics themselves generate in the chip. This is especially noticeable in low-light photography, where comparatively few photons hit each subpixel.
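
The paragraph above talks about electronic noise, but the same more-light-more-certainty effect shows up even in the unavoidable randomness of photon arrivals (“shot noise”). Here is a toy Python simulation, assuming photon counts follow Poisson statistics: a pixel that collects four times the light gets twice the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(mean_photons: float, trials: int = 100_000) -> float:
    """Signal-to-noise ratio of a pixel collecting Poisson-distributed photons."""
    samples = rng.poisson(mean_photons, trials)
    return samples.mean() / samples.std()

print(snr(100))  # ~10, i.e. sqrt(100): a small, dim pixel
print(snr(400))  # ~20, i.e. sqrt(400): 4x the light, 2x the SNR
```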

More megapixels mean that whatever is in the lens’s field of view is divided into more pieces of information, which, if all of those pieces are “usable”, results in an image with more detail. But small sensors with tiny pixels often deliver fewer usable pieces in total. A camera that delivers fewer pieces of information overall, but a higher percentage of usable ones, will produce a better image that contains more of what you saw with your own eyes.

Newer cameras and smartphone camera apps do a LOT of processing on those subpixel values to make an image out of them. That is mostly because the image that forms on the sensor is very different from what our eyes see and what our brain then makes of that information. Camera makers are encouraged to have their devices create images that look the way humans see the scene, so a great deal of effort goes into making this processing produce something our own vision would produce. Newer phones in particular sometimes really make things up when you take photos.

For example, when you photograph the moon with a recent phone, some camera apps will take the milky blob in a puddle of total black that its tiny lenses and sensor can actually resolve, process the information, realize “oh, it’s a night sky, so this is the moon”, and effectively copy and paste NASA image data from high-end telescope photos of the moon into that spot.

Anonymous 0 Comments

The number of megapixels refers to the number of sensor elements in the camera. There are separate sensor elements for red, green, and blue light. Commonly you have twice as many green elements as red or blue, because human vision is more sensitive to green. Those side-by-side elements are combined mathematically to estimate the color of light at each sensor element location.
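
As a sketch of that layout: most cameras use a “Bayer” mosaic (the description above matches it without naming it), a repeating 2×2 tile with one red, one blue, and two green elements. A minimal illustration in Python:

```python
import numpy as np

# One 2x2 tile of a Bayer (RGGB) color filter mosaic.
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Tile it over a toy 4x6 "sensor" and count the elements per color.
sensor = np.tile(tile, (2, 3))
for color in "RGB":
    print(color, np.count_nonzero(sensor == color))  # R: 6, G: 12, B: 6
```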

The size of the sensor and the size of the lens determine how much light reaches each sensor element. Larger elements collect more light, and more collected light means the noise from the electronics has less effect on the readout value. So what you get out of the iPhone sensor is less accurate than what you get out of the Canon EOS R6 II.

A large lens with more elements can be made to bend the light with less error than the small lens of an iPhone.

Light diffracts when it passes through a small hole, and the smaller the hole, the more it diffracts. A lens acts like a hole, so a larger lens can project the image more accurately onto the sensor. This is a fundamental limit of physics.
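
For a feel for the numbers: the usual rule of thumb is that the diffraction blur spot (the “Airy disk”) has a diameter of roughly 2.44 × wavelength × f-number. A quick back-of-the-envelope in Python, with ballpark pixel pitches that are illustrative rather than exact specs:

```python
# Airy-disk diameter (to the first minimum), in micrometres.
def airy_disk_um(f_number: float, wavelength_um: float = 0.55) -> float:
    return 2.44 * wavelength_um * f_number

print(airy_disk_um(1.8))  # ~2.4 um blur spot at f/1.8 (green light)

# Ballpark pixel pitches: ~1.2 um on a 48 MP phone sensor vs ~6 um on a
# 24 MP full-frame sensor. At f/1.8 the phone's pixels are already smaller
# than the diffraction spot; the full-frame pixels are nowhere near that limit.
```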

That the lens difference is huge is clear, but the sensor difference is just as dramatic and less obvious. The iPhone 15 Pro Max has a 1/1.28″ (9.8×7.3 mm) sensor, compared to the Canon’s full-frame sensor that measures 36×24 mm. Look at [https://en.wikipedia.org/wiki/Image_sensor_format#/media/File:Sensor_sizes_overlaid_inside.svg](https://en.wikipedia.org/wiki/Image_sensor_format#/media/File:Sensor_sizes_overlaid_inside.svg): the iPhone sensor is among the smallest rectangles and the Canon’s among the largest. The area of the Canon sensor is about 12 times larger.

It is in lower light conditions that a larger sensor has its main advantage, and a high-magnification lens has the practical effect of reducing light levels.

If you look at the number of pixels, the iPhone does have 48 MP for the wide-angle camera, but only 12 MP for the telephoto sensor. The Canon, at 24.2 MP, has the higher pixel count.

There is no continuous optical zoom on the iPhone; there is a fixed telephoto lens equivalent to a 120 mm lens on full frame. Magnification is not zoom: zoom is when a lens can change its magnification.

The Canon camera can take lenses with a lot more magnification. Telephoto zoom lenses with maximum focal lengths of 200, 300, 400, and 500 mm are common, and there are lenses with even longer focal lengths, but they get extremely large and expensive. The result is that an image the Canon can magnify optically might need digital magnification on the iPhone.

The megapixel count determines the resolution of an image (the sensor’s aspect ratio also matters), but it often doesn’t matter unless you zoom in on the image digitally. A typical computer screen is 1920×1080 = 2 megapixels, which means the computer needs to reduce the image to 2 megapixels before displaying it. Each display pixel contains a red, a green, and a blue subpixel, so counted like a camera sensor the display is 6 megapixels with an equal amount of each color, or 8 if you give green double weight. This means anything below roughly a 10-megapixel camera has about the same number of sensor elements as the display has subpixels. If you magnify the image on the computer and look at only part of it, print it at high resolution, or use a higher-resolution display, you need more pixels.
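
The display-versus-sensor bookkeeping from that paragraph, spelled out as a sketch (a Full HD display is assumed, with three subpixels per display pixel):

```python
display_px = 1920 * 1080        # ~2.07 million display pixels
display_subpx = display_px * 3  # one red, green, and blue subpixel each

print(display_px / 1e6)     # ~2.1 MP fills the screen at native resolution
print(display_subpx / 1e6)  # ~6.2 million subpixels to "feed" with color data
```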

The point is that more pixels than the output device needs are less useful than you might expect; beyond that, more camera megapixels are mostly a marketing advantage. At the same sensor size, more pixels mean less light per pixel, so fewer pixels can actually be better in the right conditions. I suspect that is why the telephoto sensor in the iPhone has fewer pixels than the wide-angle sensor: you get less light, and diffraction limits come into play. A 48 MP sensor behind that tiny lens would likely produce a worse image.

So for photos in direct sunlight of subjects close to the camera, you can get quite similar results from the Canon camera and the iPhone. But for subjects farther away, or in lower light, the Canon has a clear advantage.

Anonymous 0 Comments

Mega is a prefix meaning million. So a megapixel is just one million pixels. Often this will be in a certain aspect ratio (like a 3:2 photograph, or a 16:9 smartphone camera).

Now, the question is how can something with a lower megapixel count take higher quality pictures? Two basic reasons.

The first is the quality of the components, especially the sensor. The sensor in that camera may be more color-accurate: while there are fewer pixels, each one is more likely to record exactly the color you see yourself. More expensive cameras generally have better, more accurate sensors, and things like mechanical shutters control how much light enters the camera, which further helps the sensor’s accuracy.

The second reason is that there are two different kinds of zoom on cameras. One is optical zoom, where the lenses physically move to change the zoom level of the picture, while still filling the whole camera sensor. The second is digital zoom, which is a fancy way of saying “crop out the relevant portion”. Digital zoom effectively reduces your pixel count.

While some phones these days have two or three back cameras with different levels of optical zoom (and the camera software automatically switches to the appropriate camera for the zoom level), once you get to the highest one, that’s it. All you can do is crop. If you zoom a photo in 30x, but the best lens is a 3x lens, you’re effectively cropping to 1/100th of the original pixel count in the photo (1/10th in each direction). That’s going to seriously kill your quality.
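
That penalty is easy to compute. A minimal sketch, assuming (as in the example above) a 3x best lens and treating digital zoom as a pure crop:

```python
def effective_mp(sensor_mp: float, requested_zoom: float, best_lens_zoom: float) -> float:
    """Digital zoom is cropping: pixel count falls with the square of the
    extra zoom beyond what the optics provide."""
    digital_factor = max(requested_zoom / best_lens_zoom, 1.0)
    return sensor_mp / digital_factor**2

print(effective_mp(48, 3, 3))   # 48.0 -- fully optical, whole sensor used
print(effective_mp(48, 30, 3))  # 0.48 -- 10x digital crop: 1/100th of the pixels
```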

Anonymous 0 Comments

What others have said is (mostly) correct, but I want to point out an error in your premise.

The iPhone’s main camera is 48 MP compared to the Canon’s 24 MP; however, at 5x zoom the iPhone switches to the telephoto camera, which is only 12 MP.

And outside of those native zoom ratios, the iPhone uses digital cropping: at 2x it uses 1/4 of the main sensor, so 12 MP, and at 10x it uses 1/4 of the telephoto sensor, so 3 MP. With the correct lens, the Canon can maintain full resolution at any zoom.
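
Here is that camera-switching math as a small sketch, using the numbers from this answer (48 MP main camera, 12 MP telephoto that is native at 5x); it illustrates the arithmetic, not Apple’s actual switching logic:

```python
def iphone_effective_mp(zoom: float) -> float:
    if zoom < 5:               # below 5x: crop the 48 MP main sensor
        return 48 / zoom**2
    return 12 / (zoom / 5)**2  # 5x and up: crop the 12 MP telephoto

for z in (1, 2, 5, 10):
    print(z, iphone_effective_mp(z))  # 48.0, 12.0, 12.0, 3.0 MP
```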

TL;DR: no. Once you’re past roughly 1.5x zoom, the Canon does not have a lower megapixel count than the iPhone.

Anonymous 0 Comments

You’ve got a lot of good answers here, so I’ll just provide a little context.
A full HD TV has approximately 2 million pixels, so it could display a 2 megapixel picture without losing any quality. A still image from a Blu-ray is basically 2 MP.

A 4K TV is the equivalent of about 8 MP.

The actual numbers we throw around now (100 MP+ on Samsung phones, etc.) are fairly meaningless, since we can blow up an 8 MP image to the size of a wall and it still looks good.