Why cameras/pictures get more blurry and pixelated the more you zoom in.

In: Technology

4 Answers

Anonymous 0 Comments

You're not getting closer to the object, so you're not picking up more detail. Say you zoom in to 200%: the camera just doubles up the pixels it already has.

Anonymous 0 Comments

That only happens with digital zoom: you're taking the picture and magnifying it, discarding whatever no longer fits in the frame. You don't get that with optical zoom, where the optics actually bring you closer.
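A minimal sketch of the "discarding what no longer fits in the frame" part, using a made-up 4×4 grayscale array as the photo (the numbers are arbitrary, not from any real camera):

```python
import numpy as np

# A made-up 4x4 grayscale "photo" (brightness values 0-255).
photo = np.array([
    [ 10,  40,  80, 120],
    [ 30,  60, 100, 140],
    [ 50,  90, 130, 170],
    [ 70, 110, 150, 190],
], dtype=np.uint8)

# 2x digital zoom keeps only the central half of the frame in each
# direction; everything outside that window is simply thrown away.
h, w = photo.shape
crop = photo[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

print(photo.size)  # 16 pixels captured the scene
print(crop.size)   # 4 pixels are left to fill the whole screen
```

Those four surviving pixels then have to be stretched back out to fill the display, which is exactly the pixel duplication the next answer describes.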

Anonymous 0 Comments

Because you can’t create information out of nothing. The “zoom! enhance!” thing from TV is a lie. A digital picture only has so many pixels in it, and zooming in on the picture can’t create more detail because the original picture doesn’t have that information. The only thing the computer can do is duplicate the pixels. So if you zoom in on a picture to 2x, each pixel becomes a 2×2 square of the original pixel’s color.
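Here is roughly what that duplication looks like as code. Nearest-neighbor upscaling with numpy's repeat is one simple way to do it (assumed here just for illustration; real image viewers often use fancier interpolation that smooths rather than blocks):

```python
import numpy as np

# A tiny 2x2 "picture": four pixels, four brightness values.
picture = np.array([[ 10, 200],
                    [120,  60]], dtype=np.uint8)

# Zooming to 2x cannot invent new measurements, so each pixel is
# simply copied into a 2x2 block of the same value.
zoomed = np.repeat(np.repeat(picture, 2, axis=0), 2, axis=1)

print(zoomed)
# [[ 10  10 200 200]
#  [ 10  10 200 200]
#  [120 120  60  60]
#  [120 120  60  60]]
```

The result has four times as many pixels but exactly the same information, which is why it looks blocky instead of more detailed.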

Anonymous 0 Comments

I mean, the way a camera sees, and as far as I know the way we see, is this: as long as there is a light source (sun, candle, flashlight, and so on), light leaves that source (in optics you can picture it as rays of light, or as "photons", small particles of light), hits the objects around it, and is either absorbed (those parts appear dark or black) or reflected (those parts appear light, white, or whatever color was not absorbed) before it reaches "us". "We" then run that light through a condensing lens onto a receptive field, so that all the light from the whole scene in front of us gets mapped onto a small image of what is there.

And this "receptive field" can be a film coated with a chemical that is transformed by the energy of the light. Roughly speaking, more energy "burns" the film so it gets darker, while less energy leaves it "unburned". That's how you can do black-and-white photography: you take that "negative" (so called because the shades are inverted, more light made it dark, less light made it bright) and shine light through it, so the dark parts absorb the light and the bright parts let it pass, and you get the actual shades back.
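The "inverting the negative" step can be sketched numerically, assuming an 8-bit grayscale where 0 is black and 255 is white (that scale is my assumption; film itself is continuous):

```python
import numpy as np

# A made-up strip of grayscale values as stored on the negative:
# spots that received a lot of light ended up dark on the negative,
# spots that received little light stayed bright.
negative = np.array([ 20,  80, 150, 230], dtype=np.uint8)

# Printing the negative flips the shades back: dark on the negative
# becomes bright on the print, and vice versa.
printed = 255 - negative

print(printed)  # [235 175 105  25]
```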

That's why it's important how you use the aperture and the exposure time: the more light that comes in, the more detail you can see, but if you take in too much light, everything ends up uniformly black or white, because every part of the developed film has reached its absolute maximum of energy and is "burned".
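A toy sketch of that over-exposure point, assuming a sensor or film that saturates at 255 (the brightness values and the 4x factor are made up):

```python
import numpy as np

# Three neighbouring spots in the scene with different true brightness.
scene = np.array([ 60, 120, 200], dtype=np.float64)

well_exposed = np.clip(scene, 0, 255)      # [ 60. 120. 200.] - differences kept
over_exposed = np.clip(scene * 4, 0, 255)  # [240. 255. 255.] - two spots "burned"

# Once two spots both hit the maximum, the difference between them is gone.
print(well_exposed)
print(over_exposed)
```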

The way this is done varies between our eyes, classic film cameras and digital ones, but the idea is mostly the same: you run all the light hitting you from your environment through a lens, focus it onto some material that acts as a sensor for intensity and wavelength (color), and then further process that raw information (for example, the original image is actually upside down because of the lens, so it gets flipped, and things like that).

Now there are two other problems with that. The first is the focus of the lens: each lens has a distance at which it focuses the light to a single point, and before and after that distance the light is spread over a wider area, so you have to pick the focus correctly to get the object you're interested in sharp. The second is the resolution of your receptive field.

Think of dripping ink onto a white sheet of paper: when it hits the sheet it doesn't mark just one spot but covers a larger or smaller area (usually roughly round, with most of the color in the middle). So instead of one sharp peak, i.e. high intensity at one tiny spot, you get something more like this:

[https://upload.wikimedia.org/wikipedia/commons/e/ee/Gaussian_2d_surface.png](https://upload.wikimedia.org/wikipedia/commons/e/ee/Gaussian_2d_surface.png)

So instead of getting perfectly distinct information for every atom-sized ray of light, you get some mixing. That usually doesn't matter, because your eyes have their own resolution limit: you can only tell two points apart if they are roughly 0.2 mm or more away from each other. But if you wanted to zoom in indefinitely, it would become relevant.
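A small sketch of that mixing, using a Gaussian blur as a stand-in for the lens's spread (the sharp test pattern and the blur width are made up for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A "perfect" image: one infinitely sharp bright point on a dark background.
sharp = np.zeros((7, 7))
sharp[3, 3] = 1.0

# The optics spread that point out over its neighbours, roughly like the
# Gaussian bump in the linked picture, so nearby points blur into each other.
blurred = gaussian_filter(sharp, sigma=1.0)

print(blurred.round(3))  # the single bright pixel is now a soft blob
```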

Now, in digital photography the resolution is determined by the number of pixels, and pixels are really just squares with the exact same color across the whole square. So at some point you just take the average over each square and drop all the finer information about how the light behaved within it, because there are no smaller pixels there to measure that.
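That averaging step can be sketched like this, assuming each sensor pixel covers a 2×2 patch of a made-up 4×4 field of incoming light:

```python
import numpy as np

# Fine-grained light intensities arriving at the sensor (made-up values).
light = np.array([
    [ 10,  20, 200, 210],
    [ 30,  40, 220, 230],
    [ 90, 100,  50,  60],
    [110, 120,  70,  80],
], dtype=np.float64)

# Each 2x2 patch of incoming light lands on one pixel, which can only
# record a single value: the average. Everything finer is lost.
pixels = light.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(pixels)
# [[ 25. 215.]
#  [105.  65.]]
```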

And last but not least: once the image is taken, it's taken. That's all the information you have, and information that isn't in the picture cannot be retrieved from it. If you enlarge it, you can only reveal details that were already in the picture when it was taken (ones you just couldn't see before because of the resolution of your eyes). But if the picture was already pixelated below that level, then enlarging a pixel won't do you any good, because you're essentially enlarging a canvas painted in one single color.
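Putting it all together as one last sketch: once detail has been averaged into a single pixel, enlarging that pixel just paints a bigger square of that one color (the checkerboard values below are purely illustrative):

```python
import numpy as np

# Fine detail the camera never resolved: a tiny checkerboard of dark and bright.
detail = np.array([[  0, 255],
                   [255,   0]], dtype=np.float64)

# The camera only had one pixel for this patch, so it recorded the average.
one_pixel = detail.mean()          # 127.5

# "Zoom! Enhance!" can only enlarge that single grey value...
enlarged = np.full((2, 2), one_pixel)

print(enlarged)
# [[127.5 127.5]
#  [127.5 127.5]]  -> the checkerboard is gone and cannot come back
```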