Why does a picture of an LED/LCD screen appear super pixelated and distorted, but when you zoom in, much of the distortion goes away?


I wish I could attach pictures to show what I mean, but it’s hard to explain, so I’ll paste an imgur link instead. When taking a photo of an LCD/LED screen, like a computer or television, it’s super pixelated and distorted. However, if you zoom in on the same image, while still pixelated, a lot of the pixelation goes away. Here’s an imgur link with two pictures: [https://imgur.com/a/fWMBJ8x](https://imgur.com/a/fWMBJ8x) It’s kind of hard to tell from the pictures, but maybe y’all will know what I’m talking about.


2 Answers

Anonymous 0 Comments

Moiré patterns. The pixels in the camera aren’t aligned with the pixels on the screen, so each camera pixel lands on a slightly different part of the screen’s pixel grid. When you’re zoomed out, each camera pixel covers more than one screen pixel, and the mismatch between the two grids produces broad interference bands that your eyes can’t blend away. When you zoom in, many camera pixels cover each screen pixel, so the photo captures the screen much more faithfully.
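
If you want to see the sampling effect in numbers, here is a minimal sketch in Python with NumPy (my own illustration, not part of the answer above; the grid period and camera spacings are made-up values). A regular grid sampled more coarsely than its own period comes back as a much wider beat pattern, while sampling it more finely ("zooming in") recovers the true grid:

```python
import numpy as np

# "Screen": a regular pixel grid with a period of 7 points (3 lit, 4 dark).
screen = (np.arange(1000) % 7 < 3).astype(float)

def dominant_period(signal):
    """Period, in samples, of the strongest non-DC frequency component."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    k = np.argmax(spectrum[1:]) + 1      # skip the DC bin
    return len(signal) / k

photo = screen[::8]     # "zoomed out": camera samples every 8th point, coarser than the grid
closeup = screen[::2]   # "zoomed in": camera samples every 2nd point, finer than the grid

print("true grid period:        %.1f screen points" % dominant_period(screen))         # ~7
print("zoomed-out photo period: %.1f screen points" % (dominant_period(photo) * 8))    # ~56: a broad moiré band
print("zoomed-in photo period:  %.1f screen points" % (dominant_period(closeup) * 2))  # ~7: grid captured faithfully
```

The zoomed-out "photo" still contains a repeating pattern, but its apparent period is roughly eight times wider than the real grid, which is the kind of coarse banding you see when photographing the whole screen.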

Anonymous 0 Comments

Pixelated means “displayed in such a manner that individual pixels are discernible,” which is exactly what happens when you magnify the image. Another definition is “(of an image on a computer screen or other display) be enlarged so far that the viewer sees the individual pixels that form the image, the enlargement having reached the point at which no further detail can be resolved.”

Once you shrink it back down and look at it at a normal size, the pixels blend together into the shapes they represent. So your brain doesn’t see the grid of dots that makes up, for example, a letter; it just processes it as that letter.

Another aspect is that the grid itself is visible in the photo. Image compression (and compression in general) can be lossless or lossy. Lossless compression can be decompressed back into the exact original data, which is essential for binary data or text. Lossy compression shrinks the data further by throwing away information that wouldn’t normally be noticed, a bit like painting a dog: you don’t paint every single hair, just broader areas of fur color. Photo compression at the usual settings does not handle fine, regular grids like a screen’s pixel pattern very well, so it reconstructs a grid that is mathematically similar but different enough to look weird.
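
To make the last point concrete, here is a small sketch (my own illustration, assuming Pillow and NumPy are available; the stripe pattern and quality setting are just example values) that round-trips a fine one-pixel stripe image through PNG (lossless) and JPEG (lossy) and checks how far the decoded pixels drift from the original:

```python
import io
import numpy as np
from PIL import Image

# A fine grid: 1-pixel-wide vertical stripes, a bit like a screen's pixel grid.
stripes = np.tile(np.array([0, 255], dtype=np.uint8), (256, 128))  # 256x256 grayscale image
original = Image.fromarray(stripes)

def roundtrip(img, fmt, **save_args):
    """Save to an in-memory buffer in the given format, then decode it again."""
    buf = io.BytesIO()
    img.save(buf, format=fmt, **save_args)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.int16)

png_pixels = roundtrip(original, "PNG")               # lossless
jpg_pixels = roundtrip(original, "JPEG", quality=75)  # lossy, a typical setting

print("PNG  max pixel error:", np.abs(png_pixels - stripes).max())  # 0: decodes to the exact original
print("JPEG max pixel error:", np.abs(jpg_pixels - stripes).max())  # > 0: a grid that is only "similar"
```

The PNG comes back bit-for-bit identical, while the JPEG reconstructs a stripe pattern that is close but not exact, which is why photographed pixel grids can look subtly wrong after compression.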

That said, I had to guess a little at what you’re asking about, so I covered the parts that don’t come from moiré patterns.