eli5: Why do computer screens appear distorted in pictures, unless you zoom in?


Whenever you take a picture of a computer screen, the screen looks distorted, but when you zoom in closer on the picture, it looks a lot clearer

In: Technology

3 Answers

Anonymous

I suspect it has to do with the refresh rate and the camera being much faster than the human eye, so we don’t see all the distortions in person; to our eyes it just blends into a coherent image.

Anonymous

What you are seeing is a moiré pattern.

When you take a photo of a screen with a digital camera, you are taking a pattern made up of millions of tiny squares (pixels), and recording it on a sensor as an image made of millions of tiny squares.

If you managed to align everything absolutely perfectly, you could record each pixel of the screen 1:1 to a pixel on your camera and it would look perfect.

Zoom out or zoom in a bit, however, and the pixels no longer line up: each camera pixel ends up recording one screen pixel plus a bit of its neighbour, or similar. This is the effect you are seeing with the odd colour banding – the two grids align in such a way that certain pixels become more prominent than others.

This gets even worse when you then view your picture on a monitor: you have one rectangular pattern, recorded by a second not-quite-aligned rectangular pattern, and then displayed on a third not-quite-aligned rectangular pattern.
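To make that concrete, here is a minimal sketch (my own illustration, not part of the answer above; it assumes numpy and uses made-up stripe and sensor spacings) of one row of such a photo. The stripes are finer than the sensor grid, and the slight mismatch in spacing shows up as slow light/dark bands that exist in neither grid:

```python
import numpy as np

# The "screen": stripes that repeat every 3.0 units, a very fine pattern.
stripe_period = 3.0
def screen_brightness(x):
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / stripe_period)

# The "camera": one sensor pixel every 2.9 units, almost but not quite
# aligned with the stripes, like a slightly zoomed photo of the screen.
sensor_pitch = 2.9
sensor_positions = np.arange(300) * sensor_pitch
photo = screen_brightness(sensor_positions)

# Each sensor pixel drifts a little further out of phase with the stripes,
# so the recorded brightness swells and fades in slow bands that are not
# on the screen at all: a moire band every 1/(1/2.9 - 1/3) = 87 units.
print(photo[:40].round(2))
```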

Anonymous

The reason is the [Nyquist–Shannon sampling theorem](https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem).

For an ELI5 explanation, this phenomenon has the same mathematical cause as the “video of a spinning wheel” phenomenon. You know when you record a moving car and its wheels? If a wheel rotates at the right speed in real life, the video will show it rotating *backward*.

This is because we are sampling a continuous stream of information, i.e. capturing images at discrete points in time. Then when we watch the video, we reconstruct this continuous information from the samples, which requires interpolating between them. Unfortunately, this interpolation can be wrong: if two consecutive frames show that the wheel turned 3/4 of a revolution forward, the most natural interpolation is that the wheel turned 1/4 of a revolution *backward*.

This problem can be avoided by sampling more often. For example, if you record at 3x the frame rate, a 3/4-revolution forward turn becomes three consecutive frames of 1/4 revolution forward each, and we reconstruct the motion correctly.
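A tiny sketch of that arithmetic (my own illustration; the `apparent_step` helper is hypothetical, not from any library):

```python
# A wheel turning 3/4 of a revolution per frame looks like it turns 1/4 of
# a revolution backward, and tripling the frame rate fixes the direction.

def apparent_step(revs_per_frame: float) -> float:
    """The rotation a viewer infers between two frames: the wrapped step
    closest to zero, since a spoke at +270 degrees is indistinguishable
    from one at -90 degrees."""
    step = revs_per_frame % 1.0
    return step - 1.0 if step > 0.5 else step

print(apparent_step(0.75))        # -0.25: looks like 1/4 revolution backward
print(apparent_step(0.75 / 3.0))  #  0.25: at 3x frame rate, correct direction
```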

It’s the same thing with a camera and a screen, except now it is a two-dimensional sampling problem. The pixels on the screen never match the tiny sensor elements inside the camera 1:1, so we end up picking up the pixels with the wrong weighting: some pixels have more influence than others, or a single straight row of screen pixels can influence several different rows of sensor elements. This doesn’t just affect video screens; it can affect any pattern that is fine and rapidly repeating, such as a brick wall seen from a distance.

Then there is also a second layer of sampling, when you view the photo on your own screen. In this case it is technically possible to match the pixels of the picture 1:1 to the pixels of the screen, but if they don’t match, you run into the same problem again.

The answer is still the same: take more samples, and you can faithfully reconstruct the original signal.
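To see the theorem’s claim with concrete numbers (my own sketch, not from the answer; it assumes numpy): sampled too slowly, a fast sine wave produces exactly the same samples as a slow one going the other way, so no reconstruction can tell them apart; sample fast enough and the ambiguity disappears.

```python
import numpy as np

t_slow = np.arange(0, 1, 1 / 10)              # 10 samples per second
tone_7hz = np.sin(2 * np.pi * 7 * t_slow)
alias_3hz = np.sin(2 * np.pi * -3 * t_slow)   # the 7 - 10 = -3 Hz alias

print(np.allclose(tone_7hz, alias_3hz))       # True: identical samples

# Above the Nyquist rate (2 * 7 = 14 Hz), the two are no longer confusable.
t_fast = np.arange(0, 1, 1 / 20)              # 20 samples per second
print(np.allclose(np.sin(2 * np.pi * 7 * t_fast),
                  np.sin(2 * np.pi * -3 * t_fast)))  # False
```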