What’s the digital process of lowering the quality of a photo or video? Also is the opposite process a thing or just movie things?

In: Technology

5 Answers

Anonymous 0 Comments

The opposite is called super-resolution in computer vision. I’ve used it in an iris recognition system, but there’s a hard limit to how much information you can extract from a static neighbourhood of pixels in a single image.

In the iris recognition system we take, say, 10 pictures in rapid succession, each offset from the last by a tenth of a pixel, and use a fancy algorithm to combine them into a picture at 10x the resolution. This lets us zoom in on your eye from a distance and identify who you are from 15-20m away. Scary!
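
If you’re curious what “combine” means mechanically, here’s a rough sketch of the general shift-and-add idea in Python. This is my own simplification, not the actual algorithm from that system; real systems also estimate the offsets from the images themselves and undo blur.

```python
import numpy as np

def shift_and_add(frames, offsets, scale):
    """Naive multi-frame super-resolution ("shift-and-add").

    frames:  list of 2-D arrays, all the same low-res shape
    offsets: list of (dy, dx) sub-pixel shifts in low-res pixels
    scale:   upscaling factor, e.g. 10 for tenth-of-a-pixel steps
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros((h * scale, w * scale))
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res sample lands on the fine grid at a spot
        # determined by its sub-pixel offset.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    hits[hits == 0] = 1  # don't divide by zero in cells no sample hit
    return acc / hits    # average wherever samples overlap
```

With ten frames each shifted a tenth of a pixel, the samples interleave on the fine grid instead of piling onto the same cells, and that interleaving is where the extra resolution comes from.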

Anonymous 0 Comments

Lossy compression.
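
You can watch it happen with any image library that exposes the trade-off. A quick sketch using Python’s Pillow (filenames are placeholders):

```python
from PIL import Image  # Pillow

img = Image.open("photo.png").convert("RGB")  # placeholder filename

# JPEG is lossy: the lower the quality setting, the more detail
# (mostly fine color/texture variation) gets thrown away in
# exchange for a smaller file.
img.save("photo_q90.jpg", quality=90)  # mild loss
img.save("photo_q10.jpg", quality=10)  # heavy loss, visible blocky artifacts
```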

The opposite is largely just a movie thing. Only in the last few years has machine learning-based image interpolation become practical, and it’s still just *interpolation*, basically an educated guess. It’s reasonably good at sharpening some contours in upscaled images or guessing a generic replacement for a missing part of an image from its surroundings, but it’s not some magic “enhance” that can bring back data lost in compression.

Anonymous 0 Comments

Removing information from the file is what lowers quality, basically. Some things that can be done: reducing millions of colors to thousands, blending similar pixels together, discarding light beyond the eye’s visible spectrum (that the camera still captured), dropping audio above or below the limits of human hearing, etc. All of these tactics can vastly lower quality while shrinking the file.
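
The “millions of colors into thousands” trick is easy to try yourself. A sketch with Python’s Pillow (filenames are placeholders):

```python
from PIL import Image  # Pillow

img = Image.open("photo.png").convert("RGB")  # placeholder filename

# 24-bit RGB allows ~16.7 million colors; squeeze the image down
# to a 256-color palette instead. Smooth gradients will show
# visible banding where the in-between shades got merged.
reduced = img.quantize(colors=256)
reduced.save("photo_256.png")
```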

Restoring information that isn’t there anymore isn’t really possible, unless it’s saved in the file through a revision history. (See Obama’s birth certificate – it was a joke, people, calm down.) You might be able to smooth out some details, blow things up with a magnifying glass, or guess what color something was (black-and-white colorization does this for old movies, for example). Regardless, you can’t read a license plate in a 240p video when the camera never captured it in the first place.

Anonymous 0 Comments

The process itself depends on the algorithm. But generally, the algorithm analyzes an image, finds its less significant details, and removes them. For example, if you have three gray pixels in a row with the middle one just barely darker, the algorithm could decide that this color deviation isn’t important and make the pixel the same color as its neighbours. Fewer details – less information – fewer bytes to store the image/video.
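
Here’s a toy Python version of that three-gray-pixels example. Real codecs like JPEG do this in the frequency domain rather than pixel by pixel, and the function name and threshold here are made up for illustration:

```python
import numpy as np

def flatten_small_deviations(row, threshold=3):
    """If a pixel differs from both neighbours by less than
    `threshold`, snap it to match them; runs of identical
    values then compress very well."""
    out = row.copy()
    for i in range(1, len(row) - 1):
        left, mid, right = int(row[i - 1]), int(row[i]), int(row[i + 1])
        if abs(mid - left) < threshold and abs(mid - right) < threshold:
            out[i] = row[i - 1]  # the barely-darker pixel disappears
    return out

print(flatten_small_deviations(np.array([120, 121, 120], dtype=np.uint8)))
# -> [120 120 120]
```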

But doing the opposite is… well, technically impossible (if we don’t have the information about the smallest details, we obviously can’t know those details).

As u/Pocok5 mentioned, there exist algorithms that can seemingly restore lost information, but that just means the algorithm itself stores knowledge about which details usually get lost, what they look like, and when and where they appear. So, basically, we store that information not in the image/video data but in the algorithm, and it still takes bytes to store, sometimes even more than an uncompressed file. And the process is not accurate: it gives us a resulting image that looks like the same **quality**, but it is not the same **image**. For example, if we had a picture of a grassy plain compressed so heavily that we couldn’t see any plant detail, we could theoretically “improve” it and see actual plants with leaves that look natural, but those would be completely different plants, with nothing in common with the original image except that it’s a similar-looking grassy plain.

Anonymous 0 Comments

The most straightforward way to lower the quality of an image is to reduce the number of pixels. Usually this is done by taking the average color of adjacent pixels and creating new pixels from that average. So if you have an image that’s 100×100 and you want it to be 50×50, then every block of 4 pixels (2×2) gets averaged and turned into a single composite pixel.
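
That 2×2 averaging is only a few lines of numpy, if you want to see it concretely (a toy version that assumes even dimensions):

```python
import numpy as np

def downscale_2x(img):
    """Average every non-overlapping 2x2 block into one composite
    pixel, e.g. turning a 100x100 image into 50x50."""
    h, w = img.shape[:2]
    assert h % 2 == 0 and w % 2 == 0, "toy version: even dimensions only"
    blocks = img.reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)

img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
print(downscale_2x(img).shape)  # -> (50, 50, 3)
```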

The opposite isn’t really a thing, at least not to the extent it works on TV. You can upscale images, and some software like Photoshop will try to follow patterns and make guesses about what the extra pixels should look like. With a good algorithm you might get an output that looks better than just zooming in on the original image, but you aren’t going to get readable text where previously there was none, like on TV.
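
For comparison, plain interpolation-based upscaling looks like this in Pillow (filenames are placeholders): you get a bigger, smoother image, never new information.

```python
from PIL import Image  # Pillow

small = Image.open("thumb.png")  # placeholder filename

# Interpolation invents new pixels by blending existing ones.
# Bicubic looks smoother than nearest-neighbour, but neither can
# recover detail the original pixels never contained.
size = (small.width * 4, small.height * 4)
small.resize(size, Image.NEAREST).save("thumb_4x_nearest.png")
small.resize(size, Image.BICUBIC).save("thumb_4x_bicubic.png")
```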