How do resolution upscaling and remastering work in retro video games and movies?




In: Technology

Imagine you have 5 dots on a piece of paper, in a line. You could take a pencil and draw a line through the 5 dots. Now, looking at your line, you could draw lots more dots on the same line – maybe 50 or 100! A computer can use math to look at an existing picture/shape/whatever and compute where to add new dots (i.e., upscale the resolution) – the same as you did with your line and pencil.
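To make the "drawing more dots" idea concrete, here's a toy sketch of linear interpolation, the same basic trick simple image upscalers apply along each row and column of pixels (the function name and numbers are just illustrative):

```python
def upsample_linear(samples, factor):
    """Return a longer list of values by linearly interpolating
    between each pair of neighboring samples ("drawing more dots")."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for step in range(factor):
            t = step / factor
            out.append(a + (b - a) * t)  # a point t of the way from a to b
    out.append(samples[-1])  # keep the final original dot
    return out

dots = [0, 10, 20, 30, 40]             # 5 original dots on a line
more_dots = upsample_linear(dots, 10)  # 41 dots on the same line
print(len(more_dots))  # 41
```

Note the new dots don't add any information; they just sit on the line implied by the originals, which is why plain interpolation can't recover lost detail.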

For 3D video games there's often nothing special that needs to be done: the renderer just accepts whatever resolution it's given. This is because 3D elements are stored as a series of points with lines connecting them, not as pixels. A 1 inch long 3D line and a 1 mile long 3D line take the same amount of space to store, because you only need the starting point and the ending point to draw either one.
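A rough sketch of why that works (a deliberately simplified stand-in for a real graphics pipeline, using normalized 0–1 coordinates instead of proper 3D projection): the line is stored as two endpoints, and those same two numbers get converted to pixels only at render time, at whatever resolution you ask for.

```python
def to_pixels(point, width, height):
    """Map a normalized (x, y) point to pixel coordinates at a given resolution."""
    x, y = point
    return (round(x * (width - 1)), round(y * (height - 1)))

# The same stored data, regardless of output resolution:
line = ((0.1, 0.1), (0.9, 0.9))

low  = [to_pixels(p, 320, 240) for p in line]    # retro-era resolution
high = [to_pixels(p, 3840, 2160) for p in line]  # 4K

print(low)   # [(32, 24), (287, 215)]
print(high)  # [(384, 216), (3455, 1943)]
```

Either way the line gets drawn edge to edge; at 4K it's simply rasterized with far more pixels, so it looks sharper for free.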

Most games used to (and some still do) tie the user interface (UI) to the render resolution, which can result in a tiny, unusable UI: the UI shrinks as the resolution increases unless the developers account for it. For example, a UI element might be set to an absolute size of 50×50 pixels no matter what resolution the game runs at. If instead the developers make the element take up a percentage of the screen (a relative size), it stays the same apparent size as the resolution increases. If the aspect ratio changes, though, a percentage-based element can still warp: things that are supposed to be circles become ovals, and in a 2D game the art gets stretched out.
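Here's a toy comparison of the two sizing strategies (the function names and percentages are made up for illustration). The absolute element stays 50×50 pixels and so looks tiny on a 4K screen, while the relative one grows with the screen:

```python
def absolute_size(_width, _height):
    """Absolute sizing: a fixed 50x50 px box, no matter the resolution."""
    return (50, 50)

def relative_size(width, height):
    """Relative sizing: 5% of screen width by 7% of screen height."""
    return (round(width * 0.05), round(height * 0.07))

for w, h in [(1280, 720), (3840, 2160)]:
    print((w, h), absolute_size(w, h), relative_size(w, h))
# (1280, 720)  (50, 50) (64, 50)
# (3840, 2160) (50, 50) (192, 151)
```

You can also see the warping problem hiding in the relative version: because it scales width and height by separate percentages, a square element stops being square if the aspect ratio changes.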

For 2D games all of the elements are a specific pixel size, unless they use a vector format (which works the same way as the 3D case I described above). Assuming they are a fixed pixel size, upscaling means running an algorithm that tries to enlarge the image while keeping it looking like the original. None of these algorithms are perfect, and you can only upscale a 2D image so far before it starts getting messed up.
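The simplest of those algorithms is nearest-neighbor: every output pixel just copies the closest input pixel. Here's a minimal sketch on a tiny made-up 2×2 "sprite"; real upscalers like bilinear, bicubic, or the pixel-art-specific HQx/xBR filters blend or pattern-match instead of copying, which is where the imperfections creep in:

```python
def nearest_neighbor_upscale(image, factor):
    """Upscale a 2D grid of pixel values by an integer factor,
    copying each input pixel into a factor-by-factor block."""
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

sprite = [
    [0, 1],
    [1, 0],
]
for row in nearest_neighbor_upscale(sprite, 2):
    print(row)
# [0, 0, 1, 1]
# [0, 0, 1, 1]
# [1, 1, 0, 0]
# [1, 1, 0, 0]
```

Notice nothing new is invented: the picture is just bigger and blockier, which is exactly the "you can only go so far" limit mentioned above.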

For movies, if it was shot on film they go back to the original master and scan it in at a higher resolution. Film is an analog format that doesn't have a fixed pixel resolution; the image is stored physically on the medium, so a better scan recovers more detail. Sometimes when they shot on film they knew it would only ever be shown on TV, so they framed shots for 4:3. This happened with Star Trek: The Next Generation. Just outside the 4:3 frame were the edges of the set, crew members, lights, and other things, so even though the film could have supported widescreen, the remaster had to stay 4:3.

If it was shot on video or digitally, they can't do this, as there's nothing to scan in: the recording is already a specific resolution. Upscaling is still possible, but you can't get extra detail that doesn't exist… or can you?

There's a newer method called super resolution that uses AI to upscale the image. Unlike traditional upscaling methods, super resolution can plausibly create detail out of nothing. DLSS is a real-time implementation of super resolution; it uses information from multiple frames to intelligently construct the output frame. There are many AI picture upscalers out there as well. It's also possible to do this with video, but I've not seen any publicly usable implementations yet.