Basically the original video has a lot of bits (i.e. details in the image) and gets scaled down (downsampled) to fit your monitor's resolution. The downsampling throws information away, so the new video carries far fewer bits/details than the original. The funny part is that the downsampled video still looks more detailed than footage made at your monitor's resolution, because the original (say, a 4K video) was so rich in information to begin with: each output pixel ends up being an average of several source pixels, which smooths out noise and artifacts.
I think graphics cards used to have this supersampling feature where the GPU renders the image at some higher resolution and then downsamples it, which gives much better quality than just rendering it once at the base resolution.
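Just as a rough idea of what that downsampling step amounts to, here's a minimal sketch assuming the frame is a NumPy array and a simple 2x2 box filter (real GPUs and video scalers use fancier filters, so this is only illustrative):

```python
import numpy as np

def box_downsample_2x(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one output pixel.

    img: array of shape (H, W) or (H, W, channels), H and W even.
    """
    h, w = img.shape[0] // 2, img.shape[1] // 2
    # Group pixels into 2x2 blocks and average them; each output pixel
    # therefore carries information from four source pixels.
    blocks = img[: 2 * h, : 2 * w].reshape(h, 2, w, 2, *img.shape[2:])
    return blocks.mean(axis=(1, 3))

# Example: a noisy "4K" frame averaged down to 1080p size looks cleaner
# per pixel than a frame produced directly at the lower resolution.
hi_res = np.random.rand(2160, 3840, 3)   # stand-in for a 4K frame
lo_res = box_downsample_2x(hi_res)       # 1080p-sized result
print(hi_res.shape, "->", lo_res.shape)
```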
I don't know if it's still around, but at least we've got this totally bogus deep-learning upscaling nonsense instead.