Let’s consider a simple example. Imagine you have a large painting, but you only have a small window to look at it. You can’t see the whole painting at once, but you can see different parts of it by moving the painting behind the window, right?
Now, if you have a 4K video (which is like the large painting), but your iPhone screen has less than 4K resolution (which is like the small window), you can still watch the video. But you can’t see all of its detail at once, because your screen doesn’t have enough pixels to show everything the 4K video contains. The video has to be squeezed down to fit your screen.
This process is called “downscaling” or “downsampling”, and it’s done in real-time by your phone’s hardware. It takes the detail from multiple pixels in the 4K video and merges it into fewer pixels to be displayed on your screen. This can still result in a sharper image than if the video was originally in a lower resolution, because it’s using more source data to generate each pixel, which can result in more accurate colors and smoother gradients.
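If you like seeing the math as code, here’s a rough sketch of that “merge several source pixels into one screen pixel” idea, written in Python/NumPy purely for illustration (the function name and example sizes are made up; the real scaler does this in dedicated hardware, not a slow Python loop):

```python
# Toy box-filter downscaler: every output pixel is the average of the block
# of source pixels that lands on it. Purely illustrative and slow.
import numpy as np

def downscale_box(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    src_h, src_w, channels = frame.shape
    out = np.empty((out_h, out_w, channels), dtype=frame.dtype)
    for y in range(out_h):
        # Which source rows does this output row cover?
        y0 = y * src_h // out_h
        y1 = max(y0 + 1, (y + 1) * src_h // out_h)
        for x in range(out_w):
            x0 = x * src_w // out_w
            x1 = max(x0 + 1, (x + 1) * src_w // out_w)
            # Merge that whole block of source pixels into one output pixel.
            out[y, x] = frame[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

# A fake 4K UHD frame squeezed down to an iPhone-like 2556x1179 panel.
frame_4k = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
small = downscale_box(frame_4k, 1179, 2556)
print(small.shape)  # (1179, 2556, 3)
```

Because each screen pixel is built from several source pixels instead of just one, the result can look cleaner than a recording made at the lower resolution, which is the point above.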
So, even though your screen doesn’t have 4K pixels, it can still display a 4K video (at a lower resolution). But you won’t be able to see the full level of detail that you’d see if you were viewing it on a 4K screen.
Imagine a square painting. Cut it into 4000 pieces from top to bottom, and then again from left to right. Remove every other square from left to right, and then again top to bottom. Push the remaining pieces together. You just went from 4000×4000 pieces (pixels) to 1/4th the size, 2000×2000 pixels. Your phone does this for each frame.
Note: 4K means 3840 × 2160 pixels. Images get every _n_th pixel dropped, based on the ratio between the source resolution and the target screen resolution.
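In code, that “remove every other piece” trick looks like this (a toy NumPy sketch, not what the hardware literally does; real scalers usually filter and blend rather than just throwing pixels away):

```python
# Drop every other row and column, as in the painting analogy.
import numpy as np

painting = np.random.randint(0, 256, (4000, 4000, 3), dtype=np.uint8)  # 4000x4000 pieces
smaller = painting[::2, ::2]     # keep every other row and every other column
print(smaller.shape[:2])         # (2000, 2000): a quarter of the pieces

# The same trick on a 4K UHD frame lands exactly on 1080p.
frame_4k = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
print(frame_4k[::2, ::2].shape[:2])  # (1080, 1920)
```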
Squeezes it down to size. Just like how it stretches stuff up to size when something lower resolution is played. It would stand no chance playing things back otherwise, because Apple mobile device resolutions are really nonstandard.
The big boy word for squeezing is filtering, and the big boy word for stretching is interpolation. There are several approaches, all with their own big boy names, like bilinear, nearest neighbor and so on. A general name would be scaling or (re)sampling. It’s just math; if you want to learn more, search for the formulas on Wikipedia or algorithm walk-throughs on YouTube.
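If you’d rather read code than formulas, here is a rough NumPy sketch of those two approaches (the function names and example sizes are mine, for illustration only; they are not any real library’s API):

```python
# Toy implementations of nearest-neighbor and bilinear resampling.
import numpy as np

def nearest_neighbor(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Each output pixel simply copies the closest source pixel."""
    src_h, src_w = img.shape[:2]
    ys = np.arange(out_h) * src_h // out_h
    xs = np.arange(out_w) * src_w // out_w
    return img[ys[:, None], xs[None, :]]

def bilinear(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Each output pixel blends its four nearest source pixels by distance."""
    src_h, src_w = img.shape[:2]
    ys = np.linspace(0, src_h - 1, out_h)
    xs = np.linspace(0, src_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, src_h - 1), np.minimum(x0 + 1, src_w - 1)
    wy, wx = (ys - y0)[:, None, None], (xs - x0)[None, :, None]
    img = img.astype(float)
    top = img[y0[:, None], x0[None, :]] * (1 - wx) + img[y0[:, None], x1[None, :]] * wx
    bot = img[y1[:, None], x0[None, :]] * (1 - wx) + img[y1[:, None], x1[None, :]] * wx
    return (top * (1 - wy) + bot * wy).astype(np.uint8)

# Stretch a 720p frame up to an iPhone-sized 2556x1179 screen both ways.
frame_720p = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
print(nearest_neighbor(frame_720p, 1179, 2556).shape)  # (1179, 2556, 3)
print(bilinear(frame_720p, 1179, 2556).shape)          # (1179, 2556, 3)
```

Nearest neighbor is the cheapest and blockiest; bilinear blends neighbors, so edges look smoother but slightly softer.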
Through a uniquely human ability called bullshitting. Also known as lying without directly lying. This allows someone to make a claim or statement that is believable enough that the other person believes the statement to be true, even if it is false.
If I have a single pixel. A 1×1 display. I could downscale a 4K video and display it on that single pixel. Does that mean my single 1×1 display can display 4K video? Sure! If I’m bullshitting. In reality, no, it does not display a 4K video at 4K resolution.
In this same way I could say that I have a 10-foot vertical jump – If I can “downsample” a foot to be an inch.
An iPhone can shoot in 4K, and can play back 4K video, but its display is not 4K. Leveraging our innate ability to bullshit, we can now *sell* someone something. “You want to watch that in 4K on your iPhone, right? Better pay for that mega-super-primo plan, otherwise you’re going to be missing out on all that great 4K video!” This may not be 100% false; newer iPhones can downscale and play back 4K video, but they’re bullshitting you a bit if they get you to believe it’s better than video at your native resolution and a similar bitrate*.
* I’m kind of excited for someone to school me on the pros/cons of resolution vs bitrate above a display’s native resolution.
A 4K picture is literally just a rectangle holding 4x the pixels of a 1080p picture. Every 2×2 block of four pixels can be combined (or averaged) into one pixel to produce a 1080p image, and the overall shape stays the same (a rectangle with the same proportions).
This averaging creates an image with less detail (think jagged edges or less defined color transitions), but since the pixel count is reduced anyway on a non-4K display, these defects won’t be as noticeable, and are nearly invisible, on a native 1080p screen.
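As a quick sketch of that averaging (NumPy, purely illustrative): a 3840×2160 frame is exactly a grid of 2×2 blocks laid out 1920×1080, so averaging each block gives a 1080p image.

```python
# Average each 2x2 block of a 4K UHD frame down to one 1080p pixel.
import numpy as np

frame_4k = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
# Split rows and columns into pairs, then average within each 2x2 block.
frame_1080p = frame_4k.reshape(1080, 2, 1920, 2, 3).mean(axis=(1, 3))
print(frame_1080p.shape)  # (1080, 1920, 3)
```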
Video has 2 specs: The resolution it’s *recorded* in, and the resolution it’s *displayed* in.
The software that turns the 1’s and 0’s of a digital recording into a picture we can see on the screen has the ability to *scale* the picture to fit the display. It’s taking the full 4096×2160 4K source recording and reducing it down to the 2796×1290 display resolution of your screen.
You’re not actually seeing 4K video (2160p); you’re seeing roughly 1290p, because that’s all your screen can display.
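Here’s roughly what the player’s scaler is doing, sketched with Pillow (this assumes Pillow 9.1+ for the Resampling enum; the “frame” is just a blank stand-in for one decoded video frame):

```python
# Resize one decoded 4K frame down to the display resolution.
from PIL import Image

frame = Image.new("RGB", (4096, 2160))  # stand-in for a decoded 4K frame
on_screen = frame.resize((2796, 1290), Image.Resampling.LANCZOS)
print(on_screen.size)  # (2796, 1290)
```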