Does viewing a livestream consume more data than viewing a normal video?


Let’s say both the livestream and the video are viewed in 480p. How much data would be consumed in 1 hour for each?


ELI5: All things being equal about the video content, you would consume the same amount of data in a live stream as you would downloading a standard file. However, realistically, you won’t be able to make all things equal.

Now, let’s get into the long answer.

First, your question is a bit too vague to answer definitively. It asks about 480p content – but that measure speaks only to the resolution of the video, which is just one factor in a video’s total size. These shorthand measurements denote the pixel dimensions of the source content; 480p, 720p, and 1080p give us the height of the frame in pixels. From there, you can derive the width according to the aspect ratio (16:9, or 16 pixels wide per 9 pixels tall). So, 480p is a frame measurement; at 16:9, that’s 854px wide x 480px tall. If we do that math, that gives us 409,920px total for each frame.

Next, we need to consider color depth. For each pixel in that frame, we need to store a color value. Typically, for web content, we can assume 32-bit color. I’m not going to break that down because it gets weedy really fast; for our sake, all you need to know is that 32-bit color = 4 bytes per pixel. So now we know each frame will be 409,920px * 4 bytes/px, which brings us to roughly 1.6MB per frame (rounding for simplicity here).

Great, so now we know each frame in our video is about 1.6MB; the next thing we can tweak is our video’s frame rate. If you aren’t familiar, think of frame rate as the number of still images displayed on screen per 1 second of playback (hence measures like 30 FPS, or “frames per second”). So if we assume our video is 30 FPS * 1.6MB/frame, each second of the video would be ~49MB, or ~3GB/minute, or ~177GB per hour.
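If you want to sanity-check that arithmetic, here it is as a few lines of Python, using the same assumptions as above (854x480, 32-bit color, 30 FPS):

```python
# Sanity check of the raw-video math above.
# Assumptions: 480p at 16:9 (854x480), 32-bit color, 30 FPS.
width, height = 854, 480
bytes_per_pixel = 4            # 32-bit color
fps = 30

frame_bytes = width * height * bytes_per_pixel     # ~1.6 MB per frame
per_second = frame_bytes * fps                     # ~49 MB per second
per_hour_gb = per_second * 3600 / 1e9              # ~177 GB per hour

print(f"{frame_bytes / 1e6:.1f} MB/frame")
print(f"{per_second / 1e6:.0f} MB/s")
print(f"{per_hour_gb:.0f} GB/hour of raw video")
```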

That’s the measure of our **raw video** content. All this math gets lumped into a single measure we call the video’s bitrate. You can pull any of these levers to reduce your base video size: reduce the resolution, shrink the color palette, or reduce the FPS of the content.
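Here’s the same math wrapped into a function, so you can see how pulling each lever scales the result; the alternate settings below are just illustrative:

```python
def raw_gb_per_hour(width, height, bytes_per_pixel, fps):
    """Uncompressed data rate; every argument is one of the levers above."""
    return width * height * bytes_per_pixel * fps * 3600 / 1e9

print(raw_gb_per_hour(854, 480, 4, 30))   # baseline: ~177 GB/hour
print(raw_gb_per_hour(854, 480, 4, 15))   # halve the FPS -> half the data
print(raw_gb_per_hour(640, 360, 4, 30))   # drop to 360p -> ~44% less data
```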

Here’s where it gets fun. No one would serve you raw content on the web; the data usage would eat the video provider, and likely the customer’s internet data cap, alive. So, we need to transcode the raw video into a compressed format that can be transmitted at a much smaller file size. To do this, we use video codecs to encode and decode the file. We need access to the codec on both sides of the equation; the video provider needs to be able to encode the format, and you, on the other end, need to be able to decode the content back into a watchable form. This is where streaming content runs into its biggest disadvantages.

To encode a video, you’re using an algorithm to store data more efficiently; in other words, you’re doing math. Codecs have been in use for a long time, and the technology keeps getting better. However, more modern compression formats require more complex mathematics to produce the optimized file. That complexity adds compute time, which interferes with the ability to stream in real time.

For pre-rendered content, most websites with a solid compression strategy will encode content into three different formats. You must provide video in multiple formats because your users’ devices/browsers won’t all support the same codecs, so you give each device the option of downloading content in a format it can handle. Right now, that shakes out to three codecs: one called VP9, which drives most of the Chrome/Google world; another called H.265, which gives you coverage for the Apple/Safari world; and finally a third, called H.264, which covers legacy devices and content.
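As a rough sketch (not any particular provider’s actual pipeline), producing those three renditions with the ffmpeg command-line tool might look something like this; the file names and quality settings are made-up examples:

```python
import subprocess

SRC = "master.mp4"   # hypothetical source file

# One rendition per codec family: VP9 (Chrome/Google), H.265 (Apple/Safari,
# which wants the hvc1 tag), and H.264 (legacy fallback).
renditions = [
    ["ffmpeg", "-i", SRC, "-c:v", "libvpx-vp9", "-crf", "31", "-b:v", "0",
     "-c:a", "libopus", "out_vp9.webm"],
    ["ffmpeg", "-i", SRC, "-c:v", "libx265", "-crf", "28", "-tag:v", "hvc1",
     "-c:a", "aac", "out_h265.mp4"],
    ["ffmpeg", "-i", SRC, "-c:v", "libx264", "-crf", "23", "-preset", "medium",
     "-c:a", "aac", "out_h264.mp4"],
]

for cmd in renditions:
    subprocess.run(cmd, check=True)   # the modern codecs take much longer to encode
```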

The file-size savings from modern H.265 or VP9 are huge – as a rule of thumb, we’d expect VP9/H.265 content to come out at roughly half the size of the same content compressed in H.264, at comparable quality. It’s a big deal when you consider that H.264 is already a compressed format.
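To put some ballpark numbers on the original question: if we assume a typical 480p H.264 stream runs around 2.5 Mbps (an assumption; real services vary), the hourly totals come out to roughly 1.1GB, and about half that for the modern codecs:

```python
# Assumed bitrates -- real services vary. ~2.5 Mbps is a common ballpark
# for 480p H.264, with VP9/H.265 at roughly half for comparable quality.
def gb_per_hour(mbps):
    return mbps * 1e6 * 3600 / 8 / 1e9   # megabits/s -> GB over one hour

print(f"H.264 @ 2.5 Mbps:      {gb_per_hour(2.5):.2f} GB/hour")    # ~1.13
print(f"VP9/H.265 @ 1.25 Mbps: {gb_per_hour(1.25):.2f} GB/hour")   # ~0.56
```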

So, when you’re encoding video for playback later, it’s very much worth encoding in these formats, because you’re not restricted in the amount of time you have to produce the content.

With streaming content, however, you want to prepare the video in “real time,” and you have several latency issues to deal with. We first need to transmit the data to a central server (such as Twitch), then make any encoding changes we need to optimize the content, and then transmit the data back over the wire to viewers. So we need to minimize that encoding time as much as possible to turn around a smooth stream for playback. This drives us away from those more complex and efficient compression strategies.
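To make that tradeoff concrete, here’s an illustrative pair of x264 encodes. ffmpeg’s `-preset` and `-tune zerolatency` options are real, but the exact settings any given streaming service uses aren’t public, so treat the specifics as placeholders:

```python
import subprocess

# Offline/VOD encode: a slow preset spends CPU time to shrink the file.
vod = ["ffmpeg", "-i", "in.mp4", "-c:v", "libx264",
       "-preset", "veryslow", "-crf", "23", "vod.mp4"]

# Live-style encode: a fast preset plus zero-latency tuning gives up
# compression efficiency to keep the encoder ahead of real time.
live = ["ffmpeg", "-i", "in.mp4", "-c:v", "libx264",
        "-preset", "veryfast", "-tune", "zerolatency",
        "-b:v", "2500k", "live.mp4"]

for cmd in (vod, live):
    subprocess.run(cmd, check=True)
```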

So… while it’s theoretically possible to create a 1:1 comparison between streamed and prerecorded content, in practice, the realities of the problem are going to get in the way.

(PS: I’ve ignored in this example that some streamers may use client-side compression to reduce the size of their own broadcast as well. I can’t speak to this; I’ve never dug into the tech enough to say, so I’ve worked on the assumption that we’re dealing with a raw broadcast.)

Thank you for coming to my TED Talk.
