Does viewing livestreams consume more data than doing so with a normal video?


Let’s say both the livestream and the video are viewed in 480p. How much data would be consumed in 1 hour for each?


12 Answers

Anonymous 0 Comments

I’d say the compression algorithm would make a difference. The one with the “better” compression algorithm, while maintaining indistinguishable video quality, would consume less.

As for how much data, again, it would depend on which one is compressed more efficiently. 480p is usually not that expensive in terms of size, so I would say less than a gigabyte for the hour in both cases?

Anonymous 0 Comments

No. The reason is that one way or another the data is streamed at, say, 4 to 8 Mbit/s; the only difference is that with streaming, the content is being uploaded at the same time it is being watched.

It’s like drinking water from a fountain and drinking water from a bottle. Either way you drank the same amount of water.

The only differences come in if the stream is only available at lower bitrates/resolutions, such as 480p/720p. But if the bitrate (quality) is the same, there is no difference.
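If you want a ballpark for the original question, the arithmetic is just bitrate multiplied by time, live or not. A minimal sketch (the bitrates here are ballpark guesses, not any platform’s real settings):

```python
# Rough sketch: data consumed = bitrate x time, regardless of live vs. recorded.
# The example bitrates are ballpark guesses, not real platform specs.

def gb_per_hour(bitrate_mbps):
    # megabits/second -> gigabytes/hour: Mbit/s * 3600 s / 8 bits per byte / 1000 MB per GB
    return bitrate_mbps * 3600 / 8 / 1000

print(gb_per_hour(1.0))   # ~0.45 GB/hour, roughly what a 480p stream might use
print(gb_per_hour(5.0))   # ~2.25 GB/hour, closer to 1080p territory
```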

Anonymous 0 Comments

A video that is encoded as a whole has more compression options, such as referencing frames that come later in the video (so-called B-frames). A livestream can’t scan ahead and see what frames are coming in order to optimize compression. Hence a livestream (assuming the same resolution and apparent visual quality) will consume more data.
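As a toy illustration of why future frames help (this is not how a real codec like H.264 actually works, just the intuition), compare predicting a frame from the past only versus from both the past and the future:

```python
# Toy sketch (not a real codec): shows why access to future frames helps.
# We "encode" each frame as a residual against a prediction. With only past
# frames available (livestream case) the prediction is the previous frame;
# with past *and* future frames available (pre-encoded video) the prediction
# is the average of the two, which usually leaves a smaller residual for
# smooth motion. All numbers here are made up for illustration.

def residual_size(frame, prediction):
    # Pretend "size" is just the sum of absolute pixel differences.
    return sum(abs(a - b) for a, b in zip(frame, prediction))

# Three frames of a tiny 4-"pixel" video with steady motion.
f0 = [10, 20, 30, 40]
f1 = [15, 25, 35, 45]   # the frame we want to encode
f2 = [20, 30, 40, 50]

# Livestream-style: predict f1 only from the past (f0).
forward_only = residual_size(f1, f0)

# Offline-style: predict f1 from past and future (average of f0 and f2).
bidirectional = residual_size(f1, [(a + b) / 2 for a, b in zip(f0, f2)])

print(forward_only)    # 20  -> larger residual, more bits to send
print(bidirectional)   # 0.0 -> perfect prediction here, almost nothing to send
```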

Anonymous 0 Comments

1 hour of 720p 60fps video will consume the same amount of data regardless of whether it is live or not (assuming the same compression is used for both). However, there is a small difference.

With a livestream, the goal is to deliver the video to you constantly and quickly. So you will usually have a consistent bandwidth consumption with livestream (more or less).

Normal videos have one advantage – if your internet is fast enough, the player can download part of the video early and store it in memory until it is ready to play. This is called “buffering” because the video player is building a buffer against short connection disruptions. Outside of technical use, the term buffering is usually only used when the buffer runs out and the video has to wait for more of it to be downloaded before it can resume. Technically, however, videos are always buffering.

A player like YouTube will allocate a certain buffer of, for example, 15 seconds. That whole buffer fills quickly at first, taking up more bandwidth temporarily. Once the buffer is full, it refills as it is consumed, and at that point the bandwidth usage starts to resemble a livestream. So there may be one quick “burst” and then normal bandwidth usage from there.

Of course, a buffer could be done differently. I could use a 15 second buffer, and only pull data when there are 10 seconds or fewer left. This would result in bursts of bandwidth every 5 seconds as the buffer refills. But with livestreams, a buffer isn’t really usable since you can’t grab “future content” that hasn’t happened yet.
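Here is a minimal sketch of that 15-second/10-second buffering scheme (the numbers are the hypothetical ones above, not any real player’s settings):

```python
# Minimal sketch of the buffering idea described above (hypothetical numbers,
# not any real player's code): keep up to 15 seconds buffered, and only go
# back to the network once 10 seconds or fewer remain.

BUFFER_TARGET = 15   # seconds of video we try to keep on hand
REFILL_BELOW = 10    # start downloading again at or below this level

def simulate(playback_seconds):
    buffered = 0.0
    downloads = []                       # (time, seconds fetched), for illustration
    for t in range(playback_seconds):
        if buffered <= REFILL_BELOW:
            fetched = BUFFER_TARGET - buffered
            buffered += fetched          # burst: fill the buffer back to the target
            downloads.append((t, fetched))
        buffered -= 1.0                  # playback consumes one second per second
    return downloads

# One big initial burst, then ~5-second bursts roughly every 5 seconds,
# as described above.
print(simulate(30))
```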

Compression is another issue – a livestream can’t spend too much time compressing its data or it will add delay, whereas a normal video can spend a bit more time getting better compression. So, a normal video might use slightly less data thanks to better compression, but the difference between any two compression algorithms is going to be fairly small in most cases, even over the course of an hour.

TL;DR: there are small differences in how the data is delivered, but ultimately, the total data over a longer period of time is going to be just about the same.

Anonymous 0 Comments

If you are talking about the bandwidth used to send one pre-recorded video vs. one from a livestream, both from the same server to the same device, then the amount of data is largely the same (with some small variance based on the video actually being sent).

However, the way most video platforms are designed, they make use of Content Delivery Networks (CDNs) to deliver the video. The source server sends the video content to several servers strategically placed around the world. When a consumer requests the video, they are directed to a nearby server to fetch it, so that there is less network congestion, lower latency, and less contention of resources.

Congestion – How much traffic is filling up the lanes

Latency – Time waiting. It takes longer to send data thousands of miles than tens of miles

Contention of Resources – One server can get overloaded with too many requests. Hundreds of servers around the world can handle many more requests.

So, for a pre-recorded video, there is a higher likelihood that the content has already been delivered to a nearby CDN satellite server by the time you start to watch it. If 1,000 people in your city all watch the same video in the same hour, they are all just fetching it from nearby.

A live-streamed video can also be delivered by a CDN, but it’s usually less efficient, and there’s a chance that everybody watching it is having to retrieve the data from the source. So more data is congesting larger portions of the entire internet, instead of just congesting local pipes.
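The “pick a nearby server” idea boils down to something like the sketch below (the hostnames and latencies are invented; real CDNs steer you with DNS and anycast rather than client-side code like this):

```python
# Rough sketch of nearest-edge selection. Hostnames and round-trip times are
# made up for illustration only.

edge_servers = {
    "edge-us-east.example.com": 80,    # pretend round-trip times in milliseconds
    "edge-eu-west.example.com": 25,
    "edge-ap-south.example.com": 190,
}

def pick_edge(latencies_ms):
    # Lower latency usually also means fewer congested hops in between.
    return min(latencies_ms, key=latencies_ms.get)

print(pick_edge(edge_servers))   # edge-eu-west.example.com
```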

Anonymous 0 Comments

Generally speaking there’s no difference, as the data needs to get to your computer one way or another. However, non-live videos may benefit from slightly better compression, since the compression algorithm can look at the whole file and optimize based on that.

Anonymous 0 Comments

I assume by “normal video” you mean something like regular YouTube or Netflix.

*At the same perceived video quality, all else being equal*, a Netflix stream would consume less than a YouTube video, and a livestream would consume significantly more.

The reason is that at Netflix, the same video is watched a lot, so they can afford to put a huge amount of compute power into compressing the video in the best way possible.

A live stream, on the other hand, needs to be compressed in real time, no time to do it well.

However, in reality, the chosen video quality will differ a lot. So Netflix may not actually use less data – but it may look significantly better than the same content would if it were livestreamed.

The content itself also matters a lot. A quiet meditative view of a sunset with barely anything moving will be much smaller than a fast paced video game stream (again, assuming the same quality – you can make the game stream just as small but it’ll look like crap).

So: in theory, yes, in practice, the other factors will be more important. A 720p Twitch live stream looks better than a 720p YouTube live stream of the exact same game, but will likely also use much more data.

Anonymous 0 Comments

Simply put, this depends heavily on the codec used (and the bitrate, of course), but given the same codec and bitrate, watching a livestream or a normal video should consume the same amount of data.

Anonymous 0 Comments

ELI5: All things being equal about the video content, you would consume the same amount of data in a live stream as you would downloading a standard file. However, realistically, you won’t be able to make all things equal.

Now, let’s get into the long answer.

First, your question is a bit too vague to answer definitively. It asks about 480p content; however, that measure speaks only to the resolution of the video, which is just one factor in a video’s total size. The shorthand measurements denote the pixel dimensions of the source content; 480p, 720p, and 1080p give us the height of the frame in pixels. You can then derive the width from the aspect ratio (16:9 for widescreen, i.e. 16 pixels wide for every 9 pixels tall). So 480p at 16:9 is a frame of 854px wide x 480px tall. If we do that math, that gives us 409,920px (roughly 410,000px) total for each frame.

Next, we need to consider the color depth. For each pixel in that frame, we need to store a color value. Typically, for web content, we can assume 32-bit color. I’m not going to break that down because it gets weedy really fast; for our sake, all you need to know is that 32-bit color = 4 bytes per pixel. So each frame is 409,920px * 4 bytes/px, which brings us to roughly 1.6MB per frame (rounding for simplicity here).

Great, so now we know each frame in our video is about 1.6MB; the next lever we can tweak is our video’s frame rate. If you aren’t familiar, think of frames as the number of still images displayed on screen per second of playback (hence measures like 30 FPS, or “frames per second”). So if we assume our video is 30 FPS * 1.6MB/frame, each second of the video would be roughly 49MB, or ~3GB/minute, or ~177GB per hour.

That’s the measure of our **raw video** content. All this math gets lumped into a single measure we call the bitrate. You can tweak any of these levers to reduce your base video size: reduce the resolution, shrink the color palette, or reduce the FPS of the content.
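If you want to play with those levers yourself, the whole back-of-the-envelope calculation fits in a few lines (854x480 assumes 16:9 “480p”; swap in 640x480, a lower color depth, or a different frame rate to see how each lever moves the raw size):

```python
# The same back-of-the-envelope math as above, as a sketch you can tweak.

width, height = 854, 480          # 480p at a 16:9 aspect ratio
bytes_per_pixel = 4               # 32-bit color, as assumed above
fps = 30

frame_bytes = width * height * bytes_per_pixel        # ~1.6 MB per frame
per_second = frame_bytes * fps                        # ~49 MB per second
per_hour_gb = per_second * 3600 / 1_000_000_000       # ~177 GB per hour, uncompressed

print(f"{frame_bytes / 1e6:.1f} MB/frame, {per_second / 1e6:.0f} MB/s, {per_hour_gb:.0f} GB/hour")
```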

Here’s where it gets fun. No one would serve you raw content on the web. The data usage would eat the video provider and likely the customer’s internet data cap alive. So, we need to use transcoding to move the raw video into a compressed format that can be transmitted in a smaller file size. To do this, we use video codecs to encode and decode the file into this compressed format. We need to have access to the codec on both sides of the equation; this means the video provider needs to be able to encode the format, and you, on the other end of this equation, need to be able to decode the content back into a readable form. This is where streaming content runs into its biggest disadvantages.

To encode a video, you’re using an algorithm to store data more efficiently; in other words, you’re doing math. Codecs have been in use for a long time, and the technology keeps getting better. However, more modern compression formats require more complex mathematics to produce the optimized file. This complexity adds compute time and interferes with the ability to stream in real time.

For pre-rendered content, most websites will encode content into three different formats if they have a solid compression strategy. You must provide video in multiple formats because some users’ devices/browsers won’t support certain codecs, so you give each device the option of downloading content in a format it can handle. Right now, that shakes out to three codec strategies: one called VP9, which drives most of the Chrome/Google world; another called H.265, which covers the Apple/Safari world; and finally a third, called H.264, which covers legacy devices and content.

The file-size savings from using modern H.265 or VP9 are huge – as a rough rule of thumb, you can expect VP9/H.265 content to be around half the size of the same content compressed in H.264, sometimes even less. That’s a big deal when you consider that H.264 is already a compressed format.
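Serving whichever of those three formats the client can decode is conceptually just a preference-ordered lookup, something like this sketch (the codec names match the ones above; everything else is hypothetical):

```python
# Sketch of "serve the best codec the client can decode". The preference order
# follows the three strategies described above; all other details are invented.

AVAILABLE = ["vp9", "h265", "h264"]          # what we pre-encoded, best first

def pick_codec(client_supports):
    for codec in AVAILABLE:
        if codec in client_supports:
            return codec
    return None

print(pick_codec({"h264", "vp9"}))   # vp9  (modern Chrome-style client)
print(pick_codec({"h264", "h265"}))  # h265 (Safari-style client)
print(pick_codec({"h264"}))          # h264 (legacy fallback)
```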

So, when you’re encoding video to playback later, it’s very much worth encoding in these formats because you’re not restricted in the amount of time you have to produce content.

With streaming content, however, you want to prepare the video in “real time,” and you have several latency issues to deal with. First, we need to transmit the data to the central server (such as Twitch), then make any encoding changes needed to optimize the content, and then transmit the data back over the wire to viewers. So we need to minimize that encoding time as much as possible to turn around a smooth stream for playback. This drives us away from those more complex and efficient compression strategies.
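One way to think about why encoding time matters so much for live content is as a latency budget: every extra second spent on fancier compression shows up as extra delay for viewers. A toy example (all numbers invented):

```python
# Toy latency budget for a live pipeline; every stage adds delay before viewers
# see the frame, so slow, high-quality encoding isn't really an option.
# All numbers are invented for illustration.

pipeline_ms = {
    "capture + upload to ingest server": 300,
    "server-side transcode (fast preset)": 150,
    "CDN delivery + player buffer": 2000,
}

print(sum(pipeline_ms.values()), "ms behind live, give or take")
```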

So… while it’s theoretically possible to create a 1:1 comparison between streamed and pre-recorded content, in truth, real-world constraints are probably going to get in the way.

(PS: I’ve ignored in this example that some streamers may use client-side compression to reduce the streamer’s broadcast size as well. I can’t speak to this; I’ve never dug into the tech enough to say, so I’ve worked on the assumption that we’re dealing with a raw broadcast.)

Thank you for coming to my TED Talk.

Anonymous 0 Comments

A livestream will use up more bandwidth than a video. The reason is that a video is complete before it is streamed, so it can be compressed better. Video compression using something like H.264 examines the entire video and uses “past” and “future” frames to compress each frame; i.e., frame 5000 can be expressed in terms of frames 4990 and 5010 plus some transitional data.

Let’s imagine that you have a livestream of someone skiing down a hill, and you also have a video of the exact same thing. In a compressed, completed video, frame 1000 is a picture of the skier at the top of the hill and frame 2000 is a picture of the skier at the bottom of the hill. Frame 1500 can be compressed by reusing the bottom of the hill with no ski track from frame 1000, and the top of the hill with the track the skier left behind from frame 2000, since that track already exists there. Then you add in extra data like the skier halfway down the hill.

If you were to livestream that, you do not have the future data of the hill with the ski track in it because it doesn’t exist yet.

I had to dig deep into streaming protocols and H.264 compression for my job last year (I write network testing software and wrote a video streaming server to stream H.264 compressed video from MP4 files over RTP/RTSP)