Does viewing a livestream consume more data than viewing a normal video?


Let’s say both the livestream and the video are viewed in 480p. How much data would be consumed in 1 hour for each?


I’d say the compression algorithm would make a difference. The one with the “better” compression algorithm, while maintaining indistinguishable video quality, would consume less.

As for how much data, again, it would depend on which one is compressed more. 480p is usually not that expensive in terms of size, so I would say less than a gigabyte in both cases?

No. The reason is that, one way or another, the data is streamed at 4 to 8 Mbit/s; the only difference is that with a livestream the content is being uploaded at the same time it is being watched.

It’s like drinking water from a fountain and drinking water from a bottle. Either way you drank the same amount of water.

The only differences arise if the stream is only available at lower bitrates/resolutions, such as 480p or 720p. But if the bitrate (quality) is the same, there is no difference.
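To put numbers on this: at a fixed bitrate, total data is just bitrate times duration, whether the video is live or pre-recorded. A quick sketch using the 4 to 8 Mbit/s figures quoted above (actual bitrates vary by codec and content):

```python
# Data consumed is bitrate x time, regardless of whether the video is
# live or pre-recorded. The 4 and 8 Mbit/s figures are the ones quoted
# above; real-world bitrates vary.

def gigabytes_per_hour(bitrate_mbit_s: float) -> float:
    """Total data for one hour of playback at a constant bitrate."""
    bits = bitrate_mbit_s * 1_000_000 * 3600  # bits in one hour
    return bits / 8 / 1_000_000_000           # bits -> bytes -> GB

print(gigabytes_per_hour(4))  # 1.8 GB
print(gigabytes_per_hour(8))  # 3.6 GB
```

So an hour at those bitrates lands between roughly 1.8 and 3.6 GB either way.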

A video that is encoded as a whole has more compression options, such as two-pass encoding and frames that reference future frames. A livestream can’t scan far ahead to see what frames are coming in order to optimize its compression. Hence a livestream (assuming the same resolution and apparent visual quality) will consume more data.

1 hour of 720p 60fps video will consume the same amount of data regardless of whether it is live or not (assuming the same compression is used for both). However, there is a small difference.

With a livestream, the goal is to deliver the video to you constantly and quickly. So you will usually have a consistent bandwidth consumption with livestream (more or less).

Normal videos have one advantage: if your internet is fast enough, the player can download part of the video early and store it in memory until it is ready to play. This is called “buffering” because the video player is building a buffer against short connection disruptions. Outside of technical use, the term buffering is usually only used when the buffer runs out, and the video has to wait for more of it to be downloaded before it can resume. Technically, however, videos are always buffering.

On a player like YouTube, the player will allocate a certain buffer of, for example, 15 seconds. That whole buffer fills quickly, taking up more bandwidth temporarily. Once the buffer is full, it refills as it is consumed. At this point, the bandwidth starts to resemble a livestream. So you may see one quick “burst” and then steady bandwidth usage from then on.

Of course, a buffer could be managed differently. I could use a 15-second buffer and only pull data when there are 10 seconds or fewer left. This would result in bursts of bandwidth every 5 seconds as the buffer refills. But with livestreams, a large buffer isn’t really usable, since you can’t grab “future content” that hasn’t happened yet.
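The refill strategy above can be sketched as a toy simulation. The 15-second buffer and 10-second threshold are the numbers from the example; the names and the instant-refill assumption are illustrative, not how a real player works:

```python
# Toy simulation of the buffer strategy described above: a 15-second
# buffer that refills only once 10 seconds or fewer remain. Everything
# here is a simplified illustration, not a real video player.

BUFFER_MAX = 15  # seconds of video the buffer can hold
REFILL_AT = 10   # refill once this little remains

def simulate(playback_seconds):
    buffered = BUFFER_MAX          # start with a full buffer
    burst_times = []               # moments when a bandwidth burst happens
    for t in range(playback_seconds):
        buffered -= 1              # one second of video is consumed
        if buffered <= REFILL_AT:
            burst_times.append(t + 1)
            buffered = BUFFER_MAX  # assume the burst completes instantly
    return burst_times

print(simulate(30))  # bursts every 5 seconds: [5, 10, 15, 20, 25, 30]
```

The output shows exactly the pattern described: a burst of download activity every 5 seconds, rather than a steady trickle.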

Compression is another issue – a livestream can’t spend too much time compressing its data or it will add delay, whereas a normal video can spend a bit more time getting better compression. So, a normal video might use slightly less data thanks to better compression, but the difference between any two compression settings is going to be fairly small in most cases, even over the course of an hour.
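The same speed-versus-size tradeoff shows up in general-purpose compressors, which makes it easy to demonstrate. This uses zlib purely as an analogy (it is not a video codec): a fast, low-effort setting stands in for a live encoder, a slow, high-effort setting for an offline one:

```python
import zlib

# Analogy only: zlib is not a video codec, but it exposes the same
# tradeoff -- a fast, low-effort level (like a live encoder under time
# pressure) vs. a slow, high-effort level (like an offline encoder).
data = b"the same scene repeats frame after frame " * 2000

fast = zlib.compress(data, level=1)  # quick, minimal effort
slow = zlib.compress(data, level=9)  # slower, more thorough search

print(len(data), len(fast), len(slow))
assert len(slow) <= len(fast) <= len(data)
```

Level 9 spends more CPU time searching for redundancy and typically produces smaller output, just as an offline video encoder can afford to.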

TL;DR: there are small differences in how the data is delivered, but ultimately, the total data over a longer period of time is going to be just about the same.

If you are talking about the bandwidth used to send one pre-recorded video vs. one from a livestream, both from the same server to the same device, then the amount of data is largely the same (with some small variance based on the video actually being sent).

However, the way most video platforms are designed, they make use of Content Delivery Networks (CDNs) to deliver the video. The source server sends the video content to several servers strategically placed around the world. When a consumer requests the video, they are directed to a nearby server to fetch it, so that there is less network congestion, less latency, and less contention for resources.

Congestion – How much traffic is filling up the lanes

Latency – Time waiting. It takes longer to send data thousands of miles than tens of miles

Contention of Resources – One server can get overloaded with too many requests. Hundreds of servers around the world can handle many more requests.

So, for a pre-recorded video, there is a higher likelihood that the content has already been delivered to a CDN server near your location by the time you start to watch. If 1,000 people in your city all watch the same video in the same hour, they are all just fetching it from nearby.

A live-streamed video can also be delivered by a CDN, but it’s usually less efficient, and there’s a chance that everybody watching has to retrieve the data from the source. So more data is congesting larger portions of the entire internet, instead of just congesting local pipes.