How does setting the bitrate of your video recording / stream software affect its output?


From a very basic perspective, the notion that lower bitrate = worse-quality video and higher bitrate = higher-quality video makes sense, but, given a specific numeric bitrate, how does your software determine “what to keep” within the bitrate it's been given?

**Edit:** I also understand that higher bitrates don't always equal better video if the software already gets to “keep” everything at the number it's been given. I'm more curious to know how the software decides what to cut as the number gets lower.


Saw this about 3-4 days ago; it might not be exactly what you asked, but I think you could extrapolate from the info.

The encoder has logic to decide which types of detail to keep at each quality level, and the quality level heavily reflects the output bitrate. By setting a “target” bitrate, the encoder keeps cutting detail from the video with that quality logic until the output is estimated to match the expected bitrate.
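A toy sketch of that feedback loop (all numbers here are made up for illustration; real encoders use far more sophisticated rate control): the encoder picks a quantization level, sees how big the frame comes out, and nudges the quantizer up or down so the output drifts toward the target bitrate.

```python
# Toy rate-control loop. The size model is hypothetical: raising the
# quantizer (QP) by 6 roughly halves the frame size, loosely mimicking
# how H.264-style encoders behave. All constants are illustrative.

def frame_bits(qp: int, base_bits: float = 1_000_000) -> float:
    """Estimated size of one frame at quantizer qp (made-up model)."""
    return base_bits / 2 ** (qp / 6)

def encode(target_bps: float, fps: int = 30, frames: int = 300) -> float:
    """Nudge QP each frame so the bitrate approaches target_bps."""
    budget = target_bps / fps          # bit budget per frame
    qp, sizes = 25, []
    for _ in range(frames):
        size = frame_bits(qp)
        sizes.append(size)
        # Feedback: frame too big -> coarser quantization (cut more
        # detail); frame under budget -> finer quantization (keep more).
        if size > budget:
            qp = min(51, qp + 1)
        else:
            qp = max(0, qp - 1)
    # Average bitrate over the last 100 frames, once QP has settled.
    return sum(sizes[-100:]) / 100 * fps

avg = encode(target_bps=3_000_000)   # ends up close to 3 Mbps
```

The point is only the shape of the loop: “quality level” becomes a single knob (QP), and the target bitrate is hit by turning that knob until the estimate matches.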

This doesn't have an ELI5, really; any answer is going to be incomplete or inaccurate. Look up the Discrete Cosine Transform, specifically as used in JPEG. Signal theory doesn't distill well. At the heart of it are a few really technical concepts, e.g. the Fourier Transform. These are fundamental, and you can't really get around them.

Here’s a really, really loose explanation: instead of storing colors, they store data based on how quickly colors change (and in which direction). Each rate of change is assigned a value. If a certain square block is a solid blue, only the constant blue term is non-zero. If it fades from blue to green, left to right, the blue-decreasing term is positive, and the green-decreasing term is negative. Quickly changing colors are typically noise and/or unimportant, so they get thrown out first. (This explanation falls into the incomplete category, but also has an inaccuracy.) That’s a single image, and videos are just sequences of images. However, you can get a little more savings if you also only store the changes from one image to the next.
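A tiny sketch of that idea in pure Python (no real codec, just an 8-point DCT like the one inside JPEG, applied to one row of pixels): a solid-color row produces only the constant term, and a smooth fade concentrates its energy in the slow-changing terms, which is exactly what lets an encoder throw the fast-changing ones away.

```python
import math

def dct(xs):
    """Unnormalized DCT-II: coefficient k measures how much the signal
    looks like a cosine oscillating k half-cycles across the block."""
    N = len(xs)
    return [sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(xs))
            for k in range(N)]

def idct(cs):
    """Inverse (DCT-III with the usual scaling): rebuild the pixels."""
    N = len(cs)
    return [cs[0] / N + (2 / N) * sum(
                cs[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(1, N))
            for n in range(N)]

solid = [100.0] * 8                        # a solid block of one color
fade = [float(n) * 10 for n in range(8)]   # a smooth left-to-right fade

c_solid = dct(solid)  # only c_solid[0] (the constant term) is non-zero
c_fade = dct(fade)    # energy piles up in the low-frequency terms

# "Compression": throw away the small, fast-changing coefficients.
kept = [c if abs(c) > 5 else 0.0 for c in c_fade]
approx = idct(kept)   # still very close to the original fade
```

Dropping a few more of the larger coefficients is how quality degrades as the bitrate budget shrinks: the blocky artifacts you see in over-compressed video are what's left once only the slowest-changing terms survive.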

The bitrate of a video depends on a few factors: the resolution (how many pixels), the type of compression/encoding, the frame rate (fps) and length, audio (if any), and the hardware that's supposed to play/stream it (web/stream/dvd/bluray/etc.).

Your input will be matched by the software to the nearest “optimal bitrate” depending on bitrate “calculators”.

Examples of generic calculators:
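One common back-of-the-envelope calculator (a folk rule of thumb, not taken from any specific software) is the “bits per pixel” formula: multiply resolution by frame rate by a quality factor, where roughly 0.1 bits per pixel is often quoted as a reasonable starting point for H.264-class encoders.

```python
def suggested_bitrate(width: int, height: int, fps: float,
                      bpp: float = 0.1) -> int:
    """Rule-of-thumb target bitrate in bits per second.

    bpp ("bits per pixel") is the quality knob: lower for easily
    compressed content (static slides), higher for fast motion
    (sports, gameplay). These values are conventions, not a standard.
    """
    return round(width * height * fps * bpp)

# 1080p at 30 fps with 0.1 bpp -> about 6.2 Mbps
target = suggested_bitrate(1920, 1080, 30)
```

Real calculators layer more on top of this (codec efficiency, motion complexity, audio track), but they all reduce to some version of pixels × frames × quality factor.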