Low light requires a high ISO, which basically means turning the camera's signal amplifier up to maximum, and that introduces video noise. It shows up as random grain-like speckle in the image, with pixels ending up randomly brighter or darker than their neighbours.
And randomness is the enemy of compression. Much of still-image compression relies on most pixels being similar to their neighbours, and video compression additionally relies on each frame being largely the same as the frames before and after it. The sheer randomness of noise breaks both assumptions, so it takes more bytes to encode in compressed formats.
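You can see both effects with a toy sketch using Python's built-in zlib, a general-purpose compressor standing in for a real video codec (the frame size and noise range here are made-up illustration values):

```python
import random
import zlib

random.seed(42)

W = 100 * 100  # pixels in one toy greyscale "frame"

def noisy_frame():
    # each pixel randomly a bit brighter or darker than mid-grey
    return bytes(128 + random.randint(-20, 20) for _ in range(W))

flat = bytes([128] * W)  # clean frame: every pixel matches its neighbours
frame_a, frame_b = noisy_frame(), noisy_frame()

# Intra-frame: the flat frame compresses to almost nothing,
# the noisy frame barely shrinks at all.
print(len(zlib.compress(flat)), len(zlib.compress(frame_a)))

# Inter-frame: the difference between two identical frames is all zeros,
# but the difference between two noisy frames is itself just more noise.
same_diff = bytes((a - b) % 256 for a, b in zip(flat, flat))
noisy_diff = bytes((a - b) % 256 for a, b in zip(frame_a, frame_b))
print(len(zlib.compress(same_diff)), len(zlib.compress(noisy_diff)))
```

In both comparisons the noisy data comes out many times larger after compression, which is exactly why a noisy low-light clip ends up a bigger file.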
The only reason I can think of is artifacting combined with a variable bitrate. Visual noise in low-light conditions acts like a snow effect, which takes more space to encode. Normally, if the encoder uses a fixed bitrate, the quality would just drop to compensate. But if the bitrate is variable, it would shoot up and create a larger file, since more data is needed to encode the noise.