Why do certain songs sound louder than others even when they’re played at the same volume?


In: Technology

25 Answers

Anonymous 0 Comments

Side note: Spotify measures the loudness of each track and normalizes them toward a common level, so you don’t have to keep adjusting your volume knob between tracks and artists.
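Roughly, the idea is: measure each track’s loudness, then apply one fixed gain per track to bring it near a target. Here’s a minimal sketch in Python; the -14 LUFS target and the example loudness numbers are illustrative assumptions, not Spotify’s actual pipeline.

```python
import numpy as np

TARGET_LUFS = -14.0  # assumed playback target, purely illustrative

def normalization_gain_db(measured_lufs, target_lufs=TARGET_LUFS):
    """Gain (dB) needed to move a track from its measured loudness to the target."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Scale float samples (-1.0..1.0) by a dB gain."""
    return np.asarray(samples) * (10.0 ** (gain_db / 20.0))

# A loudly mastered track at -7 LUFS gets turned down 7 dB;
# a quieter master at -18 LUFS gets turned up 4 dB.
for name, lufs in [("loud master", -7.0), ("quiet master", -18.0)]:
    print(f"{name}: {normalization_gain_db(lufs):+.1f} dB")
```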

Anonymous 0 Comments

It’s a complicated answer and hard to ELI5. A lot of people are correct when they bring up the loudness wars and mastering levels, but there is more at play. If you put your volume knob at 50% on your stereo and play one song and then play another song without changing the volume knob, one may sound louder than the other. That is probably because one song was “mastered” louder than the other (especially if switching between genres or mediums). But even if you used an SPL meter to match the level of the two songs by adjusting the volume knob, one may still sound louder than the other. That is because there is a psychoacoustic element to volume that people call perceived loudness. Volume is a subjective term that tries to describe the strength of sound perception through our sense of hearing. Things like frequency, bandwidth, spectrum composition, duration of exposure to the sound source, and the time behavior of a sound can cause changes in perceived loudness, and those changes vary from person to person. Loudness is a complex thing and is not fully understood even by experts!

Source: 10 years as a professional audio engineer and a degree in audio engineering.

Anonymous 0 Comments

Compression. Imagine two tracks, where one is compressed but the other isn’t.

Compression reduces the gap between the quieter parts of the track (think subtler instruments, etc.) and the louder elements; then the whole thing gets turned back up (makeup gain) so those quieter parts end up much louder.

The compressed track thus has a louder average volume compared to the non-compressed one. It can also lose character and nuance, according to some people.

As an example, an old rock song and a recent hard electronic banger can sound worlds apart in terms of loudness at roughly the same volume setting.

Mostly because recent (especially harder) electronic music uses compression to absurd levels.
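If it helps to see it, here is a minimal sketch of downward compression plus makeup gain, which is how a compressed track ends up with a higher average level. Real compressors add attack/release smoothing and usually work on a signal envelope; this version ignores all of that, and the threshold, ratio, and makeup values are arbitrary.

```python
import numpy as np

def compress(samples, threshold_db=-20.0, ratio=4.0, makeup_db=12.0):
    """Squash level above the threshold by `ratio`, then boost everything (makeup gain)."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(samples) + eps)      # instantaneous level in dB
    over = np.maximum(level_db - threshold_db, 0.0)        # how far above the threshold
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db      # pull peaks down, add makeup
    return samples * (10.0 ** (gain_db / 20.0))

# A decaying "guitar chord": loud attack, quiet tail.
t = np.linspace(0.0, 1.0, 44100)
chord = np.sin(2 * np.pi * 220 * t) * np.exp(-3.0 * t)
compressed = compress(chord)

# Peaks are squashed and the quiet tail is boosted, so the average (RMS)
# level rises even though the peak level is no higher than before.
print(f"RMS before: {np.sqrt(np.mean(chord**2)):.3f}, "
      f"RMS after: {np.sqrt(np.mean(compressed**2)):.3f}")
```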

Anonymous 0 Comments

I don’t know much about the history, but in digital audio there is a fixed maximum level any sample can reach. E.g. in 16-bit audio there are 65,536 possible levels (0 to 65,535). When you adjust the volume, you’re defining how loud that maximum level plays back.

In a song, different instruments and vocals will sit at different levels. So you set the recording level so that the loudest part of the song doesn’t clip and distort. That means most other stuff sits at a lower level than the maximum possible.

e.g. if the drums are the loudest instrument, their peaks would sit at 65,535, while the vocals might peak around 30,000 or so.

Over time, songs evolved to use the available levels and instrumentation so that loud parts make up more of the song. So the average loudness of a song can differ depending on the number and type of instruments, etc.

And some songs might not even use the maximum possible level, which would make them quieter than other songs.

To better understand, open any song in Audacity. You can clearly see the different levels across the track and where it approaches the maximum.
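A small sketch of the same idea, if numbers help: 16-bit audio has 65,536 possible sample values, the loudest peak is set to just fit, and everything else sits lower, which is why peak level and average (RMS) level are different things. The synthetic “drum” and “vocal” signals here are made up purely for illustration.

```python
import numpy as np

FULL_SCALE = 32767  # loudest 16-bit sample value (65,536 levels from -32768 to 32767)

def to_dbfs(value):
    """Express a level relative to full scale, in dB."""
    return 20.0 * np.log10(value / FULL_SCALE)

rng = np.random.default_rng(0)
drums = np.clip(rng.standard_normal(44100) * 7000, -FULL_SCALE, FULL_SCALE)  # spiky transients
vocal = np.sin(2 * np.pi * 200 * np.linspace(0, 1, 44100)) * 9000            # steadier, lower peaks

for name, sig in [("drums", drums), ("vocal", vocal)]:
    peak = np.abs(sig).max()
    rms = np.sqrt(np.mean(sig ** 2))
    print(f"{name}: peak {peak:.0f} ({to_dbfs(peak):.1f} dBFS), "
          f"RMS {rms:.0f} ({to_dbfs(rms):.1f} dBFS)")
```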

Anonymous 0 Comments

During the mastering process of a song you apply a tool called a limiter, which, in short, brings the perceived loudness as high as it can go without distorting the sound or ruining the dynamics of the song. At that point you’re not just watching peak decibel levels; you’re measuring RMS and LUFS.
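For the measurement side, here is a rough sketch of checking RMS and LUFS on some audio. The RMS part is plain numpy; for LUFS it leans on the third-party pyloudnorm library, which is just one common choice (an assumption here, not something the answer above prescribes), and the sine-wave “track” is a placeholder.

```python
import numpy as np
import pyloudnorm as pyln  # pip install pyloudnorm

RATE = 44100
t = np.linspace(0.0, 3.0, 3 * RATE, endpoint=False)
track = 0.3 * np.sin(2 * np.pi * 440 * t)            # placeholder "track", floats in -1..1

peak_dbfs = 20.0 * np.log10(np.max(np.abs(track)))
rms_dbfs = 20.0 * np.log10(np.sqrt(np.mean(track ** 2)))
lufs = pyln.Meter(RATE).integrated_loudness(track)   # ITU-R BS.1770 integrated loudness

print(f"peak: {peak_dbfs:.1f} dBFS, RMS: {rms_dbfs:.1f} dBFS, loudness: {lufs:.1f} LUFS")
```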

Anonymous 0 Comments

Equalizers in the 2000s were used to make the vocals as loud as the music, to give the impression that songs were more powerful.

Anonymous 0 Comments

Dynamic audio compression.

It levels out the audio throughout the track, so that every sound is roughly equally loud, and then amplifies it to the max.

If you search for it on YouTube you’ll see some informational videos.

Anonymous 0 Comments

A culture obsessed with over-compressing music to make it louder and shittier. Listening to mixes/masters from the 90s is a world apart from more contemporary stuff.

Anonymous 0 Comments

Music naturally has quiet bits and loud bits. If you strum a guitar chord, it starts off loud and then gets quieter as the notes die away.

Now imagine if you had your hand on the volume control, and you turned up the volume as soon as the chord started to fade out. It would make the chord sound louder for longer, which would make the whole thing sound a lot louder.

Music producers have access to computer software (and hardware devices too) that performs this process automatically. They can tune it to have either a big impact or a subtle one, and that affects whether the end result sounds louder or not.
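Here is a rough sketch of that “hand on the volume knob” idea done automatically: follow the signal’s envelope and turn quiet moments up toward a target level, capped so silence isn’t boosted into noise. The smoothing time, target, and gain cap are arbitrary illustrative choices, not settings from any particular product.

```python
import numpy as np

def ride_gain(samples, rate, target=0.5, max_gain=8.0, smooth_ms=50.0):
    """Boost quiet stretches toward `target` peak level using a smoothed envelope."""
    alpha = 1.0 / (rate * smooth_ms / 1000.0)          # one-pole smoothing coefficient
    env = np.empty_like(samples)
    level = 0.0
    for i, x in enumerate(np.abs(samples)):
        level = max(x, level + alpha * (x - level))    # fast attack, slow release
        env[i] = level
    gain = np.minimum(target / np.maximum(env, 1e-6), max_gain)
    return samples * gain

rate = 44100
t = np.linspace(0.0, 2.0, 2 * rate, endpoint=False)
chord = np.sin(2 * np.pi * 196 * t) * np.exp(-2.0 * t)  # a strummed chord dying away
ridden = ride_gain(chord, rate)

# The fading tail is held up near the target, so the chord sounds louder for longer.
print(f"tail peak before: {np.abs(chord[-rate:]).max():.3f}, "
      f"after: {np.abs(ridden[-rate:]).max():.3f}")
```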

Anonymous 0 Comments

There are a lot of answers on here about modern mixing/mastering and the overuse of compression for release, and all that stuff is accurate. But at the heart of the question: even if you really do play two distinct songs (or sounds) at the exact same volume, you will hear certain frequencies more clearly than others.

This is described by what’s called the Fletcher-Munson curve, and it’s important for any audio engineer to understand: https://ehomerecordingstudio.com/fletcher-munson-curve/

Basically, our ears are tuned to hear the frequencies around the human voice most clearly, so even if there are lower/higher frequency sounds at the same volume, you’ll hear the midrange vocal frequencies the clearest. This gets used deliberately in audio mixing; for example, electric guitars often get a big midrange spike so that they “cut through the mix.”
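As a rough numeric illustration of that frequency effect, here’s a sketch using A-weighting, a standard, simplified cousin of the equal-loudness contours: it estimates how much quieter (in dB) a tone at a given frequency sounds compared with the same physical level in the midrange. The frequencies chosen are arbitrary examples.

```python
import numpy as np

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB for a frequency f in Hz (roughly 0 dB near 1 kHz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.00

# Low bass at the same physical level registers tens of dB "quieter" than midrange.
for freq in [50, 100, 500, 1000, 3000, 10000]:
    print(f"{freq:>6} Hz: {a_weighting_db(freq):+6.1f} dB")
```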