Why do certain songs sound louder than others even when they’re played at the same volume?

2.34K views

In: Technology

25 Answers

Anonymous 0 Comments

Most of these other comments are focusing on actual loudness differences. But perceived loudness can be a psychological thing too. For example, I make a lot of rap beats. If I want the instruments to seem really loud, I might add a crash cymbal at relatively low volume. Your brain notices the crash cymbal in the mix and knows that crash cymbals in real life are loud. Since the other instruments are loud relative to the crash cymbal, your brain decides that these other instruments must be super loud too.

Anonymous 0 Comments

Okay, so one, keep in mind the difference between loudness and volume. Loudness describes the objective amplitude of a sound wave. Volume describes the relative loudness of a sound coming from a speaker. Loudness is measured in decibels, units on a logarithmic scale that describe the level of a sound based on its energy. Volume is typically measured as a percentage or a linear value. The loudness of a recorded sound might be, say, 12dB (dB is short for decibels). If it's recorded on a microphone capable of perfectly capturing the sound, and then played back on a speaker with the same power output at 100% volume, the playback will also be 12dB.

Second, keep in mind that not all speakers are created equal. The power of a speaker’s driver, the magnet and coil transducer that converts the electrical signal sent to the speaker into sound, varies based on size and specifications. The size and specifications affect which frequencies it can reproduce, as well as how loudly those frequencies can be reproduced. Some speakers will recreate the exact loudness of a sound at 100% volume, others can only output less and will never reach the same loudness, while yet others max out at much higher values and need a low volume setting to get the same output.

Thirdly, keep in mind that there is no standard for how loud sounds in music can be when mastered, though there is a physical and digital limit, based on the speakers reproducing the sounds and the hardware the music is mastered on. As well, an amplifier (any hardware or software capable of boosting specific or general frequencies of sound) can take a 12dB sound and multiply it by any factor, making the output much, much louder or softer.

So, to boil it down: volume is like a multiplier from 0 to 1 that scales how loud a given sound plays. Quiet sounds are not suddenly boosted to the same loudness as loud ones just because the volume is 50%; rather, both are half as loud as they would be at 100%. Speakers of different types can produce different loudnesses at the same input volume, since it’s a factor of their maximum power output. And lastly, in general there’s no standard or rule that says sounds have to be within a certain range when recording, mixing, and mastering music.
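The "volume as a multiplier" idea can be sketched in a few lines of code. This is a minimal illustration, not how any particular player implements it, and the sample values are made up:

```python
import math

# samples as floats in [-1.0, 1.0]; volume is a multiplier in [0, 1]
def apply_volume(samples, volume):
    return [s * volume for s in samples]

song = [0.8, -0.5, 0.3]          # hypothetical samples
half = apply_volume(song, 0.5)   # every sample is half as large

# a gain of 0.5 corresponds to roughly -6 dB on the logarithmic scale
change_db = 20 * math.log10(0.5)
```

Note that halving the multiplier knocks about 6dB off every sound equally, loud and quiet alike, which is why turning the volume down never changes the balance between them.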

As a result, there’s a thing people refer to as the “loudness war”, where commercials have progressively gotten louder and louder, especially compared to the programs they air with, since the businesses making them want to make sure they get the viewer’s attention. Some places have debated and passed bills stopping this practice.

Anonymous 0 Comments

An IT guy at work explained this to me last week! Basically the volume at which they are recorded and how they are mixed are different, so if you have your volume at 10 for song A, song B will sound louder/quieter because they were recorded differently.

Anonymous 0 Comments

There are a few factors:

Frequency/spectral content plays a role. We don’t perceive all frequencies the same in terms of loudness (see equal-loudness curves). So one song might be perceptibly louder because it’s heavier in frequencies you are more sensitive to.

Dynamics. No song is exactly the same volume for its entire length. The relationship between the quiet and loud parts, or the instantaneous peaks versus the sustained portions, can have an impact on how you perceive loudness. Also, longer sounds seem louder than short ones at the same level.

The mix can impact our perception. For example, a song whose vocal level is much louder than the instruments versus one where the vocal is quieter might be perceived differently. This is also related to dynamics. Similarly, if one song has a lot of reverb and another is dry, your perception of loudness might change due to the difference in spatial perception.

Also because of this, it can be hard to actually set two songs to “the same volume” without an accurate LUFS meter.
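The equal-loudness point above can be made concrete with the standard A-weighting curve (from IEC 61672), which is a rough stand-in for how the ear's sensitivity varies with frequency. This sketch just evaluates the published formula; real loudness meters (LUFS) use a different weighting and gating, so treat it as an illustration only:

```python
import math

def a_weight_db(f):
    # IEC 61672 A-weighting curve: approximates the ear's
    # frequency-dependent sensitivity, in dB relative to 1 kHz
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00

# the ear is far less sensitive at 100 Hz than at 1 kHz, so two songs
# at the same measured level can still differ in perceived loudness
```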

(I’m an audio engineer).

Anonymous 0 Comments

2 main things. 1, the amount of limiting/compression. Compression (and limiting, which is just more extreme compression) essentially works by making the loud parts of music quieter so you can make the quiet parts louder. That is, there is a defined peak level that a sound can have. If we take the loudest parts of the sound and smash them down, we can raise the overall loudness of the sound. Imagine recording a gunshot. The very first few milliseconds of the recorded sound are massive, but the rest of the sound (99% of it) is much smaller. If we take those first few milliseconds and squish them down, we can raise the level of the entire sound wave.

Next: perceived loudness vs actual loudness. By limiting the peaks, we’re bringing up the rest of the signal. Even though we have a defined peak (in digital audio there are only so many bits in a sample), we can make it sound louder. If you look at the waveform of a song that hasn’t been limited, it has lots of peaks and troughs of varying magnitudes. If you look at the waveform of a mastered (heavily limited) song, it looks like a sausage. Both songs peak at the same level, but the squished one sounds louder.
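The gunshot example can be sketched with a crude brick-wall limiter. Real limiters use attack/release envelopes rather than hard clamping, so this is only the squish-then-makeup-gain idea in its simplest form, with made-up numbers:

```python
import math

def rms(samples):
    # average level, a rough proxy for how loud it sounds
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def limit_and_makeup(samples, ceiling):
    # crude brick-wall limiter: clamp the peaks, then apply makeup
    # gain so the new peak sits back at full scale (1.0)
    limited = [max(-ceiling, min(ceiling, s)) for s in samples]
    gain = 1.0 / ceiling
    return [s * gain for s in limited]

# a "gunshot": one huge transient, then a much quieter tail
shot = [1.0] + [0.2] * 99
squished = limit_and_makeup(shot, ceiling=0.3)
# both versions peak at 1.0, but the squished one has a much
# higher average level, so it sounds louder
```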

Hope this helps. Source: multiple Grammy nominated recording engineer.

Anonymous 0 Comments

Volume is not the same as loudness. There are several ways to measure sound levels: peak measurements deal with the loudest points, and then there are special algorithms that measure average levels, which are much better at capturing loudness. You can play two different source materials that have been processed differently at the same volume, and you’ll hear them as having different loudness, despite the volume on your amplifier being the same.
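The peak-versus-average distinction can be shown with two made-up signals. RMS here is only a crude stand-in for the gated, frequency-weighted measurements (like LUFS) that real loudness algorithms use:

```python
import math

def peak(samples):
    return max(abs(s) for s in samples)

def rms(samples):
    # simple average level; real loudness meters weight and gate this
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# two hypothetical sources with identical peaks but different processing
dynamic = [1.0, 0.1, 0.05, 0.1, 0.08, 0.1]
dense   = [1.0, 0.8, 0.85, 0.8, 0.9, 0.8]
# same peak reading, very different average level: the dense one
# will be heard as louder at the same amplifier volume
```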

Anonymous 0 Comments

Side note: Spotify measures the loudness of all its songs and evens them out, so you don’t have to keep adjusting your volume knob between tracks and artists.

Anonymous 0 Comments

It’s a complicated answer and hard to ELI5. A lot of people are correct when they bring up the loudness wars and mastering levels, but there is more at play. If you put your volume knob at 50% on your stereo and play one song and then play another song without changing the volume knob, one may sound louder than the other. That is probably because one song was “mastered” louder than the other (especially if switching between genres or mediums). But even if you used an SPL meter to match the level of the two songs by adjusting the volume knob, one may sound louder than the other. That is because there is a psychoacoustic element that people call perceived loudness. Volume is a subjective term that tries to describe the strength of sound perception through our sense of hearing. Things like frequency, bandwidth, spectrum composition, duration of exposure to the sound source, and the time behavior of a sound can cause changes in perceived loudness, and those changes vary from person to person. Loudness is a complex thing and is not fully understood even by experts!

Source: 10 years as a professional audio engineer and a degree in audio engineering.

Anonymous 0 Comments

Compression. Imagine two tracks, where one is compressed but the other isn’t.

Compression takes the parts of the track which are lower in volume (think subtler instruments, etc.) and boosts them up to match the louder elements.

The compressed track thus has a louder average volume compared to the non-compressed one. It can also lose character and nuance, according to some people.
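That "boost the quiet parts up" idea can be sketched as a toy upward compressor. The threshold, gain, and sample values are all made-up numbers, and real compressors work on a smooth gain curve rather than a hard cutoff:

```python
def boost_quiet(samples, threshold=0.5, gain=2.0):
    # toy upward compression: boost only the quieter samples
    # toward the loud ones, leaving the peaks untouched
    return [s * gain if abs(s) < threshold else s for s in samples]

track = [1.0, 0.2, 0.3, 0.1, 0.25]   # hypothetical samples
compressed = boost_quiet(track)
# the peak is unchanged, but the average level goes up,
# so the compressed version sounds louder overall
```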

As an example, an old rock song and a recent hard electronic banger can sound worlds apart in terms of loudness at roughly the same volume.

Mostly because recent (especially harder) electronic music uses compression to absurd levels.

Anonymous 0 Comments

I don’t know much about the history, but in digital sound there is a fixed maximum level in any song. E.g. in 16-bit audio there are 65,536 levels. When you adjust volume, you define how loud that maximum level plays back.

In a song, different instruments and vocals will be in different loudness levels. So you set the recording level so that the loudest part of the song doesn’t crackle. That would mean most other stuff would be at a lower volume than max possible.

e.g. if drums are the loudest instrument, they would sit near the maximum level of 65,535. Vocals might then be at maybe 30,000 or something.

Over time, songs evolved to make use of the levels and instruments so that loud parts make up more of the song. So the average loudness of a song can differ according to the mix of instruments etc.

And some songs might not even use the maximum possible level, which would make them quieter than other songs.

To better understand, look at any song in Audacity. You can clearly see the different loudness levels and the maximum.
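The fixed-levels idea can be sketched with 16-bit quantization. The vocal level of 0.45 is a made-up number, just to show how a quieter part sits below full scale:

```python
import math

FULL_SCALE = 32767  # max positive value in signed 16-bit PCM
                    # (65,536 levels in total, from -32768 to 32767)

def to_int16(x):
    # quantize a float sample in [-1.0, 1.0] to a 16-bit value
    return int(round(max(-1.0, min(1.0, x)) * FULL_SCALE))

drums = to_int16(1.0)    # the loudest part uses the full range
vocal = to_int16(0.45)   # a quieter part (hypothetical level)

# how far the vocal sits below the maximum, in dB
headroom_db = 20 * math.log10(FULL_SCALE / vocal)
```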