How were the musical notes chosen in terms of frequency?


Every unit in physics is defined in such a way that it is exactly precise and unchangeable. I highly doubt that music chooses note frequencies at random, in a way that just comes from the heart.

Can someone explain it to me?


3 Answers

Anonymous 0 Comments

Music is based on relative frequencies.

A note that’s one octave higher is exactly twice the frequency of the other note. That’s purely based on physics, and it’s not arbitrary – mathematically, a waveform that’s periodic with frequency 2*f* is also going to be periodic with frequency *f*, so it makes sense that they “sound” like the same note, not a different note.
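That periodicity claim is easy to check numerically. Here is a small sketch (the base frequency of 220 Hz is an arbitrary choice, not from the answer): a pure tone at frequency 2*f* repeats every 1/(2*f*) seconds, so it also repeats every 1/*f* seconds, the lower note’s period.

```python
import math

f = 220.0          # base frequency in Hz (arbitrary choice for illustration)
period = 1.0 / f   # period of the lower note

def x(t, freq=2 * f):
    """A pure sine tone at twice the base frequency."""
    return math.sin(2 * math.pi * freq * t)

# The tone at 2f completes exactly two cycles in one period of f,
# so shifting time by 1/f leaves the waveform unchanged.
for t in [0.0, 0.00123, 0.01]:
    assert math.isclose(x(t), x(t + period), abs_tol=1e-9)
```

The same argument applies to any periodic waveform at 2*f*, not just a sine, which is why the octave relationship is so robust across instruments.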

Other notes in the scale are based on mathematical ratios. What we call a “perfect fifth” is a 3:2 ratio of frequencies. Most harmonies that have been independently discovered by many cultures are simple ratios of frequencies like that.

However, most advanced music has a lot of arbitrary elements too. Western music divides the octave into 12 equal parts. Dividing it this way gives you notes that correspond reasonably well to many simple frequency ratios (though not perfectly), but it also gives you the ability to “modulate” to different musical keys and add a lot of complexity that way. At various times in history this system was not popular, and tunings based on “pure” frequency ratios were actually preferred. Other cultures divide the octave into a different number of notes – more or fewer.
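A quick sketch of how close 12 equal divisions get to the simple ratios. In equal temperament every semitone multiplies the frequency by 2^(1/12); a perfect fifth spans 7 semitones, and the answer above says a “pure” fifth is 3:2.

```python
# In 12-tone equal temperament, each semitone multiplies frequency by 2**(1/12).
SEMITONE = 2 ** (1 / 12)

# A perfect fifth is 7 semitones; just intonation puts it at exactly 3/2.
equal_fifth = SEMITONE ** 7      # roughly 1.4983
pure_fifth = 3 / 2

# The mismatch is about 0.11% -- close enough to sound acceptable,
# while still letting you modulate freely between all 12 keys.
error = abs(equal_fifth - pure_fifth) / pure_fifth
print(f"equal-tempered fifth: {equal_fifth:.5f}, error vs 3:2: {error:.4%}")
```

This small compromise on each interval is the trade-off mentioned above: no key is perfectly in tune, but every key is equally usable.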

Back to your original question, though: while the ratios have an important basis in physics, it’s completely arbitrary which frequency we assign to a particular note. Most people can’t even tell the difference if you play a song at a slightly higher or lower pitch. (Those who can tell have “perfect pitch”.)

At some point in Western music history, it was decided that A = 440 Hz. Before that, pitch wasn’t standardized. But the choice is completely arbitrary.
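Once A = 440 Hz is fixed and the octave is split into 12 equal semitones, every other note’s frequency follows. A sketch using MIDI note numbers (a real convention in which note 69 is the A above middle C):

```python
A4_HZ = 440.0  # the standardized reference pitch

def note_freq(midi_note: int) -> float:
    """Frequency in 12-tone equal temperament, anchored at A4 = 440 Hz.
    MIDI note 69 is A4; each step up or down is one semitone."""
    return A4_HZ * 2 ** ((midi_note - 69) / 12)

print(note_freq(69))            # A4  -> 440.0
print(note_freq(57))            # A3, one octave down -> 220.0
print(round(note_freq(60), 2))  # C4 (middle C) -> 261.63
```

Change the single constant `A4_HZ` and the entire scale shifts with it, which is exactly why the reference pitch is a convention rather than physics.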
