Eli5, how do sound equalizers work? If there is only one audio file, how do equalizers differentiate between bass, mids, and things like that, and then change them?
There is math called a “Fast Fourier Transform” that can take any sound (wave) and break it into separate frequencies. As a simple example, imagine an audio file where two flutes are playing in harmony. The FFT of that file would clearly show the frequencies of the two different sounds, even though they were combined into one wave. Now, say the lower-sounding flute is a little too quiet – the same math (mostly) can be applied to make the lower frequencies louder, without changing the higher-frequency sound. Congratulations, you just raised the bass. Now, instead of two flutes, think of doing this math across three flutes with bass, mid, and treble. Finally, expand this into a lot of different frequency bands and you have a full EQ. I have no idea how to ELI5 the FFT itself, so just wave your hands and say math magic.
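Not ELI5, but if you're curious, that FFT round trip is only a few lines of Python with NumPy. This is just a sketch – the tone frequencies, cutoff, and gain below are made-up numbers standing in for the “two flutes”:

```python
import numpy as np

def fft_bass_boost(signal, sample_rate, cutoff_hz, gain):
    """Boost everything below cutoff_hz by `gain` using an FFT round trip."""
    spectrum = np.fft.rfft(signal)                         # wave -> frequencies
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs < cutoff_hz] *= gain                    # make the lows louder
    return np.fft.irfft(spectrum, n=len(signal))           # frequencies -> wave

# Two "flutes": a 220 Hz tone and an 880 Hz tone mixed into one wave.
sr = 8000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 880 * t)

# Everything under 500 Hz gets twice as loud; the 880 Hz tone is untouched.
boosted = fft_bass_boost(mix, sr, cutoff_hz=500, gain=2.0)
```

Because the FFT is linear, the 880 Hz “flute” comes back out exactly as it went in, even though the two tones were mixed into one waveform.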
All sound is made up of a waveform that represents the frequencies generated by the sound source at that given moment in time. This can be a simple wave (like a constant sine-wave hum) or complex. More complex sounds have harmonics, which further accentuate particular frequencies in the soundwave. Some examples of their visual representation here: https://www.pinterest.ca/pin/95349717099292737/
The visual representations put low frequencies on the left (low Hz, i.e. few wave cycles per second) and high frequencies on the right (high Hz), so we can pinpoint certain areas if we want to adjust a particular frequency. A graphic equalizer works like this: boost/cut low frequencies with the left sliders, mids with the middle sliders, highs with the right sliders. Same concept for a 3-band knob EQ, just less control. A parametric EQ is similar, but focuses in on a select few frequencies. Another EQ setting is Q/peak, which lets you control how ‘steep’ the cut/boost is relative to the neighboring frequencies.
A concept close to EQ’ing is filtering, where you can, for example, reduce the lows with a high-pass filter (and vice versa with a low-pass), and use peak/Q/notch settings to control the type of boost/cut happening in the filtering.
It turns out you can build electronic circuits that can filter audio based on a range of frequencies. For a simple ‘Low/Mid/High’ EQ there would be three filters. One would filter out everything higher than a certain point. Another would filter out anything lower than a certain point. The last one would do the opposite of the other two. So now you have the entire range of audio frequencies but it’s split into three separate audio signals.
At this point you can adjust each signal up and down. Then the EQ mixes the signals back together.
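Here's a toy digital version of that split-adjust-mix idea in Python. It's only a sketch: real EQs use much sharper filters than this one-pole smoother, and the two crossover settings are arbitrary:

```python
def one_pole_lowpass(x, alpha):
    """Crude smoothing filter: keeps slow wiggles (lows), dulls fast ones."""
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)
        out.append(y)
    return out

def three_band_eq(x, low_gain=1.0, mid_gain=1.0, high_gain=1.0,
                  low_alpha=0.05, high_alpha=0.5):
    """Split into low/mid/high, scale each band, mix them back together."""
    low = one_pole_lowpass(x, low_alpha)                 # just the slow stuff
    low_plus_mid = one_pole_lowpass(x, high_alpha)       # everything but highs
    high = [s - lm for s, lm in zip(x, low_plus_mid)]    # what's left on top
    mid = [lm - l for lm, l in zip(low_plus_mid, low)]   # the middle band
    return [low_gain * l + mid_gain * m + high_gain * h
            for l, m, h in zip(low, mid, high)]
```

Note that with all three gains at 1.0 the bands sum right back to the original signal, which is exactly the “split, adjust, mix back together” picture above.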
When it comes to digital audio the same thing can be done except it’s done with clever math instead of physical electronic circuits.
The nice thing about filters is that the frequency content of the input doesn’t really matter. Say you have a simple low pass filter (which does what it’s named for – it lets low frequencies pass) with a corner frequency (basically where the filter starts to activate) of 100Hz. When audio is run through this filter, frequencies well below 100Hz will go through more or less unchanged, but frequencies above (especially way above) will be attenuated. On an EQ, this could look like keeping the bass at 0, but mid and treble all the way down.
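To make that concrete, here's a bare-bones one-pole low-pass in Python with a 100 Hz corner. A real EQ filter rolls off more steeply, but the behavior is the same in spirit: a tone well below the corner passes almost untouched, a tone well above it gets squashed.

```python
import math

def lowpass(signal, cutoff_hz, sample_rate):
    """One-pole low-pass: lets frequencies below the corner through,
    attenuates frequencies above it (more and more the higher they are)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

sr = 8000

def tone(freq):
    """One second of a unit-amplitude sine at the given frequency."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(sr)]

low_out = lowpass(tone(20), 100, sr)     # well below the 100 Hz corner
high_out = lowpass(tone(2000), 100, sr)  # well above it

def amp(signal):
    """Peak level after the filter has settled."""
    return max(abs(v) for v in signal[sr // 2:])
```

Run it and `amp(low_out)` stays close to the original amplitude of 1, while `amp(high_out)` is a small fraction of it – the 2 kHz tone has been attenuated, just like pulling the treble slider down.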
More useful EQs use better filters, with more options for which frequencies you start adjusting around, so you can tune the sound up the way you want.