How are headphones with single drivers able to make sounds that are “multiple sounds at the same time”?


So from what I know, the driver vibrates at a given frequency and that makes a sound. That makes sense to me for how you can reproduce a voice, for example. But how do multiple instruments plus a voice work in one driver? You hear them all separately, but at the same time?

To me this sounds like a single monitor pixel showing more than one color at once.



Drivers don’t vibrate at only one frequency. Every driver simply moves to create pressure waves in the air that follow the electrical signal it receives.
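To make that concrete, here’s a minimal sketch in Python (assuming numpy is available; the 220 Hz and 880 Hz tones are arbitrary stand-ins for an instrument and a voice). It shows that “two sounds” are really just one list of numbers, the sample-by-sample sum, so the driver only ever has to be in one position at each instant:

```python
import numpy as np

sample_rate = 44100                        # CD-quality samples per second
t = np.arange(0, 0.01, 1 / sample_rate)    # 10 ms of time stamps

instrument = 0.5 * np.sin(2 * np.pi * 220 * t)  # a 220 Hz tone
voice      = 0.3 * np.sin(2 * np.pi * 880 * t)  # an 880 Hz tone

# The driver receives ONE signal: the sample-by-sample sum. At any
# instant the cone sits at a single position, set by a single number.
driver_signal = instrument + voice
print(driver_signal[:5])
```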

Low-frequency sounds need the diaphragm to move a much greater distance to be heard, which in practice means a larger, heavier driver. The human ear also isn’t very sensitive to low-frequency sounds, so the amount of energy that has to go into reproducing those waves goes up. For these reasons, we normally use dedicated subwoofers for low-frequency sounds.

Tweeters have the opposite problem: they need to move very quickly, but only a very short distance, for the sound to be heard. This also means considerably less energy is needed to do it.

While you can drive a subwoofer with high-frequency sounds, the diaphragm simply *can’t physically move* that fast with the available energy, and so the high-frequency sounds don’t really make it out. Similarly, you can drive a tweeter with low-frequency sounds, but the diaphragm isn’t going to move very far with each wave, and the resulting pressure wave won’t be heard.
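As a rough back-of-the-envelope sketch of why (a simplified piston model, not a full acoustics treatment): for a cone of fixed area, radiated sound pressure scales with excursion times frequency squared, so matching the same loudness at a lower frequency needs cone travel that grows like 1/f²:

```python
# Relative cone excursion needed for equal loudness, compared to 1 kHz,
# under the simplifying assumption that pressure ~ excursion * f^2.
for freq in (50, 500, 5000):  # Hz
    relative_excursion = (1000 / freq) ** 2
    print(f"{freq:>5} Hz needs ~{relative_excursion:g}x the cone travel of 1 kHz")
```

That works out to roughly 400x the travel at 50 Hz, which is why a tweeter physically can’t do a subwoofer’s job.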

The sound “wave” that is produced by a driver, either way, is going to be a mixture of all of the sound waves being reproduced. Imagine throwing a stone into a calm lake, and watch the one pure wave move outward from where it was created. But throw many stones, and pick a random point on the lake’s surface, and that point will be moving up and down in a complex way that isn’t always obviously tied to the different waves that make up its motion.

In the digital signal processing world, we think of a sound wave as a composition of many different pure sine waves, with each sine-wave frequency contributing its own “volume” to the total. There is a mathematical operation called the Fourier transform (computed in practice with the FFT algorithm) that takes the total waveform and separates it into those basic components, just like a prism takes a beam of white light and separates out the different colors.
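Here is that “prism” in action, again as a sketch with numpy: feed the mixed two-tone signal from above into an FFT, and the original 220 Hz and 880 Hz components come back out as the strongest bins:

```python
import numpy as np

sample_rate = 44100
t = np.arange(0, 1.0, 1 / sample_rate)    # 1 second gives 1 Hz resolution
mixed = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(mixed)                         # the Fourier transform
freqs = np.fft.rfftfreq(len(mixed), 1 / sample_rate)  # frequency of each bin

# The two strongest components are exactly the two tones we mixed in.
magnitudes = np.abs(spectrum)
for i in sorted(np.argsort(magnitudes)[-2:]):
    print(f"{freqs[i]:.0f} Hz, relative strength {magnitudes[i]:.0f}")
```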

The human ear does something very similar. The cochlea has a spiral shape, and vibrations of different frequencies travel different distances along it before their energy ends up tickling the tiny hairs at each point: high frequencies excite the hairs near the entrance, while low frequencies travel deeper into the spiral, and we perceive that position as the pitch of the sound. This lets us “un-mix” sounds into their component sine waves, much as the Fourier transform does. We can then match the separated signal against patterns of human speech to understand what someone is saying, even though the waves carrying that speech might have been produced by several different speaker drivers, reproducing a signal that was captured by a microphone, which is itself just a diaphragm working as an “inverse” driver, turning the original spoken waves back into a signal.
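A loose software analogy for that un-mixing (an illustration, not a model of real anatomy) is a bank of band-pass filters, each standing in for one region of the cochlea; this sketch uses scipy, with made-up example bands:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sample_rate = 44100
t = np.arange(0, 0.5, 1 / sample_rate)
mixed = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

# Three arbitrary frequency bands, like patches of hair cells along the spiral.
for low, high in [(100, 400), (400, 1600), (1600, 6400)]:
    sos = butter(4, [low, high], btype="bandpass", fs=sample_rate, output="sos")
    band = sosfilt(sos, mixed)
    print(f"{low}-{high} Hz band energy: {np.sqrt(np.mean(band ** 2)):.3f}")
```

Only the bands containing 220 Hz and 880 Hz light up, which is the ear’s version of hearing the voice and the instrument separately.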
