eli5: How do audio engineers & producers isolate sounds from recordings?


Say you found a weird bird on the street & wanted to record its call. How would you remove wind & traffic noise?


4 Answers

Anonymous 0 Comments

Based on my very limited understanding, every sound is made up of certain frequencies, or pitches, which can be isolated using digital software. With some careful analysis and trimming, you can remove the unwanted frequencies and keep the ones you want, though it's rarely a perfect process unless done very carefully.
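Here's a rough sketch of what "trimming frequencies with digital software" can mean in practice, assuming a mono WAV file and a guessed frequency band for the call (the filename and the 2–8 kHz band are made up for illustration): look at the recording's spectrum, zero out the bands you don't want, and convert back. It's crude (a brick-wall cut like this causes ringing), but it shows the idea.

```python
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("bird_recording.wav")      # hypothetical mono file
spectrum = np.fft.rfft(audio.astype(float))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

# Keep only the band where we expect the bird call (2-8 kHz is a guess),
# discard everything else (wind rumble, traffic drone, hiss).
keep = (freqs > 2000) & (freqs < 8000)
spectrum[~keep] = 0

cleaned = np.fft.irfft(spectrum, n=len(audio))
wavfile.write("bird_trimmed.wav", rate, cleaned.astype(np.int16))
```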

Anonymous 0 Comments

One wouldn’t typically record bird sounds in a city environment, but rather go to a wooded area or a meadow. You could assume that a bird call occupies primarily the mid-to-high frequencies, while wind and a distant highway or idling train engines emit a low rumble. Then use an equalizer with a “highpass filter,” which removes the slow, long-term average of the signal and leaves the faster fluctuations.
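For the curious, a minimal sketch of that highpass idea with SciPy, assuming a mono recording and a 300 Hz cutoff picked as a guess:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, audio = wavfile.read("street_bird.wav")   # hypothetical mono file

# 4th-order Butterworth high-pass: rolls off everything below ~300 Hz,
# attenuating wind/traffic rumble while the higher-pitched call passes.
sos = butter(4, 300, btype="highpass", fs=rate, output="sos")
filtered = sosfilt(sos, audio.astype(float))

wavfile.write("street_bird_hp.wav", rate, filtered.astype(np.int16))
```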

A more elaborate approach would be a “noise gate” that is triggered by sound above a certain threshold and smoothly fades the remaining background noise in and out at a rate that matches the rise and decay of the wanted sound. The side effect is that the noise audibly “breathes.”
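A bare-bones sketch of such a gate (the threshold and attack/release times below are arbitrary placeholders, not tuned values):

```python
import numpy as np

def noise_gate(x, rate, threshold=0.05, attack_ms=5, release_ms=100):
    """Open the gate when the signal exceeds `threshold`, and smooth the
    gain change so it rises quickly (attack) and falls slowly (release)."""
    x = np.asarray(x, dtype=float)
    attack = np.exp(-1.0 / (rate * attack_ms / 1000.0))    # fast time constant
    release = np.exp(-1.0 / (rate * release_ms / 1000.0))  # slow time constant
    gain = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        target = 1.0 if abs(sample) > threshold else 0.0
        coeff = attack if target > gain else release
        gain = coeff * gain + (1.0 - coeff) * target       # smoothed gain
        out[i] = sample * gain
    return out
```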

The bandwidth can be split into several ranges using stacked filters, and each of those bands adjusted by a separate noise gate. The bands furthest from the tones of the bird call can then be reduced more aggressively. When overdone, this approach yields a result reminiscent of heavy mp3 compression, with tonal components breathing or suddenly popping out of unnatural silence.
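And a sketch of the multiband version, reusing the noise_gate above and splitting the signal with stacked band-pass filters (the band edges are purely illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def multiband_gate(x, rate, edges=(200, 1000, 4000, 10000)):
    """Split into bands, gate each band separately, then sum them back."""
    x = np.asarray(x, dtype=float)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, (lo, hi), btype="bandpass", fs=rate, output="sos")
        band = sosfilt(sos, x)
        # Bands far from the bird call could be given a higher threshold,
        # i.e. reduced more aggressively; here all bands share one setting.
        bands.append(noise_gate(band, rate, threshold=0.05))
    return np.sum(bands, axis=0)
```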

Anonymous 0 Comments

The simplest way is to filter by frequency. It’s fairly easy for even a basic hobbyist to build a filter with purely passive components that will exclude certain frequencies from a signal. Actual audio equipment can manage it much more precisely.
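As a concrete example of how simple the passive version is: one resistor and one capacitor give a high-pass filter whose cutoff frequency is f_c = 1 / (2πRC). A quick back-of-the-envelope with arbitrarily chosen component values:

```python
import math

R = 10_000   # ohms
C = 47e-9    # farads (47 nF)

f_c = 1 / (2 * math.pi * R * C)
print(f"Cutoff ≈ {f_c:.0f} Hz")   # ≈ 339 Hz: rumble below this is attenuated
```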

You can also use samples to exclude specific sounds. We can develop a mathematical model of what wind “sounds like,” in terms not just of its frequency spectrum but of how that spectrum changes over time. Then we match the model against our recording and work out the filtering most likely to remove just that noise.
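A toy version of that idea is spectral subtraction: measure the average spectrum of a noise-only stretch of the recording and subtract that profile from every short frame. The frame size, the non-overlapping frames, and the assumption that you have a clean noise-only sample are simplifications for illustration:

```python
import numpy as np

def spectral_subtract(x, noise, frame=1024):
    """Subtract the average noise magnitude spectrum from each frame of x."""
    x = np.asarray(x, dtype=float)
    noise = np.asarray(noise, dtype=float)

    # Average magnitude spectrum of the noise-only sample.
    noise_frames = noise[: len(noise) // frame * frame].reshape(-1, frame)
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    out = np.zeros_like(x)
    for start in range(0, len(x) - frame, frame):
        spec = np.fft.rfft(x[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract the profile
        spec = mag * np.exp(1j * np.angle(spec))          # keep the original phase
        out[start:start + frame] = np.fft.irfft(spec, n=frame)
    return out
```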

We can use similar methods to identify Doppler patterns, which lets us discriminate between moving and stationary audio sources.

If we can control the microphone, we can also use different hardware to control direction/range so that we’re primarily recording a specific point in space.

If we have multiple microphones, we can use the known properties of attenuation to isolate specific points in space and track movement in time.
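One classic multi-microphone technique along these lines (it leans on arrival-time differences rather than attenuation, but the spirit is the same) is delay-and-sum beamforming: shift each microphone's signal by the travel time from a chosen point in space, then average, so sound from that point adds up coherently while sound from elsewhere partially cancels. A sketch with made-up geometry:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def delay_and_sum(signals, mic_positions, target, rate):
    """signals: equal-length 1-D arrays, one per mic.
    mic_positions, target: (x, y, z) coordinates in metres."""
    target = np.asarray(target, dtype=float)
    delays = [np.linalg.norm(target - np.asarray(p)) / SPEED_OF_SOUND
              for p in mic_positions]
    delays = np.array(delays) - min(delays)        # relative delays in seconds
    shifts = np.round(delays * rate).astype(int)   # relative delays in samples

    n = len(signals[0])
    aligned = np.zeros(n)
    for sig, shift in zip(signals, shifts):
        aligned[: n - shift] += np.asarray(sig, dtype=float)[shift:]  # advance late arrivals
    return aligned / len(signals)
```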

From the practical standpoint of “hey, I’ve got my phone and want to record this bird…”, you’re either going to need to learn a lot of mathematical techniques (many of which are classified, since the task you’ve set yourself is basically the same as trying to detect submarines) or you’re going to want to download an app that incorporates many of those techniques.

Note: Audio engineers/producers don’t isolate sounds from recordings. They use devices invented by other people – people with enormously more education about how sound works – to prevent noise from occurring in the first place. There’s a reason Auto-Tune was invented by a guy whose day job was studying seismic anomalies and not the guy who ensures Taylor Swift’s latest album bops.

Anonymous 0 Comments

Sound engineer here. It’s less about “taking away other sounds” than it is about “focusing on this particular sound.”
Nothing in sound is really a happy accident, as you have to plan and set up to record particular audio.
A shotgun mic (a 416, for example) is highly directional. It is built to pick up the specific sound it’s pointed at rather than room ambience.
There’s a lot more to it obviously, but let’s just say we have the technology to pick up what we want, and filter out the rest.