Let’s suppose you have a sound wave, and you’re recording it. Every 25 microseconds (say), the computer records the amount of air pressure at the microphone.
Now you have excellent *time* data about the sound. But you have no idea about the *frequency* data. You can point to the times when it's loud or soft, but not to the times when it's playing a high-pitched tone versus a low-pitched one.
One way to get frequency data is by taking a Fourier transform. But then you lose the *time* information.
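Here's a small sketch of that trade-off (using numpy, with a made-up test signal: a 440 Hz tone for half a second followed by an 880 Hz tone). The Fourier transform shows both pitches clearly, but nothing in the spectrum tells you which one came first:

```python
import numpy as np

fs = 8000                       # samples per second
t = np.arange(fs) / fs          # one second of time stamps
# 440 Hz tone in the first half, 880 Hz tone in the second half
sig = np.where(t < 0.5,
               np.sin(2 * np.pi * 440 * t),
               np.sin(2 * np.pi * 880 * t))

spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

# The two strongest frequencies: both tones show up...
top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
print(top_two)   # [440.0, 880.0]
# ...but the spectrum alone can't say which tone played when.
```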
Wavelet transforms give a nice compromise between these two types of useful information. A wavelet is like a short blip with a fairly specific frequency (though not as specific as a pure sine wave tone), at a fairly specific time (though not as specific as a single clap). Your microphone's signal can be broken into wavelets, and then you can identify how the frequencies change with time (or vice versa).
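To make that concrete, here's a minimal sketch of the idea (not a full wavelet transform) using numpy. It builds one common kind of wavelet — a sine wave tapered by a Gaussian bump, often called a Morlet wavelet — and slides it along the same two-tone test signal. The result tells you *when* each frequency is present, which the plain Fourier transform couldn't:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# Test signal: 440 Hz for the first half second, 880 Hz for the second
sig = np.where(t < 0.5,
               np.sin(2 * np.pi * 440 * t),
               np.sin(2 * np.pi * 880 * t))

def morlet(freq, fs, cycles=10):
    """A short complex sine wave tapered by a Gaussian: localized
    in time (the taper) and in frequency (the sine)."""
    dur = cycles / freq                          # wavelet length in seconds
    tw = np.arange(-dur / 2, dur / 2, 1 / fs)
    gauss = np.exp(-(tw ** 2) / (2 * (dur / 6) ** 2))
    return np.exp(2j * np.pi * freq * tw) * gauss

def power_over_time(sig, freq, fs):
    """Slide the wavelet along the signal (a convolution); the
    magnitude says how strongly that frequency is present at each moment."""
    return np.abs(np.convolve(sig, morlet(freq, fs), mode='same'))

p440 = power_over_time(sig, 440, fs)
p880 = power_over_time(sig, 880, fs)

half = len(sig) // 2
# 440 Hz dominates the first half, 880 Hz the second half
print(p440[:half].mean() > p880[:half].mean())   # True
print(p880[half:].mean() > p440[half:].mean())   # True
```

A real wavelet transform repeats this at many frequencies at once, stretching or squeezing one mother wavelet, but the core trick is exactly this: a time-localized, frequency-localized blip slid across the signal.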