Microphones… can sound waves be reproduced with tones/electrical current?


I’m not sure if I’m explaining this correctly, but I was looking into vibrations, frequencies, sound waves, and how microphones work.
(Looking into doesn’t mean I know or understand any of it, nor do I pretend to lol)

If microphones work like this: “When sound waves hit the diaphragm, it vibrates. This causes the coil to move back and forth in the magnet’s field, generating an electrical current,” then I’m assuming the electrical current is sent on to the amp or speaker.

Let’s use the word “hello” for example.
When someone says hello, it produces a sound wave / acoustic wave / electrical current?… If so, is there a certain signature assigned to / associated with your sound wave for “hello”, and if so, is it measured in decibels? Frequencies? Tones? Volts? And can it be recreated without someone physically saying hello?

For example, could someone make a vibration that mimics your sound wave for “hello” by hitting a certain object, if they knew the exact tone/frequency? And/or can you make an electrical current that mimics your “hello” sound wave?

I understand a little about a record player, but can someone go onto a computer and reproduce a certain tone/frequency so that it says “hello”? I’m not sure if that makes sense lol.


5 Answers

Anonymous 0 Comments

The short answer is “yeah”. That’s exactly what a speaker is doing. It is recreating the sound of your “hello”. Modern AI software can fully fake your voice pretty convincingly.

Now, doing this with some kind of mechanical instrument like a guitar is so difficult as to be more or less impossible, but there is no fundamental reason it couldn’t be done.

As for how your “hello” is measured, it could either be measured in the time domain, the way a recording does it, by sampling the sound pressure over time, or it could be measured in the frequency domain, though that gets considerably more complicated if you do it properly (by “properly” I mean fully in the frequency domain, with no time component).
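If it helps to see those two views side by side, here’s a minimal Python sketch (the sample rate and the “voice-like” test tone are made-up numbers, purely for illustration): the time-domain samples are what a recording stores, and the Fourier transform gives the frequency-domain picture of the same signal.

```python
import numpy as np

# Assumed parameters for illustration only.
sample_rate = 8000          # samples per second
duration = 0.5              # seconds
t = np.arange(int(sample_rate * duration)) / sample_rate

# Time domain: a list of sound-pressure samples, like a recording stores.
# Here we fake a simple signal with a 200 Hz fundamental plus one harmonic.
signal = 0.8 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)

# Frequency domain: which frequencies are present, and how strongly.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two strongest bins sit near 200 Hz and 400 Hz.
top = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top))   # roughly [200.0, 400.0]
```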

Anonymous 0 Comments

Do you mean, basically, “can someone take just a raw waveform generated by an electronic device and adjust it to sound exactly like human speech when it’s played through speakers?”, even without using an actual recording of human speech to modulate the waveform?

Because the answer to that is yes. Synthesizers can do pretty much anything. In terms of hitting objects… it’s probably not practically feasible; there’s probably no particular object or combination of objects somebody could reasonably put together and bang on to produce your voice, but theoretically, if you did have the right physical objects, you could. It’d just be easier to do with a synthesizer. And even then, why bother when I have a recording of your voice?
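As a toy illustration of “just a raw waveform generated by an electronic device”, here’s a hedged Python sketch (standard library only; the frequency, duration, and filename are arbitrary choices of mine, not anything from the thread). It computes sine samples from scratch and writes them to a WAV file a speaker can play. A real speech synthesizer shapes the waveform far more elaborately, but the principle of feeding numbers to a speaker is the same.

```python
import math
import struct
import wave

# Arbitrary example values.
sample_rate = 16000
freq_hz = 440.0
duration_s = 1.0

frames = bytearray()
for n in range(int(sample_rate * duration_s)):
    # One sample of a pure sine tone, scaled to the 16-bit signed range.
    sample = math.sin(2 * math.pi * freq_hz * n / sample_rate)
    frames += struct.pack("<h", int(sample * 32767 * 0.5))

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(sample_rate)
    wav.writeframes(bytes(frames))
```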

Anonymous 0 Comments

Yes. If you capture all the harmonics.

This is basically where physics meets music theory.

If you pick up an old-fashioned wall phone you will hear the dial “tone”, which is actually two frequencies added together. In fact, every key on the dial pad makes a different combination of two frequencies.
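Here’s a small Python sketch of that idea (the dial-tone and keypad frequencies below are the standard North American values; the sample rate and duration are just example numbers): each sound is literally two sine waves added sample by sample.

```python
import numpy as np

SAMPLE_RATE = 8000  # example value

# Standard North American dial tone: 350 Hz + 440 Hz.
DIAL_TONE = (350, 440)

# DTMF keypad: each key is one "row" frequency plus one "column" frequency.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def two_tone(f1, f2, seconds=0.25):
    """Return samples of two sine waves simply added together."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

dial = two_tone(*DIAL_TONE)
key_5 = two_tone(*DTMF["5"])
```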

If you dig into the math, simply adding two tones at x and y hertz gives a signal whose spectrum still contains just those two frequencies. The extra frequencies, x+y Hz and x−y Hz, appear when the tones are multiplied together or pass through something nonlinear; when you only add two close tones, the “beat” you hear is the loudness rising and falling at the difference rate, not a new frequency in the signal. And so on for several frequencies.
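For anyone who wants the actual identity behind that, it’s the product-to-sum formula:

$$\cos(2\pi x t)\,\cos(2\pi y t) \;=\; \tfrac{1}{2}\cos\!\bigl(2\pi (x-y)t\bigr) \;+\; \tfrac{1}{2}\cos\!\bigl(2\pi (x+y)t\bigr)$$

so a product of an x Hz tone and a y Hz tone really does contain x−y and x+y, while a plain sum a·cos(2πxt) + b·cos(2πyt) contains only x and y.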

Now, consider a piano key, say the A above middle C (440 Hz). The wire in the piano isn’t making just 440 Hz; it’s making many other frequencies. These harmonics, all at different frequencies and phases, together make that unique sound. Now take a singer singing “aaaaaa”, “eeee”, or “oooooo” at that same A, or a trumpet or violin playing it for that matter: the fundamental, or strongest frequency, is the same, but the harmonics make the “fingerprint” of that sound.

Electrical engineers call this “spectral content”, but a musician would call it “timbre” or tone quality.
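To make that concrete, here’s a hedged Python sketch (the harmonic amplitude lists are invented purely to show the idea, not real measurements of any instrument or vowel): two signals share the same 220 Hz fundamental, so they have the same pitch, but their different harmonic recipes give them different timbres.

```python
import numpy as np

sample_rate = 16000                        # example value
fundamental = 220.0                        # same pitch for both sounds
t = np.arange(sample_rate) / sample_rate   # 1 second of time points

def additive(harmonic_amps):
    """Sum the fundamental and its harmonics with the given amplitudes."""
    out = np.zeros_like(t)
    for k, amp in enumerate(harmonic_amps, start=1):
        out += amp * np.sin(2 * np.pi * fundamental * k * t)
    return out / max(1, len(harmonic_amps))

# Made-up harmonic recipes: same fundamental, different "fingerprints".
bright = additive([1.0, 0.9, 0.8, 0.7, 0.6])    # strong upper harmonics
mellow = additive([1.0, 0.3, 0.1, 0.05, 0.02])  # mostly the fundamental
```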

This is also how audio and video compression work: not only can you reproduce the sound, as you’re asking, you can actually remove a lot of the “detail” in the harmonics that a human ear won’t miss.
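As a crude illustration of that last point (this is not how MP3 or AAC actually work; real codecs use psychoacoustic models, and this just shows that you can throw spectral detail away and still rebuild something close): take the spectrum, zero out the weakest components, and transform back.

```python
import numpy as np

def crude_compress(samples, keep_fraction=0.1):
    """Keep only the strongest frequency components and rebuild the signal."""
    spectrum = np.fft.rfft(samples)
    threshold = np.quantile(np.abs(spectrum), 1 - keep_fraction)
    spectrum[np.abs(spectrum) < threshold] = 0   # discard the quiet "detail"
    return np.fft.irfft(spectrum, n=len(samples))

# Example: a two-tone signal survives dropping 90% of its spectrum almost intact.
sr = 8000
t = np.arange(sr) / sr
original = np.sin(2 * np.pi * 300 * t) + 0.4 * np.sin(2 * np.pi * 600 * t)
rebuilt = crude_compress(original)
print(np.max(np.abs(original - rebuilt)))   # small reconstruction error
```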

Anonymous 0 Comments

Yeah, voice synthesizers have been around for a while. Stephen Hawking had one; all of its voice tones were generated electronically. Modern ones are much more sophisticated.

Anonymous 0 Comments

Basically, yeah. The electrical signal generated by the microphone can be processed and recreated as sound. It’s all about frequencies and waveforms. Your “hello” has a unique waveform that can be recreated digitally or by another analog system; it’s like a fingerprint for sound. Decibels measure loudness, frequency measures pitch. So yes, reproduce the right combination of frequencies over time and you’ve got it.