how do microphones in a phone not pick up any audio that the speakers put out? if I put a call on speaker mode, how do people on the other end not hear themselves?




In: Technology

They do pick up the audio. But the phone “knows” exactly what electrical signal it is outputting to the speakers to create the audio; even with a simple electrical circuit, you can subtract that same signal from what is being output by the microphone and transmitted over the phone line. That will cancel out most of the noise produced by the speakers and hopefully prevent the feedback you normally get when a microphone is close to a speaker.
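The subtraction described above can be sketched in a few lines. This is a minimal, idealized model (pure sine tones, perfect knowledge of the speaker signal, no delay or room acoustics), not how a real phone pipeline is implemented:

```python
import numpy as np

# Idealized sketch: the phone generated speaker_out itself, so it can
# subtract that exact signal from whatever the microphone captured.
t = np.linspace(0, 1, 8000)                      # 1 s at 8 kHz
voice = np.sin(2 * np.pi * 220 * t)              # the local talker
speaker_out = 0.5 * np.sin(2 * np.pi * 440 * t)  # far-end audio being played

mic = voice + speaker_out        # the mic picks up both

cleaned = mic - speaker_out      # subtract the known speaker signal

# What remains is (ideally) just the local voice.
print(np.allclose(cleaned, voice))   # → True
```

In reality the speaker sound reaches the mic delayed and distorted by the room, which is why real phones use adaptive echo cancellers rather than a plain subtraction.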

It’s why, if people are talking over each other on the phone, you tend to get dropped or garbled audio instead of it being very noisy.

The first thing a microphone is connected to is a filter, mostly to get rid of frequencies outside the hearing range. Next the signal goes to an amplifier with built-in common-mode rejection: a signal on the input that matches a reference signal gets snubbed out, while signals unique to the input get amplified. Typically the common-mode rejection ratio is around 100 dB, so the sound of the speaker on your phone ends up at roughly 1/100,000 the amplitude of your voice.
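As a quick sanity check on the arithmetic, assuming the dB figure is an amplitude ratio (i.e. 20·log10):

```python
# A rejection ratio in decibels converts to an amplitude ratio via
# ratio = 10 ** (dB / 20). So 100 dB of rejection attenuates the
# matching signal by a factor of:
cmrr_db = 100
amplitude_ratio = 10 ** (cmrr_db / 20)
print(amplitude_ratio)   # → 100000.0, i.e. ~1/100,000 as strong
```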

Have you never heard yourself while on the phone? It happens (used to happen more often) and it’s really annoying.

The microphone is connected out of phase from the speaker, creating a phenomenon known as phase cancellation, in which two identical but inverted waveforms summed together cancel each other out.
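Phase cancellation is easy to demonstrate numerically. This toy demo uses a single sine tone and a perfect inversion, which is the idealized case:

```python
import numpy as np

# A waveform summed with its exact inverse (180° out of phase)
# cancels to silence.
t = np.linspace(0, 0.01, 441)
wave = np.sin(2 * np.pi * 440 * t)
inverted = -wave              # same waveform, opposite polarity

summed = wave + inverted
print(np.max(np.abs(summed))) # → 0.0, complete cancellation
```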

I hear myself all the time when my friends with iPhones talk to me on speaker. It’s fucking annoying because there’s at least a half-second delay, and suddenly I’m talking to myself.

They do, and it is very difficult to mitigate.

The first step is to isolate the speaker and the microphone from the chassis, since sound is a vibration and the chassis transmits it better than air does. This is the mechanical engineers’ job, and it is not easy, especially in a cramped cell phone.

The next step is to use directional microphones that only pick up sound from a very near source.

Then there is active noise cancellation, where a secondary microphone (or more than one) records the ambient noise so software can “subtract” it from the signal coming from the primary microphone.

Finally, there are various filters, both software and hardware, to eliminate unwanted noise like echo and the Larsen effect (feedback). Some are integrated into chips; others need to be coded. People often use both.

TL;DR: the microphone picks this up, but phones are made to remove it.
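The two-microphone subtraction step above can be sketched as follows. This is a deliberately idealized model (the secondary mic hears only noise, with no voice bleed or coupling factor); real systems estimate the coupling adaptively:

```python
import numpy as np

# Two-mic active noise cancellation, idealized: the primary mic (near the
# mouth) hears voice + noise, the secondary mic (far from the mouth) hears
# mostly noise, and software subtracts one from the other.
rng = np.random.default_rng(1)

voice = np.sin(2 * np.pi * 200 * np.linspace(0, 1, 8000))
noise = rng.normal(0, 0.3, 8000)

primary_mic = voice + noise      # close to the mouth
secondary_mic = noise            # far from the mouth: assumed noise-only

cleaned = primary_mic - secondary_mic
print(np.allclose(cleaned, voice))   # → True in this idealized case
```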

The mics do record it, but then it depends on the software: some cancel it or ignore it. Where the hardware is placed also matters.

Have a voice call in a game with both of you playing without headsets, just speakers, and you will probably hear that feedback with a second or so of delay.

Not an expert, but I believe it may be a form of phase cancellation happening between the mic and speaker, so that the mic picks up the sound but inverts it and cancels it out.

The correct answer is:

Acoustic Echo Cancellation or AEC.

The phone has 1 “channel” of AEC. This is the technology which “cancels out” the audio from the speaker.
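The standard way AEC is done is with an adaptive filter that learns the speaker-to-mic echo path. Below is a toy sketch using the NLMS algorithm; the echo path, step size, and signal lengths are illustrative assumptions, and a real canceller also has to handle delay estimation and double-talk:

```python
import numpy as np

# Toy acoustic echo canceller using NLMS. The speaker→mic path is a short
# FIR filter; the canceller learns it from the far-end signal and subtracts
# its echo estimate from the mic signal.
rng = np.random.default_rng(42)

far_end = rng.normal(0, 1, 4000)           # audio played out the speaker
echo_path = np.array([0.6, 0.3, 0.1])      # assumed speaker→mic impulse response
mic = np.convolve(far_end, echo_path)[:len(far_end)]  # echo only, for clarity

taps = len(echo_path)
w = np.zeros(taps)                         # adaptive filter weights
mu, eps = 0.5, 1e-6                        # NLMS step size and regularizer
residual = np.zeros_like(mic)

for n in range(taps - 1, len(mic)):
    x = far_end[n - taps + 1:n + 1][::-1]  # newest far-end sample first
    e = mic[n] - w @ x                     # mic minus estimated echo
    w += mu * e * x / (x @ x + eps)        # NLMS weight update
    residual[n] = e

# After convergence the learned weights match the true echo path,
# so the echo in the residual is essentially gone.
print(np.allclose(w, echo_path, atol=1e-2))   # → True
```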

One very important point – conferencing software never feeds the sound from your microphone back to your speakers. They feed that sound to everyone else, but never to you. This means you can’t get the short-loop feedback howl that is really easy to get in a PA. But you can get the long-loop warble from a loop that goes into your mic, out of someone else’s speakers, into their mic, back to your speakers, and to your mic.

Another thing they do is detect when you are speaking, and adjust the speaker volume down and the mic volume up, then restore the speaker volume and cut the mic once you stop. It doesn’t make for a good result, but it works.
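The level-ducking behavior described above boils down to a simple rule. The threshold and gain values here are illustrative assumptions, not real device settings:

```python
# While the near-end talker is active, duck the speaker and open the mic;
# otherwise restore the speaker and cut the mic.
def duck(near_end_level, threshold=0.1):
    """Return (speaker_gain, mic_gain) for a given near-end input level."""
    if near_end_level > threshold:   # local user is talking
        return 0.2, 1.0              # duck the speaker, open the mic
    return 1.0, 0.2                  # restore the speaker, cut the mic

print(duck(0.5))    # → (0.2, 1.0), talking: speaker ducked
print(duck(0.01))   # → (1.0, 0.2), silent: mic cut
```

Real implementations smooth these gain changes over time so the switching isn’t audible as clicks.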

You can also use a ‘comb filter’. Carve regular notches out of the speaker sound, so that a graph of the frequency response looks like a comb. Then use a ‘complementary’ filter on the microphone to remove the frequencies that remain in the speaker output. The sound you get from such a setup is – well, ugly – but at least you can get rid of the worst echo.
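The complementary comb-filter idea can be sketched crudely with FFT bin masking (real comb filters are built from delays, not per-bin masks, so this is only an illustration of the principle):

```python
import numpy as np

# The speaker keeps one set of frequency bands, the microphone path keeps
# the complementary set, so speaker sound leaking into the mic lands in
# bands the mic path rejects.
def comb(signal, keep_even):
    spec = np.fft.rfft(signal)
    bins = np.arange(len(spec))
    mask = (bins % 2 == 0) if keep_even else (bins % 2 == 1)
    return np.fft.irfft(spec * mask, n=len(signal))

rng = np.random.default_rng(7)
speaker = comb(rng.normal(0, 1, 1024), keep_even=True)  # speaker: even bands

# The mic's complementary filter removes exactly those bands, so any
# speaker sound leaking into the mic is wiped out.
leaked = comb(speaker, keep_even=False)
print(np.max(np.abs(leaked)) < 1e-10)   # → True, the leak is gone
```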


Um we do? Every time all I hear is the echo of my annoying voice

Electronic engineer here

The sound system in phones has something called a “negative feedback loop,”

which basically means it subtracts the output sound from the input sound.

here is what it does in a function form

(person voice + phone voice) – **(phone voice from feedback loop)** = person voice

the bold **phone voice** is the signal fed through by the negative feedback loop.

This has probably been said but I’m too lazy to scroll through the comments. On an iPhone, when you’re listening in the earpiece, there’s a microphone enabled on the bottom of the phone by your mouth (and bottom speaker). If you enable speaker mode, that bottom mic disables and it enables a mic that’s built into the earpiece so it doesn’t hear the speaker blasting right next to it

As someone who works in a call center, a lot of speakers do pick up their own audio on speakerphone, and the person DOES hear themselves. And we hate it.

I won’t read the replies, but I’ll add a little fact:

Audio is highly digital these days, meaning what the other party hears is nothing like a 1:1 voice connection. It’s the byproduct of noise-cancellation tech: mixing and stabilizing multiple microphone sources and applying other hardware/software optimizations such as compression, so fast and so efficiently that it gives us the impression of a real-time conversation. It’s kind of like why phones come with multiple cameras instead of a single one: they all work in tandem to construct the illusion of a great camera.

This technology is being expanded to video in initiatives such as Google’s Starline, where again SEVERAL components work at amazing speed to give the illusion of real-time talk, by emulating what we perceive as real time.

In modern cellphones, they’ve taken to being half duplex: while the speaker is outputting sound, the microphone is turned off, and while the microphone is listening, the speaker is cut. This is why you can’t talk over each other on a cellphone and still hear what the other person is saying, like you could on analog phones years ago.

There might be some magic software or hardware witchery on some types of connection, but the cell companies are too cheap to put full duplex systems in for everyone.

Just throwing this out there: as someone who works in a call center talking for 8 hours a day to people on their cellphones, your speakerphone doesn’t filter out sounds as much as you think. Please, just take the call off speaker. I’m so tired of hearing myself echo back.

Part of my job is taking calls from the public. I can hear everything going on in the background, and I wish people didn’t think phones were magic devices that only pick up speech. I can hear you eating, peeing, breathing. I can hear Wheel of Fortune in the background. I can hear the baby screaming on your lap. If you put me on speakerphone, I do hear an echo of everything I say.

If you call someone be courteous and do it from a quiet place.

I’m a tech support specialist for a major phone company, and I can tell you the phone company uses algorithms in the transport network to combat feedback and reduce outside noise on phone calls. Much of it is done in the network, not by your phone. That said, it’s not perfect, and if you’ve ever sat on a large conference call you know speakerphones do feed back. I hear my own voice echo, and I hear everything in the background. The most annoying are people who eat while they’re on the phone and loud mouth breathers.