How does a microphone work? How can it convert audio into digital?

In: Technology

4 Answers

Anonymous

Microphones don’t convert audio to digital.

Microphones just convert sound into analogue electrical signals.

Converting those analogue signals to digital signals is the job of an ADC, an analogue-to-digital converter.

The same basically goes for traditional headphones and speakers, which also just convert dumb analogue electrical signals into sound.

Speakers and microphones are basically the same sort of machine working in opposite directions.

Sound is air (or other stuff) moving back and forth in waves. The moving air can be turned into electrical pulses with a magnet and induction. Reversing that process can make electrical pulses move a magnet, which in turn moves the air around it, which is sound.

All that is simple analogue technology that works more or less the same as it did back in the days of Alexander Graham Bell and the first telephone.

Converting the analogue electrical signal into a digital signal is more involved and more modern, and takes specialized electronics to do.

Anonymous

There are a couple of different microphone types, mainly dynamic and condenser. I’ll explain the dynamic microphone, as that’s a bit easier to understand.

The dynamic mic is basically the same as a speaker driver, but in reverse. The sound waves make a membrane vibrate. There is a small coil of very thin copper wire attached to the membrane. When this coil moves near a magnet, a small electric current is induced in the wire, which follows the waveform of the sound waves. This is then taken out of the microphone and amplified.

The conversion from analog to digital happens in a separate unit called an A/D converter. Sometimes it's built into the microphone (e.g. USB microphones).

The A/D converter works by sampling the voltage from the microphone many thousands of times per second. 48,000 Hz is a common sampling rate in pro audio. This means that the converter measures the voltage 48,000 times every second and then assigns a binary value to it. If the bit depth is 24 bits, then one sample might be, say:

101100010011010101110110
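
If you're curious what "assigns a binary value" looks like, here's a toy Python sketch of the rounding step. The function name and the input voltage are made up for illustration; a real ADC does this in dedicated hardware, not in software.

```python
# Toy sketch: turn one analog voltage (scaled to -1.0 .. 1.0) into a
# 24-bit binary sample. Illustrative only; real ADCs work in hardware.
def quantize_24bit(voltage):
    max_level = 2 ** 23 - 1                  # signed 24-bit range: -8388608 .. 8388607
    voltage = max(-1.0, min(1.0, voltage))   # clamp to the valid input range
    sample = round(voltage * max_level)      # pick the closest digital level
    # show the sample as a 24-bit two's-complement binary string
    return format(sample & 0xFFFFFF, "024b")

print(quantize_24bit(0.385))  # prints a 24-bit string like the one above
```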

Anonymous

Look at a speaker playing music. A mic is pretty much the opposite. Instead of pushing out air waves, it moves with the air waves you make (on a much smaller scale) and generates a tiny electrical signal. Amplify that signal thousands of times and you can move more air with a speaker. You might want to google piezo and see how it works. It's not a mic, but it's the basic version of one.

Anonymous

Sound is made of waves of moving air. In your ear, these waves hit your eardrums, and your brain detects that to hear. In a microphone, the waves hitting it cause it to produce different voltages (the exact mechanism varies with different microphone types). At this point, the microphone has done its work, and it has produced a continuous analog signal.

It is possible to work with analog signals, and traditionally all audio hardware did. Microphones, cassette tapes, loudspeakers, record players, televisions, basically anything with an RCA connection is designed to work with continuous analog signals, which is what an RCA cable is designed to transmit. It wasn't until computers got involved that things changed.

In a continuous analog signal, every possible moment in time is mapped onto the actual value of the signal at that time. The shape of the sound waves is fully contained within the signal. This really isn't possible for a computer. Computers work with discrete chunks of time and with a finite number of different digital values (the reasons for these restrictions could be another ELI5).

This means that to work with audio, the signal is fed through a device called an analog-to-digital converter (ADC). The ADC samples the signal at a given time, figures out the closest digital value to the actual signal value, and records that (the inner workings of an ADC could be another ELI5). If you can do this fast enough, you can record enough values to fully reconstruct the original signal. It has been proven that as long as you sample at twice the highest frequency in the original signal, you can get the original signal back; that minimum sampling rate is known as the Nyquist rate. To play the sound back, you feed the recorded samples into a digital-to-analog converter (DAC), which reproduces the signal by outputting the digital level for each sample time, creating a stair-step approximation of the original signal.
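
To make the sampling idea concrete, here's a minimal Python sketch. The 440 Hz test tone and the 16-bit levels are assumptions picked for illustration; this is the arithmetic of sampling, not how real converter hardware is built.

```python
import math

SAMPLE_RATE = 48_000   # samples per second, a common pro-audio rate
TONE_HZ = 440.0        # test tone, well below half the sample rate

def analog_signal(t):
    # stand-in for the continuous voltage coming out of the microphone
    return math.sin(2 * math.pi * TONE_HZ * t)

# "ADC": read the voltage at evenly spaced times, round to 16-bit levels
samples = [round(analog_signal(n / SAMPLE_RATE) * 32767)
           for n in range(480)]  # 480 samples = 10 ms of audio

# "DAC": map the stored integers back to voltages; holding each value
# until the next sample arrives is what makes the stair-step output
reconstructed = [s / 32767 for s in samples]

print(samples[:4])  # the first few integer samples of the tone
```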

So, the quality of digital audio is defined by two things: the number of samples you take and the number of different levels used to represent the loudness of the sound in each sample. This creates a problem: the more samples you take, the larger the sound file is, and the more levels in each sample, the larger each sample becomes, and the larger the sound file is. This is why computer audio didn't really take off until the 90s. Computers weren't fast enough to do the sampling right, and they didn't have the storage space or the efficient compression algorithms to store the files.
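
A quick back-of-the-envelope calculation shows the size problem, using the 48,000 Hz sample rate and 24-bit depth mentioned in an earlier answer (stereo is assumed here just for illustration):

```python
# File-size arithmetic for uncompressed audio at the rates mentioned above
sample_rate = 48_000      # samples per second
bit_depth = 24            # bits per sample
channels = 2              # stereo

bytes_per_second = sample_rate * (bit_depth // 8) * channels
print(bytes_per_second)                   # 288000 bytes every second
print(bytes_per_second * 60 / 1_000_000)  # about 17.3 MB per minute
```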