How does a microphone work? How does it convert audio into digital?



In: Technology

4 Answers

Anonymous

Sound is made of waves of moving air. In your ear, these waves hit your eardrums, and your brain detects that movement as hearing. In a microphone, the waves hitting it cause it to produce different voltages (the exact mechanism varies with different microphone types). At this point, the microphone has done its work, and it has produced a continuous analog signal.
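If it helps to picture that, here is a toy Python sketch of what the analog signal looks like to the rest of the chain: just a value that exists at every instant in time. The 440 Hz tone and the voltage level are made-up numbers for illustration, not anything a real microphone guarantees.

```python
import math

def mic_voltage(t):
    """Toy model of a microphone's output: a continuous analog signal.

    The "sound" here is assumed to be a pure 440 Hz tone; a real
    microphone outputs whatever mix of frequencies hits it. The result
    is a voltage that can take any value at any instant in time.
    """
    frequency_hz = 440.0      # assumed test tone
    amplitude_volts = 0.05    # mic-level signals are only a few tens of millivolts
    return amplitude_volts * math.sin(2 * math.pi * frequency_hz * t)

# The signal is defined at *every* moment, not just at fixed ticks:
print(mic_voltage(0.000123))  # voltage at t = 123 microseconds
```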

It is possible to work with analog signals, and traditionally all audio hardware did. Microphones, cassette tapes, loudspeakers, record players, televisions, basically anything with an RCA connection is designed to work with continuous analog signals, which is what an RCA cable is designed to transmit. It wasn’t until computers got involved that things changed.

In a continuous analog signal, every possible moment in time is mapped onto the actual value of the signal at that time. The shape of the sound waves is fully contained within the signal. This really isn’t possible for a computer. Computers work with discrete chunks of time and with a finite number of different digital values (the reasons for these restrictions could be another ELI5).

This means that to work with audio, the signal is fed through a device called an analog to digital converter (ADC). The ADC samples the signal at a given time, figures out the closest digital value to the actual signal value, and records that (the inner workings of an ADC could be another ELI5). If you can do this fast enough, you can record enough values to fully reconstruct the original signal. It has been proven that as long as you sample at twice the highest frequency in the original signal, you can get the original signal back; that sampling rate is known as the Nyquist rate. To play the sound back, you feed the recorded samples into a digital to analog converter (DAC), which reproduces the signal by outputting each digital level for its sample time, creating a stair-step approximation of the original signal.
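Here is a rough Python sketch of those two steps. It is a toy model, not real driver code: the sample rate, bit depth, and the way the levels are spaced are simplifying assumptions, but it shows the ADC idea of sampling at fixed instants and snapping each sample to the nearest digital level.

```python
import math

def sample_and_quantize(signal, duration_s, sample_rate_hz, bit_depth):
    """Toy ADC: sample a continuous signal at evenly spaced instants and
    round each sample to the nearest of 2**bit_depth levels in [-1, +1]."""
    levels = 2 ** bit_depth
    step = 2.0 / (levels - 1)               # spacing between digital levels
    num_samples = int(duration_s * sample_rate_hz)
    samples = []
    for n in range(num_samples):
        t = n / sample_rate_hz              # sample instants are evenly spaced in time
        value = signal(t)
        quantized = round(value / step) * step      # snap to the nearest level
        quantized = max(-1.0, min(1.0, quantized))  # keep within the ADC's input range
        samples.append(quantized)
    return samples

# A 1 kHz tone sampled at 8 kHz (well above its 2 kHz Nyquist rate), 4-bit depth:
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
digital = sample_and_quantize(tone, duration_s=0.002, sample_rate_hz=8000, bit_depth=4)
print(digital)   # a short list of coarse, stair-step values approximating the sine wave
```

At 4 bits there are only 16 possible levels, so the output is visibly coarse; real digital audio typically uses 16 or 24 bits per sample.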

So, the quality of digital audio is defined by two things: the number of samples you take per second and the number of different levels used to represent the loudness of the sound in each sample. This creates a problem: the more samples you take, the larger the sound file is, and the more levels each sample can take, the larger each sample becomes, and again the larger the sound file is. This is why computer audio didn’t really take off until the 90s. Computers weren’t fast enough to do the sampling right, and they didn’t have the storage space or the efficient compression algorithms to store the files.
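To put rough numbers on that trade-off, here is the size arithmetic for uncompressed audio, using CD-style settings (44,100 samples per second, 16 bits per sample, two channels) as the example:

```python
def uncompressed_size_bytes(sample_rate_hz, bit_depth, channels, seconds):
    """Raw digital audio size: one sample per channel per tick,
    each sample taking bit_depth bits (8 bits = 1 byte)."""
    return sample_rate_hz * bit_depth * channels * seconds // 8

# One minute of CD-quality stereo audio:
one_minute = uncompressed_size_bytes(44100, 16, 2, 60)
print(one_minute / 1_000_000, "MB per minute")  # about 10.6 MB, huge by early-90s hard drive standards
```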
