We have the technology required to record, transmit, and play back high-fidelity audio and video. Why is phone call quality still so bad, as if we’re talking over walkie-talkies?
In other words, we definitely *can* have high-quality phone calls. Why do the carriers (or whoever is responsible for building the underlying infrastructure) choose not to make this improvement?
Edit: the question came up after finishing a call with my bank. I’m pretty sure the CS rep on the other end used a landline phone, and the audio quality was no bueno. Maybe my impression of phone call quality has some recency bias involved, so please correct me if phone call quality isn’t that bad in your region or in your experience.
Maybe because you live in America?
We have voice-over-wireless in some European countries; the phone routes the call through the internet via the provider.
I also believe we have different encoding formats based on signal type and device: GSM/HSDPA/3G/4G/5G, etc.
So technically, if you have good infrastructure to begin with, there is little cost for companies to allow you to use better audio.
ELI5 if I’m wrong.
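For what it’s worth, the speech codec really does differ by network generation. A rough summary as a sketch (the bitrates and sample rates below are the codecs’ common top operating points, rounded and from memory, so treat them as approximate rather than spec-exact):

```python
# Approximate top bitrates and sampling rates of common mobile speech codecs.
# Figures are rounded; check the 3GPP codec specs for exact values.
codecs = {
    # era:              (codec,    max kbit/s, sample rate kHz)
    "2G/3G":            ("AMR-NB", 12.2,  8),   # "narrowband", classic phone sound
    "VoLTE (HD Voice)": ("AMR-WB", 23.85, 16),  # noticeably clearer speech
    "VoLTE/5G (EVS)":   ("EVS",    128,   48),  # up to near music quality
}

for era, (name, kbps, khz) in codecs.items():
    print(f"{era:18} {name:7} up to {kbps} kbit/s, {khz} kHz sampling")
```

The jump from 8 kHz to 16 kHz sampling is why a VoLTE-to-VoLTE call on the same network can sound dramatically better than a call that falls back to the legacy path.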
The thing is that phones are widely used, and a lot of them are older analog and low-bitrate systems built on standards set when phones were a novelty in much of the world. Changes now would require either a patchwork of ad hoc solutions that would be a nightmare to maintain, or a global agreement to fundamentally change how telephone interoperability works. Providers don’t see much value in upgrading either way; the system as is works “well enough” and people have the option of skipping the POTS system and using the Internet if they so choose.
The other issue is, sure a “HD calling” system would make your calls better to your friend (and we have several different options for this already) but it’s not going to improve the fact that you can barely understand the company’s employee in a ‘virtual call center’ with their three children screaming in the background and their company-provided $5 microphone stuffed actually inside their throat (or however they manage to get such exceptionally shit audio quality). Both ends have to have the optimal solution for it to be relevant.
The convention hasn’t changed.
At first, every call needed its own wire. You would pick up the phone, connected by your own wire to the operator, and tell them who or which number you wanted to reach. They would then unplug their own phone from your wire and patch a wire between your line and the line of the person you were trying to contact.
This works fine in a small town, but if you wanted to call someone in the next town over, you would call the operator, and they would connect you to the operator of the next town, who would connect you to the person you are trying to reach. The problem was, the number of calls between those two towns was limited by the number of phone lines between those two towns. If there was only one line between those two towns, only one person could call that other town at a time, and everyone else would have to wait.
This is then exacerbated if you wanted to call a distant town: you would need to be routed through several other towns. This is why long-distance calls cost so much.
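This trunk-capacity limit is exactly what telephone traffic engineers later quantified with the Erlang B formula, which gives the probability that a new call finds every line between two exchanges busy. A quick sketch (the function name and the example numbers here are just for illustration):

```python
from math import factorial

def erlang_b(lines, traffic_erlangs):
    """Probability that a new call finds all trunk lines busy.

    `traffic_erlangs` is the offered load: average number of
    simultaneous calls people are trying to make.
    """
    num = traffic_erlangs ** lines / factorial(lines)
    den = sum(traffic_erlangs ** k / factorial(k) for k in range(lines + 1))
    return num / den

# One trunk line between two towns, with callers offering 0.5 Erlangs
# of traffic: about a third of call attempts get blocked.
print(erlang_b(1, 0.5))

# Adding a second line cuts blocking dramatically.
print(erlang_b(2, 0.5))
```

This is why adding even a few extra trunk lines between towns was worth the expense: blocking falls off quickly as lines are added.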
Ironically, phone call quality was actually better back then. We intentionally made it worse.
We take the range of frequencies human voices typically occupy, and we isolate it. When we send the phone signal, we can shift it to a higher or lower frequency band, and on the other end of the call that process is undone so the signal occupies the human range of hearing again. By doing this, you can send multiple signals down the same wire at once. With the growing demand for phone calls, this was necessary to avoid building millions of new phone lines all across the country, particularly between cities.
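The frequency-shifting trick described above is frequency-division multiplexing. Here is a minimal numpy sketch of the idea; the sample rate, carrier slots, and pure test tones are arbitrary stand-ins for real voice channels:

```python
import numpy as np

FS = 100_000                       # simulation sample rate in Hz (arbitrary)
t = np.arange(0, 0.01, 1 / FS)     # 10 ms of signal

# Two stand-in "voice" signals, both living in the normal audible band
voice_a = np.sin(2 * np.pi * 400 * t)
voice_b = np.sin(2 * np.pi * 1_000 * t)

# Shift each call onto its own carrier so they occupy different frequency
# slots on the shared wire (amplitude modulation, the simplest version)
line = (voice_a * np.cos(2 * np.pi * 10_000 * t)
        + voice_b * np.cos(2 * np.pi * 20_000 * t))

# At the receiving end, mixing with the same carrier shifts call A back
# down into the audible band; a low-pass filter (omitted here) would then
# remove the unwanted copy left up around twice the carrier frequency.
recovered_a = line * np.cos(2 * np.pi * 10_000 * t)
```

After modulation, all of `line`’s energy sits around 10 kHz and 20 kHz instead of in the audible band, so the two calls no longer collide on the shared wire.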
Since then, demand has only grown, and we’ve had to implement more compression techniques, not all of which are lossless, which is why the quality has decreased. This includes the switch from an analog system to a digital one.
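One concrete example of that lossy digital step: the μ-law encoding standardized as G.711 squeezes each voice sample into 8 bits by companding, spending fine steps near silence and coarse steps on loud sounds. A simplified sketch of the idea (the real standard uses a segmented piecewise-linear approximation of this log curve, not the exact formula):

```python
import numpy as np

MU = 255.0  # mu-law parameter used by G.711 in North America and Japan

def mu_law_compress(x):
    """Map a signal in [-1, 1] to [-1, 1] with finer steps near zero."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def encode(x, bits=8):
    """Compand, then quantize to the 8-bit levels a DS0 channel carries."""
    levels = 2 ** (bits - 1) - 1
    return int(np.round(mu_law_compress(x) * levels))

def decode(q, bits=8):
    levels = 2 ** (bits - 1) - 1
    return mu_law_expand(q / levels)

# Quiet samples keep much more relative precision than loud ones, which
# suits speech -- but the round trip is lossy either way.
print(decode(encode(0.01)))   # close to 0.01, but not exactly 0.01
print(decode(encode(0.5)))    # close to 0.5, with a larger absolute error
```

The companding is why 8-bit telephone audio sounds far better than naive 8-bit linear quantization would, but it’s still a lossy squeeze compared to the original analog signal.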
Private phone networks, like among an office building, also need to implement their own compression to allow for multiple phone conversations taking place at once, which adds to the problem even more (but only if that party is involved).
We could go back and fix these issues, but the number of new phone lines required would be enormous, and we would still be limited by the range of radio frequencies we can use for cellular communication. Leaving the compression in place is simply much easier.
[Relevant Tom Scott video](https://youtu.be/w2A8q3XIhu0?si=gUfWrjeV5GAN1wAq)
They already have the capability of increasing the audio quality. AT&T was at one of the conferences I attended, and they showed it off and it was so much better than what we have now. The caveat is that it is for first responders and they charge them a metric ton to have that feature. I don’t know if they still have that feature.
Edit: It is called FirstNet
You can chalk this up to the original digitization of the phone system which happened in the 1960s. Audio “resolution” is a function of the bandwidth of the digitized signal, and at the time, 56kbits per second* (a “DS0 channel”) was considered good enough to match the quality of analog phone lines of the time (by comparison, CD quality audio is roughly 700kbits per second). If you remember the days of dialup internet access, that 56kbps number probably sounds familiar.
Because of the installed base of phone network equipment that works at this bandwidth, it’s simply impossible to upgrade the whole network, and as such 56k remains the lowest common denominator. That said, Voice over LTE supports higher-quality audio streams, and you may have noticed this if you’re calling someone who’s on the same mobile network (but only if both phones have an LTE or better signal). But DS0 bandwidth will always be a fallback.
(Yes, I know DS0 is technically 64kbps – but one bit out of every 8 is “robbed” from the audio stream, which leaves you with 56k)
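The arithmetic behind those numbers, for anyone who wants to check (the CD figure is per channel; stereo doubles it):

```python
SAMPLE_RATE = 8_000        # Hz: a DS0 channel samples voice 8,000 times/sec
BITS_PER_SAMPLE = 8        # 8-bit companded samples (G.711)

ds0 = SAMPLE_RATE * BITS_PER_SAMPLE            # full DS0 rate: 64,000 bit/s
usable = SAMPLE_RATE * (BITS_PER_SAMPLE - 1)   # after robbed-bit signaling

cd_per_channel = 44_100 * 16                   # CD audio, one channel

print(ds0, usable, cd_per_channel)   # 64000 56000 705600
```

So a single CD channel carries more than ten DS0s’ worth of bits, which is the whole gap between "phone quality" and "hi-fi quality" in one line of arithmetic.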