# [ELI5] For wifi why do we use a whole channel of 20MHz instead of a couple frequencies?


So I get why we can’t use ONE single frequency: it’s a sine wave, so it’s always constant and we can’t transfer data through it. But what about using like 10 or 20 frequencies, something like from 2.4 GHz to 2.4 GHz + 10 Hz (wrong syntax but you get what I mean), and using that to transfer the data?

I’m gonna give another potentially wrong example: for radio, don’t we use one single frequency? We set our receiver to that frequency and get the transmitted data from the transmitter. Or is it different from wifi because radio is simplex and wifi is half-duplex?

This ended up requiring two explanations (wifi and radio), but I’m more interested in wifi’s use of a whole 20 MHz channel instead of a smaller number of frequencies. Thanks.


The frequency you’re thinking of is the carrier frequency, which is basically the frequency on top of which you put the data you need to transfer. The more data per unit of time you want to transfer, the wider a channel around this carrier you need. There are different techniques with varying efficiency of channel usage, but given a certain technique, the data rate dictates the necessary width of your channel. Even radio is not a single frequency; the frequency you set on your radio is the carrier frequency. There’s still a band around that carrier to “contain” the music.

We can never use one frequency. Exactly one frequency cannot communicate any data at all. It’s just a constant unchanging signal. To transmit data, we need a range of frequencies, known as “bandwidth”. The “width” of the “band”.

The amount of data we can move (assuming everything else is perfect) is directly related to (and limited by) the bandwidth.

So… Using exactly one frequency has exactly zero bandwidth and is not very productive.

If you use a single frequency you can only send a constant sine wave. If you just turn the transmitter on and off, the result is that you create signals on other frequencies.

Any signal that changes enough to encode information will contain energy at multiple frequencies. The faster you change the signal, the more information you can encode, but the result is a signal occupying a larger frequency range.
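You can see this effect numerically. Here’s a minimal sketch (toy window size and carrier bin, chosen only for illustration) that takes a pure sine wave and an on-off keyed version of it, runs a naive discrete Fourier transform on each, and counts which frequency bins carry significant energy:

```python
import cmath

N = 64        # samples per window (toy size, an assumption for this demo)
carrier = 8   # carrier frequency, in whole cycles per window

# Pure sine: constant-amplitude carrier, never changes.
pure = [cmath.exp(2j * cmath.pi * carrier * n / N).real for n in range(N)]

# On-off keyed sine: transmitter switched off for the second half of the window.
keyed = [s if n < N // 2 else 0.0 for n, s in enumerate(pure)]

def dft_magnitudes(x):
    """Naive discrete Fourier transform: magnitude per frequency bin."""
    n_samples = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples)]

def significant_bins(mags, threshold=1.0):
    """Frequency bins whose magnitude rises above a small threshold."""
    return [k for k, m in enumerate(mags) if m > threshold]

# The unchanging sine occupies exactly one frequency (plus its DFT mirror).
print(significant_bins(dft_magnitudes(pure)))   # → [8, 56]

# Merely switching it on and off smears energy across many frequencies.
print(len(significant_bins(dft_magnitudes(keyed))))
```

The pure carrier lights up only bin 8 (and its mirror image at 56), while the keyed signal spreads energy across many neighboring bins: that spread is exactly the extra bandwidth the comment is describing.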

Any real radio transmission will have noise in the signal you receive: natural noise, interference from other radio transmitters, and noise from the electronics in the receiver. This limits how small the changes you use to encode data can be. The higher the signal power compared to the noise, the more data you can transmit. This is captured by the signal-to-noise ratio.

From the noise, signal power, and channel bandwidth you can mathematically calculate the maximum capacity of a channel. It is given by the Shannon–Hartley theorem: [https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem](https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem)

The max capacity in bits per second is C = B log2(1 + S/N), where B is the bandwidth in hertz, log2 is the base-2 logarithm, and S/N is the signal-to-noise ratio.

So for a given signal-to-noise ratio, the capacity is directly proportional to the signal bandwidth.

So your example of a 10 Hz channel starting at 2.4 GHz would have 10/20,000,000 = 1/2,000,000 of the capacity of a 20 MHz channel.
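The Shannon–Hartley formula above is easy to plug numbers into. A minimal sketch, assuming a made-up signal-to-noise ratio of 1000 (about 30 dB, purely for illustration):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley theorem: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 1000  # assumed SNR (~30 dB), an illustration only

wide = shannon_capacity(20e6, snr)   # a 20 MHz Wi-Fi channel
narrow = shannon_capacity(10, snr)   # the 10 Hz channel from the question

print(f"20 MHz channel: {wide / 1e6:.0f} Mbit/s")   # → about 199 Mbit/s
print(f"10 Hz channel:  {narrow:.1f} bit/s")        # → about 99.7 bit/s
print(f"ratio: {wide / narrow:,.0f}")               # → 2,000,000
```

Because the SNR term is identical for both channels, it cancels out in the ratio, which is why the 1/2,000,000 figure depends only on the bandwidths.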

Because there is an inherent limit on the amount of data you can transmit through a channel with a given amount of noise, you need to increase the bandwidth of the channel to transmit more data.

We do, the 2.4GHz band is centered at 2.4GHz with 20MHz of bandwidth, so it’s actually 2.390 to 2.410 GHz.

The reason you need a range of frequencies is that the channel capacity (max number of bits per second you can send or receive) is proportional to the bandwidth. If the bandwidth is 0, then the amount of information you can send is 0 bits per second. If you want something to Google, it’s called the Shannon–Hartley limit, and there is some interesting math behind its proof.

It’s actually the same for radio. The key insight behind radio is that what you’re doing is taking a low-bandwidth baseband signal and moving it to a “channel” at a higher base frequency so it doesn’t interfere with another signal of the same bandwidth. Another trick you can pull is to time the signal so it is interleaved in time with other signals, but this is ultimately the same thing viewed from a different angle (pun intended, for the Fourier nerds).

No, radio doesn’t use a single frequency. AM radio channels occupy a 10 kHz band, FM channels occupy a 200 kHz band. The frequency you tune your radio to is the middle of the channel (usually).

A channel with just one frequency can never change its signal. Any change to the signal creates sidebands (frequencies slightly above and below the “main” frequency). The faster the change, the wider the sidebands. With a ±10 Hz channel, you can change the signal 10 times per second. With ±10 kHz, 10,000 times per second. With a ±20 MHz channel, 20 million times per second.
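Combining that rule of thumb (roughly one signal change per second per hertz of half-width) with the idea that each change can carry several bits gives a rough data rate. A minimal sketch, assuming 16 distinguishable signal values per change (like 16-QAM; the number of levels is an assumption, since in reality it depends on the signal-to-noise ratio):

```python
import math

def data_rate(changes_per_second, levels):
    """Bits per second if each change selects one of `levels` distinguishable values."""
    return changes_per_second * math.log2(levels)

# Rule of thumb from above: a +/-W Hz channel allows roughly W changes per second.
# 16 levels per change is an illustrative assumption, not a Wi-Fi spec value.
for half_width_hz, label in [(10, "+/-10 Hz"), (10_000, "+/-10 kHz"), (20_000_000, "+/-20 MHz")]:
    print(f"{label}: ~{data_rate(half_width_hz, 16):,.0f} bit/s")
```

The wider channel wins on both axes at once: more changes per second, and (with enough signal power) more distinguishable levels per change.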