So, basically, these are a way to check that the data has not been altered in transmission, and to provide a limited means to correct it. It's not meant to protect against deliberate alteration, but rather against random noise and other transmission problems altering the signal.
The ECC (error-correction code) is worked out using a specific mathematical process (there are several, and the details are beyond my ability to ELI5), and the output of that process is added to the data packet. On receipt of the packet, the same maths are applied to the data, and the result is checked against the ECC. If the result is different, we know the data has been altered.
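Here's a toy sketch of that send-then-check idea in Python. The "maths" here is just summing bytes mod 256 (a made-up, deliberately simple check); real codes like CRCs or Reed-Solomon are far more robust, but the pattern of appending a check value and recomputing it on receipt is the same:

```python
def checksum(data: bytes) -> int:
    # Toy check value: sum of all bytes, mod 256. Real codes are fancier.
    return sum(data) % 256

payload = b"hello"
packet = payload + bytes([checksum(payload)])  # sender appends the check value

received = bytearray(packet)
received[1] ^= 0x04                            # noise flips one bit in transit

data, check = bytes(received[:-1]), received[-1]
print(checksum(data) == check)                 # False -> corruption detected
```

If the recomputed value matches, the packet is (very probably) intact; if not, something got mangled on the way.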
As a result of the way it differs, it is *sometimes* possible to work out how the data must have been altered to produce that result, allowing you to reconstruct the original data without having to ask the sender to resend that packet.
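The simplest possible example of this correction trick is a repetition code: send every bit three times, and a majority vote on the receiving end fixes any single flipped copy without a resend. Real FEC schemes (Hamming, Reed-Solomon, etc.) get the same effect far more efficiently, but this toy version shows the idea:

```python
def encode(bits):
    # Send each bit three times.
    return [b for b in bits for _ in range(3)]

def decode(bits):
    # Majority vote over each group of three copies recovers the original
    # bit even if one of the three copies was flipped by noise.
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

sent = encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
sent[4] = 1                # noise flips one copy in transit
print(decode(sent))        # [1, 0, 1] -- original recovered, no resend needed
```

Note the cost: we sent 9 bits to deliver 3, which is exactly the bandwidth trade-off discussed next.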
However, the ECC bits eat up space in the data packet (which is limited to a fixed length by various protocols and standards), so lots of FEC will reduce your usable bandwidth. For example, if you can only send 1,000 bits a second, using 200 of them for ECC means you can only pass 800 bits of actual data a second.
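Spelling out those numbers:

```python
link_rate = 1000                  # bits per second the link can carry
fec_bits = 200                    # bits per second spent on error correction
goodput = link_rate - fec_bits    # bits of actual data per second
overhead = fec_bits / link_rate   # fraction of the link "wasted" on FEC
print(goodput, overhead)          # 800 0.2 -> 20% of capacity goes to FEC
```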
Thus, you have a trade-off between time saved on resending corrupted packets and time lost to reduced bandwidth, and the amount of FEC used depends on the medium you are sending the data over. A radio link, being “noisier”, would need more FEC than a fibre-optic cable, for example.
Links with high lag times tend to use more FEC: if it takes you 30 milliseconds to run an ECC check but 3,000 milliseconds to ask for and receive a resent packet, it's much more efficient to use a lot of FEC to make sure the received data is right the first time.
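A rough back-of-the-envelope version of that trade-off, using the numbers above (the 10% corruption rate is a made-up figure just for illustration):

```python
check_ms = 30       # time to run the error-correction check (from the text)
resend_ms = 3000    # round trip to ask for and receive a resend (from the text)
p_corrupt = 0.10    # assumed chance a packet arrives corrupted (illustrative)

# Without FEC: every corrupted packet costs a full resend round trip on
# average, on top of the check itself.
no_fec = check_ms + p_corrupt * resend_ms

# With heavy FEC: assume it repairs the corruption in place, so no resends.
with_fec = check_ms

print(no_fec, with_fec)   # 330.0 30 -> FEC wins despite its bandwidth cost
```

On a low-lag link the resend penalty shrinks, and at some point the FEC overhead stops being worth it, which is exactly the trade-off described above.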