So, basically, these are a way to check that the data has not been altered in transmission, and they provide a limited means to correct it. It's not meant to protect against deliberate alteration, but rather against random noise and other transmission problems altering the signal.
The ECC is worked out using a specific mathematical process (there are several, and the details of them are beyond my ability to ELI5), and the output of that process is then added to the data packet. On receipt of the packet, the same maths are applied to the data, and the result is checked against the ECC. If the result is different, we know that the data has been altered.
As a result of the *way* it differs, it is possible to *sometimes* work out how the data must have been altered to produce that result, allowing you to reconstruct the original data without having to ask the sender to resend that packet.
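As a rough illustration of that "do the maths, attach the result, redo the maths on receipt" idea, here's a toy checksum sketch in Python. This is my own simplification, not any real protocol, and it can only *detect* some errors, not correct them:

```python
# Toy example: append a check byte (sum of all bytes mod 256) to a message.
# Real ECC/FEC schemes are far more sophisticated than this.

def add_check_byte(data: bytes) -> bytes:
    """Sender side: append the check byte to the data."""
    check = sum(data) % 256
    return data + bytes([check])

def verify(packet: bytes) -> bool:
    """Receiver side: recompute the check byte and compare with the last byte."""
    data, check = packet[:-1], packet[-1]
    return sum(data) % 256 == check

packet = add_check_byte(b"hello")
print(verify(packet))                                # True  - arrived intact
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]   # flip one bit "in transit"
print(verify(corrupted))                             # False - alteration detected
```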
However, the ECC bits eat up space in the data packet (which is limited to a fixed length by various protocols and standards), so lots of FEC will reduce your usable bandwidth. For example, if you can only send 1,000 bits a second, using 200 of them for ECC means you can only pass 800 bits of actual data a second.
Thus, you have a trade-off between time saved on resending corrupted packets and time lost to reduced bandwidth, and the amount used is related to what medium you are using to send the data. A radio link, being more "noisy", would need more FEC than a fibre-optic cable, for example.
Links with high lag times tend to have more FEC: if it takes you 30 milliseconds to run an ECC check, but 3,000 milliseconds to ask for and receive a resent packet, it's much more efficient to use a lot of FEC to make sure the received data is right the first time.
There’s an awful lot of math involved in the various FEC types, and it’s going to be *really* hard to “break it down, Barney style,” but I’ll do what I can.
FEC and ECC are codes that are built into the signal to allow the receiver to correct errors on its own. Some codes send a chunk of data and a chunk of code, some signals interleave the data with the code, and some combine the data with the code and send out something entirely new. The types of code range from relatively simple (like the code built into the bar code on your soda can) to extremely complex (like the codes used in digital video transmissions).
The simple ones, like the previously mentioned bar code, perform a few mathematical calculations on the numbers of the code and compare the result to the check digit. Your basic bar code has 12 digits printed under it: a single number on the far left, ten digits underneath the bars, and one more on the far right. That far-right number is the check digit. Here's how it works:
The bar code on the bottom of my box of Better Oats Steel Cut Original oatmeal is:
0 42400 01593 2
That 2 is the check digit. Starting from the left, we add up all of the numbers in the odd positions and multiply by 3:
**0** 4**2**4**0**0 **0**1**5**9**3**
3(0+2+0+0+5+3) = 3(10) = 30
Now we add all the digits in even positions, but not the check digit:
0 **4**2**4**0**0** 0**1**5**9**3
4+4+0+1+9 = 18
Add the two results together:
30 + 18 = 48
Determine what needs to be added in order to get to the next multiple of 10:
48 + 2 = 50
The number we added to our sum (in this case, 2) is the check digit. So when the scanner at the grocery store reads the bar code on my box of oatmeal, it does these exact same calculations and compares the end result with the check digit on the box, and if they match then it knows it scanned the right code. If they don't, then it knows it has an error.
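If it helps to see the arithmetic written out, here's that same check-digit calculation as a short Python sketch I put together, following the steps above:

```python
def upc_check_digit(first_11_digits: str) -> int:
    """Compute the UPC-A check digit from the first 11 digits."""
    digits = [int(d) for d in first_11_digits]
    odd_sum = sum(digits[0::2])    # positions 1, 3, 5, ... counting from the left
    even_sum = sum(digits[1::2])   # positions 2, 4, 6, ...
    total = 3 * odd_sum + even_sum
    return (10 - total % 10) % 10  # whatever gets us to the next multiple of 10

print(upc_check_digit("04240001593"))  # 2, matching the oatmeal box
```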
But that’s just detection, right? So how does it correct the error if it detects one?
The correct answer is that it can only fix an error when it can't read one of the other numbers because the bar code has been damaged in some way, so it isn't a perfect code. Let's say the scanner reads the code and this is what it gets:
0 42400 01*93 2
The scanner knows that “*” isn’t a valid symbol, so the system runs the code through the calculations with a different number in the error spot until it gets one that fits the check digit (and matches a code in the system).
0 42400 01193 gives us a check digit of 4, which doesn’t fit.
0 42400 01293 gives us a check digit of 1, which also doesn’t fit.
0 42400 01393 results in an 8, that doesn’t fit, either.
0 42400 01493 kicks back a 5, nope.
0 42400 01593 gives us a 2. Bingo!
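Here's that trial-and-error recovery written out in Python. This is my own sketch of the idea, not how a real scanner is implemented:

```python
def upc_check_digit(first_11_digits: str) -> int:
    """UPC-A check digit: 3 * (odd-position sum) + (even-position sum), then up to the next multiple of 10."""
    digits = [int(d) for d in first_11_digits]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

def recover_missing_digit(damaged: str, check_digit: int) -> str:
    """Try 0-9 in the unreadable spot ('*') until the check digit works out."""
    for guess in "0123456789":
        candidate = damaged.replace("*", guess)
        if upc_check_digit(candidate) == check_digit:
            return candidate
    raise ValueError("no digit fits - more than one symbol must be damaged")

print(recover_missing_digit("04240001*93", 2))  # 04240001593
```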
Hopefully this helps a little.
The basic idea behind ECC is to add redundancy to a message such that you can detect and correct common types of errors. There are many different versions in use, varying heavily depending on the nature of the communication and the types of errors expected to occur. ECC, and related systems, have been developed for voice communication, writing, and electronic communication.
The absolute simplest is to merely duplicate the data, once or more. If you know I am sending you a 12-digit number, I could send you the number twice and, if you get different values, you know something went wrong and can request it again. If I send it to you three times and one copy gets corrupted, the copy that is different is likely wrong and can be discarded, and you use the matching two; if all three differ, you know *something* went wrong, but don’t know which is correct and need to request it again. Such a system is really good if you expect a *lot* of corruption, or the message is *really* important. However, it is also very inefficient, given that I sent you two or three times the needed data to ensure you got it. Luckily, we can use any number of different tricks, mostly based in mathematics, to devise a smarter system and make sure it can detect, and ideally correct, the most commonly expected errors.
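A quick sketch of that "send it three times" idea in Python (my own illustration, resolving disagreements with a majority vote at each position, which is one way to do it):

```python
from collections import Counter

def majority_vote(copies: list[str]) -> str:
    """Given three received copies of the same message, keep the most common symbol at each position."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*copies))

# Three received copies of a 12-digit number; the second copy got corrupted in two places.
received = ["123456789012",
            "123956789072",
            "123456789012"]
print(majority_vote(received))  # 123456789012
```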
One example of an ECC used for voice is the [NATO phonetic alphabet](https://en.wikipedia.org/wiki/NATO_phonetic_alphabet), parts of which are very well known. In this case, redundancy is added by adding extra sounds to the letter names to help make them more distinct and easier to identify. As such, even if you miss part of what was said, due to noise or a very short signal loss, you can still understand the message.
Credit card numbers use a version called the [Luhn algorithm](https://en.wikipedia.org/wiki/Luhn_algorithm) that was designed to help with transcription errors. In this case, the digits of the card number are added together in a specific pattern and reduced to a single digit using modulus. This specific one was designed to detect most types of errors that a human would likely make when copying the number, such as miswriting a single digit or swapping two numbers around. It, however, does not normally encode enough data to allow for actual correction of errors – merely detection of them. If you know you got all but one digit, you could reverse the math to calculate what the missing digit was, however.
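Here's the Luhn check written out in Python, for the curious (a small sketch following the algorithm described on that Wikipedia page):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check (weighted sum is a multiple of 10)."""
    total = 0
    # Walk the digits from right to left; double every second one.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of the doubled value
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True  - a commonly used Luhn test number
print(luhn_valid("79927398712"))  # False - one digit changed, error detected
```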
[QR codes](https://en.wikipedia.org/wiki/QR_code#Error_correction) have a few different versions depending on how much reliability you want. These use quite a bit of math to store the same data in multiple places around the code. The main intent here was to handle codes getting ripped or otherwise physically damaged after printing. The basic idea is to use a polynomial: if you have any two of a, b, or c from a^(2) + b^(2) = c^(2), you can calculate the missing one (you might remember this from middle or high school math classes), so by storing all three of a, b, and c in the code, you can lose any one of them and calculate it back. In most cases, this is expanded to additional variables, and you can be missing any one. If you make this complex enough, you can have a lot of variables and chain-calculate others so long as you are not missing too many.
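To make that "store redundant values and recompute the missing one" idea concrete, here's a toy erasure-recovery sketch in Python. It's my own simplification; real QR codes use Reed-Solomon codes over finite fields, which are considerably more involved:

```python
# Toy erasure code: alongside two data values we also store their sum.
# If any ONE of the three goes missing, the relationship a + b = c lets us rebuild it.

def encode(a: int, b: int) -> list[int]:
    return [a, b, a + b]          # data, data, redundancy

def recover(received: list) -> list[int]:
    """received has exactly one None (the erased value); fill it back in."""
    a, b, c = received
    if a is None:
        a = c - b
    elif b is None:
        b = c - a
    elif c is None:
        c = a + b
    return [a, b, c]

print(recover([7, None, 12]))   # [7, 5, 12] - the missing data value is rebuilt
print(recover([7, 5, None]))    # [7, 5, 12] - the missing redundancy is rebuilt
```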