Computers can only represent a limited number of steps when storing something like an image or sound. Even when a computer is working with decimal values, there is always a limit on how many decimal places it can keep track of.
This has the side effect that if there's something like an image with a smooth gradient, it has to represent that gradient from dark to light as a series of discrete values like `[0.0, 0.5, 1.0, 1.5, 2.0, …]`, and depending on the distance between those values, the jump between each "step" can be extremely noticeable. Sound is similar, in that each "sample" has to be stored as a value, so the lower the precision, the larger the steps between samples, and the worse the audio can sound.
Dithering fixes this by jumbling the data a bit, sort of like "stippling" an image, to _fake_ a smoother image/sound without actually having to increase the precision.
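Here's a minimal sketch of the idea in Python (the function names and the 5-level example are just for illustration): quantizing a smooth ramp of values produces visible "bands," while adding a little random noise before rounding (the simplest form of dithering) makes each value land on neighboring steps in proportion to how close it is, so on average the original smooth ramp is preserved.

```python
import random

random.seed(0)  # deterministic output for this demonstration

def quantize(x, levels):
    # Snap a value in [0, 1] to the nearest of `levels` evenly spaced steps.
    step = 1.0 / (levels - 1)
    return round(x / step) * step

def quantize_dithered(x, levels):
    # Add random noise up to half a step before rounding. A value
    # sitting between two steps now rounds up or down randomly, in
    # proportion to how close it is to each step.
    step = 1.0 / (levels - 1)
    noisy = min(max(x + random.uniform(-step / 2, step / 2), 0.0), 1.0)
    return round(noisy / step) * step

# A smooth ramp from 0.0 to 1.0, quantized to only 5 levels.
ramp = [i / 100 for i in range(101)]
banded = [quantize(v, 5) for v in ramp]
dithered = [quantize_dithered(v, 5) for v in ramp]

# `banded` contains long runs of identical values (visible bands);
# `dithered` flickers between neighboring levels, but averages out
# to the original ramp.
```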