The simplest traditional “fast” algorithm (one with quadratic convergence) is Newton’s method, also known as Newton-Raphson. It can be applied to many different calculations, and finding nth roots is an example where it works with hardly any theoretical work.
This is easy to explain graphically. Take square roots as an example: we want to find sqrt(n), so we define a function f(x) = x^2 – n, which by construction has a root at sqrt(n).
Now we imagine graphing this function. We pick an initial guess; a reasonable first guess is a number with half as many bits or digits as n. We find the point on the graph of f at this x value, draw the tangent line at that point, and see where this straight line crosses the x axis. That crossing point is our second guess, and we repeat.
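To make the tangent step concrete: the tangent line at (x, f(x)) has slope f'(x) and crosses the x axis at x – f(x)/f'(x). With f(x) = x^2 – n and f'(x) = 2x, that works out to (x + n/x)/2, the average of x and n/x. Here is a minimal Python sketch of that iteration; the function name and the stopping tolerance are my own illustrative choices:

```python
def newton_sqrt(n, tol=1e-12):
    # Assumes n > 0. Initial guess: a number with roughly
    # half as many digits as n, per the heuristic above.
    x = n if n < 1 else 10 ** (len(str(int(n))) // 2)
    while True:
        # Tangent step: x - f(x)/f'(x) with f(x) = x^2 - n and
        # f'(x) = 2x, which simplifies to averaging x and n/x.
        x_next = (x + n / x) / 2
        if abs(x_next - x) < tol:
            return x_next
        x = x_next

print(newton_sqrt(2))  # ~1.4142135623730951
```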
It’s fairly easy to see qualitatively why this works if our guess is good and the function is smooth; we are basically assuming that the function is roughly a straight line, and seeing where the root would be if that were the case. Then repeat, until you are close enough for satisfactory precision. A quantitative analysis of the error and rate of convergence isn’t really ELI5, but the gist is that if the function is sufficiently smooth around the root and the initial guess, then the error in the next step is on the order of the square of the error in the current step. Note that for small errors, squaring the error means roughly doubling the number of correct digits after the decimal point (or the number of correct bits in a computer).
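To watch the digit doubling happen, here is a small illustrative Python demo (the names and the choice of 60-digit precision are mine) that prints the error at each step while computing sqrt(2) with high-precision decimals:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # work with 60 significant digits

n = Decimal(2)
true_root = n.sqrt()

x = Decimal(1)  # deliberately rough initial guess
for step in range(7):
    x = (x + n / x) / 2
    error = abs(x - true_root)
    # The exponent of the error roughly doubles each step,
    # i.e. the number of correct digits roughly doubles.
    print(f"step {step + 1}: error ~ {error:.3E}")
```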
I should also add that this is practical on a computer because all of the steps are easy to implement. We find the derivative of f beforehand, and in this case both f and its derivative are polynomials, so we only need addition, subtraction, multiplication, and division to combine all of this and reproduce the steps above.
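The same recipe handles general nth roots: with f(x) = x^k – a and f'(x) = k*x^(k-1), the tangent step rearranges to ((k-1)*x + a/x^(k-1))/k, which still uses only the four basic operations. A rough Python sketch under those assumptions, with a deliberately crude initial guess and a fixed iteration count for simplicity:

```python
def newton_nth_root(a, k, iterations=60):
    # Find the k-th root of a > 0 via Newton's method on f(x) = x^k - a.
    x = a if a < 1 else a / k  # crude but workable starting guess
    for _ in range(iterations):
        # Tangent step: x - (x^k - a) / (k * x^(k-1)),
        # rearranged to ((k - 1) * x + a / x^(k-1)) / k.
        x = ((k - 1) * x + a / x ** (k - 1)) / k
    return x

print(newton_nth_root(27.0, 3))  # ~3.0
```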