https://attackofthefanboy.com/articles/the-quake-iii-algorithm-that-defies-math-explained/
I know next to nothing about programming and physics, but I do know this article intrigues me. I’m mostly interested in how the algorithm “allowed the game to function faster and more smoothly,” and why it’s “seemingly mathematically impossible.”
My attempt at a no-math ELI6 version. It’s long, but most of it is setting up exactly what the problem *is*, so we can get to what the algorithm is actually doing to avoid that problem.
### What we’re trying to do
Quake III is a 3D game. In 3D games, worlds tend to be made up of a patchwork of triangle-shaped flat spots. To calculate convincing lighting when drawing the game world to the screen, the computer has to look at all the triangles currently visible on the screen and run some light-bouncing calculations on each one.
### How computers do math
Computer math is a little different from “normal” math. Computers can’t really solve algebraic equations in quite the same way a human mind does with pencil and paper. (You can program a computer to mimic it, but that’s Very Hard^(tm) to do, so most computers don’t bother.) Instead, a computer has to rely on some ways of going about it that look very roundabout to a human.
Sometimes, it has to go through lots of steps that slowly meander toward the answer instead of solving it directly. Remember long division from grade school, and all the steps that took? Something closer to that. Two operations that work like this are calculating square roots, and division (unless it’s division by 2, which is a special case and super easy).
Now, computers have the benefit of being Very Fast^(tm). Even if you give them curveballs that need the roundabout algorithms, they’re still going to solve them a lot faster than you can. Even the weakest pocket calculator is using these strategies, and would you want to race one of those in a math exam?
### The problem
Even so, Very Fast^(tm) can still be slow if you bog it down with enough busywork to do. Remember that task of running lighting equations on triangles from earlier? One of those equations happens to involve both division AND a square root. Both of the slow ones at the same time. And the computer has to do it for *every* triangle on the screen. That’s going to chew up a tremendous amount of time. Almost certainly more time than the game has to run smoothly on the hardware it was designed for.
### The solution (the “algorithm that defies math”)
The “algorithm that defies math” (known as the [fast inverse square root](https://en.wikipedia.org/wiki/Fast_inverse_square_root)) is a clever workaround that takes advantage of some really weird and specific quirks in both mathematics itself and the way computers happen to be built. With it, we can effectively convert our algorithm involving square roots and division into an alternative algorithm that only uses multiplication (which is fast), division by 2 (which, as mentioned before, is a special case and is even faster), and logarithms (which should be *horrendously* slow, but we have a clever trick for that).
One caveat of this new alternative algorithm is that it doesn’t give you a *precisely correct* answer. But after a quick correction step or two, it gives you a *pretty close* answer, close enough for the lighting engine’s purposes, and it’s still overall faster than doing the division and square rooting the normal way. That saved computation time means the game can run at a faster speed.
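For the curious, here is the famous routine itself, rendered as a modern, portable C sketch. The original Quake III code did its bit-reading with pointer casts and a `long`; `memcpy` with `int32_t` is the equivalent safe spelling on today’s compilers. The comments map each line onto the tricks this answer describes.

```c
#include <stdint.h>
#include <string.h>

/* A modern, portable sketch of Quake III's famous Q_rsqrt routine.
   Returns an approximation of 1 / sqrt(x) for positive x. */
float fast_inv_sqrt(float x) {
    float half = 0.5f * x;
    int32_t i;
    memcpy(&i, &x, sizeof i);        /* read the float's raw bits as an integer */
    i = 0x5f3759df - (i >> 1);       /* halve and negate the (approximate) logarithm,
                                        using only a shift and a subtraction */
    float y;
    memcpy(&y, &i, sizeof y);        /* read the bits back as a float */
    y = y * (1.5f - half * y * y);   /* one quick correction step (Newton's method) */
    return y;
}
```

One pass of the correction line brings the error under roughly 0.2%; running it a second time would tighten it further, but the game didn’t bother.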
### A side tangent about floats
That “clever trick” for computing logarithms comes from the fact that a computer has two major ways of thinking about numbers. One way is only for numbers that have no decimal part (integers), and the other way is for numbers that do have a decimal part (so-called “floats”).
Computers have to handle floats differently, because keeping track of that pesky little decimal point and what is on the left and right of it is not as straightforward as you’d think.
A naive way to go about it would be to store the left and right chunks separately. This means you’d need to reserve space in memory to hold two potentially very big numbers. We would call that “fixed point” notation, because you always know where the decimal point is: in between the two chunks. But this method tends to be wasteful in practice, since in most real situations, when one side has a lot of digits, the other side doesn’t (or its digits stop mattering anyway). It also means that when you do math with numbers stored this way, you have to keep track of two pieces at the same time, which can be cumbersome.
An alternative is to instead set it up in such a way where you cram the entire number into a single chunk, and then keep track of where the decimal point inside the chunk is supposed to go, kind of like [scientific notation](https://en.wikipedia.org/wiki/Scientific_notation). This is called “floating point”, as the decimal point now gets to “float around”, which is where the name “float” comes from.
Computers have special dedicated hardware to transform one kind of number into the other (either integers into floats, or floats into integers). This special hardware is Very Very Fast^(tm) because it is baked directly into the physical circuitry, which does this and *only* this job.
The magic here is all in the fact that, by a really weird coincidence, if you convert an integer into a float using the special hardware, then try to read that converted data as integer data *without* converting it back, what you essentially get is the logarithm of the number you started with (scaled and shifted by fixed amounts, which are easy to correct for). Which is kind of crazy.
So, whenever you need a logarithm calculated, instead of going about it the super long way, you can just abuse this integer-to-float conversion hardware (which is Very Very Fast^(tm)) as a makeshift “dedicated logarithm calculator”.