How does the Cordic algorithm work to calculate Sine and Cosine?


Anonymous 0 Comments

It essentially uses the angle sum/difference formulas, with precomputed values for certain angles:

* sin(a±b) = sin(a)*cos(b) ± cos(a)*sin(b)
* cos(a±b) = cos(a)*cos(b) ∓ sin(a)*sin(b)

The algorithm uses a table of “b” angles, and adds or subtracts them until the running total hits the target angle X.

CORDIC uses several tricks to simplify the computation of these formulas:

1. Sine and cosine of “b” are rewritten in terms of tan: sin(b) = tan(b)/√(1+tan²(b)), cos(b) = 1/√(1+tan²(b)). So there is only one table of tangents, instead of two tables for sines and cosines.
2. The common factor 1/√(1+tan²(b)) is pulled out.
3. The precomputed angles are picked so that tan(b) is a power of 2 (specifically 1, 1/2, 1/4, 1/8, …). That means multiplying by tan(b) reduces to a binary shift.
4. The algorithm always either adds angle “b” or subtracts it, never skips it. That means the factors 1/√(1+tan²(b)) appear on every step no matter which decisions are made, so you can precompute their product once instead of multiplying them at runtime.
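The tricks above can be sketched in a few lines. This is a minimal illustration, not production code; the iteration count of 32 and the function name `cordic_sin_cos` are my own choices, and real hardware would replace the `2.0 ** -i` multiplications with bit shifts on fixed-point values:

```python
import math

# Precomputed table of angles b with tan(b) = 2**-i (trick 3),
# and the combined scale factor from trick 4.
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # product of 1/sqrt(1+tan^2(b))

def cordic_sin_cos(theta):
    """Approximate (sin(theta), cos(theta)) for |theta| <= ~1.74 rad."""
    x, y = K, 0.0   # start from (1, 0), pre-scaled by the precomputed product K
    z = theta       # remaining angle still to rotate through
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0   # add or subtract this table angle
        # In hardware, multiplying by 2**-i is just a right shift.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x     # (sin, cos)
```

Note that the loop itself contains no multiplications by table values other than powers of two, which is the whole point of the construction.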

Anonymous 0 Comments

I’m going to start by explaining a very similar algorithm called Fast Exponentiation (FE). CORDIC and FE are very similar; in fact they both fall under the same category of algorithms we call “shift and add” algorithms, but CORDIC has a lot of distracting details compared to FE, so we’re gonna go over FE first to build intuition. It’s still gonna be slightly complicated, but hopefully at a level high-school me could understand.

Suppose you had to figure out what 7^(43) was. How would you go about doing it, without a calculator at least?

I mean, you could multiply 7s together 43 times, but that would be a waste, and it gets unreasonable pretty quickly for even bigger powers like 7^746373.

Well, if you stared at the problem for long enough, you would probably be able to mess with the rules of algebra to figure out a shortcut. That shortcut is the exponent addition rule: a^x * a^y = a^(x + y).

Instead of multiplying single 7s together, we can multiply together higher powers of 7 that we already computed along the way, skipping a lot of steps. To get 7^4 we could multiply 7 * 7 * 7 * 7, or we could do it in two steps: multiply 7 * 7 to get 49, then multiply 49 * 49 to get 7^4 = 2401.

And now it takes us just one more step to get 7^8 = 7^4 * 7^4 = 2401 * 2401 = 5764801. We can similarly find 7^16, 7^32, and beyond.

To find 7^43, all we have to do is break it down into a product of powers of 7 whose exponents are powers of two (in other words, we have to find the binary decomposition of 43; not gonna get too deep into that, you can search this sub for the topic), and that is 7^43 = 7^32 * 7^8 * 7^2 * 7^1.

It takes us 5 multiplications to get all the powers of 7 we need, then 3 more multiplications to combine those four factors into 7^43. Compared to multiplying 7 by itself 43 times this is really good, and it gets even better with bigger exponents. Computers have the added benefit of already storing all their numbers as binary decompositions (we store numbers as, say, “4 tens and 3 ones”; computers store this as “one 32, one 8, one 2, one 1”), so we can just shift through the bits of the exponent’s binary decomposition and handle them one by one. Hence why we call these shift-and-add algorithms.
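The shift-and-add idea above fits in a few lines. This is a sketch (the name `fast_pow` is mine); each loop iteration shifts to the next bit of the exponent and squares the running power:

```python
def fast_pow(base, exp):
    """Compute base**exp for integer exp >= 0 by repeated squaring."""
    result = 1
    power = base          # base**1, then base**2, base**4, ... by squaring
    while exp > 0:
        if exp & 1:       # this bit is set in the binary decomposition
            result *= power
        power *= power    # square: turns base**k into base**(2k)
        exp >>= 1         # shift to the next bit
    return result

print(fast_pow(7, 43) == 7 ** 43)  # → True
```

For exponent 43 (binary 101011) this does about 6 squarings plus 3 extra multiplications, instead of 42 multiplications the naive way.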

CORDIC works almost the same! In linear algebra we have a thing called a rotation matrix: each rotation matrix “rotates” vectors by whatever angle it was designed for, and the actual values in the matrix are based on the sine and cosine of that angle. Linear algebra lets us multiply two matrices together, which combines their effects; if we multiply two rotation matrices, we get a third rotation matrix whose rotation angle is the sum of the other two. By multiplying a rotation matrix by itself, we get the matrix for double that rotation!

If you want to avoid linear algebra, you can instead use the trig double-angle and angle-addition formulas for the same result (those formulas can actually be derived from the rotation matrix’s properties): if you know sine and cosine for some angle, you can calculate the sine and cosine for double that angle.

This gives us the “add” in shift and add, except we have one big question: where do we start?

CORDIC computer programs are usually “bootstrapped” with the rotation matrix of a very small angle, usually one tied to a small power of two, something like 1/256 of a degree; this is our minimum resolution for the angle. We take the binary decomposition of the angle we want to find, pick out all the powers of two it contains, then keep doubling and “summing” up to the biggest angle in the decomposition. And at the end we get the rotation matrix, from which we just extract sine and cosine.
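The doubling-and-summing idea above can be sketched as follows. This is an illustration under the answer's stated assumptions (a 1/256-degree base step, nonnegative angles); the names `sin_cos`, `rot`, and `matmul` are mine, and I build the base matrix with `math.cos`/`math.sin` for brevity, whereas a real implementation would hard-code those two constants:

```python
import math

BASE = math.radians(1.0 / 256.0)   # minimum angular resolution (assumed)

def rot(angle):
    """2x2 rotation matrix for the given angle in radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sin_cos(theta):
    """Approximate sin/cos of theta >= 0 by repeated squaring of a base rotation."""
    k = round(theta / BASE)              # how many base steps make up theta
    result = [[1.0, 0.0], [0.0, 1.0]]    # identity = rotation by 0
    doubler = rot(BASE)                  # entries would be precomputed constants
    while k > 0:
        if k & 1:                        # this bit of the decomposition is set
            result = matmul(result, doubler)
        doubler = matmul(doubler, doubler)  # doubling the rotation angle
        k >>= 1
    return result[1][0], result[0][0]    # (sin, cos) read off the matrix
```

The loop is exactly the fast-exponentiation loop from before, with matrix multiplication standing in for number multiplication.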

And there you go, you have your trig functions.

Anonymous 0 Comments

First, let’s recall the dichotomic method (also known as binary search). It seems unrelated, but this algorithm starts with a variant of it.

Let’s take a simple example for the dichotomic method: searching a word in a dictionary.

* The method says you should first open the dictionary at the 1/2 mark, compare your word to the current page, and then make a jump of 1/4 “in the right direction”. In other words, if your word comes before the page alphabetically, you now open at 1/4; otherwise at 3/4.

* You compare again and now make a jump of 1/8 in the right direction, so depending on the result you will open at 1/8 or 3/8 in the former case, and 5/8 or 7/8 in the latter.

* And you continue making smaller and smaller jumps: 1/16, 1/32, etc.

* Eventually, you’ll end up at the right page. And while this method is not always practical for a human, it’s extremely efficient for a computer.

So what’s the link between this method and computing sines and cosines? Well, CORDIC approximates the angle in a similar way. It makes jumps of size arctan(1), then arctan(1/2), then arctan(1/4), then arctan(1/8), … until it’s close enough to the intended angle.

(Those arctan values are pre-computed and built into the algorithm. Usually, you only need to pre-compute around 20 to 30 of them to get a good enough algorithm.)

At each step, it updates the values of the sine and cosine of the approximated angle so far. This is done through mathematical formulas that link “making a rotation of angle arctan(theta)” to “modifying the cosine and sine in a certain way”.

(Those formulas are the reason the jumps are things like arctan(1/8) and not just 1/8. Fortunately, this small change does not break the dichotomic method, thanks to some nice properties of the arctan function.)

And since sine and cosine are continuous functions with small (bounded) derivatives, approximating the angle and returning the sine and cosine of that approximation actually gives a good approximation of the sine and cosine of the intended angle.
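You can check the jump scheme converges without computing any sines or cosines at all, just by tracking the angle. This is a small demonstration with 20 pre-computed arctan values; the target angle 0.7 rad is an arbitrary example:

```python
import math

# Pre-computed jump sizes: arctan(1), arctan(1/2), arctan(1/4), ...
STEPS = [math.atan(2.0 ** -i) for i in range(20)]

target = 0.7   # radians, an arbitrary example
angle = 0.0
for step in STEPS:
    if angle < target:
        angle += step   # jump in the "right direction"
    else:
        angle -= step   # overshot, so jump back

# The final error is below the size of the last jump (~2e-6 rad here).
print(abs(angle - target) < 1e-5)  # → True
```

Note the jumps never shrink by more than half, which is the property that lets the method recover from any overshoot, just like in the dictionary search.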
