It’s all in how computers represent negative numbers. Without getting into the nitty-gritty details (look up “two’s complement” if you’re curious), computers represent positive and negative numbers in such a way that all positive numbers (and 0) have a first bit of 0, and all negative numbers have a first bit of 1.
Another consequence of how computers represent negative numbers is that subtraction can be implemented via addition; you simply convert the second number into its negative. That is, it takes something like 8 – 3 and changes it into 8 + -3.
We can use both of these facts to compare two numbers, A and B.
We subtract one from the other: A – B.
The computer converts this into addition: A + -B.
Then it examines the result, C. It checks the first bit of C to see whether C is negative, or positive (or 0). If the first bit is 0, it then checks whether C is actually 0 (all bits are 0) or genuinely positive.
So, if C is positive, then A was greater than B.
If C is negative, then A was less than B.
If C is 0, then A was equal to B.