Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would have still overflowed *eventually*, but why was it specifically New Year’s 2000 that would have broken it, when binary numbers don’t tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is, I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems since all the math would be done in binary, and decimal would only be used for the display.

EXIT: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

84 Answers

Anonymous

“… actually stored their numbers in binary” doesn’t give you enough information about how the numbers were stored. In binary, sure, but there are still several ways to do that.

One way to do that is called Binary-Coded Decimal (BCD). If we’re gonna party like it’s 1999, some systems would encode that ’99 as: `1001 1001`. That’s it. That’s two nibbles representing two decimal digits, packed into a single byte. It’s binary, but it aligns perfectly well with decimal numbers.

A different encoding system would interpret that same bit pattern to mean hex 99, or dec 153. There would be room to store hex 9A, or dec 154. Or, more to the point, the ’99 could be stored as hex 63, `0110 0011`. That’s naturally followed by hex 64, dec 100, `0110 0100`.
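
To make that concrete, here’s a toy C sketch (my own invention, not code from any real system) that reads the exact same byte both ways, then shows decimal 99 stored as a plain binary byte:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t raw = 0x99;  /* the bit pattern 1001 1001 */

    /* Read as packed BCD: high nibble = tens digit, low nibble = ones digit. */
    int as_bcd = (raw >> 4) * 10 + (raw & 0x0F);
    printf("as BCD:    %d\n", as_bcd);           /* 99 */

    /* Read as a plain binary integer. */
    printf("as binary: %d\n", raw);              /* 153 */

    /* The other direction: decimal 99 as a plain binary byte. */
    uint8_t year = 99;
    printf("99 as hex: %02X\n", (unsigned)year); /* 63 */
    return 0;
}
```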

Either way, you could have a problem. In two-nibble binary-coded decimal, there is no larger number than `1001 1001`. Adding one to that would result in an overflow error; a theoretical `1001 1010` in such a system *is no number at all*.
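
Here’s a hypothetical sketch of that failure, assuming a system that does a plain binary add on a packed-BCD byte with no decimal-adjust step afterward:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t bcd_year = 0x99;    /* packed BCD for 99, the largest two-digit value */
    bcd_year += 1;              /* naive binary add: now 0x9A, i.e. 1001 1010 */

    int tens = bcd_year >> 4;   /* 9 */
    int ones = bcd_year & 0x0F; /* 10, which is not a decimal digit */
    printf("tens=%d ones=%d%s\n", tens, ones,
           ones > 9 ? " (invalid BCD!)" : "");
    return 0;
}
```

A decimal-adjust step (x86 even had a dedicated `DAA` instruction for this) would instead wrap the byte to `0000 0000` with a carry out, which is how the year after 1999 reads as 1900.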

In the other encoding system I mentioned, adding one to 99 gives you 100 (in decimal values). Oh, lovely. So the year after 1999 is 2000, maybe. Or it’s 19100, if the display code just glues a “19” in front of the number. Or it’s 1900, if the code keeps only the last two digits. We’d still need to know more about that particular implementation, about how the bit pattern will be used and interpreted, before we know the kinds of errors it will produce.
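
And here’s a contrived sketch of those two “maybes”. The formatting code is invented for illustration, but it’s the shape of bug that actually bit people:

```c
#include <stdio.h>

int main(void) {
    int year = 99 + 1;   /* plain binary arithmetic rolls over to 100 just fine */
    char buf[16];

    /* Bug 1: display code that blindly glues "19" in front of the number. */
    snprintf(buf, sizeof buf, "19%d", year);
    printf("%s\n", buf);   /* 19100 */

    /* Bug 2: display code that keeps only the last two digits. */
    snprintf(buf, sizeof buf, "19%02d", year % 100);
    printf("%s\n", buf);   /* 1900 */
    return 0;
}
```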

And, we haven’t covered every encoding scheme that’s ever been used to handle two-digit dates internally. This was just a brief glimpse at some of the bad outcomes of two possibilities. Let’s not even think about all the systems that *stored* dates *as text* rather than as numbers. It’s enough to know that both text and numbers are binary, right?
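
For completeness, a contrived sketch of that last case. The record layout here is made up, but fixed-width text records like this were everywhere in old data files:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* A hypothetical fixed-width record whose last two characters are the year. */
    char record[] = "DOE, JOHN 99";
    char *year_field = record + strlen(record) - 2;

    int year = atoi(year_field) + 1;              /* 99 + 1 = 100 */
    snprintf(year_field, 3, "%02d", year % 100);  /* writes "00" back into the record */
    printf("%s\n", record);                       /* DOE, JOHN 00: 1900 or 2000? */
    return 0;
}
```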
