Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?


I understand the number would have still overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken it, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number 99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.

EXIT: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌


84 Answers

Anonymous

Because at some point in the past, years were stored in 8-bit memory locations that could only hold 0-255, so programmers kept just the last two digits of the year, 00-99.
Calculations that used those two-digit years would give weird results, for instance in duration calculations: once the stored year wrapped back to 00, "year now minus year back then" would come out negative.
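
A minimal C sketch of that failure mode (my own illustration of the kind of logic involved, not code from any real system):

```c
#include <stdio.h>

/* A record that keeps only the last two digits of the year,
 * the way the answer above describes. */
struct record {
    int year;   /* 0-99: 85 means 1985, 99 means 1999 */
};

/* Duration in years, computed the naive way. */
int years_between(struct record then, struct record now) {
    return now.year - then.year;
}

int main(void) {
    struct record opened = { 85 };          /* account opened in 1985 */
    struct record today  = { 99 };          /* it is 1999 */
    printf("%d\n", years_between(opened, today));   /* 14, correct */

    today.year = 0;                         /* 2000 rolls the stored year to 00 */
    printf("%d\n", years_between(opened, today));   /* -85, nonsense */
    return 0;
}
```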

Later systems moved to 16/32/64-bit, but the software that stored years as 00-99 was never updated, and then came 2000…

2038 is the next one!
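
That one comes from Unix timestamps: many systems count seconds since 1 January 1970 in a signed 32-bit integer, which runs out on 19 January 2038. A rough sketch of the rollover, assuming a 32-bit `time_t`:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Many older systems store Unix time (seconds since 1970-01-01 UTC)
     * in a signed 32-bit integer. */
    int32_t last = INT32_MAX;   /* 2147483647 = 2038-01-19 03:14:07 UTC */
    printf("last representable second: %d\n", last);

    /* One second later no longer fits: the counter wraps around to
     * INT32_MIN, which decodes to a date back in December 1901. */
    printf("value after the wrap:      %d\n", INT32_MIN);
    return 0;
}
```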
