Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?


I understand the number would have still overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken it, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.
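
For anyone puzzling over the same thing, here is the crux in code form. This is a hypothetical C sketch (the struct, field names, and age calculation are illustrative, not taken from any real system): the year is stored as two *decimal* digits, so the wraparound boundary is decimal even though every operation the CPU performs is binary.

```c
/* Hypothetical sketch (not from any real system): a record that stores
 * the year as its last two decimal digits, as countless billing and
 * records systems did to save storage. All arithmetic below is ordinary
 * binary integer math -- the problem is the two-digit storage format. */
#include <stdio.h>

struct account {
    int birth_yy;   /* last two digits of birth year, e.g. 60 for 1960 */
};

/* Age the way many old systems computed it: two-digit year minus
 * two-digit year. */
static int age(int current_yy, int birth_yy) {
    return current_yy - birth_yy;
}

int main(void) {
    struct account a = { .birth_yy = 60 };   /* born 1960 */

    /* 1999: the clock reports yy = 99 and the math works out. */
    printf("age in 1999: %d\n", age(99, a.birth_yy));   /* 39 -- correct */

    /* 2000: the field can only represent 00..99, so 99 + 1 wraps to 00.
     * The binary addition 99 + 1 = 100 is flawless; the field simply
     * has no room for a third decimal digit. */
    int yy = (99 + 1) % 100;
    printf("age in 2000: %d\n", age(yy, a.birth_yy));   /* -60 -- broken */

    return 0;
}
```

So the decimal boundary was never in the binary arithmetic; it was in a storage format (two decimal digits, often held as two ASCII or BCD characters) that could not represent the year 2000.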

EXIT: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌


84 Answers

Anonymous 0 Comments

To add to all of the excellent answers here: the Y2K thing was mostly relevant to things like billing systems, infrastructure control, and other highly integrated systems. Those systems were taken care of without too much issue, and as we saw, Jan 1st, 2000 came and went without a hitch.

Most of the hype became a marketing gimmick to get people to buy new electronics, computers, and software, even though the stuff they already had was 99.99% Y2K compliant.

Many consumer electronics that used only two-digit years were either patched years ahead of time or were already long obsolete and irrelevant to the problem.
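
For the curious, one common style of patch was "windowing": keep the two-digit field but pick a pivot year so that low values are read as 20xx instead of 19xx. A minimal C sketch of the idea (the pivot of 30 is an arbitrary example value; real systems chose their own):

```c
/* Sketch of "windowing", a common Y2K remediation: the two-digit field
 * is kept, but values below a chosen pivot are interpreted as 20xx
 * rather than 19xx. The pivot (30 here) is an arbitrary example. */
#include <stdio.h>

#define PIVOT 30

static int full_year(int yy) {
    return (yy < PIVOT) ? 2000 + yy : 1900 + yy;
}

int main(void) {
    printf("%d %d %d\n", full_year(99), full_year(0), full_year(29));
    /* prints: 1999 2000 2029 */
    return 0;
}
```

Windowing buys decades rather than being a permanent fix: once real years approach the far edge of the pivot window, the same ambiguity returns.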
