Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would still have overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken it, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number 99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.

EXIT: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

84 Answers

Anonymous 0 Comments

Years in older computer systems were stored as just two decimal digits to save memory, and typically as two text characters or BCD digits rather than as a binary integer. Memory was very expensive back then, so the name of the game was finding efficiencies; dropping two digits from every date, along with various other incremental savings, made a big difference.

The problem is that this meant the software assumed all years start with 19, so when the stored digits rolled over from 99 to 00 at the start of 2000, the computer would read the date as 1900.
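Here's a minimal sketch in C of that pattern. The record layout, field names, and `full_year` helper are hypothetical, but the shape is typical of legacy systems: the year lives in the data as two decimal characters, and the code hard-wires the "19" whenever it needs the real year.

```c
#include <stdio.h>

/* Hypothetical record layout: the year is stored as two decimal
 * characters, not as a binary integer. */
struct record {
    char year[2];   /* e.g. {'9','9'} for 1999 */
};

/* Typical interpretation in old code: assume the century is always 19xx. */
static int full_year(const struct record *r) {
    int yy = (r->year[0] - '0') * 10 + (r->year[1] - '0');
    return 1900 + yy;
}

int main(void) {
    struct record r1999 = { {'9', '9'} };   /* written in 1999 */
    struct record r2000 = { {'0', '0'} };   /* written in 2000 */

    printf("%d\n", full_year(&r1999));      /* 1999 - fine */
    printf("%d\n", full_year(&r2000));      /* 1900 - oops */
    return 0;
}
```

That's also why the binary representation doesn't save you: the stored value isn't a number counting up past 99, it's two digits in a fixed-width field that wrap back to "00".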

This was potentially a very big problem for things like banking or insurance software, because how would the computer behave? If a mortgage payment came due and the system suddenly thought it was 1900, how would it react?
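To make that concrete, here's a hypothetical sketch of the kind of arithmetic that breaks (the `years_elapsed` function and the specific years are made up for illustration): any duration computed from two-digit years suddenly goes negative at the rollover.

```c
#include <stdio.h>

/* Hypothetical: compute how long a loan has been running from
 * two-digit year fields, the way a legacy system might have. */
static int years_elapsed(int start_yy, int current_yy) {
    return current_yy - start_yy;
}

int main(void) {
    /* Mortgage opened in 1985, checked in 1999 and then in 2000. */
    printf("%d\n", years_elapsed(85, 99));  /*  14 - plausible      */
    printf("%d\n", years_elapsed(85,  0));  /* -85 - nonsense value */
    return 0;
}
```

A negative loan age like that could then feed into interest, billing, or scheduling logic with unpredictable results.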

Ultimately the concern turned out to be overblown, because computer and software engineers had been fixing the problem for well over a decade at that point, so it mostly just impacted legacy systems.

While it was potentially a really big problem, the media blew it way out of proportion.
