Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?


I understand the number would have still overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken things, when binary numbers don't tend to line up neatly with round decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number 99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.

EDIT 2: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌


84 Answers

Anonymous 0 Comments

One byte stores -128 to 127 (signed) or 0 to 255 (unsigned).

A single byte can't hold a full four-digit year like 1999, so many programs stored only the last two digits: 1999 would be stored as 99, 2000 would be stored as 00.
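To make that concrete, here's a minimal C sketch of what two-digit storage looks like. This is hypothetical illustration, not real Y2K-era code (which was mostly COBOL and fixed-format records), but the idea is the same:

```c
#include <stdio.h>

int main(void) {
    /* Keep only the last two decimal digits so the year fits in one byte. */
    unsigned char year_a = 1999 % 100; /* stored as 99 */
    unsigned char year_b = 2000 % 100; /* stored as 0  */

    /* Display code that assumes the century is always "19": */
    printf("19%02d\n", year_a); /* prints 1999 -- looks fine */
    printf("19%02d\n", year_b); /* prints 1900 -- the Y2K display bug */
    return 0;
}
```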

The code could work in different ways, and for a while either way gave the same answer. The difference between this year and last year would be 2023 - 2022 = 1, or with two-digit years, 23 - 22 = 1.

But the problem is:

2000 - 1999 = 1, while with two-digit years, 00 - 99 = -99.
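Here's a tiny C sketch of both cases (again hypothetical). Note the CPU does the binary subtraction perfectly correctly in both; it's the truncated two-digit inputs that are wrong:

```c
#include <stdio.h>

int main(void) {
    /* Two-digit years within the same century: fine. */
    printf("23 - 22 = %d\n", 23 - 22);  /* 1, same as 2023 - 2022 */

    /* Across the century boundary, the truncated inputs give a
       wrong answer even though the binary math is correct. */
    printf("00 - 99 = %d\n", 0 - 99);   /* -99, not the 1 you wanted */
    return 0;
}
```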

But that was only one possible failure mode. At my company, out of over a million lines of code, we found no actual problems.

But we still had to check.
