Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?


I understand the number would have still overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken it, when binary numbers don’t tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number 99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.
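For what it's worth, here is a minimal C sketch of the convention most of the answers point at (the `yy` variable and the `1900 + yy` reconstruction are illustrative assumptions, not any particular system's code). The binary addition really is fine; the problem was programs that stored only the last two decimal digits of the year and rebuilt the century by assumption:

```c
#include <stdio.h>

int main(void) {
    /* Many old programs stored the year as its last two decimal
       digits and assumed the century. The arithmetic below is
       perfectly valid binary math; the bug is the convention. */
    int yy = 99;                /* "1999" kept as just 99 */

    yy = (yy + 1) % 100;        /* a two-digit field rolls over: 99 -> 0 */

    printf("stored digits: %02d\n", yy);       /* prints 00 */
    printf("assumed year:  %d\n", 1900 + yy);  /* prints 1900, not 2000 */
    return 0;
}
```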

EDIT 2: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌


84 Answers

Anonymous 0 Comments

Besides the explanations other people have already given: the next actual “big deal” for computer dates will come at 03:14:07 UTC on 19 January 2038.

A lot of computers and embedded devices use Unix time, which is stored in a signed 32-bit integer counting the seconds elapsed since 00:00:00 UTC on 1 January 1970. The way signed integers work, if the first bit is a 1 the number is negative, so the moment every lower bit is full (2^31 − 1 seconds), adding one more second overflows the counter and flips that first bit.

And one second later, for a lot of devices, it will suddenly be 20:45:52 UTC on 13 December 1901.

Or, as some people are calling it:

Epochalypse
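
To make that rollover concrete, here is a minimal C sketch (assuming a platform such as Linux/glibc where `time_t` is 64 bits and `gmtime` accepts pre-1970 values) that prints the last representable second of a signed 32-bit counter and the second it wraps to:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    int32_t last_second = INT32_MAX;  /*  2147483647: 0111...1, counter full     */
    int32_t after_wrap  = INT32_MIN;  /* -2147483648: 1000...0, sign bit flipped */

    time_t a = (time_t)last_second;
    time_t b = (time_t)after_wrap;

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&a));
    printf("2^31 - 1 seconds after the epoch: %s\n", buf);  /* 2038-01-19 03:14:07 UTC */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&b));
    printf("one second later, after overflow: %s\n", buf);  /* 1901-12-13 20:45:52 UTC */

    return 0;
}
```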
