Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would have still overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken things, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number 99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.

EDIT: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

84 Answers

Anonymous 0 Comments

Besides the explanations other people have already given, the next actual "big deal" for computer dates will come at 03:14:07 UTC on 19 January 2038.

A lot of computers and embedded devices use Unix time, which is stored in a signed 32-bit integer. This counts the number of seconds since 00:00:00 UTC on 1 January 1970. The way signed integers work, if the first bit is a 1, the number is negative. So as soon as the 31 lower bits are full, the next tick overflows and flips that first bit.

And one second later, for a lot of devices, it will suddenly be 20:45:52 UTC on 13 December 1901.

Or, as some people are calling it:

Epochalypse
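
To make the rollover concrete, here is a minimal sketch (purely illustrative, not any particular device's firmware) that forces a Unix timestamp into a signed 32-bit integer and decodes the result back into a date:

```python
# Illustrative sketch of the 2038 rollover: wrap a Unix timestamp the way a
# signed 32-bit counter does, then decode it back into a date.
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def as_signed_32bit(seconds: int) -> int:
    """Keep only the low 32 bits and interpret the top bit as a sign."""
    seconds &= 0xFFFFFFFF
    return seconds - 2**32 if seconds >= 2**31 else seconds

last_good = 2**31 - 1                     # 03:14:07 UTC, 19 January 2038
for t in (last_good, last_good + 1):      # one second before and after
    print(EPOCH + timedelta(seconds=as_signed_32bit(t)))
# 2038-01-19 03:14:07+00:00
# 1901-12-13 20:45:52+00:00
```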

Anonymous 0 Comments

Since no one has mentioned this, I think you're mixing up the Y2K bug with the year 2038 problem. Computers that use Unix time keep track of it by counting in binary with a 32-bit number, meaning a sequence of 32 1's and 0's used to represent the time.

The count starts at time 0, represented by 32 0's in that 32-bit number. Time 0 is January 1st, 1970, and to get the current date you take the current value and plug it into a formula that converts the seconds elapsed since then into a calendar date.

The problem is that eventually, as you count up, the 32-bit number can't go any higher: once its value bits are all 1's it has reached the largest value it can represent, which corresponds to January 19th, 2038. At that point the counter will either stop or wrap around to a nonsensical date (back to 1970, or even December 1901 on systems that treat the first bit as a sign), messing up the time and potentially many program executions.

While this may be an issue for older systems, many computers now use 64-bit numbers. Counting up to the largest 64-bit value would take about 292 billion years, roughly 21 times the estimated age of the universe.
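
As a quick sanity check on those figures (rough arithmetic only, ignoring leap seconds and taking ~13.8 billion years as the age of the universe):

```python
# Back-of-the-envelope check: how long until a signed 64-bit second counter
# runs out, and how that compares to the age of the universe.
max_seconds = 2**63 - 1                    # largest signed 64-bit value
seconds_per_year = 365.25 * 24 * 3600      # ~31.6 million seconds
years = max_seconds / seconds_per_year
print(f"{years:.2e} years")                # ~2.92e+11, i.e. about 292 billion
print(f"{years / 13.8e9:.1f}x the age of the universe")   # ~21.2x
```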

Anonymous 0 Comments

I wrote software fixes during that time. Timekeeping systems and all manner of things broke; it was common for just about anything with date calculations to break. Often the databases only stored a two-digit year as well. It definitely caused a lot of issues, though mostly inconveniences.
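
That two-digit storage is really the answer to the original question: the problem wasn't 99 overflowing in binary, it was that only the last two decimal digits of the year were kept at all. A minimal sketch (hypothetical code, not anything from a real fix) of the kind of calculation that broke:

```python
# Hypothetical example of two-digit-year arithmetic, the way many old programs
# and database fields stored dates: the value held really is 99, not 1999,
# so the year after 99 comes back as 0.
def age_in_years(birth_yy: int, current_yy: int) -> int:
    """Naive age calculation using only two-digit years."""
    return current_yy - birth_yy

print(age_in_years(65, 99))   # born '65, checked in '99 -> 34, fine
print(age_in_years(65, 0))    # same person checked in '00 -> -65, broken
```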

Anonymous 0 Comments

If your car’s odometer goes from 9500 to 9506, then the difference is 6 (km or miles, depending where you live). However, if your odometer rolled over from 999,999 to 0 during your trip, then trying to calculate the distance travelled on even a short trip is going to give a very confusing result…like -999,994. Same thing with your computer’s clock and date calculations if it “rolls over”.
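
In code, the odometer version of that mistake and its usual fix look roughly like this (illustrative numbers only, assuming the trip is shorter than one full rollover):

```python
# Naive subtraction across an odometer rollover gives a nonsense negative
# distance; modular arithmetic recovers the real one.
ODOMETER_MAX = 1_000_000              # odometer wraps from 999,999 back to 0

start, end = 999_998, 4               # readings before and after a short trip

print(end - start)                    # -999994: the confusing naive answer
print((end - start) % ODOMETER_MAX)   # 6: correct, accounting for the wrap
```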
