Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would have still overflowed *eventually*, but why was it specifically New Year’s 2000 that would have broken it, when binary numbers don’t tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is; I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems, since all the math would be done in binary and decimal would only be used for the display.

EDIT 2: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

Anonymous 0 Comments

The binary value was interpreted as two decimal digits representing the last two digits of the year. That was a problem because the interpretation *could* roll over at midnight on January 1, 2000. Any math based on it would calculate an incorrect result or, worse, produce a negative number and cause more serious problems.
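
To make that concrete, here is a minimal Python sketch (a hypothetical account-age calculation, not any real system’s code) of how two-digit-year math goes wrong at the rollover:

```python
# Hypothetical sketch: a system that stores only the last two
# digits of the year, as many pre-2000 programs did.

def account_age(opened_yy: int, current_yy: int) -> int:
    # Naive subtraction assumes the current year never wraps past 99.
    return current_yy - opened_yy

print(account_age(95, 99))  # 4   -- correct in 1999
print(account_age(95, 0))   # -95 -- nonsense once "99" rolls over to "00"
```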

Anonymous 0 Comments

Binary wasn’t the issue here. The trick was that most computers were only storing the last two digits of years. They kept track of dates as 88 or 96, not 1988 or 1996. This was fine at first, since early computers had very little memory and storage space, so you tried to squeeze out as much efficiency as possible.

The problem is that computer programs built with just two-digit dates in mind started to break down when you hit the year 2000. A program that kept track of electric-bill payments might glitch out because, as far as it could tell, you hadn’t paid your bill in years: it couldn’t handle the math of 00 compared to 99.
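
As a hedged illustration of that billing scenario (the function and numbers are made up): old systems often did this arithmetic in fields that could not represent negative values, so instead of going negative the result wrapped around:

```python
# Toy billing check with two-digit years (hypothetical example).
# Simulate an unsigned 8-bit subtraction with modulo-256 wraparound.

def years_overdue(last_paid_yy: int, today_yy: int) -> int:
    return (today_yy - last_paid_yy) % 256

print(years_overdue(98, 99))  # 1   -- sensible in 1999
print(years_overdue(99, 0))   # 157 -- "157 years overdue" in 2000
```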

There were lots of places where the two-digit date format was going to cause problems when the year 2000 came, because everything from banks to power plants to airports was running old computer programs. Thankfully, a concerted effort by programmers and computer engineers over several years patched and repaired these programs so that there was only minimal disruption to life in 2000.

However, if we hadn’t fixed those, there would have been a lot of problems with computer programs that suddenly had to go from 99 to 00 in ways they hadn’t been prepared for.

Anonymous 0 Comments

Because the applications only had two digit positions set up for the year. Once you hit 2000, it’s an issue because you don’t know whether 00 means 2000 or 1900.

I think you’re confused. The date wouldn’t overflow, it would just become ambiguous. Ambiguity and software don’t mix.
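
One widely used fix for exactly this ambiguity was “windowing”: pick a pivot year and decide which century a two-digit year belongs to. A sketch in Python (the pivot of 50 is an arbitrary choice for illustration):

```python
def expand_year(yy: int, pivot: int = 50) -> int:
    # yy >= pivot -> 19xx, yy < pivot -> 20xx.
    # This postpones the ambiguity rather than removing it.
    return 1900 + yy if yy >= pivot else 2000 + yy

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000
print(expand_year(49))  # 2049 -- a 1949 date would now be misread
```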

Anonymous 0 Comments

Years in older computer systems were stored as just two digits to save memory. Memory was very expensive back then, so the name of the game was finding efficiencies; dropping two digits from every date, along with various other incremental savings, made a big difference.
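
This also speaks to the OP’s binary question: the year usually wasn’t a single binary integer at all. In COBOL-era systems it was typically two decimal characters inside a fixed-width record, roughly like this hypothetical sketch:

```python
# Hypothetical fixed-width record: name, then date as DDMMYY text.
record = "SMITH     311299"

yy = record[-2:]   # "99" -- two bytes of *text*, not a binary integer
next_year = int(yy) + 1
print(next_year)   # 100 -- but only two characters fit back in the field
print(str(next_year % 100).zfill(2))  # "00" -- what actually gets stored
```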

The problem is that this meant the software assumed all years started with 19, so when the year 2000 arrived it would treat the date as 1900.
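
For instance, a lot of display and reporting code simply glued a hard-coded “19” onto the stored two digits, along the lines of this sketch (hypothetical function):

```python
def format_year(yy: int) -> str:
    # The "19" prefix is hard-coded -- correct for 1900-1999 only.
    return "19" + str(yy).zfill(2)

print(format_year(99))  # "1999"
print(format_year(0))   # "1900" -- the year 2000 prints as 1900
```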

This was potentially a very big problem for things like banking software, or insurance because how would the computer behave? If a mortgage payment came up and it was suddenly 1900 how would the system react?

Ultimately the concern was overblown, because computer and software engineers had been fixing the problem for well over a decade at that point, so it mostly just impacted legacy systems.

While it was potentially a really big problem, the media blew it way out of proportion.

Anonymous 0 Comments

Because at some point in the past, years were stored in 8-bit memory locations that could only hold 0–255, so programmers used 00–99 for the year. Calculations that used those year values would give weird results, for instance in duration calculations (the current year minus a year in the past would come out negative after year 99).

Later systems moved to 16/32/64-bit words, but the software was never changed away from the 00–99 convention, and then came 2000…

2038 is the next one!
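
That one really is a binary overflow: many systems count time as seconds since 1970-01-01 UTC in a signed 32-bit integer, which maxes out in January 2038. You can check the exact moment in Python:

```python
from datetime import datetime, timezone

# A signed 32-bit Unix timestamp tops out at 2**31 - 1 seconds
# after 1970-01-01 00:00:00 UTC; one second later it wraps negative.
max_ts = 2**31 - 1
print(datetime.fromtimestamp(max_ts, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```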
