Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would still have overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken it, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is, I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems since all the math would be done in binary, and decimal would only be used for the display.

EDIT 2: Thanks for all your replies. I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

The binary number was interpreted as the last two digits of the year. It was a problem because that interpretation *could* roll over at midnight on New Year's 2000. Any math based on that interpretation would calculate an incorrect result or, worse, produce a negative number and cause more serious problems.
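For instance, here's a minimal C sketch (hypothetical code, not from any real system) of how that negative-number math happens with two-digit years:

```c
#include <stdio.h>

int main(void) {
    /* Years stored as two digits, the way many old systems did it. */
    int opened = 95;  /* account opened in 1995 */
    int now    = 99;  /* current year 1999: the math works */
    printf("account age: %d years\n", now - opened);  /* prints 4 */

    now = 0;          /* current year 2000, stored as 00 */
    printf("account age: %d years\n", now - opened);  /* prints -95 */
    return 0;
}
```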

Because the applications only had two character positions set up for the year. Once you hit 2000, it's an issue because you don't know whether 00 means 2000 or 1900.

I think you’re confused. The date wouldn’t overflow, it would just become ambiguous. Ambiguity and software don’t mix.
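For what it's worth, one common real-world fix for exactly this ambiguity was "windowing": pick a pivot year and interpret two-digit years on either side of it. A minimal sketch, with the pivot value of 70 chosen arbitrarily for illustration:

```c
#include <stdio.h>

/* Interpret a two-digit year using a pivot: values at or above
   the pivot are assumed to mean 19xx, values below it 20xx. */
int expand_year(int yy, int pivot) {
    return (yy >= pivot) ? 1900 + yy : 2000 + yy;
}

int main(void) {
    printf("%d\n", expand_year(86, 70));  /* 1986 */
    printf("%d\n", expand_year(3, 70));   /* 2003 */
    return 0;
}
```

Of course, this only postpones the ambiguity: a window spanning 1970-2069 breaks again for dates outside it.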

Binary wasn't the issue here. The trick was that most computers were only storing the last two digits of years. They kept track of dates as 88 or 96, not 1988 or 1996. This was fine at first, since early computers had very little memory and storage space, so you tried to squeeze out as much efficiency as possible.

The problem is that computer programs built with just two-digit dates in mind started to break down when you hit the year 2000. You might see a program that kept track of electric bill payments glitch out because, as far as it could tell, you hadn't paid your bill in years: it couldn't handle the math of 00 compared to 99 (sketched below).
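Assuming the dates were stored as six-character "YYMMDD" text, a very common layout in old record formats, a hypothetical sketch of that billing glitch might look like this:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Dates stored as "YYMMDD" strings and compared lexically,
       which works fine as long as every year begins with 19. */
    const char *last_payment = "991201";  /* Dec 1, 1999 */
    const char *today        = "000115";  /* Jan 15, 2000 */

    if (strcmp(today, last_payment) < 0)
        /* "000115" sorts before "991201", so any overdue-payment
           logic built on date ordering produces nonsense. */
        printf("ERROR: today is before the last payment?\n");
    return 0;
}
```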

There were lots of places where the two-digit date format was going to cause problems when the year 2000 came, because everything from banks to power plants to airports was running old computer programs. Thankfully, a concerted effort by programmers and computer engineers over several years patched and repaired these programs so that there was only minimal disruption to life in 2000.

However, if we hadn’t fixed those, there would have been a lot of problems with computer programs that suddenly had to go from 99 to 00 in ways they hadn’t been prepared for.

A lot of older software was written to store the year in two digits, e.g. 86 for 1986, to save space in memory or on disk back when both were very limited. When we hit the year 2000, the year would be stored as 00, which could not be differentiated from 1900.
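This is also where the OP's 99 → 100 framing genuinely showed up. C's `struct tm` stores the year in binary as years since 1900 (`tm_year`): 99 in 1999, 100 in 2000. The binary value was never the problem; display code that glued a literal "19" in front of it famously printed "19100" on New Year's Day 2000. A sketch:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* tm_year counts years since 1900. Run in the year 2000,
       this line prints "19100". */
    printf("buggy:   19%d\n", t->tm_year);

    /* The arithmetic was fine; the hardcoded "19" was the bug. */
    printf("correct: %d\n", 1900 + t->tm_year);
    return 0;
}
```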

Because at some point in the past, years were stored in 8-bit memory locations giving only 0-255, so they used 00-99 for the year.
Calculations that used those year values would give weird results, for instance in duration calculations. (Year now minus year back then would go negative after year 99.)

Later they used 16/32/64-bit systems, but they never changed the software away from the 00-99 convention, and then came 2000…
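Which gets at the crux of the OP's question: the binary value never overflowed, since even an 8-bit field fits 100 with room to spare. What broke was the two-character convention wrapped around it. A hypothetical sketch:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* An 8-bit field holds 0-255, so going from 99 to 100 is
       no problem at all in binary. */
    uint8_t year = 99;
    year = year + 1;
    printf("binary value: %u\n", (unsigned)year);  /* 100, no overflow */

    /* The surrounding code assumed the year always fits in two
       characters; 100 needs three. */
    char field[3];
    int needed = snprintf(field, sizeof field, "%02u", (unsigned)year);
    printf("stored \"%s\", needed %d chars for a 2-char field\n",
           field, needed);  /* stored "10", needed 3 */
    return 0;
}
```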

2038 is the next one!
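Right: the Year 2038 problem. Classic Unix systems store time as a signed 32-bit count of seconds since January 1, 1970, which maxes out at 03:14:07 UTC on January 19, 2038; one second later a 32-bit counter wraps around to December 1901. A sketch, assuming a platform with 64-bit `time_t` and glibc-style handling of negative timestamps:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* Largest value a signed 32-bit time_t can hold. */
    int64_t max32 = INT32_MAX;               /* 2147483647 */

    time_t last = (time_t)max32;
    printf("last 32-bit second: %s", asctime(gmtime(&last)));
    /* -> Tue Jan 19 03:14:07 2038 */

    /* One second later, a 32-bit counter wraps to INT32_MIN
       (implementation-defined conversion, but typical). */
    time_t after = (time_t)(int32_t)(max32 + 1);
    printf("one second later:   %s", asctime(gmtime(&after)));
    /* -> Fri Dec 13 20:45:52 1901 */
    return 0;
}
```

Most 64-bit operating systems have already widened `time_t` to 64 bits, but the same two-digit-year lesson applies: old data formats and embedded systems keep the narrow representation alive long after the hardware moved on.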