Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would have still overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken it, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is, I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems since all the math would be done in binary, and decimal would only be used for the display.

EDIT 2: Thanks for all your replies, I got some good answers and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

84 Answers

Anonymous 0 Comments

Dates were represented with a two-digit year to save memory and disk space in the early days, when there was only so much of either to work with. Many systems were also developed without a thought that they would still be in use in the year 2000. So after 1/1/2000, a date calculation such as 1/1/00 - 1/1/80, which should come out to 20 years, would instead return a negative number, which caused issues.
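
A minimal sketch of that failure mode in C, assuming a record that keeps only the last two digits of the year (the field names are made up for illustration):

```c
#include <stdio.h>

/* Hypothetical record that stores only the last two digits of the year,
   the way many old systems did to save space. */
struct account {
    int open_year;  /* two-digit year: 80 means 1980 */
};

int main(void) {
    struct account acct = { 80 };   /* opened in 1980 */

    int current_year = 0;           /* the year 2000, stored as 00 */
    int years_open = current_year - acct.open_year;

    /* Intended answer: 20. Actual answer: 0 - 80 = -80. */
    printf("Account has been open for %d years\n", years_open);
    return 0;
}
```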

Anonymous 0 Comments

One byte stores -128 to 127 (or 0 to 255).

That only allows you to store the last two digits of the year, e.g., 1999 would be stored as 99 and 2000 as 00.

The code could work either way, and most of the time it wouldn't matter: the difference between this year and last year is 2023 - 2022 = 1 with four-digit years, or 23 - 22 = 1 with two-digit years.

But at the century rollover:

2000 - 1999 = 1, while 00 - 99 = -99
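
A rough sketch of that arithmetic in C, assuming the year really is kept in a single signed byte as described above:

```c
#include <stdio.h>

int main(void) {
    /* Two-digit years squeezed into one byte each: both values fit,
       but the century is simply not stored anywhere. */
    signed char last_year = 99;  /* 1999 */
    signed char this_year = 0;   /* 2000 */

    int full_diff      = 2000 - 1999;            /* 1, as expected */
    int two_digit_diff = this_year - last_year;  /* 0 - 99 = -99   */

    printf("four-digit years: %d\n", full_diff);
    printf("two-digit years:  %d\n", two_digit_diff);
    return 0;
}
```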

But this is just a possibility. In my company, out of over a million lines of code, there were no problems.

But we still had to check.

Anonymous 0 Comments

Dates in older computer systems were stored with two-digit years to save memory. Memory was very expensive back then, so the name of the game was finding efficiencies: dropping two digits from every date, along with various other incremental savings, made a big difference.

The problem is that this meant computers assumed all years started with 19, so when the year 2000 came around, such a computer would treat the date as 1900.
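
A minimal C sketch of that hard-wired assumption; `expand_year` is a hypothetical helper, not code from any particular system:

```c
#include <stdio.h>

/* The century is never stored, so "19" is simply hard-coded
   whenever the full year is needed. */
static int expand_year(int two_digit_year) {
    return 1900 + two_digit_year;
}

int main(void) {
    printf("%d\n", expand_year(99)); /* 1999, as intended        */
    printf("%d\n", expand_year(0));  /* 1900, not 2000 (the bug) */
    return 0;
}
```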

This was potentially a very big problem for things like banking or insurance software, because how would the computer behave? If a mortgage payment came due and the system suddenly thought it was 1900, how would it react?

Ultimately the concern was overblown because computer and software engineers had been fixing the problem for well over a decade at that point, so it mostly just impacted legacy systems.

While it was potentially a really big problem, the media blew it way out of proportion.

Anonymous 0 Comments

To add to all of the excellent answers here: the Y2K thing was mostly relevant to things like billing systems, infrastructure control, and other highly integrated systems. Those systems were taken care of without too much issue, and as we saw, January 1st, 2000 came and went without a hitch.

Most of the hype became a marketing gimmick to get people to buy new electronics, computers, and software, even though the stuff they already had was 99.99% Y2K compliant.

Many consumer electronics that used only 2 digit years were either patched years ahead of time or were already long obsolete and irrelevant to the problem.

Anonymous 0 Comments

The problem was not the modern binary representation, or the technology of the 1990s in general. When computers first became usable for real-life applications, every byte of memory was costly.

Software engineers of the 1970s saved as many resources as possible, and that extended to the dates printed on paper for humans to read. One obvious way to save memory was not to keep a second copy of every date (one human-readable, one binary), but to do the number (and date) arithmetic directly on the human-readable, two-digit decimal representation. It was a shortcut, but it worked.
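
A toy C example of that pattern; the fixed-width record layout is invented for illustration, but it mirrors how dates often lived as decimal characters rather than binary integers:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* The date is not a binary integer at all: it is stored as decimal
       characters in a fixed-width text record, e.g. "991231" for
       31 Dec 1999. There is simply no room for the century. */
    char record[] = "991231";

    /* Arithmetic starts from those decimal characters, not from some
       wider binary field that never existed. */
    int year  = (record[0] - '0') * 10 + (record[1] - '0');
    int month = (record[2] - '0') * 10 + (record[3] - '0');
    int day   = (record[4] - '0') * 10 + (record[5] - '0');
    printf("year=%d month=%d day=%d\n", year, month, day);

    /* Roll over to New Year's Day 2000: the record can only say "00". */
    memcpy(record, "000101", 6);
    year = (record[0] - '0') * 10 + (record[1] - '0');
    printf("after rollover, year=%d\n", year);  /* 0: is that 1900 or 2000? */
    return 0;
}
```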

They were fully aware this solution would not work beyond the year 2000, but in the 70s no one expected their technology to still be around 30 years later.

But of course, working code rarely gets touched; on the contrary, working code gets copied a lot. That is how old code ends up in banking backends, elevators, and all sorts of microprocessors.
