Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would still have overflowed *eventually*, but why was it specifically New Year's 2000 that would have broken things, when binary numbers don't tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is, I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems since all the math would be done in binary, and decimal would only be used for the display.

EDIT 2: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

84 Answers

Anonymous 0 Comments

If your car's odometer goes from 9500 to 9506, then the difference is 6 (km or miles, depending on where you live). However, if your odometer rolled over from 999,999 to 0 during your trip, then trying to calculate the distance travelled on even a short trip is going to give a very confusing result… like -999,994. Same thing with your computer's clock and date calculations if it "rolls over".
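
To make the analogy concrete, here's a minimal sketch (in C, with made-up values) of what that rollover does to a simple age calculation when only the last two digits of the year are kept:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical record that keeps only the last two digits of the year */
    int birth_year   = 75;  /* meant to be 1975 */
    int current_year = 0;   /* the "00" of the year 2000 */

    /* The subtraction itself is perfectly ordinary binary arithmetic... */
    int age = current_year - birth_year;

    /* ...but the rolled-over "odometer" gives a nonsense answer: -75 */
    printf("Computed age: %d\n", age);
    return 0;
}
```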

Anonymous 0 Comments

I wrote software fixes during that time. Timekeeping systems and all manner of other things broke; it was common for just about anything with date calculations in it to misbehave. Often the databases were only set up for a 2-digit year as well. It definitely caused a lot of issues, though mostly inconveniences.

Anonymous 0 Comments

The biggest assumption a developer makes is that everything their software relies on works as expected.

Usually this is fine, because at the time the software is written, everything DOES work as expected. It's tested.

But because everything works, developers go with the easiest solution.

Need to compare the current date to one that was input by the user? Well, here's a little utility that outputs the current date in an easy-to-parse format! A little string parsing, and you're good to go!

Sounds lovely, right?

Well…

Sometimes one of the lower components doesn’t work right. Sometimes that’s caused by an update, and sometimes that’s caused by reality slipping out of supported bounds.

The broken component in this case is that date utility. It thinks the year is 99… and then the year ticks over, and it has a choice to make. Does it report 00? 100? 100, with the leading 1 spilling beyond the memory reserved for it? Depends on how it was written.

Let's say it reports 100, because that's the simplest thing to do: keep counting with an int, then convert it to a string.

The program above it gets 1/1/100 as the date. The parser glues its hard-coded "19" onto the year field and goes "OK, it's January 1st, 19100. So January 1st, 1980 was 17,120 years ago." Computers are not exactly known for sanity-checking themselves, so a date 20 years in the past really is treated as if it were more than seventeen thousand years in the past by every other utility.
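
That "19100" isn't an exaggeration, by the way: C's `struct tm` really does report the year as years-since-1900 in its `tm_year` field, so code that glued a literal "19" onto it is one real-world example of the kind of utility described above. A rough sketch of the pattern (not anyone's actual code):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    char buf[32];

    /* tm_year counts years since 1900: 99 in 1999, 100 in 2000.
       Gluing a literal "19" onto it prints 1999 correctly...
       and 19100 once the century turns. */
    sprintf(buf, "%d/%d/19%d", t->tm_mon + 1, t->tm_mday, t->tm_year);
    printf("Broken: %s\n", buf);

    /* The fix: treat tm_year as an offset, not as two digits. */
    sprintf(buf, "%d/%d/%d", t->tm_mon + 1, t->tm_mday, t->tm_year + 1900);
    printf("Fixed:  %s\n", buf);
    return 0;
}
```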

And I do mean every other utility. If that date gets converted to a binary timestamp somewhere down the line, the code is going to try to store the number regardless of whether enough space was allocated for it (a 32-bit count of seconds comes nowhere near holding a date in the year 19100), and unless protections were added (and why would they have been?), you're going to corrupt whatever happens to sit next to it in memory by overwriting it with part of this massive date.
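
For a sense of scale, and assuming the binary form further down the line is a 32-bit count of seconds since 1970 (a very common representation), a quick back-of-the-envelope check:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Roughly how many seconds lie between 1970 and the "year 19100"?
       (365-day years are close enough for an order-of-magnitude check.) */
    long long seconds = (19100LL - 1970) * 365 * 24 * 60 * 60;

    printf("Seconds needed : %lld\n", seconds);              /* ~5.4e11 */
    printf("32-bit maximum : %lld\n", (long long)INT32_MAX); /* ~2.1e9  */
    return 0;
}
```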

Y2K just happened to be a very predictable form of this issue, and plenty of developers had prepared defences to ensure it didn’t cause actual disaster.

Anonymous 0 Comments

They didn't store the year as a plain binary number. Y2K problems typically involved a technique called [binary-coded decimal](https://en.wikipedia.org/wiki/Binary-coded_decimal), or BCD. Computers were *much* slower then, and spending dozens of CPU cycles on a division by 10 just to display a number was a considerable cost.

BCD support persisted even in microprocessors like the Intel x86 and the Motorola 68000 (used in the Apple Macintosh and others), but it was not used to the same extent as on business mainframes.
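
For the curious, a minimal sketch of what packed BCD looks like, assuming a two-digit year kept in one byte (one decimal digit per four-bit nibble):

```c
#include <stdio.h>

/* Pack a two-digit year (0-99) into one BCD byte:
   high nibble = tens digit, low nibble = ones digit.
   99 becomes 0x99, i.e. 1001 1001. */
unsigned char to_bcd(int year) {
    return (unsigned char)(((year / 10) << 4) | (year % 10));
}

int from_bcd(unsigned char b) {
    return (b >> 4) * 10 + (b & 0x0F);
}

int main(void) {
    unsigned char y99 = to_bcd(99);
    printf("'99 is stored as 0x%02X\n", y99);          /* 0x99 */

    /* There is no room for a third digit, so the year simply wraps: */
    unsigned char next = to_bcd((from_bcd(y99) + 1) % 100);
    printf("One year later:  0x%02X\n", next);         /* 0x00 */
    return 0;
}
```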

Anonymous 0 Comments

>A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is

Apparently you don’t, as you’re still asking about binary overflows in the comments.
The bug had nothing to do with binary.

Anonymous 0 Comments

Since, as of this writing, the top comment doesn't explain what's actually being asked: in a lot of systems, years weren't stored as binary numbers. Instead they were stored as two ASCII characters.

So "99" is 0x39 0x39, or 0011 1001 0011 1001, while "2000" would be 0x32 0x30 0x30 0x30, or 0011 0010 0011 0000 0011 0000 0011 0000. Notice that the second one takes twice as many bytes to store.
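
A small sketch of that storage difference, and of the comparison problem it creates, assuming a record that budgets exactly two ASCII characters for the year:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Two ASCII characters: '9' is 0x39, so "99" is the bytes 0x39 0x39. */
    char old_year[3] = "99";

    /* Widening to four digits ("2000" is 0x32 0x30 0x30 0x30) doubles the
       field, which is why fixed-width records and database columns that
       budgeted two characters for the year were a problem. */
    char new_year[5] = "2000";

    printf("\"%s\" takes %zu bytes, \"%s\" takes %zu bytes\n",
           old_year, strlen(old_year), new_year, strlen(new_year));

    /* And a plain text comparison now sorts the new century *before*
       the old one: "00" < "99". */
    printf("strcmp(\"00\", \"99\") = %d\n", strcmp("00", "99"));
    return 0;
}
```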