Why was Y2K specifically a big deal if computers actually store their numbers in binary? Why would a significant decimal date have any impact on a binary number?

I understand the number would have still overflowed *eventually* but why was it specifically new years 2000 that would have broken it when binary numbers don’t tend to align very well with decimal numbers?

EDIT: A lot of you are simply answering by explaining what the Y2K bug is. I am aware of what it is, I am wondering specifically why the number ’99 (`01100011` in binary) going to 100 (`01100100` in binary) would actually cause any problems since all the math would be done in binary, and decimal would only be used for the display.

EXIT: Thanks for all your replies, I got some good answers, and a lot of unrelated ones (especially that one guy with the illegible comment about politics). Shutting off notifications, peace ✌

84 Answers

Anonymous 0 Comments

“… actually stored their numbers in binary” doesn’t give you enough information about how the numbers were stored. In binary, sure, but there are still several ways to do that.

One way to do that is called Binary-Coded Decimal (BCD). If we’re gonna party like it’s 1999, some systems would encode that ’99 as: `1001 1001`. That’s it. That’s two nibbles representing two digits, packed into a single byte. It’s binary, but it aligns perfectly well with decimal numbers.
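
To make that concrete, here’s a minimal C sketch (illustrative only, not taken from any real system) of packing a two-digit year into one BCD byte:

```c
#include <stdio.h>

/* Illustrative sketch: pack a two-digit year (0-99) into one BCD byte,
 * one decimal digit per nibble. */
unsigned char to_bcd(unsigned int year2)
{
    unsigned int tens = year2 / 10;              /* high nibble */
    unsigned int ones = year2 % 10;              /* low nibble  */
    return (unsigned char)((tens << 4) | ones);
}

int main(void)
{
    printf("'99 in BCD: 0x%02X\n", to_bcd(99));  /* prints 0x99, i.e. 1001 1001 */
    return 0;
}
```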

A different encoding system would interpret that bit pattern to mean hex 99, or dec 153. There would be room to store hex 9A, or dec 154. Or, more to the point, the ’99 could be stored as hex 63, `0110 0011`. This is naturally followed by hex 64, dec 100, `0110 0100`.

Either way, you could have a problem. In a two-nibble binary encoded decimal, there is no larger number than `1001 1001`. Adding one to that would result in an overflow error. A theoretical `1001 1010` in such a system *is no number at all*.
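
Here’s a rough sketch of that failure mode, assuming the broken code just does a plain binary add on the stored byte with no BCD correction:

```c
#include <stdio.h>

int main(void)
{
    unsigned char year = 0x99;   /* BCD for '99 */
    year = year + 1;             /* plain binary add, no BCD adjustment */

    /* The raw byte is now 0x9A: the low nibble is 1010, which is not a decimal digit. */
    printf("raw byte: 0x%02X\n", year);

    /* Code that decodes nibble-by-nibble now sees tens=9, ones=10 -- nonsense as a year. */
    printf("decoded:  %u%u\n", (unsigned)(year >> 4), (unsigned)(year & 0x0F));  /* "910" */
    return 0;
}
```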

In the other encoding system I mentioned, adding one to 99 gives you 100 (in decimal values). Oh, lovely. So the year after 1999 is 2000, maybe. Or, it’s 19100, maybe. Or, it’s 1900, maybe. We’d still need to know more about that particular implementation — about how the bit pattern will be used and interpreted — before we know the kinds of errors that it will produce.
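
For example, here’s a hypothetical snippet (assuming the program keeps only the two-digit year as a plain integer and glues “19” on the front for display), showing where both 19100 and 1900 come from:

```c
#include <stdio.h>

int main(void)
{
    int year = 99;                   /* two-digit year stored as a plain int */
    year = year + 1;                 /* roll over into the new year: now 100 */

    printf("19%d\n", year);          /* prints 19100 */
    printf("19%02d\n", year % 100);  /* prints 1900  */
    return 0;
}
```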

And, we haven’t covered every encoding scheme that’s ever been used to handle two-digit dates internally. This was just a brief glimpse at some of the bad outcomes of two possibilities. Let’s not even think about all the systems that *stored* dates *as text* rather than as numbers. It’s enough to know that both text and numbers are binary, right?

Anonymous 0 Comments

I feel really bad for OP. Very few people in this thread even understand the specific question.

No, storing just 2 characters rather than 4 does not ‘save memory’ that was scarce in the 90s. Nobody with even a passing understanding of computers has ever used ASCII dates to do date arithmetic, so this was never an overflow problem. If you want two bytes for the year, you just use a u16 and you’re good for the foreseeable future.

The overwhelming majority of timestamps were already in some sensible format, such as 32-bit seconds from some epoch, or some slightly odd format such as 20+20-bit counters at 100-millisecond precision (JFC Microsoft). None of this time data had any issues for the reasons OP states; no fixes needed to be done for Y2K on any of these very common formats.
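
A quick sketch of why an epoch-based counter sails straight through the rollover (the values below are the standard Unix timestamps for those instants, UTC):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Seconds since 1970-01-01 00:00:00 UTC. */
    time_t before = 946684799;   /* 1999-12-31 23:59:59 UTC */
    time_t after  = 946684800;   /* 2000-01-01 00:00:00 UTC */

    /* Nothing special happens at the boundary: the counter just keeps counting. */
    printf("%lld -> %lld (delta = %lld second)\n",
           (long long)before, (long long)after, (long long)(after - before));

    /* Decoding still works; tm_year is years since 1900, so 100 here means 2000.
     * The real limit for a signed 32-bit counter is 2038, not 2000. */
    printf("tm_year: %d\n", gmtime(&after)->tm_year);
    return 0;
}
```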

The problem was simply that data at rest in some places, or in some human-facing interface, was ASCII, BCD, or 6- or 7-bit encoded, and that data became ambiguous: all of a sudden there were two possible meanings of ’00’.

What made this bug interesting was that it was time-sensitive. That is, as long as it’s still 1999, you know that all ’00’ timestamps must be from 1900, so you have a limited time to tag them all as such before it’s too late.
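
The usual way to disambiguate those records was ‘windowing’: pick a pivot and decide which century a two-digit year belongs to. A rough sketch (the pivot of 70 here is just an illustrative choice, not any standard):

```c
#include <stdio.h>

/* Windowing: two-digit years at or above the pivot are taken as 19xx,
 * the rest as 20xx. The pivot value is an arbitrary choice for illustration. */
int expand_year(int yy, int pivot)
{
    return (yy >= pivot) ? 1900 + yy : 2000 + yy;
}

int main(void)
{
    printf("99 -> %d\n", expand_year(99, 70));  /* 1999 */
    printf("00 -> %d\n", expand_year(0, 70));   /* 2000 */
    printf("69 -> %d\n", expand_year(69, 70));  /* 2069 */
    return 0;
}
```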

Anonymous 0 Comments

Because most people didn’t understand this and bought into the mania. Humans are really good at ignoring logic and getting lost in the hype.

Anonymous 0 Comments

They were stored in Binary-Coded Decimal (BCD), which only had room for two decimal digits, so the year could only go up to `1001 1001`, i.e. 99. They used just two digits to save space, because in those days storage and memory were very expensive.
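
To put a number on the space argument (a trivial sketch, not from any real system): the same year takes one byte as packed BCD and four bytes as text.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char bcd_year = 0x99;   /* '99 packed as BCD: one byte           */
    const char *text_year = "1999";  /* the same year spelled out: four bytes */

    printf("BCD year:  %zu byte\n", sizeof bcd_year);      /* 1 */
    printf("text year: %zu bytes\n", strlen(text_year));   /* 4 */
    return 0;
}
```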