Was Y2K Justified Paranoia?

I was born in 2000. I’ve always heard that Y2K was just dramatics and paranoia, but I’ve also read that it was justified and it was handled by endless hours of fixing the programming. So, which is it? Was it people being paranoid for no reason, or was there some justification for their paranoia? Would the world really have collapsed if they didn’t fix it?

In: Technology

49 Answers

Anonymous 0 Comments

In honesty there are two sides to this.

First is that this was a real threat that, if nothing was done, would have been problematic. But we had the time and resources, so we fixed the issue before it became a major problem.

Second is the hysteria. As someone who lived through it, the news on the morning of December 31st was still saying “when the clocks turn over, we have no idea what’s going to happen. Planes might fall from the sky, you might not have power.” That had no basis in reality, and it’s why many people who lived through it thought the entire thing was fake.

Anonymous 0 Comments

People assumed the world would end!
There was a lot of absolutely useless panic.

Yes, this issue could have made some computers reset their clock to 1900. That could lead to issues, but never the end of the world or a collapse of society. Maybe your TV station would stop working, or some other service or tool, and that’s it.
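
Here’s a rough sketch of where the “back to 1900” behavior comes from (toy C code, not any real system; the variable names are made up). A lot of software stored only the last two digits of the year and assumed the century was “19”:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical sketch: the program keeps only the last two digits of
       the year and assumes the century is "19". When 1999 rolls over to
       2000, the stored "00" gets reconstructed as the year 1900. */
    int stored_yy = 0;                      /* the year 2000, stored as "00" */
    int reconstructed = 1900 + stored_yy;   /* comes out as 1900 */

    printf("Stored year: %02d -> interpreted as %d\n", stored_yy, reconstructed);
    return 0;
}
```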

Anonymous 0 Comments

As someone else has said, there were extremes of paranoia involved, and those people would have been justified *if* we had collectively done nothing about the Y2K problem. But we did a *LOT* about solving the problem. It was a massive endeavor that took two or more years to sort out for larger corporations and institutions.

I’ll give you examples from my personal experience. I was in charge of a major corporation’s telecommunication systems. This included large phone systems, voicemail, and interactive voice response (IVR) systems. When we began the Y2K analysis around 1998, it took a lot of work to test, coordinate with manufacturers, and plan the upgrade or replacement of thousands of systems across the country. In all that analysis we had a range of findings:

A medium-sized phone system at about 30 locations where, if it were not upgraded or replaced, nothing would happen on January 1st, 2000. The clock would turn over normally and the system would be fine. That is, until that phone system happened to be rebooted or lost power. If that happened, you could take that system off the wall and throw it in the dumpster. There was no workaround.

A very popular voicemail system that we used at smaller sites would not have the correct date or day of the week on January 1, 2000. This voicemail system also had the capability of being an autoattendant (the menu you hear when you call a business: “press 1 for sales, press 2 for support,” etc.). So a customer might try to call that office on a Monday morning, but the autoattendant thinks it’s Sunday at 5:00 PM and announces “We are closed, our office hours are Monday through Friday…” etc. This is in addition to a host of other schedule-based tasks that might be programmed into it. (There’s a rough sketch of the weekday math at the end of this answer.)

An IVR system (interactive voice response system: it lets you interact with a computer system using your touch-tones, like when you call a credit card company) would continuously reboot itself forever starting on January 1st, 2000. There was no workaround.

Some of the fixes for these were simple: upgrade the system to the next software release. Others were more complex where both hardware and software had to be upgraded. There were a few cases where there was no upgrade patch. You just had to replace the system entirely.

And these were just voice/telecom systems. Think of all the life-safety systems in use at the time. Navigation systems for aircraft and marine applications, healthcare equipment in hospitals, and military weapon systems were all potentially vulnerable to the Y2K problem.
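
To picture the voicemail weekday problem above, here’s a toy C example (not the actual voicemail firmware, just a standard day-of-week formula). A device that keeps two-digit years and assumes a “19” century ends up computing the weekday for 1900 instead of 2000, and the two calendars don’t line up:

```c
#include <stdio.h>

/* Sakamoto's day-of-week method: returns 0 = Sunday ... 6 = Saturday. */
static int day_of_week(int y, int m, int d) {
    static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
    if (m < 3) y -= 1;
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}

int main(void) {
    static const char *names[] =
        {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};

    /* The real date vs. what a device stuck in "19xx" thinks it is. */
    printf("2000-01-01 was a %s\n", names[day_of_week(2000, 1, 1)]);  /* Saturday */
    printf("1900-01-01 was a %s\n", names[day_of_week(1900, 1, 1)]);  /* Monday   */
    return 0;
}
```

Any schedule keyed to the weekday (open hours, night greetings, backups) drifts once the two calendars diverge like that.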

Anonymous 0 Comments

There’s a Wikipedia page about Y2K. The bottom line is, there was a big scare, so a lot of prophylaxis was done, and nobody is sure just how bad it could have been otherwise.

Anonymous 0 Comments

Yes, there were a lot of systems that could go wrong.

For an example of an impending problem like Y2K, you can look up the Year 2038 problem.

[And it’s already started to hit some companies doing financial damage](https://www.reddit.com/r/programming/comments/erfd6h/the_2038_problem_is_already_affecting_some_systems/)
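
For the curious, here’s a rough sketch of what the 2038 problem looks like (assuming the classic case of a signed 32-bit Unix time counter; the output depends on your platform having a 64-bit time_t and a gmtime that handles pre-1970 dates):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* A signed 32-bit "seconds since 1970-01-01 UTC" counter tops out at
       2^31 - 1, which is 2038-01-19 03:14:07 UTC. One tick later it wraps
       around to a date back in December 1901. */
    int32_t last = INT32_MAX;
    int32_t wrapped = (int32_t)((uint32_t)last + 1u);  /* wraparound made explicit */

    time_t t1 = last, t2 = wrapped;  /* widen to the host's (64-bit) time_t */
    printf("Last 32-bit timestamp: %s", asctime(gmtime(&t1)));
    printf("One second later:      %s", asctime(gmtime(&t2)));
    return 0;
}
```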

Anonymous 0 Comments

It’s a good snapshot of IT work in general: when you’re doing your job right, things run smoothly and people think you’re a waste of money because nothing breaks. Yet if you didn’t do your job there would be major problems.

Anonymous 0 Comments

While the hysteria was overblown, the problem was real and would have caused significant disruption to daily life, if not the cataclysm some predicted.

However, the problem was well known, and lots of people worked to upgrade or replace affected systems before Y2K. This is why some people call it a scam, because nothing happened; but nothing happening was precisely the desired effect.

Anonymous 0 Comments

People who thought planes would just fall out of the sky at exactly midnight on New Year’s were paranoid.

People who thought hundreds of bugs would pop up, starting in the years leading up to 2000 and continuing even in the years following it? Very justified.

For a comparison, think about the CrowdStrike outage that happened back in July. It caused entire industries to shut down. But that was very different, because it was an immediate outage. The thing with Y2K is that the bugs it caused might not necessarily cause immediate system outages, but instead result in incorrect data. Systems could still be up and running for a long time, compounding the effect of bad data over and over and over.

Something like an airline scheduler that has to track where planes and pilots are going to be could end up full of errors, and it could take a long time to get everything working right again. A banking application could make compounding errors on interest payouts. These kinds of bugs could go on for weeks and weeks, and rewinding to the data from before the bug happened and then replaying all the logic going forward could be impossible. So much could have happened based on that bad data that it would be a mess to clean up.
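
A toy example of how that kind of bad data gets made (hypothetical C, not any real bank’s code): an interest routine that measures elapsed time with two-digit years gets the span across the 1999-to-2000 boundary wildly wrong, and everything downstream builds on it:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical sketch: elapsed time measured with two-digit years.
       Across the 1999 -> 2000 boundary the span comes out as -99 years
       instead of 1, and later calculations keep building on that figure. */
    int opened_yy = 99;                      /* account opened in 1999 */
    int now_yy = 0;                          /* it is now 2000, stored as "00" */
    int years_elapsed = now_yy - opened_yy;  /* -99 instead of 1 */

    double balance = 1000.00;
    double rate = 0.05;
    double interest = balance * rate * years_elapsed;  /* simple interest on a bogus span */

    printf("Elapsed years: %d, interest applied: %.2f\n", years_elapsed, interest);
    return 0;
}
```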

The bugs also didn’t necessarily have to happen at exactly midnight on New Year’s; they just had to involve calculations that reached beyond New Year’s. So you didn’t know when they were happening until it was too late. Every software vendor had to painstakingly review everything to make sure they were safe. Additionally, software deployment was quite different in that era. Automated installs largely didn’t exist. You might not even be getting your software via downloads, but instead installing it off of discs. That meant all these fixes had to be done well ahead of time so the discs could be pressed and shipped.
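
To make the “before midnight” point concrete, here’s a hypothetical sketch (toy C, made-up names): a check run in 1998 against a card that expires in 2000 already goes wrong, because “00” compares as earlier than “98”:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical sketch: a forward-looking check run in 1998 against a
       card that expires in 2000. With two-digit years, "00" compares as
       earlier than "98", so the card looks like it expired long ago,
       well before the clock ever reached New Year's Eve 1999. */
    int current_yy = 98;   /* the check runs in 1998 */
    int expiry_yy  = 0;    /* card expires in 2000, stored as "00" */

    if (expiry_yy < current_yy)
        printf("Card rejected: it appears to have expired %d years ago.\n",
               current_yy - expiry_yy);
    else
        printf("Card accepted.\n");
    return 0;
}
```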

Anonymous 0 Comments

I worked for AOL at the time as a security analyst and was living in the DC area. We were told that likely nothing would happen, but we should be available just in case. I was like fuck it, if the shit hits the fan I’d be in the middle of it. So when the clock struck midnight, I was standing in front of the White House with a bottle of champagne.

Nothing happened except a hangover from many bottles of champagne.
