Why isn’t there a solution to websites crashing when they experience a high volume of traffic?


Ticketmaster, for example. Whenever there’s a high volume of traffic, the website crashes or experiences problems. Surely they know that there’s going to be a huge surge in traffic all at once? Can they not design the website to cope with the increase in demand and function like normal?


Sure they can – they can spin up additional servers and have better load balancing to ensure that any number of users can access the site simultaneously.
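The “better load balancing” part is simpler than it sounds: a balancer just spreads incoming requests across a pool of servers so no single one gets crushed. A minimal round-robin sketch (the server names and request count are made up for illustration):

```python
from itertools import cycle

# Hypothetical pool of identical backend servers; a real balancer
# would hold actual backend addresses and health-check them.
servers = ["server-a", "server-b", "server-c"]

def round_robin(pool):
    """Yield servers in rotation so requests are spread evenly."""
    return cycle(pool)

balancer = round_robin(servers)

# Ten incoming requests get distributed across the three backends;
# each server ends up handling roughly a third of the load.
assignments = [next(balancer) for _ in range(10)]
print(assignments[:4])  # ['server-a', 'server-b', 'server-c', 'server-a']
```

Real balancers (nginx, HAProxy, cloud load balancers) add health checks, session affinity, and weighting, but the core idea is this rotation.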

However, that costs extra money. When you have a product that you know is going to sell out without discounting – like concert tickets – why bother? It doesn’t matter if it takes 5 minutes to sell out or 5 hours – you are going to get paid the same amount either way. Moreover, none of the folks buying those tickets can go buy them anywhere else, so they have to use the website no matter how bad it might perform.

The question really is “will you stop using Ticketmaster because of this?”

If not, why spend the money to have more capacity?

Imagine you run a store. The space in your store is obviously limited. On a normal day there are at most 10 people in your store, and that’s fine because you have the space for it. Then one day, for whatever reason, 2,000 people come to your store at the same time. If you knew this beforehand you could rent a much larger location, but that costs a lot of money. So you only do it when you know about it in advance and it’s actually worth it to you.

Resources are not infinite. Ultimately, the overall amount of CPU, RAM, disk, network connections, and so on, is limited. The questions are:

1) how fast can you add resources without disrupting the existing ones? and

2) how much are you willing to pay?

The first question is essentially why cloud computing (think Amazon AWS, Microsoft Azure,…) exists. It provides a way to provision new resources quickly (“elasticity”) as long as your application software is written in a decent way (which is not always the case).
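That elasticity usually comes down to a scaling policy: measure the load, work out how many servers it needs, and clamp the answer between a floor (so the site never runs bare) and a ceiling (the “how much are you willing to pay” limit). A toy sketch, with made-up capacity and cost-cap numbers:

```python
import math

def servers_needed(requests_per_sec, capacity_per_server=500,
                   min_servers=2, max_servers=100):
    """Return how many servers to provision for the current load.

    capacity_per_server, min_servers and max_servers are invented
    figures for illustration; real autoscalers react to measured
    metrics like CPU utilization or request latency instead.
    """
    needed = math.ceil(requests_per_sec / capacity_per_server)
    # Clamp between the always-on floor and the budget ceiling.
    return max(min_servers, min(needed, max_servers))

print(servers_needed(300))     # 2   (quiet day: stay at the floor)
print(servers_needed(12_000))  # 24  (surge: scale out)
print(servers_needed(80_000))  # 100 (cost cap reached: users start queueing)
```

That last case is exactly the Ticketmaster scenario: demand exceeds the ceiling someone chose to pay for, and everything beyond it waits or fails.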

The second question remains and, in fact, can be even more relevant if you rely on public cloud.

By definition they “crash” (or become highly unresponsive) when the traffic exceeds whatever measures are in place to deal with it. You don’t really know if those limits are being exceeded by 1% or 10000%.

Big sites like Ticketmaster know they’re going to have high demand sometimes… but it’s very very expensive to have enough capacity to deal with, I dunno, Taylor Swift’s new tour going on sale. So while they are going to deal much better with high load than a site that runs on a single server and doesn’t do anything special, they still might have problems when they get crushed under insanely high load for a short time.