Why isn’t there a solution to websites crashing when they experience a high volume of traffic?


Ticketmaster, for example. Whenever there’s a high volume of traffic, the website crashes or experiences problems. Surely they know that there’s going to be a huge surge in traffic all at once? Can they not design the website to cope with the increase in demand and function as normal?


10 Answers

Anonymous 0 Comments

Sure they can – they can spin up additional servers and use better load balancing so that far more users can access the site simultaneously.
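To give a feel for what “better load balancing” means, here’s a minimal round-robin sketch in Python – the backend addresses are made up, and a real site would use a dedicated load balancer (nginx, HAProxy, a cloud load balancer) rather than anything hand-rolled like this:

```
import itertools

# Hypothetical pool of identical app servers; in reality these would be
# real hosts sitting behind a dedicated load balancer.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

_rotation = itertools.cycle(BACKENDS)

def pick_backend() -> str:
    """Round-robin: each incoming request goes to the next server in turn,
    so no single machine has to absorb all of the traffic."""
    return next(_rotation)

# Adding capacity for a big on-sale is then "just" adding entries to the
# pool (and paying for the extra machines).
for request_id in range(6):
    print(request_id, "->", pick_backend())
```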

However, that costs extra money. When you have a product that you know is going to sell out without discounting – like concert tickets – why bother? It doesn’t matter if it takes 5 minutes to sell out or 5 hours – you are going to get paid the same amount either way. Moreover, none of the folks buying those tickets can go buy them anywhere else, so they have to use the website no matter how badly it performs.

Anonymous 0 Comments

The question really is “will you stop using Ticketmaster because of this?”

If not, why spend the money to have more capacity?

Anonymous 0 Comments

Imagine you own a store. The space in your store is obviously limited. On a normal day there are at most 10 people in your store, and that’s fine because you have the space for it. Now, one day, for whatever reason, 2,000 people come to your store at the same time. If you know this beforehand, you could rent a much larger location, but that costs a lot of money. So you only do it when you know the crowd is coming and it’s worth it to you.

Anonymous 0 Comments

Resources are not infinite. Ultimately, the overall amount of CPU, RAM, disk, network connections, and so on, is limited. The questions are:

1) how fast can you add resources without disrupting the existing ones? and

2) how much are you willing to pay?

The first question is essentially why cloud computing (think Amazon AWS, Microsoft Azure,…) exists. It provides a way to provision new resources quickly (“elasticity”) as long as your application software is written in a decent way (which is not always the case).
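As a hand-wavy sketch of what that elasticity looks like from the application’s side (the threshold is made up, and `current_utilization()` / `provision_server()` are hypothetical stand-ins for real monitoring and cloud APIs):

```
import random
import time

TARGET_UTILIZATION = 0.60   # made-up target: keep servers ~60% busy

def current_utilization() -> float:
    # Stand-in for real monitoring (CPU, request latency, queue depth, ...).
    return random.random()

def provision_server() -> None:
    # Stand-in for a real cloud API call that launches another instance.
    print("scaling out: requesting one more server")

def autoscale_once() -> None:
    # Elasticity in a nutshell: watch load, add capacity when it climbs.
    # This only helps if the app can run on many interchangeable servers --
    # the "written in a decent way" caveat above.
    if current_utilization() > TARGET_UTILIZATION:
        provision_server()

for _ in range(5):   # a real autoscaler would loop forever on a timer
    autoscale_once()
    time.sleep(0.1)
```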

The second question remains and, in fact, can be even more relevant if you rely on public cloud.

Anonymous 0 Comments

By definition they “crash” (or become highly unresponsive) when the traffic exceeds whatever measures are in place to deal with it. You don’t really know if those limits are being exceeded by 1% or 10000%.

Big sites like Ticketmaster know they’re going to have high demand sometimes… but it’s very very expensive to have enough capacity to deal with, I dunno, Taylor Swift’s new tour going on sale. So while they are going to deal much better with high load than a site that runs on a single server and doesn’t do anything special, they still might have problems when they get crushed under insanely high load for a short time.

Anonymous 0 Comments

Because it’s a hardware issue.

A website sits on a server. Whoever owns the website either owns a server to put the website on or pays someone to host it on theirs. Larger websites can have multiple servers, but that costs more because it requires more hardware. When a lot of traffic hits the website at once, the server operates at maximum capacity, with requests for information constantly coming in, and it can only fulfill those requests so fast. When there are more requests than the server can handle, it can crash. And even before that, the website’s performance becomes very slow.

The only real solution is to add more servers, which can’t be done instantly and requires copying data over from the existing servers – hard to do while they’re already swamped with traffic.

Ticketmaster in particular is limited in this respect, because it needs to keep track of which tickets have been reserved, and that can’t be updated instantly across servers. Something like YouTube can serve videos from many servers, because no resource is being reserved in that case. Ticketmaster needs one server keeping track of which tickets have been sold before anyone can attempt to buy one. That’s the bottleneck.
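A toy illustration of that bottleneck – the seat map and the lock-around-a-dictionary approach are purely for illustration, not how Ticketmaster actually works:

```
import threading

# One authoritative copy of the inventory. Every purchase, from every
# web server, has to come through here and take this lock, which is why
# this part can't simply be duplicated the way video files can.
_inventory_lock = threading.Lock()
_seats = {"A1": None, "A2": None, "A3": None}   # seat -> buyer (None = free)

def try_reserve(seat: str, buyer: str) -> bool:
    with _inventory_lock:               # purchases are processed one at a time
        if _seats.get(seat) is None:
            _seats[seat] = buyer
            return True
        return False                    # someone else got it first

print(try_reserve("A1", "alice"))   # True
print(try_reserve("A1", "bob"))     # False -- already sold, no double-booking
```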

Anonymous 0 Comments

There *is* a solution: more servers. The problem is that it doesn’t make business sense to buy or build a large number of servers if you only need them for the short time you’re getting that traffic.

And let’s be real. It’s not ticketmaster that is the problem, it’s people being ignorant about economics. Demand is always going to exceed supply for very popular events, but people have this idea that it shouldn’t be reflected in the price.

Anonymous 0 Comments

It’s a monopoly and there’s no reason for it to improve.

All the technical problems have been solved already – just think about Reddit implementing r/place (pixel wars). Ticketmaster does basically the same thing, but at a smaller scale.

While it doesn’t make sense to purchase physical servers for peak traffic, it is possible to offload bursts to the cloud. The cloud is basically server rental where you pay only for the resources you consume (often billed by the second, i.e. for each second your extra server is running), so it is very flexible if you only get rare, large waves of people.
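A rough, made-up illustration of how per-second billing changes the math (the prices here are invented, not real cloud rates):

```
# Invented numbers purely to show the shape of the trade-off.
price_per_server_second = 0.0001     # hypothetical cloud rate, $/server/second
extra_servers = 200                  # burst capacity for a big on-sale
burst_seconds = 2 * 60 * 60          # keep them for two hours

burst_cost = price_per_server_second * extra_servers * burst_seconds
print(f"renting burst capacity: ${burst_cost:,.2f}")   # $144.00 for the event

# vs. owning those 200 servers year-round just in case:
owned_cost_per_server_year = 2000    # hypothetical purchase + hosting, $/year
print(f"owning them all year:   ${extra_servers * owned_cost_per_server_year:,}")
```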

But all of that requires extra engineering effort, a.k.a. extra salary cost, and needs to be justified to the business. Since Ticketmaster is a monopoly, they won’t lose much money, because there is nowhere else for a customer to go.

UPD: you can read more about how it became a monopoly by merging with Live Nation and forcing artists to sign exclusive ticketing contracts if they want to perform in most concert halls. It is an actual, ‘proper’ monopoly that the government turns a blind eye to, probably for a good buck.

Anonymous 0 Comments

A lot of the more technical answers here are wrong.

It is possible.

An application can be coded so that it can more easily scale up based on load.

The modern form of doing this is through a concept called containerization.

A container is a little part of the application that performs a specific function.

For example, when you log in to Ticketmaster, you interact with the login container.

A container is controlled through something called an orchestrator. A common orchestrator is a piece of software called Kubernetes.

The orchestrator knows how much load is on the container, and spins up more if the existing ones are overloaded.

For example, under normal load, there are 10 login containers running. Under high load, Kubernetes creates 100 login containers to handle that load.

The application is programmed so it can work whether there are 10 or 100 containers performing that login service.

A modern web application could have hundreds or thousands of these little components that each exist in a separate container. Each one of these components can scale to hundreds of containers as load increases.
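For a flavour of the rule involved: Kubernetes’ Horizontal Pod Autoscaler documents its core calculation as roughly “desired replicas = ceil(current replicas × current metric / target metric)”. Written out in Python, with made-up numbers matching the 10-to-100 example above:

```
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    # The calculation Kubernetes' Horizontal Pod Autoscaler describes:
    # scale the replica count in proportion to how far the observed
    # metric is from the target, rounding up.
    return math.ceil(current_replicas * (current_metric / target_metric))

# Made-up example: 10 login containers sized for ~100 requests/second each,
# suddenly seeing ~1000 requests/second each during a big on-sale.
print(desired_replicas(current_replicas=10,
                       current_metric=1000.0,
                       target_metric=100.0))   # -> 100 containers
```

(A real setup would declare this in an autoscaler config rather than hand-coding it; the point is just that the rule itself is simple once the app is split into containers.)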

The problem is that many web applications were created before this modern containerization architecture existed. These applications are called monoliths. They are much harder to scale up, but it is possible for them to scale.

Converting a monolith into a containerized application is expensive and takes a long time. Companies that don’t see the return on investment don’t do it. It’s possible that Ticketmaster is already working on it – it just takes a long time.

Facebook, Snap, Airbnb, Instagram, Twitter, etc, are all using containerization, which is why they can handle peak loads better.

I work for an F500 that’s been around a long time (i.e., monolithic applications). Just building our containerization platform took years and hundreds of millions of dollars. Our first natively containerized application took additional years and tens of millions more to get in front of the public.

TL;DR: handling load spikes is easy with modern application architecture, but it’s expensive to convert old apps to this new way of working. It’s possible with a legacy application architecture too, but it’s not as responsive to fast capacity increases.

Anonymous 0 Comments

One of the things you may have noticed with Reddit is that you may see a different set of comments on different devices. A lot of this is what is known as “eventual consistency” – basically, they don’t care if the data is slightly different for different users, as long as it eventually all gets to the same state.

Amazon actually uses this same type of trick, which will, very rarely, result in them allowing you to purchase an item that is out of stock. Due to their large number of items, generally large stockpiles, and often fast restocking, it rarely comes up in the first place, and when it does, it’s likely just a short delay. In the worst case, Amazon would issue a refund and cancel the order.

That type of pattern doesn’t work well with ticket sales, especially if tickets are tied to a specific seat. The risk is very high that multiple people will manage to buy the same ticket, resulting in cancellations and refunds, which the ticket seller really needs to minimize for a good user experience. This means they have to actively serialize all access to ensure two people cannot buy the same ticket, and that makes scaling really hard.
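A minimal sketch of what “serialize all access” can look like in practice, using a conditional update so the same seat can never be sold twice – SQLite and this table layout are just for illustration, not Ticketmaster’s actual stack:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (seat_id TEXT PRIMARY KEY, buyer TEXT)")
conn.execute("INSERT INTO seats VALUES ('ROW1-SEAT5', NULL)")
conn.commit()

def buy_seat(seat_id: str, buyer: str) -> bool:
    # The WHERE clause only matches if the seat is still unsold, and the
    # database applies such updates one at a time -- that serialization is
    # exactly what eventual consistency would give up.
    cur = conn.execute(
        "UPDATE seats SET buyer = ? WHERE seat_id = ? AND buyer IS NULL",
        (buyer, seat_id),
    )
    conn.commit()
    return cur.rowcount == 1

print(buy_seat("ROW1-SEAT5", "alice"))  # True  -- got the ticket
print(buy_seat("ROW1-SEAT5", "bob"))    # False -- seat already taken
```

The trade-off is that every purchase for an event has to go through that one serialized path, which is exactly the scaling problem described above.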

What they can do is isolate each event to its own server, since there will be no overlap between them; however, if 10,000 people all want to buy tickets to the same event, that doesn’t help. They can also offload the graphics and other static content to other servers, which most will have done, typically via a content delivery network (CDN).