Why can’t computers predict the weather far into the future?


I guess I’m assuming… Bare bones, everything out there has a scientific reason for acting the way it does. Wind, humidity, temperature, yadda yadda. What randomness is preventing us from entering all the values into a computer program, hitting fast forward, and seeing close to exactly how things will be far down the road?


6 Answers

Anonymous 0 Comments

>What randomness is preventing us from entering all the values into a computer program, hitting fast forward, and seeing close to exactly how things will be far down the road?

This is what we already do. The issue is that weather systems are global events, with causes and effects all over the world at both large and small scales (think of the butterfly effect). There are so many variables, many of them effectively random, that we simply can’t track them all.

However, with tools like AI and machine learning, plus all the data we’ve collected over the years, our weather prediction models are getting better and better.

Anonymous 0 Comments

Chaos.

Weather is the classic example of something that’s VERY sensitive to both the fine details of initial conditions, and the small reinforcing changes that occur with time. It’s not just a hard mathematical problem, it’s one that puts limits on what we can precisely know.

As supercomputers have evolved along with much greater and higher resolution monitoring of the global weather system, it’s become possible to predict weather with startling accuracy up to about 3 days. From there you can predict with *decent* accuracy out to about 7 days, and with rapidly decreasing accuracy out to around 10 days. From there you really can’t make fine predictions at all, such as “it will rain in a given region at a specific time.”

If you go much further out, say the overall trends of weather for months in large regions, you can do that. You can sometimes predict that a given year will be hotter, or drier, or colder or more filled with hurricanes, but you can’t tell if it will rain in Kent a month from now.

All of this weather prediction is really just solving equations with lots of variables (partial differential equations), and those equations become harder and harder to solve for chaotic systems. We’ve gotten to the point where even newer and better supercomputers yield only marginal, decreasing gains. The problem isn’t our model; the problem is that weather is so chaotic that small forces can be amplified over time and lead to radical changes in outcomes.

tl;dr All models have errors, but in the case of modeling chaotic systems those errors rapidly accumulate and inflate, rendering the model worthless. In the case of weather these effects on accuracy occur as described above.

If you want some less ELI5 reading: https://www.ecmwf.int/en/elibrary/79859-chaos-and-weather-prediction
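The error inflation described in the tl;dr can be seen in a toy chaotic system. The logistic map below is not a weather model, just a minimal stand-in for “a system where tiny initial errors roughly double every step”:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map (a classic toy chaotic system, NOT real weather physics).
def logistic_map(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-10)  # "measurement error" of one part in ten billion

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
```

The two trajectories start indistinguishably close, yet after a few dozen steps they bear no resemblance to each other – the same qualitative behavior that caps useful weather forecasts at around ten days.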

Anonymous 0 Comments

>everything out there has a scientific reason for acting the way it does.

We’re still talking about predicting the movements of billions upon billions of molecules churning in the atmosphere. Even the best computer model is only a vague approximation of this process; there is no way to perfectly simulate something this complex. It’s like trying to accurately shoot at a target that keeps getting moved further back, until it’s miles away; after a few days it becomes impossible to predict with any accuracy.

Anonymous 0 Comments

We don’t currently have the ability to perfectly measure all those things. We can take the temperature in one location, but it might be off by a tenth or a hundredth of a degree. We’re not measuring the temperature at every point on Earth; we’re measuring it in a few places and assuming the temperature in the surrounding area is similar, but that assumption might be off by a little bit. So there are all these tiny errors that add up when we try to measure the weather and make predictions. They aren’t a problem when you’re forecasting a few days out, but over longer periods they accumulate and make your predictions less and less accurate.
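As a back-of-envelope sketch of those errors adding up: the two-day error-doubling time used below is a commonly quoted ballpark for forecast error growth, not a measured constant, but it shows how quickly a hundredth-of-a-degree measurement error swamps a forecast:

```python
# Assumed numbers (illustrative, not from any real forecast system):
initial_error = 0.01         # degrees; an optimistic instrument error
doubling_time_days = 2.0     # ballpark error-doubling time for forecasts

for days in (2, 4, 6, 8, 10, 14):
    error = initial_error * 2 ** (days / doubling_time_days)
    print(f"day {days:2d}: error ~ {error:.2f} degrees")
```

By two weeks out, the once-negligible error has grown by a factor of over a hundred, which is why fine-grained two-week forecasts aren’t worth much.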

Anonymous 0 Comments

I’m going to draw a grid on the ground, then throw a grain of rice at it. It’s your job to predict where it lands.

Eventually, if you set up a computer with a high-speed camera and lots of information about which direction the wind is blowing right at the very moment and the exact mass and shape of the grain of rice, it will probably get pretty good at predicting where that grain will fall!

Now, I’m going to throw an entire bucket of rice, and I’m going to keep it going with more buckets non-stop. After I throw the *first* one, I want you to predict where the 18th grain from the 10th bucket will land.

Good luck. ;p

In a system with lots of parts that all affect each other, you can calculate for lots of variables as they are now – but then calculating for a hundred ways they *might be* tomorrow, and a thousand ways they *might be* the day after that, and ten thousand ways they *might be* the day after that, gets…messy.

Anonymous 0 Comments

Weather is, in the mathematical meaning of the term, chaotic. The tiniest difference at one point can quickly add up to a massive difference later (the so-called “butterfly effect” – a butterfly flaps its wings, and three weeks later a storm takes a different path). And, no – however unintuitive it may be, there’s no such thing as “too small to make a difference”.

To accurately predict the future weather, you would need a perfect model of how the weather works, perfect data about the state of everything featuring in it, and computers capable of putting the two together perfectly. We have none of those, and never will. The best we can do is run the models lots of times, tweak the data each time to allow for the fact that it’s neither complete nor 100% accurate, and look for reasonably common patterns in the results.
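Here is a toy sketch of that “run the models lots of times with tweaked data” idea (ensemble forecasting). The logistic map stands in for real weather physics, and every name and number is illustrative:

```python
import random

def model_step(x):
    """Stand-in for one day of weather physics (logistic map, r=3.9)."""
    return 3.9 * x * (1 - x)

def run_member(x0, days=20):
    """Run one ensemble member forward from initial condition x0."""
    for _ in range(days):
        x0 = model_step(x0)
    return x0

random.seed(0)
observed = 0.31  # our single, imperfect measurement

# Perturb the measurement slightly for each member, mimicking
# "tweak the data each time to allow for its imperfections".
members = [run_member(observed + random.gauss(0, 1e-4)) for _ in range(100)]

mean = sum(members) / len(members)
spread = max(members) - min(members)
print(f"ensemble mean after 20 days: {mean:.3f}, spread: {spread:.3f}")
```

The spread across members is the useful output: where the members agree, you can forecast with some confidence; where they scatter across the whole range, the honest answer is “we don’t know.”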