Imagine you have a series of swimming pools whose surfaces add up to roughly 360 million km^2 – a little over 2/3rds of the Earth’s ~510 million km^2 surface. They are covered by a series of air pockets that blanket the whole globe. The swimming pools are constantly emptying humidity into the air pockets, and the air pockets are dumping that water back into the same pool or into a different pool. This is happening across hundreds of millions of square km all over the globe, more than 2/3rds of which is water. Every time it happens, the heat content and humidity of each square km changes, and it’s changing non-stop.
So if you want a resolution of 1 square km, you have to track roughly 500 million data points *just on the surface*. And the pockets of air are many kilometers deep, so really, you have billions of data points – one for each cubic km. And they’re changing non-stop. And they’re interacting with one another non-stop. Let’s say we want predictions that are accurate to the hour, and assume that only the lowest 10 km of atmosphere are relevant to the weather. That’s about 5 billion cubic-km cells, and tracking each one every hour works out to over 100 billion data points per day, at an hourly, cubic km resolution.
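As a rough back-of-the-envelope check of those figures, here’s a small Python sketch of the cell count; the surface area, the 10 km depth, and the hourly sampling are the only assumptions:

```python
# Back-of-the-envelope count of grid cells for a 1 km / 1 hour weather grid.
# These figures are rough assumptions, not any real model's configuration.

surface_area_km2 = 510_000_000      # Earth's total surface, ~510 million km^2
atmosphere_depth_km = 10            # only the lowest ~10 km matter much for weather
hours_per_day = 24

surface_cells = surface_area_km2                      # one cell per square km
volume_cells = surface_cells * atmosphere_depth_km    # one cell per cubic km
cell_hours_per_day = volume_cells * hours_per_day     # one sample per cell per hour

print(f"surface cells:      {surface_cells:,}")        # ~510,000,000
print(f"volume cells:       {volume_cells:,}")         # ~5,100,000,000
print(f"cell-hours per day: {cell_hours_per_day:,}")   # ~122,400,000,000
```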
So a weather model at that resolution would need billions of sensors spread over the globe and up through the atmosphere, all networked and feeding readings back to a central data repository every hour. But that’s far too costly. Plus, people in other countries would feel kinda weird about being observed that closely, so we have to make do with a much smaller number of sensors, often at high altitudes or in space, and interpolate (figure out the in-between points). We use that smaller number of data points to build models of what we’d expect to be there in between, and use that incomplete data model to make predictions about what the weather will do. And that means there’s a lot more error than there would be with perfect data.
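To make the interpolation step concrete, here is a minimal sketch – not what real forecast systems actually do – that estimates the temperature at an unobserved point from a handful of sparse sensor readings using simple inverse-distance weighting; the station locations and temperatures below are made up:

```python
import numpy as np

def idw_estimate(stations, values, point, power=2.0):
    """Inverse-distance-weighted estimate at `point` from sparse readings.

    stations: (N, 2) array of sensor locations (x, y) in km
    values:   (N,) array of readings at those locations (e.g. temperature in C)
    point:    (2,) location where we have no sensor
    """
    stations = np.asarray(stations, dtype=float)
    values = np.asarray(values, dtype=float)
    dists = np.linalg.norm(stations - np.asarray(point, dtype=float), axis=1)
    if np.any(dists < 1e-9):                 # the point happens to sit on a sensor
        return float(values[np.argmin(dists)])
    weights = 1.0 / dists**power             # nearer sensors count for more
    return float(np.sum(weights * values) / np.sum(weights))

# Made-up example: four weather stations tens of km apart.
stations = [(0, 0), (50, 0), (0, 50), (50, 50)]
temps_c = [14.0, 16.5, 13.2, 15.8]

# Estimate the temperature at a point with no sensor nearby.
print(round(idw_estimate(stations, temps_c, (20, 30)), 1))
```

The more sparsely the sensors are placed, the more this kind of guess can drift from the true value, which is exactly where the extra forecast error comes from.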