For weather and climate forecasting, imagine the atmosphere broken up into a 4000 x 4000 x 100 (vertical) grid, and the model state (temperature, humidity, cloud content, dust content, sometimes chemical species content, wind speed and direction) has to be represented in each cell on, say, a 5-minute time step. Some models are ensembles, which may have 20 or 50 or more different versions of the same simulation running, with small variations. You have to do the math to propagate everything forward in time, which is relatively straightforward. And you also have to do the math to "assimilate" thousands of ground, radar, balloon, aircraft, and satellite observations to try to make the model agree with those observations as best you can... this is a huge linear algebra problem, solving systems with millions of terms. The numbers can vary depending on the geographical extent, the resolution, the time horizon, etc., but that should give you an idea.
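To see the "propagate forward in time" part in miniature, here is a toy sketch (not a real weather model, and much simpler physics): a tiny 2D grid with one variable, advanced one explicit time step at a time. A real model does the same loop over ~4000 x 4000 x 100 cells with many variables per cell, and an ensemble repeats it 20-50 times.

```python
import numpy as np

# Toy grid time-stepping sketch: diffuse a temperature field.
# Grid size, time step, and diffusion coefficient are made up
# for illustration; a real NWP model is vastly larger.
nx, ny = 50, 50
dt, alpha = 0.1, 0.2
temp = np.zeros((nx, ny))
temp[nx // 2, ny // 2] = 100.0  # a warm spot in the middle

def step(field):
    """Advance the field one time step with a 5-point Laplacian
    (periodic boundaries via np.roll)."""
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    return field + dt * alpha * lap

for _ in range(100):  # 100 time steps
    temp = step(temp)

# Total heat is conserved by this scheme; it just spreads out.
print(round(temp.sum(), 6))
```

Multiply the cost of one such step by the grid size, the number of variables, the number of ensemble members, and the forecast length, and the need for a supercomputer follows.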
Supercomputers can do a variety of tasks, but they are most commonly used for jobs that require a lot of processing power, such as weather forecasting, climate modeling, and large-scale simulations. Some mathematical models are so complex that solving them requires a computer costing close to $1 billion.
My company has a server cluster with hundreds of processors. We have a team of engineers who do FEA simulations to model the structural dynamics of our designs before committing to multimillion-dollar injection mold tooling. These simulations can take 16 hours or longer to run on the cluster. They also do fluid dynamics simulations that can take even longer.
A recent example of a problem that needs a supercomputer is wind turbines. Imagine you want to build offshore wind power in water too deep to solidly anchor to the bottom, so the entire turbine floats. How can you be sure your engineering design will withstand a category three hurricane? You can't build a prototype and wait for a hurricane to destroy it, then rebuild and try again; it would take forever and cost billions. You have to simulate it, which means you need to simulate material stresses on a platform that is spinning in 100+ mph winds, bobbing up and down and swaying in the waves. You need to model the wind turbulence, coupled to wave motion, coupled to moving blade surfaces, stress and strain forces from the microscale to many meters, all together at the same time, in timeframes from microseconds to hours. That's a problem that can use as much compute power as you can throw at it, and more. And that's just one problem; there are many others like it in scale and complexity.
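One reason coupling across timescales is so expensive: the fastest process dictates the time step for everything. Here is a minimal sketch with made-up numbers (not a real aeroelastic solver): a fast blade vibration coupled to a slow wave-driven platform sway, both forced to advance at the fast process's tiny step.

```python
import math

# Toy coupled-timescale sketch (all frequencies and coefficients
# are invented for illustration).
dt = 1e-4                  # step set by the fast blade vibration (s)
t_end = 1.0                # simulate just one second
steps = int(t_end / dt)

sway, sway_vel = 0.0, 0.0  # slow platform motion
vib, vib_vel = 0.01, 0.0   # fast blade vibration

for i in range(steps):
    t = i * dt
    wave = math.sin(2 * math.pi * 0.1 * t)   # slow 0.1 Hz wave forcing
    # Platform: driven by the wave AND by blade vibration (the coupling).
    sway_acc = wave - 0.5 * sway + 0.1 * vib
    # Blade: stiff, damped 50 Hz oscillator driven by platform motion.
    vib_acc = -(2 * math.pi * 50) ** 2 * vib - 2.0 * vib_vel + sway_acc
    # Semi-implicit Euler update for both, at the SAME small dt.
    sway_vel += sway_acc * dt
    sway += sway_vel * dt
    vib_vel += vib_acc * dt
    vib += vib_vel * dt

print(steps)  # 10000 steps for one simulated second
```

Even this two-variable toy needs 10,000 steps per simulated second; a real turbine model carries millions of coupled unknowns per step, and hours of simulated time.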
I would imagine a lot of it is sensitivity analysis (basically varying parameters and rerunning models thousands or millions of times) to test ranges of possibilities. Some of the work my lab does involves these analyses, but we use a computing cluster for it. For context, running these kinds of simulations on a single computer can take days, weeks, or months depending on the number of simulations, and a cluster can reduce that time significantly.
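The pattern is simple to sketch. Here is a toy parameter sweep (the model below is a stand-in; a real one might take hours per run, and a cluster would farm the runs out to thousands of nodes instead of a few local workers):

```python
import concurrent.futures
import random

def model(growth_rate, steps=100):
    """Toy stand-in model: value after `steps` of compound growth."""
    value = 1.0
    for _ in range(steps):
        value *= (1.0 + growth_rate)
    return value

# Sample the uncertain parameter across its plausible range...
random.seed(0)
rates = [random.uniform(0.00, 0.02) for _ in range(1000)]

# ...and run the model once per sample. The runs are independent,
# which is exactly why they parallelize so well on a cluster.
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(model, rates))

print(len(results))
```

Each run is independent of the others, so with enough machines the wall-clock time is roughly one run's duration, no matter how many thousands of samples you take.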
To make it understandable:
An action acts on something and that in turn causes another action.
So all of a sudden you have one action that in turn caused two actions. Say, a ball hitting another ball, and that ball hits another, etc.
So you want to predict what those other balls do, but all of a sudden the one equation becomes two, one for every ball.
And those hit other balls.
And then you have molecular biology and oh shit everything compounds on itself.
Supercomputers take this into account and bear the brunt of it.
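You can put a number on that compounding. If every ball (or molecule) can interact with every other one, the work per time step grows with the number of pairs, not the number of objects:

```python
# Why interactions compound: with n objects that can all interact,
# a naive simulation checks every distinct pair each time step.
def pair_count(n):
    """Number of distinct pairs among n objects: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in [2, 10, 100, 1_000_000]:
    print(n, pair_count(n))
# 2 balls is 1 pair, but a million particles is ~500 billion pairs
# per time step, which is where the supercomputer comes in.
```

Real codes use clever tricks to avoid checking literally every pair, but the basic point stands: the work grows much faster than the number of things you simulate.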