Often probability problems are too complicated to solve exactly. “What is the chance that a die rolls a 3?” is easy, but if you have 100 dice and perform some complicated calculation on their results, it gets very difficult. So instead of trying to find an exact answer, you roll those 100 dice, calculate the result, and repeat that a million times (not with real dice – with a computer). That way you get a pretty good idea of how likely the different outcomes are, what the maximum result you can get is, and so on.
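The 100-dice idea above can be sketched in a few lines of Python (the function names here are made up for illustration; the trial count is kept small so it runs quickly):

```python
import random

def roll_sum(num_dice=100):
    """Sum of a single roll of `num_dice` six-sided dice."""
    return sum(random.randint(1, 6) for _ in range(num_dice))

def simulate(trials=10_000, num_dice=100):
    """Roll the dice `trials` times and summarize what came up."""
    results = [roll_sum(num_dice) for _ in range(trials)]
    return min(results), max(results), sum(results) / trials

lo, hi, avg = simulate()
# The average hovers near 100 * 3.5 = 350, and the min/max show
# how extreme a run of 100 dice ever got in this simulation.
print(f"min={lo}, max={hi}, average={avg:.1f}")
```

The more trials you run, the more stable those summary numbers get.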

All Monte Carlo methods follow that basic idea – instead of calculating everything exactly you use random numbers as input, check what that leads to, and repeat that many times.

Basically, it’s a way for computers to estimate the answer to a mathematical problem by taking a bunch of random samples.

For example, let’s say you wanted to figure out the area of a circle. You would draw a square around the circle so that it touches the edges, then generate random points inside that square. For each point, you check whether it falls inside the circle or not, and keep a count of each. If you generate enough samples, you’ll find that about 78.5% of the points land inside the circle (that ratio is π/4), so you can estimate that the area of the circle is 78.5% of the area of the square.
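Here’s one way that dart-throwing experiment might look in Python, using a circle of radius 1 inside a 2×2 square (function name and sample count are just for this sketch):

```python
import random

def circle_fraction(samples=100_000):
    """Throw random darts at the square [-1, 1] x [-1, 1] and count
    how many land inside the inscribed circle of radius 1."""
    inside = 0
    for _ in range(samples):
        x = random.uniform(-1, 1)
        y = random.uniform(-1, 1)
        if x * x + y * y <= 1:  # inside the circle?
            inside += 1
    return inside / samples

frac = circle_fraction()
print(f"fraction inside: {frac:.3f}")        # approaches pi/4, about 0.785
print(f"estimated circle area: {frac * 4:.3f}")  # the square has area 4
```

Multiplying the fraction by 4 (the square’s area) gives an estimate of the circle’s area, which is how people estimate π this way.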

It’s used to solve a probability problem through simulation/experimentation.

Let’s say you wanted to find the odds of getting heads or tails when you flip a quarter. If you flip it once, you might conclude the odds of heads are 100%. If you flip it 5 times, you might get some number like 80%, 60%, or 20%. But flip it 100 times and you’ll get a number closer to 50%; flip it 1,000 times and closer still. Run the simulation/experiment enough times and you converge on the answer.
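That convergence is easy to watch in a short simulation (a minimal sketch; the flip counts are arbitrary):

```python
import random

def heads_fraction(flips):
    """Fraction of heads in a simulated run of fair coin flips."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

# Small runs bounce around; big runs settle near the true 50%.
for n in (10, 100, 10_000):
    print(f"{n} flips -> {heads_fraction(n):.3f}")
```

With 10 flips the answer can easily come out 0.3 or 0.7, but with 10,000 it rarely strays far from 0.5.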

Some problems are very hard (or even impossible) to solve exactly. For some of those problems, you can get a decent guess (sometimes a very, very good guess) by using randomness to sample possible outcomes.

To make the Monte Carlo method work, you need to program two things into a computer. First, you need a range of options that you can sample from randomly. Second, you need a condition that is easy to test as true or false. Once you have both, you repeatedly take samples until you have enough to draw useful conclusions (usually hundreds or thousands of samples *minimum*). If you set things up correctly, the results give you an approximate answer to the original hard question.
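Those two ingredients – a random sampler and a true/false condition – can be written as a tiny generic loop. This is my own illustrative helper, not a standard library function:

```python
import random

def monte_carlo(sample, condition, trials=10_000):
    """Generic Monte Carlo loop: draw random samples, test a condition,
    and report how often the condition held."""
    hits = sum(condition(sample()) for _ in range(trials))
    return hits / trials

# Example question: what is the probability two dice sum to 7?
prob = monte_carlo(
    sample=lambda: (random.randint(1, 6), random.randint(1, 6)),
    condition=lambda pair: pair[0] + pair[1] == 7,
)
print(f"estimated probability: {prob:.3f}")  # exact answer is 6/36, about 0.167
```

Swapping in a different `sample` and `condition` turns the same loop into the circle, dice, or coin-flip estimates from the other answers.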

Others have used the “what is the area of a circle” example, but the method also works for many other things: predicting future weather patterns from random inputs, future stock behavior, and more – both in situations where we know there’s a definite answer and ones where we don’t.
