So we know the chance of a coin flip is 0.5 heads or tails. But we also know that if you flip a coin repeatedly, you could get a run of nothing but heads. We understand this is possible because each flip is an independent event that doesn't depend on the previous flips, so there's nothing wrong with seeing heads, heads, heads, heads, heads. It can happen.
We also understand that a run becomes much less likely the longer it gets, because each extra flip halves your chance of continuing it. So you're much more likely to get 3 heads in a row than 5 heads in a row.
By the time you get to 100 heads in a row, it's still possible, but we've halved the probability so many times that it's now extremely unlikely, roughly 1 in 2^100. You would basically never see it occur, even if you spent your whole life flipping coins.
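To put rough numbers on that (a quick illustrative calculation, not part of the original answer), the chance of n heads in a row from n fair flips is 0.5^n:

```python
# Probability of n heads in a row from n fair coin flips: 0.5 ** n
for n in (3, 5, 10, 100):
    p = 0.5 ** n
    print(f"{n} heads in a row: probability {p:.3g}, i.e. about 1 in {1/p:.3g}")
```

For 100 heads that works out to about 1 in 10^30. Even flipping once a second for 80 straight years only gives you around 2.5 billion flips, nowhere near enough chances to expect an event that rare.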
This is the law of large numbers: the larger the number of random events, the closer their actual average will be to the predicted average. For very large numbers of random events, it is *extremely* likely that the aggregate will be almost exactly the value predicted by the mathematics.
Once you have a system involving a sufficiently large number of these random events, you can start ignoring the fact that they're random: you're sampling so many of them that the probability of the aggregate being anything other than the expected value is so close to zero we can call it zero.
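A small simulation sketch (assuming Python and its standard random module; purely illustrative, not from the original answer) shows the averaging effect directly, with the running average of coin flips hugging 0.5 more and more tightly as the sample grows:

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# The average of n fair coin flips (heads = 1, tails = 0) should approach 0.5 as n grows.
for n in (10, 1_000, 100_000, 10_000_000):
    heads = sum(rng.randint(0, 1) for _ in range(n))
    avg = heads / n
    print(f"{n:>10,} flips: average = {avg:.5f}  (off from 0.5 by {abs(avg - 0.5):.5f})")
```

The typical deviation from 0.5 shrinks roughly like 1/sqrt(n), which is why systems built out of trillions of random events behave so predictably.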
Exactly where this threshold lies is an interesting question, but a bit of an academic one. Chemistry, for example, can reach back down to the point where quantum effects genuinely matter for how the chemistry works (photosynthesis is believed to rely on probabilistic quantum effects), so we don't actually need a hard boundary; we only need to understand which effects are probabilistic and work them into the theory.
Then anything that builds on chemistry, such as medicine, has a firm foundation, because the point at which these random effects coalesce around predictable values by sheer weight of numbers was already passed back at the level of chemistry. Anything above that layer is working with millions and trillions of atoms, so the probabilistic uncertainty in the lower layers has smoothed out to the point where it's truly irrelevant.