For example, when I flip a coin, I have a 1/2 chance of getting heads and the same chance of getting tails. By that theory, if I toss a coin 50 times, I should get 25 heads and 25 tails.
However, I did 3 rounds of 50 coin flips, with these results:
1. 28 heads, 22 tails
2. 27 heads, 23 tails
3. 27 heads, 23 tails
I assumed that perhaps the coin wasn’t “true”. Maybe the sides are weighted differently, or maybe I tossed with a different height/force each time. So I went virtual, and switched to having a computer roll a die.
I should have a 1/6 chance of rolling each number from 1 to 6. So in 60 rolls, each number should come up 10 times. But in practice my results were:
1: 14 times
2: 9 times
3: 8 times
4: 13 times
5: 6 times
6: 10 times
So how come practice/reality doesn’t align with theory, even when we take human error out of the equation?
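(A minimal sketch of this kind of computer dice experiment, assuming a standard pseudo-random simulation – the poster’s actual program isn’t shown, so this is only illustrative:)

```python
import random
from collections import Counter

# Roll a fair six-sided die 60 times and tally how often each face comes up.
rolls = [random.randint(1, 6) for _ in range(60)]
counts = Counter(rolls)

for face in range(1, 7):
    print(f"{face}: {counts[face]} times")
```

Running it repeatedly gives a different tally each time, and a perfectly even 10-10-10-10-10-10 split is the exception, not the rule.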
You have the theory wrong. Others talk about sample size, but that’s not exactly what’s going wrong here.
Take the simplest example of a fair coin. The theory states you should expect a flip to be equally likely to be heads or to be tails.
So what should you expect if you flip exactly one coin? Surely not half heads, half tails – it will be one or the other. The space is H, T – there’s a 50% chance of either.
Now if you extend this to two coins, what do you expect? Well, the two flips are independent, so again before each flip both outcomes are equally likely.
HH, HT, TH, TT – there’s a 25% chance of each of the four.
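(To make the counting concrete, here is a small Python sketch that enumerates these sample spaces directly; it reproduces the 50% and 25% figures, and the half-heads counts mentioned next:)

```python
from itertools import product

# Enumerate every equally likely sequence of n flips and count them.
for n in (1, 2):
    outcomes = list(product("HT", repeat=n))
    balanced = [o for o in outcomes if o.count("H") == n / 2]
    print(f"{n} flip(s): {len(outcomes)} outcomes, each with probability "
          f"{1 / len(outcomes):.0%}; {len(balanced)} exactly half heads")
```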
But now notice that we increased the number of cases that are exactly half-and-half (with one coin we had zero such cases; with two coins we have two: HT and TH). As we add more and more flips, the outcomes that are close to half heads and half tails make up a larger and larger share of all the equally likely sequences, so the observed ratio tends toward 50/50 even though any one exact count stays unlikely.
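(The trend is easy to check numerically with the binomial coefficient: the chance of exactly half heads actually shrinks as the flips pile up, while the chance of being close to half grows toward certainty. A minimal Python sketch:)

```python
from math import comb

# For n flips of a fair coin: chance of exactly half heads, and chance of
# landing within 5 percentage points of half (45%..55% heads).
for n in (10, 50, 100, 1000):
    total = 2 ** n
    p_exact = comb(n, n // 2) / total
    lo, hi = n // 2 - n // 20, n // 2 + n // 20
    p_near = sum(comb(n, k) for k in range(lo, hi + 1)) / total
    print(f"{n:4d} flips: exactly half = {p_exact:.3f}, within 5% = {p_near:.3f}")
```

In particular, a perfectly fair coin flipped 50 times lands exactly 25-25 only about 11% of the time, so results like 28-22 are completely ordinary.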