For example, when I flip a coin, I have a 1/2 chance of getting heads and the same chance of getting tails. By that logic, if I toss a coin 50 times, I should get 25 heads and 25 tails.

However, I did 3 rounds of 50 coin flips, with the following results:
1. 28 heads, 22 tails
2. 27 heads, 23 tails
3. 27 heads, 23 tails
I assumed that perhaps the coins weren’t “true”. Maybe the sides are weighted differently, or maybe I tossed with a different height/force each time. So I went virtual and switched to having a computer roll a die.

I should have a 1/6 chance of rolling each number from 1 to 6, so in 60 rolls each number should come up 10 times. But in practice my results were (face: count):
1: 14
2: 9
3: 8
4: 13
5: 6
6: 10
So how come practice/reality doesn’t align with theory, even when we take human error out of the equation?
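A minimal sketch of that computer experiment, assuming Python and its standard random module (the exact counts will of course vary from run to run):

```python
import random
from collections import Counter

# Roll a fair six-sided die 60 times and tally each face.
# With only 60 rolls, single-face counts anywhere from about 6 to 14
# are normal: each count has standard deviation
# sqrt(60 * 1/6 * 5/6) ≈ 2.9 around its mean of 10.
rolls = Counter(random.randint(1, 6) for _ in range(60))
for face in range(1, 7):
    print(f"{face}: {rolls[face]}")
```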
You’re not using a large enough sample size. With only 50 flips, the standard deviation of the number of heads on a fair coin is √(50 × 0.5 × 0.5) ≈ 3.5, so results like 27 or 28 heads are completely unremarkable.
To get observed frequencies anywhere close to the theoretical probabilities, you’d need a sample size of at least 10,000.
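The effect of sample size is easy to see in a simulation. A minimal sketch, again assuming Python’s standard random module (heads_fraction is an illustrative name, not a library function):

```python
import random

# Flip a fair coin n times and return the fraction of heads.
# By the law of large numbers, this fraction converges to 0.5
# as n grows, even though the raw gap |heads - n/2| tends to widen.
def heads_fraction(n: int) -> float:
    heads = sum(random.randint(0, 1) for _ in range(n))
    return heads / n

for n in [50, 500, 5_000, 50_000, 500_000]:
    frac = heads_fraction(n)
    print(f"{n:>7} flips: {frac:.4f} heads ({frac - 0.5:+.4f} off)")
```

With 50 flips, the fraction routinely lands several percentage points away from 0.5; by 500,000 flips it is typically within about a tenth of a percent.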
You would also need to ensure you have a completely true die or coin. Any minted coin is likely to be weighted slightly towards heads or tails just because of its design, so you would need a completely flat, uniform coin. Similarly, a truly fair die is extremely difficult to find commercially; casino dice come closest, and they are expensive and hard to source.