For example, when I flip a coin, I have a 1/2 chance of getting heads and the same chance of getting tails. By that theory, if I toss a coin 50 times, I should get 25 heads and 25 tails.

However, I did 3 rounds of 50 coin flips, with these results:
1. 28 heads, 22 tails
2. 27 heads, 23 tails
3. 27 heads, 23 tails.
I assumed that perhaps the coin wasn’t “true”. Maybe the sides are weighted differently, or maybe I tossed it with a different height/force each time. So I went virtual and switched to having a computer roll a die.

I should have a 1/6 chance of rolling each number from 1 to 6, so in 60 rolls each number should come up 10 times. But in practice my results were:
1: 14 times
2: 9 times
3: 8 times
4: 13 times
5: 6 times
6: 10 times
So how come practice/reality doesn’t align with theory, even when we take human error out of the equation?
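(For reference, “having a computer roll a die” here means something like the minimal Python sketch below; random.randint and Counter are just one way to do it, and the exact code isn’t the point.)

```python
import random
from collections import Counter

# Roll a fair six-sided die 60 times and count how often each face appears.
rolls = [random.randint(1, 6) for _ in range(60)]
counts = Counter(rolls)

for face in range(1, 7):
    print(f"face {face}: rolled {counts[face]} times")
```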
You’re not thinking big enough.
Why did you choose the number 50 for the coin flips? Would you expect that if you flipped it 2 times, you’d always get exactly one heads and one tails? No, of course not. You need to flip it enough times for it to even out, and 50 is not nearly enough.
10,000 flips is a better number, but you’re probably still not going to get exactly 5,000 heads and 5,000 tails. The more flips you do, the closer the split gets to 50/50.
Try 1,000 dice rolls and you’ll see numbers that are closer to 1/6. Try 10,000 rolls and it should be even better. Try 100,000 or 1 million and you should get really nice numbers.
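If you want to see this for yourself, here is a rough Python sketch (the sample sizes and the use of the random module are just for illustration) that rolls a fair die n times and reports how far the worst-behaved face strays from the theoretical 1/6:

```python
import random
from collections import Counter

# For increasing n, roll a fair die n times and measure how far the
# most extreme face's frequency is from the theoretical 1/6 ≈ 16.667%.
for n in (60, 1_000, 10_000, 100_000, 1_000_000):
    counts = Counter(random.randint(1, 6) for _ in range(n))
    worst = max(abs(counts[face] / n - 1 / 6) for face in range(1, 7))
    print(f"{n:>9} rolls: worst face is {worst:.3%} away from 1/6")
```

The raw counts can drift further from a perfect even split in absolute terms as n grows; it’s the percentages that settle down toward 1/6.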
There’s always some random deviation from the exact theoretical proportion, and the typical size of that deviation can be quantified with something called the “standard deviation”. It grows with the number of flips, but much more slowly than the number of flips itself, so as a fraction of the total it shrinks.
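For the coin, for example, the standard deviation of the number of heads in n fair flips is sqrt(n × 0.5 × 0.5), so getting 27 or 28 heads out of 50 (about 3 above the expected 25, versus a standard deviation of roughly 3.5) is completely ordinary. A quick sketch of that calculation (assuming the same fair-coin setup):

```python
import math

# Standard deviation of the number of heads in n fair coin flips:
# sqrt(n * p * (1 - p)) with p = 0.5.
for n in (50, 10_000, 1_000_000):
    sd = math.sqrt(n * 0.5 * 0.5)
    print(f"n = {n:>9}: std dev ≈ {sd:7.1f} heads ({sd / n:.3%} of the flips)")
```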