For example, when I flip a coin, I have a 1/2 chance of getting heads, and the same chance of getting tails. With that theory, if I toss a coin 50 times, I should get 25 heads and 25 tails.

However, I did 3 rounds of 50 coin flips, with results of:
1. 28 heads, 22 tails
2. 27 heads, 23 tails
3. 27 heads, 23 tails.
I assumed that perhaps the coins weren’t “true”. Maybe the sides are weighted differently, maybe I tossed with a different height/force each time. So I went virtual, and switched to having a computer roll a die.

I should have a 1/6 chance of rolling each number from 1 to 6. So in 60 rolls, each number should come up 10 times. But in practice my results were:
1. 14
2. 9
3. 8
4. 13
5. 6
6. 10
So how come practice/reality doesn’t align with theory, even when we take human error out of the equation?
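A minimal Python sketch of this kind of virtual die experiment (illustrative only – the post doesn’t show the original script):

```python
import random
from collections import Counter

# Roll a fair six-sided die 60 times and tally each face.
rolls = Counter(random.randint(1, 6) for _ in range(60))
for face in range(1, 7):
    print(f"{face}: {rolls[face]}")  # expected count is 10 per face, but results scatter
```

Each run produces a different split, which is exactly the variation the question describes.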
>With that theory, if I toss a coin 50 times, I should get 25 heads and 25 tails.
No, that’s not what the theory says. [The theory](https://en.wikipedia.org/wiki/Binomial_distribution) says that any result from 0/50 to 50/0 is possible, but the **average** and the **most probable** outcome is 25/25 – and even that has only an 11% probability. Your actual results have probabilities of 8% (for 28/22) and 10% (for 27/23). Note that these are only slightly less than the maximum. On average, you should expect a deviation of about 3.5 heads from the 25/25 split (the standard deviation of the head count).
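To check those numbers, here is a minimal Python sketch that computes the exact binomial probabilities (illustrative; not part of the original answer):

```python
from math import comb, sqrt

n = 50  # coin flips

# P(exactly k heads in n fair flips) = C(n, k) / 2^n
def prob(k):
    return comb(n, k) / 2**n

print(f"P(25 heads) = {prob(25):.3f}")  # ~0.112 -> the 11% maximum
print(f"P(27 heads) = {prob(27):.3f}")  # ~0.096 -> ~10% for 27/23
print(f"P(28 heads) = {prob(28):.3f}")  # ~0.079 -> ~8% for 28/22
print(f"std dev = {sqrt(n * 0.25):.2f} heads")  # sqrt(n*p*(1-p)) ~ 3.54
```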
This average deviation actually increases with the number of trials, but at a slower rate – as the square root of the number of trials. So, as a percentage of the possible range, the results get closer to the average as the trial count grows.
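A quick simulation illustrates that scaling (again a sketch; the experiment sizes and trial count are arbitrary choices):

```python
import random

# For several experiment sizes n, estimate the average absolute deviation
# of the head count from n/2 over many repeated experiments.
trials = 1000
for n in (50, 500, 5000):
    total_dev = 0
    for _ in range(trials):
        heads = sum(random.randint(0, 1) for _ in range(n))
        total_dev += abs(heads - n / 2)
    avg_dev = total_dev / trials
    # avg_dev grows roughly like sqrt(n), so as a share of n it shrinks
    print(f"n={n:5d}  avg deviation={avg_dev:6.1f}  ({100 * avg_dev / n:.2f}% of n)")
```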