For example, when I flip a coin, I have a 1/2 chance of getting heads, and the same chance of getting tails. By that theory, if I toss a coin 50 times, I should get 25 heads and 25 tails.
​
However, I did 3 rounds of 50 coin flips, with results of:
1. 28 heads, 22 tails
2. 27 heads, 23 tails
3. 27 heads, 23 tails.
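A quick way to see how much a fair coin typically wanders from an exact 25/25 split is to simulate it. Here is a minimal Python sketch (the choice of 10 rounds is arbitrary, just to show the spread):

```python
import random

# Simulate several rounds of 50 fair coin flips and count heads in each round.
for round_number in range(1, 11):
    heads = sum(random.choice(["H", "T"]) == "H" for _ in range(50))
    print(f"Round {round_number}: {heads} heads, {50 - heads} tails")
```

Running this a few times usually gives results much like the ones above: clustered around 25, but rarely exactly 25.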
I assumed that perhaps the coins weren’t “true”. Maybe the sides are weighted differently, or maybe I tossed with a different height/force each time. So I went virtual and switched to having a computer roll a die.
​
I should have a 1/6 chance of rolling each number between 1 and 6. So in 60 rolls, each number should come up 10 times. But in practice my results were:
1. 14
2. 9
3. 8
4. 13
5. 6
6. 10
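The same kind of spread shows up in a simulated die. A minimal Python sketch, assuming a fair six-sided die via random.randint:

```python
import random
from collections import Counter

# Roll a fair six-sided die 60 times and tally how often each face appears.
rolls = [random.randint(1, 6) for _ in range(60)]
counts = Counter(rolls)
for face in range(1, 7):
    print(f"{face}: {counts[face]} times (expected 10)")
```

Even with the human completely removed, the tallies bounce around 10 rather than landing on it exactly.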
So how come practice/reality doesn’t align with theory, even when we take human error out of the equation?
> For example, when I flip a coin, I have a 1/2 chance of getting heads, and the same chance of getting tails. By that theory, if I toss a coin 50 times, I should get 25 heads and 25 tails.
The coin does not know that you plan on flipping it 50 times, and it does not know how you expect it to behave. If coins, dice, etc. behaved so that every set of flips or rolls produced an exactly even distribution of results, they would not be random; they would be predictable.
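One way to see what this answer is describing is the law of large numbers: the *fraction* of heads settles toward 0.5 as the number of flips grows, even though an exact 50/50 split stays uncommon. A small Python sketch, again assuming a fair coin simulated with the random module:

```python
import random

# As the number of flips grows, the fraction of heads drifts toward 0.5,
# even though the raw count rarely lands exactly on half.
for n in [50, 500, 5_000, 50_000, 500_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: {heads} heads ({heads / n:.4f})")
```

The 1/2 and 1/6 figures describe that long-run fraction, not a guarantee about any particular batch of 50 flips or 60 rolls.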
People are really good at looking for patterns and expecting certain behaviors based on past observations. We are so good at it that we see patterns in things that are inherently patternless, and then we don’t understand why those patternless systems don’t behave the way we expect them to.