Pretty much the title.
I understand this is a law, but is there a reason why it happens?
For background: for almost all real-life data like population, GDP, and other real-world stats, the probability of the leading digit being 1 is almost 30%, and it keeps decreasing from there, with 9 being the least probable.
But why does this happen? Is it just a fascinating pattern in randomness?
An important distinction is that this is NOT true for _any_ data. Benford's law is specifically about numbers spanning several orders of magnitude.
The reason it works is that in a range spanning a few orders of magnitude, there simply tend to be more numbers starting with 1.
For example, from 0 to 200 there are 111 numbers that start with 1, only 12 that start with 2, and 11 for each of the digits 3-9.
And as you increase the range to include more hundreds, it starts to equalise: if you looked at the numbers from 0 to 999, it _would_ be distributed equally. But in real life you won't have _exactly_ the numbers from 0 to 999 in a data set. It'll be a rough range, and as soon as you start getting into the 1000s you have more numbers starting with 1 again. So any data set that doesn't have a strictly defined cut-off and spans several orders of magnitude is more likely to contain more numbers starting with 1 than with any other digit.
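If you want to check the counting yourself, here's a quick Python sketch that tallies the leading digit of every number up to a few cut-offs (200, 999, and 2000 are just illustrative choices, nothing special about them):

```python
from collections import Counter

# Tally the leading digit of every number from 1 up to each cut-off.
# The cut-offs are arbitrary; they're just chosen to show the effect.
for cutoff in (200, 999, 2000):
    counts = Counter(str(n)[0] for n in range(1, cutoff + 1))
    print(cutoff, [counts[d] for d in "123456789"])

# 200  -> [111, 12, 11, 11, 11, 11, 11, 11, 11]   (1 dominates)
# 999  -> [111, 111, 111, ...]                    (exact cut-off: all equal)
# 2000 -> [1111, 112, 111, 111, ...]              (1 dominates again)
```

You can see the distribution only evens out at the exact cut-off of 999, and tips back toward 1 the moment the range grows past it.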