People are posting numbers, but here’s where they come from:
The maximal size allowed by the standard is a 177×177 square (that’s “version 40”, which has side length 4·**40**+17). That’s 31329 little squares.
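For reference, here’s that arithmetic as a tiny Python sketch (the `side_length` helper is just for illustration):

```python
# A QR symbol of "version V" is (4*V + 17) modules per side.
def side_length(version: int) -> int:
    return 4 * version + 17

side = side_length(40)
print(side, side * side)  # 177 31329 -> the 31329 little squares mentioned above
```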
Some of those are _structural_: they are there to tell you and your app where everything is. The big squares at 3 of the corners are probably the most notable ones, but there are more, like the smaller alignment squares spread across the code and a few dotted on-off lines that start at the inner corners of the large squares.
There are also a very few that state which error-correction level and which masking pattern the code uses. That’s only a handful of pixels, each stored twice, right beside the big corner squares.
The rest is “bits”, but not all of it is data we get to set. A sizeable chunk is _error correction_: redundancies that keep the code readable even if part of it is missing or can’t be deciphered. You can actually decide on the level of redundancy as part of the aforementioned settings. Let’s say we use the least amount of redundancy (the level that can restore about 7% of the code), which corresponds to the largest number of different codes.
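If it helps, the four standard error-correction levels and their approximate recovery capacities look like this (a minimal sketch; the dictionary is just for illustration):

```python
# Approximate fraction of the code each standard EC level can restore.
RECOVERY = {
    "L": 0.07,  # lowest redundancy -> most room for data (used in the count below)
    "M": 0.15,
    "Q": 0.25,
    "H": 0.30,
}
```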
And if you do an actual count, you end up with 23648 data bits left to our choosing. That’s 2^23648 ≈ 5.7·10^7118 options. If you count the different patterns and settings as different codes despite encoding the very same data, just with a different look, you add a small factor on top of that. Meanwhile, factoring in the smaller possible codes and the higher redundancy levels has only a really tiny effect, nothing you would see in the first dozens of decimal digits.
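And here’s that final count done explicitly (the 2956 data codewords for version 40 at the lowest redundancy level come straight from the 23648 bits above; Python’s big integers make the exact power easy to check):

```python
# Version 40, lowest-redundancy level: 2956 data codewords of 8 bits each.
data_bits = 2956 * 8       # 23648 freely choosable bits
options = 2 ** data_bits   # exact count, thanks to arbitrary-precision integers

print(len(str(options)))   # 7119 decimal digits, i.e. on the order of 10**7118
print(str(options)[:2])    # "57" -> roughly 5.7 * 10**7118, as stated above
```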