It comes from the way modern computers internally treat numbers.
The computers we use today treat all information as a series of on/off states, which we usually write down as 1s and 0s.
They don’t have endlessly many of these; they come in groups of 8.
8 binary digits, each either a 1 or a 0. For example, the number one hundred would be represented as:
0110 0100
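To see this on a machine, here is a quick Python sketch (the format string is just one way to print the bits):

```python
# Show the 8 binary digits of one hundred.
print(format(100, "08b"))  # → 01100100
```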
Just like our normal system of using ten different digits, we run into problems when we run out of room. For example, an old mechanical odometer in a car with only 6 digits to display could count up to 999,999 miles traveled and then roll over to 000,000.
The same goes for the way computers count in binary. Once a byte reaches 1111 1111 it is full, and if you add one more you roll over to 0000 0000 again.
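A tiny sketch of that rollover, masking to 8 bits the way a real one-byte counter would:

```python
value = 0b11111111          # a full byte: 255
value = (value + 1) & 0xFF  # keep only the lowest 8 bits, like real hardware
print(value)  # → 0, the byte has rolled over
```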
In binary, eight ones in a row (1111 1111) is 255 in our normal writing system.
So if you only had one byte of space to count with you could only count up to 255.
Of course computers can count beyond 255, but you need to decide from the beginning that you want to use more than one byte for counting.
Two bytes let you count up to 65,535, three bytes up to 16,777,215, and four bytes up to 4,294,967,295.
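Those limits all come from the same rule: 2 raised to the number of bits, minus one. A quick check:

```python
# Largest unsigned value that fits in n bytes: 2**(8*n) - 1
for n_bytes in (1, 2, 3, 4):
    print(n_bytes, "bytes →", 2 ** (8 * n_bytes) - 1)
```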
These numbers (often rounded) may be familiar to you. You may have heard, for example, that a technology allowed for up to 16 million colors (16,777,216 to be exact). Why that number? Because three bytes were used to number all the colors, and that is how many values three bytes can hold.
Another thing is that sometimes you want to include negative numbers. At that point half the possible values get used for negative numbers, and with a single byte you can only count up to 127; if you add one more you roll over into -128.
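You can watch that signed rollover happen by reinterpreting the raw byte. This sketch uses Python's `struct` module, where format code `"b"` means one signed byte:

```python
import struct

raw = struct.pack("b", 127)               # 127 stored as one signed byte
wrapped = (raw[0] + 1) & 0xFF             # add one at the raw-byte level
print(struct.unpack("b", bytes([wrapped]))[0])  # → -128
```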
These limits crop up all the time with computers and electronics. Once you realize that these numbers are special you will find them all over the place.
You see 255 a lot with computers because it’s a nice round number in the binary number system. Much in the same way that humans often round to ten or one hundred or whatever in our decimal number system.
Binary is a base-2 number system that only has two binary digits (“bits”) – 0 and 1 – before you have to “carry over” to the next column. Compared to our decimal base-10 number system that has ten digits (0 to 9) before carrying over. Binary is used by computers because digital electronic components are easier and cheaper to make if they only have to distinguish between high and low voltages.
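Counting in both systems side by side makes the carrying visible; a small illustrative loop:

```python
# Decimal on the left, binary on the right; note the extra column after 1.
for n in range(5):
    print(n, "=", bin(n))
```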
255 in binary is represented as 11111111. That’s an 8-bit number. A group of 8 bits is so common in computing that it has its own special name: a byte. Bytes appear in all sorts of places – the way electrical signals are sent to a CPU to perform certain operations, the size of the CPU registers (immediate working memory), the size of addresses in RAM, the way packets are sent in computer networks, etc. Most of the electronic components of a computer are designed to handle a particular number of bytes; it’s central to computer hardware.
So because 255/one byte is so important in the hardware, naturally we also see it pop up in software, like the amounts of Red, Green, Blue and Opacity when defining a colour, or when mapping binary data to alphanumeric symbols with the ASCII encoding standard, or when determining the size of an integer variable that stores the number of lives or [level number in a video game](https://www.geekwithenvy.com/2013/04/why-pac-man-stops-working-after-level-255/), and so on.
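For instance, a 24-bit colour is just three of those bytes packed side by side; a hypothetical packing in Python (the example colour values are assumed):

```python
# Pack red, green, blue (one byte each) into a single 24-bit number.
r, g, b = 255, 128, 0                 # a shade of orange
colour = (r << 16) | (g << 8) | b
print(hex(colour))       # → 0xff8000
print(colour < 2 ** 24)  # → True: any such colour fits in three bytes
```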
It is absolutely not the maximum size of a number used in computing though, just a popular choice when defining the size of certain integer values that don’t need to go higher than a couple of hundred and don’t need negatives. Numbers using 2 bytes, 4 bytes, or 8 bytes are also very common. Your modern 64-bit CPU can use up to 8 bytes to address memory/RAM, for example. And integer variables of arbitrary size can be used when dealing with large numbers.
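Python's built-in integers are one example of that last point; they grow past any fixed byte size automatically:

```python
big = 2 ** 64            # one more than the largest 8-byte unsigned value
print(big)               # → 18446744073709551616
print(big.bit_length())  # → 65, so a 9th byte would be needed
```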