They had to represent characters somehow. And they had to do it in binary, because binary is how computers work.
You can represent 2 unique things with 1 bit: 0 and 1. You can represent 4 with 2 bits: 00, 01, 10, and 11. You can represent 2^N (“2 to the power of N”) things with N bits.
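If you want to see that doubling for yourself, here's a tiny Python sketch (nothing ASCII-specific, just listing every possible bit pattern):

```python
from itertools import product

# Every possible pattern of N bits; each extra bit doubles the count (2**N total).
for n in range(1, 4):
    patterns = ["".join(p) for p in product("01", repeat=n)]
    print(f"{n} bit(s): {len(patterns)} patterns -> {patterns}")

# 1 bit(s): 2 patterns -> ['0', '1']
# 2 bit(s): 4 patterns -> ['00', '01', '10', '11']
# 3 bit(s): 8 patterns -> ['000', '001', '010', '011', '100', '101', '110', '111']
```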
So they counted up the number of characters in English they wanted: lower case, upper case, numbers, punctuation, and a number of special "control" characters. It turns out that number fit just inside 2^7 (= 128), so ASCII is a 7-bit code. Computers settled on handling data in 8-bit chunks, and they called that a byte, which gives you 2^8 (= 256) possible values and a spare bit left over.
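To get a feel for the tally, here's a rough back-of-the-envelope count in Python (the breakdown is my own approximation of the categories, not the original committee's list):

```python
import string

# A rough tally of what needed a code:
letters = 26 * 2                        # a-z and A-Z
digits = 10                             # 0-9
punctuation = len(string.punctuation)   # 32 symbols like ! " # $ % ...
controls = 33                           # "invisible" codes: newline, tab, etc.
space = 1

print(letters + digits + punctuation + controls + space)  # 128 -> fits in 7 bits
```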
So a byte could hold 256 different values, which was more than enough. And then they just started assigning numbers to the characters. There was some logic to it, for instance "B" is one greater than "A", for obvious reasons. But otherwise they just decided. Picked a number. No science behind it.
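You can peek at those assigned numbers directly in Python, since ord and chr are just lookups into that same table:

```python
# The codes the ASCII designers picked: letters sit in consecutive runs.
print(ord("A"), ord("B"), ord("C"))   # 65 66 67
print(ord("a"), ord("b"), ord("c"))   # 97 98 99
print(ord("0"), ord("9"))             # 48 57

# Going the other way: number -> character.
print(chr(65))                        # A
```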
And then they designed computers to expect that. When you type an “A” and the keyboard sends “01000001”, the computer sees that as “A” because there’s literally a chip in the computer which knows that 01000001 is an A. No magic. Just hardware designed to match the ASCII table, which someone made by assigning characters to numbers.
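Here's a quick sketch of that exact lookup, just to show 01000001 is nothing more than the number 65 under a different name:

```python
bits = "01000001"                 # the pattern from the example above
number = int(bits, 2)             # read the 8 bits as a plain number
print(number)                     # 65
print(chr(number))                # A  -- the ASCII table maps 65 to "A"

# And in reverse: the character "A" really is stored as that bit pattern.
print(format(ord("A"), "08b"))    # 01000001
```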