How did ASCII assign letters a specific number? How did that system come to be? How did early computers adapt to it?

For example: how was the letter A given the binary code of “01000001”? (I really don’t know anything about this but I’m interested)

8 Answers

Anonymous

> How did ASCII assign letters a specific number? How did that system come to be?

The answer to this isn’t really that interesting. ASCII is a camel standard (a camel being, as everyone knows, a horse designed by committee): before ASCII, every computer manufacturer sort of did their own thing for character codes, which made interoperability a pain in the ass, so a committee convened under the American Standards Association (the ancestor of ANSI) came together and created an “American” (US) standard.
They standardized a 7-bit code, giving 128 values. The first 32 values (0–31) are control characters (things like BEL, which rang the terminal bell and is now a beep), 127 is DEL (another control code), and everything in between is the printable set: the alphabet, digits, the space character, and the most common punctuation. Basically all the stuff you see on a keyboard.
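
If you want to see that layout concretely, here’s a minimal sketch (Python 3, purely for illustration) that walks all 128 values and labels each one as a control code or a printable character:

```python
# Walk the 128 values of 7-bit ASCII and label each one.
for code in range(128):
    if code < 32 or code == 127:        # 0-31 plus DEL (127) are control codes
        label = "control"
    else:                               # 32-126 are the printable characters
        label = f"printable {chr(code)!r}"
    print(f"{code:3d}  0b{code:07b}  {label}")
```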

Letters and digits were assigned in their “traditional” order (A–Z, a–z, 0–9) and most punctuation sits in contiguous runs. The exact positions were also chosen so the bit patterns are convenient: “A” is 65 (hex 0x41, binary 01000001) and “a” is 97 (0x61), so upper and lower case differ by a single bit, and the digits start at 0x30, so the low four bits of a digit’s code are its numeric value.
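
To tie that back to the original question about “01000001”, here’s a quick Python sketch (just an illustration of the published table, nothing authoritative):

```python
# 'A' is code 65: hex 0x41, binary 01000001.
print(ord("A"), hex(ord("A")), format(ord("A"), "08b"))   # 65 0x41 01000001

# Upper- and lower-case letters differ by one bit (0x20),
# so case conversion is a single bit flip.
print(chr(ord("A") | 0x20))    # 'a'
print(chr(ord("a") - 0x20))    # 'A'

# Digits start at 0x30, so a digit's low four bits are its value.
print(ord("7") & 0x0F)         # 7
```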

> How did early computers adapt to it?

Badly, at first. Until 8-bit extensions were properly standardized there were a bunch of incompatible “extended ASCII” systems (for example [PETSCII](https://en.wikipedia.org/wiki/PETSCII), used by Commodore computers), and you’ll notice parts of PETSCII don’t even match the 1963 ASCII table it was derived from. The printable stuff is more or less the same, but the range between upper-case Z and lower-case a differs, and a bunch of the control codes are replaced with Commodore-specific function codes.

As more and more software was written to produce and expect ASCII-conforming data, though, it became not just a published US standard but a de facto standard for interoperability.

It’s not the only one, though: [EBCDIC](https://en.wikipedia.org/wiki/EBCDIC) is still around (and is not compatible with ASCII), and [JIS X 0201](https://en.wikipedia.org/wiki/JIS_X_0201) was commonly used in Japan; it mostly matches printable US-ASCII (swapping a couple of characters, such as the yen sign for the backslash) but differs significantly in the extended 8-bit range, which it uses for half-width katakana.
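
To make the EBCDIC incompatibility concrete, here’s a small sketch using Python’s built-in `cp037` codec (one common EBCDIC code page, chosen here just for illustration):

```python
# The same letter gets different byte values under ASCII and EBCDIC.
print("A".encode("ascii"))    # b'A'     -> byte 0x41
print("A".encode("cp037"))    # b'\xc1'  -> byte 0xC1 in EBCDIC code page 037

# So EBCDIC bytes read as ASCII are gibberish (0xC1 isn't even valid ASCII).
print(b"\xc1".decode("ascii", errors="replace"))   # replacement character, not 'A'
```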

Today Unicode (specifically the variable-length UTF-8 encoding) has largely replaced ASCII by consuming it: the 128 ASCII code points are encoded in UTF-8 as single bytes with exactly their 7-bit ASCII values, control characters included, while multi-byte sequences extend the repertoire to many other languages and graphical characters (see the sketch below).
As with ASCII there was a period where Unicode adoption and interoperability were pretty awful; competing encodings like UTF-16 and UTF-32 were tried but never became the dominant interchange format, in large part because they are not byte-compatible with ASCII.
Even today some software *assumes* ASCII encoding and fails to process Unícödē Characters ☹️.
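
Here’s that backward-compatibility point as a runnable sketch (Python 3, for the sake of example):

```python
# Pure-ASCII text produces byte-for-byte identical output under ASCII and UTF-8.
s = "Hello, ASCII!"
print(s.encode("ascii") == s.encode("utf-8"))   # True

# Anything outside ASCII becomes a multi-byte sequence in UTF-8,
# which is exactly what ASCII-only software chokes on.
print("é".encode("utf-8"))   # b'\xc3\xa9'      (two bytes)
print("☹".encode("utf-8"))   # b'\xe2\x98\xb9'  (three bytes)
```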
