How tf does binary code work.

600 views

it’s just a bunch of 0’s and 1’s.. like I can just put a bunch of them and somehow make a sentence??? like what does this mean -> 010100101001010010 (i legit just spammed 0’s and 1’s)


26 Answers

Anonymous 0 Comments

Multi-part answer:

1. Everything a computer does or stores (or where it is stored) can be represented by a number (or a sequence of numbers). “G” is a number. “ADD X Y” is a number. The place in memory where the result of ADD X Y is stored is a number.
2. Numbers can be represented using different symbols in a particular order. In normal use, we have the 10 “arabic” symbols: 0,1,2,3,4,5,6,7,8,9. This is known as “decimal,” or “base 10.” The meaning of each numeral depends on where in the sequence it is: the numeral that represents units is furthest to the right, the numeral representing the number of 10s is second from the right, 100s is 3rd from the right, 1000s is 4th from the right, and so forth. You can remember this because 10^(0)=1, 10^(1)=10, 10^(2)=100, 10^(3)=1000, and so forth. When we see a number like 1234, we interpret it as 4 units + 3 × 10 units + 2 × 100 units + 1 × 1000 units = 1,234.
3. Binary is doing the **exact same thing**, except that there are only two symbols, 0 and 1. But because there are only two symbols, the value of each position is a power of 2. So the first position is units, the second is 2^(1) = 2, the third is 2^(2) = 4, then 2^(3) = 8, then 2^(4) = 16, and so forth. Each digit is known as a *bit*, so if a number has 4 bits, then 1011 is equivalent to (reading from the right) 1 unit + 1 “2 units” + 0 “4 units” + 1 “8 units” = 11. So any sequence of 0’s and 1’s represents a number. And it’s **exactly** the way we usually use base 10 numbers, except that each place is a power of 2 instead of a power of 10.
4. OK, so you see a really, really long sequence of 0’s and 1’s. On an 8-bit system, each number is 8 0/1 “bits” long. An 8-bit number can represent 256 different numbers. A 16-bit system uses 16 0/1 bits to represent 65536 different numbers, and so forth. While the computer may store these massively long sequences of 0/1, everything gets chopped up into the right number of bits and worked on chunk by chunk.

BONUS part: People can’t really do anything with 0’s and 1’s. They all start to look the same and your eyes start to hurt staring at them. Is there an easier way to still kind of work in binary (powers of 2), with something easier to look at? Yes, the **hexadecimal** number system. Instead of representing the numbers 0 to 15 with a sequence of 4 0/1’s, why not use 16 symbols and work in “base 16?” Since 16 is a power of 2, it’s easy to line up a base 16 number with a base 2 number, except that it’s shorter. What are these 16 symbols? The standard 0-9 arabic numerals, along with the symbols A, B, C, D, E, and F for 10-15. When you see a number that looks like, say, 3BE4, that is equal to 3×4096 + 11×256 + 14×16 + 4 = 15332 in decimal, or 0011 1011 1110 0100 in binary. How do I know that’s the binary equivalent of 3BE4? Because 0011 = 3 in decimal/hex, 1011 = 11 in decimal, which is B in hex, 1110 is 14 in decimal, which is E in hex, and 0100 is 4 in decimal/hex. An 8-bit number can be “compressed” into just two hexadecimal symbols, and a 16-bit number can be represented as 4 hex symbols.
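For instance, you can check the 3BE4 example with Python's built-in base conversions:

```python
# Checking the 3BE4 example with Python's built-in base conversions.
n = int("3BE4", 16)       # parse a hexadecimal string
print(n)                  # 15332 in decimal
print(format(n, "016b"))  # 0011101111100100 (padded to 16 bits)
print(format(n, "X"))     # 3BE4, back to hex
```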

Anonymous 0 Comments

Binary is just a number system with two digits instead of 10.

If you look at a decimal number like 189, the value of a digit depends on its position. The digit farthest to the right is worth 10^0 = 1, the next is 10^1 = 10, then 10^2 = 100.

So 189 means 1 × 10^2 + 8 × 10^1 + 9 × 10^0 = 1 × 100 + 8 × 10 + 9 × 1

Binary works the same but the base is 2 instead of 10

So the values of the binary digits, expressed in decimal and starting from the right, are 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8 and so on

the binary number 10111101 would be, if we convert it to decimal,

1 × 2^7 + 0 × 2^6 + 1 × 2^5 + 1 × 2^4 + 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0

= 1×128 + 0×64 + 1×32 + 1×16 + 1×8 + 1×4 + 0×2 + 1×1 = 128 + 32 + 16 + 8 + 4 + 1 = 189
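You can check that conversion in a couple of lines of Python:

```python
# 10111101 in binary is 189 in decimal...
print(int("10111101", 2))  # 189
# ...and back again:
print(format(189, "b"))    # 10111101
```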

So binary is just another way to store numbers. What the number means depends on what standard you use to encode data.

Let’s use a format where we use 3 decimal digits per letter, with a=097, b=098… z=122. Which numbers we use for the letters is arbitrary; the one who creates the message and the one who reads it just need to agree.

We can now write the message 114101100100105116. We know there are 3 digits per letter, so we split it into 114 101 100 100 105 116. If you use the system above you can convert it to “reddit”

We can also write it as a binary number, using a table where 8 binary digits represent the decimal numbers 0 to 255. The message above now becomes

011100100110010101100100011001000110100101110100

and split in groups of 8 digits

01110010 01100101 01100100 01100100 01101001 01110100

This is just another way to write

114 101 100 100 105 116

To understand the meaning, you need to know how we encoded the message: it was a=097, b=098… z=122

Those numbers might look strange to pick, but that is in fact how text is encoded in the ASCII standard. ASCII, and the extensions to it, is how regular text is encoded.
You can look up the values in binary and decimal for the character at https://en.wikipedia.org/wiki/ASCII#Printable_characters

The values make sense in binary too. There is a table from the time when 7 digits were used; just add one 0 to the front. https://en.wikipedia.org/wiki/File:USASCII_code_chart.png

The values are picked so that A is a 1 with 0100 in front and a is a 1 with 0110 in front. You can flip a single bit to change an uppercase letter to the corresponding lowercase one.

The short version: binary is just another way to write numbers. How you interpret a number requires an external definition, just like when you store information as a decimal number.
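Here is a short Python sketch of the scheme described above: 3 decimal digits per letter using the ASCII values, the 8-bit binary form, and the one-bit case flip.

```python
# Encode "reddit" as 3 decimal digits per letter (a=097 ... z=122).
message = "reddit"
decimal_form = "".join(format(ord(c), "03d") for c in message)
print(decimal_form)  # 114101100100105116

# The same message as 8-bit binary groups.
binary_form = " ".join(format(ord(c), "08b") for c in message)
print(binary_form)   # 01110010 01100101 01100100 01100100 01101001 01110100

# The upper/lowercase trick: flipping the bit worth 32 (0010 0000)
# turns 'A' (0100 0001) into 'a' (0110 0001).
print(chr(ord("A") ^ 0b00100000))  # a
```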

Anonymous 0 Comments

A Z80 was an early 8 bit processor, so it is “simple”. It had 256 possible one-byte instructions, and most of them move data from one register (number storage) to another. If you check the chart:

[https://clrhome.org/table/](https://clrhome.org/table/)

It shows a grid view. You take a number from 0 to 255, convert it to base 16 (hexadecimal, count like you have 16 fingers), and use the first and second digits to look up what that operation code (opcode) does.

All this goes back to 1 and 0, which is how you represent numbers.

If you want to see how you store letters and numbers, ASCII is the standard.

[https://www.asciitable.com/](https://www.asciitable.com/)

Column 2 in the table has numbers, column 3 has letters. Every character is just assigned a number, so the computer translates a 65 to an “A”, and 66 is “B”.
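As a sketch, here is how a program might turn a byte value into the two hex digits used to index a grid like that opcode table (118 is just an arbitrary example value, not a claim about what that particular Z80 opcode does):

```python
# Turn a byte value into its two hex digits, which act as the
# row and column in a 16x16 grid like the Z80 opcode table.
opcode = 118                     # arbitrary example byte
hex_form = format(opcode, "02X")
print(hex_form)                  # 76
row, column = hex_form[0], hex_form[1]
print(row, column)               # 7 6
```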

Anonymous 0 Comments

It’s a (non secret) code.

In the most familiar systems, groups of eight bits often encode one letter. Ignoring the first “01” in your eighteen-bit example, “01001010 01010010” means “JR”.

There are 128 ways you can combine seven bits, which is enough to assign a combination to each of the English capital and lowercase letters, digits and special symbols you see on a typical keyboard.

With the addition of the 8th bit, which is the first 0 in each of those two groups you gave, you now have the possibility of either encoding 128 more of your favorite non-English letters and funny symbols, or doing something more complicated in which a 1 in the first position signals that you’re using more than one group to encode a single character.

Anonymous 0 Comments

Binary is a different way of counting. In base 10/decimal, the system we use to count normally, we have different “columns” that digits are in: the 1’s column, the 10’s column, the 100s, etc. If you look at the digit in a column, you know how many 100s, 10s, or 1s there are. It’s called base 10 because each column is one power of 10 higher than the previous.

Binary is base 2. This means each column is a power of 2 rather than a power of 10. We use it because electricity in a transistor in your computer can either be on or off, 1 or 0. Counting with it works the same: we have a 1s column, a 2s column, a 4s column, etc. So 1010 in binary would convert to 10 in decimal. Try it yourself, start with small numbers, and remember it’s just powers of 2. We call each column in binary a “bit”, and 8 bits make up a byte. From there you see how memory works! A kilobyte is roughly 8000 bits, 1s or 0s. The 18 bits you entered would convert to the decimal number 84562.

To represent letters on a screen from binary, we first assigned every letter and symbol a number. This is called ASCII, and you can find a table online if you’re curious. Standard ASCII covers 0 to 127; extended tables fill in 128 to 255. Each character is the same length, a single byte (that’s 8 bits!), so your random string of binary doesn’t mean anything in standard ASCII, since it’s 18 bits long. If we cut the last 2 bits off, we would have “Rö” (the second character comes from an extended table).
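A quick way to check that “Rö” claim, assuming the extended table in question is the old IBM PC code page 437 (which Python ships as the `cp437` codec):

```python
# Take the first 16 of the 18 bits and decode them as two 8-bit values.
bits = "010100101001010010"[:16]
values = [int(bits[i:i + 8], 2) for i in (0, 8)]
print(values)                         # [82, 148]
# 82 is 'R' in plain ASCII; 148 (0x94) is 'ö' in code page 437.
print(bytes(values).decode("cp437"))  # Rö
```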

Anonymous 0 Comments

Binary code doesn’t make sense to us because it’s not written for us, it’s written for machines.

All computers are based on transistors, which are little gates (or switches). To know what to do, they need instructions, and the two states a switch can take are “on” or “off”, which we translate to a 1 or a 0.

Strings of binary are strings of instructions, and we’ve designed the computers (and the tools around them) so that the instructions we can write get converted into instructions the machine reads.

There’s a huge step (multiple steps, actually) between writing code and getting binary code for a machine out the other end; that’s what compilers do.

Anonymous 0 Comments

> How tf does binary code work.

Somebody says “Here’s how we’ll interpret these ones and zeros,” and then somebody uses that interpretation to do something — usually a computer program, but it could also be a chip or a circuit or even something non computer related.

> like I can just put a bunch of them and somehow make a sentence

If you define sequences of ones and zeros to be letters (and other keyboard symbols like punctuation marks), then yes.

Originally every computer maker made up their own sequences of 0’s and 1’s for letters and punctuation marks and so on. But this got unsustainable once we started to make more computers and we wanted to make different kinds of computers able to talk to each other. So in the 1960’s a committee came up with a standard table called [ASCII](https://en.wikipedia.org/wiki/ASCII). Most computers use ASCII or its descendants today.

> what does this mean -> 010100101001010010

It means what you want it to mean. If you think maybe it’s a sentence in ASCII, you go by the ASCII table, and characters are 7 bits in the table, so breaking it into 7-bit groups you get 0101001 0100101 with four bits 0010 left over. Which translates to the following two characters:

)%

Most computers use 8-bit bytes though. If you interpret it as 8-bit bytes you get 01010010 10010100 with two bits 10 left over. The first character is a capital letter R in ASCII, the second one isn’t an ASCII character at all.

Conveniently, MS-DOS (an old operating system) has a [bigger table](https://en.wikipedia.org/wiki/Code_page_437) that assigns a symbol to all possible 8-bit bytes. The table’s in base-16 (“hexadecimal” or “hex” for short), and binary to hex conversion is easy; you just break the digits into blocks of 4 and translate them like this:

0000 -> 0 0001 -> 1 0010 -> 2 0011 -> 3
0100 -> 4 0101 -> 5 0110 -> 6 0111 -> 7
1000 -> 8 1001 -> 9 1010 -> A 1011 -> B
1100 -> C 1101 -> D 1110 -> E 1111 -> F

So 10010100 translates to 94 hex, and row 9, column 4 of the MS-DOS table says it represents an o with two tiny insect friends, like this: ö ([see here for more details](https://en.wikipedia.org/wiki/%C3%96) ). So with 8-bit groups interpreted with the MS-DOS table, your sequence corresponds to the two letters:

Rö

Maybe it’s part of some text about [Wilhelm Röntgen](https://en.wikipedia.org/wiki/Wilhelm_R%C3%B6ntgen)?

The 128 possible ASCII characters, or the 256 possible MS-DOS characters, work well enough for English and most western European languages. But they aren’t nearly enough to represent all the other alphabets of the world at once (India, Russia, Middle East, the many African languages) and certainly aren’t enough for languages like Chinese with thousands of characters.

So after a multi-decade compatibility nightmare of multitudes of different national/regional text coding systems used by different parts of the world, in the past 15 years or so most programs and OS’s have standardized on a coding scheme called [UTF-8](https://en.wikipedia.org/wiki/UTF-8), whose mission is to standardize the sequence of 1’s and 0’s everyone uses to represent every symbol in every writing system humanity has ever invented (including stuff like 日本語 Japanese characters and a downright frightening number of symbols and emojis 😇 ☢️ 🐢 😂 which, you’ll notice, I can put in a Reddit post with no trouble, even if neither you nor I nor the IT elves of Reddit have selected “Japanese” as our OS’s main language).
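Here is a small Python sketch of how UTF-8 spends a variable number of bytes per character: one byte for plain ASCII, more for everything else.

```python
# UTF-8 uses 1 byte for plain ASCII and up to 4 bytes for other
# writing systems and emoji.
for ch in ["R", "ö", "日", "😂"]:
    encoded = ch.encode("utf-8")
    bits = " ".join(format(b, "08b") for b in encoded)
    print(ch, len(encoded), "byte(s):", bits)
```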

You could also interpret your sequence of bits as a number. Usually numbers are done like this:

– Read the bits off from right to left
– Multiply the first (rightmost) bit by a place value of 1
– Multiply each next bit by a factor 2 times the previous bit’s place value
– Add all the products together

You read decimal numbers the same way, except you go up by a factor of 10.

So in our number system, 1492 = 2×1 + 9×10 + 4×100 + 1×1000. In binary, your number is:

01 01001010 01010010 =
0×1 + 1×2 + 0×4 + 0×8 + 1×16 + 0×32 + 1×64 + 0×128
+ 0×256 + 1×512 + 0×1024 + 1×2048 + 0×4096 + 0×8192 + 1×16384 + 0×32768
+ 1×65536 + 0×131072

which means 010100101001010010 is the number 84,562.
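The same arithmetic, sketched in a few lines of Python:

```python
# The 18-bit string read as one number.
bits = "010100101001010010"
print(int(bits, 2))        # 84562

# The same sum done the long way, right to left:
total = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(total)               # 84562

print(format(84562, "X"))  # 14A52 in hexadecimal
```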

Anonymous 0 Comments

>How tf does binary code work.
>
>it’s just a bunch of 0’s and 1’s.

There are really two fundamental concepts you need to understand.

One is the binary number system. You’re used to counting in decimal (base-10) – a number system with ten unique digits – because it’s what you were taught in school, and it only really exists because we have ten fingers. But you can use a number system with an arbitrary number of digits, and binary (a synonym for base-2) is one of them. And it’s easy enough to convert numbers between these number systems: 1100 in binary is 12 in decimal, or C in hexadecimal (base-16, where we use letters A-F as symbols for 10-15 in decimal) and so on. It only requires basic math to do the conversion, and people agreeing about what symbols to use for each digit so we can communicate those ideas correctly.

The second concept to understand is that “we” (smart people in the 1960s) came up with a system called ASCII to map (encode) letters of the alphabet (and punctuation, etc.) into numbers. It’s entirely arbitrary; it only has meaning because we give it one. ASCII text today is usually stored in groups of 8 binary digits (“bits”). We can take any stream of bits, divide it into groups of 8 bits (a “byte”), and map it to English characters. If it doesn’t divide evenly, or doesn’t result in text that makes sense, it’s probably not ASCII data – how to interpret that data is determined by the human. ASCII is just one of many conventions that may be used.

>like I can just put a bunch of them and somehow make a sentence???

You can’t. If you’re using ASCII, you need to specify that, and follow the rules of ASCII. You need to use a multiple of 8 bits. And whatever numbers you type will be [mapped to specific characters](https://www.asciitable.com/). If you type 01010001 01011011 01111000 then that maps to “Q[x”. It doesn’t mean anything in English. I’m sure as a child you encountered puzzles that mapped A=1, B=2, … Z=26 or something. ASCII isn’t fundamentally different from that, you can’t just randomly type 478907239735 and expect other people to make sense of it.
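That lookup is exactly what a one-liner like this does, using Python's built-in `chr`, which follows the ASCII table for these values:

```python
# Three 8-bit groups from the example above, decoded via the ASCII table.
groups = ["01010001", "01011011", "01111000"]
print("".join(chr(int(g, 2)) for g in groups))  # Q[x
```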

>like what does this mean -> 010100101001010010 (i legit just spammed 0’s and 1’s)

By itself, without any context, it means nothing. You’ve typed an 18-digit number that *might* be binary, since it begins with 0 and only contains 0s and 1s (decimal numbers usually drop leading zeroes, while binary numbers are usually a fixed length because of the way electronics use them, but again that’s just a convention, not a hard rule). All we know is that since it’s not a multiple of 8 digits, it’s definitely not ASCII. If you were feeding this data to a computer, it would be up to the human (user or programmer) to specify how the data should be interpreted. When you open a random file in Notepad, for example, you’re telling the computer “interpret this file’s data as ASCII text”, even if it’s not (in which case you get gibberish).

Anonymous 0 Comments

Binary is just another way of writing numbers.

Binary is based on powers of two instead of powers of ten. So instead of ‘how many ones, tens, hundreds, thousands… are in this number?’ it’s ‘how many ones, twos, fours, eights, sixteens… are in this number?’

For example, ‘10010’ from left to right means ‘one sixteen, no eights, no fours, one two, no ones’. So that’s binary for eighteen. In regular numbers, that would be ’18’ (one ten, one eight).
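If you want to check conversions like that yourself, Python (like most languages) has built-ins for it:

```python
# 'one sixteen, no eights, no fours, one two, no ones' is eighteen.
print(int("10010", 2))  # 18
print(format(18, "b"))  # 10010
```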

Anyway so they’re just numbers. Now how do you write stuff with numbers? Any way you’d like! You could assign a number to each letter, for example. But you’ll also need ways to encode punctuation, and special characters, and emojis…

The most common solution in use for text is called [Unicode](https://en.wikipedia.org/wiki/Unicode), which is an encoding we’ve agreed to use that maps numbers to all of these characters.

But computers don’t really ‘read’ characters, they just need to read instructions so that they can run programs. These instructions are encoded as numbers in what’s known as [machine code](https://en.wikipedia.org/wiki/Machine_code). Like Unicode, it’s just a code we’ve agreed on that connects numbers to meanings – in this case, stuff like ‘change this bit, or read me this piece of your memory’. That’s how computers can run programs from binary!

Anonymous 0 Comments

I’m only adding onto what others have said.

Computers deal with two kinds of stuff: instructions and data. They’re the verbs and nouns of binary.

If you’re talking about instructions (the programs that run), then the ones and zeros describe locations and what to do when you get to those locations. “At place 010100101001010010, get the number you find and add it to whatever is in built-in place blah.” The instructions have addresses, as do devices (mouse, keyboard, pixels on your monitor).

When you ask what ‘010100101001010010’ means, then we’re talking about the nouns — data. Someone mentioned ASCII earlier, which gives the most basic (US English) assignments of 7-digit binary data to letters, numbers, some punctuation, and some basic formatting (new line, ring a bell, etc).

You provided 18 digits of binary. Unicode text (the much, much larger set of worldwide character mappings) is most commonly stored in sets of 8 bits (one byte) each. We can simply look up these values in the various Unicode or ASCII charts. Since you have only 18 digits, we can either:

* Pad out zeroes to the left (beginning) of the binary string when we don’t have enough. In other words, a 4-byte (32-bit) Unicode version of your ‘010100101001010010’ becomes ‘00000000000000010100101001010010’;
* Split it into smaller pieces using fewer bits per value. In your example, we can have:
  * 01, 01001010, and 01010010 as separate one-byte values (again, we can pad out zeros in front of any number that is too short);
  * 0101, 0010100, and 1010010 for 7-bit ASCII values (which were more important in the data-compression days of dial-up modems);
  * or even 010100, 101001, and 010010 for six-bit (uppercase) ASCII.

So let’s pull up [some wiki pages](https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)) and get going! We’ll start with the six-bit sets and get larger bit values as we go.

Notice that the pages refer to ‘U+003F’ instead of ‘00111111’ for a question mark. This is a basic conversion of each group of four binary digits into one of sixteen characters, aka hexadecimal. ‘F’ is ‘1111’, ‘2’ is ‘0010’, and so forth. I will refer to the hex values by putting a lowercase ‘h’ after each one.

six-bit:

* 010100 == 14h == ‘DC4’, device control 4 (see ‘DC2’ below)
* 101001 == 29h == ‘)’ (close-parenthesis)
* 010010 == 12h == ‘DC2’, [device control 2](https://en.wikipedia.org/wiki/C0_and_C1_control_codes), which varies by OS and really does not apply to non-teletype machines.

seven-bit:

* 0000101 == 05h == ‘ENQ’, [the enquiry character](https://en.wikipedia.org/wiki/Enquiry_character). This goes back to the days of teletype machines.
* 0010100 == 14h == ‘DC4’ (see ‘DC2’)
* 1010010 == 52h == ‘R’, our first actual letter so far.

eight-bit:

* 00000001 == 01h == ‘SOH’, start of heading (more 1960s print-only instructions)
* 01001010 == 4Ah == ‘J’
* 01010010 == 52h == ‘R’ again.
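The padding-and-splitting above can be sketched as a small helper (a hypothetical `split_bits` function, not from any standard library):

```python
# Chop a bit string into n-bit groups from the right, padding the
# leftmost group with zeros when the length doesn't divide evenly.
def split_bits(bits, n):
    pad = (-len(bits)) % n
    bits = "0" * pad + bits
    return [bits[i:i + n] for i in range(0, len(bits), n)]

s = "010100101001010010"
print(split_bits(s, 8))  # ['00000001', '01001010', '01010010']
print(split_bits(s, 7))  # ['0000101', '0010100', '1010010']
print(split_bits(s, 6))  # ['010100', '101001', '010010']
```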

Now we move to the [larger Unicode values](https://en.wikipedia.org/wiki/List_of_Unicode_characters). I will leave these to the reader as an exercise.

* 0000000000000001
* 0100101001010010
* 00000000000000010100101001010010

tl;dr: ‘010100101001010010’ at best starts a heading (that ‘SOH’ character) and then says ‘JR’.