How tf does binary code work.


it’s just a bunch of 0’s and 1’s.. like I can just put a bunch of them and somehow make a sentence??? like what does this mean -> 010100101001010010 (i legit just spammed 0’s and 1’s)



Anonymous 0 Comments

Man that’s a hard one to explain like you’re 5. But I’ll try I guess lol. So binary is a number system in base 2. Regular numbers are in base 10. Making sentences is a method of using binary to stand for letters on a computer; it’s not the binary itself per se.

So back in the day, a standards group published ASCII (the American Standard Code for Information Interchange). It was a way to make a computer understand that a certain combination of binary digits could represent a letter.

So the binary version of 65 equaled A and 66 equaled B and so on. That’s all I got before going all math whiz, hope others can add more lol
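If you want to poke at that mapping yourself, here’s a quick sketch in Python (chosen just because it’s easy to read; its built-in ord()/chr() use the same numbering as ASCII for these characters):

```python
# ASCII gives every character a number; Python's ord()/chr() use that same numbering.
print(ord("A"))  # 65
print(ord("B"))  # 66
print(chr(65))   # 'A'
print(chr(97))   # 'a' (lowercase letters start at 97)
```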

Anonymous 0 Comments

Why do we write our ABCs? They’re just arbitrary shapes on a page….

Binary codes mean something because we decided that they do.

There are a few different standards that assign meaning to binary numbers. ASCII and Unicode are examples. They are created by people, who say “this binary number represents this letter (or symbol)”.

We did the same with Morse code:

A = .-

B = -...

C = -.-.

etc.

The actual assignment of these codes to letters is arbitrary. (Although there are reasons we might assign codes in certain ways, to make them easier to use.)
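If it helps to see the “we decided that they do” part made concrete, here’s a tiny Python sketch of a hand-made lookup table (only three letters, purely for illustration, and the to_morse helper is just made up here):

```python
# A tiny, hand-made code table: the assignments are arbitrary, they just have to be agreed on.
MORSE = {"A": ".-", "B": "-...", "C": "-.-."}

def to_morse(text):
    return " ".join(MORSE[ch] for ch in text.upper())

print(to_morse("cab"))  # -.-. .- -...
```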

As to why binary? Computers work well with binary numbers. Either a lightbulb is on or it is off. Either a condition is true, or it is false. There are two options. Thus, binary.

Anonymous 0 Comments

1 is true or “on”, and 0 is false or “off”. Every letter, number, and character is made up of a series of 0s and 1s (ons and offs). By combining a series of 1s and 0s (ons and offs), text strings are formed using a character encoding (e.g., ASCII) and interpreted by the computer. Essentially, binary is a foreign language that’s more easily read by a computer than by a human.
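As a rough sketch of that idea in Python (not how the hardware literally does it, just the same mapping written out):

```python
# Turn each character into its ASCII number, then write that number as 8 bits (0s and 1s).
text = "Hi"
bits = " ".join(format(ord(ch), "08b") for ch in text)
print(bits)  # 01001000 01101001
```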

Anonymous 0 Comments

Our regular number system is decimal: each digit in a number goes from 0 through 9, e.g. 189. In binary you only have the digits 0 and 1, as you pointed out. In computers, binary numbers are almost always stored in lengths that are multiples of 8 (8 bits), so 10111101 for example is an 8-bit number. You can convert between binary and decimal numbers, and it just so happens that 10111101 is the same as 189 in decimal.

Either way all you are doing is representing numbers – now what do you do with those numbers? Since you mentioned sentences you could for example assign a letter to certain numbers. Let’s say A=0, B=1, C=2, D=3. Given a number 303 that can be converted to DAD. If I gave you a different letter-to-number table it would turn out to be a different word though.

Traditionally letters in computers have been encoded using the ASCII standard, where for example the uppercase A is assigned to number 65 or 01000001 in binary.
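Here’s a quick Python sketch of all three steps (the A=0, B=1, C=2, D=3 table is just the toy example from above, not a real standard):

```python
# Binary <-> decimal: 10111101 really is 189.
print(int("10111101", 2))   # 189
print(format(189, "08b"))   # 10111101

# Toy table: A=0, B=1, C=2, D=3, so the number 303 reads as DAD.
table = "ABCD"
print("".join(table[int(d)] for d in "303"))  # DAD

# The real ASCII assignment: 65 (01000001 in binary) is uppercase A.
print(chr(0b01000001))  # A
```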

Anonymous 0 Comments

Any symbol we write only has meaning that we assign to it. Why does “a” mean a single object? Why does the letter make the sound it does? Well, it turns out that you only really need two symbols to be able to encode any possible number, and you can assign a number to any specific meaning. We call such a definition an “encoding”.

We call a single binary digit a bit. For ease of use, it is very common to write binary numbers in hexadecimal, which basically lets us write 4 binary digits (also known as a nibble) as one character, storing the numbers 0-15 as 0-9 and A-F, with a leading 0x to indicate it’s hex. So 0x0 is 0000 in binary or 0 in decimal; 0xF is 1111 in binary or 15 in decimal. Modern computers typically work on the basis of 8 bits at a time, which is known as a byte and is written as two hexadecimal digits.

For text, we normally use Unicode, which is frequently encoded specifically as UTF-8 (about 99% of websites currently), which is explicitly designed to be an extension of the older ASCII encoding. This defines the number 0x41 or 0100 0001 binary or 65 decimal as the uppercase letter A, or 0x61 or 0110 0001 or 97 decimal as the lower case letter a. The rest of the Latin letters just count up from those, with any symbols or other characters having their own patterns.
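Here’s roughly what that looks like in Python (the language choice is mine; the bytes themselves are the same everywhere):

```python
# One byte = 8 bits = two hex digits. 0x41 is 0100 0001 in binary, 65 in decimal.
print(0x41, format(0x41, "08b"))   # 65 01000001

# UTF-8 encodes "A" as that single byte, and "a" as 0x61 (97).
print("A".encode("utf-8").hex())   # 41
print("a".encode("utf-8").hex())   # 61
```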

Other commonly known encodings include JPEG, MP3, and MP4, which store images, sound, or movies, respectively. There are thousands of publicly known encodings, and millions more private ones.

Anonymous 0 Comments

Binary code on its own doesn’t turn into a sentence. Binary is just a way of writing numbers when all you have is a series of “yes” (1) and “no” (0) options, rather than 0 – 9.

Think about counting to 100 using our zero through nine numbers. 0, 1, 2, etc. Once you get to 9, we don’t have any single-digit numbers that go higher, so instead you put a 1 in front, and start over. 10, 11, 12, etc. You start using the “tens” place to signify a higher number. That first “1” represents 10, and then you put a 2nd “1” to say that it’s 10+1. So you get 11.

Binary only has zero and 1, but it’s basically the same concept. To count, you start with 0, and then 1, but now you don’t have a 2 to go to, so to represent 2, you put a 1 in front (just like you put a 1 in front earlier) and start over at 0. So: 0(0), 1(1), 10(2), 11(3), 100(4), 101(5), 110(6)

The random 0s and 1s you wrote above use this system, and translated to our normal decimal system, the number you wrote is 84,562. On its own, this does not convert into letters or computer code.
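You can check both of those claims with a couple of lines of Python, if you’re curious:

```python
# Counting in binary: 0, 1, 10, 11, 100, 101, 110 ...
print([bin(n)[2:] for n in range(7)])  # ['0', '1', '10', '11', '100', '101', '110']

# The string of 0s and 1s from the question, read as one binary number:
print(int("010100101001010010", 2))   # 84562
```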

You likely know the term “bit” and maybe “byte” as in “megabit” and “megabyte” of data. A “bit” is a zero or a 1 in binary. A “byte” is made up of 8 bits. So in your random binary code you wrote above, your first bit is a zero, and your first byte is “01010010”

To turn binary code into letters, you use a character encoding, which is an agreed-upon table that maps each byte of data (a chunk of 8 bits) to something more than a plain numerical value. So the base binary code of a document is a series of 0s and 1s that get run through that table: the byte “01100101” is the decimal value 101, which the ASCII standard defines as the letter “e”. This is then displayed as the letter “e” on the computer screen. (Hexadecimal is just a shorter way of writing the same numbers; it isn’t what turns them into letters.)
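In Python that chain of steps looks like this (just an illustration of the lookup, not how a real renderer is built):

```python
# The byte 01100101 is the number 101, and the ASCII/Unicode table says 101 is "e".
byte = "01100101"
number = int(byte, 2)
print(number)       # 101
print(chr(number))  # e
```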

The number you put in above is 18 digits, which is two bytes with two spare bits on the end, so in theory it could be two letters or numbers, plus a little remainder at the end.

Anonymous 0 Comments

Answering the second part first: how do you make a sentence using numbers? Just like those secret codes you made when you were a kid, you just assign a number to each character: 1=a, 2=b, 3=c … 26=z, 27=A, 28=B, etc. If you want to encode the word “cat” you would just write those numbers down: c=3, a=1, and t=20. We need to put in some leading zeros sometimes so we don’t get confused between “aa” = 0101 and “k” = 11. So “cat” is encoded as 03 01 20.

Binary is just another way to write numbers using only 0 and 1 for digits. 1 in binary is the same as the decimal number 1; 2 in decimal is 10 in binary, 3 decimal is 11 binary. If we convert our coded “cat” message into binary, we convert the 03 for c into binary (with leading zeros so we don’t get confused): 00000011 (that’s 8 digits and is what is called a “byte”). The “a” would be 00000001 (another byte) and the “t” is 00010100. So “cat” is 00000011 00000001 00010100.
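Here’s a little Python sketch of that home-made code (the encode helper is just made up for this example: a=1 … z=26, each written as one 8-bit byte):

```python
# Home-made code: a=1, b=2, ... z=26, each written as one 8-bit byte with leading zeros.
def encode(word):
    return " ".join(format(ord(ch) - ord("a") + 1, "08b") for ch in word)

print(encode("cat"))  # 00000011 00000001 00010100
```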

Before the trolls jump on me: the encoding schemes actually used in computers normally don’t put “a” at 1 like my example, and there are a number of schemes that assign different numbers to the data. ASCII (American Standard Code for Information Interchange), EBCDIC (Extended Binary Coded Decimal Interchange Code), and Unicode are standards used today or in the recent past.

Anonymous 0 Comments

[Each spot in a string of 1’s and 0’s (ON or OFF) corresponds to a “2 to the power of (spot)”.](https://imgur.com/gallery/Tp020RC) These spots are counted right to left and start at 0. If there’s a 1 in the spot, I add it but if there’s a 0 I don’t. so if I say 01011000 then it’s 2^3 + 2^4 + 2^6 – which equals 88. 00000001 is just 2^0, which equals 1.

Every single number only has one combination that will make it, which means that 01011000 is the only way to write 88, and 00000001 is the only way to write 1. Each number can correspond to a letter – in the ASCII table, 65 is A, 66 is B and so on. 8 1’s or 0’s in a row allows 2^8 possibilities – 256 (a lot) and each possible string of 8 means a different thing: up to 256 different things. Basically: 8 1’s or 0’s gives you 256 unique combinations which can mean different things. I’ll reply if you have questions
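Here’s that spot-by-spot sum written out in Python, if you want to play with it (the to_decimal helper is just for illustration; Python’s built-in int(bits, 2) does the same thing):

```python
# Add 2**position for every spot (counted from the right, starting at 0) that holds a 1.
def to_decimal(bits):
    return sum(2 ** i for i, b in enumerate(reversed(bits)) if b == "1")

print(to_decimal("01011000"))  # 88  (2**3 + 2**4 + 2**6)
print(to_decimal("00000001"))  # 1   (2**0)
print(2 ** 8)                  # 256 possible patterns in 8 bits
```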

Anonymous 0 Comments

Picture a normal eight digit number.

11,001,001

You know it’s 11 million, 1 thousand and one. Because there’s a one in the ones column, a zero in the tens column, a zero in the hundreds column, a one in the thousands column, and so on. Each column has ten possible values (0 through 9).

Now take an eight digit sequence in binary. Which, coincidentally, is a byte, the basic building block of information. Each column only has two possible values (1 and 0). So each column is essentially now a yes/no. So:

11001001

Now it’s not columns that go up by 10. It’s columns that multiply by 2. You have a 1 in the ones column, so that’s a value of 1. You have a zero in the twos column, and a zero in the fours column. You have a one in the eights column (8 plus the 1 we’ve already established = 9 so far). You have zeros in the sixteens column and thirty-twos column. Then you have ones in the sixty-fours column and the one hundred twenty-eights column (8+1+64+128= 201).

So the binary sequence 11001001 represents the number two-hundred one. Play around with it. You may notice something- the total range of possible values is 256 (from 0 to 255).
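If you want to check the column arithmetic, a couple of lines of Python will do it (just a sanity check, nothing fancy):

```python
# The columns are worth 1, 2, 4, 8, 16, 32, 64, 128 (right to left).
print(int("11001001", 2))  # 201  (128 + 64 + 8 + 1)
print(int("11111111", 2))  # 255, the biggest value one byte can hold
```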

Now hmmm….. what common numbers do you see that go from 0 to 255?

https://en.wikipedia.org/wiki/IP_address

Anonymous 0 Comments

Each series of 8 1s or 0s is associated with one character. For example H is 01001000, E is 01000101, L is 01001100 and O is 01001111.
HELLO is then 0100100001000101010011000100110001001111.
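And here’s a short Python sketch that chops that long string back into 8-bit chunks and looks each one up (again, just an illustration of the idea):

```python
bits = "0100100001000101010011000100110001001111"
# Split into 8-bit chunks, turn each into a number, then into its character.
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
print("".join(chars))  # HELLO
```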