I know the gist of binary, but something I have never understood is how it knows where one “word” stops and another starts.
So if 9 is 1001 and 57 is 111001, how does the computer know that the “1001” in 111001 isn’t whatever 11 is followed by the number 9, 1001? Only having two digits, 1 and 0, seems like not enough to differentiate.
If you want to have the word “apple” written in binary, do you take the binary code of each letter and smoosh them together, or is there a separate specific code for “apple”?
Computers have a ‘word size.’ Think of a computer program as a long list of instructions, where each instruction is the same size. The CPU then has an instruction set that maps each bit pattern to an operation. So, hypothetically, you might have a 3-bit computer. That means the computer reads 3 bits at a time and can only have 8 instructions. So let’s say:
000 means put the next value into register A
001 means put the next value into register B
010 means add register A to B and store it in A
011 means sub A from B and store it in A
100 means print register A’s value to the screen
So then a program might look like :
000001001001010100
Well, the computer reads 000, which says store the next value in register A, so it reads 001 and stores it. Then it reads 001, which says read the next value and store it in register B (001 again). Then it reads 010, which adds the registers, so now register A is 001 + 001 (which is 010). Then it reads 100, which prints register A, so the program will print the number 2.

This is also why programs compiled for one chip won’t work on another, and why a program compiled for 64-bit won’t run on 32-bit. Also, the reason AMD and Intel chips both work with the same programs is that they both follow the same x64 standard (and the x86 standard for 32-bit), so the programs are consistent. They might be different internally, but the same instruction on both chips should produce the same results.
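The trace above can be sketched as a tiny simulator. This is hypothetical code for the made-up 3-bit machine in the answer, not any real CPU, but it shows the key idea: the machine never looks for separators, it just always grabs exactly 3 bits at a time.

```python
# Toy simulator for the hypothetical 3-bit instruction set described above.
# The program is one unbroken string of bits -- no separators anywhere.
PROGRAM = "000001001001010100"

def run(bits):
    regs = {"A": 0, "B": 0}
    output = []
    i = 0
    while i < len(bits):
        op = bits[i:i + 3]                        # always read exactly 3 bits
        i += 3
        if op == "000":                           # load next value into A
            regs["A"] = int(bits[i:i + 3], 2); i += 3
        elif op == "001":                         # load next value into B
            regs["B"] = int(bits[i:i + 3], 2); i += 3
        elif op == "010":                         # A = A + B
            regs["A"] = regs["A"] + regs["B"]
        elif op == "011":                         # A = B - A
            regs["A"] = regs["B"] - regs["A"]
        elif op == "100":                         # print A
            output.append(regs["A"])
    return output

print(run(PROGRAM))  # [2]
```

Note that the fixed word size is what resolves the ambiguity in the original question: 1001 never gets misread, because the machine always knows exactly how many bits belong to the thing it is currently reading.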
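As for the “apple” part of the question: it works like your first guess. In ASCII/UTF-8 encoding, each letter gets its own fixed-width 8-bit code, and the word is just those codes smooshed together back to back. There is no separate code for the whole word. A quick sketch:

```python
# Each character of "apple" has its own 8-bit ASCII code; the word in
# binary is simply those codes concatenated, with no separators.
word = "apple"
codes = [format(ord(c), "08b") for c in word]  # 8-bit code per letter
print(codes)
print("".join(codes))  # the whole word, smooshed together
```

Because every code is exactly 8 bits wide, the reader splits the stream back into letters the same way the toy CPU splits its program: count off a fixed number of bits at a time.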