Data or information on a computer is made up of 1s and 0s, corresponding to “on” or “off”. How can programs be millions of bytes when everything is binary? Are there other ways data can be processed? Thank you in advance

This question has been racking my brain for a while. Thank you again! Have a great day/night!

8 Answers

Anonymous 0 Comments

You almost never deal with individual bits. They are always grouped into longer words consisting of 8, 16 or more bits.

A computer program is a sequence of instructions, usually of variable length, where a word is given a meaning such as “read from memory”, “add”, “multiply” and so on, followed by the appropriate number of “operands”, the data being worked on by that instruction. The processor knows how many words each instruction takes, reads them in, and moves its program counter to the next instruction.
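
If it helps to see that loop as code, here is a minimal sketch in Python of a made-up machine with two instructions plus a halt; the opcodes and their meanings are invented purely for illustration:

```python
# A toy fetch/decode loop for a made-up instruction set.
LOAD = 0x01   # next byte is a value to load into the accumulator
ADD  = 0x02   # next byte is a value to add to the accumulator
HALT = 0xFF   # no operand; stop

program = [LOAD, 5, ADD, 3, HALT]  # a five-byte "program"

pc = 0    # program counter: index of the next byte to fetch
acc = 0   # accumulator: the value being worked on
while True:
    opcode = program[pc]
    if opcode == LOAD:
        acc = program[pc + 1]   # read the single operand byte
        pc += 2                 # this instruction was 2 bytes long
    elif opcode == ADD:
        acc += program[pc + 1]
        pc += 2
    elif opcode == HALT:
        break                   # 1 byte long, no operand
print(acc)  # 8
```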

While a single bit can only have two possibilities, a sequence of two bits has four: 00, 01, 10, 11. You could transmit one of four “states” of a sensor, or commands that instruct a machine to do one of four jobs (instead of just two). Sixteen bits can encode a number between 0 and 65,535. The word length is chosen to encompass the expected number of “meanings”, or the range of data values, with some reserve.
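
If you want to convince yourself how fast the combinations grow, a few lines of Python will do it:

```python
from itertools import product

# Every extra bit doubles the number of distinct patterns: 2**n in total.
for n in (1, 2, 4, 8, 16):
    print(n, "bits ->", 2**n, "patterns")

# For small n you can list them all explicitly:
print(["".join(bits) for bits in product("01", repeat=2)])
# ['00', '01', '10', '11']
```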

Anonymous 0 Comments

Programs are millions of bytes *because* they are stored in binary. Just think of how many digits it takes in binary to express the numbers 2, 4, 8, 16, 32, 64, and so on. And those are just the *numbers*, not even the commands for the program to run! Computers use binary because it can be read with high reliability: if the voltage for a given bit is lower than typical for some odd reason, it won’t matter, because the computer treats anything above the threshold as a 1. Researchers are also developing quantum computers, whose qubits go beyond plain binary bits.
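
You can watch the digit count grow at any Python prompt:

```python
# Each power of two needs one more binary digit than the last.
for n in (2, 4, 8, 16, 32, 64):
    print(n, "->", bin(n), f"({n.bit_length()} binary digits)")
# 2 -> 0b10 (2 binary digits)
# 4 -> 0b100 (3 binary digits)
# ... and so on, one digit more each time.
```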

Anonymous 0 Comments

A single 1 or 0 is a bit. 8 bits is a byte, a kilobyte is 1024 bytes (and each further prefix, mega, giga, tera, is another factor of 1024).

If we look at the byte 0000 1111, that’s the number 15, or 0x0F in hexadecimal. If we send the computer instructions, we can tell it what to do with that 15. Those instructions are just specially selected patterns of binary numbers that the processor knows what to do with.
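
Here is that same byte checked three ways in Python:

```python
# One pattern of bits, written in binary, decimal, and hexadecimal.
value = 0b00001111
print(value)          # 15
print(hex(value))     # 0xf
print(value == 0x0F)  # True: all three are the same byte
```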

When we have the number, it’s stored in memory, and to work on it the processor copies it into a small holding spot called a register, where each bit is a line that either has voltage or doesn’t. We can then tell the computer to take those voltages from the register and put them through the processor, where it can add, subtract, multiply, divide, or do any logic operations (AND, OR, XOR, etc.) on that data. The result then gets put in another register. When you run out of room in RAM, data has to be put on the disk. Each spot on the disk has an address, which is another string of 1s and 0s, and as long as you have that, you can find any data you want on the disk. The very beginning of the disk holds a list of where everything is stored, so if you don’t know where to look, you just start at the beginning and hopefully it tells you.
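
Those logic operations are available directly in most programming languages; here is what they do to two small values in Python:

```python
a = 0b1100  # 12
b = 0b1010  # 10

print(format(a & b, "04b"))  # 1000 -> AND: 1 only where both bits are 1
print(format(a | b, "04b"))  # 1110 -> OR:  1 where either bit is 1
print(format(a ^ b, "04b"))  # 0110 -> XOR: 1 where the bits differ
print(a + b)                 # 22   -> ordinary arithmetic on the same bits
```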

8 GB of RAM means you have over 2 billion 32-bit words. If you have a 64-bit operating system, it simply uses two of them to hold 64 bits.

We can tell a computer to interpret a piece of data in different ways. Normally it’s just a number, but we can tell the computer it’s a letter. For example, the number 97 (0x61 in hexadecimal, 0110 0001 in binary) as a character according to the ASCII standard is a lowercase a. If we look at 65 (0x41, 0100 0001), that’s a capital A.

At some point, we just agreed that certain numbers mean something, and every time we see data, we need to look at the context to know how to interpret it and what to do with it.
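
That “context decides” idea is easy to demonstrate: here is the same byte asked for in three different ways in Python:

```python
# One byte, three interpretations; the context (the code) decides.
byte = 0b01100001
print(byte)        # 97   -> read as a number
print(chr(byte))   # 'a'  -> read as an ASCII character
print(hex(byte))   # 0x61 -> read as hexadecimal
print(chr(0x41))   # 'A'  -> 65 / 0100 0001 is a capital A
```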

Anonymous 0 Comments

Yes, information in a modern computer is made up of binary 0s and 1s.*

*Very* simply: a computer’s processor can execute a number of instructions. Each instruction is made up of a fixed number of bits (each a 0 or a 1). The first microcomputers like the Apple II and Commodore PET were 8-bit computers, which meant they used instructions made of eight bits.

The first four bits of each instruction in an 8-bit computer are called the opcode and tell the computer what to do. A fixed set of opcodes is burned into the CPU of the computer, and those are all you can use. The remaining four bits say what data should be used and are called the operand. The operand can be a location in the computer’s memory, a value stored in the processor’s ‘registers’, a value from an input or output port, and so on…

The computer follows through the instructions to process data.
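
In code, pulling the opcode and operand out of an 8-bit instruction is just a shift and a mask. (The 4/4 split and the instruction value below are illustrative, not any real CPU’s encoding.)

```python
instruction = 0b1010_0011      # a made-up 8-bit instruction

opcode  = instruction >> 4     # top four bits: what to do
operand = instruction & 0x0F   # bottom four bits: what to do it with

print(format(opcode, "04b"))   # 1010
print(format(operand, "04b"))  # 0011
```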

Each processor has its own instruction set, and different processors may use different numbers of bits, so binary code normally cannot be ported easily from processor to processor. You get round this by programming in a ‘high-level language’, which a compiler or interpreter translates into processor-specific instructions.

And yes, there are other ways of processing data, including analogue computers, which were widely used from the 1930s onwards to simulate very complex systems, including economic models. One of them even used the flow of water:

[https://en.wikipedia.org/wiki/MONIAC](https://en.wikipedia.org/wiki/MONIAC)

And as I said above, this is wildly simplified.

Anonymous 0 Comments

It’s less about how the numbers are represented and more about what you do with them: how many of those 0s and 1s you put together, and what they mean once they’re together.

Base 10 is 0 to 9 and we still do very complex math with those 10 digits.

Binary works like that; you just have two digits instead of ten. How you arrange them and what you do with them is what matters.

We then use transistors connected together in specific arrangements called logic gates to perform operations on those numbers. You can add, subtract, multiply, divide, compare, etc. using those arrangements. The results of those operations basically determine what a program does.
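
As a tiny concrete example of logic gates doing math, a “half adder” adds two single bits using nothing but XOR and AND. Here is a sketch in Python, with its bitwise operators standing in for the gates:

```python
def half_adder(a, b):
    """Add two single bits: XOR gives the sum bit, AND gives the carry bit."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} = carry {carry}, sum {s}")
# 1 + 1 = carry 1, sum 0  (binary 10, i.e. two)
```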

For example, say you have 8 binary digits together (8 bits) and you use them to represent numbers. Those eight 0s and 1s can form 256 possible combinations, which in turn means you can have numbers ranging from 0 to 255. You have an 8-bit number, and you can use such numbers to do some basic math. 00000000 is 0, 00000001 is 1, all the way up to 11111111, which is 255.

You can do the same with letters and characters: instead of using those 8 digits to represent numbers, you can use them to represent up to 256 different characters. That’s the idea behind ASCII.

You now have a way to represent numbers and letters, that means you can make programs to print out text, do math, etc.

How we use those digits to represent something is actually somewhat standardized. That’s why you have organizations that decide on specifications for USB; it basically comes down to agreeing on how the 0s and 1s will be used to transfer data. The same goes for other communication interfaces like RS-232, HDMI, DisplayPort, etc.
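
As a small illustration of that kind of agreement, Python’s standard struct module packs numbers into an exact, agreed byte layout. Interface specifications make the same sort of agreement, just on a much larger scale:

```python
import struct

# Pack a 16-bit and a 32-bit number into six bytes, little-endian.
# Both ends of a link must agree on this layout to read it back.
data = struct.pack("<HI", 513, 100000)
print(data.hex())                  # 0102a0860100
print(struct.unpack("<HI", data))  # (513, 100000)
```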

Anonymous 0 Comments

You could ask the same about denary, or base 10. You only have ten digits, 0–9.

The answer lies in their position. If I write 4826, we know it’s four thousands, eight hundreds, two tens and six units.

4 thousands = 4 x 10^(3)

8 hundreds = 8 x 10^(2)

2 tens = 2 x 10^(1)

6 units = 6 x 10^(0)

===

Binary uses the same idea, but you’re limited to only two digits, so you’re always overflowing into the next column.

Consider 1101 =

1 x 2^(3) = 8

1 x 2^(2) = 4

0 x 2^(1) = 0

1 x 2^(0) = 1

8 + 4 + 0 + 1 = 13.
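
The same sum, checked in Python:

```python
# The positional values of 1101, exactly as worked out above.
bits = "1101"
total = sum(int(bit) * 2**power
            for power, bit in enumerate(reversed(bits)))
print(total)           # 13
print(int("1101", 2))  # 13 -- Python's built-in base-2 conversion agrees
```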

===

Early computers tried using variable voltage, so 7 volts would be 7 and 3 volts would be 3. But voltages can drop as they run around a system, so you might get 6.1 V. Is that a tired 7, or a lively 6?

===

Upcoming quantum computers use quantum bits, or “qubits”. These rely on bizarre features like “superposition” (a qubit can be 0, 1, or a blend of both at the same time) and “entanglement” (measurements of two entangled qubits are correlated even though the qubits aren’t physically connected).

Most bizarre of all is that some people understand these phenomena well enough to design a computer out of it.

Anonymous 0 Comments

On top of everything everyone else has said, take a look at the [ASCII table](https://www.asciitable.com/). That’s a simplified view of converting binary to decimal to characters, especially in the earlier days of computing.

So decimal 8 in binary is 1000. But to display the character “8”, the ASCII table uses decimal code 56, which is 111000 in binary. So when the computer sees 111000 in the display buffer, it converts that to character #56 in the ASCII table, which is “8”.
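
You can check that mapping directly in Python:

```python
print(bin(8))         # 0b1000   -> the number eight
print(ord("8"))       # 56       -> the ASCII code for the character "8"
print(bin(ord("8")))  # 0b111000 -> what actually sits in the display buffer
print(chr(56))        # 8        -> and back again
```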

Anonymous 0 Comments

>Are there other types of ways data can be processed?

Of course there are. You and I are exchanging data using the English language, which features 26 different values called “letters”.

But computers can’t do that (yet), so each letter is translated into a “byte”, which is a set of multiple binary “bits”. The reason is that computers work by setting switches either “on” or “off”, hence binary.

I don’t know about you, but I’m not fluent in binary and would find it a real pain to write this in binary. Fortunately my computer handles all that translating for me, then sends the bytes to you, and your computer translates them back into letters. If English isn’t your native language, you might even run it through Google Translate or your favorite chatbot to get a translation.

However you do it, your brain is processing data in a way that is *not* binary, so there are definitely other ways to do it. But computers, at this time, don’t really have an alternative that is as well developed as binary.