Games run on CPUs and GPUs. These work on transistors, which act a bit like tiny storage cells. If charge is stored in the transistor/cell, it counts as on and represents the binary digit one; if not, it's off and represents zero. You can then use these 0s and 1s to do complex tasks like running software or playing games. Since there are only two possible states, one transistor gives you one bit (either a 0 or a 1), two give you 4 possible combinations (00, 01, 10, 11), and three give you 8 possible combinations. In general, the number of combinations equals 2^n, where n is the number of bits you have. That's why the jumps in games hardware followed this pattern: the sizes were all powers of two, and we kept learning to pack more transistors into smaller spaces.
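If it helps, here's a tiny sketch (Python, picked purely for illustration and not part of the original answer) that lists every pattern n bits can make and shows the count is 2^n:

```python
from itertools import product

# Every pattern n bits can form; the count is always 2**n.
for n in (1, 2, 3):
    patterns = ["".join(bits) for bits in product("01", repeat=n)]
    print(f"{n} bit(s): {len(patterns)} combinations -> {patterns}")
```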
there have been computers in the past that don’t fit with this pattern. the reason the “double the last time” pattern has been so popular though is
1. it’s simpler (a system with twice the width of the previous one can reuse much of the architecture and logic; anything that isn’t a clean doubling needs extra logic to handle)
2. the growth in memory use has been exponential. since the number of address bits determines the size of the address space, it makes sense for it to double with each generation (see the sketch below)
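As a rough illustration of point 2 (again just a Python sketch, chosen for convenience rather than taken from the answer), here's how much memory an n-bit address can reach as the width doubles:

```python
# An n-bit address can pick out 2**n distinct byte locations.
for n in (8, 16, 32, 64):
    print(f"{n}-bit addresses -> {2 ** n:,} bytes addressable")
```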
TL;DR: Because binary. 8 bits is 1 byte, 16 bits is 2 bytes, 32 bits is 4 bytes, and 64 bits is 8 bytes, all of which are natural resting points for a CPU's processing ability until the next leap in CPU design.
Everything in computing, short of quantum computers, runs on binary: 1s and 0s. It's hard for humans to think in binary, but really easy for computers. So we came up with a way to make it easier on us: hexadecimal. All of a sudden, two hexadecimal digits could stand in for an 8-bit value in computer speak. And that's where bytes come from. A byte, ICYDK, is a collection of 8 bits, i.e. 8 slots each holding a 1 or a 0 (e.g. 00011011). Bytes had the advantage of mapping to characters really well (e.g. “A” is 01000001 in ASCII), so we could *almost* type in natural language and, with the use of a compiler, translate it to machine code really, really quickly.

There was also the idea that a CPU could, through clever design, process a whole byte at once: 8 bits (1 byte) go in and 8 bits come out the other end. (There was stuff before this, of course; the Atari 2600's CPU was actually 8-bit as well, but thanks to the video game crash in ’83, nobody really cares.) Still, 8-bit really allowed for great game design for the first time. As things went on, adding extra bytes made sense, and CPUs could handle them thanks to improvements in chip design. 8 and 16 are pretty famous, there were attempts at 24-bit systems, and 32-bit systems aren't unheard of either, but the gap between 16-bit and 64-bit systems was relatively short, so it often made sense to invest in 64-bit designs rather than hold back on 32-bit ones, although that did happen too.
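A quick sketch of that byte-to-character mapping (Python used only for illustration; the values themselves are plain ASCII, not something specific to this answer):

```python
# ASCII stores one character per byte: "A" is 65, i.e. 0x41 in hex,
# i.e. 01000001 in binary -- two hex digits describe one 8-bit byte.
ch = "A"
value = ord(ch)
print(ch, value, hex(value), format(value, "08b"))
# prints: A 65 0x41 01000001
```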
But, basically, it's all about how much data you can pass to the CPU at once, known as the bus width. And the bus width is currently limited mostly by the pathways to the CPU, not the CPU itself: 8×1, 8×2, 8×4 and 8×8 (8, 16, 32 and 64 bits) are the widths we've had, representing the number of bytes a given CPU can process simultaneously, and 8×16 = 128 bits would be the next jump in processing capability.
Computers work in binary, where every digit has one of two states (0 or 1) and is called a bit (binary digit). Whenever you add a bit to something, you basically double how many numbers it can represent:
1 bit = 2 numbers (0 and 1).
2 bits = 4 numbers (00, 01, 10, 11)
etc.
We ultimately settled on 8 bits being a fundamental unit of computing called a byte. So everything is done in units of bytes. 8-bits is 1 byte, 16-bits is 2 bytes, 32-bits is 4 bytes, etc.
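A small sketch of what those byte sizes buy you (Python, just for illustration):

```python
# Common widths as whole bytes, and the largest unsigned value each can hold.
for bits in (8, 16, 32, 64):
    print(f"{bits}-bit = {bits // 8} byte(s), max unsigned value = {2 ** bits - 1:,}")
```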
They’re not just multiples of 8, they’re powers of 2. Since digital computers work in binary, powers of 2 are neat. But they’re not absolutely necessary.
8 is a little more famous because of the byte, which is 8 bits. Originally it was used to encode the standard characters of the time. With 8 bits you could have 256 characters, which was enough for a good while.
Because the IBM System/360 used 8 bits to represent characters, and it was a really influential family of computers. That’s why the popular word size (“word” here is the size of a CPU’s basic data unit) was 8 bits.
If you want to upgrade a CPU design to handle more data in a single instruction, it makes sense to use a multiple of the previous size, and that’s why we get multiples of 8.
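To make the multiples-of-8 point concrete, here's a rough Python sketch (the struct format codes B, H, I and Q are the standard library's 1-, 2-, 4- and 8-byte unsigned integers; the specific value is just an example):

```python
import struct

# Pack the same number into common word sizes; each width is a whole number of bytes.
value = 65
for fmt, bits in (("B", 8), ("H", 16), ("I", 32), ("Q", 64)):
    packed = struct.pack(">" + fmt, value)   # ">" = big-endian, for readable hex
    print(f"{bits}-bit word: {len(packed)} byte(s) -> {packed.hex()}")
```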