Why did the console bit wars end? During the 32-bit era, the PS1 and Saturn were 32-bit systems, and Nintendo boasted about having a 64-bit system. The last time console makers bragged about the bits in their systems was the sixth generation, with the Dreamcast, GameCube, and PS2 being marketed as 128-bit.


Why didn’t the bit war continue into the seventh generation? Why didn’t the number of bits double to 256 like it had in past generations? Any insight into this would be appreciated.


34 Answers

Anonymous 0 Comments

Because it ceased to matter for the performance and capabilities of the system. “128-bit” consoles were labeled that way purely for marketing. Like nearly all modern computers, current-gen consoles are 64-bit machines.

Anonymous 0 Comments

The N64 wasn’t exactly 64-bit, and the PS1 was only a 32-bit machine to begin with.

That was a bit of hot air on Nintendo’s part. While the N64 did technically have a 64-bit CPU and co-processor, the vast majority of its code was written for 32-bit execution, since 64-bit values take twice the space and storage was pretty lacking.

More modern consoles do have some narrow 128-bit aspects to their operation (mainly SIMD registers), but the vast majority of software and data has only really advanced to 64-bit and stopped there.

Anonymous 0 Comments

Because it was never relevant to the public, and in many cases it wasn’t even a realistic depiction of the actual word length used by the system.

It was basically a marketing gimmick, and once it dried up they moved on to the next irrelevant number: clock speed, then core counts; nowadays it’s lithography. And every time it starts as someone telling an irrelevant truth and spirals into irrelevant nonsense.

Anonymous 0 Comments

128-bit consoles don’t exist. Everything from the PS2 generation forward has been, at most, 64-bit. The number of bits is the word size of the CPU in the system. It hasn’t grown past 64 because 64 bits is enough. It’s a bit hard to describe exactly why, but one example: with a 32-bit CPU you can only address up to 4 gigabytes of memory, so your system essentially can’t have more than 4 GB of RAM. With a 64-bit CPU, you can address something like 18.4 billion gigabytes of memory. We don’t have a reason to go past 64.
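Those figures are just powers of two; a quick Python sketch (my own illustration, nothing console-specific) reproduces them:

```python
# With n address bits a CPU can name 2**n distinct bytes of memory.
for bits in (32, 64):
    capacity_bytes = 2 ** bits
    print(f"{bits}-bit: {capacity_bytes:,} bytes")

# 32-bit: 4,294,967,296 bytes              (= 4 GB)
# 64-bit: 18,446,744,073,709,551,616 bytes (= ~18.4 billion GB)
```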

Edit: Correction, the PS2’s Emotion Engine is KIND OF 128-bit, but as near as I can tell not properly: it doesn’t work directly on 128-bit integers, it works on multiple smaller integers at once.
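To show what “multiple smaller integers at once” means, here is a toy Python model of the idea (not actual Emotion Engine code; the function name and lane layout are made up for illustration). It treats one 128-bit value as four independent 32-bit lanes, the way a SIMD add does:

```python
MASK32 = (1 << 32) - 1

def simd_add_4x32(a: int, b: int) -> int:
    """Add two 128-bit values as four independent 32-bit lanes.

    Each lane wraps around on overflow instead of carrying into its
    neighbor -- that's what makes it SIMD rather than one genuine
    128-bit addition.
    """
    result = 0
    for lane in range(4):
        shift = lane * 32
        lane_sum = ((a >> shift) & MASK32) + ((b >> shift) & MASK32)
        result |= (lane_sum & MASK32) << shift
    return result

# Two 128-bit "registers" holding the lanes [1, 2, 3, 4] and [10, 20, 30, 40]:
a = 1 | (2 << 32) | (3 << 64) | (4 << 96)
b = 10 | (20 << 32) | (30 << 64) | (40 << 96)
c = simd_add_4x32(a, b)
print([(c >> (i * 32)) & MASK32 for i in range(4)])  # [11, 22, 33, 44]
```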

Anonymous 0 Comments

Each additional bit doubles the quantity of numbers you can represent: 1 bit gives two numbers (0 or 1), 2 bits give four, 3 bits give 8, and so on. An 8-bit machine can only do math on numbers up to 255 in any given clock cycle, so any math involving larger numbers means splitting the data into chunks and spending additional cycles. That takes time and introduces lag.
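To make that “additional cycles” point concrete, here is a toy Python sketch (my own illustration, not real machine code) of how an 8-bit machine has to add two 16-bit numbers in two steps with a carry, where a 16-bit machine would use one instruction:

```python
def add16_on_8bit(a: int, b: int) -> int:
    """Add two 16-bit numbers using only 8-bit operations.

    An 8-bit CPU must do this in two steps: add the low bytes,
    then add the high bytes plus the carry from the first step.
    """
    lo = (a & 0xFF) + (b & 0xFF)          # step 1: low bytes
    carry = lo >> 8                       # did the low add overflow?
    hi = (a >> 8) + (b >> 8) + carry      # step 2: high bytes + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

print(add16_on_8bit(300, 500))  # 800 -- correct, but it took two adds
```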

64 bits gives you about 18.4 quintillion values (roughly 1.8 × 10^19), which is astronomically large. That allows single-clock-cycle math on almost every possible number a game system could use in its processors. Doubling THAT to 128 is pointless.

I will point out that sometimes optimizing HOW the registers are used buys more than widening them. Look at the very first game produced for the PS3 and compare it to the very last game released 10+ years later. The last game could still run on first-generation hardware, meaning the newer games were squeezing more performance out of the same old silicon through better optimization.

Anonymous 0 Comments

First thing to appreciate is that going from 32 bits to 64 bits isn’t just a doubling of “capacity”; it squares the number of representable values. These numbers refer to the length of the basic data word. A 32-bit machine handles data with a precision of up to 2^32 (ELI5, loosely speaking), which is roughly 4 billion: each 32-bit word can hold an integer between 0 and about 4 billion. A 64-bit word can hold a number up to about 18 quintillion (4 billion times 4 billion, give or take).

For example think of colors.

1-bit color can show essentially black and white (2 gradations).

2-bit color can show up to 4 colors (fairly boring).

8-bit color goes up to 256 colors (still easy for a human to tell apart).

16-bit color is 65,536 colors (this allows for fairly good picture representation).

32-bit color is about 4 billion combinations (a regular human eye cannot detect this many colors/shades; in practice 8 of those bits usually carry transparency rather than color).
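To see how quickly those counts grow, here is the arithmetic as a tiny Python sketch (nothing color-science-specific, just powers of two):

```python
# Number of distinct values (colors, samples, integers...) per bit depth.
for bits in (1, 2, 8, 16, 32):
    print(f"{bits:>2}-bit -> {2 ** bits:,} values")

#  1-bit -> 2 values
#  2-bit -> 4 values
#  8-bit -> 256 values
# 16-bit -> 65,536 values
# 32-bit -> 4,294,967,296 values
```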

Anything more than 32 bits is already overkill for colors. And the same goes for sound etc.

At some point increasing bit depth is only really necessary for scientific/industrial work where extreme precision might be needed.

Anonymous 0 Comments

It’s been a long time since I studied anything hardware-related, but I believe the answer is that 64-bit addressing can already access more memory addresses than anyone needs, and once you get to 128-bit addresses, actual reads/writes from those addresses take longer and slow down the system.

Anonymous 0 Comments

Anything past 64 bits is going to be useless for a very long time.

The key thing larger bitness helps you with is addressing storage space: the bit count is the maximum number of “digits” you can use to address memory. Imagine street numbers locked to two digits: you could only refer to 100 houses. Add another digit and you can refer to ten times as many houses, since the new digit can take ten values.

In computers, as you might have heard, we have binary, so each digit can only have two values. So, 33 bits is double the space of 32.

So, compared to 32 bits, 64 bits is doubling the storage space 32 times. It turns out that 32 bits gets you around 4 GB of memory, which ended up not being enough around the late 2000s. 4 GB doubled 32 times is probably going to be enough for the next century, at least.

We don’t even use all that space today. Modern processors are designed around 64-bit addresses, but in practice most implementations only wire up 48 of those bits. We simply don’t need the extra.
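For scale, here is the same doubling argument in a few lines of Python (48 being the commonly implemented virtual-address width mentioned above; the GB figures are binary gigabytes):

```python
GB = 2 ** 30  # bytes per gigabyte (binary)

# How much memory each address width can name.
for bits in (32, 48, 64):
    print(f"{bits}-bit addresses -> {2 ** bits // GB:,} GB")

# 32-bit addresses -> 4 GB
# 48-bit addresses -> 262,144 GB        (256 TB)
# 64-bit addresses -> 17,179,869,184 GB (16 EB)
```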

Anonymous 0 Comments

The bits they’re referring to change over time: sometimes the bits mean the memory bus, sometimes vector instructions, but usually the CPU architecture.

We’ve had memory buses of 512 bits and beyond for a while, with memory bandwidth scaling linearly with both clock speed and bit width. There have even been 2048-bit memory buses.
We have 512-bit vector instructions, which typically operate on packed 32-bit or 64-bit values, and this could go higher. We occasionally need high bit widths for things like cryptography, but we usually have dedicated hardware for that.

The CPU bit width sets both the default register/variable size and the memory address width, but 32-bit integers and floating-point numbers are more than enough for games. 64-bit really only mattered for memory addressing, since 32 bits limits you to 4 GB of RAM. The N64 did have a 64-bit CPU, but it didn’t need that functionality and rarely used it.

GPUs also have their own bit widths, since they use SIMD or VLIW instructions too, packing many 32-bit ops together. For many graphics operations, even 16-bit or 8-bit precision can be useful.
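A plain-Python illustration of that packing idea, using the standard struct module (real GPUs and vector units do this in hardware): the same 128 bits can be viewed as four 32-bit lanes, eight 16-bit lanes, or sixteen 8-bit lanes.

```python
import struct

# Pack four 32-bit floats into one 128-bit "register" (16 bytes).
register = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)
print(len(register) * 8)               # 128 bits

# The same 16 bytes reinterpreted at different lane widths:
print(struct.unpack("<4f", register))  # 4 lanes of 32-bit floats
print(struct.unpack("<8H", register))  # 8 lanes of 16-bit ints
print(struct.unpack("<16B", register)) # 16 lanes of 8-bit ints
```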

We typically scale aspects of computer hardware other than bit width, since 32 bits was generally plenty for most computations: clock speed, core counts, instructions per clock cycle, and then FLOPS for GPUs, while CPUs are more concerned with MIPS.

Anonymous 0 Comments

Didn’t the bits refer to the number of drawn polygons rather than the processing power, or something like that?