If a gigabyte is 10^9 bytes, then why do common technologies use numbers like 32, 64, 128, 256 gigabytes instead of something like 100, 200, 500, which would fall neatly into multiples of 10?


What is the purpose of these seemingly arbitrary powers of 2?




Okay, first there’s a terminology difference. Traditionally, people in computing used the following definitions:

– Kilobyte: 2^10 or 1,024 bytes
– Megabyte: 2^20 or 1,048,576 bytes
– Gigabyte: 2^30 or 1,073,741,824 bytes
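
To see how far apart the two definitions drift, here’s a minimal Python sketch of the arithmetic (the loop and labels are just for illustration):

```python
# Compare the decimal (SI) and binary meanings of each prefix.
for name, power in [("kilo/kibi", 1), ("mega/mebi", 2), ("giga/gibi", 3)]:
    decimal = 1000 ** power  # SI: 10^3, 10^6, 10^9
    binary = 1024 ** power   # traditional/binary: 2^10, 2^20, 2^30
    print(f"{name}: {decimal:,} vs {binary:,} ({binary / decimal - 1:.1%} apart)")
```

Notice the gap grows with each prefix: about 2.4% at kilo, 4.9% at mega, and 7.4% at giga.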

Once computers started to get really popular in the late 1990s, this made some people grumpy about the situation. Those grumpy folks said, basically: “Okay guys, the whole rest of science uses kilo to mean 1,000, mega to mean 1,000,000, and giga to mean 1,000,000,000. If you insist on working with numbers like 1,024, 1,048,576, or 1,073,741,824, you can’t call them kilo, mega, and giga. You have to call them something else. How about kibi, mebi, and gibi?”

So there are now two camps of computer folks. One camp agrees with the grumpy pedants and uses “gigabyte” to mean 1,000,000,000. Your question says “a gigabyte is 10^9 bytes,” so you would be in this camp.

But there’s another camp. A lot of people in the field prefer the older traditional usage and will use the word “gigabyte” to refer to 1,073,741,824 bytes. The kibi / mebi / gibi prefixes sound a little silly, and they never took off in the marketing or advertising of computers and related products. For example, effectively all RAM actually comes in power-of-2 sizes, but I doubt you’ll find any RAM for sale anywhere that’s advertised or labeled in “gibibytes.”
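
This split is also why a hard drive advertised as “500 GB” (drive makers use the decimal meaning) shows up as roughly 465 gigabytes in an operating system that reports sizes in the traditional binary units. A quick sketch of that arithmetic, assuming a drive of exactly 500 × 10^9 bytes:

```python
advertised = 500 * 10**9           # "500 GB" in the decimal/SI sense
in_binary_gb = advertised / 2**30  # same byte count in binary "gigabytes" (GiB)
print(f"{in_binary_gb:.2f}")       # ≈ 465.66
```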

As to why they pick powers of 2: it comes from the number of possible patterns on a given number of wires carrying digital binary signals.

If you have, say, three wires, there are eight possible signals: 000, 001, 010, 011, 100, 101, 110, 111. If you want to represent eight possible values, that works great.

However, if you want to represent ten possible values, three wires are too few. If you add a fourth wire and start counting combinations (0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001) you get your ten, but there are still six combinations left over: 1010, 1011, 1100, 1101, 1110, 1111. You’d then need an extra circuit to detect these “extra” combinations and have the chip do something else with them.
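
You can reproduce that counting exercise in a few lines of Python; this is just a sketch to show the combinatorics, not how any real circuit works:

```python
def patterns(n_wires):
    """All 2**n_wires bit patterns that n wires can carry."""
    return [format(i, f"0{n_wires}b") for i in range(2 ** n_wires)]

print(patterns(3))   # 8 patterns: ['000', '001', ..., '111']
four = patterns(4)
print(four[:10])     # the ten patterns you'd assign to digits 0-9
print(four[10:])     # the six "extra" patterns: '1010' through '1111'
```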

That extra circuit has real costs in size, power usage, speed, and money (not to mention extra design and testing work). So designers instead try to match the sizes of things to the number of combinations available on a given number of wires: in other words, powers of 2.
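
To make that cost concrete: in a 4-bit decimal (BCD-style) design, you need extra logic whose only job is to flag the six unused patterns. Here’s a sketch of that detector as a boolean function (illustrative only, not any particular chip’s circuit):

```python
def is_unused_bcd(b3, b2, b1, b0):
    """True for the six unused 4-bit patterns, 1010 through 1111.

    A 4-bit value is 10 or more exactly when the high bit is set
    AND at least one of the two middle bits is set; b0 never matters.
    """
    return bool(b3 and (b2 or b1))

# Check all 16 patterns:
for i in range(16):
    bits = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    print(f"{i:04b}: {'unused' if is_unused_bcd(*bits) else 'ok'}")
```

In hardware that’s only a couple of gates, but it’s a couple of gates per decimal digit, plus the design and testing that go with them; sticking to powers of 2 makes the whole problem disappear.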

If you think in binary, it makes a lot of sense. One billion only looks like a round number to us because we use a decimal (base-10) number system. To a computer scientist who thinks in the binary (base-2) number system, one billion is `111011100110101100101000000000`, which is very much not round, at least compared to 1,073,741,824, whose binary representation is `1000000000000000000000000000000`.
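
You can check that in any Python shell, since `bin()` prints an integer’s base-2 representation:

```python
print(bin(10**9))  # 0b111011100110101100101000000000   (not round at all)
print(bin(2**30))  # 0b1000000000000000000000000000000  (a 1 and thirty 0s)
print(bin(1000))   # 0b1111101000 -- even one thousand isn't round in binary
```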
