If a gigabyte is 10^9 bytes, then why do common technologies use numbers like 32, 64, 128, 256 gigabytes instead of something like 100, 200, 500, which would file neatly into 10s?


What is the purpose of these seemingly arbitrary powers of 2?


7 Answers

Anonymous 0 Comments

Because a gigabyte *isn’t* 10^9 bytes, not exactly. Instead, it’s 2^30 bytes, or 1,073,741,824 (says my calculator) bytes.

Computers work on binary – it’s much easier to build switches with 2 positions, on and off, than it is to build ones with 10 positions. A kilobyte is actually 2^10 bytes, 1,024, and a megabyte is 2^20, 1,048,576. So the sizes in between are also gonna use powers of 2, not powers of ten.
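You can see the gap between the two systems with a quick Python sketch (just an illustration of the arithmetic, nothing more):

```python
# Binary prefixes vs. their decimal lookalikes.
for name, power in [("kilo", 10), ("mega", 20), ("giga", 30)]:
    decimal_power = power // 10 * 3  # 10 -> 3, 20 -> 6, 30 -> 9
    print(f"{name}byte: 2^{power} = {2 ** power:,} vs 10^{decimal_power} = {10 ** decimal_power:,}")
```

The gap grows with each prefix: about 2.4% at kilo, about 7.4% at giga.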

–Dave, they translate answers back into base 10 for our convenience, not for their own

Anonymous 0 Comments

A byte is made up of 8 bits. The bits are either a 1 or a 0. Different configurations of 1s and 0s in a byte control the signals sent to different technologies or the information being stored in a device. Most signals sent and received in common technologies are grouped into bytes because 8 bits is the minimum needed to encode a single character.
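For instance, here’s a tiny Python sketch (illustrative only) showing a single character fitting into 8 bits:

```python
# Basic ASCII characters each fit in one 8-bit byte.
ch = "A"
code = ord(ch)                # numeric value of the character: 65
bits = format(code, "08b")    # that value written as 8 binary digits
print(ch, code, bits)         # A 65 01000001
```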

When technologies are designed to handle a certain number of bits, the number of bytes will usually be a multiple of 8, such as 32, 64, 128, or 256.

Kilo-, mega-, giga-, tera-, and peta- are all used to quantify the number of bytes, which can be done the same way prefixes work in the metric system with base 10. But since the technology is made to hold however many bits of information, the number of bytes is almost always a multiple of 8.

Long story short, the numbers aren’t multiples of ten because, instead of making new prefixes for binary, they reused the ones already in use for other common measurement systems, even though those prefixes signify multiples of 10 and the things being measured are multiples of 8 bits.

Anonymous 0 Comments

Okay, first there’s a terminology difference. Traditionally, people in the computing field used the following definitions:

– Kilobyte: 2^10 or 1,024 bytes
– Megabyte: 2^20 or 1,048,576 bytes
– Gigabyte: 2^30 or 1,073,741,824 bytes

Once computers started to get really popular in the late 1990s, some people started to get grumpy about the situation. Those grumpy folks said basically “Okay guys, the whole rest of science uses kilo to mean 1,000, mega to mean 1,000,000, and giga to mean 1,000,000,000. If you insist on working with numbers like 1,024, 1,048,576, or 1,073,741,824, you can’t call them kilo, mega, and giga. You have to call them something else, how about kibi, mebi, and gibi?”

So there are now two camps of computer folks. One camp agrees with the grumpy pedants and uses “gigabyte” to mean 1,000,000,000 bytes. Your question says “a gigabyte is 10^9 bytes,” so you would be in this camp.

But there’s another camp. There are a lot of people in the field who prefer the older traditional usage, and will use the word “gigabyte” to refer to 1,073,741,824 bytes. The kibi / mebi / gibi prefixes sound a little silly, and they never took off in the marketing or advertising of computers and related products. For example, effectively all RAM actually comes in power-of-2 sizes, but I doubt you’ll find any RAM for sale anywhere that’s advertised or labeled using “gibibytes”.

As to why they pick powers of 2, it comes from the number of possible patterns in some number of wires carrying digital binary signals.

If you have, say, three wires, there are eight possible signals: 000, 001, 010, 011, 100, 101, 110, 111. If you want to represent eight possible values, that works great.

However, if you want to represent ten possible values, three wires is too few. Add a fourth wire and start counting combinations: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001. That’s ten, but there are still some combinations left over: 1010, 1011, 1100, 1101, 1110, 1111. You’d need an extra circuit to detect these “extra” combinations and have the chip do something else.
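You can count those wire patterns yourself; here’s a rough Python sketch of the idea (not tied to any real hardware):

```python
from itertools import product

# Every possible signal pattern on n binary wires: 2**n combinations.
def patterns(n):
    return ["".join(bits) for bits in product("01", repeat=n)]

print(len(patterns(3)), patterns(3))  # 8 patterns: 000 through 111
print(len(patterns(4)))               # 16 patterns, 6 more than the 10 we wanted
```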

That extra circuit has real costs in terms of size, power usage, speed, and money (not to mention extra design and testing). So designers instead try to match the sizes of things to the number of combinations available on a given set of wires, which means powers of 2.

If you think in binary, it makes a lot of sense. One billion only looks like a round number to us because we use a decimal (base-10) number system. To a computer scientist who thinks in a binary (base-2) number system, one billion is `111011100110101100101000000000`, which is very not-round, at least when you compare it to 1,073,741,824, whose binary representation is `1000000000000000000000000000000`.
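You can check the roundness claim directly (a quick Python sketch):

```python
# "Round" depends on which base you're looking in.
print(bin(10 ** 9))    # 0b111011100110101100101000000000   (messy in binary)
print(bin(2 ** 30))    # 0b1000000000000000000000000000000  (a 1 and thirty 0s)
print(f"{2 ** 30:,}")  # 1,073,741,824                      (messy in decimal)
```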

Anonymous 0 Comments

Computers think in binary. A “bit” is either on (1) or off (0). This means that one bit can have two values. Two bits have four (00, 01, 10, 11), three bits have eight, and so on. Here’s where the powers of two come from.

When it comes to memory, you need to “address” it. When your computer needs to look up where a certain value is stored, it needs to know the location of that value in memory, and that’s what addresses are for. These addresses are stored in a computer-readable format (for obvious reasons 😉 ), meaning that a certain number of bits is in use. How many bits are needed for an address depends on how many addresses there are. To make the most efficient use of your address space, the size of your memory would ideally be a 2^x value.
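To make that concrete, here’s a small Python sketch (my own illustration, with made-up sizes) of how many address bits a given number of memory locations needs:

```python
import math

# Bits needed to give every memory location a unique address.
def address_bits(locations):
    return math.ceil(math.log2(locations))

print(address_bits(1024))  # 10 bits, and all 2**10 = 1024 addresses get used
print(address_bits(1000))  # also 10 bits, but 24 of the addresses go to waste
```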

Here’s where the kibi/kilo, mebi/mega, gibi/giga distinction comes in. In the metric system, kilo, mega, giga (and so forth) denote base-10 exponents. A kilogram is 1,000 grams; a megagram (ok, fair, this is also called a metric ton, but work with me here!) is 1,000 kilograms. In the same vein, a “true” kilobyte is 1,000 bytes (a byte being 8 bits, by the way – this is true all the way around). A megabyte is 1,000 kilobytes, and so on. A kibibyte (the “bi” denoting that this is the “binary kilo”) is 1,024 bytes – 2^10 bytes. A mebibyte is 1,024 kibibytes, and so on.

Anonymous 0 Comments

At a low level, computers don’t work in base 10, only in base 2 (binary: just ones and zeros). Because of that, storage also has to be designed and organized in blocks that are powers of 2. When you see something advertised as a gigabyte (10^9 bytes), its actual size is a gibibyte (2^30 bytes). It’s just called a gigabyte because modern humans think in base 10, and it’s easier to estimate what 10^9 is than 2^30. Some websites actually have disclaimers to clarify things when they sell you a product (I think there are such disclaimers on Apple’s website).
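That’s the math behind those disclaimers. A quick Python sketch (the 500 GB figure is just an example, not any specific product):

```python
# Why a drive sold as "500 GB" shows up smaller in your OS.
advertised = 500 * 10 ** 9       # marketing gigabytes: 500,000,000,000 bytes
in_gib = advertised / 2 ** 30    # the same byte count measured in gibibytes
print(f"{in_gib:.2f} GiB")       # about 465.66 GiB
```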

Anonymous 0 Comments

You guys are worried about semantics but haven’t answered the question. It boils down to this: in computers, things get faster or more powerful by doubling. First you start with a computer that can handle one bit of information, a single 1 or 0. Well, that can’t do much, so we double it. Now it’s 2 bits. Still not much information, but now we can do some shit. Ok, put two of those together and now we’ve got 4 bits, then 8, 16, 32, 64 (sounds like video game consoles, doesn’t it?) until it gets up to kilobits, megabits, gigabits, tera, etc. But really a kilobit (or kilobyte) isn’t 1,000 bits. It’s 1,024, because everything doubles. So a 2 kb chip is really 2,048 bits. They just round it off at large multiples. Why do they do it like that instead of just making a 1,000-bit chip? I dunno, I guess it’s just easier to put two of the things you have together than it is to make a new one. Anyway, this applies to almost everything in computers: hard drives, RAM, CPU chips, video boards, etc.
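If it helps, here’s the doubling written out as a quick Python sketch (purely illustrative):

```python
# "Put two of what you have together": sizes double, so they land on powers of 2.
size_bits = 1
while size_bits <= 2048:
    print(size_bits)  # 1, 2, 4, 8, ..., 512, 1024, 2048 -- never a clean 1000
    size_bits *= 2
```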

Anonymous 0 Comments

It’s important not to mix up gibibytes and gigabytes.

Colloquially we use the word “gigabyte” for gibibytes, but one is base 10 and the other is base 2:

2^30 vs 10^9