Why is network bandwidth measured in kilo/mega/gigaBITS rather than kilo/mega/gigaBYTES?



In: Technology

8 Answers

Anonymous 0 Comments

Most people don’t know the difference between bytes and bits.

If they see 10 MB next to 80 Mb for the same price, they will always go for the bits, because 80 looks like the bigger number, even though both describe the same speed.

Anonymous 0 Comments

The same reason postal mail is measured by weight instead of words: the post office doesn’t care *what* you’re sending, just how hard it is to send. *You* care about the words, but once it’s in an envelope, the words stop mattering.

The network doesn’t care that you organize your bits into 8-bit groups; it cares how many bits it needs to put “on the wire”.

Anonymous 0 Comments

Historically, it’s because network communication works in bits, not bytes. Bytes are used for storage and memory, not transmission. By now it’s simply because using bits instead of bytes sounds more impressive (200 Mbps sounds like more than 25 MBps).

Anonymous 0 Comments

A bit is the smallest possible quantity of information, and thus the elementary unit of information. A byte is just one of infinitely many possible “word lengths”. Whether a certain word length makes sense or not depends entirely on how the information will get from A to B.

If you send simple on/off signals but want to transmit values higher than 1, e.g. up to 15 (4 bits), the sender and receiver must agree on a transmission protocol: you encode your numbers in binary, e.g. 10 decimal = 1010 binary, and send them bit by bit. The receiver writes down the received bitstream “1010” and interprets it as a 4-bit word with the value 10 decimal. This is called serial transmission. As you can see, it takes four times as long to send the bits this way as it would to send them in parallel (simultaneously) over a 4-wire data bus between 4-bit systems.
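To illustrate, here is a minimal Python sketch of that serial idea (the function names are made up for this example, not part of any real protocol): the sender shifts the value out bit by bit, most significant bit first, and the receiver reassembles the word.

```python
def send_serial(value, word_length=4):
    # Break a value into individual bits, most significant bit first.
    return [(value >> i) & 1 for i in reversed(range(word_length))]

def receive_serial(bits):
    # Reassemble the received bitstream into a single value.
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

bitstream = send_serial(10)          # 10 decimal -> [1, 0, 1, 0]
print(bitstream)                     # these four bits go over the wire one after another
print(receive_serial(bitstream))     # the receiver interprets them as the 4-bit word 10
```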

Anonymous 0 Comments

At this point it is mostly because it sounds bigger (by a factor of 8); historically it was because you didn’t transfer data in bytes, but in bits.

When transferring data, the people who cared about transfer speed weren’t really looking at the actual amount of data received, as that could differ quite a bit depending on the scheme used. What they cared about was the number of bits that were physically transferred. In practice there was a lot of overhead and redundancy involved, some of it at the bit level.

There is a whole layer structure where some engineers looked only at the lowest level of bits sent and received, and considered how those bits were used at higher levels to be someone else’s problem.

To be fair, it only makes sense to measure the speed of a connection by the gross bit rate the connection actually provides, and not by all the other stuff that gets subtracted at higher levels, which isn’t the connection’s fault.

It might help to think of it like a shipping company transporting a container full of goods. The shipping company only cares about the weight of the container. The end user, after unpacking all the boxes inside, will receive a lot less stuff by weight than the shipping company billed them for, but as far as the shipping company is concerned it is not their fault that 10% of the weight they billed you for turned out to be cardboard and plastic packaging.

So the people responsible for the wires only care about the bits that go over the wires; that these bits get assembled into bytes, and that some 10% of them get discarded when everything is unpacked at the end, is not their problem.

Data transfer is in bits, and the data you care about receiving is in bytes. The payload that actually makes it to your computer will always be less than one eighth of the bits transferred, because many of them get discarded as padding, filler, shipping labels and the like.
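As a rough sketch of that gap between gross and net, using the illustrative 10% overhead figure from the container analogy (real overhead varies by protocol):

```python
gross_bit_rate = 100_000_000          # what the wire carries: 100 Mbit/s
overhead_fraction = 0.10              # illustrative share lost to headers, padding, "shipping labels"

net_bit_rate = gross_bit_rate * (1 - overhead_fraction)
net_byte_rate = net_bit_rate / 8      # the bytes your download actually grows by

print(net_byte_rate / 1e6, "MB/s of payload")   # 11.25 MB/s, not the naive 12.5 MB/s
```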

An older measure of data transfer speed was baud. It did not count bits or bytes but symbols. A symbol can be understood to mean a whole byte in many contexts, so baud is then like bytes per second, but it is more flexible: you can, for example, transfer ASCII characters at 7 bits per character, and even something like Morse code can be measured in baud even though it doesn’t really use bits and bytes at all.
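A tiny sketch of the relationship, using the textbook formula that the bit rate is the symbol rate times the bits carried per symbol (the numbers below are just examples):

```python
def bit_rate(baud, bits_per_symbol):
    # bits per second = symbols per second * bits carried by each symbol
    return baud * bits_per_symbol

print(bit_rate(2400, 1))   # one bit per symbol: 2400 baud = 2400 bit/s
print(bit_rate(2400, 4))   # denser modulation, 4 bits per symbol: still 2400 baud, but 9600 bit/s
```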

But most of that is history. Nowadays advertisers of data transfer services quote the gross bit rate, since it is the largest number, while most people care more about how many actual bytes they receive in a typical use case.

It doesn’t help that there are competing definitions for the use of SI prefixes with bits and bytes. Originally engineers decided to use kilo etc. to mean powers of 1024, as this is a nice round number in binary. But the people who run the metric system aren’t happy about that, because kilo means 1000 to them. People who sell hard drives are happy to use the 1000 definition in their advertising, as it makes their products sound bigger, and some operating systems have followed suit.

These differing definitions make the question of how many kilobytes of data you get out of a certain number of kilobits per second of transfer even harder to understand at a glance.
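A small sketch of why the competing prefix definitions make that mental arithmetic awkward (the 100 Mbit/s figure is just an example):

```python
advertised_mbit_per_s = 100                 # "100 Mbit/s", decimal mega as networks use it

bits_per_s = advertised_mbit_per_s * 1_000_000
bytes_per_s = bits_per_s / 8

print(bytes_per_s / 1_000_000)              # 12.5 "MB/s" if a megabyte is 10**6 bytes
print(bytes_per_s / (1024 * 1024))          # ~11.9 "MB/s" if a megabyte is 2**20 bytes
```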

Anonymous 0 Comments

Simply because often, for transmission, extra bits are added to mark the other bits being transmitted. Think about it: if you listened to someone reading a list of zeros and ones over a telephone, you might lose count after the 28th zero or so. So a one is added every now and then (or even a zero-one pair), so that you don’t lose track.

It’s very tricky, actually. It’s so much of a hassle that often a separate “timing” line was added, which just signals *when a bit arrives*. Think of it: you’re *running another wire down the whole length*. You could send “good” bits through it instead, and thus double your transmission speed, but then you’d be stuck with the counting problem again. Current “Ethernet” (the LAN you see sometimes being used) uses two wires and a special coding to overcome this problem.

Anyway, when talking about transmission speed, you could inadvertently make a mistake by taking some abstract number for the actual transmission speed, measured in payload bits going in at point A and coming out at point B, when in reality extra bits have to be added for that transmission to happen at all. So, for the sake of precision, you talk about the actual number of bits transmitted.
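As a toy sketch of the “add a one every now and then” idea from above, here is a generic bit-stuffing routine in Python; the run length of five and the rule itself are illustrative, not taken from any particular protocol:

```python
def stuff_bits(bits, max_run=5):
    # After every run of `max_run` identical bits, insert the opposite bit,
    # so the receiver never has to count an arbitrarily long run.
    # (Generic illustration, not any specific protocol's stuffing rule.)
    out = []
    run_bit, run_len = None, 0
    for bit in bits:
        out.append(bit)
        if bit == run_bit:
            run_len += 1
        else:
            run_bit, run_len = bit, 1
        if run_len == max_run:
            out.append(1 - bit)              # the extra marker bit
            run_bit, run_len = 1 - bit, 1
    return out

payload = [0] * 12                           # twelve zeros in a row: easy to miscount
print(stuff_bits(payload))                   # stuffed ones break up the long run
print(len(stuff_bits(payload)), "bits on the wire for 12 payload bits")   # 14
```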

Additionally, there were computer architectures that used odd word sizes. For example, the mainframes of the 60s era often used an 18-bit word size, making it hard to define what a “byte” would be, exactly, other than “8 bits”. And those were the computers the foundations of the internet itself were laid on, so there needed to be a standard to agree on. Memory of either kind (volatile or mass storage) is often measured in words instead of bytes, since it makes much more sense to store a word than some arbitrary number of bits.

Anonymous 0 Comments

In asynchronous mode, a byte can start at any time. So some bits, called start bits, were added to signal that the data was about to start. Likewise, stop bits were added. (Since there are only two states, silence/absence of data could be mistaken for a string of zeros unless you have the start and stop bits.)

If you were sending a byte at a time, asynchronously, you’d send bits to start, 8 bits of data, and bits to stop, which means it took maybe 10 or 11 bits to send 8 bits of data. If you were sending a block of bytes, you would have your start bit(s), a bunch of 8-bit bytes, and stop bit(s).

This means that you can’t just take the number of bits per second and divide by 8 (or even 10 or 11) to arrive at the number of bytes per second. You can’t do that math with async, because the overhead from start and stop bits depends on how long the blocks of data are.
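A rough sketch of that arithmetic, assuming one start and one stop bit per byte for the byte-at-a-time case and a made-up fixed framing cost for the block case:

```python
line_rate = 9600                          # bits per second on the wire

# Byte-at-a-time async: 1 start + 8 data + 1 stop = 10 bits per useful byte
bits_per_framed_byte = 1 + 8 + 1
print(line_rate / bits_per_framed_byte)   # 960 bytes/s, not 9600 / 8 = 1200

# Block transfer: the framing cost is paid per block, so throughput depends on block length
def block_byte_rate(line_rate, block_bytes, framing_bits=20):
    # framing_bits is a made-up fixed cost for the start/stop bits of a whole block
    bits_per_block = block_bytes * 8 + framing_bits
    return line_rate * block_bytes / bits_per_block

print(block_byte_rate(9600, 16))          # short blocks: ~1038 bytes/s
print(block_byte_rate(9600, 1024))        # long blocks: ~1197 bytes/s, close to the ideal 1200
```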

Hence, bits per second makes more sense in async communications.

Anonymous 0 Comments

It doesn’t have anything to do with how impressive things sound or marketing, it’s all about how networks transfer data.

Network technologies transfer data in *serial*: one bit at a time. Therefore transfer rates are labeled in the relevant unit, bits/second. It’s that simple.

RAM, on the other hand, transfers in *parallel*: dual-channel DDR RAM has two 64-bit channels, meaning it transfers 128 bits, or 16 bytes, at a time. Therefore, [its speed is measured in Bytes/Second](https://www.transcend-info.com/Support/FAQ-292). SATA (literally Serial ATA) drive transfer rates are measured in [bits/second](https://en.wikipedia.org/wiki/Serial_ATA), but older IDE drives, which used a 16-bit/2-byte parallel connection, were measured in [bytes/second](https://en.wikipedia.org/wiki/Parallel_ATA).
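To make the parallel-bus arithmetic concrete, here is a back-of-the-envelope calculation; the DDR4-3200 figures are just one example configuration:

```python
transfers_per_second = 3_200_000_000   # e.g. DDR4-3200: 3,200 mega-transfers per second
bus_width_bits = 64                    # one DDR channel is 64 bits wide

bytes_per_transfer = bus_width_bits // 8
bandwidth = transfers_per_second * bytes_per_transfer

print(bandwidth / 1e9, "GB/s per channel")   # 25.6 GB/s; a second channel doubles that
```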

Nothing nefarious going on, just engineers being specific about what they’re talking about based on the technology used.