FLOPs, GIGAFLOPS, TERAFLOPS unit.

I understand gigahertz and gigabytes, but not gigaflops.

Thanks

Anonymous 0 Comments

Floating point operations per second. Basically a number that expresses how powerful a computer is: the more floating point operations a computer can do per second, the faster/more powerful it is. The floating point system is a common way for computers to handle numbers.

Anonymous 0 Comments

A “flop” is short for “floating point operation”. Floating point numbers are a type of number representation in computers, similar to scientific exponential notation (such as 3600 = 3.6e3).

FLOPS (or FLOP/s) tells you how many operations on floating point numbers a computer can do per second. A gigaflop is 1 billion flops, a teraflop is 1 trillion. The prefixes work the same for every unit.
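
To make the unit concrete, here is a minimal sketch (assuming Python with NumPy installed; nothing from this thread) that estimates a machine’s floating point throughput by timing a matrix multiply, which performs roughly 2·n³ floating point operations:

```python
import time
import numpy as np

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up pass so the timed run excludes one-time setup costs

start = time.perf_counter()
a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3  # a matrix multiply does about 2*n^3 multiply-adds
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS")
```

A typical laptop lands somewhere in the tens to hundreds of gigaflops on this kind of test; supercomputers are rated in petaflops (10^15 FLOPS) and beyond.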

Anonymous 0 Comments

Let’s decipher what those letters mean – FLOPS – Floating Point Operations Per Second.

Floating Point is a type of number that a processing unit can do calculations with. FP means it’s a specially formatted decimal value. In contrast to integers, it can cover a very large range, from very small numbers to very large ones.

With FLOPS we describe how many calculations with “decimal” numbers a computer can do per second.
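
To see how that “3.6e3-style” notation maps onto bits, here is a small sketch using Python’s standard struct module to unpack the IEEE 754 single-precision fields (the variable names are just for illustration):

```python
import struct

# Pack 3600.0 as a 32-bit IEEE 754 float, then pull the fields apart.
(bits,) = struct.unpack(">I", struct.pack(">f", 3600.0))

sign     = bits >> 31            # 1 bit:  0 means positive
exponent = (bits >> 23) & 0xFF   # 8 bits: stored with a bias of 127
fraction = bits & 0x7FFFFF       # 23 bits: digits after the implicit "1."

print(f"sign={sign}, scale=2^{exponent - 127}, significand=1.{fraction:023b}")
# sign=0, scale=2^11, significand=1.11000010000000000000000
# i.e. 3600 = 1.7578125 * 2^11, the same idea as 3.6 * 10^3 but in base 2
```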

Anonymous 0 Comments

Everyone else has pretty much explained the basics of what FLOPS are: floating point operations per second.

It’s a quick and dirty way of expressing the computing power of a computer or processor. It’s typically used in advertising as a simple way of saying “Hey, this GPU/graphics card is this many times more powerful than that other one, because its FLOPS number is bigger!”

Anonymous 0 Comments

Some processors and GPUs can do floating point arithmetic (numbers with fractional parts) in hardware, which is fast. Some chips have to do floating point arithmetic using integers instead, which is much slower.

Many applications, like video games, require floating point arithmetic, so a good measure of how useful the chip will be is floating point operations per second, or flops.
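
To illustrate what “doing floating point with integers” actually involves, here is a toy sketch that multiplies two floats using only integer operations on their IEEE 754 single-precision bit patterns (it assumes positive normal numbers and skips signs, zeros, infinities, NaNs and proper rounding):

```python
import struct

def float_bits(x):
    """Raw 32-bit pattern of x packed as an IEEE 754 single-precision float."""
    (b,) = struct.unpack(">I", struct.pack(">f", x))
    return b

def softfloat_mul(a, b):
    ba, bb = float_bits(a), float_bits(b)
    ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF  # biased exponents
    ma = (ba & 0x7FFFFF) | 0x800000                # 24-bit significands
    mb = (bb & 0x7FFFFF) | 0x800000                # (implicit leading 1 restored)
    m = ma * mb               # integer multiply: result is up to 48 bits
    e = ea + eb - 127         # add exponents, remove the doubled bias
    if m & (1 << 47):         # significand product in [2, 4): renormalize
        m >>= 1
        e += 1
    m = (m >> 23) & 0x7FFFFF  # truncate back to 23 fraction bits
    (out,) = struct.unpack(">f", struct.pack(">I", (e << 23) | m))
    return out

print(softfloat_mul(1.5, 2.5))  # 3.75
print(softfloat_mul(3.0, 7.0))  # 21.0
```

Hardware floating point units do all of that in dedicated circuitry; chips without them have to run a sequence like this for every single multiply, which is why it’s so much slower.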

Anonymous 0 Comments

How many floating point (i.e. numbers with decimals in them) operations (e.g. addition, subtraction and multiplication) the computer can do in a second.

This is different from Hz, which is the number of processor clock cycles per second.

A single floating point operation may take many, many processor clock cycles to complete. It’s not as simple as asking the processor to do a multiplication and having the answer ready by the next clock cycle. It never has been.

And nowadays it’s even more complicated.

The multiply instruction can take MANY cycles to complete and give the answer back. The floating point numbers might have to be brought in from memory or cache, which costs cycles. The numbers might be too big for the processor to process in one hit, so a calculation might take a different number of cycles depending on whether it’s working with a 32-bit, 64-bit or even larger number. And the processor might even be capable of calculating LOTS of different floating point operations simultaneously, but only if they are all the same type of calculation (e.g. a multiply).
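
That last point, doing lots of identical operations at once, is why vectorized code runs so much faster than element-at-a-time code. A rough sketch, again assuming Python with NumPy, comparing the two over the same data:

```python
import time
import numpy as np

x = np.random.rand(1_000_000)
y = np.random.rand(1_000_000)

start = time.perf_counter()
slow = [a * b for a, b in zip(x, y)]  # one multiply at a time
loop_time = time.perf_counter() - start

start = time.perf_counter()
fast = x * y  # one bulk operation; NumPy can hand it to SIMD hardware
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.4f}s")
```

The exact ratio depends on the machine (and much of the gap here is interpreter overhead rather than SIMD alone), but the vectorized version is typically orders of magnitude faster.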

So the FLOPS gives you a real-world number for how much actual processing of useful numbers the machine can do, not just its clock speed. The two do not necessarily correlate – the number of clock cycles a floating point operation takes varies significantly between architectures, so a 2GHz RISC computer and a 2GHz CISC computer can have vastly different FLOPS values. Likewise, a 32-bit machine working on 64-bit numbers won’t manage as many FLOPS as one with native 64-bit hardware.

It’s similar to another metric that datacentres are often interested in, which is basically how many database transactions they can perform in a second (TPS, transactions per second).

It’s basically saying “I don’t care how fast your internals operate… how many of this specific type of calculation can you do in one second?” It might actually make more sense to buy a 4GHz computer with a higher FLOPS count than a 10GHz computer with a lower one, if what you’re interested in is large amounts of floating-point work (e.g. statistics, physics modelling, etc.). But if you’re running large databases, you don’t care about FLOPS; you want to know the TPS of a system, so you can get the most database transactions completed in the shortest time. A system tuned for high FLOPS won’t necessarily have high TPS, and vice versa.

And nobody is interested in the baseline Hz value if you’re talking real-world performance, because it doesn’t correlate. Even 50 years ago, some computers would take hundreds of cycles to perform one processor instruction in some cases. The same is still true now of certain types of operation performed on certain types of hardware with certain types of cache, memory, bus, storage, thermal, etc. configurations.