How does asymptotic notation work?


I just don’t understand the definition: f(n) = O(n) if f(n) <= C * n for n >= n_0. Also, how is this relevant to runtime?


>f(n) = O(n) if f(n) <= C * n for n >= n_0

This basically says that f(n) = O(n) means f(n) will stay smaller than some arbitrary constant multiplied by n for sufficiently large n.

As n gets bigger f can get bigger, but never faster than linearly.
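As an illustration of the definition (the function f and the witnesses C and n_0 below are my own choices, not from the question): take f(n) = 3n + 5. Then C = 4 and n_0 = 5 satisfy the definition, because 3n + 5 <= 4n is equivalent to n >= 5.

```python
# Hypothetical example: f(n) = 3n + 5 is O(n).
# Claim: with C = 4 and n_0 = 5, f(n) <= C * n for all n >= n_0.
def f(n):
    return 3 * n + 5

C, n0 = 4, 5

# Spot-check the inequality over a large range of n.
holds = all(f(n) <= C * n for n in range(n0, 100_000))
print(holds)  # True
```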

If f were a quadratic function, say f(n) = n × n, then as n grows you couldn’t name any constant C that guarantees f stays below C × n.
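To make that concrete (the candidate values of C below are my own, purely for illustration): whatever C you pick, setting n = C + 1 already breaks the inequality, since n² > C·n whenever n > C.

```python
# Sketch: for f(n) = n * n, no constant C can keep f below C * n forever.
def f(n):
    return n * n

for C in (10, 1000, 10**6):
    n = C + 1            # one concrete counterexample for this candidate C
    assert f(n) > C * n  # n^2 > C*n exactly when n > C
    print(f"C={C}: f({n}) = {f(n)} > C*n = {C * n}")
```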

This notation allows ignoring “overhead”: if f(n) = O(n), then f(n) + 100 = O(n) as well. A constant offset doesn’t matter for large n; no matter how big the offset, at some point n becomes the dominant factor.
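A quick numeric check (f, C, and n_0 here are my own choices, not from the original definition): take f(n) = 2n, which is O(n) with C = 2. Then f(n) + 100 is still O(n), because 2n + 100 <= 3n exactly when n >= 100, so C = 3 and n_0 = 100 work.

```python
# Sketch: a constant offset is absorbed by picking a slightly larger C and n_0.
def f(n):
    return 2 * n  # f is O(n) with C = 2

C, n0 = 3, 100  # chosen so that 2n + 100 <= 3n for all n >= 100

# Spot-check the inequality over a large range of n.
holds = all(f(n) + 100 <= C * n for n in range(n0, 100_000))
print(holds)  # True
```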

For runtime this is relevant because it estimates how sensitive a function is to input size. You can scale your computer up to handle linearly growing runtime (if 1 CPU needs a minute, then 2 CPUs solve a problem of twice the input size in a minute). But if the runtime is O(n²), then doubling the input length forces you to quadruple the computing power or the runtime.
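You can see that doubling effect by counting loop iterations in two toy routines (hypothetical functions, written only to count operations, not to do anything useful):

```python
# Count iterations for a toy linear and a toy quadratic routine.
def linear_ops(n):
    return sum(1 for _ in range(n))                    # n iterations

def quadratic_ops(n):
    return sum(1 for _ in range(n) for _ in range(n))  # n * n iterations

n = 500
print(linear_ops(2 * n) // linear_ops(n))        # 2: doubling input doubles the work
print(quadratic_ops(2 * n) // quadratic_ops(n))  # 4: doubling input quadruples the work
```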

We found this useful because input size tends to vary far more than the constants involved. For example, a sorting algorithm might be used to sort a short list or a whole database. Whether a single operation takes 2 or 3 microseconds matters less than how much adding one more entry multiplies the number of operations the algorithm requires.
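A toy comparison of why growth beats constants (the cost figures below are made up for illustration): an algorithm with a large per-operation constant but linear growth eventually beats one with a tiny constant but quadratic growth.

```python
# Hypothetical cost models: slow_linear pays 100 "units" per entry,
# fast_quadratic pays only 1 unit, but once per pair of entries.
def slow_linear(n):
    return 100 * n

def fast_quadratic(n):
    return n * n

print(fast_quadratic(10) < slow_linear(10))          # True: at n = 10 the quadratic one is cheaper
print(slow_linear(10_000) < fast_quadratic(10_000))  # True: at n = 10000 the linear one wins
```

The crossover here is at n = 100; past that, no constant factor saves the quadratic algorithm.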

Something I want to add to the other answer, since I always found the equals sign disturbing:
*O(n)* is a set and NOT really equal to *f(n)*. So the commonly used notation *f(n) = O(n)* actually means *f(n) is in O(n).*

What you have written there means: *f(n) is in the set called O(n) if the following property holds: …*