How does asymptotic notation work?


I just don’t understand the definition: f(n) = O(n) if f(n) <= C * n for n >= n_0. Also, how is this relevant to runtime?


3 Answers

Anonymous 0 Comments

>f(n) = O(n) if f(n) <= C * n for n >= n_0

This basically says that f(n) = O(n) means f(n) will be smaller than some constant C multiplied by n, for all sufficiently large n (that is, for every n past some threshold n_0).

As n gets bigger f can get bigger, but never faster than linearly.
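To make the definition concrete, here is a sketch with a made-up linear function f(n) = 3n + 5; the constants C = 4 and n_0 = 5 are one valid choice of "witnesses" (not the only one), since 3n + 5 <= 4n exactly when n >= 5:

```python
# Hypothetical example: f(n) = 3n + 5 is O(n).
# One valid witness pair: C = 4, n_0 = 5, because 3n + 5 <= 4n iff n >= 5.
def f(n):
    return 3 * n + 5

C, n0 = 4, 5

# Spot-check the definition over a large range of n >= n_0.
assert all(f(n) <= C * n for n in range(n0, 10_000))
```

Note that the bound is allowed to fail for small n (here, f(4) = 17 > 16 = 4 * 4); the definition only cares about what happens past n_0.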

If f were a quadratic function, say f(n) = n × n, then you couldn’t name any constant C that guarantees f stays below C × n as n keeps growing.
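The quadratic case can be checked directly: whatever constant C you pick, n × n overtakes C × n as soon as n > C, so no single C works for all large n. A small sketch:

```python
# For f(n) = n * n, any proposed constant C eventually fails:
# n * n > C * n as soon as n > C.
def f(n):
    return n * n

for C in (10, 1000, 10**6):
    n = C + 1               # the first integer where n > C
    assert f(n) > C * n     # the quadratic has escaped the linear bound
```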

This notation allows ignoring “overhead”. Like, if f(n) = O(n), then f(n) + 100 = O(n) too. A constant offset, no matter how big, doesn’t matter for large n: at some point the n term is the dominant factor.
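One way to see why the offset disappears: if f(n) <= C * n, then f(n) + 100 <= (C + 1) * n once n >= 100, so the +100 just gets absorbed into a slightly larger constant. A sketch using an assumed linear f(n) = 2n (so C = 2):

```python
# Assumed example: f(n) = 2n, so f(n) <= C * n with C = 2.
# Then f(n) + 100 <= (C + 1) * n whenever n >= 100,
# i.e. the constant offset is absorbed into a bigger constant.
def f(n):
    return 2 * n

C = 2
assert all(f(n) + 100 <= (C + 1) * n for n in range(100, 10_000))
```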

For runtime, this is relevant because it estimates how sensitive an algorithm is to input size. You can scale your computer up to handle linearly growing runtime (if 1 CPU solves a problem in a minute, then 2 CPUs solve a problem of twice the input size in a minute). But if the runtime is O(n²), doubling the input length forces you to quadruple the computing power or the runtime.
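The doubling behavior can be sketched with idealized step counts (these counters are illustrative stand-ins for actual runtimes, ignoring all constant factors):

```python
# Idealized step counts for a linear and a quadratic algorithm.
def linear_steps(n):
    return n            # O(n): work grows proportionally to n

def quadratic_steps(n):
    return n * n        # O(n^2): work grows with the square of n

n = 1000
# Doubling the input doubles the linear work...
assert linear_steps(2 * n) == 2 * linear_steps(n)
# ...but quadruples the quadratic work.
assert quadratic_steps(2 * n) == 4 * quadratic_steps(n)
```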

We found this useful because input size tends to vary a much greater deal than all the constants involved. For example, a sorting algorithm might be used to sort a short list or a whole database. Whether a single operation takes 2 or 3 microseconds matters far less than how much adding one more entry multiplies the number of operations the algorithm requires.
