eli5 logarithmic algorithms for complexity analysis

Despite taking multiple math courses that have each explained them, and now a computer science course that requires knowledge of them, logarithms still make no sense to me. Specifically, I don't understand their application to Big-O and complexity analysis in computer science. How does one write an algorithm that has logarithmic complexity?

I’m assuming you’re familiar with the binary search algorithm, or have at least heard of it. If you haven’t, here’s the rundown:

Suppose you’re trying to find a book on a very long bookshelf, where all the books are sorted alphabetically. If there’s no signage indicating where certain letters stop and start, how can you find the book you want the fastest? One intuitive way is to go to the midpoint of the shelf, pick up the book there, and check the first letter of that book. If the book you want is earlier in the alphabet than the book you picked up, go to the midpoint of the left half of the bookshelf and repeat the process. Otherwise, go to the right half and repeat the process. Repeat until you find the book you’re looking for. This is, in essence, the binary search algorithm.

The binary search is probably the most well-known example of a logarithmic algorithm. Here is a sample implementation in Python:

def binary_search(arr, target):
    # Search a sorted list for target; return its index, or -1 if it is not present.
    l = 0
    r = len(arr) - 1
    while r >= l:
        # Midpoint of the current search range (written as l + (r - l) // 2,
        # which also avoids overflow in languages with fixed-size integers).
        mdpt = l + (r - l) // 2
        if arr[mdpt] == target:
            return mdpt
        if arr[mdpt] > target:
            # Target must be in the left half; discard the midpoint and everything after it.
            r = mdpt - 1
        else:
            # Target must be in the right half; discard the midpoint and everything before it.
            l = mdpt + 1
    return -1
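
To see it in action, a quick check might look like this (the book titles here are just made up for illustration):

books = ["Anna Karenina", "Dracula", "Emma", "Moby-Dick", "Ulysses"]
print(binary_search(books, "Emma"))    # prints 2, the index of "Emma"
print(binary_search(books, "Lolita"))  # prints -1, since it isn't on the shelf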

To figure out why its time complexity is logarithmic, consider an example case where you search through an array that is 4 elements long. While we search, let’s keep track of the number of searches we make.

First, we find the midpoint of the whole array (4 elements long). We have made one search.

Next, we find the midpoint of one of the halves of the array (2 elements long). We have made two searches.

Finally, the remaining half is only 1 element long, and checking that element (which is the one we want) is our third search. In total, we have made 3 searches. Note that it is possible to find the target in fewer than 3 searches; however, for Big-O notation we only care about the upper bound.

Now, consider a second case where the array is 8 elements long. First, we find the midpoint of the whole array, so we have made one search. Then, the subarray we are searching is 4 elements long, and we know that it takes a maximum of 3 searches when the array is 4 elements long. Thus, we conclude that it takes a maximum of 4 searches if the array is 8 elements long.

It is easy to see that this pattern continues as you keep doubling the array length: each time the length doubles, the worst case grows by one search. When the array length n is a power of 2, the maximum number of searches is log_2(n) + 1. When n is not a power of 2, the maximum is the same as for the next lower power of 2, which works out to floor(log_2(n)) + 1.
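
If you want to check that pattern yourself, here is a small sketch (the counting wrapper is something I've added purely for illustration, not part of the original implementation above): it tries every possible target, records the worst case, and compares it to floor(log_2(n)) + 1.

import math

def binary_search_count(arr, target):
    # Same algorithm as above, but also counts how many elements we look at.
    l, r, searches = 0, len(arr) - 1, 0
    while r >= l:
        mdpt = l + (r - l) // 2
        searches += 1
        if arr[mdpt] == target:
            return mdpt, searches
        if arr[mdpt] > target:
            r = mdpt - 1
        else:
            l = mdpt + 1
    return -1, searches

for n in [4, 8, 16, 100, 1000]:
    arr = list(range(n))
    # Worst case over every target, including ones that aren't in the array.
    worst = max(binary_search_count(arr, t)[1] for t in range(-1, n + 1))
    print(n, worst, math.floor(math.log2(n)) + 1)

For every length, the worst-case count in the second column should line up with the formula in the third.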

Because the number of searches as a function of n is bounded above by log_2(n) + 1, we can prove that binary search has a time complexity of O(log n) using the standard method for proving asymptotic complexity.
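
In case it helps, here is one way that last step can be spelled out; the particular constants c = 2 and n_0 = 2 are just one choice that happens to work:

    \log_2(n) + 1 \;\le\; 2 \log_2(n) \quad \text{whenever } n \ge 2

So taking c = 2 and n_0 = 2 in the definition of Big-O (the number of searches is at most c * log_2(n) for every n >= n_0) gives the O(log n) bound. The base of the logarithm doesn't matter, since logarithms in different bases differ only by a constant factor.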
