Despite taking multiple math courses that have each explained them, and now a computer science course that requires knowledge of them, logarithms make no sense to me. Specifically their application to Big-O and complexity for computer science, like how does one make an algorithm that follows a logarithmic complexity?
Algorithms with logarithmic complexity are ones that employ logical strategies like divide-and-conquer to remove the need to visit every element in the data you’re acting on. Most of them rely on having data pre-sorted or organized in some useful way in order to be able to ignore large parts of it.
For example, finding an element in a balanced, sorted tree: each node has a value, a left child, and a right child, where the left child’s value is always less than the current node’s value and the right child’s is always greater. Using these rules, you can search for any value starting from the root without checking every node, since at each step you descend into only one of the two children. If the tree is balanced you visit at most lg(N) nodes, so this is an O(lg(N)) algorithm, where N is the number of nodes.
Note that creating a sorted, balanced tree in the first place is an O(N * lg(N)) process, because you need to find where to put each of the N values. Only searching within a tree you’ve already built is logarithmic.
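To make the tree search above concrete, here is a minimal sketch in Python. The `Node` class and the example tree are my own illustration, not anything from a particular library:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def tree_search(node, target):
    # Walk down from the root; each comparison discards one
    # entire subtree, so a balanced tree costs O(lg N) steps.
    while node is not None:
        if node.value == target:
            return True
        elif target < node.value:
            node = node.left
        else:
            node = node.right
    return False

# A small balanced tree:     4
#                           / \
#                          2   6
#                         / \ / \
#                        1  3 5  7
root = Node(4, Node(2, Node(1), Node(3)),
               Node(6, Node(5), Node(7)))
```

Searching this 7-node tree for any value touches at most 3 nodes, never all 7.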
I’m assuming you’re familiar with the binary search algorithm, or have at least heard of it. If you haven’t, here’s the rundown:
Suppose you’re trying to find a book on a very long bookshelf, where all the books are sorted alphabetically. If there’s no signage indicating where certain letters stop and start, how can you find the book you want the fastest? One intuitive way is to go to the midpoint of the shelf, pick up the book there, and check the first letter of that book. If the book you want is earlier in the alphabet than the book you picked up, go to the midpoint of the left half of the bookshelf and repeat the process. Otherwise, go to the right half and repeat the process. Repeat until you find the book you’re looking for. This is, in essence, the binary search algorithm.
The binary search is probably the most well-known example of a logarithmic algorithm. Here is a sample implementation in Python:
def binary_search(arr, target):
    l = 0
    r = len(arr) - 1
    while r >= l:
        mdpt = l + (r - l) // 2
        if arr[mdpt] == target:
            return mdpt
        elif arr[mdpt] > target:
            r = mdpt - 1
        else:
            l = mdpt + 1
    return -1  # target not in arr
To figure out why its time complexity is logarithmic, consider an example case where you search through an array that is 4 elements long. While we search, let’s keep track of the number of searches we make.
First, we find the midpoint of the whole array (4 elements long). We have made one search.
Next, we find the midpoint of one of the halves of the array (2 elements long). We have made two searches.
Finally, we arrive at our element (the next half is 1 element long, and that one element is the one we want). In total, we have made 3 searches. Note that it is possible to find the target in fewer than 3 searches; however, for Big-O notation we only care about the upper bound.
Now, consider a second case where the array is 8 elements long. First, we find the midpoint of the whole array, so we have made one search. Then, the subarray we are searching is 4 elements long, and we know that it takes a maximum of 3 searches when the array is 4 elements long. Thus, we conclude that it takes a maximum of 4 searches if the array is 8 elements long.
It is easy to see that this pattern continues as you increase the array length by powers of 2: when the length n is a power of 2, the maximum number of searches is log_2(n) + 1. This also covers lengths that are not powers of 2, since in that case the maximum number of searches is the same as for the next lowest power of 2.
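You can check this pattern empirically. The sketch below is binary search instrumented with a counter (my own addition for illustration), verifying that the worst case never exceeds floor(log_2(n)) + 1 inspections:

```python
import math

def search_count(arr, target):
    # Standard binary search, but also count how many
    # elements we inspect before finding the target.
    l, r, count = 0, len(arr) - 1, 0
    while r >= l:
        mdpt = l + (r - l) // 2
        count += 1
        if arr[mdpt] == target:
            return count
        elif arr[mdpt] > target:
            r = mdpt - 1
        else:
            l = mdpt + 1
    return count

# Try every possible target and confirm the upper bound holds.
for n in (4, 5, 8, 1000):
    arr = list(range(n))
    worst = max(search_count(arr, t) for t in arr)
    assert worst <= math.floor(math.log2(n)) + 1
```

Running this confirms the counts from the walkthrough: 3 searches at most for 4 elements, 4 at most for 8.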
Because the number of operations as a function of n is bounded above by log_2(n) + 1, we can prove that binary search has a time complexity of O(log n) using the standard method for proving asymptotic complexity.
Oh boy, I don’t think I could explain that to a 5 year old but uuhhh here we go.
In code, every nested loop adds to the complexity; as you reduce the looping you can get down to linear complexity. If you are able to split the work in half at each step instead of touching every element, you can narrow that down again to logarithmic complexity.
If you look at what goes on in sorting algorithms, say bubble sort versus merge sort: one has to look at all the values over and over again, whereas the other does a comparison once and can rely on that result the next time, because the pieces it has already handled stay sorted.
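To make the bubble-vs-merge contrast concrete, here is a minimal merge sort sketch. Each level of recursion splits the work in half, giving lg(N) levels of O(N) merging, so O(N * lg(N)) overall instead of bubble sort’s O(N^2):

```python
def merge_sort(arr):
    # Divide: split until pieces are trivially sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: merge two already-sorted halves in linear time;
    # we never re-compare elements within a half.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The key assumption the merge step gets to make is exactly the one described above: each half is already sorted, so one comparison at the front of each half is enough.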