What is “dynamic programming”? How does it work?

Having a hard time wrapping my head around why it is so special.

In: Technology

2 Answers

Anonymous

Dynamic programming is a strategy that incorporates memoization. The whole point is to store the results of subproblems and then reference them later to build a more efficient algorithm. From a programming perspective these results are usually stored in a table, often a 2D array. One classic example is the 0-1 knapsack problem (each item is taken whole or not at all). Given a maximum knapsack weight and a set of items, each with a weight (and usually a value), you work out the best combination of items that fits within that weight by building up answers for smaller weight limits and fewer items; there's a small code sketch of this just after the ELI5 version below. If you want a more detailed analysis of how the knapsack problem is solved with dynamic programming, [this](https://youtu.be/8LusJS5-AGo) is a pretty good rundown of how to go about it. One thing I want to emphasize is that dynamic programming is not a single algorithm, it's just a technique used to speed up algorithms.

ELI5 Version:
Run algorithms that store the results so you can reference them later without rerunning the calculation.
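
To make the table idea a bit more concrete, here is a minimal sketch of the 0-1 knapsack table in Python (the weights, values, and capacity are made-up numbers purely for illustration, not anything from the linked video):

def knapsack(weights, values, capacity):
    n = len(weights)
    # best[i][w] = best total value using only the first i items with weight limit w
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            # Option 1: skip item i-1 entirely
            best[i][w] = best[i - 1][w]
            # Option 2: take item i-1 whole (all or nothing), if it fits
            if weights[i - 1] <= w:
                best[i][w] = max(best[i][w],
                                 best[i - 1][w - weights[i - 1]] + values[i - 1])
    return best[n][capacity]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # 7: take the weight-2 and weight-3 items

Each cell only depends on the row above it, so every subproblem is computed exactly once and then just looked up, which is the whole "store results and reference them" idea.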

Anonymous

Dynamic programming is the practice of writing algorithms that store previous results, so that future results can be computed faster without repeating any work.

A classic example is computing the nth Fibonacci number. If we do this naively/recursively it might look something like this:

def fib(N):
    # Base cases: the first two Fibonacci numbers are both 1
    if N == 1 or N == 2:
        return 1
    else:
        # Recompute both smaller values from scratch every single time
        return fib(N-1) + fib(N-2)

But by doing this we're gonna end up repeating a lot of operations. If you try running this code on small values of N it'll be fast (N<10 are all in the microsecond range on my PC), but the time increases exponentially. For N=33 it takes over half a second.

The reason this is so bad is that we are repeating a huge number of operations: fib(10) will call fib(9) and fib(8), but fib(9) will also call fib(8). This quickly gets out of hand.
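
If you want to see that blow-up directly, a quick sketch (not part of the answer above, just an illustration) is to count how many times the naive function actually gets called:

calls = 0

def counted_fib(N):
    global calls
    calls += 1          # count every invocation, including repeats
    if N == 1 or N == 2:
        return 1
    return counted_fib(N-1) + counted_fib(N-2)

counted_fib(10)
print(calls)   # 109 calls just to get fib(10)
calls = 0
counted_fib(20)
print(calls)   # 13529 calls for fib(20)

Each extra step roughly multiplies the number of calls by the golden ratio (~1.618), which is why N=33 already takes over half a second.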

We can make this into a dynamic programming algorithm very easily by storing previously calculated results:

mem = {}

def memo_fib(N):
    if N == 1 or N == 2:
        return 1
    if N in mem:
        # Already computed: just look it up
        return mem[N]
    else:
        # Compute it once, store it, then reuse it from now on
        f = memo_fib(N-1) + memo_fib(N-2)
        mem[N] = f
        return f

This can calculate the 1000th entry in the series in a few milliseconds (for N that large you may need to bump Python's recursion limit with sys.setrecursionlimit, since the first call still recurses about N levels deep), a vast improvement over what we had before, at the cost of only a little extra memory.
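
For instance (a quick illustration, not from the original answer), the stored results also make any later calls essentially free, because the dictionary is already filled in:

print(memo_fib(50))   # 12586269025
print(memo_fib(51))   # 20365011074 -- reuses the values stored by the call above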

Note: There are much better ways of calculating Fibonacci numbers that are faster and/or use less memory, but I didn’t want to complicate the answer :).
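
For example, one of those better ways is the bottom-up ("tabulation") flavor of dynamic programming: instead of recursing from the top and memoizing, you fill in the answers from the smallest case upwards, and since each step only needs the previous two values you can keep just those. A rough sketch (my addition, not part of the answer above):

def fib_iter(N):
    # Walk up from the bottom, keeping only the last two Fibonacci numbers
    a, b = 1, 1          # fib(1), fib(2)
    for _ in range(N - 2):
        a, b = b, a + b
    return b if N >= 2 else a

Same idea as the memoized version (never recompute something you already know), just driven from the bottom of the problem instead of the top, with no recursion depth to worry about and only constant extra memory.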