Remember your past. Yes this is all about Dynamic Programming.
Jonathan Paulson explains Dynamic Programming in his amazing Quora answer here.
> *Writes down "1+1+1+1+1+1+1+1 =" on a sheet of paper.*
> "What's that equal to?"
> *Counting* "Eight!"
> *Writes down another "1+" on the left.*
> "What about that?"
> *Quickly* "Nine!"
> "How'd you know it was nine so fast?"
> "You just added one more!"
> "So you didn't need to recount because you remembered there were eight! Dynamic Programming is just a fancy way to say remembering stuff to save time later!"
Dynamic programming is all about remembering what you have already solved and reusing that result instead of solving the same problem again. The example above illustrates exactly this.
If you are given a problem that can be broken down into smaller sub-problems, and those sub-problems can be broken down further still, and you find that some of them overlap, then you've encountered a DP problem.
Some famous Dynamic Programming algorithms are:
- Unix diff for comparing two files
- Bellman-Ford for shortest path routing in networks
- TeX's paragraph line-breaking (TeX is the predecessor of LaTeX)
- WASP – Winning and Score Predictor
In the programming world, Dynamic Programming is a powerful technique that lets us solve, in O(n²) or O(n³) time, problems for which a naive approach would take exponential time.
Dynamic Programming is not just recursion.
Dynamic Programming is more like recursion + common sense.
Let's look at the Fibonacci example. Below is the basic recursive version:

```python
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```
Notice that if we call, say, `fib(5)`, we produce a call tree that calls the function on the same value many different times:

```
fib(5)
fib(4) + fib(3)
(fib(3) + fib(2)) + (fib(2) + fib(1))
((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
(((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
```
`fib(2)` was calculated three times from scratch. In larger examples, many more values of `fib`, or sub-problems, are recalculated, leading to an exponential-time algorithm.
Now, suppose we have a simple map object, `m`, which maps each value of `fib` that has already been calculated to its result, and we modify our function to use and update it. The resulting function runs in O(n) time instead of exponential time (but requires O(n) space):

```python
m = {0: 0, 1: 1}  # base cases, pre-filled

def fib(n):
    if n not in m:                      # solve each value only once
        m[n] = fib(n - 1) + fib(n - 2)
    return m[n]
```
This technique of saving values that have already been calculated is called memoization: we record the results of specific states so they can be looked up later when solving other sub-problems.
The majority of Dynamic Programming problems can be categorized into two types:
1. Optimization problems.
2. Combinatorial problems.
Optimization problems: In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its sub-problems. For example, say problem P can be broken down into sub-problems p1, p2, and p3; then an optimal solution to P can be assembled from optimal solutions to p1, p2, and p3.
A classic example of an optimization problem is finding the shortest path between two cities.
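To make optimal substructure concrete, here is a minimal sketch (not from the original post, with made-up grid values) of a related optimization problem: the minimum-cost path from the top-left to the bottom-right of a grid, moving only right or down. The best cost to reach a cell is built from the best costs of the cells above it and to its left:

```python
def min_path_cost(grid):
    """Minimum total cost of a top-left to bottom-right path (moves: right, down)."""
    rows, cols = len(grid), len(grid[0])
    cost = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Optimal substructure: best way into (r, c) extends the
            # best way into (r-1, c) or (r, c-1).
            if r > 0 and c > 0:
                best_prev = min(cost[r - 1][c], cost[r][c - 1])
            elif r > 0:
                best_prev = cost[r - 1][c]
            elif c > 0:
                best_prev = cost[r][c - 1]
            else:
                best_prev = 0
            cost[r][c] = grid[r][c] + best_prev
    return cost[-1][-1]

print(min_path_cost([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))  # 7, via the path 1 → 3 → 1 → 1 → 1
```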
Combinatorial problems: In computer science, a problem is said to have overlapping sub-problems if it can be broken down into sub-problems that are reused several times, or if a recursive algorithm for the problem solves the same sub-problem over and over rather than always generating new sub-problems. Computing the Fibonacci sequence is the standard example. Computing the nth Fibonacci number F(n) can be broken down into the sub-problems of computing F(n − 1) and F(n − 2) and adding the two. The sub-problem of computing F(n − 1) itself involves computing F(n − 2), so the computation of F(n − 2) can be reused; the Fibonacci sequence thus exhibits overlapping sub-problems.
Combinatorial problems can be solved in either of two ways:
Top-down approach: This is the recursive formulation of the problem. If the solution can be formulated recursively using the solutions to its sub-problems, and those sub-problems overlap, then we can memoize: store the solutions to the sub-problems in a table.
Whenever we attempt to solve a new sub-problem, we first check the table to see whether it has already been solved. If a solution has been recorded, we use it directly; otherwise we solve the sub-problem and add its solution to the table.
For example, if sub-problem p6 is needed to solve both sub-problems p2 and p3, we save its result the first time we solve it and reuse that result while solving p3.
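In Python, this check-the-table-first pattern can be sketched with the standard library's `functools.lru_cache` decorator, which maintains the table automatically:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every sub-problem result ever computed
def fib(n):
    if n <= 1:
        return n
    # Each distinct n reaches this line at most once; repeat calls
    # are answered from the cache.
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, instant despite the huge call tree
```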
Bottom-up approach: Solve the smallest sub-problems first and use their solutions to build up to larger ones. This is usually done in tabular form, iteratively generating solutions to bigger and bigger sub-problems from the solutions to smaller ones. For example, we might first solve p1, p2, p3, p4, and p5, then use those to solve p6 and p7, and finally p8.
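A bottom-up sketch of the same Fibonacci computation builds from the smallest sub-problems upward. Since each step only needs the previous two values, we can even drop the table and keep O(1) space:

```python
def fib(n):
    """Bottom-up Fibonacci: iterate from the base cases upward."""
    if n <= 1:
        return n
    prev, curr = 0, 1  # fib(0), fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # advance to the next sub-problem
    return curr

print(fib(10))  # 55
```

No recursion, no cache misses: each sub-problem is solved exactly once, in order.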
Hope this cleared up some of your doubts about Dynamic Programming.