Dynamic Programming
Dr. Dwiti Krishna Bebarta
Text Books:
1. Ellis Horowitz, Sartaj Sahni, Sanguthevar Rajasekaran, “Fundamentals of
Computer Algorithms”, 2nd Edition, Universities Press.
2. Thomas H. Cormen et al., “Introduction to Algorithms”, PHI Learning.
The General Method
What is Dynamic Programming?
• Dynamic programming solves optimization problems by
combining solutions to sub-problems
• “Programming” refers to a tabular method with a series of
choices.
• A set of choices must be made to arrive at an optimal
solution
• As choices are made, sub-problems of the same form arise
frequently
• The key is to store the solutions of sub-problems to be
reused in the future
Dynamic programming
• Dynamic programming is a method for designing
efficient algorithms for recursively solvable problems with
the following properties:
1. Optimal Substructure: An optimal solution to an
instance contains an optimal solution to its sub-
instances.
2. Overlapping Sub-problems: The number of distinct sub-problems
is small, so during the recursion the same instances are
encountered over and over again.
Example 1
• Fibonacci numbers are defined by:
F(0) = 0
F(1) = 1
F(i) = F(i-1) + F(i-2) for i ≥ 2.
Time Complexity?
Fibonacci Example
• A direct recursive implementation wastes time redoing work:
the same sub-problems are recomputed many times.
• Time: exponential in i.
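As a minimal sketch of the recurrence above, here is the direct recursive implementation in Python (the function name `fib` is my own choice, not from the slides):

```python
def fib(i):
    """Naive recursive Fibonacci: F(0)=0, F(1)=1, F(i)=F(i-1)+F(i-2)."""
    if i < 2:
        return i
    # Both recursive calls recompute the same sub-instances,
    # so the call tree grows exponentially in i.
    return fib(i - 1) + fib(i - 2)

print(fib(10))  # 55
```

For example, fib(5) calls fib(3) twice and fib(2) three times; the redundancy compounds as i grows, which is the exponential blow-up noted above.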
Memoization in Optimization
• Definition: An algorithmic technique which saves
(memorizes) a computed answer for later reuse, rather than
re-computing the answer.
• Remember the solutions for the sub-instances
• If the same sub-instance needs to be solved again, the same
answer can be used.
Memoization
Memoization is an optimization technique in computer
programming that speeds up applications by storing the
results of expensive function calls and returning them
when the same inputs are encountered again, avoiding
redundant calculations.
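A minimal memoized sketch of the Fibonacci recursion, assuming Python; the dictionary-based cache and the name `fib_memo` are illustrative:

```python
def fib_memo(i, memo=None):
    """Top-down Fibonacci with memoization: each F(k) is computed once."""
    if memo is None:
        memo = {}
    if i in memo:
        return memo[i]  # same input seen before: reuse the stored answer
    memo[i] = i if i < 2 else fib_memo(i - 1, memo) + fib_memo(i - 2, memo)
    return memo[i]

print(fib_memo(100))  # 354224848179261915075
```

Each of the sub-instances 0..100 is solved exactly once, so the exponential recursion becomes linear in i. (Python's `functools.lru_cache` provides the same caching as a decorator.)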
From Memoization to Dynamic Programming
• Determine the set of subinstances that need to be
solved.
• Instead of recursing from top to bottom, solve each of
the required subinstances in smallest to largest order,
storing results along the way.
Dynamic Programming
First determine the complete set of subinstances:
{100, 99, 98,…, 0}
Then compute them in order, smallest to largest, so that
every subinstance is solved before its answer is needed.
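The smallest-to-largest strategy can be sketched in Python as follows (the names `fib_bottom_up` and `table` are illustrative):

```python
def fib_bottom_up(n):
    """Bottom-up Fibonacci: solve subinstances 0, 1, ..., n in order,
    storing results so each answer is ready before it is needed."""
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(100))  # 354224848179261915075
```

Since each step only reads the two previous entries, the full table could be replaced by two variables, giving O(1) space; the table form matches the slides' emphasis on storing every sub-result.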
• Dynamic programming solves optimization problems by combining
solutions to sub-problems.
• This sounds familiar: divide and conquer also combines solutions to sub-
problems, but applies when the sub-problems are disjoint. For example, in
the recursion tree for merge sort on an array A[1..8], the index ranges at
each level do not overlap.
• Dynamic programming applies when the sub-problems overlap. Notice
that not only do lengths repeat, but also that there are entire sub-trees
repeating. It would be redundant to redo the computations in these sub-
trees. (FIGURE IN NEXT SLIDE)
• Dynamic programming solves each sub-problem just once, and saves its
answer in a table, to avoid the re-computation. It uses additional memory
to save computation time: an example of a time-memory trade-off.
• There are many examples of computations that require exponential time
without dynamic programming but become polynomial with dynamic
programming.
Dynamic Programming vs Divide-and-Conquer
• Recall the divide-and-conquer approach
– Partition the problem into independent subproblems
– Solve the subproblems recursively
– Combine solutions of subproblems
– e.g., mergesort, quicksort
• This contrasts with the dynamic programming
approach
Dynamic Programming vs Divide-and-Conquer
• Dynamic programming is applicable when sub-problems
are not independent
– i.e., sub-problems share sub-sub-problems
– Solve every sub-sub-problem only once and store the
answer for use when it reappears
A dynamic programming approach consists of a sequence of
four steps:
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-
up fashion
4. Construct an optimal solution from computed
information (Optional)
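As a hedged sketch, the four steps can be traced on one of the applications listed below, the 0/1 knapsack problem; the instance data and the function name `knapsack` are illustrative assumptions, not from the slides:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via the four DP steps:
    1. Structure: an optimal solution either includes item i or not.
    2. Recurrence: best(i, c) = max(best(i-1, c),
                                    best(i-1, c - w[i]) + v[i]).
    3. Compute best(i, c) bottom-up in a table.
    4. Walk the table backwards to recover the chosen items.
    """
    n = len(values)
    # best[i][c] = max value using the first i items with capacity c
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]          # skip item i
            if weights[i - 1] <= c:              # or take item i
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - weights[i - 1]] + values[i - 1])
    # Step 4 (optional): reconstruct one optimal set of item indices.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return best[n][capacity], sorted(chosen)

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # (220, [1, 2])
```

The table makes step 3 bottom-up (row i depends only on row i-1), and step 4 shows why storing the whole table is useful: the choices can be read back out of it.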
Applications
• Multistage Graphs
• All pairs-shortest paths
• Optimal Binary search trees
• 0/1 Knapsack Problem
• The traveling salesperson problem
• Matrix Chain Multiplications
• Longest Common Subsequence
• Assembly Line Scheduling
• Optimal Polygon Triangulation
• Minimum Edit Distance