
Dynamic Programming: Algorithms and Complexity Analysis
Dynamic programming (DP) is both a mathematical optimization method and an algorithmic
paradigm developed by Richard Bellman in the 1950s. It simplifies complex problems by
breaking them down into simpler subproblems and solving them recursively. Two key attributes
make a problem suitable for dynamic programming: optimal substructure (the solution to the
main problem can be constructed from optimal solutions of its subproblems) and overlapping
subproblems (the same calculations are performed multiple times) [1] [2] .

Understanding the Dynamic Programming Approach

Dynamic programming is fundamentally about solving complex problems by breaking them into
smaller, more manageable parts. Unlike divide-and-conquer, DP specifically targets problems
with overlapping subproblems [2] . The main idea is to:
1. Break down a problem into simpler subproblems
2. Solve each subproblem only once and store the result
3. Reuse these solutions to avoid redundant calculations
4. Combine the solutions to solve the original problem
Implementation approaches include:
Memoization (Top-down): Start with the original problem and recursively solve
subproblems, storing results to avoid recomputation
Tabulation (Bottom-up): Start with the smallest subproblems and iteratively build up to the
original problem
The general process involves:
1. Understanding the problem structure
2. Identifying overlapping subproblems
3. Formulating a recursive relation
4. Computing and storing solutions
5. Constructing the final solution [3]
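
To make the two implementation styles concrete, here is a minimal Python sketch of both on the Fibonacci numbers, a standard warm-up example (illustrative code, with names of our choosing):

from functools import lru_cache

# Memoization (top-down): recurse from the original problem,
# caching each subproblem result so it is computed only once
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Tabulation (bottom-up): solve the smallest subproblems first
# and iterate up to the original problem
def fib_tab(n):
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_memo(10), fib_tab(10))  # both print 55
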
0/1 Knapsack Problem

Problem Definition
Given a set of items, each with a weight and a value, determine which items to include in a
collection so that the total weight is less than or equal to a given limit and the total value is
maximized. Unlike fractional knapsack, items cannot be broken down – you either take an item
completely or leave it [4] .

Dynamic Programming Approach


The key insight is to create a 2D array dp[n+1][W+1] where n is the number of items and W is the
maximum weight capacity. Each cell dp[i][j] represents the maximum value that can be
obtained using the first i items with a weight limit of j [5] .

Algorithm

function KnapsackDP(W, wt[], val[], n):
    // Create a 2D array for storing solutions
    dp[n+1][W+1]

    // Initialize base cases (0 items or 0 weight capacity)
    for i from 0 to n:
        dp[i][0] = 0
    for j from 0 to W:
        dp[0][j] = 0

    // Fill the dp table in bottom-up manner
    for i from 1 to n:
        for j from 1 to W:
            // If current item's weight is less than or equal to capacity
            if wt[i-1] <= j:
                // Maximum of including or excluding current item
                dp[i][j] = max(val[i-1] + dp[i-1][j-wt[i-1]], dp[i-1][j])
            else:
                // Can't include current item
                dp[i][j] = dp[i-1][j]

    return dp[n][W]

Example
Consider the following items:

Item    Weight    Value
A       1         2
B       3         4
C       5         7
D       7         10

With knapsack capacity W = 8


Using dynamic programming, the constructed table leads to a maximum value of 12 (by
selecting items with weights 1 and 7, i.e., A and D) [5] .
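
As a quick check, here is a minimal runnable Python version of the tabulation above, applied to this example (function and variable names are our own):

def knapsack_dp(W, wt, val):
    """0/1 knapsack by bottom-up tabulation; returns the maximum value."""
    n = len(wt)
    # dp[i][j] = best value using the first i items with capacity j
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if wt[i - 1] <= j:
                # Take item i-1 or skip it, whichever is better
                dp[i][j] = max(val[i - 1] + dp[i - 1][j - wt[i - 1]],
                               dp[i - 1][j])
            else:
                dp[i][j] = dp[i - 1][j]
    return dp[n][W]

# Items A, B, C, D from the table above, capacity 8
print(knapsack_dp(8, [1, 3, 5, 7], [2, 4, 7, 10]))  # prints 12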

Complexity Analysis
Time Complexity: O(N*W) where N is the number of items and W is the knapsack
capacity [6]
Space Complexity: O(N*W) for storing the DP table [6]

Longest Common Subsequence Problem

Problem Definition
Given two sequences, find the longest subsequence common to both. A subsequence is a
sequence that appears in the same relative order but is not necessarily contiguous [7].

Dynamic Programming Approach


Create a table L[m+1][n+1] where m and n are the lengths of the two sequences. Each cell L[i][j]
represents the length of the LCS of the first i characters of sequence 1 and the first j characters
of sequence 2 [7] .

Algorithm

function LCS(X, Y, m, n):
    // Create a table for storing solutions
    L[m+1][n+1]

    // Initialize and fill the table
    for i from 0 to m:
        for j from 0 to n:
            if i == 0 or j == 0:
                L[i][j] = 0
            elif X[i-1] == Y[j-1]:
                L[i][j] = L[i-1][j-1] + 1
            else:
                L[i][j] = max(L[i-1][j], L[i][j-1])

    // L[m][n] contains the length of LCS
    return L[m][n]
Example
For sequences X = "ABCBDAB" and Y = "BDCABA", the LCS is "BCBA" with length 4 [7] .
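
A direct Python transcription of the table (names are ours) reproduces this result:

def lcs_length(X, Y):
    """Length of the longest common subsequence of X and Y."""
    m, n = len(X), len(Y)
    # L[i][j] = LCS length of the prefixes X[:i] and Y[:j]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4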

Complexity Analysis
Time Complexity: O(m*n) where m and n are the lengths of the two sequences [8]
Space Complexity: O(m*n) for storing the DP table [8]

Travelling Salesman Problem

Problem Definition
Given a set of cities and the distance between every pair of cities, find the shortest possible
route that visits each city exactly once and returns to the origin city [9] .

Dynamic Programming Approach


We use a state-based DP approach. Define C(S,i) as the cost of the minimum path starting from
a fixed source (like city 1), visiting all vertices in set S exactly once, and ending at vertex i [10] .

Algorithm

function TSP_DP(dist, n, start):
    // Create a table for storing solutions
    // dp[mask][j] = min cost path visiting all vertices in mask and ending at j
    dp[2^n][n]

    // Initialize all entries as infinity
    for i from 0 to 2^n-1:
        for j from 0 to n-1:
            dp[i][j] = infinity

    // Base case: cost to visit just the starting vertex
    dp[1 << start][start] = 0

    // Iterate over all subsets of vertices
    for mask from 1 to 2^n-1:
        for end from 0 to n-1:
            // Skip if end vertex is not in current subset
            if (mask & (1 << end)) == 0:
                continue
            // If only one vertex in subset and it's not the start, invalid
            if mask == (1 << end) and end != start:
                continue

            // Try all possible previous vertices
            for prev from 0 to n-1:
                // Skip if prev is the same as end or not in subset
                if prev == end or (mask & (1 << prev)) == 0:
                    continue
                // The subset visited before end was added
                prev_mask = mask ^ (1 << end)
                // Update dp value
                dp[mask][end] = min(dp[mask][end], dp[prev_mask][prev] + dist[prev][end])

    // Find minimum cost to return to starting city
    answer = infinity
    for end from 0 to n-1:
        if end != start:
            answer = min(answer, dp[(1 << n) - 1][end] + dist[end][start])

    return answer

Example
For a complete graph with 4 vertices, the algorithm computes the minimum cost path visiting all
vertices and returning to the start [9] .
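
The sketch below runs the same bitmask recurrence in Python, written in the forward direction for brevity. The 4-city symmetric distance matrix is our own illustrative example; its optimal tour 0 -> 1 -> 3 -> 2 -> 0 costs 80:

INF = float("inf")

def tsp_dp(dist, start=0):
    """Held-Karp: minimum cost of a tour visiting every city exactly once."""
    n = len(dist)
    # dp[mask][j] = min cost to leave `start`, visit exactly the set `mask`, end at j
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1 << start][start] = 0
    for mask in range(1 << n):
        for end in range(n):
            if not (mask >> end) & 1 or dp[mask][end] == INF:
                continue
            # Extend the partial path to any city not yet visited
            for nxt in range(n):
                if (mask >> nxt) & 1:
                    continue
                new_mask = mask | (1 << nxt)
                cand = dp[mask][end] + dist[end][nxt]
                if cand < dp[new_mask][nxt]:
                    dp[new_mask][nxt] = cand
    full = (1 << n) - 1
    # Close the tour by returning to the start
    return min(dp[full][end] + dist[end][start]
               for end in range(n) if end != start)

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_dp(dist))  # prints 80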

Complexity Analysis
Time Complexity: O(n²*2ⁿ) where n is the number of cities [11]
Space Complexity: O(n*2ⁿ) due to the storage of states [11]

Single Source Shortest Path: Bellman Ford Algorithm

Problem Definition
Find the shortest paths from a source vertex to all other vertices in a weighted graph, even with
negative edge weights (but without negative cycles) [12] .

Dynamic Programming Approach


Bellman Ford is based on the "principle of relaxation." It iteratively relaxes edges to find shorter
paths, considering the possibility that the shortest path might have more edges [12] .

Algorithm

function BellmanFord(graph, source):
    // Initialize distances
    for each vertex v in graph:
        distance[v] = infinity
        predecessor[v] = null
    distance[source] = 0

    // Relax all edges |V|-1 times
    for i from 1 to |V|-1:
        for each edge (u, v) with weight w in graph:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u

    // Check for negative cycles
    for each edge (u, v) with weight w in graph:
        if distance[u] + w < distance[v]:
            return "Graph contains a negative cycle"

    return distance, predecessor

Example
For a graph with negative edge weights but no negative cycles, Bellman Ford correctly
computes the shortest paths from the source vertex to all others [13] .
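
A compact Python version, exercised on a small hypothetical graph with one negative edge (the edge list is our own illustration):

INF = float("inf")

def bellman_ford(num_vertices, edges, source):
    """edges: list of (u, v, w). Returns distances, or None if a negative cycle exists."""
    dist = [INF] * num_vertices
    dist[source] = 0
    # Relax every edge |V|-1 times
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
print(bellman_ford(4, edges, 0))  # prints [0, 4, 1, 3]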

Complexity Analysis
Time Complexity: O(V*E) where V is the number of vertices and E is the number of
edges [13]
Space Complexity: O(V+E) using adjacency list representation [14]

All-Pair Shortest Path Problem: Floyd-Warshall Algorithm

Problem Definition
Find shortest paths between all pairs of vertices in a weighted graph, which may contain
negative edge weights but no negative cycles [15] .

Dynamic Programming Approach


The algorithm considers each vertex as a potential intermediate vertex in the shortest path
between any two vertices and updates the distance matrix accordingly [16] .

Algorithm

function FloydWarshall(graph):
    // Initialize distance matrix with direct edges
    dist[1...V][1...V] = graph

    // For each vertex as potential intermediate
    for k from 1 to V:
        // For all possible source vertices
        for i from 1 to V:
            // For all possible destination vertices
            for j from 1 to V:
                // If going through vertex k gives a shorter path
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    // Check for negative cycles (optional)
    for i from 1 to V:
        if dist[i][i] < 0:
            return "Graph contains a negative cycle"

    return dist

Example
For a directed weighted graph with 4 vertices, Floyd-Warshall computes the shortest distances
between all pairs of vertices [17] .
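
For illustration, here is the triple loop in Python on a hypothetical 4-vertex directed graph (the adjacency matrix is our own example):

INF = float("inf")

def floyd_warshall(graph):
    """graph: adjacency matrix with INF for absent edges. Returns all-pairs distances."""
    V = len(graph)
    dist = [row[:] for row in graph]  # copy so the input is left intact
    for k in range(V):                # allow vertices 0..k as intermediates
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0,   3,   7,   INF],
         [INF, 0,   1,   INF],
         [INF, INF, 0,   2],
         [6,   INF, INF, 0]]
print(floyd_warshall(graph)[0][3])  # prints 6 (path 0 -> 1 -> 2 -> 3)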

Complexity Analysis
Time Complexity: O(V³) due to three nested loops [18]
Space Complexity: O(V²) for storing the distance matrix [18]

Optimal Binary Search Tree (OBST)

Problem Definition
Construct a binary search tree that minimizes the expected search cost, given the probabilities
of successful and unsuccessful searches for keys [19] .

Dynamic Programming Approach


The approach involves finding the optimal root for each possible subrange of keys by evaluating
all possibilities and choosing the one that minimizes the total cost [20] .

Algorithm

function OptimalBST(p[], q[], n):
    // p[1..n] = probabilities of successful searches
    // q[0..n] = probabilities of unsuccessful searches
    // n = number of keys

    // Create cost, weight and root tables (indices i = 1..n+1, j = 0..n)
    cost[n+2][n+1]
    root[n+1][n+1]
    w[n+2][n+1]

    // Initialize for empty subtrees (only an unsuccessful search remains)
    for i from 1 to n+1:
        cost[i][i-1] = q[i-1]
        w[i][i-1] = q[i-1]

    // Fill tables bottom-up by subtree size L
    for L from 1 to n:
        for i from 1 to n-L+1:
            j = i+L-1
            cost[i][j] = infinity
            w[i][j] = w[i][j-1] + p[j] + q[j]
            // Try each key as root
            for r from i to j:
                t = cost[i][r-1] + cost[r+1][j] + w[i][j]
                if t < cost[i][j]:
                    cost[i][j] = t
                    root[i][j] = r

    return cost[1][n], root
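
A Python transcription of the recurrence, as a sketch under the convention that cost counts both successful and unsuccessful searches (names are ours):

INF = float("inf")

def optimal_bst(p, q):
    """p[1..n]: key probabilities (p[0] unused); q[0..n]: gap probabilities.
    Returns (minimum expected cost, root table)."""
    n = len(p) - 1
    cost = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    # Empty subtrees: only the unsuccessful-search probability remains
    for i in range(1, n + 2):
        cost[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = INF
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):   # try each key as the subtree root
                t = cost[i][r - 1] + cost[r + 1][j] + w[i][j]
                if t < cost[i][j]:
                    cost[i][j] = t
                    root[i][j] = r
    return cost[1][n], root

Be aware that references weight unsuccessful searches differently, so the numeric cost returned for a given input may differ from figures quoted elsewhere.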

Example
For keys {10, 20, 30} with probabilities {0.5, 0.1, 0.05} and unsuccessful search probabilities
{0.15, 0.1, 0.05, 0.05}, the optimal BST has an expected cost of 1.75 [20] .

Complexity Analysis
Time Complexity: O(n³) in the basic form, but can be reduced to O(n²) [19]
Space Complexity: O(n²) for the cost and root tables [21]

Coin Change Problem

Problem Definition
Given a set of coin denominations and a target amount, find the minimum number of coins
needed to make the amount (or the number of different ways to make the amount) [22] .

Dynamic Programming Approach


For the minimum coins problem, we define dp[i] as the minimum number of coins needed to
make amount i. For the counting ways problem, dp[i] represents the number of ways to make
amount i [22] .

Algorithm (Minimum coins)

function minCoins(coins[], amount):
    // Initialize dp array
    dp[amount+1]
    dp[0] = 0

    // Fill dp array for each amount
    for i from 1 to amount:
        dp[i] = infinity
        for coin in coins:
            if i - coin >= 0:
                dp[i] = min(dp[i], dp[i-coin] + 1)

    // Return result or -1 if amount cannot be made
    if dp[amount] == infinity:
        return -1
    else:
        return dp[amount]

Algorithm (Count ways)

function countWays(coins[], amount):
    // Initialize dp array (all entries start at 0)
    dp[amount+1]
    dp[0] = 1

    // Fill dp array for each coin and amount
    for coin in coins:
        for i from coin to amount:
            dp[i] += dp[i-coin]

    return dp[amount]

Example
For coins {1, 2, 5} and amount 11, the minimum number of coins needed is 3 (5+5+1) [22] .
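
Both variants in runnable Python (names are ours). For the example above, the minimum-coins version returns 3; the counting version, by keeping the coin loop outermost so each combination is counted once, reports 11 ways:

INF = float("inf")

def min_coins(coins, amount):
    """Fewest coins summing to `amount`, or -1 if it cannot be made."""
    dp = [0] + [INF] * amount              # dp[i] = min coins for amount i
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return -1 if dp[amount] == INF else dp[amount]

def count_ways(coins, amount):
    """Number of coin combinations that sum to `amount`."""
    dp = [1] + [0] * amount                # one way to make amount 0
    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]
    return dp[amount]

print(min_coins([1, 2, 5], 11))   # prints 3  (5 + 5 + 1)
print(count_ways([1, 2, 5], 11))  # prints 11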

Complexity Analysis
Time Complexity: O(amount * number of coin types) [22]
Space Complexity: O(amount) [22]

Matrix Chain Multiplication

Problem Definition
Given a sequence of matrices, find the most efficient way to multiply these matrices together to
minimize the number of scalar multiplications [23] .

Dynamic Programming Approach


Define m[i,j] as the minimum number of scalar multiplications needed to compute the product of
matrices A_i through A_j [23] .

Algorithm

function MatrixChainMultiplication(p[], n):
    // p[] = dimensions array where matrix i has dimensions p[i-1] x p[i]
    // n = number of matrices + 1

    // Create tables for costs and split positions
    m[n][n]
    s[n][n]

    // Initialize diagonal (single matrix case)
    for i from 1 to n-1:
        m[i][i] = 0

    // L is chain length
    for L from 2 to n-1:
        for i from 1 to n-L:
            j = i+L-1
            m[i][j] = infinity

            // Try each split position
            for k from i to j-1:
                cost = m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k

    return m[1][n-1], s

Example
For matrices with dimensions 23×26, 26×27, and 27×20, the minimum number of scalar
multiplications is 26,000 [23] .
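
Verifying the example in Python (the dimension array p encodes the three matrices; names are ours):

INF = float("inf")

def matrix_chain_order(p):
    """p: dimensions; matrix i is p[i-1] x p[i]. Returns min scalar multiplications."""
    n = len(p) - 1  # number of matrices
    # m[i][j] = min cost to multiply the chain of matrices i..j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):               # split position
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
    return m[1][n]

print(matrix_chain_order([23, 26, 27, 20]))  # prints 26000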

Complexity Analysis
Time Complexity: O(n³) where n is the number of matrices [24]
Space Complexity: O(n²) for the m and s tables [24]

Conclusion
Dynamic programming is a powerful technique for solving complex optimization problems
efficiently. All the algorithms discussed share the core principle of breaking down problems into
overlapping subproblems and building solutions incrementally. The time and space complexities
of these algorithms make them practical for a variety of real-world applications, from resource
allocation (Knapsack) to route optimization (TSP) to sequence comparison (LCS).
By understanding these classic dynamic programming problems and their solutions, we can
develop the insights needed to approach new problems using similar principles of optimal
substructure and memoization.

1. https://en.wikipedia.org/wiki/Dynamic_programming
2. https://stackoverflow.blog/2022/01/31/the-complete-beginners-guide-to-dynamic-programming/
3. https://www.masaischool.com/blog/understanding-dynamic-programming-101/
4. https://www.gatevidyalay.com/0-1-knapsack-problem-using-dynamic-programming-approach/
5. https://www.tutorialspoint.com/data_structures_algorithms/01_knapsack_problem.htm
6. https://www.interviewbit.com/blog/0-1-knapsack-problem/
7. https://www.programiz.com/dsa/longest-common-subsequence
8. https://www.enjoyalgorithms.com/blog/longest-common-subsequence/
9. https://www.baeldung.com/cs/tsp-dynamic-programming
10. https://www.interviewbit.com/blog/travelling-salesman-problem/
11. https://blog.heycoach.in/time-complexity-of-tsp-algorithms/
12. https://www.shiksha.com/online-courses/articles/introduction-to-bellman-ford-algorithm/
13. https://www.scholarhat.com/tutorial/datastructures/bellman-fords-algorithm
14. https://blog.heycoach.in/bellman-ford-algorithm-and-space-complexity/
15. https://brilliant.org/wiki/floyd-warshall-algorithm/
16. https://www.tutorialspoint.com/data_structures_algorithms/floyd_warshall_algorithm.htm
17. https://www.shiksha.com/online-courses/articles/about-floyd-warshall-algorithm/
18. https://en.wikipedia.org/wiki/Floyd–Warshall_algorithm
19. https://www.scaler.com/topics/optimal-binary-search-tree/
20. https://codecrucks.com/optimal-binary-search-tree-how-to-solve-using-dynamic-programming/
21. https://blog.heycoach.in/space-complexity-of-optimal-bst-construction/
22. https://www.simplilearn.com/tutorials/data-structure-tutorial/coin-change-problem-with-dynamic-programming
23. https://www.tutorialspoint.com/data_structures_algorithms/matrix_chain_multiplication.htm
24. https://www.scaler.in/matrix-chain-multiplication/
