
Chapter 4

Dynamic Programming and Tree Traversal

1
Dynamic Programming
 Dynamic programming is typically applied to optimization problems.
 In the term dynamic programming, the word “programming” stands for planning; it does not refer to computer programming.
 DP is a technique for solving problems with overlapping subproblems.
 In this method each subproblem is solved only once.
 The result of each subproblem is recorded in a table, from which we can obtain a solution to the original problem.
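As a minimal sketch of this idea (not from the slides; the function name is illustrative), a small Python tabulation of Fibonacci numbers shows how each subproblem is solved once and its result recorded in a table:

# Minimal sketch: each subproblem F(i) is computed exactly once
# and its result is recorded in a table for later reuse.
def fib(n):
    table = [0, 1] + [0] * max(0, n - 1)        # table[i] will hold F(i)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse recorded results
    return table[n]

print(fib(10))  # 55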

2
Dynamic Programming

 For a given problem we may get many feasible solutions; among these we seek an optimal solution.
 Such an optimal solution becomes the solution to the given problem.
 Dynamic Programming is an algorithm design method that
can be used when the solution to a problem may be viewed
as the result of a sequence of decisions.
3
Difference between divide and conquer and dynamic programming
 Divide and conquer splits a problem into independent subproblems and may solve the same subproblem repeatedly; dynamic programming is aimed at overlapping subproblems and records each result so it is computed only once.
4
Difference between greedy algorithm and dynamic programming
 A greedy algorithm commits to a single locally best choice at each step and never reconsiders it; dynamic programming examines all feasible choices for each subproblem and keeps the best, which yields an optimal solution whenever the principle of optimality holds.
5
Steps of Dynamic Programming
 Dynamic programming design involves four major steps.
1. Characterize the structure of an optimal solution (develop a mathematical notation that can express any solution and sub-solution of the given problem).
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution using a bottom-up technique (for this, develop a recurrence relation that relates a solution to its sub-solutions using the notation of step 1).
4. Construct an optimal solution from the computed information, as sketched below.
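As a hedged illustration of these four steps (not part of the original slides; the problem, function name, and coin set are assumptions), the minimum-coin-change problem can be traced through each step in Python:

# Sketch of the four design steps on minimum coin change.
# Step 1: a solution for amount a is described by the last coin used;
#         its sub-solution is an optimal answer for the smaller amount a - c.
# Step 2: recurrence  cost[a] = min over coins c <= a of (1 + cost[a - c]),  with cost[0] = 0.
def min_coins(amount, coins=(1, 3, 4)):
    INF = float("inf")
    cost = [0] + [INF] * amount          # Step 3: bottom-up table of optimal values
    choice = [None] * (amount + 1)       # records the coin chosen at each amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and cost[a - c] + 1 < cost[a]:
                cost[a] = cost[a - c] + 1
                choice[a] = c
    # Step 4: construct an optimal solution from the recorded choices.
    picked, a = [], amount
    while a > 0 and choice[a] is not None:
        picked.append(choice[a])
        a -= choice[a]
    return cost[amount], picked

print(min_coins(6))   # (2, [3, 3])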
6
Principle of optimality

 The dynamic programming algorithm obtains the solution using the principle of optimality.
 The principle of optimality states that “in an optimal sequence of decisions or choices, each subsequence must also be optimal.”
 When it is not possible to apply the principle of optimality, it is almost impossible to obtain the solution using the dynamic programming approach.
 E.g. finding the shortest path in a given graph uses the principle of optimality: every subpath of a shortest path is itself a shortest path between its endpoints.
7
Applications of Dynamic programming
 Multi-stage graph.
 Optimal binary search trees.
 0/1 knapsack problem.
 All pairs shortest path problem.
 Traveling salesperson problem.
 Reliability design.

8
Multistage graphs
 A multistage graph G = (V, E) is a directed graph.
 In this graph all the vertices are partitioned into k stages, where k ≥ 2.
 In the multistage graph problem we have to find a shortest path from the source to the sink.
 The cost of a path from source to sink is the sum of the weights of the edges on the path.

9
Multistage graphs

A multistage graph can be solved using either a forward or a backward approach.
 Example graph: Stage 1 = {S}; Stage 2 = {A, B, C}; Stage 3 = {D, E}; Stage 4 = {T}.
 Edge weights: S→A = 1, S→B = 2, S→C = 7, A→D = 3, A→E = 6, B→D = 4, B→E = 10, C→E = 3, C→T = 10, D→T = 8, E→T = 2.
10
Multistage graphs

 Backward approach:
d (S, T) = min {1+d (A, T), 2+ d (B, T), 7+ d (C, T)}

we will now compute d (A, T), d (B, T) and d (C, T)

d (A, T) = min {3+d (D, T), 6+d (E, T)}

d (B, T) = min {4+ d (D, T), 10+d (E, T)}

d (C, T) = min {3+ d (E, T), 10} (10 is the direct edge C–T)

now let us compute d (D, T) and d (E, T).

d (D, T) = 8

d (E, T) = 2 backward vertex = E


11
Multistage graphs
let us put these values in the above equations
d (A, T) = min {3+8, 6+2}
d (A, T) = 8 A-E-T
d (B, T) = min {4+8, 10+2}
d (B, T) = 12 B-D-T
d (C, T) = min {3+2, 10}
d (C, T) = 5 C-E-T
d (S, T) = min {1+d (A, T), 2+d (B, T), 7+ d (C, T)}
=min {1+8, 2+12, 7+5}
=min {9, 14, 12}
d (S, T) = 9 S-A-E-T
the path with minimum cost is S-A-E-T with the cost 9.

12
Multistage graphs

The method is called backward reasoning.
Time complexity is (|V|+|E|)
13
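A minimal Python sketch of this backward reasoning on the example graph above (the code and names are illustrative, not from the slides); d(v) is memoized so every vertex is solved only once:

# Backward reasoning: d(v) = cheapest cost from vertex v to the sink T.
graph = {                                   # edge weights from the example graph
    'S': {'A': 1, 'B': 2, 'C': 7},
    'A': {'D': 3, 'E': 6},
    'B': {'D': 4, 'E': 10},
    'C': {'E': 3, 'T': 10},
    'D': {'T': 8},
    'E': {'T': 2},
    'T': {},
}

def backward(v, memo=None):
    # returns (cost of the cheapest v -> T path, that path as a list of vertices)
    if memo is None:
        memo = {}
    if v == 'T':
        return 0, ['T']
    if v not in memo:
        best_cost, best_path = float('inf'), None
        for w, weight in graph[v].items():
            sub_cost, sub_path = backward(w, memo)
            if weight + sub_cost < best_cost:   # d(v) = min over w of weight(v, w) + d(w)
                best_cost, best_path = weight + sub_cost, [v] + sub_path
        memo[v] = (best_cost, best_path)
    return memo[v]

print(backward('S'))   # (9, ['S', 'A', 'E', 'T'])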
Multistage graphs
Forward approach
d (S, A) = 1
d (S, B) = 2
d (S, C) = 7
d (S, D) = min {1+ d (A, D), 2+ d (B, D)}
= min {1+3, 2+4}
= min {4, 6}
d (S, D) = 4
d (S, E) = min {1+ d (A, E), 2+ d (B, E), 7+ d (C, E)}
= min {1+6, 2+10, 7+3}
= min {7, 12, 10}
d (S, E) = 7, i.e., path S-A-E is chosen.
14
Multistage graphs
d (S, T) = min {d (S, D) +d (D, T), d (S, E) + d (E, T), d (S, C)
+ d (C, T)}
= min {4+8, 7+2, 7+10}
= min {12, 9, 17}
d (S, T) = 9, i.e., the path through E (S-A-E followed by E-T) is chosen.
 Therefore, the minimum cost = 9 with the path S-A-E-T.
 This method is called forward reasoning.
 Time complexity is (|V|+|E|)

15
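A corresponding Python sketch of forward reasoning on the same example graph (again illustrative, not from the slides); vertices are processed stage by stage, so each distance is final before its outgoing edges are relaxed:

# Forward reasoning: dist[v] = cheapest cost of an S -> v path.
graph = {
    'S': {'A': 1, 'B': 2, 'C': 7},
    'A': {'D': 3, 'E': 6},
    'B': {'D': 4, 'E': 10},
    'C': {'E': 3, 'T': 10},
    'D': {'T': 8},
    'E': {'T': 2},
    'T': {},
}
stage_order = ['S', 'A', 'B', 'C', 'D', 'E', 'T']    # stages 1, 2, 3, 4

dist = {v: float('inf') for v in graph}
pred = {v: None for v in graph}
dist['S'] = 0
for v in stage_order:                    # every edge goes to a later stage,
    for w, weight in graph[v].items():   # so dist[v] is already final here
        if dist[v] + weight < dist[w]:
            dist[w] = dist[v] + weight
            pred[w] = v

# reconstruct the S -> T path from the predecessor links
path, v = [], 'T'
while v is not None:
    path.append(v)
    v = pred[v]
print(dist['T'], path[::-1])   # 9 ['S', 'A', 'E', 'T']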
