Dynamic programming is an algorithm design technique for solving problems with optimal substructure. It works by breaking a problem down into subproblems and storing the results of already solved subproblems, avoiding recomputing them. These slides give examples of using dynamic programming to find the shortest path in a multi-stage graph and to solve the 0/1 knapsack problem. Dynamic programming relies on the principle of optimality: if a sequence of decisions is optimal, its subsequences must also be optimal.

Dynamic Programming

Fibonacci sequence
• Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, …
  Fi = i            if i ≤ 1
  Fi = Fi-1 + Fi-2  if i ≥ 2
• Solved by a recursive program:
  [Figure: recursion tree for f5. It calls f4 and f3; in the full tree f3 is computed twice, f2 three times, f1 five times and f0 three times.]
• Much replicated computation is done.
• It should be solved by a simple loop.
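A minimal sketch in Python of both remedies (the function names are illustrative, not from the slides): the memoized version stores each Fi the first time it is computed, and the loop version keeps only the last two values.

```python
def fib_memo(n, memo=None):
    """Fibonacci with memoization: each F(i) is computed only once."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]


def fib_loop(n):
    """Fibonacci with a simple loop: O(n) time, O(1) extra space."""
    a, b = 0, 1          # a = F(0), b = F(1)
    for _ in range(n):
        a, b = b, a + b  # advance the pair one step
    return a


print(fib_memo(8), fib_loop(8))  # both give 21
```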
Dynamic Programming
• Dynamic Programming is an algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.

The shortest path
• To find a shortest path in a multi-stage graph:
  [Figure: a small multi-stage graph from S to T; the greedily chosen path runs S → A → B → T.]
• Apply the greedy method:
  the shortest path from S to T:
  1 + 2 + 5 = 8

The shortest path in multistage graphs
• e.g.
  [Figure: a multi-stage graph with edge costs S→A = 1, S→B = 2, S→C = 5, A→D = 4, A→E = 11, B→D = 9, B→E = 5, B→F = 16, C→F = 2, D→T = 18, E→T = 13, F→T = 2.]
• The greedy method cannot be applied to this case:
  (S, A, D, T)  1+4+18 = 23.
• The real shortest path is:
  (S, C, F, T)  5+2+2 = 9.

Dynamic programming approach
• Dynamic programming approach (forward approach):
  [Figure: the same graph, redrawn with the first-stage edges S→A = 1, S→B = 2, S→C = 5 leading into the remaining subproblems d(A, T), d(B, T), d(C, T).]
• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
• d(A, T) = min{4+d(D, T), 11+d(E, T)}
          = min{4+18, 11+13} = 22.

• d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
          = min{9+18, 5+13, 16+2} = 18.
  [Figure: the same multi-stage graph as above.]
• d(C, T) = min{2+d(F, T)} = 2+2 = 4
• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
          = min{1+22, 2+18, 5+4} = 9.
• The above way of reasoning is called backward reasoning.
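A small sketch in Python of this forward formulation, d(v, T) = min over edges (v, w) of c(v, w) + d(w, T), evaluated with memoization on the example graph (the edge list below is simply the figure transcribed; the function name d is illustrative).

```python
from functools import lru_cache

# Edge costs of the example multi-stage graph, transcribed from the figure.
cost = {
    'S': {'A': 1, 'B': 2, 'C': 5},
    'A': {'D': 4, 'E': 11},
    'B': {'D': 9, 'E': 5, 'F': 16},
    'C': {'F': 2},
    'D': {'T': 18},
    'E': {'T': 13},
    'F': {'T': 2},
}

@lru_cache(maxsize=None)
def d(v):
    """Shortest distance from node v to T (forward formulation, solved backwards)."""
    if v == 'T':
        return 0
    return min(c + d(w) for w, c in cost[v].items())

print(d('S'))  # 9, realised by the path S -> C -> F -> T
```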

Backward approach (forward reasoning)
  [Figure: the same multi-stage graph as above.]
• d(S, A) = 1
  d(S, B) = 2
  d(S, C) = 5
• d(S, D) = min{d(S, A)+d(A, D), d(S, B)+d(B, D)}
          = min{1+4, 2+9} = 5
  d(S, E) = min{d(S, A)+d(A, E), d(S, B)+d(B, E)}
          = min{1+11, 2+5} = 7
  d(S, F) = min{d(S, B)+d(B, F), d(S, C)+d(C, F)}
          = min{2+16, 5+2} = 7
• d(S, T) = min{d(S, D)+d(D, T), d(S, E)+d(E, T), d(S, F)+d(F, T)}
          = min{5+18, 7+13, 7+2}
          = 9
  [Figure: the same multi-stage graph as above.]
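The same graph solved the other way round, as a sketch: here d(S, v) is filled in stage by stage from S towards T (again, the predecessor lists are just the figure transcribed).

```python
# Predecessors of each node with the corresponding edge costs, stage by stage.
pred = {
    'A': {'S': 1}, 'B': {'S': 2}, 'C': {'S': 5},
    'D': {'A': 4, 'B': 9}, 'E': {'A': 11, 'B': 5}, 'F': {'B': 16, 'C': 2},
    'T': {'D': 18, 'E': 13, 'F': 2},
}

dist = {'S': 0}
# Visit nodes in stage (topological) order so every predecessor is already known.
for v in ['A', 'B', 'C', 'D', 'E', 'F', 'T']:
    dist[v] = min(dist[u] + c for u, c in pred[v].items())

print(dist['T'])  # 9
```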

Principle of optimality
• Principle of optimality: Suppose that in solving a problem, we have to make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must be optimal.
• e.g. the shortest path problem:
  If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.
• In summary, if a problem can be described by a multistage graph, then it can be solved by dynamic programming.

Dynamic programming
• Forward approach and backward approach:
  • Note that if the recurrence relations are formulated using the forward approach, then the relations are solved backwards, i.e., beginning with the last decision.
  • On the other hand, if the relations are formulated using the backward approach, they are solved forwards.
• To solve a problem by using dynamic programming:
  • Find out the recurrence relations.
  • Represent the problem by a multistage graph.

0/1 knapsack problem
• n objects, weights W1, W2, …, Wn
  profits P1, P2, …, Pn
  capacity M
• maximize   Σ(1≤i≤n) Pi xi
  subject to Σ(1≤i≤n) Wi xi ≤ M
             xi = 0 or 1,  1 ≤ i ≤ n
• e.g.
  i   Wi   Pi
  1   10   40        M = 10
  2    3   20
  3    5   30
The multistage graph solution
• The 0/1 knapsack problem can be described by a multistage graph.
  [Figure: multi-stage graph for the example. Stage i makes the decision xi; each node is labelled with the decisions made so far (1, 0, 10, 01, 00, 100, 011, 010, 001, 000). An edge taken with xi = 1 carries weight Pi (40, 20 or 30), an edge with xi = 0 carries weight 0, and all final nodes are joined to T.]
The dynamic programming approach
• The longest path represents the optimal solution:
  x1 = 0, x2 = 1, x3 = 1
  Σ Pi xi = 20 + 30 = 50
• Let fi(Q) be the value of an optimal solution to objects 1, 2, 3, …, i with capacity Q.
• fi(Q) = max{ fi-1(Q), fi-1(Q-Wi) + Pi }
• The optimal solution is fn(M).

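A short sketch of this recurrence in Python, run on the example with W = (10, 3, 5), P = (40, 20, 30) and M = 10 (the function name is illustrative):

```python
def knapsack(W, P, M):
    """0/1 knapsack via f_i(Q) = max(f_{i-1}(Q), f_{i-1}(Q - W_i) + P_i)."""
    n = len(W)
    # f[i][Q] = best profit achievable using objects 1..i with capacity Q.
    f = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for Q in range(M + 1):
            f[i][Q] = f[i - 1][Q]                    # choose x_i = 0
            if W[i - 1] <= Q:                        # choose x_i = 1 if it fits
                f[i][Q] = max(f[i][Q], f[i - 1][Q - W[i - 1]] + P[i - 1])
    return f[n][M]

print(knapsack([10, 3, 5], [40, 20, 30], 10))  # 50  (x1=0, x2=1, x3=1)
```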
