Dynamic Programming Explained: Methods & Applications

Dynamic programming is an algorithm design technique that solves problems by breaking them down into simpler sub-problems and storing their solutions to avoid redundant computations. Key applications include the all-pairs shortest path problem, travelling salesperson problem, and 0/1 knapsack problem, with algorithms such as Floyd's for shortest paths and methods for optimal decision sequences. The principle of optimality is crucial for determining whether dynamic programming can be applied effectively to a given problem.
UNIT IV:

Dynamic Programming: General method, applications-Matrix chain multiplication, Optimal


binary search trees, 0/1 knapsack problem, All pairs shortest path problem, Travelling sales
person problem, Reliability design.

Dynamic Programming
Dynamic programming is a name coined by Richard Bellman in 1955. Dynamic
programming, like the greedy method, is a powerful algorithm design technique
that can be used when the solution to a problem may be viewed as the result of
a sequence of decisions. In the greedy method we make irrevocable decisions one
at a time, using a greedy criterion. In dynamic programming, however, we
examine the decision sequence to see whether an optimal decision sequence
contains optimal decision subsequences.

When optimal decision sequences contain optimal decision subsequences, we can
establish recurrence equations, called dynamic-programming recurrence
equations, that enable us to solve the problem in an efficient way.

Dynamic programming is based on the principle of optimality (also coined by
Bellman). The principle of optimality states that, whatever the initial state
and initial decision are, the remaining decisions must constitute an optimal
decision sequence with regard to the state resulting from the first decision.
The principle implies that an optimal decision sequence is comprised of optimal
decision subsequences. Since the principle of optimality may not hold for some
formulations of some problems, it is necessary to verify that it does hold for
the problem being solved. Dynamic programming cannot be applied when this
principle does not hold.

The steps in a dynamic programming solution are:

 Verify that the principle of optimality holds.

 Set up the dynamic-programming recurrence equations.

 Solve the dynamic-programming recurrence equations for the value of the
optimal solution.

 Perform a trace-back step in which the solution itself is constructed.

Dynamic programming differs from the greedy method in that the greedy method
produces a single feasible solution, which may or may not be optimal, while
dynamic programming solves each sub-problem at most once and considers all
possible decision sequences, so one of the solutions it examines is guaranteed
to be optimal. Optimal solutions to sub-problems are retained in a table,
thereby avoiding the work of recomputing the answer every time a sub-problem is
encountered.

The divide-and-conquer principle solves a large problem by breaking it up into
smaller problems which can be solved independently. In dynamic programming this
principle is carried to an extreme: when we do not know exactly which smaller
problems to solve, we simply solve them all, then store the answers away in a
table to be used later in solving larger problems. Care must be taken to avoid
recomputing previously computed values, otherwise the recursive program will
have prohibitive complexity. In some cases dynamic programming merely improves
a solution obtainable by other means; in other cases it is the best approach
available.

Two difficulties may arise in any application of dynamic programming:
1. It may not always be possible to combine the solutions of smaller problems
to form the solution of a larger one.
2. The number of small problems to solve may be unacceptably large.

It has not been characterized precisely which problems can be effectively
solved with dynamic programming; there are many hard problems for which it does
not seem to be applicable, as well as many easy problems for which it is less
efficient than standard algorithms.
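The "solve them all and store the answers in a table" idea can be sketched with a small example (Fibonacci numbers, used here purely as an illustration and not part of the syllabus above):

```python
from functools import lru_cache

# Without the cache, fib(n) recomputes the same sub-problems an
# exponential number of times; storing each answer the first time it
# is computed (memoization) reduces this to n + 1 distinct evaluations.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, via only 41 distinct sub-problem evaluations
```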

All pairs shortest paths:


In the all pairs shortest path problem, we are to find a shortest path between every
pair of vertices in a directed graph G. That is, for every pair of vertices (i, j), we are
to find a shortest path from i to j as well as one from j to i. These two paths are the
same when G is undirected.

When no edge has a negative length, the all-pairs shortest path problem may be
solved by using Dijkstra’s greedy single source algorithm n times, once with each of
the n vertices as the source vertex.

The all-pairs shortest path problem is to determine a matrix A such that
A (i, j) is the length of a shortest path from i to j. The matrix A can be
obtained by solving n single-source problems using the algorithm
ShortestPaths. Since each application of this procedure requires O(n²) time,
the matrix A can be obtained in O(n³) time.

The dynamic programming solution, called Floyd's algorithm, also runs in O(n³)
time, but Floyd's algorithm works even when the graph has negative length
edges (provided there are no negative length cycles).

The shortest i to j path in G, i ≠ j, originates at vertex i, goes through
some intermediate vertices (possibly none), and terminates at vertex j. If k
is an intermediate vertex on this shortest path, then the subpaths from i to k
and from k to j must themselves be shortest paths from i to k and from k to j,
respectively. Otherwise, the i to j path is not of minimum length. So, the
principle of optimality holds. Let Ak (i, j) represent the length of a
shortest path from i to j going through no vertex of index greater than k. We
obtain:

Ak (i, j) = min {Ak-1 (i, j), Ak-1 (i, k) + Ak-1 (k, j)},  1 ≤ k ≤ n,

with A0 (i, j) = c (i, j).

Algorithm AllPaths (cost, A, n)
// cost [1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A [i, j] is the cost of a shortest path from vertex
// i to vertex j. cost [i, i] = 0.0, for 1 ≤ i ≤ n.
{
    for i := 1 to n do
        for j := 1 to n do
            A [i, j] := cost [i, j];    // copy cost into A.
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A [i, j] := min (A [i, j], A [i, k] + A [k, j]);
}
Complexity Analysis: A dynamic programming algorithm based on this recurrence
involves calculating n + 1 matrices, each of size n x n. Therefore, the
algorithm has a complexity of O(n³).

Example 1:

Given a weighted digraph G = (V, E), determine the length of the shortest path
between all pairs of vertices in G. Here we assume that there are no cycles
with zero or negative cost.

                            0   4   11
Cost adjacency matrix A0 =  6   0    2
                            3   ∞    0

General formula: Ak (i, j) = min {Ak-1 (i, j), Ak-1 (i, k) + Ak-1 (k, j)},
1 ≤ k ≤ n, with A0 (i, j) = c (i, j).

Solve the problem for different values of k = 1, 2 and 3

Step 1: Solving the equation for k = 1;

A1 (1, 1) = min {(Ao (1, 1) + Ao (1, 1)), c (1, 1)} = min {0 + 0, 0} = 0
A1 (1, 2) = min {(Ao (1, 1) + Ao (1, 2)), c (1, 2)} = min {(0 + 4), 4} = 4
A1 (1, 3) = min {(Ao (1, 1) + Ao (1, 3)), c (1, 3)} = min {(0 + 11), 11} = 11
A1 (2, 1) = min {(Ao (2, 1) + Ao (1, 1)), c (2, 1)} = min {(6 + 0), 6} = 6
A1 (2, 2) = min {(Ao (2, 1) + Ao (1, 2)), c (2, 2)} = min {(6 + 4), 0} = 0
A1 (2, 3) = min {(Ao (2, 1) + Ao (1, 3)), c (2, 3)} = min {(6 + 11), 2} = 2
A1 (3, 1) = min {(Ao (3, 1) + Ao (1, 1)), c (3, 1)} = min {(3 + 0), 3} = 3
A1 (3, 2) = min {(Ao (3, 1) + Ao (1, 2)), c (3, 2)} = min {(3 + 4), ∞} = 7
A1 (3, 3) = min {(Ao (3, 1) + Ao (1, 3)), c (3, 3)} = min {(3 + 11), 0} = 0




        0   4   11
A(1) =  6   0    2
        3   7    0

Step 2: Solving the equation for k = 2;

A2 (1, 1) = min {(A1 (1, 2) + A1 (2, 1)), c (1, 1)} = min {(4 + 6), 0} = 0
A2 (1, 2) = min {(A1 (1, 2) + A1 (2, 2)), c (1, 2)} = min {(4 + 0), 4} = 4
A2 (1, 3) = min {(A1 (1, 2) + A1 (2, 3)), c (1, 3)} = min {(4 + 2), 11} = 6
A2 (2, 1) = min {(A1 (2, 2) + A1 (2, 1)), c (2, 1)} = min {(0 + 6), 6} = 6
A2 (2, 2) = min {(A1 (2, 2) + A1 (2, 2)), c (2, 2)} = min {(0 + 0), 0} = 0
A2 (2, 3) = min {(A1 (2, 2) + A1 (2, 3)), c (2, 3)} = min {(0 + 2), 2} = 2
A2 (3, 1) = min {(A1 (3, 2) + A1 (2, 1)), c (3, 1)} = min {(7 + 6), 3} = 3
A2 (3, 2) = min {(A1 (3, 2) + A1 (2, 2)), c (3, 2)} = min {(7 + 0), 7} = 7
A2 (3, 3) = min {(A1 (3, 2) + A1 (2, 3)), c (3, 3)} = min {(7 + 2), 0} = 0

0 4 6 
 
A(2) = 2 
6 0
3 7 0 

Step 3: Solving the equation for k = 3;

A3 (1, 1) = min {A2 (1, 3) + A2 (3, 1), c (1, 1)} = min {(6 + 3), 0} = 0
A3 (1, 2) = min {A2 (1, 3) + A2 (3, 2), c (1, 2)} = min {(6 + 7), 4} = 4
A3 (1, 3) = min {A2 (1, 3) + A2 (3, 3), c (1, 3)} = min {(6 + 0), 6} = 6
A3 (2, 1) = min {A2 (2, 3) + A2 (3, 1), c (2, 1)} = min {(2 + 3), 6} = 5
A3 (2, 2) = min {A2 (2, 3) + A2 (3, 2), c (2, 2)} = min {(2 + 7), 0} = 0
A3 (2, 3) = min {A2 (2, 3) + A2 (3, 3), c (2, 3)} = min {(2 + 0), 2} = 2
A3 (3, 1) = min {A2 (3, 3) + A2 (3, 1), c (3, 1)} = min {(0 + 3), 3} = 3
A3 (3, 2) = min {A2 (3, 3) + A2 (3, 2), c (3, 2)} = min {(0 + 7), 7} = 7
A3 (3, 3) = min {A2 (3, 3) + A2 (3, 3), c (3, 3)} = min {(0 + 0), 0} = 0

        0   4   6
A(3) =  5   0   2
        3   7   0

TRAVELLING SALESPERSON PROBLEM:

Let G = (V, E) be a directed graph with edge costs cij. The variable cij is
defined such that cij > 0 for all i and j, and cij = ∞ if (i, j) ∉ E. Let
|V| = n and assume n > 1. A tour of G is a directed simple cycle that includes
every vertex in V. The cost of a tour is the sum of the costs of the edges on
the tour. The travelling salesperson problem is to find a tour of minimum
cost. The tour is to be a simple cycle that starts and ends at vertex 1.

Let g (i, S) be the length of a shortest path starting at vertex i, going
through all vertices in S, and terminating at vertex 1. The function
g (1, V – {1}) is the length of an optimal salesperson tour. From the
principle of optimality it follows that:

g (1, V – {1}) = min {c1k + g (k, V – {1, k})}        -- (1)
                2 ≤ k ≤ n

Generalizing equation (1), we obtain (for i ∉ S):

g (i, S) = min {cij + g (j, S – {j})}                 -- (2)
            j ∈ S

Equation (2) can be solved for g (1, V – {1}) if we know g (k, V – {1, k})
for all choices of k.

Example :

For the following graph find the minimum cost tour for the travelling
salesperson problem:

                            0   10   15   20
The cost adjacency matrix = 5    0    9   10
                            6   13    0   12
                            8    8    9    0



Let us start the tour from vertex 1:

g (1, V – {1}) = min {c1k + g (k, V – {1, k})}        -- (1)
                2 ≤ k ≤ n

More generally:

g (i, S) = min {cij + g (j, S – {j})}                 -- (2)

Clearly, g (i, ∅) = ci1, 1 ≤ i ≤ n. So,

g (2, ∅) = c21 = 5

g (3, ∅) = c31 = 6

g (4, ∅) = c41 = 8

Using equation (2) we obtain:

g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}

g (2, {3, 4}) = min {c23 + g (3, {4}), c24 + g (4, {3})}
= min {9 + g (3, {4}), 10 + g (4, {3})}

g (3, {4}) = min {c34 + g (4, )} = 12 + 8 = 20

g (4, {3}) = min {c43 + g (3, )} = 9 + 6 = 15


Therefore, g (2, {3, 4}) = min {9 + 20, 10 + 15} = min {29, 25} = 25

g (3, {2, 4}) = min {(c32 + g (2, {4}), (c34 + g (4, {2})}

g (2, {4}) = min {c24 + g (4, )} = 10 + 8 = 18

g (4, {2}) = min {c42 + g (2, )} = 8 + 5 = 13

Therefore, g (3, {2, 4}) = min {13 + 18, 12 + 13} = min {31, 25} = 25

g (4, {2, 3}) = min {c42 + g (2, {3}), c43 + g (3, {2})}

g (2, {3}) = min {c23 + g (3, ∅)} = 9 + 6 = 15

g (3, {2}) = min {c32 + g (2, ∅)} = 13 + 5 = 18

Therefore, g (4, {2, 3}) = min {8 + 15, 9 + 18} = min {23, 27} = 23

g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}
= min {10 + 25, 15 + 25, 20 + 23} = min {35, 40, 43} = 35

The optimal tour for the graph has length = 35

The optimal tour is: 1, 2, 4, 3, 1.
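The recurrence above translates directly into code (the Held–Karp formulation). A minimal Python sketch, with vertices 0-indexed so that vertex 1 of the text is index 0:

```python
from itertools import combinations

def tsp(c):
    """Held-Karp dynamic programme. c[i][j] is the edge cost; the tour
    starts and ends at vertex 0. g[(i, S)] is the length of a shortest
    path from i through all vertices of S, terminating at vertex 0."""
    n = len(c)
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}   # g(i, {}) = c[i][0]
    for size in range(1, n - 1):                           # grow |S|
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for i in range(1, n):
                if i not in S:
                    g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    full = frozenset(range(1, n))
    return min(c[0][k] + g[(k, full - {k})] for k in range(1, n))

c = [[0, 10, 15, 20],
     [5, 0, 9, 10],
     [6, 13, 0, 12],
     [8, 8, 9, 0]]
print(tsp(c))  # 35, the length of the optimal tour 1 -> 2 -> 4 -> 3 -> 1
```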

0/1 – KNAPSACK:

We are given n objects and a knapsack. Each object i has a positive weight wi
and a positive profit pi. The knapsack can carry a weight not exceeding m.
Fill the knapsack so that the total profit of the objects in the knapsack is
maximized.

A solution to the knapsack problem can be obtained by making a sequence of
decisions on the variables x1, x2, . . . , xn. A decision on variable xi
involves determining which of the values 0 or 1 is to be assigned to it. Let
us assume that decisions on the xi are made in the order xn, xn-1, . . . , x1.
Following a decision on xn, we may be in one of two possible states: either
the capacity remaining is m and no profit has accrued, or the capacity
remaining is m – wn and a profit of pn has accrued. It is clear that the
remaining decisions xn-1, . . . , x1 must be optimal with respect to the
problem state resulting from the decision on xn. Otherwise, xn, . . . , x1
will not be optimal. Hence, the principle of optimality holds.

fn (m) = max {fn-1 (m), fn-1 (m - wn) + pn}           -- (1)

For arbitrary fi (y), i > 0, this equation generalizes to:

fi (y) = max {fi-1 (y), fi-1 (y - wi) + pi}           -- (2)

Equation (2) can be solved for fn (m) by beginning with the knowledge
f0 (y) = 0 for all y ≥ 0 and fi (y) = - ∞ for y < 0. Then f1, f2, . . . , fn
can be successively computed using equation (2).

When the wi's are integer, we need to compute fi (y) for integer y,
0 ≤ y ≤ m. Since fi (y) = - ∞ for y < 0, these function values need not be
computed explicitly. Since each fi can be computed from fi-1 in Θ(m) time, it
takes Θ(mn) time to compute fn. When the wi's are real numbers, fi (y) is
needed for real numbers y such that 0 ≤ y ≤ m, so fi cannot be explicitly
computed for all y in this range. Even when the wi's are integer, the explicit
Θ(mn) computation of fn may not be the most efficient computation. So, we
explore an alternative method for both cases.

Each fi (y) is an ascending step function; i.e., there are a finite number of
y's, y1 < y2 < . . . < yk, such that fi (y1) < fi (y2) < . . . < fi (yk);
fi (y) = - ∞ for y < y1; fi (y) = fi (yk) for y ≥ yk; and fi (y) = fi (yj)
for yj ≤ y < yj+1. So, we need to compute only fi (yj), 1 ≤ j ≤ k. We use the
ordered set Si = {(fi (yj), yj) | 1 ≤ j ≤ k} to represent fi (y). Each member
of Si is a pair (P, W), where P = fi (yj) and W = yj. Notice that
S0 = {(0, 0)}. We can compute Si+1 from Si by first computing:

Si1 = {(P, W) | (P – pi+1, W – wi+1) ∈ Si}

Now, Si+1 can be computed by merging the pairs in Si and Si1 together. Note
that if Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) with the property that
Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) can be discarded because of
equation (2). Discarding or purging rules such as this one are also known as
dominance rules. Dominated tuples get purged; in the above, (Pk, Wk)
dominates (Pj, Wj).

Example 1:

Consider the knapsack instance n = 3, (w1, w2, w3) = (2, 3, 4),
(p1, p2, p3) = (1, 2, 5) and m = 6.

Solution:

Initially, f0 (x) = 0 for all x ≥ 0, and fi (x) = - ∞ if x < 0.

fn (m) = max {fn-1 (m), fn-1 (m - wn) + pn}

f3 (6) = max {f2 (6), f2 (6 – 4) + 5} = max {f2 (6), f2 (2) + 5}

f2 (6) = max {f1 (6), f1 (6 – 3) + 2} = max {f1 (6), f1 (3) + 2}

f1 (6) = max {f0 (6), f0 (6 – 2) + 1} = max {0, 0 + 1} = 1

f1 (3) = max {f0 (3), f0 (3 – 2) + 1} = max {0, 0 + 1} = 1

Therefore, f2 (6) = max {1, 1 + 2} = 3

f2 (2) = max {f1 (2), f1 (2 – 3) + 2} = max {f1 (2), - ∞ + 2}

f1 (2) = max {f0 (2), f0 (2 – 2) + 1} = max {0, 0 + 1} = 1

f2 (2) = max {1, - ∞ + 2} = 1

Finally, f3 (6) = max {3, 1 + 5} = 6

Other Solution:

For the given data we have:

S0 = {(0, 0)};    S01 = {(1, 2)}

S1 = (S0 U S01) = {(0, 0), (1, 2)}

Adding object 2 (p2 = 2, w2 = 3) to each pair of S1:
(0, 0) gives (2, 3); (1, 2) gives (3, 5)

S11 = {(2, 3), (3, 5)}

S2 = (S1 U S11) = {(0, 0), (1, 2), (2, 3), (3, 5)}

Adding object 3 (p3 = 5, w3 = 4) to each pair of S2:
(0, 0) gives (5, 4); (1, 2) gives (6, 6); (2, 3) gives (7, 7); (3, 5) gives (8, 9)

S21 = {(5, 4), (6, 6), (7, 7), (8, 9)}

S3 = (S2 U S21) = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}

Applying the dominance rule ((3, 5) is dominated by (5, 4)) and discarding
pairs whose weight exceeds m = 6:

S3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}

From (6, 6) we can infer that the maximum profit Σ pi xi = 6 and weight
Σ wi xi = 6.
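The merge-and-purge procedure above can be sketched directly in Python (a minimal version: the pair set S is kept sorted by weight, and pairs that exceed the capacity or are dominated are purged after each merge):

```python
def knapsack_pairs(p, w, m):
    """0/1 knapsack by the set-of-pairs method. S is an ordered set of
    (profit, weight) pairs; after each merge, pairs heavier than the
    capacity m and dominated pairs are purged."""
    S = [(0, 0)]
    for pi, wi in zip(p, w):
        # S1: add the current object to every pair that still fits
        S1 = [(P + pi, W + wi) for (P, W) in S if W + wi <= m]
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        S, best = [], -1
        for P, W in merged:       # dominance rule: keep a pair only if its
            if P > best:          # profit beats every lighter pair's profit
                S.append((P, W))
                best = P
    return S[-1]                  # the last pair carries the maximum profit

print(knapsack_pairs([1, 2, 5], [2, 3, 4], 6))  # (6, 6), as in Example 1
```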

Reliability Design:

The problem is to design a system that is composed of several devices
connected in series. Let ri be the reliability of device Di (that is, ri is
the probability that device i will function properly); then the reliability
of the entire system is ∏ ri. Even if the individual devices are very
reliable (the ri's are very close to one), the reliability of the system may
not be very good. For example, if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then
∏ ri = 0.904. Hence, it is desirable to duplicate devices. Multiple copies of
the same device type are connected in parallel.

If stage i contains mi copies of device Di, then the probability that all mi
copies malfunction is (1 - ri)^mi. Hence the reliability of stage i becomes
1 – (1 - ri)^mi.

The reliability of stage i is given by a function φi (mi).

Our problem is to maximize the system reliability through device duplication.
This maximization is to be carried out under a cost constraint. Let ci be the
cost of each unit of device i and let c be the maximum allowable cost of the
system being designed.

We wish to solve:

Maximize    ∏ φi (mi)    over 1 ≤ i ≤ n

Subject to  Σ ci mi ≤ c  over 1 ≤ i ≤ n

            mi ≥ 1 and integer, 1 ≤ i ≤ n

Assume each ci > 0. Then each mi must be in the range 1 ≤ mi ≤ ui, where

ui = ⌊ (c + ci - Σ1≤j≤n cj) / ci ⌋

The upper bound ui follows from the observation that mj ≥ 1 for every stage:
after buying one copy of every other device, the money left over for stage i
is c + ci - Σ cj.

An optimal solution m1, m2, . . . , mn is the result of a sequence of
decisions, one decision for each mi.

Let fi (x) represent the maximum value of ∏1≤j≤i φj (mj)

subject to the constraints:

Σ1≤j≤i cj mj ≤ x   and   1 ≤ mj ≤ uj, 1 ≤ j ≤ i
Example :

Design a three stage system with device types D1, D2 and D3. The costs are
$30, $15 and $20 respectively. The cost of the system is to be no more than
$105. The reliabilities of the devices are 0.9, 0.8 and 0.5 respectively.

Solution:

We assume that if stage i has mi devices of type i in parallel, then
φi (mi) = 1 – (1 - ri)^mi.

Since each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where:

ui = ⌊ (c + ci - Σ1≤j≤n cj) / ci ⌋

Using the above equation, compute u1, u2 and u3:

u1 = (105 + 30 - (30 + 15 + 20)) / 30 = 70 / 30 = 2

u2 = (105 + 15 - (30 + 15 + 20)) / 15 = 55 / 15 = 3

u3 = (105 + 20 - (30 + 15 + 20)) / 20 = 60 / 20 = 3

We use Sj^i, where i is the stage number and j is the number of devices mi
used in stage i.

S^0 = {(f0 (x), x)}; initially f0 (x) = 1 and x = 0, so S^0 = {(1, 0)}.

Compute S^1, S^2 and S^3 as follows:

S^1 depends on u1; as u1 = 2, S^1 = S1^1 U S2^1

S^2 depends on u2; as u2 = 3, S^2 = S1^2 U S2^2 U S3^2

S^3 depends on u3; as u3 = 3, S^3 = S1^3 U S2^3 U S3^3

Now find S1^1 = {(f1 (x), x)}, using m1 = 1 or m1 = 2 devices at stage 1.

Compute φ1 (1) and φ1 (2) using the formula φi (mi) = 1 - (1 - ri)^mi:

φ1 (1) = 1 - (1 - r1)^1 = 1 – (1 – 0.9)^1 = 0.9

φ1 (2) = 1 - (1 - 0.9)^2 = 0.99

S1^1 = {(0.9, 30)}

S2^1 = {(0.99, 30 + 30)} = {(0.99, 60)}

Therefore, S^1 = {(0.9, 30), (0.99, 60)}

Next find S^2 = {(f2 (x), x)}:

φ2 (1) = 1 - (1 - r2)^1 = 1 – (1 – 0.8)^1 = 1 – 0.2 = 0.8

φ2 (2) = 1 - (1 - 0.8)^2 = 0.96

φ2 (3) = 1 - (1 - 0.8)^3 = 0.992

S1^2 = {(0.8 (0.9), 30 + 15), (0.8 (0.99), 60 + 15)}
     = {(0.72, 45), (0.792, 75)}

S2^2 = {(0.96 (0.9), 30 + 15 + 15), (0.96 (0.99), 60 + 15 + 15)}
     = {(0.864, 60), (0.9504, 90)}

S3^2 = {(0.992 (0.9), 30 + 15 + 15 + 15), (0.992 (0.99), 60 + 15 + 15 + 15)}
     = {(0.8928, 75), (0.98208, 105)}

S^2 = S1^2 U S2^2 U S3^2

Applying the dominance rule to S^2, and discarding (0.9504, 90) and
(0.98208, 105) because the remaining budget would not cover even one copy of
D3, which costs $20:

Therefore, S^2 = {(0.72, 45), (0.864, 60), (0.8928, 75)}

78
Dominance Rule:
If Si contains two pairs (f1, x1) and (f2, x2) with the property that f1 ≥ f2
and x1 ≤ x2, then (f1, x1) dominates (f2, x2), and by the dominance rule
(f2, x2) can be discarded. Discarding or pruning rules such as the one above
are known as dominance rules. Dominating tuples remain in Si and dominated
tuples are discarded from Si.

Case 1: if f1 ≤ f2 and x1 > x2 then discard (f1, x1)

Case 2: if f1 ≥ f2 and x1 < x2 then discard (f2, x2)

Case 3: otherwise keep both pairs

S^2 = {(0.72, 45), (0.864, 60), (0.8928, 75)}

φ3 (1) = 1 - (1 - r3)^1 = 1 – (1 – 0.5)^1 = 0.5

φ3 (2) = 1 - (1 - 0.5)^2 = 0.75

φ3 (3) = 1 - (1 - 0.5)^3 = 0.875

S1^3 = {(0.5 (0.72), 45 + 20), (0.5 (0.864), 60 + 20), (0.5 (0.8928), 75 + 20)}
     = {(0.36, 65), (0.432, 80), (0.4464, 95)}

S2^3 = {(0.75 (0.72), 45 + 20 + 20), (0.75 (0.864), 60 + 20 + 20),
        (0.75 (0.8928), 75 + 20 + 20)}
     = {(0.54, 85), (0.648, 100), (0.6696, 115)}

S3^3 = {(0.875 (0.72), 45 + 20 + 20 + 20), (0.875 (0.864), 60 + 20 + 20 + 20),
        (0.875 (0.8928), 75 + 20 + 20 + 20)}
     = {(0.63, 105), (0.756, 120), (0.7812, 135)}

Tuples whose cost exceeds 105 are removed; (0.63, 105) is discarded because
it is dominated by (0.648, 100), and (0.4464, 95) because it is dominated by
(0.54, 85):

S^3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}

The best design has a reliability of 0.648 and a cost of 100. Tracing back
through the Si's we can determine that m1 = 1, m2 = 2 and m3 = 2.
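The whole computation — upper bounds ui, stage-by-stage pair sets, dominance purging and trace-back — can be sketched in Python. The tuple of copies per stage is carried along with each pair, so the trace-back comes for free:

```python
def reliability_design(r, c, budget):
    """Set-of-pairs DP for reliability design. r[i] and c[i] are the
    reliability and unit cost of device D(i+1); budget is the cost cap.
    Each state is (system reliability, cost so far, copies per stage)."""
    n = len(r)
    S = [(1.0, 0, ())]
    for i in range(n):
        # upper bound u_i on the number of copies at stage i
        u = (budget + c[i] - sum(c)) // c[i]
        nxt = []
        for m in range(1, u + 1):
            phi = 1 - (1 - r[i]) ** m          # stage reliability with m copies
            for f, x, ms in S:
                if x + m * c[i] <= budget:
                    nxt.append((f * phi, x + m * c[i], ms + (m,)))
        # dominance rule: keep only states whose reliability strictly
        # increases as the cost increases
        nxt.sort(key=lambda t: (t[1], -t[0]))
        S, best = [], -1.0
        for f, x, ms in nxt:
            if f > best:
                S.append((f, x, ms))
                best = f
    return max(S)

f, cost, m = reliability_design([0.9, 0.8, 0.5], [30, 15, 20], 105)
print(round(f, 3), cost, m)  # 0.648 100 (1, 2, 2)
```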

Optimal Binary Search Tree:
In computer science, an optimal binary search tree (optimal BST), sometimes
called a weight-balanced binary tree, is a binary search tree which provides
the smallest possible search time (or expected search time) for a given
sequence of accesses (or access probabilities). Note that every binary search
tree on the same n keys has the same number of external nodes, namely n + 1.

The cost C (i, j) can be computed as:

C (i, j) = min {C (i, k-1) + C (k, j) + P (k) + w (i, k-1) + w (k, j)}
          i < k ≤ j

         = min {C (i, k-1) + C (k, j)} + w (i, j)        -- (1)
          i < k ≤ j

where w (i, j) = P (j) + Q (j) + w (i, j-1)              -- (2)

Initially C (i, i) = 0 and w (i, i) = Q (i) for 0 ≤ i ≤ n.

C (i, j) is the cost of the optimal binary search tree Tij. During computation
we record the root R (i, j) of each tree Tij. Then an optimal binary search
tree may be constructed from these R (i, j). R (i, j) is the value of k that
minimizes equation (1).

We solve the problem by first computing w (i, i+1), C (i, i+1) and R (i, i+1)
for 0 ≤ i < n; then w (i, i+2), C (i, i+2) and R (i, i+2) for 0 ≤ i < n-1;
and repeating until w (0, n), C (0, n) and R (0, n) are obtained.
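Recurrences (1) and (2) can be sketched in Python. The instance used below (scaled integer frequencies P = (3, 3, 1, 1), Q = (2, 3, 1, 1, 1)) is a standard textbook-style example chosen here only for illustration, since the worked-example pages did not survive in this copy:

```python
def obst(P, Q):
    """Optimal binary search tree cost via recurrences (1) and (2).
    P[1..n] are success frequencies (P[0] is unused), Q[0..n] failure
    frequencies. Returns cost table C and root table R; C[0][n] is the
    cost of an optimal tree and R[0][n] its root."""
    n = len(P) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    W = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = Q[i]                            # w(i, i) = Q(i), C(i, i) = 0
    for length in range(1, n + 1):                # trees Tij with j - i = length
        for i in range(n - length + 1):
            j = i + length
            W[i][j] = W[i][j - 1] + P[j] + Q[j]   # equation (2)
            best, bestk = min((C[i][k - 1] + C[k][j], k)
                              for k in range(i + 1, j + 1))
            C[i][j] = best + W[i][j]              # equation (1)
            R[i][j] = bestk
    return C, R

C, R = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
print(C[0][4], R[0][4])  # 32 2 (optimal cost 32, key 2 at the root)
```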

Matrix chain multiplication

The problem
Given a sequence of matrices A1, A2, A3, ..., An, find the best way (using the minimal number
of multiplications) to compute their product.

• Isn’t there only one way? ((· · · ((A1 · A2 ) · A3 ) · · ·) · An )

• No. Matrix multiplication is associative, so any parenthesization, e.g.
A1 · (A2 · (A3 · (· · · (An−1 · An ) · · ·))), yields the same matrix.
• Different multiplication orders do not cost the same:
– Multiplying a p × q matrix A and a q × r matrix B takes p · q · r
multiplications; the result is a p × r matrix.
– Consider multiplying a 10 × 100 matrix A1 with a 100 × 5 matrix A2 and a
5 × 50 matrix A3.
– (A1 · A2) · A3 takes 10 · 100 · 5 + 10 · 5 · 50 = 7500 multiplications.
– A1 · (A2 · A3) takes 100 · 5 · 50 + 10 · 100 · 50 = 75000 multiplications.

Notation

• In general, let Ai be a pi−1 × pi matrix.

• Let m(i, j) denote the minimal number of multiplications needed to compute
Ai · Ai+1 · · · Aj.
• We want to compute m(1, n).

Recursive algorithm

• Assume that someone tells us the position of the last product, say k. Then
we have to compute recursively the best way to multiply the chain from i to
k, and from k + 1 to j, and add the cost of the final product. This means
that

m(i, j) = m(i, k) + m(k + 1, j) + pi−1 · pk · pj

• If no one tells us k, then we have to try all possible values of k and pick
the best solution.
• Recursive formulation of m(i, j):

m(i, j) = 0                                                    if i = j
m(i, j) = min i≤k<j {m(i, k) + m(k + 1, j) + pi−1 · pk · pj}   if i < j

• To go from the recursive formulation above to a program is pretty straightforward:

Matrix-chain(i, j)
IF i = j THEN return 0
m=∞
FOR k = i TO j − 1 DO
q = Matrix-chain(i, k) + Matrix-chain(k + 1, j) +pi−1 · pk · pj
IF q < m THEN m = q
OD
Return m
END Matrix-chain

Return Matrix-chain(1, n)

• Running time:

T(n) = Σ k=1..n−1 (T(k) + T(n − k) + O(1))
     = 2 · Σ k=1..n−1 T(k) + O(n)
     ≥ 2 · T(n − 1)
     ≥ 2 · 2 · T(n − 2)
     ≥ 2 · 2 · 2 · · ·
     = Ω(2^n)
• Exponential is ...
SLOW!

• Problem is that we compute the same result over and over again.
– Example: Recursion tree for Matrix-chain(1, 4). The root (1,4) spawns the
pairs (1,1)(2,4), (1,2)(3,4) and (1,3)(4,4); expanding these in turn spawns
(2,2)(3,4), (2,3)(4,4), (1,1)(2,2), (3,3)(4,4), (1,1)(2,3) and (1,2)(3,3),
and so on down to single matrices. For example, we compute
Matrix-chain(3, 4) twice.

Dynamic programming with a table and recursion

• Solution is to “remember” the values we have already computed in a table. This is called
memoization. We’ll have a table T[1..n][1..n] such that T[i][j] stores the solution to
problem Matrix-CHAIN(i,j). Initially all entries will be set to ∞.
FOR i = 1 TO n DO
    FOR j = i TO n DO
        T [i][j] = ∞
    OD
OD
• The code for MATRIX-CHAIN(i,j) stays the same, except that it now uses the
table. The first thing MATRIX-CHAIN(i,j) does is to check the table to see
if T [i][j] is already computed. If so, it returns it; otherwise, it
computes it and writes it in the table. Below is the updated code.

Matrix-chain(i, j)
IF T [i][j] < ∞ THEN return T [i][j]
IF i = j THEN T [i][j] = 0, return 0
m=∞
FOR k = i to j − 1 DO
q = Matrix-chain(i, k) + Matrix-chain(k + 1, j)+pi−1 · pk · pj
IF q < m THEN m = q
OD
T [i][j] = m
return m
END Matrix-chain

return Matrix-chain(1, n)

• The table will prevent a subproblem MATRIX-CHAIN(i,j) from being computed
more than once.
• Running time:
– Θ(n²) different calls to matrix-chain(i, j).
– The first time a call is made it takes O(n) time, not counting recursive
calls.
– When a call has been made once it costs O(1) time to make it again.

⇒ O(n³) time

– Another way of thinking about it: Θ(n²) total entries to fill, and it
takes O(n) to fill one.
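The memoized pseudocode above corresponds to this Python sketch, where lru_cache plays the role of the table T:

```python
from functools import lru_cache

def matrix_chain(p):
    """m(1, n) for matrices A_i of dimension p[i-1] x p[i], i = 1..n.
    The cache plays the role of the table T: each of the Theta(n^2)
    sub-problems is solved once, in O(n) time, giving O(n^3) overall."""
    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:
            return 0
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return m(1, len(p) - 1)

# the 10 x 100, 100 x 5, 5 x 50 chain from the text
print(matrix_chain([10, 100, 5, 50]))  # 7500
```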

UNIT V:
Branch and Bound: General method, applications - Travelling sales person
problem,0/1 knapsack problem- LC Branch and Bound solution, FIFO Branch
and Bound solution.
NP-Hard and NP-Complete problems: Basic concepts, non deterministic
algorithms, NP - Hard and NP Complete classes, Cook’s theorem.

Branch and Bound

General method:
Branch and Bound is another method to systematically search a solution space.
Just like backtracking, we will use bounding functions to avoid generating
subtrees that do not contain an answer node. However, Branch and Bound differs
from backtracking in two important ways:

1. It has a branching function, which can be a depth first search, breadth
first search or based on a bounding function.

2. It has a bounding function, which goes far beyond the feasibility test as
a means to prune the search tree efficiently.

Branch and Bound refers to all state space search methods in which all
children of the E-node are generated before any other live node becomes the
E-node.

Branch and Bound is the generalization of both graph search strategies, BFS
and D-search.

 A BFS-like state space search is called FIFO (first in, first out)
search, since the list of live nodes is a first-in first-out list (a queue).

 A D-search-like state space search is called LIFO (last in, first out)
search, since the list of live nodes is a last-in first-out list (a stack).

Definition 1: Live node is a node that has been generated but whose children have
not yet been generated.
Definition 2: E-node is a live node whose children are currently being explored. In
other words, an E-node is a node currently being expanded.
Definition 3: Dead node is a generated node that is not to be expanded or
explored any further. All children of a dead node have already
been expanded.
Definition 4: Branch-and-bound refers to all state space search methods in
which all children of an E-node are generated before any other
live node can become the E-node.
Definition 5: The adjective "heuristic" means "related to improving problem
solving performance". As a noun it is also used in regard to "any
method or trick used to improve the efficiency of a problem
solving program". But imperfect methods are not necessarily
heuristic, or vice versa. "A heuristic (heuristic rule, heuristic
method) is a rule of thumb, strategy, trick, simplification or
any other kind of device which drastically limits search
for solutions in large problem spaces. Heuristics do not
guarantee optimal solutions; in fact, they do not guarantee any
solution at all. A useful heuristic offers solutions which are
good enough most of the time."

Least Cost (LC) search:

In both LIFO and FIFO Branch and Bound, the selection rule for the next
E-node is rigid and blind. The selection rule for the next E-node does not
give any preference to a node that has a very good chance of getting the
search to an answer node quickly.

The search for an answer node can be sped up by using an "intelligent"
ranking function ĉ(·) for live nodes. The next E-node is selected on the
basis of this ranking function. The node x is assigned a rank using:

ĉ(x) = f(h(x)) + ĝ(x)

where ĉ(x) is the estimated cost of x,

h(x) is the cost of reaching x from the root, and f(·) is any non-decreasing
function,

ĝ(x) is an estimate of the additional effort needed to reach an answer node
from x.

A search strategy that uses the cost function ĉ(x) = f(h(x)) + ĝ(x) to select
the next E-node, always choosing as its next E-node a live node with least
ĉ(·), is called an LC-search (Least Cost search).

BFS and D-search are special cases of LC-search. If ĝ(x) = 0 and f(h(x)) =
level of node x, then an LC-search generates nodes by levels. This is
essentially the same as BFS. If f(h(x)) = 0 and ĝ(x) ≥ ĝ(y) whenever y is a
child of x, then the search is essentially a D-search.

An LC-search coupled with bounding functions is called an LC branch and
bound search.

We associate a cost c(x) with each node x in the state space tree. It is
usually not possible to compute c(x) easily, so we compute an estimate ĉ(x)
of c(x).

Control Abstraction for LC-Search:

Let t be a state space tree and c(·) a cost function for the nodes in t. If x
is a node in t, then c(x) is the minimum cost of any answer node in the
subtree with root x. Thus, c(t) is the cost of a minimum-cost answer node
in t.

A heuristic ĉ(·) is used to estimate c(·). This heuristic should be easy to
compute, and it generally has the property that if x is either an answer node
or a leaf node, then ĉ(x) = c(x).

LC-search uses ĉ to find an answer node. The algorithm uses two functions,
Least() and Add(), to delete and add a live node from or to the list of live
nodes, respectively.

Least() finds a live node with least ĉ(·). This node is deleted from the list
of live nodes and returned.

Add(x) adds the new live node x to the list of live nodes. The list of live
nodes can be implemented as a min-heap.

Algorithm LCSearch outputs the path from the answer node it finds to the root
node t. This is easy to do if, with each node x that becomes live, we
associate a field parent which gives the parent of node x. When the answer
node g is found, the path from g to t can be determined by following a
sequence of parent values starting from the current E-node (which is the
parent of g) and ending at node t.

Listnode = record
{
   Listnode *next, *parent; float cost;
}

Algorithm LCSearch(t)
// Search t for an answer node.
{
   if *t is an answer node then output *t and return;
   E := t;                                 // E-node.
   Initialize the list of live nodes to be empty;
   repeat
   {
      for each child x of E do
      {
         if x is an answer node then
            output the path from x to t and return;
         Add(x);                           // x is a new live node.
         (x → parent) := E;                // Pointer for path to root.
      }
      if there are no more live nodes then
      {
         write ("No answer node"); return;
      }
      E := Least();
   } until (false);
}

The root node is the first E-node. During the execution of LCSearch, the list of live
nodes contains all live nodes except the E-node; initially this list is empty. All the
children of the E-node are examined. If one of the children is an answer node, the
algorithm outputs the path from that child x to t and terminates. If a child of E is not an
answer node, it becomes a live node: it is added to the list of live nodes and its
parent field is set to E. When all the children of E have been generated, E becomes a
dead node. This happens only if none of E's children is an answer node. If no live
nodes remain, the search stops; otherwise Least(), by definition, correctly chooses the
next E-node and the search continues from there.

LC search terminates only when either an answer node is found or the entire state
space tree has been generated and searched.
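The control abstraction can be sketched in Python with the list of live nodes kept as a min-heap. This is an illustrative sketch under an assumed interface (children(x) yields the child states of x, is_answer(x) tests for an answer node, and c_hat(x) supplies the estimate ĉ(x)); unlike the pseudocode above, it tests a node for being an answer node when it is popped as the E-node rather than on generation, which still returns a least-estimate answer node.

```python
import heapq

def lc_search(root, children, is_answer, c_hat):
    # Assumed interface: children(x) yields the child states of x,
    # is_answer(x) tests for an answer node, c_hat(x) is the estimate.
    parent = {root: None}
    counter = 0                            # tie-breaker so states are never compared
    live = [(c_hat(root), counter, root)]  # the list of live nodes, as a min-heap
    while live:
        _, _, e = heapq.heappop(live)      # Least(): E-node with least c_hat
        if is_answer(e):
            path = []                      # follow parent fields back to the root
            while e is not None:
                path.append(e)
                e = parent[e]
            return path[::-1]              # path from the root t to the answer node
        for x in children(e):              # generate all children of the E-node
            parent[x] = e                  # x -> parent := E
            counter += 1
            heapq.heappush(live, (c_hat(x), counter, x))   # Add(x)
    return None                            # no answer node in the tree
```

On a small hypothetical tree, the search expands the node with least estimate first and reports the parent-link path from the root to the answer node it finds.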

Bounding:

A branch and bound method searches a state space tree using any search
mechanism in which all the children of the E-node are generated before another node
becomes the E-node. We assume that each answer node x has a cost c(x) associated
with it and that a minimum-cost answer node is to be found. Three common search
strategies are FIFO, LIFO, and LC. The three search methods differ only in the
selection rule used to obtain the next E-node.

A good bounding function helps to prune the tree efficiently, leading to a faster
exploration of the solution space.

A cost function ĉ(·) such that ĉ(x) ≤ c(x) is used to provide lower bounds on
solutions obtainable from any node x. If upper is an upper bound on the cost of a
minimum-cost solution, then all live nodes x with ĉ(x) > upper may be killed. The
starting value for upper can be obtained by some heuristic or can be set to ∞.

As long as the initial value for upper is not less than the cost of a minimum-cost
answer node, the above rules to kill live nodes will not result in the killing of a live
node that can reach a minimum-cost answer node. Each time a new answer node is
found, the value of upper can be updated.
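The killing rule above can be sketched as a simple filter over the list of live nodes (an illustrative helper, where c_hat stands for the estimate ĉ):

```python
def prune(live_nodes, c_hat, upper):
    # Keep only the live nodes whose lower-bound estimate does not
    # already exceed the cost of the best answer node found so far.
    return [x for x in live_nodes if c_hat(x) <= upper]
```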

Branch-and-bound algorithms are used for optimization problems, where we deal
directly only with minimization problems. A maximization problem is easily converted
to a minimization problem by changing the sign of the objective function.

To formulate the search for an optimal solution as a search for a least-cost answer
node in a state space tree, it is necessary to define the cost function c(·) so that c(x)
is minimum for all nodes representing an optimal solution. The easiest way to do this
is to use the objective function itself for c(·).

•  For nodes representing feasible solutions, c(x) is the value of the objective
   function for that feasible solution.

•  For nodes representing infeasible solutions, c(x) = ∞.

•  For nodes representing partial solutions, c(x) is the cost of the minimum-cost
   node in the subtree with root x.

Since c(x) is generally hard to compute, the branch-and-bound algorithm uses an
estimate ĉ(x) such that ĉ(x) ≤ c(x) for all x.

FIFO Branch and Bound:

A FIFO branch-and-bound algorithm for the job sequencing problem can begin with
upper = ∞ as an upper bound on the cost of a minimum-cost answer node.

Starting with node 1 as the E-node and using the variable tuple size formulation of
Figure 8.4, nodes 2, 3, 4, and 5 are generated. Then u(2) = 19, u(3) = 14, u(4) =
18, and u(5) = 21.

The variable upper is updated to 14 when node 3 is generated. Since ĉ(4) and
ĉ(5) are greater than upper, nodes 4 and 5 get killed. Only nodes 2 and 3 remain
alive.

Node 2 becomes the next E-node. Its children, nodes 6, 7 and 8, are generated.
Then u(6) = 9 and so upper is updated to 9. The cost ĉ(7) = 10 > upper and node 7
gets killed. Node 8 is infeasible and so it is killed.

Next, node 3 becomes the E-node. Nodes 9 and 10 are now generated. Then u(9) =
8 and so upper becomes 8. The cost ĉ(10) = 11 > upper, and this node is killed.

The next E-node is node 6. Both its children are infeasible. Node 9’s only child is also
infeasible. The minimum-cost answer node is node 9. It has a cost of 8.

When implementing a FIFO branch-and-bound algorithm, it is not economical to kill
live nodes with ĉ(x) > upper each time upper is updated. This is because live nodes
are in the queue in the order in which they were generated; hence, nodes with
ĉ(x) > upper are distributed in some random way in the queue. Instead, live nodes
with ĉ(x) > upper can be killed when they are about to become E-nodes.

The FIFO-based branch-and-bound algorithm with an appropriate ĉ(·) and u(·) is
called FIFOBB.

LC Branch and Bound:

An LC branch-and-bound search of the tree of Figure 8.4 will begin with upper = ∞
and node 1 as the first E-node.

When node 1 is expanded, nodes 2, 3, 4 and 5 are generated in that order.

As in the case of FIFOBB, upper is updated to 14 when node 3 is generated, and
nodes 4 and 5 are killed as ĉ(4) > upper and ĉ(5) > upper.

Node 2 is the next E-node as ĉ(2) = 0 and ĉ(3) = 5. Nodes 6, 7 and 8 are generated
and upper is updated to 9 when node 6 is generated. So, node 7 is killed as ĉ(7) = 10
> upper. Node 8 is infeasible and so killed. The only live nodes now are nodes 3 and
6.

Node 6 is the next E-node as ĉ(6) = 0 < ĉ(3). Both its children are infeasible.

Node 3 becomes the next E-node. When node 9 is generated, upper is updated to 8
as u(9) = 8. So, node 10 with ĉ(10) = 11 is killed on generation.

Node 9 becomes the next E-node. Its only child is infeasible. No live nodes remain.
The search terminates with node 9 representing the minimum-cost answer node.

The path to the answer node is 1 → 3 → 9, with cost 5 + 3 = 8.

Traveling Salesperson Problem:

Using a dynamic programming algorithm, the problem can be solved with a worst-case
time complexity of O(n²2ⁿ). It can also be solved by the branch-and-bound
technique using an efficient bounding function. The worst-case time complexity of the
traveling salesperson problem using LC branch and bound is still O(n²2ⁿ), which
shows that there is no change or reduction of complexity over the previous method.

We start at a particular node, visit all nodes exactly once, and come back to the
initial node with minimum cost.

Let G = (V, E) be a connected graph and let c(i, j) be the cost of edge <i, j>, with
c(i, j) = ∞ if <i, j> ∉ E, and let |V| = n, the number of vertices. Every tour starts at
vertex 1 and ends at the same vertex. So, the solution space is given by
S = {1, π, 1 | π is a permutation of (2, 3, . . ., n)} and |S| = (n − 1)!. The size of S
can be reduced by restricting S so that (1, i₁, i₂, . . ., iₙ₋₁, 1) ∈ S iff <iⱼ, iⱼ₊₁> ∈ E
for 0 ≤ j ≤ n − 1, where i₀ = iₙ = 1.

Procedure for solving traveling sale person problem:

1. Reduce the given cost matrix. A matrix is reduced if every row and column is
   reduced. A row (column) is said to be reduced if it contains at least one zero
   and all remaining entries are non-negative. This can be done as follows:

a) Row reduction: Take the minimum element of the first row and subtract it
   from all elements of the first row; next take the minimum element of the
   second row and subtract it from the second row. Apply the same procedure
   to all rows.
b) Find the sum of the elements subtracted from the rows.

c) Apply column reduction to the matrix obtained after row reduction.

   Column reduction: Take the minimum element of the first column and
   subtract it from all elements of the first column; next take the minimum
   element of the second column and subtract it from the second column.
   Apply the same procedure to all columns.

d) Find the sum of the elements subtracted from the columns.

e) Obtain the cumulative sum of the row-wise and column-wise reductions:

   Cumulative reduced sum = row-wise reduction sum + column-wise
   reduction sum.

   Associate the cumulative reduced sum with the starting state as its lower
   bound, and ∞ as its upper bound.

2. Calculate the reduced cost matrix for every node R. Let A be the reduced cost
   matrix for node R, and let S be a child of R such that the tree edge (R, S)
   corresponds to including edge <i, j> in the tour. If S is not a leaf node, then
   the reduced cost matrix for S may be obtained as follows:

   a) Change all entries in row i and column j of A to ∞.

   b) Set A(j, 1) to ∞.

   c) Reduce all rows and columns in the resulting matrix except for rows
      and columns containing only ∞. Let r be the total amount subtracted to
      reduce the matrix.

   d) Compute ĉ(S) = ĉ(R) + A(i, j) + r, where ĉ(R) is the lower bound of
      node R and ĉ(S) is the cost function value for node S.

3. Repeat step 2 until all nodes are visited.
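Steps 1 and 2 can be sketched in Python. This is an illustrative sketch, not the text's pseudocode verbatim: indices are 0-based, INF plays the role of ∞, and the helper names reduce_matrix and child_cost are my own.

```python
INF = float('inf')

def reduce_matrix(m):
    # Step 1: subtract each row's minimum from the row, then each column's
    # minimum from the column; return the reduced matrix together with the
    # cumulative amount subtracted (rows and columns of all-INF are skipped).
    n = len(m)
    m = [row[:] for row in m]
    total = 0
    for i in range(n):                                   # row reduction
        low = min(m[i])
        if 0 < low < INF:
            total += low
            m[i] = [v - low if v < INF else v for v in m[i]]
    for j in range(n):                                   # column reduction
        low = min(m[i][j] for i in range(n))
        if 0 < low < INF:
            total += low
            for i in range(n):
                if m[i][j] < INF:
                    m[i][j] -= low
    return m, total

def child_cost(a, c_r, i, j):
    # Step 2: cost of the child node S reached by including edge <i, j>
    # (0-based) when node R has reduced matrix a and lower bound c_r.
    n = len(a)
    s = [row[:] for row in a]
    for k in range(n):
        s[i][k] = INF                  # (a) kill row i ...
        s[k][j] = INF                  # ... and column j
    s[j][0] = INF                      # (b) forbid returning to vertex 1 early
    s, r = reduce_matrix(s)            # (c) re-reduce; r = amount subtracted
    return s, c_r + a[i][j] + r        # (d) c_hat(S) = c_hat(R) + A(i, j) + r
```

On the example that follows, reduce_matrix yields a cumulative reduction of 25 for the root, and child_cost reproduces the node costs 35, 53, 25 and 31 for the paths (1, 2) through (1, 5).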

Example:

Find the LC branch-and-bound solution for the traveling salesperson problem whose
cost matrix is as follows:

The cost matrix is:

        ∞   20   30   10   11
       15    ∞   16    4    2
        3    5    ∞    2    4
       19    6   18    ∞    3
       16    4    7   16    ∞


Step 1: Find the reduced cost matrix.

Apply row reduction method:

Deduct 10 (which is the minimum) from all values in the 1st row.
Deduct 2 (which is the minimum) from all values in the 2nd row.
Deduct 2 (which is the minimum) from all values in the 3rd row.
Deduct 3 (which is the minimum) from all values in the 4th row.
Deduct 4 (which is the minimum) from all values in the 5th row.

The resulting row-wise reduced cost matrix is:

        ∞   10   20    0    1
       13    ∞   14    2    0
        1    3    ∞    0    2
       16    3   15    ∞    0
       12    0    3   12    ∞

Row-wise reduction sum = 10 + 2 + 2 + 3 + 4 = 21

Now apply column reduction for the above matrix:

Deduct 1 (which is the minimum) from all values in the 1st column.
Deduct 3 (which is the minimum) from all values in the 3rd column.

The resulting column-wise reduced cost matrix (A) is:

        ∞   10   17    0    1
       12    ∞   11    2    0
        0    3    ∞    0    2
       15    3   12    ∞    0
       11    0    0   12    ∞

Column wise reduction sum = 1 + 0 + 3 + 0 + 0 = 4

Cumulative reduced sum = row wise reduction + column wise reduction sum.
= 21 + 4 = 25.

This is the cost of the root, i.e., node 1, because this is the initially reduced cost matrix.

The lower bound for node 1 is 25 and the upper bound is ∞.

Starting from node 1, we can next visit vertices 2, 3, 4 and 5. So, consider exploring
the paths (1, 2), (1, 3), (1, 4) and (1, 5).

The tree organization up to this point is as follows:


Node 1 (L = 25, U = ∞)
 ├─ i = 2 → node 2
 ├─ i = 3 → node 3
 ├─ i = 4 → node 4
 └─ i = 5 → node 5

Variable ‘i’ indicates the next node to visit.

Step 2:

Consider the path (1, 2):

Change all entries of row 1 and column 2 of A to ∞ and also set A(2, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞   11    2    0
        0    ∞    ∞    0    2
       15    ∞   12    ∞    0
       11    ∞    0   12    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
Every such row and column already contains a zero, so the resultant matrix is
unchanged.

Row reduction sum = 0
Column reduction sum = 0
Cumulative reduction r = 0 + 0 = 0

Therefore, ĉ(S) = ĉ(R) + A(1, 2) + r = 25 + 10 + 0 = 35.

Consider the path (1, 3):

Change all entries of row 1 and column 3 of A to ∞ and also set A(3, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
       12    ∞    ∞    2    0
        ∞    3    ∞    0    2
       15    3    ∞    ∞    0
       11    0    ∞   12    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The 1st column is reduced by 11.

Then the resultant matrix is:

        ∞    ∞    ∞    ∞    ∞
        1    ∞    ∞    2    0
        ∞    3    ∞    0    2
        4    3    ∞    ∞    0
        0    0    ∞   12    ∞

Row reduction sum = 0
Column reduction sum = 11
Cumulative reduction r = 0 + 11 = 11

Therefore, ĉ(S) = ĉ(R) + A(1, 3) + r = 25 + 17 + 11 = 53.

Consider the path (1, 4):

Change all entries of row 1 and column 4 of A to ∞ and also set A(4, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
       12    ∞   11    ∞    0
        0    3    ∞    ∞    2
        ∞    3   12    ∞    0
       11    0    0    ∞    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The matrix is already reduced, so it is unchanged.

Row reduction sum = 0
Column reduction sum = 0
Cumulative reduction r = 0 + 0 = 0

Therefore, ĉ(S) = ĉ(R) + A(1, 4) + r = 25 + 0 + 0 = 25.

Consider the path (1, 5):

Change all entries of row 1 and column 5 of A to ∞ and also set A(5, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
       12    ∞   11    2    ∞
        0    3    ∞    0    ∞
       15    3   12    ∞    ∞
        ∞    0    0   12    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The 2nd row is reduced by 2 and the 4th row by 3.

Then the resultant matrix is:

        ∞    ∞    ∞    ∞    ∞
       10    ∞    9    0    ∞
        0    3    ∞    0    ∞
       12    0    9    ∞    ∞
        ∞    0    0   12    ∞

Row reduction sum = 5
Column reduction sum = 0
Cumulative reduction r = 5 + 0 = 5

Therefore, ĉ(S) = ĉ(R) + A(1, 5) + r = 25 + 1 + 5 = 31.
The tree organization up to this point is as follows:
Node 1 (L = 25, U = ∞)
 ├─ i = 2 → node 2 (35)
 ├─ i = 3 → node 3 (53)
 ├─ i = 4 → node 4 (25)
 │   ├─ i = 2 → node 6
 │   ├─ i = 3 → node 7
 │   └─ i = 5 → node 8
 └─ i = 5 → node 5 (31)

The costs of the paths are (1, 2) = 35, (1, 3) = 53, (1, 4) = 25 and (1, 5) = 31.
The cost of the path (1, 4) is minimum. Hence the matrix obtained for path
(1, 4) is considered as the reduced cost matrix.

A =
        ∞    ∞    ∞    ∞    ∞
       12    ∞   11    ∞    0
        0    3    ∞    ∞    2
        ∞    3   12    ∞    0
       11    0    0    ∞    ∞

The new possible paths are (4, 2), (4, 3) and (4, 5).

Consider the path (4, 2):

Change all entries of row 4 and column 2 of A to ∞ and also set A(2, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞   11    ∞    0
        0    ∞    ∞    ∞    2
        ∞    ∞    ∞    ∞    ∞
       11    ∞    0    ∞    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The matrix is already reduced, so it is unchanged.

Row reduction sum = 0
Column reduction sum = 0
Cumulative reduction r = 0 + 0 = 0

Therefore, ĉ(S) = ĉ(R) + A(4, 2) + r = 25 + 3 + 0 = 28.

Consider the path (4, 3):

Change all entries of row 4 and column 3 of A to ∞ and also set A(3, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
       12    ∞    ∞    ∞    0
        ∞    3    ∞    ∞    2
        ∞    ∞    ∞    ∞    ∞
       11    0    ∞    ∞    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The 3rd row is reduced by 2 and the 1st column by 11.

Then the resultant matrix is:

        ∞    ∞    ∞    ∞    ∞
        1    ∞    ∞    ∞    0
        ∞    1    ∞    ∞    0
        ∞    ∞    ∞    ∞    ∞
        0    0    ∞    ∞    ∞

Row reduction sum = 2
Column reduction sum = 11
Cumulative reduction r = 2 + 11 = 13

Therefore, ĉ(S) = ĉ(R) + A(4, 3) + r = 25 + 12 + 13 = 50.

Consider the path (4, 5):

Change all entries of row 4 and column 5 of A to ∞ and also set A(5, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
       12    ∞   11    ∞    ∞
        0    3    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    0    0    ∞    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The 2nd row is reduced by 11.

Then the resultant matrix is:

        ∞    ∞    ∞    ∞    ∞
        1    ∞    0    ∞    ∞
        0    3    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    0    0    ∞    ∞

Row reduction sum = 11
Column reduction sum = 0
Cumulative reduction r = 11 + 0 = 11

Therefore, ĉ(S) = ĉ(R) + A(4, 5) + r = 25 + 0 + 11 = 36.

The tree organization up to this point is as follows:
Node 1 (L = 25, U = ∞)
 ├─ i = 2 → node 2 (35)
 ├─ i = 3 → node 3 (53)
 ├─ i = 4 → node 4 (25)
 │   ├─ i = 2 → node 6 (28)
 │   │   ├─ i = 3 → node 9
 │   │   └─ i = 5 → node 10
 │   ├─ i = 3 → node 7 (50)
 │   └─ i = 5 → node 8 (36)
 └─ i = 5 → node 5 (31)

The costs of the paths are (4, 2) = 28, (4, 3) = 50 and (4, 5) = 36. The cost of
the path (4, 2) is minimum. Hence the matrix obtained for path (4, 2) is
considered as the reduced cost matrix.

A =
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞   11    ∞    0
        0    ∞    ∞    ∞    2
        ∞    ∞    ∞    ∞    ∞
       11    ∞    0    ∞    ∞

The new possible paths are (2, 3) and (2, 5).

Consider the path (2, 3):

Change all entries of row 2 and column 3 of A to ∞ and also set A(3, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    2
        ∞    ∞    ∞    ∞    ∞
       11    ∞    ∞    ∞    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The 3rd row is reduced by 2 and the 5th row by 11.

Then the resultant matrix is:

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    0
        ∞    ∞    ∞    ∞    ∞
        0    ∞    ∞    ∞    ∞

Row reduction sum = 13
Column reduction sum = 0
Cumulative reduction r = 13 + 0 = 13

Therefore, ĉ(S) = ĉ(R) + A(2, 3) + r = 28 + 11 + 13 = 52.

Consider the path (2, 5):

Change all entries of row 2 and column 5 of A to ∞ and also set A(5, 1) to ∞.

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        0    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    0    ∞    ∞

Apply row and column reduction to the rows and columns that are not entirely ∞.
The matrix is already reduced, so it is unchanged.

Row reduction sum = 0
Column reduction sum = 0
Cumulative reduction r = 0 + 0 = 0

Therefore, ĉ(S) = ĉ(R) + A(2, 5) + r = 28 + 0 + 0 = 28.
The tree organization up to this point is as follows:

Node 1 (L = 25, U = ∞)
 ├─ i = 2 → node 2 (35)
 ├─ i = 3 → node 3 (53)
 ├─ i = 4 → node 4 (25)
 │   ├─ i = 2 → node 6 (28)
 │   │   ├─ i = 3 → node 9 (52)
 │   │   └─ i = 5 → node 10 (28)
 │   │       └─ i = 3 → node 11
 │   ├─ i = 3 → node 7 (50)
 │   └─ i = 5 → node 8 (36)
 └─ i = 5 → node 5 (31)

The costs of the paths are (2, 3) = 52 and (2, 5) = 28. The cost of the path
(2, 5) is minimum. Hence the matrix obtained for path (2, 5) is considered
as the reduced cost matrix.

A =
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        0    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    0    ∞    ∞

The only new possible path is (5, 3).

Consider the path (5, 3):

Change all entries of row 5 and column 3 of A to ∞ and also set A(3, 1) to ∞.
Apply row and column reduction to the rows and columns that are not entirely ∞.

Then the resultant matrix is:

        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞
        ∞    ∞    ∞    ∞    ∞

Row reduction sum = 0
Column reduction sum = 0
Cumulative reduction r = 0 + 0 = 0

Therefore, ĉ(S) = ĉ(R) + A(5, 3) + r = 28 + 0 + 0 = 28.

The overall tree organization is as follows:

Node 1 (L = 25, U = ∞)
 ├─ i = 2 → node 2 (35)
 ├─ i = 3 → node 3 (53)
 ├─ i = 4 → node 4 (25)
 │   ├─ i = 2 → node 6 (28)
 │   │   ├─ i = 3 → node 9 (52)
 │   │   └─ i = 5 → node 10 (28)
 │   │       └─ i = 3 → node 11 (28)
 │   ├─ i = 3 → node 7 (50)
 │   └─ i = 5 → node 8 (36)
 └─ i = 5 → node 5 (31)

The tour found for the traveling salesperson problem is:

1 → 4 → 2 → 5 → 3 → 1

The minimum cost of the tour is 10 + 6 + 2 + 7 + 3 = 28.
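For an instance this small, the result can be cross-checked by enumerating all (n − 1)! tours directly. This is illustrative only; vertices are 0-based here, so the tour (0, 3, 1, 4, 2) is the text's tour 1 → 4 → 2 → 5 → 3 → 1.

```python
from itertools import permutations

INF = float('inf')
cost = [[INF, 20, 30, 10, 11],
        [15, INF, 16, 4, 2],
        [3, 5, INF, 2, 4],
        [19, 6, 18, INF, 3],
        [16, 4, 7, 16, INF]]

def tour_cost(tour):
    # Sum the edge costs around the closed tour (0-based vertex sequence).
    return sum(cost[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

# Fix vertex 0 as the start and try every permutation of the rest.
best = min(tour_cost((0,) + p) for p in permutations(range(1, 5)))
```

Both tour_cost((0, 3, 1, 4, 2)) and best evaluate to 28, agreeing with the branch-and-bound answer.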

0/1 Knapsack Problem

Consider the instance: M = 15, n = 4, (P1, P2, P3, P4) = (10, 10, 12, 18) and
(w1, w2, w3, w4) = ( 2, 4, 6, 9).

The 0/1 knapsack problem can be solved by using the branch-and-bound technique.
In this problem we calculate a lower bound and an upper bound for each node.

Place the first item in the knapsack. The remaining capacity of the knapsack is
15 – 2 = 13. Place the next item, w2, in the knapsack; the remaining capacity is
13 – 4 = 9. Place the next item, w3, in the knapsack; the remaining capacity is
9 – 6 = 3. No fractions are allowed in the calculation of the upper bound, so w4
cannot be placed in the knapsack.

Profit = P1 + P2 + P3 = 10 + 10 + 12 = 32.
So, upper bound = 32.

To calculate the lower bound we can place a fraction of w4 in the knapsack, since
fractions are allowed in the calculation of the lower bound:

Lower bound = 10 + 10 + 12 + (3/9 × 18) = 32 + 6 = 38
The knapsack problem is a maximization problem, but the branch-and-bound
technique is applicable only to minimization problems. In order to convert the
maximization problem into a minimization problem, we take negative values for the
upper and lower bounds.

Therefore, upper bound (U) = -32 and lower bound (L) = -38.

We choose the path that has the minimum difference between upper and lower
bound. If the differences are equal, we choose the path by comparing upper bounds
and discard the node with the larger upper bound.
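The bound computation described above can be sketched as a small helper (a hypothetical function bounds(), where fixed is the tuple of decisions x1, x2, ... made so far; the upper bound packs whole remaining items only, the lower bound also adds a fraction of the first item that no longer fits, and both values are negated as in the text):

```python
def bounds(fixed, M=15, p=(10, 10, 12, 18), w=(2, 4, 6, 9)):
    # Items 0 .. len(fixed)-1 are already decided (1 = taken, 0 = not taken).
    room = M - sum(wi for xi, wi in zip(fixed, w) if xi)
    profit = sum(pi for xi, pi in zip(fixed, p) if xi)
    u = c = profit
    for pi, wi in zip(p[len(fixed):], w[len(fixed):]):
        if wi <= room:
            u += pi                  # whole item: counts toward both bounds
            c += pi
            room -= wi
        else:
            c += pi * room / wi      # fractional part, allowed only in the lower bound
            break
    return -u, -c                    # negated (U, L), as in the text
```

For the root node, bounds(()) gives (-32, -38), matching U = -32 and L = -38 above.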

Node 1 (U = -32, L = -38)
 ├─ x1 = 1 → node 2 (U = -32, L = -38)
 └─ x1 = 0 → node 3 (U = -22, L = -32)

Now we will calculate upper bound and lower bound for nodes 2, 3.

For node 2, x1= 1, means we should place first item in the knapsack.

U = 10 + 10 + 12 = 32; take it as -32
L = 10 + 10 + 12 + (3/9 × 18) = 32 + 6 = 38; take it as -38

For node 3, x1 = 0, means we should not place first item in the knapsack.

U = 10 + 12 = 22; take it as -22
L = 10 + 12 + (5/9 × 18) = 10 + 12 + 10 = 32; take it as -32

Next, we calculate the difference of the upper and lower bounds for nodes 2 and 3:

For node 2, U – L = -32 + 38 = 6
For node 3, U – L = -22 + 32 = 10

Choose node 2, since it has minimum difference value of 6.

Node 1 (U = -32, L = -38)
 ├─ x1 = 1 → node 2 (U = -32, L = -38)
 │   ├─ x2 = 1 → node 4 (U = -32, L = -38)
 │   └─ x2 = 0 → node 5 (U = -22, L = -36)
 └─ x1 = 0 → node 3 (U = -22, L = -32)

Now we calculate the lower and upper bounds of nodes 4 and 5 and the difference
of the bounds:

For node 4, U – L = -32 + 38 = 6
For node 5, U – L = -22 + 36 = 14

Choose node 4, since it has minimum difference value of 6.

Node 1 (U = -32, L = -38)
 ├─ x1 = 1 → node 2 (U = -32, L = -38)
 │   ├─ x2 = 1 → node 4 (U = -32, L = -38)
 │   │   ├─ x3 = 1 → node 6 (U = -32, L = -38)
 │   │   └─ x3 = 0 → node 7 (U = -38, L = -38)
 │   └─ x2 = 0 → node 5 (U = -22, L = -36)
 └─ x1 = 0 → node 3 (U = -22, L = -32)

Now we calculate the lower and upper bounds of nodes 6 and 7 and the difference
of the bounds:

For node 6, U – L = -32 + 38 = 6
For node 7, U – L = -38 + 38 = 0

Choose node 7, since it has the minimum difference value of 0.

Node 1 (U = -32, L = -38)
 ├─ x1 = 1 → node 2 (U = -32, L = -38)
 │   ├─ x2 = 1 → node 4 (U = -32, L = -38)
 │   │   ├─ x3 = 1 → node 6 (U = -32, L = -38)
 │   │   └─ x3 = 0 → node 7 (U = -38, L = -38)
 │   │       ├─ x4 = 1 → node 8 (U = -38, L = -38)
 │   │       └─ x4 = 0 → node 9 (U = -20, L = -20)
 │   └─ x2 = 0 → node 5 (U = -22, L = -36)
 └─ x1 = 0 → node 3 (U = -22, L = -32)

Now we calculate the lower and upper bounds of nodes 8 and 9 and the difference
of the bounds:

For node 8, U – L = -38 + 38 = 0
For node 9, U – L = -20 + 20 = 0

Here the differences are equal, so compare the upper bounds of nodes 8 and 9 and
discard the node with the larger upper bound. Choose node 8 and discard node 9,
since node 9 has the larger upper bound.

Consider the path 1 → 2 → 4 → 7 → 8:

x1 = 1
x2 = 1
x3 = 0
x4 = 1

The solution for 0/1 Knapsack problem is (x1, x2, x3, x4) = (1, 1, 0, 1)

Maximum profit:

Σ pi xi = 10 × 1 + 10 × 1 + 12 × 0 + 18 × 1 = 10 + 10 + 18 = 38.
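The answer can be checked directly; the total weight of the chosen items must also respect M = 15:

```python
p = (10, 10, 12, 18)                 # profits
w = (2, 4, 6, 9)                     # weights
x = (1, 1, 0, 1)                     # solution vector from the search
profit = sum(pi * xi for pi, xi in zip(p, x))
weight = sum(wi * xi for wi, xi in zip(w, x))
```

Here profit is 38 and weight is 15, so the knapsack is exactly full.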

A portion of the state space tree for the above problem can also be generated using
FIFO branch and bound.

