daa-unit-5 (2)
Branch and bound is one of the techniques used for problem solving. It is similar to backtracking
since it also uses the state space tree. It is used for solving optimization problems, in particular
minimization problems. If we are given a maximization problem, we can still apply the branch and
bound technique by simply converting the problem into a minimization problem (for example, by
negating the objective function).
P = {10, 5, 8, 3}
d = {1, 2, 1, 2}
Here, P gives the profits and d the deadlines of four jobs j1, j2, j3 and j4. We can write the
solutions in two ways, which are given below:
Suppose we want to perform the jobs j1 and j4; then the solution can be represented in two ways:
S1 = {j1, j4}
The second way of representing the solution is to note that the first job is done, the second and third
jobs are not done, and the fourth job is done:
S2 = {1, 0, 0, 1}
The solution s1 is the variable-size solution while the solution s2 is the fixed-size solution.
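The two representations carry the same information and can be converted into each other; a minimal Python sketch (the job names j1..j4 are assumed for illustration):

```python
# The same job selection expressed both ways, for n = 4 jobs.
jobs = ["j1", "j2", "j3", "j4"]

def to_fixed_size(subset, jobs):
    """Variable-size solution {j1, j4} -> fixed-size vector [1, 0, 0, 1]."""
    return [1 if j in subset else 0 for j in jobs]

def to_variable_size(vector, jobs):
    """Fixed-size vector [1, 0, 0, 1] -> variable-size solution {j1, j4}."""
    return {j for j, bit in zip(jobs, vector) if bit == 1}

s1 = {"j1", "j4"}             # variable-size solution
s2 = to_fixed_size(s1, jobs)  # [1, 0, 0, 1], the fixed-size solution
```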
First, we will see the subset method, which uses the variable-size solution.
In this case, we first consider the first job, then second job, then third job and finally we consider the
last job.
As we can observe in the above figure, breadth first search is performed rather than depth first
search. Here we move breadth-wise for exploring the solutions. In backtracking, we go depth-wise,
whereas in branch and bound, we go breadth-wise.
Now one level is completed. Once we take the first job, we can consider either j2, j3 or j4. If we follow
the route for solution s1, it says that we are doing jobs j1 and j4, so we will not consider jobs j2 and j3.
Now we will consider the node 3. In this case, we are doing job j2 so we can consider either job j3 or
j4. Here, we have discarded the job j1.
Now we will expand the node 4. Since here we are doing job j3 so we will consider only job j4.
Now we will expand node 6, and here we will consider the jobs j3 and j4.
Now we will expand node 7 and here we will consider job j4.
Now we will expand node 9, and here we will consider job j4.
The last node left to be expanded is node 12. Here, we consider job j4.
The above is the state space tree for the solution s1 = {j1, j4}
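The breadth-wise exploration above can be sketched with a FIFO queue. The tuples below stand in for the tree's nodes (each node lists the job indices included so far); this is an illustrative sketch, not the exact node numbering of the figures:

```python
from collections import deque

def explore_fifo(n):
    """Breadth-first (FIFO) exploration of the variable-size subset tree.

    A node is a tuple of included job indices (1-based); a node whose last
    job is i branches to children that add one of the jobs i+1 .. n.
    Returns the subsets in the order they are visited.
    """
    visited = []
    queue = deque([()])          # root: no job chosen yet
    while queue:
        node = queue.popleft()   # FIFO: one level completes before the next
        visited.append(node)
        last = node[-1] if node else 0
        for j in range(last + 1, n + 1):
            queue.append(node + (j,))
    return visited

order = explore_fifo(4)
# The subset (1, 4) — the solution s1 = {j1, j4} — appears among the
# visited nodes, after the whole first level (1,), (2,), (3,), (4,).
```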
We will see another way to solve the problem to achieve the solution s1.
Now, we will expand node 1. After expansion, the state space tree would appear as:
On each expansion, the node will be pushed into the stack shown as below:
Now the expansion will be based on the node that appears on the top of the stack. Since node
5 appears on the top of the stack, we will expand node 5: pop node 5 from the
stack. Since node 5 is on the last job, i.e., j4, there is no further scope for expansion.
The next node that appears on the top of the stack is node 4. Pop out the node 4 and expand. On
expansion, job j4 will be considered and node 6 will be added into the stack shown as below:
The next node is 6 which is to be expanded. Pop out the node 6 and expand. Since the node 6 is in the
last job, i.e., j4 so there is no further scope of expansion.
The next node to be expanded is node 3. Since node 3 works on the job j2, it will be
expanded into two nodes, i.e., 7 and 8, working on jobs j3 and j4 respectively. The nodes 7 and 8 will be
pushed onto the stack as shown below:
The next node that appears on the top of the stack is node 8. Pop out the node 8 and expand. Since
the node 8 works on the job j4 so there is no further scope for the expansion.
The next node that appears on the top of the stack is node 7. Pop out the node 7 and expand. Since
the node 7 works on the job j3 so node 7 will be further expanded to node 9 that works on the job j4
as shown as below and the node 9 will be pushed into the stack.
The next node that appears on the top of the stack is node 9. Since node 9 works on the job j4,
there is no further scope for the expansion.
The next node that appears on the top of the stack is node 2. Since node 2 works on the job j1,
it can be further expanded, up to three nodes named 10,
11, 12 working on jobs j2, j3, and j4 respectively. These new nodes will be pushed onto the stack as
shown below:
In the above method, we explored all the nodes using the stack that follows the LIFO principle.
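The stack-based walkthrough needs only the FIFO queue swapped for a LIFO stack. The sketch below uses the same tuple encoding of nodes (job indices included so far), assumed for illustration:

```python
def explore_lifo(n):
    """Depth-style (LIFO) exploration of the variable-size subset tree.

    Children of the popped node are pushed onto a stack, so the node
    pushed last is expanded first, as in the walkthrough above.
    """
    visited = []
    stack = [()]                # root: no job chosen yet
    while stack:
        node = stack.pop()      # LIFO: top of the stack is expanded first
        visited.append(node)
        last = node[-1] if node else 0
        for j in range(last + 1, n + 1):
            stack.append(node + (j,))
    return visited

# After the root, the subtree of the last-pushed child (4,) is finished
# before the earlier children are touched.
```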
There is one more method that can be used to find the solution and that method is Least cost branch
and bound. In this technique, nodes are explored based on the cost of the node. The cost of the node
can be defined using the problem and with the help of the given problem, we can define the cost
function. Once the cost function is defined, we can define the cost of the node.
Let's first consider the node 1 having cost infinity shown as below:
Now we will expand the node 1. The node 1 will be expanded into four nodes named as 2, 3, 4 and 5
shown as below:
Let's assume that cost of the nodes 2, 3, 4, and 5 are 25, 12, 19 and 30 respectively.
Since it is least cost branch and bound, we will explore the node which has the least cost.
In the above figure, we can observe that the node with a minimum cost is node 3. So, we will explore
the node 3 having cost 12.
Since the node 3 works on the job j2 so it will be expanded into two nodes named as 6 and 7 shown
as below:
The node 6 works on job j3 while the node 7 works on job j4. The cost of the node 6 is 8 and the cost
of the node 7 is 7. Now we have to select the node which is having the minimum cost. The node 7 has
the minimum cost so we will explore the node 7. Since the node 7 already works on the job j4 so
there is no further scope for the expansion.
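The least cost exploration can be sketched with a min-heap holding the live nodes. The child costs below are the ones assumed in the walkthrough (25, 12, 19, 30 for nodes 2–5, and 8, 7 for nodes 6 and 7):

```python
import heapq
import math

# Children of each node with their costs, as assumed in the walkthrough:
children = {
    1: [(25, 2), (12, 3), (19, 4), (30, 5)],
    3: [(8, 6), (7, 7)],
}

def explore_least_cost(root):
    """Always expand the live node with the smallest cost (min-heap)."""
    order = []
    heap = [(math.inf, root)]  # node 1 starts with cost infinity, as above;
                               # it is popped first since it is the only live node
    while heap:
        cost, node = heapq.heappop(heap)   # least-cost live node
        order.append(node)
        for child in children.get(node, []):
            heapq.heappush(heap, child)
    return order

# Node 3 (cost 12) is explored right after the root, and node 7 (cost 7)
# before node 6 (cost 8), matching the walkthrough.
```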
Example-
The following graph shows a set of cities and distance between every pair of cities-
Solve Travelling Salesman Problem using Branch and Bound Algorithm in the following graph-
Solution-
Step-01:
Write the initial cost matrix and reduce it-
Rules
To reduce a matrix, perform the row reduction and column reduction of the matrix
separately.
A row or a column is said to be reduced if it contains at least one entry ‘0’ in it.
Row Reduction-
Cost(1)
= Sum of all reduction elements
=4+5+6+2+1
= 18
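Row and column reduction can be sketched as follows. The matrix in the usage example is a small illustrative one, not the cost matrix of the figure (which is omitted above):

```python
INF = float("inf")

def reduce_matrix(m):
    """Row- and column-reduce a TSP cost matrix in place.

    Subtracts each row's minimum from that row, then each column's minimum
    from that column, so every (finite) row and column contains a 0.
    Returns the total of the subtracted amounts: the sum of all reduction
    elements, i.e. the cost of the reduction.
    """
    n = len(m)
    total = 0
    for i in range(n):                       # row reduction
        low = min(m[i])
        if 0 < low < INF:
            total += low
            m[i] = [x - low if x < INF else x for x in m[i]]
    for j in range(n):                       # column reduction
        low = min(m[i][j] for i in range(n))
        if 0 < low < INF:
            total += low
            for i in range(n):
                if m[i][j] < INF:
                    m[i][j] -= low
    return total

m = [[INF, 4, 7],
     [5, INF, 3],
     [6, 2, INF]]
cost = reduce_matrix(m)   # row minima 4, 3, 2 plus column minimum 2 -> 11
```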
Step-02:
Now,
We reduce this matrix.
Then, we find out the cost of node-02.
Row Reduction-
Column Reduction-
Cost(2)
= Cost(1) + Sum of reduction elements + M[A,B]
= 18 + (13 + 5) + 0
= 36
Similarly, we perform the row reduction and column reduction for node-03 (Path A → C); this gives Cost(3) = 25.
Now,
We reduce this matrix.
Then, we find out the cost of node-04.
Row Reduction-
Column Reduction-
Cost(4)
= Cost(1) + Sum of reduction elements + M[A,D]
= 18 + 5 + 3
= 26
Thus, we have-
Cost(2) = 36 (for Path A → B)
Cost(3) = 25 (for Path A → C)
Cost(4) = 26 (for Path A → D)
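Every child cost above follows one rule: Cost(child) = Cost(parent) + sum of reduction elements + M[parent, child] in the parent's reduced matrix. A generic sketch of that rule (the 3×3 matrix in the test below is illustrative, not the one from the figure):

```python
INF = float("inf")

def reduce_matrix(m):
    """Row- then column-reduce m in place; return the reduction total."""
    n, total = len(m), 0
    for i in range(n):
        low = min(m[i])
        if 0 < low < INF:
            total += low
            m[i] = [x - low if x < INF else x for x in m[i]]
    for j in range(n):
        low = min(m[i][j] for i in range(n))
        if 0 < low < INF:
            total += low
            for i in range(n):
                if m[i][j] < INF:
                    m[i][j] -= low
    return total

def child_cost(parent_matrix, parent_cost, i, j, start=0):
    """Cost of extending the tour from city i to city j.

    Rule used above: Cost(child) = Cost(parent) + reduction elements + M[i][j].
    Row i, column j, and the edge back to the start city are struck out
    (set to infinity) before the child's matrix is reduced.
    """
    edge = parent_matrix[i][j]
    child = [row[:] for row in parent_matrix]   # copy; keep parent intact
    for k in range(len(child)):
        child[k][j] = INF                       # column j struck out
    child[i] = [INF] * len(child)               # row i struck out
    child[j][start] = INF                       # cannot return to start yet
    return parent_cost + reduce_matrix(child) + edge, child
```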
Step-03:
Cost(3) = 25
Row Reduction-
Cost(5)
= cost(3) + Sum of reduction elements + M[C,B]
= 25 + (13 + 8) + ∞
=∞
Row Reduction-
Column Reduction-
Thus, we have-
Step-04:
Cost(6) = 25
Choosing To Go To Vertex-B: Node-7 (Path A → C → D → B)
Now,
We reduce this matrix.
Then, we find out the cost of node-7.
Row Reduction-
Column Reduction-
Cost(7)
= cost(6) + Sum of reduction elements + M[D,B]
= 25 + 0 + 0
= 25
Thus,
Optimal path is: A → C → D → B → A
Cost of Optimal path = 25 units
The knapsack problem is a maximization problem, but the branch and bound technique is applicable only to
minimization problems. In order to convert the maximization problem into a minimization problem, we have
to take a negative sign for the upper bound and lower bound.
Therefore, Upper bound (U) = -32
Lower bound (L) = -38
We choose the path which has the minimum difference between the upper bound and the lower bound.
If the differences are equal, then we choose the path by comparing the upper bounds, and we discard the
node with the maximum upper bound.
Now we will calculate upper bound and lower bound for nodes 2, 3.
For node 2, x1 = 1 means we place the first item in the knapsack.
U = 10 + 10 + 12 = 32, taken as -32
For node 3, x1 = 0 means we do not place the first item in the knapsack.
U = 10 + 12 = 22, taken as -22
Next, we will calculate difference of upper bound and lower bound for nodes 2, 3
For node 2, U – L = -32 + 38 = 6
For node 3, U – L = -22 + 32 = 10
Choose node 2, since it has minimum difference value of 6.
Now we will calculate lower bound and upper bound of node 4 and 5. Calculate
difference of lower and upper bound of nodes 4 and 5.
For node 4, U – L = -32 + 38 = 6
For node 5, U – L = -22 + 36 = 14
Choose node 4, since it has minimum difference value of 6.
Now
Now we will calculate the lower bound and upper bound of nodes 6 and 7, and the difference of the lower and
upper bound of nodes 6 and 7.
For node 6, U – L = -32 + 38 = 6
For node 7, U – L = -38 + 38 = 0
Choose node 7, since it has the minimum difference value of 0.
Now we will calculate the lower bound and upper bound of nodes 8 and 9, and the difference of the lower and
upper bound of nodes 8 and 9.
For node 8, U – L = -38 + 38 = 0
For node 9, U – L = -20 + 20 = 0
Here the difference is the same, so we compare the upper bounds of nodes 8 and 9 and discard the node
which has the maximum upper bound. Choose node 8 and discard node 9, since node 9 has the maximum
upper bound.
Consider the path from 1 → 2 → 4 → 7 → 8
X1 = 1
X2 = 1
X3 = 0
X4 = 1
The solution for 0/1 Knapsack problem is (x1, x2, x3, x4) = (1, 1, 0, 1)
Maximum profit is:
ΣPi xi = 10 x 1 + 10 x 1 + 12 x 0 + 18 x 1
= 10 + 10 + 0 + 18 = 38.
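The bounds used in this example can be reproduced in code. Note that the item weights and the knapsack capacity are not shown in the text above; the weights {2, 4, 6, 9} and capacity 15 below are an assumed instance, chosen because it reproduces the stated root bounds U = 32 and L = 38 (negated to -32 and -38 in the tree):

```python
def bounds(profits, weights, capacity):
    """Upper and lower bound used in 0/1 knapsack branch and bound.

    U takes items greedily in the given order, whole items only;
    L additionally takes a fraction of the first item that does not fit.
    (The tree then works with -U and -L to get a minimization problem.)
    """
    u = used = 0
    for p, w in zip(profits, weights):
        if used + w <= capacity:
            used += w
            u += p
        else:
            # L = U plus the fractional profit of this item
            # (integer arithmetic suffices for this instance)
            return u, u + (capacity - used) * p // w
    return u, u

profits = [10, 10, 12, 18]
weights = [2, 4, 6, 9]          # assumed; not given in the text above
u, l = bounds(profits, weights, 15)   # u = 32, l = 38, as in the text
```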
Portion of the state space tree using FIFO Branch and Bound for the above problem:
NP-Hard and NP-Complete problems
Deterministic and non-deterministic algorithms
Deterministic: An algorithm in which every operation is uniquely defined is called a deterministic algorithm.
Non-Deterministic: An algorithm in which the operations are not uniquely defined but are limited to a specific
set of possibilities for every operation is called a non-deterministic algorithm.
The non-deterministic algorithms use the following functions:
1. Choice: Arbitrarily chooses one of the elements from given set.
2. Failure: Indicates an unsuccessful completion
3. Success: Indicates a successful completion
A non-deterministic algorithm terminates unsuccessfully if and only if there exists no set of choices leading to a success
signal. Whenever there is a set of choices that leads to a successful completion, one such set of choices is selected
and the algorithm terminates successfully.
In case successful completion is not possible, the time complexity is O(1). In case of successful completion, the
time required is the minimum number of steps needed to reach a successful completion, which is O(n) where n is the
number of inputs.
The problems that are solved in polynomial time are called tractable problems, and the problems that require
super-polynomial time are called non-tractable (intractable) problems. All deterministic polynomial time algorithms
are tractable, while the non-deterministic polynomial time problems are intractable.
Satisfiability Problem:
A satisfiability instance is a Boolean formula that can be constructed using the following literals and operations:
1. A literal is either a variable or the negation of a variable.
2. The literals are connected with the operators ∨, ∧, ⇒, ⇔.
3. Parentheses.
The satisfiability problem is to determine whether a Boolean formula is true for some assignment of truth
values to the variables. In general, formulas are expressed in Conjunctive Normal Form (CNF).
A Boolean formula is in conjunctive normal form iff it is a conjunction of clauses, each clause being a
disjunction of literals, e.g. (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ ¬x3).
A Boolean formula is in 3CNF if each clause has exactly 3 distinct literals.
Example:
Consider a non-deterministic algorithm that terminates successfully iff a given formula
E(x1, x2, x3) is satisfiable.
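A deterministic simulation of such a non-deterministic algorithm replaces Choice with an exhaustive search over truth assignments; the formula E below is an assumed example:

```python
from itertools import product

def satisfiable(formula, n):
    """Deterministic simulation of the non-deterministic algorithm.

    Choice() is replaced by trying every truth assignment; Success is
    signalled if some assignment makes the formula true, Failure if no
    set of choices leads to success.
    """
    for assignment in product([False, True], repeat=n):
        if formula(*assignment):
            return True        # Success: a set of choices leads here
    return False               # Failure: no set of choices succeeds

# Assumed example formula E(x1, x2, x3) = (x1 ∨ ¬x2) ∧ (x2 ∨ x3):
E = lambda x1, x2, x3: (x1 or not x2) and (x2 or x3)
satisfiable(E, 3)   # True, e.g. for x1 = True, x2 = False, x3 = True
```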
Reducibility:
A problem Q1 can be reduced to Q2 if any instance of Q1 can be easily rephrased as an instance of Q2. If the
solution to the problem Q2 provides a solution to the problem Q1, then these are said to be reducible
problems.
Let L1 and L2 be two problems. L1 reduces to L2 iff there is a way to solve L1 by a deterministic
polynomial time algorithm that uses a deterministic algorithm solving L2 in polynomial time; this is
denoted by L1 ∝ L2.
If we have a polynomial time algorithm for L2, then we can solve L1 in polynomial time.
Two problems L1 and L2 are said to be polynomially equivalent iff L1 ∝ L2 and L2 ∝ L1.
Example: Let P1 be the problem of selection and P2 be the problem of sorting. Let the input have n numbers.
If the numbers are sorted in array A[ ] the ith smallest element of the input can be obtained as A[i]. Thus P1
reduces to P2 in O(1) time.
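This reduction can be sketched directly:

```python
def ith_smallest(numbers, i):
    """P1 (selection) solved through P2 (sorting).

    Sort the input once (the P2 instance); the ith smallest element is
    then read off as A[i] with O(1) extra work (1-based i here).
    """
    a = sorted(numbers)   # solve the P2 instance
    return a[i - 1]       # constant-time lookup answers the P1 instance

ith_smallest([7, 2, 9, 4], 2)   # -> 4, the 2nd smallest of the input
```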
Decision Problem:
Any problem for which the answer is either yes or no is called a decision problem. The algorithm for a
decision problem is called a decision algorithm.
Example: Max clique problem, sum of subsets problem.
Optimization Problem: Any problem that involves the identification of an optimal value (maximum or
minimum) is called optimization problem.
Example: Knapsack problem, travelling salesperson problem.
In decision problem, the output statement is implicit and no explicit statements are permitted.
The output from a decision problem is uniquely defined by the input parameters and algorithm
specification.
Many optimization problems can be reduced to decision problems with the property that the decision
problem can be solved in polynomial time iff the corresponding optimization problem can be solved in
polynomial time. If the decision problem cannot be solved in polynomial time, then the optimization problem
cannot be solved in polynomial time either.
Class P:
P: the class of decision problems that are solvable in O(p(n)) time, where p(n) is a polynomial of problem’s
input size n
Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing
Class NP
NP (nondeterministic polynomial): class of decision problems whose proposed solutions can be verified in
polynomial time = solvable by a nondeterministic polynomial algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem
• checks whether this solution is correct in polynomial time
By definition, it solves the problem if it’s capable of generating and verifying a solution on one of its tries
Example: CNF satisfiability
Problem: Is a Boolean expression in its conjunctive normal form (CNF) satisfiable, i.e., are there values of its
variables that make it true? This problem is in NP.
Nondeterministic algorithm:
• Guess truth assignment
• Substitute the values into the CNF formula to see if it evaluates to true
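The polynomial-time checking stage can be sketched as follows, with clauses encoded as signed integers (an assumed encoding: literal k means xk, and -k means ¬xk):

```python
def verify_cnf(clauses, assignment):
    """Substitute a guessed truth assignment into a CNF formula.

    Each clause is a list of literals; literal k means variable k, -k its
    negation. The formula is true iff every clause has a true literal.
    This check runs in time linear in the formula size.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 ∨ ¬x2) ∧ (x2 ∨ x3) with the guess x1=True, x2=False, x3=True:
clauses = [[1, -2], [2, 3]]
guess = {1: True, 2: False, 3: True}
verify_cnf(clauses, guess)   # True: the guess satisfies the formula
```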
What problems are in NP?
• Hamiltonian circuit existence
• Partition problem: Is it possible to partition a set of n integers into two disjoint subsets with the same
sum?
• Decision versions of TSP, knapsack problem, graph coloring, and many other combinatorial optimization
problems. (Few exceptions include: MST, shortest paths)
• All the problems in P can also be solved in this manner (but no guessing is necessary), so we have:
P ⊆ NP
• Big question: P = NP ?
Problem conversion
A decision problem D1 can be converted into a decision problem D2 if there is an algorithm which takes as
input an arbitrary instance I1 of D1 and delivers as output an instance I2 of D2 such that I2 is a positive
instance of D2 if and only if I1 is a positive instance of D1. If D1 can be converted into D2, and we have an
algorithm which solves D2, then we thereby have an algorithm which solves D1. To solve an instance I1 of
D1, we first use the conversion algorithm to generate an instance I2 of D2, and then use the algorithm for
solving D2 to determine whether or not I2 is a positive instance of D2. If it is, then we know that I1 is a
positive instance of D1, and if it is not, then we know that I1 is a negative instance of D1. Either way, we have
solved D1 for that instance. Moreover, in this case, we can say that the computational complexity of D1 is at
most the sum of the computational complexities of D2 and the conversion algorithm. If the conversion
algorithm has polynomial complexity, we say that D1 is at most polynomially harder than D2. It means that
the amount of computational work we have to do to solve D1, over and above whatever is required to solve
D2, is polynomial in the size of the problem instance.
In such a case the conversion algorithm provides us with a feasible way of solving D1, given that we know
how to solve D2.
Given a problem X, prove it is in NP-Complete.
1. Prove X is in NP.
2. Select problem Y that is known to be in NP-Complete.
3. Define a polynomial time reduction from Y to X.
4. Prove that given an instance of Y, Y has a solution iff X has a solution.
Cook’s theorem:
Cook’s Theorem implies that any NP problem is at most polynomially harder than SAT.
This means that if we find a way of solving SAT in polynomial time, we will then be in a position to solve any
NP problem in polynomial time. This would have huge practical repercussions, since many frequently
encountered problems which are so far believed to be intractable are NP. This special property of SAT is called
NP-completeness. A decision problem is NP-complete if it has the property that any NP problem can be
converted into it in polynomial time. SAT was the first NP complete problem to be recognized as such (the
theory of NP-completeness having come into existence with the proof of Cook’s Theorem), but it is by no means
the only one. There are now literally thousands of problems, cropping up in many different areas of computing,
which have been proved to be NP- complete.
In order to prove that an NP problem is NP-complete, all that is needed is to show that SAT can be converted
into it in polynomial time. The reason for this is that the sequential composition of two polynomial-time
algorithms is itself a polynomial-time algorithm, since the sum of two polynomials is itself a polynomial.
Suppose SAT can be converted to problem D in polynomial time. Now take any NP problem D′. We know we
can convert it into SAT in polynomial time, and we know we can convert SAT into D in polynomial time. The
result of these two conversions is a polynomial-time conversion of D′ into
D. Since D′ was an arbitrary NP problem, it follows that D is NP-complete.
NP Problem:
NP is the set of problems whose solutions are hard to find but easy to verify, and which are solved by a
non-deterministic machine in polynomial time.
NP-Hard Problem:
A Problem X is NP-Hard if there is an NP-Complete problem Y, such that Y is reducible to X in polynomial time. NP-
Hard problems are as hard as NP-Complete problems. NP-Hard Problem need not be in NP class.
NP-Complete Problem:
A problem X is NP-Complete if X is in NP and there is an NP-Complete problem Y such that Y is reducible
to X in polynomial time. NP-Complete problems are as hard as any problem in NP. A problem is NP-Complete
if it is a part of both the NP and NP-Hard classes. A non-deterministic Turing machine can solve an
NP-Complete problem in polynomial time.
Difference between NP-Hard and NP-Complete:
NP-Hard:
• NP-Hard problems (say X) can be solved if and only if there is an NP-Complete problem (say Y) that can be
reduced to X in polynomial time.
• To be NP-Hard, the problem does not have to be in NP.
• Example: Halting problem, Vertex cover problem, etc.
NP-Complete:
• NP-Complete problems can be solved by a non-deterministic algorithm/Turing machine in polynomial time.
• To be NP-Complete, the problem must be in both NP and NP-Hard.
• Example: determining whether a graph has a Hamiltonian cycle, determining whether a Boolean formula is
satisfiable, the circuit-satisfiability problem, etc.
Dynamic Programming:
1. Dynamic Programming is used to obtain the optimal solution.
2. In Dynamic Programming, we make a choice at each step, but the choice may depend on the solution to
sub-problems.
3. It is guaranteed that Dynamic Programming will generate an optimal solution using the Principle of
Optimality.
Greedy Method:
1. The Greedy Method is also used to get the optimal solution.
2. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the
sub-problems arising after the choice is made.
3. In the Greedy Method, there is no such guarantee of getting an optimal solution.