
UNIT - V

Branch and Bound: General method, applications - Travelling sales person


problem, 0/1 knapsack problem - LC Branch and Bound solution, FIFO Branch
and Bound solution.
NP-Hard and NP-Complete problems: Basic concepts, non-deterministic
algorithms, NP - Hard and NP-Complete classes, Cook’s theorem.

Branch and bound

What is Branch and bound?

Branch and bound is one of the techniques used for problem solving. It is similar to backtracking in that it also uses the state space tree, but it is used for solving optimization problems, specifically minimization problems. If we are given a maximization problem, we can still apply the branch and bound technique by first converting the problem into a minimization problem (for example, by negating the objective function).

Let's understand through an example.

Jobs = {j1, j2, j3, j4}

P = {10, 5, 8, 3}
d = {1, 2, 1, 2}

Here, P gives the profit and d gives the deadline of each job. We can write the solutions in two ways, which are given below:

Suppose we want to perform the jobs j1 and j4; then the solution can be represented in two ways:

The first way of representing the solutions is the subsets of jobs.

S1 = {j1, j4}

The second way of representing the solution is to record that the first job is done, the second and third jobs are not
done, and the fourth job is done.

S2 = {1, 0, 0, 1}

The solution s1 is the variable-size solution while the solution s2 is the fixed-size solution.

First, we will see the subset method, which uses the variable-size representation.

First method: (First in First out (FIFO)Search)

In this case, we first consider the first job, then the second job, then the third job, and finally the last job.

As we can observe in the above figure, a breadth first search is performed rather than a depth first search: we
move breadth-wise while exploring the solutions. In backtracking, we go depth-wise, whereas in branch and
bound, we go breadth-wise.

Now one level is completed. Once we take the first job, we can consider either j2, j3 or j4. Following the
route for the solution {j1, j4}, we will not consider jobs j2 and j3.
Now we will consider node 3. In this case we are doing job j2, so we can consider either job j3 or
j4. Here, we have discarded job j1.

Now we will expand the node 4. Since here we are doing job j3 so we will consider only job j4.
Now we will expand node 6, and here we will consider the jobs j3 and j4.

Now we will expand node 7 and here we will consider job j4.
Now we will expand node 9, and here we will consider job j4.
The last node, i.e., node 12 which is left to be expanded. Here, we consider job j4.

The above is the state space tree for the solution s1 = {j1, j4}
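The FIFO exploration described above can be sketched with a queue. This is an illustrative sketch (the state encoding as a pair of a job tuple and a next index is our own, not from the text):

```python
from collections import deque

jobs = ['j1', 'j2', 'j3', 'j4']

def fifo_subsets(jobs):
    """Explore all job subsets breadth-wise using a FIFO queue.

    A state is (chosen_subset, next_index); from a state we may add
    any later job, mirroring the variable-size tree in the text."""
    order = []                # order in which nodes are explored
    queue = deque([((), 0)])  # root: empty subset, any job may follow
    while queue:
        subset, start = queue.popleft()
        order.append(subset)
        for i in range(start, len(jobs)):
            queue.append((subset + (jobs[i],), i + 1))
    return order

explored = fifo_subsets(jobs)
print(explored[:5])  # [(), ('j1',), ('j2',), ('j3',), ('j4',)]
```

Because the queue is first in, first out, one whole level of the tree is finished before the next level starts, which is exactly the breadth-wise order described above.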

Second method: ( Last in First out (LIFO) Search)

We will see another way to solve the problem to achieve the solution s1.

First, we consider the node 1 shown as below:

Now, we will expand the node 1. After expansion, the state space tree would appear as:

On each expansion, the node will be pushed into the stack shown as below:

Now the expansion is based on the node that appears on the top of the stack. Since node 5 appears on the
top of the stack, we will expand node 5: we pop node 5 from the stack. Since node 5 corresponds to the
last job, j4, there is no further scope for expansion.
The next node that appears on the top of the stack is node 4. Pop out the node 4 and expand. On
expansion, job j4 will be considered and node 6 will be added into the stack shown as below:

The next node is 6 which is to be expanded. Pop out the node 6 and expand. Since the node 6 is in the
last job, i.e., j4 so there is no further scope of expansion.
The next node to be expanded is node 3. Since the node 3 works on the job j2 so node 3 will be
expanded to two nodes, i.e., 7 and 8 working on jobs 3 and 4 respectively. The nodes 7 and 8 will be
pushed into the stack shown as below:

The next node that appears on the top of the stack is node 8. Pop out the node 8 and expand. Since
the node 8 works on the job j4 so there is no further scope for the expansion.
The next node that appears on the top of the stack is node 7. Pop out the node 7 and expand. Since
the node 7 works on the job j3 so node 7 will be further expanded to node 9 that works on the job j4
as shown as below and the node 9 will be pushed into the stack.

The next node that appears on the top of the stack is node 9. Since the node 9 works on the job 4 so
there is no further scope for the expansion.

The next node that appears on the top of the stack is node 2. Since node 2 works on job j1, it can be
further expanded, into three nodes named 10, 11, 12 working on jobs j2, j3, and j4 respectively. These newly
created nodes will be pushed into the stack shown as below:
In the above method, we explored all the nodes using the stack that follows the LIFO principle.
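The stack-based walkthrough above can be sketched the same way as the FIFO version, replacing the queue with a stack (the state encoding is again our own illustrative choice):

```python
jobs = ['j1', 'j2', 'j3', 'j4']

def lifo_subsets(jobs):
    """Explore the same state space tree using a stack (LIFO).

    The node pushed last is expanded first, so the search runs
    depth-wise from the most recently generated branch, as in
    the walkthrough above where the j4 child is expanded first."""
    order = []
    stack = [((), 0)]  # root: empty subset, any job may follow
    while stack:
        subset, start = stack.pop()
        order.append(subset)
        for i in range(start, len(jobs)):
            stack.append((subset + (jobs[i],), i + 1))
    return order

explored = lifo_subsets(jobs)
print(explored[:4])  # [(), ('j4',), ('j3',), ('j3', 'j4')]
```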

Third method: (Least Cost (LC) Search)

There is one more method that can be used to find the solution and that method is Least cost branch
and bound. In this technique, nodes are explored based on the cost of the node. The cost of the node
can be defined using the problem and with the help of the given problem, we can define the cost
function. Once the cost function is defined, we can define the cost of the node.

Let's first consider the node 1 having cost infinity shown as below:

Now we will expand the node 1. The node 1 will be expanded into four nodes named as 2, 3, 4 and 5
shown as below:
Let's assume that cost of the nodes 2, 3, 4, and 5 are 25, 12, 19 and 30 respectively.

Since it is least cost branch and bound, we will explore the node which has the least cost.
In the above figure, we can observe that the node with a minimum cost is node 3. So, we will explore
the node 3 having cost 12.

Since the node 3 works on the job j2 so it will be expanded into two nodes named as 6 and 7 shown
as below:

The node 6 works on job j3 while the node 7 works on job j4. The cost of the node 6 is 8 and the cost
of the node 7 is 7. Now we have to select the node which is having the minimum cost. The node 7 has
the minimum cost so we will explore the node 7. Since the node 7 already works on the job j4 so
there is no further scope for the expansion.
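The least cost order can be sketched with a priority queue. The node costs below are the illustrative values assumed in the example above (infinity for the root, 25/12/19/30 for nodes 2-5, and 8 and 7 for nodes 6 and 7); the `children` map is a hypothetical fragment of the state space tree:

```python
import heapq

# Illustrative node costs and tree structure from the example above.
cost = {1: float('inf'), 2: 25, 3: 12, 4: 19, 5: 30, 6: 8, 7: 7}
children = {1: [2, 3, 4, 5], 3: [6, 7]}

def lc_search(root):
    """Least cost branch and bound: always expand the cheapest live node."""
    explored = []
    live = [(cost[root], root)]          # min-heap keyed on node cost
    while live:
        c, node = heapq.heappop(live)
        explored.append(node)
        for child in children.get(node, []):
            heapq.heappush(live, (cost[child], child))
    return explored

print(lc_search(1))  # [1, 3, 7, 6, 4, 2, 5]
```

As in the text, node 3 (cost 12) is explored right after the root, and then node 7 (cost 7) before node 6 (cost 8).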

Branch and bound is applied to problems such as:

 Travelling salesman problem (TSP)


 Quadratic assignment problem (QAP)
 Maximum satisfiability problem (MAX-SAT)
 Nearest neighbor search
 Flow shop scheduling
 Parameter estimation
 0/1 knapsack problem
Travelling Salesman Problem- using Branch and bound

You are given-


 A set of some cities
 Distance between every pair of cities

Travelling Salesman Problem states-


 A salesman has to visit every city exactly once.
 He has to come back to the city from where he starts his journey.
 What is the shortest possible route that the salesman must follow to complete his tour?

Example-

The following graph shows a set of cities and distance between every pair of cities-

If the salesman's starting city is A, then a TSP tour in the graph is-


A→B→D→C→A
Cost of the tour
= 10 + 25 + 30 + 15
= 80 units

PRACTICE PROBLEM BASED ON TRAVELLING SALESMAN


PROBLEM USING BRANCH AND BOUND APPROACH-
Problem-

Solve Travelling Salesman Problem using Branch and Bound Algorithm in the following graph-

Solution-

Step-01:
Write the initial cost matrix and reduce it-
Rules
 To reduce a matrix, perform the row reduction and column reduction of the matrix
separately.
 A row or a column is said to be reduced if it contains at least one entry ‘0’ in it.

Row Reduction-

Consider the rows of above matrix one by one.

If the row already contains an entry ‘0’, then-


 There is no need to reduce that row.

If the row does not contain an entry ‘0’, then-


 Reduce that particular row.
 Select the least value element from that row.
 Subtract that element from each element of that row.
 This will create an entry ‘0’ in that row, thus reducing that row.
Following this, we have-
 Reduce the elements of row-1 by 4.
 Reduce the elements of row-2 by 5.
 Reduce the elements of row-3 by 6.
 Reduce the elements of row-4 by 2.
Performing this, we obtain the following row-reduced matrix-
Column Reduction-

Consider the columns of above row-reduced matrix one by one.

If the column already contains an entry ‘0’, then-


 There is no need to reduce that column.

If the column does not contain an entry ‘0’, then-


 Reduce that particular column.
 Select the least value element from that column.
 Subtract that element from each element of that column.
 This will create an entry ‘0’ in that column, thus reducing that column.

Following this, we have-


 There is no need to reduce column-1.
 There is no need to reduce column-2.
 Reduce the elements of column-3 by 1.
 There is no need to reduce column-4.

Performing this, we obtain the following column-reduced matrix-

Finally, the initial distance matrix is completely reduced.


Now, we calculate the cost of node-1 by adding all the reduction elements.

Cost(1)
= Sum of all reduction elements
=4+5+6+2+1
= 18

Step-02:

 We consider all other vertices one by one.


 We select the best vertex where we can land upon to minimize the tour cost.

Choosing To Go To Vertex-B: Node-2 (Path A → B)

 From the reduced matrix of step-01, M[A,B] = 0


 Set row-A and column-B to ∞
 Set M[B,A] = ∞

Now, resulting cost matrix is-

Now,
 We reduce this matrix.
 Then, we find out the cost of node-02.

Row Reduction-

 We can not reduce row-1 as all its elements are ∞.


 Reduce all the elements of row-2 by 13.
 There is no need to reduce row-3.
 There is no need to reduce row-4.

Performing this, we obtain the following row-reduced matrix-

Column Reduction-

 Reduce the elements of column-1 by 5.


 We can not reduce column-2 as all its elements are ∞.
 There is no need to reduce column-3.
 There is no need to reduce column-4.

Performing this, we obtain the following column-reduced matrix-


Finally, the matrix is completely reduced.
Now, we calculate the cost of node-2.

Cost(2)
= Cost(1) + Sum of reduction elements + M[A,B]
= 18 + (13 + 5) + 0
= 36

Choosing To Go To Vertex-C: Node-3 (Path A → C)

 From the reduced matrix of step-01, M[A,C] = 7


 Set row-A and column-C to ∞
 Set M[C,A] = ∞

Now, resulting cost matrix is-


Now,
 We reduce this matrix.
 Then, we find out the cost of node-03.

Row Reduction-

 We can not reduce row-1 as all its elements are ∞.


 There is no need to reduce row-2.
 There is no need to reduce row-3.
 There is no need to reduce row-4.

Thus, the matrix is already row-reduced.

Column Reduction-

 There is no need to reduce column-1.


 There is no need to reduce column-2.
 We can not reduce column-3 as all its elements are ∞.
 There is no need to reduce column-4.

Thus, the matrix is already column reduced.


Finally, the matrix is completely reduced.
Now, we calculate the cost of node-3.
Cost(3)
= Cost(1) + Sum of reduction elements + M[A,C]
= 18 + 0 + 7
= 25

Choosing To Go To Vertex-D: Node-4 (Path A → D)

 From the reduced matrix of step-01, M[A,D] = 3


 Set row-A and column-D to ∞
 Set M[D,A] = ∞

Now, resulting cost matrix is-

Now,
 We reduce this matrix.
 Then, we find out the cost of node-04.

Row Reduction-

 We can not reduce row-1 as all its elements are ∞.


 There is no need to reduce row-2.
 Reduce all the elements of row-3 by 5.
 There is no need to reduce row-4.

Performing this, we obtain the following row-reduced matrix-

Column Reduction-

 There is no need to reduce column-1.


 There is no need to reduce column-2.
 There is no need to reduce column-3.
 We can not reduce column-4 as all its elements are ∞.

Thus, the matrix is already column-reduced.


Finally, the matrix is completely reduced.
Now, we calculate the cost of node-4.

Cost(4)
= Cost(1) + Sum of reduction elements + M[A,D]
= 18 + 5 + 3
= 26

Thus, we have-
 Cost(2) = 36 (for Path A → B)
 Cost(3) = 25 (for Path A → C)
 Cost(4) = 26 (for Path A → D)

We choose the node with the lowest cost.


Since the cost of node-3 is the lowest, we prefer to visit node-3.
Thus, we choose node-3 i.e. path A → C.
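Steps 01 and 02 above can be sketched in code. The figure with the original distance matrix is not reproduced in this text, so the matrix below is an assumed one that is consistent with every reduction stated (row minima 4, 5, 6, 2; column-3 minimum 1; M[A,B] = 0, M[A,C] = 7, M[A,D] = 3 after reduction):

```python
INF = float('inf')

# Assumed distance matrix consistent with the reductions in the text;
# rows and columns are in the order A, B, C, D.
M = [[INF, 4, 12, 7], [5, INF, INF, 18], [11, INF, INF, 6], [10, 2, 3, INF]]

def reduce_matrix(m):
    """Row-reduce then column-reduce; return (reduced matrix, total reduction)."""
    m = [row[:] for row in m]
    total = 0
    for i in range(len(m)):                     # row reduction
        least = min(m[i])
        if 0 < least < INF:
            m[i] = [x - least if x < INF else INF for x in m[i]]
            total += least
    for j in range(len(m)):                     # column reduction
        least = min(m[i][j] for i in range(len(m)))
        if 0 < least < INF:
            for i in range(len(m)):
                if m[i][j] < INF:
                    m[i][j] -= least
            total += least
    return m, total

def child_cost(parent, parent_cost, frm, to, start=0):
    """Cost of extending the tour frm -> to from a reduced parent matrix."""
    m = [row[:] for row in parent]
    edge = m[frm][to]
    m[frm] = [INF] * len(m)        # block the departure row
    for i in range(len(m)):
        m[i][to] = INF             # block the arrival column
    m[to][start] = INF             # forbid the premature return edge
    _, r = reduce_matrix(m)
    return parent_cost + r + edge

step1, cost1 = reduce_matrix(M)                  # Cost(1) = 18
costs = {v: child_cost(step1, cost1, 0, v) for v in (1, 2, 3)}
print(cost1, costs)  # 18 {1: 36, 2: 25, 3: 26}
```

Running it reproduces Cost(1) = 18 and the child costs 36, 25 and 26 for paths A → B, A → C and A → D.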

Step-03:

We explore the vertices B and D from node-3.


We now start from the cost matrix at node-3 which is-

Cost(3) = 25

Choosing To Go To Vertex-B: Node-5 (Path A → C → B)

 From the reduced matrix of step-02, M[C,B] = ∞


 Set row-C and column-B to ∞
 Set M[B,A] = ∞

Now, resulting cost matrix is-


Now,
 We reduce this matrix.
 Then, we find out the cost of node-5.

Row Reduction-

 We can not reduce row-1 as all its elements are ∞.


 Reduce all the elements of row-2 by 13.
 We can not reduce row-3 as all its elements are ∞.
 Reduce all the elements of row-4 by 8.

Performing this, we obtain the following row-reduced matrix-


Column Reduction-

 There is no need to reduce column-1.


 We can not reduce column-2 as all its elements are ∞.
 We can not reduce column-3 as all its elements are ∞.
 There is no need to reduce column-4.

Thus, the matrix is already column reduced.


Finally, the matrix is completely reduced.
Now, we calculate the cost of node-5.

Cost(5)
= cost(3) + Sum of reduction elements + M[C,B]

= 25 + (13 + 8) + ∞

=∞

Choosing To Go To Vertex-D: Node-6 (Path A → C → D)

 From the reduced matrix of step-02, M[C,D] = ∞


 Set row-C and column-D to ∞
 Set M[D,A] = ∞

Now, resulting cost matrix is-


Now,
 We reduce this matrix.
 Then, we find out the cost of node-6.

Row Reduction-

 We can not reduce row-1 as all its elements are ∞.


 There is no need to reduce row-2.
 We can not reduce row-3 as all its elements are ∞.
 We can not reduce row-4 as all its elements are ∞.

Thus, the matrix is already row reduced.

Column Reduction-

 There is no need to reduce column-1.


 We can not reduce column-2 as all its elements are ∞.
 We can not reduce column-3 as all its elements are ∞.
 We can not reduce column-4 as all its elements are ∞.

Thus, the matrix is already column reduced.


Finally, the matrix is completely reduced.
Now, we calculate the cost of node-6.
Cost(6)
= cost(3) + Sum of reduction elements + M[C,D]
= 25 + 0 + 0
= 25

Thus, we have-

 Cost(5) = ∞ (for Path A → C → B)


 Cost(6) = 25 (for Path A → C → D)

We choose the node with the lowest cost.


Since the cost of node-6 is the lowest, we prefer to visit node-6.
Thus, we choose node-6 i.e. path A → C → D.

Step-04:

We explore vertex B from node-6.


We start with the cost matrix at node-6 which is-

Cost(6) = 25
Choosing To Go To Vertex-B: Node-7 (Path A → C → D → B)

 From the reduced matrix of step-03, M[D,B] = 0


 Set row-D and column-B to ∞
 Set M[B,A] = ∞

Now, resulting cost matrix is-

Now,
 We reduce this matrix.
 Then, we find out the cost of node-7.

Row Reduction-

 We can not reduce row-1 as all its elements are ∞.


 We can not reduce row-2 as all its elements are ∞.
 We can not reduce row-3 as all its elements are ∞.
 We can not reduce row-4 as all its elements are ∞.

Column Reduction-

 We can not reduce column-1 as all its elements are ∞.


 We can not reduce column-2 as all its elements are ∞.
 We can not reduce column-3 as all its elements are ∞.
 We can not reduce column-4 as all its elements are ∞.

Thus, the matrix is already column reduced.


Finally, the matrix is completely reduced.

All the entries have become ∞.


Now, we calculate the cost of node-7.

Cost(7)
= cost(6) + Sum of reduction elements + M[D,B]

= 25 + 0 + 0

= 25

Thus,
 Optimal path is: A → C → D → B → A
 Cost of Optimal path = 25 units

0/1 Knapsack Problem


Consider the instance: M = 15, n = 4, (P1, P2, P3, P4) = (10, 10, 12, 18) and (w1, w2, w3, w4) = ( 2, 4, 6,
9).
The 0/1 knapsack problem can be solved by using the branch and bound technique. In this problem we will
calculate a lower bound and an upper bound for each node.
Place the first item in the knapsack. The remaining capacity of the knapsack is 15 – 2 = 13. Place the next item
w2 in the knapsack; the remaining capacity is 13 – 4 = 9. Place the next item w3 in the knapsack; the remaining
capacity is 9 – 6 = 3. No fractions are allowed in the calculation of the upper bound, so w4 cannot be placed in
the knapsack.
Profit = P1 + P2 + P3 = 10 + 10 + 12 = 32
So, Upper bound = 32
To calculate the lower bound we can place a fraction of w4 in the knapsack, since fractions are allowed in the
calculation of the lower bound:
Lower bound = 10 + 10 + 12 + (3/9) × 18 = 38

The knapsack problem is a maximization problem, but this branch and bound technique is applicable only to
minimization problems. In order to convert the maximization problem into a minimization problem, we take the
negative sign for the upper bound and lower bound.
Therefore, Upper bound (U) = -32
Lower bound (L) = -38
We choose the path which has the minimum difference between the upper bound and lower bound.
If the differences are equal, then we choose the path by comparing upper bounds, and we discard the node with
the maximum upper bound.

Now we will calculate upper bound and lower bound for nodes 2, 3.
For node 2, x1= 1, means we should place first item in the knapsack.
U = 10 + 10 + 12 = 32, make it as -32

For node 3, x1 = 0, means we should not place first item in the knapsack.
U = 10 + 12 = 22, make it as -22

Next, we will calculate difference of upper bound and lower bound for nodes 2, 3
For node 2, U – L = -32 + 38 = 6
For node 3, U – L = -22 + 32 = 10
Choose node 2, since it has minimum difference value of 6.

Now we will calculate lower bound and upper bound of node 4 and 5. Calculate
difference of lower and upper bound of nodes 4 and 5.
For node 4, U – L = -32 + 38 = 6
For node 5, U – L = -22 + 36 = 14
Choose node 4, since it has minimum difference value of 6.
Now we will calculate the lower bound and upper bound of nodes 6 and 7, and the difference of the lower and
upper bounds of nodes 6 and 7.
For node 6, U – L = -32 + 38 = 6
For node 7, U – L = -38 + 38 = 0
Choose node 7, since it has the minimum difference value of 0.

Now we will calculate the lower bound and upper bound of nodes 8 and 9, and the difference of the lower and
upper bounds of nodes 8 and 9.
For node 8, U – L = -38 + 38 = 0
For node 9, U – L = -20 + 20 = 0
Here the difference is the same, so we compare the upper bounds of nodes 8 and 9 and discard the node which
has the maximum upper bound. Choose node 8 and discard node 9, since node 9 has the maximum upper bound.
Consider the path from 1 → 2 → 4 → 7 → 8
X1 = 1
X2 = 1
X3 = 0
X4 = 1
The solution for 0/1 Knapsack problem is (x1, x2, x3, x4) = (1, 1, 0, 1)
Maximum profit is:
ΣPi xi = 10 x 1 + 10 x 1 + 12 x 0 + 18 x 1
= 10 + 10 + 18 = 38.
A portion of the state space tree using FIFO Branch and Bound for the above problem is as follows:
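The bound computation above can be sketched as follows. The `fixed` argument is our own addition (not in the text): it forces the first few x-values so the same routine reproduces the bounds of the root node and of nodes 2 and 3.

```python
def bounds(profits, weights, capacity, fixed=()):
    """Negated upper and lower bounds for an LC branch and bound node.

    fixed[i] = 1/0 forces item i in/out of the knapsack; remaining
    items are packed greedily in the given order. The upper bound
    takes whole items only; the lower bound may take a fraction of
    the first item that does not fit."""
    u = lb = 0.0
    room = capacity
    for i, (p, w) in enumerate(zip(profits, weights)):
        if i < len(fixed) and fixed[i] == 0:
            continue                   # item excluded at this node
        if w <= room:
            u += p
            lb += p
            room -= w
        else:
            lb += p * room / w         # fraction allowed in lower bound
            break
    return -u, -lb                     # negated: maximization -> minimization

P, W, M = (10, 10, 12, 18), (2, 4, 6, 9), 15
print(bounds(P, W, M))         # root node: (-32.0, -38.0)
print(bounds(P, W, M, (1,)))   # node 2, x1 = 1: (-32.0, -38.0)
print(bounds(P, W, M, (0,)))   # node 3, x1 = 0: (-22.0, -32.0)
```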
NP-Hard and NP-Complete problems
Deterministic and non-deterministic algorithms
Deterministic: An algorithm in which every operation is uniquely defined is called a deterministic algorithm.
Non-Deterministic: An algorithm in which the operations are not uniquely defined but are limited to a specific set of
possibilities for every operation is called a non-deterministic algorithm.
The non-deterministic algorithms use the following functions:
1. Choice: Arbitrarily chooses one of the elements from a given set.
2. Failure: Indicates an unsuccessful completion
3. Success: Indicates a successful completion
A non-deterministic algorithm terminates unsuccessfully if and only if there exists no set of choices leading to a success
signal. Whenever, there is a set of choices that leads to a successful completion, then one such set of choices is selected
and the algorithm terminates successfully.
If successful completion is not possible, then the complexity is O(1). In case of a successful completion, the
time required is the minimum number of steps needed to reach a successful completion, which is O(n) where n is
the number of inputs.
The problems that are solved in polynomial time are called tractable problems, and the problems that require
super-polynomial time are called intractable problems. All deterministic polynomial time algorithms are
tractable, while the problems solvable only by non-deterministic polynomial time algorithms are intractable.
Satisfiability Problem:
The satisfiability problem concerns Boolean formulas that can be constructed using the following literals and operations:
1. A literal is either a variable or the negation of a variable.
2. The literals are connected with the operators ∨, ∧, ⇒, ⇔.
3. Parentheses.
The satisfiability problem is to determine whether a Boolean formula is true for some assignment of truth
values to the variables. In general, formulas are expressed in Conjunctive Normal Form (CNF).
A Boolean formula is in conjunctive normal form iff it is a conjunction of clauses, each of which is a
disjunction of literals, for example:
(x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3)
A Boolean formula is in 3CNF if each clause has exactly 3 distinct literals.
Example:
A non-deterministic algorithm terminates successfully iff a given formula
E(x1, x2, x3) is satisfiable.
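The key point is that checking a proposed truth assignment is cheap: the following sketch (with an illustrative formula of our own choosing) verifies an assignment in polynomial time, while a deterministic machine must fall back on trying assignments where a non-deterministic Choice step would simply guess one.

```python
from itertools import product

# CNF as a list of clauses; a literal is (variable, negated?).
# Illustrative formula: (x1 or not x2) and (not x1 or x3).
cnf = [[(1, False), (2, True)], [(1, True), (3, False)]]

def satisfies(cnf, assignment):
    """Polynomial-time check: does the assignment make every clause true?"""
    return all(
        any(assignment[v] != neg for v, neg in clause)  # literal is true
        for clause in cnf
    )

# Deterministic search over all assignments; a non-deterministic
# algorithm would Choice a satisfying assignment in one step.
sat = any(
    satisfies(cnf, dict(zip((1, 2, 3), bits)))
    for bits in product([True, False], repeat=3)
)
print(sat)  # True: e.g. x1=True, x2=False, x3=True works
```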
Reducibility:
A problem Q1 can be reduced to Q2 if any instance of Q1 can be easily rephrased as an instance of Q2. If the
solution to the problem Q2 provides a solution to the problem Q1, then these are said to be reducible
problems.
Let L1 and L2 be two problems. L1 reduces to L2 iff there is a way to solve L1 by a deterministic
polynomial time algorithm that uses, as a subroutine, a deterministic algorithm solving L2 in polynomial time;
this is denoted by L1 ∝ L2.
If we have a polynomial time algorithm for L2, then we can solve L1 in polynomial time.
Two problems L1 and L2 are said to be polynomially equivalent iff L1 ∝ L2 and L2 ∝ L1.
Example: Let P1 be the problem of selection and P2 be the problem of sorting. Let the input have n numbers.
If the numbers are sorted in array A[ ] the ith smallest element of the input can be obtained as A[i]. Thus P1
reduces to P2 in O(1) time.
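The reduction in the example can be written out directly; `ith_smallest` is a hypothetical name for the selection solver.

```python
def ith_smallest(values, i):
    """Solve selection (P1) by reducing it to sorting (P2):
    sort once, then the answer is a constant-time index lookup."""
    a = sorted(values)      # the call to the P2 (sorting) solver
    return a[i - 1]         # A[i] in 1-based terms, O(1) after sorting

print(ith_smallest([7, 3, 9, 1, 5], 2))  # 3, the 2nd smallest
```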
Decision Problem:
Any problem for which the answer is either yes or no is called decision problem. The algorithm for decision
problem is called decision algorithm.
Example: Max clique problem, sum of subsets problem.
Optimization Problem: Any problem that involves the identification of an optimal value (maximum or
minimum) is called optimization problem.
Example: Knapsack problem, travelling salesperson problem.
In decision problem, the output statement is implicit and no explicit statements are permitted.
The output from a decision problem is uniquely defined by the input parameters and algorithm
specification.
Many optimization problems can be reduced by decision problems with the property that a decision
problem can be solved in polynomial time iff the corresponding optimization problem can be solved in
polynomial time. If the decision problem cannot be solved in polynomial time then the optimization problem
cannot be solved in polynomial time.
Class P:
P: the class of decision problems that are solvable in O(p(n)) time, where p(n) is a polynomial of problem’s
input size n
Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing
Class NP
NP (nondeterministic polynomial): class of decision problems whose proposed solutions can be verified in
polynomial time = solvable by a nondeterministic polynomial algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem
• checks whether this solution is correct in polynomial time
By definition, it solves the problem if it’s capable of generating and verifying a solution on one of its tries
Example: CNF satisfiability
Problem: Is a boolean expression in its conjunctive normal form (CNF) satisfiable, i.e., are there values of its
variables that make it true? This problem is in NP.
Nondeterministic algorithm:
• Guess truth assignment
• Substitute the values into the CNF formula to see if it evaluates to true
What problems are in NP?
• Hamiltonian circuit existence
• Partition problem: Is it possible to partition a set of n integers into two disjoint subsets with the same
sum?
• Decision versions of TSP, knapsack problem, graph coloring, and many other combinatorial optimization
problems. (Few exceptions include: MST, shortest paths)
• All the problems in P can also be solved in this manner (but no guessing is necessary), so we have:
P ⊆ NP
• Big question: P = NP ?

NP HARD AND NP COMPLETE


Polynomial time algorithms
Problems whose solution times are bounded by polynomials of small degree are said to have polynomial time
algorithms.
Example: Linear search, quick sort, all pairs shortest path etc.
Non-polynomial time algorithms
Problems whose solution times are bounded by functions that are not polynomial are said to have non-polynomial
time algorithms.
Examples: Travelling salesman problem, 0/1 knapsack problem etc.
It is not known how to develop polynomial time algorithms for such problems, because the computing times of
non-polynomial algorithms grow faster than any polynomial. A problem that can be solved in polynomial time in
one model of computation can also be solved in polynomial time in another.
NP-Hard and NP-Complete Problem:
Let P denote the set of all decision problems solvable by deterministic algorithm inpolynomial time. NP
denotes set of decision problems solvable by nondeterministic algorithms in polynomial time. Since,
deterministic algorithms are a special case of nondeterministic algorithms, P ⊆ NP. The nondeterministic
polynomial time problems can be classified into two classes. They are
1. NP Hard and
2. NP Complete
NP-Hard: A problem L is NP-Hard iff satisfiability reduces to L; i.e., every nondeterministic polynomial time
problem can be reduced to L.
Example: Halting Problem, Flow shop scheduling problem
NP-Complete: A problem L is NP-Complete iff L is NP-Hard and L belongs to NP (nondeterministic
polynomial).
A problem that is NP-Complete has the property that it can be solved in polynomial time iff all other NP-
Complete problems can also be solved in polynomial time. (NP=P)
If an NP-hard problem can be solved in polynomial time, then all NP- complete problems can be solved in
polynomial time. All NP-Complete problems are NP-hard, but some NP hard problems are not known to be
NP- Complete.
Normally the decision problems are NP-complete but the optimization problems are NP Hard.
Relationship between P,NP,NP-hard, NP-Complete
Let P and NP be the sets of all decision problems solvable in polynomial time by deterministic and
non-deterministic algorithms respectively, and let NP-Hard and NP-Complete be as defined above. Then the
relationship between P, NP, NP-Hard and NP-Complete can be expressed using a Venn diagram as:
However, if problem L1 is a decision problem and L2 is an optimization problem, then it is possible that
L1 ∝ L2.
Example: Knapsack decision problem can be reduced to knapsack optimization problem.
There are some NP-hard problems that are not NP-Complete.

Problem conversion
A decision problem D1 can be converted into a decision problem D2 if there is an algorithm which takes as
input an arbitrary instance I1 of D1 and delivers as output an instance I2 of D2 such that I2 is a positive
instance of D2 if and only if I1 is a positive instance of D1. If D1 can be converted into D2, and we have an
algorithm which solves D2, then we thereby have an algorithm which solves D1. To solve an instance I of D1, we
first use the conversion algorithm to generate an instance I2 of D2, and then use the algorithm for solving D2
to determine whether or not I2 is a positive instance of D2. If it is, then we know that I is a positive
instance of D1, and if it is not, then we know that I is a negative instance of D1. Either way, we have solved
D1 for that instance.
Moreover, in this case, the computational complexity of D1 is at most the sum of the computational complexities
of D2 and the conversion algorithm. If the conversion algorithm has polynomial complexity, we say that D1 is at
most polynomially harder than D2: the amount of computational work we have to do to solve D1, over and above
whatever is required to solve D2, is polynomial in the size of the problem instance.
In such a case the conversion algorithm provides us with a feasible way of solving D1, given that we know how
to solve D2.
To prove that a given problem X is NP-Complete:
1. Prove X is in NP.
2. Select problem Y that is known to be in NP-Complete.
3. Define a polynomial time reduction from Y to X.
4. Prove that given an instance of Y, Y has a solution iff X has a solution.
Cook’s theorem:
Cook’s Theorem implies that any NP problem is at most polynomially harder than SAT.
This means that if we find a way of solving SAT in polynomial time, we will then be in a position to solve any
NP problem in polynomial time. This would have huge practical repercussions, since many frequently
encountered problems which are so far believed to be intractable are NP. This special property of SAT is called
NP-completeness. A decision problem is NP-complete if it has the property that any NP problem can be
converted into it in polynomial time. SAT was the first NP complete problem to be recognized as such (the
theory of NP-completeness having come into existence with the proof of Cook’s Theorem), but it is by no means
the only one. There are now literally thousands of problems, cropping up in many different areas of computing,
which have been proved to be NP- complete.
In order to prove that an NP problem is NP-complete, all that is needed is to show that SAT can be converted
into it in polynomial time. The reason for this is that the sequential composition of two polynomial-time
algorithms is itself a polynomial-time algorithm, since the sum of two polynomials is itself a polynomial.
Suppose SAT can be converted to problem D in polynomial time. Now take any NP problem D0. We know we
can convert it into SAT in polynomial time, and we know we can convert SAT into D in polynomial time. The
result of these two conversions is a polynomial-time conversion of D0 into D. Since D0 was an arbitrary NP
problem, it follows that D is NP-complete.

NP Problem:
NP is the set of problems whose solutions are hard to find but easy to verify, and which are solved by a
non-deterministic machine in polynomial time.
NP-Hard Problem:
A Problem X is NP-Hard if there is an NP-Complete problem Y, such that Y is reducible to X in polynomial time. NP-
Hard problems are as hard as NP-Complete problems. NP-Hard Problem need not be in NP class.
NP-Complete Problem:
A problem X is NP-Complete if there is an NP problem Y, such that Y is reducible to X in polynomial time. NP-
Complete problems are as hard as NP problems. A problem is NP-Complete if it is a part of both NP and NP-Hard
Problem. A non-deterministic Turing machine can solve NP-Complete problem in polynomial time.
Difference between NP-Hard and NP-Complete:
NP-Hard:
 NP-Hard problems (say X) can be solved if and only if there is an NP-Complete problem (say Y) that can be
reduced to X in polynomial time.
 To solve such a problem, it does not have to be in NP.
 It does not have to be a decision problem.
 Example: Halting problem, Vertex cover problem, etc.

NP-Complete:
 NP-Complete problems can be solved by a non-deterministic algorithm/Turing machine in polynomial time.
 To solve such a problem, it must be both in NP and NP-Hard.
 It is exclusively a decision problem.
 Example: Determining whether a graph has a Hamiltonian cycle, determining whether a Boolean formula is
satisfiable, the circuit-satisfiability problem, etc.

Differentiate between Dynamic Programming and Greedy Method

Dynamic Programming:
1. Dynamic Programming is used to obtain the optimal solution.
2. In Dynamic Programming, we make a choice at each step, but the choice may depend on the solutions to
sub-problems.
3. Less efficient as compared to the greedy approach.
4. Example: 0/1 Knapsack.
5. It is guaranteed that Dynamic Programming will generate an optimal solution using the Principle of
Optimality.

Greedy Method:
1. The Greedy Method is also used to get an optimal solution.
2. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the sub-problems
arising after the choice is made.
3. More efficient as compared to dynamic programming.
4. Example: Fractional Knapsack.
5. In the Greedy Method, there is no such guarantee of getting an optimal solution.
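Rows 4 and 5 of the comparison can be made concrete. On the illustrative instance below (our own choice, not from the text), the ratio-greedy strategy is optimal for the fractional knapsack, while the 0/1 optimum needs dynamic programming:

```python
def fractional_greedy(profits, weights, cap):
    """Greedy by profit/weight ratio: optimal for the fractional problem."""
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        take = min(w, cap)      # take the whole item, or a fraction
        total += p * take / w
        cap -= take
    return total

def knapsack_01(profits, weights, cap):
    """Dynamic programming over capacities: optimal for 0/1 knapsack."""
    best = [0] * (cap + 1)
    for p, w in zip(profits, weights):
        for c in range(cap, w - 1, -1):   # descending: each item used once
            best[c] = max(best[c], best[c - w] + p)
    return best[cap]

P, W, C = (60, 100, 120), (10, 20, 30), 50
print(knapsack_01(P, W, C))        # 220 (items 2 and 3)
print(fractional_greedy(P, W, C))  # 240.0 (with 2/3 of the last item)
```

Applying the ratio-greedy choice to the 0/1 version of this instance would pick items 1 and 2 for a profit of only 160, which is why greedy carries no optimality guarantee there.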
