
DAA UNIT_5 Material

The document discusses NP-hard and NP-complete problems, highlighting the distinction between optimization and decision problems, with examples such as the traveling salesman problem. It explains the classes P and NP, including Cook's theorem and the implications of P = NP. Additionally, it covers approximation algorithms for various problems, including the knapsack problem and bin packing, detailing their accuracy and efficiency.


For All jntu material visit us at www.jntumaterials.in and www.jntu3u.in

UNIT-8
NP-HARD AND NP-COMPLETE PROBLEMS
Problem Types: Optimization and Decision
• Optimization problem: find a solution that maximizes or minimizes some
objective function
• Decision problem: answer yes/no to a question
Many problems have decision and optimization versions.
E.g.: traveling salesman problem
• optimization: find Hamiltonian cycle of minimum length
• decision: find Hamiltonian cycle of length ≤ m
Decision problems are more convenient for formal investigation of their
complexity.

Class P:
P: the class of decision problems that are solvable in O(p(n)) time, where p(n) is a
polynomial of the problem's input size n
Examples:
• searching
• element uniqueness
• graph connectivity
• graph acyclicity
• primality testing

Class NP
NP (nondeterministic polynomial): the class of decision problems whose proposed
solutions can be verified in polynomial time, i.e., problems solvable by a
nondeterministic polynomial algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
• generates a random string purported to solve the problem
• checks whether this solution is correct in polynomial time
By definition, it solves the problem if it’s capable of generating and verifying a
solution on one of its tries


Example: CNF satisfiability


Problem: Is a boolean expression in its conjunctive normal form (CNF) satisfiable,
i.e., are there values of its variables that make it true?
This problem is in NP. Nondeterministic algorithm:
• Guess truth assignment
• Substitute the values into the CNF formula to see if it evaluates to true
Example: (A | ¬B | ¬C) & (A | B) & (¬B | ¬D | E) & (¬D | ¬E)
Truth assignments:
A B C D E
0 0 0 0 0
. . .
1 1 1 1 1
Checking phase: O(n)
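The two-stage procedure can be simulated deterministically by trying all 2^n truth assignments; the checking phase for each assignment runs in time linear in the formula size. A Python sketch of this brute-force check for the example formula above (the clause encoding and function names are illustrative, not from the notes):

```python
from itertools import product

# CNF formula as a list of clauses; each literal is (variable, is_negated).
# This encodes (A | ~B | ~C) & (A | B) & (~B | ~D | E) & (~D | ~E).
clauses = [
    [("A", False), ("B", True), ("C", True)],
    [("A", False), ("B", False)],
    [("B", True), ("D", True), ("E", False)],
    [("D", True), ("E", True)],
]
variables = ["A", "B", "C", "D", "E"]

def is_satisfied(assignment, clauses):
    """Checking phase: linear in the formula size for a fixed assignment."""
    return all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses)

def brute_force_sat(variables, clauses):
    # Deterministic stand-in for the 'guess' stage: try all 2^n assignments.
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if is_satisfied(assignment, clauses):
            return assignment
    return None

print(brute_force_sat(variables, clauses))
```

The exponential cost lives entirely in the guessing stage; replacing the enumeration with a nondeterministic guess is what makes the algorithm "polynomial".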

What problems are in NP?


• Hamiltonian circuit existence
• Partition problem: Is it possible to partition a set of n integers into two
disjoint subsets with the same sum?
• Decision versions of TSP, knapsack problem, graph coloring, and many
other combinatorial optimization problems. (Few exceptions include: MST,
shortest paths)
• All the problems in P can also be solved in this manner (but no guessing is
necessary), so we have:
P ⊆ NP
• Big question: P = NP ?
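As an illustration, the partition problem listed above has an obvious exponential-time decision procedure whose verification step is polynomial; a sketch (the function name is my own):

```python
from itertools import combinations

def has_equal_partition(nums):
    """Exponential-time decision procedure: guess a subset, then verify
    its sum in polynomial time (the verification is the NP part)."""
    total = sum(nums)
    if total % 2:
        return False                 # an odd total can never split evenly
    target = total // 2
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return True          # certificate found and verified
    return False

print(has_equal_partition([1, 5, 6, 2]))  # True: {1, 6} and {5, 2}
print(has_equal_partition([1, 2, 4]))     # False
```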

NP-Complete Problems
A decision problem D is NP-complete if it’s as hard as any
problem in NP, i.e.,
• D is in NP
• every problem in NP is polynomial-time reducible to D


(Figure: the class of NP problems, every one of which reduces to an NP-complete problem)

Cook’s theorem (1971): CNF-sat is NP-complete


Other NP-complete problems are obtained through polynomial-time reductions from a known NP-complete problem.

(Figure: a candidate for NP-completeness is proved NP-complete by a polynomial-time reduction from a known NP-complete problem)

Examples: TSP, knapsack, partition, graph coloring, and hundreds of other problems of combinatorial nature

P = NP ? Dilemma Revisited
• P = NP would imply that every problem in NP, including all NP-complete
problems, could be solved in polynomial time
• If a polynomial-time algorithm for just one NP-complete problem is
discovered, then every problem in NP can be solved in polynomial time, i.e.,
P = NP



• Most but not all researchers believe that P ≠ NP, i.e., that P is a proper subset of NP

APPROXIMATION ALGORITHMS

Approximation Approach
Apply a fast (i.e., a polynomial-time) approximation algorithm to get a solution
that is not necessarily optimal but hopefully close to it

Accuracy measures:
accuracy ratio of an approximate solution sa:
r(sa) = f(sa) / f(s*) for minimization problems
r(sa) = f(s*) / f(sa) for maximization problems
where f(sa) and f(s*) are the values of the objective function f for the approximate
solution sa and an actual optimal solution s*
performance ratio of the algorithm A:
RA = the lowest upper bound of r(sa) over all instances
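A quick sketch of how the two ratios are oriented, so that r(sa) ≥ 1 in both cases with r = 1 meaning the solution is optimal (the helper name is my own):

```python
def accuracy_ratio(f_approx, f_opt, minimization=True):
    # Oriented so that r >= 1; the closer to 1, the better the approximation.
    return f_approx / f_opt if minimization else f_opt / f_approx

# A tour of length 12 against an optimal tour of length 10:
print(accuracy_ratio(12, 10))                       # 1.2
# A knapsack solution worth 8 against an optimum worth 10:
print(accuracy_ratio(8, 10, minimization=False))    # 1.25
```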

Nearest-Neighbor Algorithm for TSP


Starting at some city, always go to the nearest unvisited city, and, after visiting all
the cities, return to the starting one
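A direct Python sketch of the heuristic; the 4-city distance matrix in the usage lines is a made-up instance to show the mechanics:

```python
def nearest_neighbor_tour(dist, start=0):
    """dist: matrix of pairwise distances; returns (tour, length)."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # Greedy step: go to the nearest city not yet visited.
        nxt = min(unvisited, key=lambda city: dist[last][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # return to the starting city
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return tour, length

# A made-up 4-city instance:
dist = [[0, 2, 9, 6], [2, 0, 4, 3], [9, 4, 0, 8], [6, 3, 8, 0]]
print(nearest_neighbor_tour(dist))  # ([0, 1, 3, 2, 0], 22)
```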


Note: Nearest-neighbor tour may depend on the starting city


Accuracy: RA = ∞ (unbounded above): make the length of AD
arbitrarily large in the above example

Multifragment-Heuristic Algorithm
Stage 1: Sort the edges in nondecreasing order of weights.
Initialize the set of tour edges to be constructed to
the empty set
Stage 2: Add the next edge on the sorted list to the tour, skipping
those whose addition would create a vertex of
degree 3 or a cycle of length less than n. Repeat
this step until a tour of length n is obtained
Note: RA = ∞, but this algorithm tends to produce better tours
than the nearest-neighbor algorithm
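A possible implementation of the two stages: degree counts enforce the no-degree-3 rule, and a union-find structure detects cycles closed before all n edges are in place (the edge-list format is my own):

```python
def multifragment_tour(edges, n):
    """edges: list of (weight, u, v) tuples; returns the tour's edge list."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    degree = [0] * n
    tour = []
    for w, u, v in sorted(edges):          # Stage 1: nondecreasing weights
        if degree[u] == 2 or degree[v] == 2:
            continue                        # would create a vertex of degree 3
        if find(u) == find(v) and len(tour) < n - 1:
            continue                        # would close a cycle of length < n
        tour.append((u, v))
        degree[u] += 1
        degree[v] += 1
        parent[find(u)] = find(v)
        if len(tour) == n:                  # Stage 2 done: a full tour
            break
    return tour

edges = [(2, 0, 1), (9, 0, 2), (6, 0, 3), (4, 1, 2), (3, 1, 3), (8, 2, 3)]
print(multifragment_tour(edges, 4))  # [(0, 1), (1, 3), (2, 3), (0, 2)]
```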

Twice-Around-the-Tree Algorithm
Stage 1: Construct a minimum spanning tree of the graph
(e.g., by Prim’s or Kruskal’s algorithm)
Stage 2: Starting at an arbitrary vertex, create a path that goes
twice around the tree and returns to the same vertex
Stage 3: Create a tour from the circuit constructed in Stage 2 by
making shortcuts to avoid visiting intermediate vertices
more than once
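One way to code all three stages: shortcutting the walk that goes twice around the tree amounts to listing the MST's vertices in DFS preorder, which the sketch below exploits (the distance matrix in the usage lines is a made-up instance):

```python
def twice_around_the_tree(dist):
    """MST-based TSP heuristic; returns (tour, length)."""
    n = len(dist)
    # Stage 1: Prim's algorithm (simple version, scanning all cut edges).
    in_tree = [False] * n
    in_tree[0] = True
    adj = {i: [] for i in range(n)}
    for _ in range(n - 1):
        _, u, v = min((dist[i][j], i, j)
                      for i in range(n) if in_tree[i]
                      for j in range(n) if not in_tree[j])
        in_tree[v] = True
        adj[u].append(v)
        adj[v].append(u)
    # Stages 2-3: walking twice around the tree and shortcutting repeated
    # vertices is equivalent to taking the vertices in DFS preorder.
    tour, stack, seen = [], [0], set()
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        tour.append(x)
        stack.extend(reversed(adj[x]))
    tour.append(0)  # close the tour
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return tour, length

dist = [[0, 2, 9, 6], [2, 0, 4, 3], [9, 4, 0, 8], [6, 3, 8, 0]]
print(twice_around_the_tree(dist))  # ([0, 1, 3, 2, 0], 22)
```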


Note: RA = ∞ for general instances, but this algorithm tends to


produce better tours than the nearest-neighbor algorithm

Christofides Algorithm
Stage 1: Construct a minimum spanning tree of the graph
Stage 2: Add the edges of a minimum-weight matching of all the
odd-degree vertices in the minimum spanning tree
Stage 3: Find an Eulerian circuit of the multigraph obtained in
Stage 2
Stage 4: Create a tour from the circuit constructed in Stage 3 by
making shortcuts to avoid visiting intermediate vertices
more than once
RA = ∞ for general instances, but it tends to produce better
tours than the twice-around-the-tree algorithm


Euclidean Instances
Theorem If P ≠ NP, there exists no approximation algorithm
for TSP with a finite performance ratio.
Definition An instance of TSP is called Euclidean, if its
distances satisfy two conditions:
1. symmetry d[i, j] = d[j, i] for any pair of cities i and j
2. triangle inequality d[i, j] ≤ d[i, k] + d[k, j] for any cities i, j, k
For Euclidean instances:

approx. tour length / optimal tour length ≤ 0.5(⌈log2 n⌉ + 1)
for nearest neighbor and multifragment heuristic;
approx. tour length / optimal tour length ≤ 2 for twice-around-the-tree;
approx. tour length / optimal tour length ≤ 1.5 for Christofides

Local Search Heuristics for TSP


Start with some initial tour (e.g., nearest neighbor). On each iteration, explore
the current tour’s neighborhood by exchanging a few edges in it. If the new tour
is shorter, make it the current tour; otherwise consider another edge change. If
no change yields a shorter tour, the current tour is returned as the output.
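A minimal sketch of this local search using 2-changes (commonly called 2-opt); the sample distance matrix and starting tour are illustrative:

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def two_opt(tour, dist):
    """Repeatedly apply improving 2-changes until none remains."""
    tour = tour[:]  # closed tour given as a list, e.g. [0, 2, 1, 3, 0]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                # 2-change: replace edges (a,b) and (c,d) by (a,c) and (b,d),
                # which is the same as reversing the segment between them.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

dist = [[0, 2, 9, 6], [2, 0, 4, 3], [9, 4, 0, 8], [6, 3, 8, 0]]
print(two_opt([0, 2, 1, 3, 0], dist))  # [0, 1, 2, 3, 0]
```

On this small instance the 2-opt local optimum happens to be the global optimum; in general the algorithm only guarantees that no single 2-change improves the final tour.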


(Figure: example of a 2-change)

(Figure: example of a 3-change)

Empirical Data for Euclidean Instances
(table of empirical results omitted)


Greedy Algorithm for Knapsack Problem


Step 1: Order the items in decreasing order of relative values:
v1/w1≥… ≥ vn/wn
Step 2: Select the items in this order skipping those that don’t
fit into the knapsack

Example: The knapsack’s capacity is 16


item weight value v/w
1 2 $40 20
2 5 $30 6
3 10 $50 5
4 5 $10 2
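The two steps translate directly into Python. On the instance above the greedy choice yields $80, while the optimal subset {item 1, item 3} is worth $90, illustrating that the output need not be optimal:

```python
def greedy_knapsack(items, capacity):
    """items: list of (weight, value); returns (chosen items, total value)."""
    # Step 1: order by value-to-weight ratio, best first.
    order = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    chosen, total_weight, total_value = [], 0, 0
    # Step 2: take each item in turn if it still fits.
    for w, v in order:
        if total_weight + w <= capacity:
            chosen.append((w, v))
            total_weight += w
            total_value += v
    return chosen, total_value

items = [(2, 40), (5, 30), (10, 50), (5, 10)]  # the example above
print(greedy_knapsack(items, 16))  # ([(2, 40), (5, 30), (5, 10)], 80)
```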

Accuracy
• RA is unbounded (e.g., n = 2, C = m, w1=1, v1=2, w2=m, v2=m)
• yields exact solutions for the continuous version

Approximation Scheme for Knapsack Problem


Step 1: Order the items in decreasing order of relative values:
v1/w1≥… ≥ vn/wn
Step 2: For a given integer parameter k, 0 ≤ k ≤ n, generate all
subsets of k items or less, and for each of those that fit the
knapsack, add the remaining items in decreasing
order of their value-to-weight ratios
Step 3: Find the most valuable subset among the subsets generated in Step 2 and
return it as the algorithm’s output
• Accuracy: f(s*) / f(sa) ≤ 1 + 1/k for any instance of size n
• Time efficiency: O(k·n^(k+1))
• There are fully polynomial schemes: algorithms with polynomial running
time as functions of both n and k
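A Python sketch of Steps 1-3 (index handling and helper names are my own). On the knapsack instance above with k = 1, seeding with item 3 lets the greedy completion reach the optimal $90 that the plain greedy algorithm misses:

```python
from itertools import combinations

def knapsack_scheme(items, capacity, k):
    """items: list of (weight, value). Try every subset of at most k items,
    complete each greedily by v/w ratio, and return the best result found."""
    n = len(items)
    # Step 1: item indices in decreasing order of value-to-weight ratio.
    order = sorted(range(n), key=lambda i: items[i][1] / items[i][0],
                   reverse=True)
    best_value, best_set = 0, []
    # Step 2: every seed subset of at most k items.
    for r in range(k + 1):
        for seed in combinations(range(n), r):
            weight = sum(items[i][0] for i in seed)
            if weight > capacity:
                continue                      # this seed does not fit
            value = sum(items[i][1] for i in seed)
            chosen = list(seed)
            for i in order:                   # greedy completion
                if i not in seed and weight + items[i][0] <= capacity:
                    weight += items[i][0]
                    value += items[i][1]
                    chosen.append(i)
            # Step 3: keep the most valuable subset generated.
            if value > best_value:
                best_value, best_set = value, chosen
    return best_set, best_value

items = [(2, 40), (5, 30), (10, 50), (5, 10)]
print(knapsack_scheme(items, 16, 1))  # ([2, 0], 90)
```

With k = 0 the only seed is the empty set, so the scheme reduces to the plain greedy algorithm.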


Bin Packing Problem: First-Fit Algorithm


First-Fit (FF) Algorithm: Consider the items in the order given and place each item
in the first available bin with enough room for it; if there are no such bins, start a
new one
Example: n = 4, s1 = 0.4, s2 = 0.2, s3 = 0.6, s4 = 0.7
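A direct sketch of FF on the example; the small tolerance in the capacity check guards against floating-point round-off:

```python
def first_fit(sizes, capacity=1.0):
    """Place each item into the first bin that still has room,
    opening a new bin when none does."""
    bins = []                                   # each bin is a list of sizes
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity + 1e-9:   # tolerance for round-off
                b.append(s)
                break
        else:                                   # no existing bin fits
            bins.append([s])
    return bins

print(first_fit([0.4, 0.2, 0.6, 0.7]))  # [[0.4, 0.2], [0.6], [0.7]] -- 3 bins
```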
Accuracy
• Number of extra bins never exceeds optimal by more than
70% (i.e., RA ≤ 1.7)
• Empirical average-case behavior is much better. (In one
experiment with 128,000 bins, the relative error was found
to be no more than 2%.)

Bin Packing: First-Fit Decreasing Algorithm
First-Fit Decreasing (FFD) Algorithm: Sort the items in decreasing order (i.e., from
the largest to the smallest). Then proceed as above by placing an item in the first
bin in which it fits and starting a new bin if there are no such bins
Example: n = 4, s1 = 0.4, s2 = 0.2, s3 = 0.6, s4 = 0.7
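FFD is first fit applied to the sorted items. On the same example it needs only 2 bins (which is optimal here, since the sizes sum to 1.9), one fewer than first fit's 3:

```python
def first_fit_decreasing(sizes, capacity=1.0):
    """Sort largest-first, then place each item first-fit style."""
    bins = []                                   # each bin is a list of sizes
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity + 1e-9:   # tolerance for round-off
                b.append(s)
                break
        else:                                   # no existing bin fits
            bins.append([s])
    return bins

print(first_fit_decreasing([0.4, 0.2, 0.6, 0.7]))  # [[0.7, 0.2], [0.6, 0.4]]
```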
Accuracy
• Number of extra bins never exceeds optimal by more than
50% (i.e., RA ≤ 1.5)
• Empirical average-case behavior is much better, too

