Optimization Problem
In mathematics, an optimization problem is the problem of finding the best solution among all feasible solutions. Based on the type of its variables, optimization problems can be divided into two categories: problems with continuous variables and problems with discrete variables. An optimization problem with discrete variables is known as a combinatorial optimization problem. In a combinatorial optimization problem, we look for an object such as an integer, a permutation, or a graph from a finite set.
In an optimization problem, the type of mathematical relationship between the objective function, the constraints, and the decision variables determines how difficult the problem is to solve with a given optimization method or algorithm, and how much confidence we can have in the optimality of the solution. The standard (minimization) form is

\[
\begin{aligned}
\min_{x}\ & f(x) \\
\text{subject to}\ & g_i(x) \le 0, \quad i = 1, \dots, m, \\
& h_j(x) = 0, \quad j = 1, \dots, p,
\end{aligned}
\]

where f(x) is the objective function to be minimized, g_i(x) <= 0 are the inequality constraints, and h_j(x) = 0 are the equality constraints.
In addition, an optimization problem can be stated as a maximization problem by changing the objective function accordingly (maximizing f(x) is equivalent to minimizing -f(x)).
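As a small added illustration (not part of the original text), a combinatorial optimization problem with a tiny feasible set can be solved by brute force. The sketch below enumerates all tours of a hypothetical 4-city travelling salesman instance and keeps the cheapest one; the distance matrix is invented for the example.

import itertools

# Hypothetical symmetric distance matrix for 4 cities (invented example data).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_cost(order):
    # Objective function: total length of the closed tour visiting the cities in 'order'.
    return sum(dist[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

# Feasible set: all tours starting at city 0; brute-force search picks the cheapest one.
best = min(itertools.permutations(range(1, 4)), key=lambda p: tour_cost((0,) + p))
print("best tour:", (0,) + best, "cost:", tour_cost((0,) + best))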
For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure m0. For example, if there is a graph G which contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'.
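To make the relationship concrete, here is a hedged sketch (added, not from the original): breadth-first search answers the decision version "is there a path from u to v that uses k or fewer edges?" in polynomial time. The example graph and the bound k = 10 are illustrative assumptions.

from collections import deque

def path_within_k_edges(adj, u, v, k):
    # Decision version: return True iff some path from u to v uses at most k edges.
    # BFS explores the graph level by level, so dist[w] is the fewest edges from u to w.
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node] <= k
        for nxt in adj.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return False

# Tiny example graph (adjacency lists, invented for illustration).
adj = {"u": ["a", "b"], "a": ["v"], "b": ["a"], "v": []}
print(path_within_k_edges(adj, "u", "v", 10))   # -> True ('yes')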
In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.[2]

NP optimization problem

An NP-optimization problem (NPO) is a combinatorial optimization problem with the following additional conditions.[3] Note that the polynomials referred to below are functions of the size of the respective functions' inputs, not the size of some implicit set of input instances:
the size of every feasible solution y for an instance x is polynomially bounded in the size of x,
the language of instances and the language of instance-solution pairs can be recognized in polynomial time, and
the measure m is polynomial-time computable.
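For readability, the symbols used in these conditions can be made explicit. The following is a standard formalization, added here for clarity rather than taken verbatim from the source: a combinatorial optimization problem is a quadruple

\[
\begin{aligned}
A &= (I, f, m, g), \ \text{where} \\
I &: \text{the set of instances (inputs)}, \\
f(x) &: \text{the set of feasible solutions for an instance } x \in I, \\
m(x, y) &: \text{the measure (cost) of a feasible solution } y \in f(x), \\
g &\in \{\min, \max\} : \text{the goal.}
\end{aligned}
\]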
This implies that the corresponding decision problem is in NP. In computer science, interesting optimization problems usually have the above properties and are therefore NPO problems. A problem is additionally called a P-optimization (PO) problem if there exists an algorithm which finds optimal solutions in polynomial time. Often, when dealing with the class NPO, one is interested in optimization problems for which the decision versions are NP-hard. Note that hardness relations are always with respect to some reduction. Due to the connection between approximation algorithms and computational optimization problems, reductions which preserve approximation in some respect are, for this subject, preferred over the usual Turing and Karp reductions. An example of such a reduction is the L-reduction. For this reason, optimization problems with NP-complete decision versions are not necessarily called NPO-complete.[4]
NPO is divided into the following subclasses according to their approximability:[3]

NPO(I): Equals FPTAS. Contains the Knapsack problem.
NPO(II): Equals PTAS. Contains the Makespan scheduling problem.
NPO(III): The class of NPO problems that have polynomial-time algorithms which compute solutions with a cost at most c times the optimal cost (for minimization problems) or at least 1/c of the optimal cost (for maximization problems). In Hromkovic's book, excluded from this class are all NPO(II)-problems unless P = NP. Without the exclusion, equals APX. Contains MAX-SAT and metric TSP.
NPO(IV): The class of NPO problems with polynomial-time algorithms approximating the optimal solution by a ratio that is polynomial in a logarithm of the size of the input. In Hromkovic's book, all NPO(III)-problems are excluded from this class unless P = NP. Contains the set cover problem.
NPO(V): The class of NPO problems with polynomial-time algorithms approximating the optimal solution by a ratio bounded by some function of n. In Hromkovic's book, all NPO(IV)-problems are excluded from this class unless P = NP. Contains the TSP and Max clique problems.

Another class of interest is NPOPB, NPO with polynomially bounded cost functions. Problems with this condition have many desirable properties.

NP-hard

NP-hard (non-deterministic polynomial-time hard), in computational complexity theory, is a class of problems that are, informally, "at least as hard as the hardest problems in NP". A problem H is NP-hard if and only if there is an NP-complete problem L that is polynomial-time Turing-reducible to H (i.e., L <=_T H). In other words, L can be solved in polynomial time by an oracle machine with an oracle for H. Informally, we can think of an algorithm that calls such an oracle machine as a subroutine for solving H, and that solves L in polynomial time if each subroutine call takes only one step to compute. NP-hard problems may be of any type: decision problems, search problems, or optimization problems.

As consequences of the definition, we have (note that these are claims, not definitions):
Problem H is at least as hard as L, because H can be used to solve L;
Since L is NP-complete, and hence the hardest in class NP, problem H is at least as hard as NP, but H does not have to be in NP and hence does not have to be a decision problem (even if it is a decision problem, it need not be in NP);
Since NP-complete problems transform to each other by polynomial-time many-one reduction (also called polynomial transformation), all NP-complete problems can be solved in polynomial time by a reduction to H, and thus all problems in NP reduce to H; note, however, that this involves combining two different transformations: from NP-complete decision problems to the NP-complete problem L by polynomial transformation, and from L to H by polynomial Turing reduction;
If there is a polynomial-time algorithm for any NP-hard problem, then there are polynomial-time algorithms for all problems in NP, and hence P = NP;
If P ≠ NP, then NP-hard problems have no solutions in polynomial time, while P = NP does not resolve whether NP-hard problems can be solved in polynomial time;
A common mistake is to think that the NP in NP-hard stands for non-polynomial. Although it is widely suspected that there are no polynomial-time algorithms for NP-hard problems, this has never been proven. Moreover, the class NP also contains all problems which can be solved in polynomial time.
Examples

An example of an NP-hard problem is the decision subset sum problem, which is this: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem, and happens to be NP-complete. Another example of an NP-hard problem is the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph. This is commonly known as the travelling salesman problem.

There are decision problems that are NP-hard but not NP-complete, for example the halting problem. This is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question, so this is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and, when it finds one that satisfies the formula, halts; otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP since all problems in NP are decidable in a finite number of operations, while the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE).
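As a small added illustration (not in the original), the decision subset sum problem can be answered by brute force over all non-empty subsets; this takes exponential time in the worst case, consistent with the problem being NP-complete. The example set is invented.

from itertools import combinations

def has_zero_subset(numbers):
    # Decision version: does any non-empty subset of 'numbers' sum to zero?
    # Brute force tries all 2^n - 1 non-empty subsets, so it is exponential in n.
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([-7, -3, -2, 5, 8]))   # True: (-3) + (-2) + 5 == 0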
Alternative definitions
An alternative definition of NP-hard that is often used restricts NP-hard to decision problems and then uses polynomial-time many-one reduction instead of Turing reduction. So, formally, a language L is NP-hard if for every L' in NP, L' is polynomial-time many-one reducible to L (L' <=_p L). If it is also the case that L is in NP, then L is called NP-complete. However, under this definition, the trivial decision problem (the one that accepts everything) and its complement would provably not be NP-hard, even if P = NP, since no other problems can many-one reduce to these two problems.
NP-complete
In computational complexity theory, the complexity class NP-complete (abbreviated NP-C or NPC) is a class of decision problems. A decision problem L is NP-complete if it is in the set of NP problems, so that any given solution to the decision problem can be verified in polynomial time, and also in the set of NP-hard problems, so that any NP problem can be converted into L by a transformation of the inputs in polynomial time. Although any given solution to such a problem can be verified quickly, there is no known efficient way to locate a solution in the first place; indeed, the most notable characteristic of NP-complete problems is that no fast solution to them is known. That is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a result, the time required to solve even moderately sized versions of many of these problems easily reaches into the billions or trillions of years, using any amount of computing power available today. As a consequence, determining whether or not it is possible to solve these problems quickly, called the P versus NP problem, is one of the principal unsolved problems in computer science today.
While a method for computing the solutions to NP-complete problems using a reasonable amount of time remains undiscovered, computer scientists and programmers still frequently encounter NP-complete problems. NP-complete problems are often addressed by using approximation algorithms.
Formal overview
NP-complete is a subset of NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a nondeterministic Turing machine. A problem p in NP is also in NPC if and only if every other problem in NP can be transformed into p in polynomial time. NP-complete can also be used as an adjective: problems in the class NP-complete are known as NP-complete problems.

NP-complete problems are studied because the ability to quickly verify solutions to a problem (NP) seems to correlate with the ability to quickly solve that problem (P). It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. But if any single problem in NP-complete can be solved quickly, then every problem in NP can also be quickly solved, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every problem in NP-complete (that is, it can be reduced in polynomial time). Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general.
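To illustrate what "verified in polynomial time" means (an added sketch, not from the original), here is a hypothetical checker for the Boolean satisfiability problem: given a formula in conjunctive normal form and a proposed truth assignment (the certificate), it decides in a single pass whether the assignment satisfies the formula. The clause encoding is an assumption made for this example.

def verify_sat_certificate(clauses, assignment):
    # Each clause is a list of integer literals: k means variable k is true,
    # -k means variable k is false. 'assignment' maps variable -> bool.
    # Checking the certificate is one pass over the formula (polynomial time).
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False
    return True

# (x1 or not x2) and (x2 or x3), with x1=True, x2=True, x3=False.
clauses = [[1, -2], [2, 3]]
print(verify_sat_certificate(clauses, {1: True, 2: True, 3: False}))   # True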
Formal definition of NP-completeness
A decision problem C is NP-complete if:
1. C is in NP, and
2. every problem in NP is reducible to C in polynomial time.

Note that a problem satisfying condition 2 is said to be NP-hard, whether or not it satisfies condition 1. A consequence of this definition is that if we had a polynomial-time algorithm (on a UTM, or any other Turing-equivalent abstract machine) for C, we could solve all problems in NP in polynomial time.
Background
The concept of NP-completeness was introduced in 1971 by Stephen Cook in a paper entitled "The complexity of theorem-proving procedures", on pages 151-158 of the Proceedings of the 3rd Annual ACM Symposium on Theory of Computing, though the term NP-complete did not appear anywhere in his paper. At that computer science conference, there was a fierce debate among the computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. John Hopcroft brought everyone at the conference to a consensus that the question of whether NP-complete problems are solvable in polynomial time should be put off to be solved at some later date, since nobody had any formal proofs for their claims one way or the other. This is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics. The Clay Mathematics Institute is offering a US$1 million reward to anyone who has a formal proof that P = NP or that P ≠ NP.

In the celebrated Cook-Levin theorem (independently proved by Leonid Levin), Cook proved that the Boolean satisfiability problem is NP-complete (a simpler, but still highly technical, proof of this is available). In 1972, Richard Karp proved that several other problems were also NP-complete (see Karp's 21 NP-complete problems); thus there is a class of NP-complete problems (besides the Boolean satisfiability problem). Since Cook's original results, thousands of other problems have been shown to be NP-complete by reductions from other problems previously shown to be NP-complete; many of these problems are collected in Garey and Johnson's 1979 book Computers and Intractability: A Guide to the Theory of NP-Completeness.[1] For more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin.
NP-complete problems
[Figure: Some NP-complete problems, indicating the reductions typically used to prove their NP-completeness]
An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems:
Graph Isomorphism: Is graph G1 isomorphic to graph G2?
Subgraph Isomorphism: Is graph G1 isomorphic to a subgraph of graph G2?
The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a problem that is thought to be hard, but isn't thought to be NP-complete. The easiest way to prove that some new problem is NP-complete is first to prove that it is in NP, and then to reduce some known NP-complete problem to it. Therefore, it is useful to know a variety of NP-complete problems. The list below contains some well-known problems that are NP-complete when expressed as decision problems.
Boolean satisfiability problem (SAT)
N-puzzle
Knapsack problem
Hamiltonian path problem
Travelling salesman problem
Subgraph isomorphism problem
Subset sum problem
Clique problem
Vertex cover problem
Independent set problem
Dominating set problem
Graph coloring problem
To the right is a diagram of some of the problems and the reductions typically used to prove their NP-completeness. In this diagram, an arrow from one problem to another indicates the direction of the reduction. Note that this diagram is misleading as a description of the mathematical relationship between these problems, as there exists a polynomial-time reduction between any two NP-complete problems; but it indicates where demonstrating this polynomial-time reduction has been easiest.

There is often only a small difference between a problem in P and an NP-complete problem. For example, the 3-satisfiability problem, a restriction of the Boolean satisfiability problem, remains NP-complete, whereas the slightly more restricted 2-satisfiability problem is in P (specifically, NL-complete), and the slightly more general maximum 2-satisfiability problem is again NP-complete. Determining whether a graph can be colored with 2 colors is in P, but with 3 colors is NP-complete, even when restricted to planar graphs. Determining if a graph is a cycle or is bipartite is very easy (in L), but finding a maximum bipartite or a maximum cycle subgraph is NP-complete. A solution of the knapsack problem within any fixed percentage of the optimal solution can be computed in polynomial time, but finding the optimal solution is NP-complete.
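As an added illustration of the claim that 2-coloring is in P (a sketch, not from the original), deciding 2-colorability is the same as testing bipartiteness, which breadth-first search does in linear time; the small example graphs below are invented.

from collections import deque

def is_two_colorable(adj):
    # A graph is 2-colorable iff it is bipartite; BFS assigns alternating colors
    # and reports a conflict if two adjacent vertices receive the same color.
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt not in color:
                    color[nxt] = 1 - color[node]
                    queue.append(nxt)
                elif color[nxt] == color[node]:
                    return False
    return True

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}        # 4-cycle: 2-colorable
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}                 # 3-cycle: not 2-colorable
print(is_two_colorable(square), is_two_colorable(triangle))  # True False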
Solving NP-complete problems
At present, all known algorithms for NP-complete problems require time that is superpolynomial in the input size, and it is unknown whether there are any faster algorithms. The following techniques can be applied to solve computational problems in general, and they often give rise to substantially faster algorithms:
Approximation: Instead of searching for an optimal solution, search for an "almost" optimal one (see the sketch after this list).
Randomization: Use randomness to get a faster average running time, and allow the algorithm to fail with some small probability. Note: the Monte Carlo method as such is not an example of an efficient algorithm, but evolutionary approaches such as genetic algorithms may be.
Restriction: By restricting the structure of the input (e.g., to planar graphs), faster algorithms are usually possible.
Parameterization: Often there are fast algorithms if certain parameters of the input are fixed.
Heuristic: An algorithm that works "reasonably well" in many cases, but for which there is no proof that it is both always fast and always produces a good result. Metaheuristic approaches are often used.
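The following is a hedged sketch of the approximation technique mentioned above (added for illustration, not from the original): the classic greedy, matching-based algorithm returns a vertex cover at most twice the size of an optimal one. The example edge list is invented.

def approx_vertex_cover(edges):
    # 2-approximation: repeatedly pick an uncovered edge and take BOTH endpoints.
    # Any optimal cover must contain at least one endpoint of each picked edge,
    # so the returned cover is at most twice the optimal size.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # small example graph
print(approx_vertex_cover(edges))                  # e.g. {0, 1, 2, 3}; optimum is {1, 3}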
One example of a heuristic algorithm is a suboptimal greedy coloring algorithm used for graph coloring during the register allocation phase of some compilers, a technique called graph-coloring global register allocation. Each vertex is a variable, edges are drawn between variables which are being used at the same time, and colors indicate the register assigned to each variable. Because most RISC machines have a fairly large number of general-purpose registers, even a heuristic approach is effective for this application.
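Below is a minimal sketch of greedy graph coloring in the spirit of the register-allocation heuristic just described (an illustrative addition; it is not the algorithm any particular compiler uses). Vertices stand for variables, edges for simultaneously live variables, and colors for registers; the interference graph is invented.

def greedy_coloring(adj):
    # Assign each vertex the smallest color not already used by a neighbor.
    # Fast, but it may use more colors (registers) than the optimum.
    colors = {}
    for vertex in adj:                       # a fixed ordering; other orders may do better
        taken = {colors[n] for n in adj[vertex] if n in colors}
        color = 0
        while color in taken:
            color += 1
        colors[vertex] = color
    return colors

# Interference graph: variables a..d, edges join variables live at the same time.
interference = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(greedy_coloring(interference))   # {'a': 0, 'b': 1, 'c': 2, 'd': 0}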
Completeness under different types of reduction
In the definition of NP-complete given above, the term reduction was used in the technical meaning of a polynomial-time many-one reduction. Another type of reduction is polynomial-time Turing reduction. A problem X is polynomial-time Turing-reducible to a problem Y if, given a subroutine that solves Y in polynomial time, one could write a program that calls this subroutine and solves X in polynomial time. This contrasts with many-one reducibility, which has the restriction that the program can only call the subroutine once, and the return value of the subroutine must be the return value of the program.

If one defines the analogue to NP-complete with Turing reductions instead of many-one reductions, the resulting set of problems won't be smaller than NP-complete; it is an open question whether it will be any larger. Another type of reduction that is also often used to define NP-completeness is the logarithmic-space many-one reduction, which is a many-one reduction that can be computed with only a logarithmic amount of space. Since every computation that can be done in logarithmic space can also be done in polynomial time, it follows that if there is a logarithmic-space many-one reduction then there is also a polynomial-time many-one reduction. This type of reduction is more refined than the more usual polynomial-time many-one reductions and it allows us to distinguish more classes, such as P-complete. Whether the definition of NP-complete changes under these types of reductions is still an open problem. All currently known NP-complete problems are NP-complete under log-space reductions. Indeed, all currently known NP-complete problems remain NP-complete even under much weaker reductions.[2] It is known, however, that AC0 reductions define a strictly smaller class than polynomial-time reductions.[3]
Naming
According to Don Knuth, the name "NP-complete" was popularized by Alfred Aho, John Hopcroft and Jeffrey Ullman in their celebrated textbook "The Design and Analysis of Computer Algorithms". He reports that they introduced the change in the galley proofs for the book (from "polynomially-complete"), in accordance with the results of a poll he had conducted of the theoretical computer science community.[4] Other suggestions made in the poll[5] included "Herculean", "formidable", Steiglitz's "hard-boiled" in honor of Cook, and Shen Lin's acronym "PET", which stood for "probably exponential time", but depending on which way the P versus NP problem went, could stand for "provably exponential time" or "previously exponential time".[6]
Common misconceptions
"NP-complete problems are the most difficult known problems." Since NP-complete problems are in NP, their running time is at most exponential. However, some problems provably require more time, for example Presburger arithmetic.
"NP-complete problems are difficult because there are so many different solutions." On the one hand, there are many problems that have a solution space just as large, but can be solved in polynomial time (for example minimum spanning tree). On the other hand, there are NP-problems with at most one solution that are NP-hard under randomized polynomial-time reduction (seeValiantVazirani theorem).
"Solving NP-complete problems requires exponential time." First, this would imply P NP, which is still an unsolved question. Further, some NP-complete problems actually have algorithms running in superpolynomial, but subexponential time. For example, the Independent set and Dominating set problems are NP-complete when restricted to planar graphs, but can be solved in subexponential time on planar graphs using the planar separator theorem.
"All instances of an NP-complete problem are difficult." Often some instances, or even almost all instances, may be easy to solve within polynomial time.
NP-equivalent
In computational complexity theory, the complexity class NP-equivalent is the set of function problems that are both NP-easy and NP-hard.[1] NP-equivalent is the analogue of NP-complete for function problems.

For example, the problem FIND-SUBSET-SUM is in NP-equivalent. Given a set of integers, FIND-SUBSET-SUM is the problem of finding some nonempty subset of the integers that adds up to zero (or returning the empty set if there is no such subset). This optimization problem is similar to the decision problem SUBSET-SUM. Given a set of integers, SUBSET-SUM is the problem of deciding whether there exists a subset summing to zero. SUBSET-SUM is NP-complete.

To show that FIND-SUBSET-SUM is NP-equivalent, we must show that it is both NP-hard and NP-easy. Clearly it is NP-hard: if we had a black box that solved FIND-SUBSET-SUM in unit time, then it would be easy to solve SUBSET-SUM. Simply ask the black box to find the subset that sums to zero, then check whether it returned a nonempty set.
It is also NP-easy. If we had a black box that solved SUBSET-SUM in unit time, then we could use it to solve FIND-SUBSET-SUM. If it returns false, we immediately return the empty set. Otherwise, we visit each element in order and remove it provided that SUBSET-SUM would still return true after we remove it. Once we've visited every element, we will no longer be able to remove any element without changing the answer from true to false; at this point the remaining subset of the original elements must sum to zero. This requires us to note that later removals of elements do not alter the fact that removal of an earlier element changed the answer from true to false. In pseudocode:
function FIND-SUBSET-SUM(set S)
    if not SUBSET-SUM(S)
        return {}
    for each x in S
        if SUBSET-SUM(S - {x})
            S := S - {x}
    return S
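For concreteness, here is a hedged Python sketch of the same reduction (an addition, not from the original); the helper subset_sum plays the role of the SUBSET-SUM oracle and is implemented by brute force only so that the example runs.

from itertools import combinations

def subset_sum(numbers):
    # Stand-in for the SUBSET-SUM oracle: does any non-empty subset sum to zero?
    return any(sum(c) == 0
               for r in range(1, len(numbers) + 1)
               for c in combinations(numbers, r))

def find_subset_sum(numbers):
    # The NP-easiness argument made concrete: solve the function problem with
    # polynomially many calls to the decision oracle.
    if not subset_sum(numbers):
        return []
    remaining = list(numbers)
    for x in list(remaining):
        candidate = list(remaining)
        candidate.remove(x)
        if subset_sum(candidate):      # x is not needed, drop it
            remaining = candidate
    return remaining                   # remaining non-empty subset sums to zero

print(find_subset_sum([-7, -3, -2, 5, 8, 4]))   # e.g. [-3, -2, 5]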
NP-easy
In complexity theory, the complexity class NP-easy is the set of function problems that are solvable in polynomial time by a deterministic Turing machine with an oracle for some decision problem in NP. In other words, a problem X is NP-easy if and only if there exists some problem Y in NP such that X is polynomial-time Turing reducible to Y.[1] This means that, given an oracle for Y, there exists an algorithm that solves X in polynomial time (possibly by repeatedly using that oracle). NP-easy is another name for FP^NP (see the function problem article) or for FΔ2P (see the polynomial hierarchy article).

An example of an NP-easy problem is the problem of sorting a list of strings. The decision problem "is string A greater than string B?" is in NP. There are algorithms such as quicksort that can sort the list using only a polynomial number of calls to the comparison routine, plus a polynomial amount of additional work. Therefore, sorting is NP-easy. There are also more difficult problems that are NP-easy; see NP-equivalent for an example.

The definition of NP-easy uses a Turing reduction rather than a many-one reduction because the answers to problem Y are only TRUE or FALSE, but the answers to problem X can be more general. Therefore, there is no general way to translate an instance of X to an instance of Y with the same answer.
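As a final added sketch (not from the original), the sorting example can be made concrete: the comparison routine below plays the role of the decision oracle, and the sort asks it only a polynomial number of times. The oracle here is an ordinary string comparison, used purely for illustration.

import functools

def oracle_greater(a, b):
    # Stands in for the NP decision oracle "is string a greater than string b?".
    return a > b

def oracle_sort(strings):
    # Polynomially many oracle calls plus polynomial bookkeeping => NP-easy.
    def cmp(a, b):
        if oracle_greater(a, b):
            return 1
        if oracle_greater(b, a):
            return -1
        return 0
    return sorted(strings, key=functools.cmp_to_key(cmp))

print(oracle_sort(["pear", "apple", "banana"]))   # ['apple', 'banana', 'pear']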