
ÇUKUROVA UNIVERSITY

FACULTY OF ENGINEERING
COMPUTER ENGINEERING DEPARTMENT

Lecture 8: Las Vegas Algorithms, Approximation Algorithms

Las Vegas Algorithms


• When defining the class RP, the computation of an algorithm depends on random variables: the running time is always polynomial, but the output may be incorrect with a certain probability.
• Now we consider randomized decision algorithms for languages 𝐿 ⊆ Σ∗ that never make a false statement, i.e. neither in the case 𝑤 ∈ 𝐿 nor in the case 𝑤 ∉ 𝐿, for all 𝑤 ∈ Σ∗.
• However, we do not always get an answer within reasonable, i.e. polynomial, time; this case will be indicated by the output "?".
• We regard the running time 𝑡𝑖𝑚𝑒𝐴 of a randomized algorithm 𝐴 as a random variable and are interested in its probability distribution over words of length 𝑛 (or ≤ 𝑛).

Expected Polynomial (EP):

Definition: Let Σ be an alphabet. 𝐿 ⊆ Σ∗ belongs to the class EP iff there is a randomized decision algorithm 𝐴 for 𝐿 whose worst-case expected running time is polynomial. Such algorithms are called Las Vegas algorithms.

Here the distribution of 𝑡𝑖𝑚𝑒𝐴(𝑤) is taken over words 𝑤 with |𝑤| = 𝑛 (or ≤ 𝑛), and the requirement is:

𝑠𝑢𝑝{𝐸(𝑡𝑖𝑚𝑒𝐴(𝑤)) | |𝑤| ≤ 𝑛} = 𝑂(𝑝(𝑛)) for a polynomial 𝑝


Zero-Error Probabilistic Polynomial (ZPP)

ZPP algorithms are also Las Vegas algorithms: they never make a false statement, but they may also stop without reaching a decision, indicated by the output "?" ("no idea").

Definition: Let 𝜀: ℕ → [0,1) and 𝐿 ⊆ Σ∗. Then 𝐿 ∈ 𝑍𝑃𝑃(𝜀(𝑛)) iff there is a probabilistic algorithm 𝐴 such that for all 𝑤 ∈ Σ∗ (with 𝑛 = |𝑤|):

(1) Prob𝐴[𝐴 accepts 𝑤 | 𝑤 ∉ 𝐿] = 0

(2) Prob𝐴[𝐴 doesn't accept 𝑤 | 𝑤 ∈ 𝐿] = 0

(3) Prob𝐴[𝐴 outputs "?"] < 𝜀(𝑛)

(4) 𝑠𝑢𝑝{𝐸(𝑡𝑖𝑚𝑒𝐴(𝑤)) | |𝑤| ≤ 𝑛} = 𝑂(𝑝(𝑛)) for a polynomial 𝑝

A simple example:

• A variable k is generated randomly; k is then used to index the array A.
• If this position of A contains the value 1, then k is returned; otherwise, the algorithm repeats the process until it finds a 1.
• Although this Las Vegas algorithm is guaranteed to return a correct answer, it has no fixed runtime: because of the random choice of k, arbitrarily much time may elapse before the algorithm terminates. A sketch of the procedure is given below.
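
Below is a minimal, runnable sketch of this example, assuming an array A that contains the value 1 in at least one position; the function name find_index_of_one and the use of Python's random module are illustrative choices, not part of the lecture.

import random

def find_index_of_one(A):
    # Las Vegas: the returned answer is always correct; only the running time is random.
    n = len(A)
    while True:
        k = random.randrange(n)   # the randomized step: choose an index k uniformly
        if A[k] == 1:
            return k              # never returns a wrong index

# Usage: half of the entries are 1, so the expected number of iterations is 2.
A = [0, 1] * 8
print(find_index_of_one(A))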

Here is a table comparing Las Vegas and Monte Carlo algorithms:

                          Running Time     Correctness
Las Vegas Algorithm       probabilistic    certain
Monte Carlo Algorithm     certain          probabilistic


Bounded-error Probabilistic Polynomial (BPP)


So far, we have considered randomized algorithms with one-sided error (i.e., Monte Carlo algorithms) and those that make no errors but may fail to reach a decision (i.e., Las Vegas algorithms). Now we consider classes of languages for which two-sided errors are allowed in the decision.

The language 𝐿 ⊆ Σ∗ belongs to the class BPP if and only if there exist a polynomial randomized algorithm 𝐴 and an 𝜖 with 0 < 𝜖 < 1/2 such that

(1) for all 𝑤 ∉ 𝐿: Prob[𝐴(𝑤) = 0] ≥ 1/2 + 𝜖,
(2) for all 𝑤 ∈ 𝐿: Prob[𝐴(𝑤) = 1] ≥ 1/2 + 𝜖,
(3) for all 𝑤 ∈ Σ∗: 𝑡𝑖𝑚𝑒𝐴(𝑤) = O(poly(|𝑤|)).

A BPP algorithm is called an Atlantic City algorithm.


Approximation Algorithms

• An approximation algorithm returns a solution to a combinatorial optimization problem that is provably close to optimal (as opposed to a heuristic, which may or may not find a good solution).
• Approximation algorithms are typically used when finding an optimal solution is intractable.
• But they can also be used in some situations where a near-optimal solution can be found quickly and an exact solution is not needed.

Definition: Let 𝑃 be a problem, then:

1) 𝐴𝑃: the set of problem instances, together with a size function 𝑙: 𝐴𝑃 → ℕ0.

   Example: undirected graphs 𝐺 = (𝑉, 𝐸) with 𝑉 = {1, 2, 3, … , 𝑛}, 𝑛 ≥ 0, and 𝐸 ⊆ 𝑉 × 𝑉, where {𝑥, 𝑦} ∈ 𝐸 means that 𝑥 and 𝑦 are joined by an edge; here 𝑙(𝑉, 𝐸) = |𝑉|.

2) 𝑆𝑃(𝑎): the set of valid solutions for 𝑎 ∈ 𝐴𝑃

3) 𝑚𝑃𝑎: 𝑆𝑃(𝑎) → ℕ0, the measure function that assigns a value to every valid solution

4) We can now distinguish between two types of optimization problems, namely minimization problems and maximization problems.

If 𝑃 is a minimization problem, then for 𝑎 ∈ 𝐴𝑃 a solution 𝑥 ∈ 𝑆𝑃(𝑎) with 𝑚𝑃𝑎(𝑥) = min{𝑚𝑃𝑎(𝑦) | 𝑦 ∈ 𝑆𝑃(𝑎)} should be found.

If 𝑃 is a maximization problem, then for 𝑎 ∈ 𝐴𝑃 a solution 𝑥 ∈ 𝑆𝑃(𝑎) with 𝑚𝑃𝑎(𝑥) = max{𝑚𝑃𝑎(𝑦) | 𝑦 ∈ 𝑆𝑃(𝑎)} should be found.

Example: 𝑃: The graph coloring problem with the constraint that no adjacent nodes have the
same color. Determine the minimum number of colors!
1) 𝐴𝑃 is the set of all undirected graphs 𝐺 = (𝑉, 𝐸) as well 𝑙(𝐺) = |𝑉|.
2) 𝑆𝑃 (𝐺) is the set of all valid coloring solutions 𝑓 of 𝐺.
3) 𝑚𝑃𝐺 (𝑓) is the number of colors that 𝑓 assigns to the nodes.
4) It is a minimization problem.
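
As an illustration of components 2) and 3) for this example, here is a small sketch that checks whether a mapping f belongs to 𝑆𝑃(𝐺) and evaluates 𝑚𝑃𝐺(𝑓); the function names and the edge-set representation are illustrative assumptions, not part of the lecture.

def is_valid_coloring(V, E, f):
    # f is in S_P(G): no two adjacent nodes share a color
    return all(f[x] != f[y] for (x, y) in E)

def num_colors(f):
    # m_P^G(f): the number of colors that f assigns to the nodes
    return len(set(f.values()))

# Usage: a triangle needs 3 colors; this f is valid and uses 3 of them.
V = {1, 2, 3}
E = {(1, 2), (2, 3), (1, 3)}
f = {1: 1, 2: 2, 3: 3}
print(is_valid_coloring(V, E, f), num_colors(f))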


Quality criteria:

Let 𝐴(𝑎) be the value of the solution that Algorithm 𝐴 delivers for problem instance a.
Furthermore, let 𝑂𝑃𝑇(𝑎) be the value of the optimal solution 𝑠 ∈ 𝑆𝑃 (𝑎).

Absolute Error: |𝑂𝑃𝑇(𝑎) − 𝐴(𝑎)|


Relative Error: |𝑂𝑃𝑇(𝑎) − 𝐴(𝑎)| / 𝑂𝑃𝑇(𝑎)

If there is an 𝜀 ≥ 0 with

|𝑂𝑃𝑇(𝑎) − 𝐴(𝑎)| / 𝑂𝑃𝑇(𝑎) ≤ 𝜀

for all 𝑎 ∈ 𝐴𝑃, then 𝐴 is called an 𝜀-approximation for 𝑃.
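
The following is a minimal sketch of these quality criteria, assuming the values 𝑂𝑃𝑇(𝑎) and 𝐴(𝑎) are already known for the tested instances; the function names are illustrative.

def absolute_error(opt, approx):
    return abs(opt - approx)

def relative_error(opt, approx):
    # requires OPT(a) > 0
    return abs(opt - approx) / opt

def is_eps_approximation(values, eps):
    # values: list of pairs (OPT(a), A(a)) over the tested instances a
    return all(relative_error(opt, approx) <= eps for opt, approx in values)

# Usage: A returns 6 where OPT is 5 (relative error 0.2) and 11 where OPT is 10 (0.1),
# so on these instances A behaves like a 0.2-approximation.
print(relative_error(5, 6), is_eps_approximation([(5, 6), (10, 11)], 0.2))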

➔ There exist 3 classes for a problem 𝑃 ∈ 𝑁𝑃𝐶 in terms of approximation.

(1) Fully approximable, i.e. there is a polynomial ε-approximation for 𝑃 for all ε > 0
(2) Partially approximable, i.e. polynomial ε-approximation is only possible for particular
𝜀-values
(3) Not approximable, i.e. no polynomial ε-approximation is possible for any ε

NP-complete problems are thus divided into 3 approximation classes.

Induced Subgraph (𝑮𝑰𝑺 ):


Let 𝐺 = (𝑉, 𝐸) and 𝑈 ⊆ 𝑉. Then 𝐺𝐼𝑆 = (𝑈, 𝐸𝑈) with 𝐸𝑈 = 𝐸 ∩ (𝑈 × 𝑈) is the subgraph of 𝐺 induced by 𝑈.
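
A small sketch of the construction 𝐸𝑈 = 𝐸 ∩ (𝑈 × 𝑈), assuming edges are stored as ordered pairs in both directions; the function name induced_subgraph is an illustrative choice.

def induced_subgraph(V, E, U):
    # keep exactly those edges whose endpoints both lie in U
    E_U = {(x, y) for (x, y) in E if x in U and y in U}
    return U, E_U

# Usage: the path 1-2-3-4 restricted to U = {1, 2, 4} keeps only the edge between 1 and 2.
V = {1, 2, 3, 4}
E = {(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3)}
print(induced_subgraph(V, E, {1, 2, 4}))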


Complete graph with n nodes (𝑲𝒏 ):


𝐾𝑛 = ({1, … , 𝑛}, {1, … , 𝑛} × {1, … , 𝑛})

Example of a complete graph


Bipartite Graph:
𝑉1, 𝑉2 ⊂ 𝑉 with 𝑉1 ∩ 𝑉2 = ∅, 𝐸 ⊆ 𝑉1 × 𝑉2
𝐺 = (𝑉1 ∪ 𝑉2, 𝐸)

Example of a bipartite graph with parts V1 and V2

Complete Bipartite Graph (𝑲𝒏,𝒎 ):


𝑉1 = {1, … , 𝑛}, 𝑉2 = {1, … , 𝑚}, 𝐸 = 𝑉1 × 𝑉2

Example of a complete bipartite graph with 𝑛 = 5 and 𝑚 = 3


Circle Graph (𝑪𝒏 ):


𝐶𝑛 = ({1, … , 𝑛}, 𝐸)
𝐸 = {(𝑖, 𝑖 + 1) | 1 ≤ 𝑖 ≤ 𝑛 − 1} ∪ {(𝑛, 1)}

Clique Number of a Graph 𝜔(𝑮):

𝐺 = (𝑉, 𝐸): 𝑈 ⊆ 𝑉 is a clique iff the induced subgraph 𝐺𝐼𝑆 = (𝑈, 𝐸𝑈) is complete.

Clique number of 𝑮: 𝜔(𝐺) = max{|𝑈| | 𝑈 is a clique of 𝐺}

Independence Number of a Graph 𝜶(𝑮):

𝐺 = (𝑉, 𝐸): 𝑈 ⊆ 𝑉 is independent iff ∀𝑖, 𝑗 ∈ 𝑈, 𝑖 ≠ 𝑗: (𝑖, 𝑗) ∉ 𝐸
Independence number of 𝑮: 𝛼(𝐺) = max{|𝑈| | 𝑈 is independent in 𝐺}

Clique Number                                        Independence Number
𝜔(𝐾𝑛) = 𝑛                                            𝛼(𝐾𝑛) = 1
𝜔(𝐶𝑛) = 2 (𝑛 = 2 or 𝑛 ≥ 4), 𝜔(𝐶𝑛) = 3 (𝑛 = 3)        𝛼(𝐶𝑛) = ⌊𝑛/2⌋
𝜔(𝐾𝑛,𝑚) = 2 (𝑚, 𝑛 ≥ 1)                               𝛼(𝐾𝑛,𝑚) = max{𝑚, 𝑛}

In general: 𝛼(𝐺) = 𝜔(𝐺̅), where 𝐺̅ is the complement of 𝐺.
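
The following brute-force sketch computes 𝜔(𝐺) and 𝛼(𝐺) for small graphs by checking every subset 𝑈 ⊆ 𝑉, which also lets one check the identity 𝛼(𝐺) = 𝜔(𝐺̅) on examples; it runs in exponential time and is meant only as an illustration (the function names are not from the lecture).

from itertools import combinations

def is_clique(U, E):
    return all((i, j) in E or (j, i) in E for i, j in combinations(U, 2))

def is_independent(U, E):
    return all((i, j) not in E and (j, i) not in E for i, j in combinations(U, 2))

def clique_number(V, E):        # omega(G)
    return max(len(U) for r in range(1, len(V) + 1)
               for U in combinations(sorted(V), r) if is_clique(U, E))

def independence_number(V, E):  # alpha(G)
    return max(len(U) for r in range(1, len(V) + 1)
               for U in combinations(sorted(V), r) if is_independent(U, E))

# Usage: the circle graph C_5 has omega(C_5) = 2 and alpha(C_5) = floor(5/2) = 2.
V = {1, 2, 3, 4, 5}
E = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)}
print(clique_number(V, E), independence_number(V, E))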

Graph Coloring
𝐺 = (𝑉, 𝐸) is k-colorable iff there is a mapping 𝑐: 𝑉 → {1, … , 𝑘} with 𝑐(𝑖) ≠ 𝑐(𝑗) for all (𝑖, 𝑗) ∈ 𝐸.
𝜒(𝐺) = min{𝑘 | 𝐺 is 𝑘-colorable}
𝜒(𝐺) is called the chromatic number of 𝐺.

(1) 𝜒(𝐺) = 1, if 𝐸 = ∅ and |𝑉| ≥ 1
(2) 𝜒(𝐺) ≤ 𝑑𝐺 + 1
    a. The degree of a node is the number of edges incident to that node.
    b. The degree 𝑑𝐺 of a graph is the maximum over all node degrees.
    c. Examples: (figure)


(3) 𝜒(𝐺) ≥ 𝜔(𝐺)
(4) 𝜒(𝐾𝑛) = 𝑛
(5) 𝐺 bipartite: 𝜒(𝐺) ≤ 2
(6) 𝜒(𝐶𝑛) = 2 if 𝑛 is even, 3 if 𝑛 is odd

Estimating the chromatic number of 𝐺, i.e. 𝜒(𝐺):

𝐺 = (𝑉, 𝐸), |𝑉| = 𝑛

𝑛 / 𝛼(𝐺) ≤ 𝜒(𝐺) ≤ 𝑛 − 𝛼(𝐺) + 1

Question: Is this estimate useful for determining an approximation? Although it bounds 𝜒(𝐺) to an interval, it does not really help us obtain an acceptable approximation. In the following we consider special examples showing that 𝜒(𝐺) can lie anywhere within this interval.

Complete bipartite graph 𝐾𝑛,𝑛:

𝛼(𝐾𝑛,𝑛) = 𝑛
𝜒(𝐾𝑛,𝑛) = 2

2𝑛 / 𝛼(𝐾𝑛,𝑛) ≤ 𝜒(𝐾𝑛,𝑛) ≤ 2𝑛 − 𝛼(𝐾𝑛,𝑛) + 1
2𝑛 / 𝑛 ≤ 2 ≤ 2𝑛 − 𝑛 + 1
2 ≤ 2 ≤ 𝑛 + 1


(Modified) Complete Graph 𝐾𝑛∗:

(Figure: the modified complete graph 𝐾𝑛∗, with |𝑉| = 2𝑛.)

𝛼(𝐾𝑛∗) = 𝑛
𝜒(𝐾𝑛∗) = 𝑛

2𝑛 / 𝛼(𝐾𝑛∗) ≤ 𝜒(𝐾𝑛∗) ≤ 2𝑛 − 𝛼(𝐾𝑛∗) + 1
2𝑛 / 𝑛 ≤ 𝑛 ≤ 2𝑛 − 𝑛 + 1
2 ≤ 𝑛 ≤ 𝑛 + 1

Question: Can the coloring problem be approximated at all?

Theorem: Let 𝐴 be a polynomial algorithm that approximates 𝜒(𝐺) for undirected graphs 𝐺 with |𝑂𝑃𝑇(𝐺) − 𝐴(𝐺)| ≤ 𝑘 for some constant 𝑘; then it follows that 𝑃 = 𝑁𝑃.

By contraposition, assuming 𝐏 ≠ 𝐍𝐏, it follows that there can be no polynomial algorithm that approximates 𝜒(𝐺) within a constant absolute error.

Instead of approximation algorithms, we must therefore use heuristics and analyze how good they are.
➔ Two well-known classical heuristics: the Greedy and Johnson coloring algorithms


Greedy Coloring Algorithm:

algorithm greedyColoring;
input   G = (V, E) : Graph;        // V = {1, 2, ..., n}
output  f : array [1 .. n] of int; // f[i] = color of node i
        color : int;               // number of colors used
var     counter, i, j : int;
for all i ∈ V do f[i] := 0 endfor; color := 0; counter := 0;

while counter < n do
    color := color + 1;
    for all i ∈ V do
        if f[i] < 1 and f[i] ≠ −color then   // i is uncolored and not blocked for this color
            f[i] := color;
            counter := counter + 1;
            for all j ∈ NG(i) do             // NG(i): neighbors of i in G
                if f[j] < 1 then
                    f[j] := −color           // block uncolored neighbors for the current color
                endif
            endfor
        endif
    endfor
endwhile
return (f, color);
endalgorithm greedyColoring.

Example: applying greedyColoring to a 9-node example graph (nodes A–I) with colors {R: Red, G: Green, B: Blue} (figure).

Let 𝐺 = (𝑉, 𝐸) be a graph with |𝐸| = 𝑚; then

greedyColoring(G).color ≤ ⌈√(2𝑚)⌉
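
Below is a runnable sketch of the same negative-marking idea in Python, assuming the graph is given as an adjacency dictionary; it is an equivalent reformulation for illustration, not the lecture's exact pseudocode.

def greedy_coloring(V, neighbors):
    # neighbors[i]: set of nodes adjacent to i
    f = {i: 0 for i in V}   # 0 = uncolored, -c = blocked for color c, > 0 = final color
    color, counter = 0, 0
    while counter < len(V):
        color += 1
        for i in V:
            if f[i] < 1 and f[i] != -color:   # uncolored and not blocked for this color
                f[i] = color
                counter += 1
                for j in neighbors[i]:
                    if f[j] < 1:
                        f[j] = -color         # block uncolored neighbors for this color
    return f, color

# Usage: a triangle {1, 2, 3} plus a pendant node 4; three colors suffice.
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_coloring([1, 2, 3, 4], neighbors))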


Johnson Coloring Algorithm:


algorithm johnsonColoring;
input   G = (V, E) : Graph;
output  f : array [1 .. n] of int;
        color : int;
var     U, W : subset of V;
for all v ∈ V do f[v] := 0 endfor; color := 0; W := V;

repeat
    U := W;
    color := color + 1;
    while U ≠ ∅ do
        determine a node u of minimal degree in GIS, the subgraph induced by U;
        f[u] := color;
        U := U − {u} − NU(u);   // NU(u): neighbors of u in U
        W := W − {u};
    endwhile
until W = ∅;
return (f, color);
endalgorithm johnsonColoring.

Example: applying johnsonColoring to the same 9-node example graph (nodes A–I) with colors {R: Red, G: Green, B: Blue} (figure).

Let 𝐺 = (𝑉, 𝐸) be a graph with |𝑉| = 𝑛; then

johnsonColoring(G).color ≤ ⌊4𝑛 · log(𝜒(𝐺)) / log 𝑛⌋
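
A runnable sketch of the Johnson heuristic, assuming the same adjacency-dictionary representation as in the greedy sketch above; again, the Python formulation and the function name are illustrative.

def johnson_coloring(V, neighbors):
    # neighbors[v]: set of nodes adjacent to v
    f = {}
    color = 0
    W = set(V)                                # nodes that are still uncolored
    while W:
        U = set(W)                            # candidates for the current color class
        color += 1
        while U:
            # pick a node of minimal degree in the subgraph induced by U
            u = min(U, key=lambda v: len(neighbors[v] & U))
            f[u] = color
            U -= {u} | (neighbors[u] & U)     # remove u and its neighbors inside U
            W.discard(u)
    return f, color

# Usage: the same example graph as in the greedy sketch; three colors are used.
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(johnson_coloring([1, 2, 3, 4], neighbors))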


Some examples for approximation problems:


(1) Fully approximable:
a. Triangle Traveling Salesman Problem
b. 2-Processor-Scheduling
(2) Partially approximable:
a. Vertex/Node Cover
(3) Not approximable:
a. Traveling Salesman Problem
b. Coloring Problem
