Module 2

Design and Analysis of Algorithms

Module 2

Greedy Strategy

Greedy method is an algorithm design strategy for solving optimization problems.

• Control Abstraction for the Greedy Strategy
• The Fractional Knapsack Problem
• Prim's and Kruskal's Algorithms for Minimum Spanning Tree
• Job Sequencing Problem
Module 2
Greedy Strategy

Greedy method is an algorithm design strategy for solving optimization problems.

This strategy is characterized by making a series of choices, each of which looks locally optimal, in the hope of finding a global optimum.
Greedy strategy

• Many problems have ‘n’ inputs and require us to obtain a subset that satisfies some constraints.
• Any subset that satisfies the constraints is called a feasible solution.
• A feasible solution that maximizes or minimizes the objective function is called an optimal solution.
Greedy strategy

A greedy algorithm works if the problem exhibits the following properties:

• Greedy choice
• Optimal substructure
Greedy choice
• The greedy approach constructs a solution through a sequence of steps.
• At each step a decision is made based on the greedy choice – whether the selected input leads to an optimal solution or not.
• If the data chosen in a step is optimal, it is added to the partial solution set and fixed.
• If it is infeasible, that data is eliminated from the solution set.
• Applying the greedy choice at each step reduces the problem size.
Optimal substructure

• A problem exhibits optimal substructure if the solution at each step is also optimal and leads to the globally optimal solution of the problem.
The greedy approach should have the following functions:
1. Candidate function: A function that checks whether the chosen set of items provides a solution.
2. Selection function: A function that tells which of the candidates is the most promising.
3. Feasibility function: A function that checks the feasibility of a set.
4. Objective function: A function, which does not appear explicitly in the algorithm, that gives the value of a solution.
Structure of Greedy Algorithm

1. Initially the set of chosen items is empty (that is, the solution set is empty).
2. At each step, an item is selected using the selection function.
3. IF the set would no longer be feasible, reject the item under consideration (it is never considered again).
4. ELSE the set is still feasible, so add the current item to the solution set.
Control abstraction
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs
{
    solution := Φ;                // initialize the solution
    for i := 1 to n
    {
        x := select(a);
        if (feasible(solution, x))
            solution := Union(solution, x);
    }
    return solution;
}
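The control abstraction above can be sketched in Python. Here `select` and `feasible` are caller-supplied functions, and the coin-change instance below is an illustrative example chosen for this sketch, not taken from the slides:

```python
def greedy(inputs, select, feasible):
    """Generic greedy control abstraction: repeatedly pick the most
    promising remaining input and keep it only if it stays feasible."""
    solution = []
    candidates = list(inputs)
    while candidates:
        x = select(candidates)      # most promising candidate
        candidates.remove(x)
        if feasible(solution, x):   # keep x only if still feasible
            solution.append(x)
    return solution

# Example instance: greedy coin change with denominations 25, 10, 5, 1.
def make_change(amount, coins=(25, 10, 5, 1)):
    picked = []
    for c in sorted(coins, reverse=True):  # greedy choice: largest coin first
        while amount >= c:                 # feasibility: does not overshoot
            picked.append(c)
            amount -= c
    return picked
```

For example, `make_change(63)` picks 25, 25, 10, 1, 1, 1 – each step is locally optimal, and for this coin system the result is globally optimal too.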
Advantages
• Simple to implement and understand
• Generally faster than exhaustive approaches
Disadvantages
• May not always produce the optimal solution
• Correctness of the algorithm can be hard to prove
Applications

• Fractional Knapsack problem
• Minimum Spanning Tree
• Job Sequencing
Fractional Knapsack problem

• This problem is based on maximizing the profit (benefit).
• We are given n objects with weights w1, w2, …, wn and profits p1, p2, …, pn, and a knapsack (bag) of capacity m.
• If a fraction xi (0 ≤ xi ≤ 1) of object i is placed in the knapsack, a profit of pi·xi is earned.
• The objective is to obtain a filling of the knapsack that maximizes the total profit earned.
• The total weight of all chosen objects must be at most m, since the capacity of the knapsack is m.
Formally the problem can be stated as:

Maximize Σ pi·xi              --- objective

Subject to Σ wi·xi ≤ m        --- constraint

and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n
Fractional Knapsack Problem

Algorithm GreedyFractionalKnapsack(m, n)
/* m – capacity of knapsack
   n – number of objects
   p[1..n] and w[1..n] contain the profits and weights of the objects respectively.
   The objects are arranged in decreasing order of (profit/weight).
   x[1..n] is the solution set. */
Fractional Knapsack Problem

Algorithm GreedyFractionalKnapsack(m, n)
{
    for i := 1 to n
        x[i] := 0.0;
    U := m;
    for i := 1 to n
    {
        if (w[i] > U) break;
        x[i] := 1.0;
        U := U - w[i];
    }
    if (i ≤ n)
        x[i] := U / w[i];
}
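The pseudocode above can be sketched in Python. The `(profit, weight)` input format is an illustrative choice, and the sort step is included here rather than assumed:

```python
def fractional_knapsack(capacity, items):
    """Greedy fractional knapsack: take items in decreasing profit/weight
    order, whole while they fit, then a fraction of the first that doesn't.

    items: list of (profit, weight) pairs.
    Returns (total_profit, fractions), where fractions[i] is the fraction
    taken of the i-th item in the sorted order.
    """
    # Sort by profit/weight ratio, highest first (the greedy choice).
    order = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)
    fractions = [0.0] * len(order)
    remaining = capacity
    total = 0.0
    for i, (p, w) in enumerate(order):
        if w <= remaining:            # whole object fits
            fractions[i] = 1.0
            remaining -= w
            total += p
        else:                         # take only the fitting fraction
            fractions[i] = remaining / w
            total += p * fractions[i]
            break
    return total, fractions
```

On the worked example below (capacity 20, profits 19/13/9, weights 13/8/6) this yields fractions 1.0, 1.0, 6/13 and a total profit of about 30.77 (the slide rounds 6/13 to 0.46, giving 30.74).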
Analysis of Fractional Knapsack Problem

• Let the number of items be n.
• Sorting the objects in decreasing order of p/w takes O(n log n) time.
• Selecting the items takes O(n) time.
• Therefore the total complexity is O(n log n).
• If the objects are already sorted by p/w, the time complexity becomes O(n).
Eg: Given a knapsack of capacity 20

object       A    B    C
profit (p)   19   13   9
weight (w)   13   8    6

Solution
Find p/w of each item

object       A     B     C
profit (p)   19    13    9
weight (w)   13    8     6
p/w          1.46  1.62  1.5
Capacity of knapsack is 20
Arrange the items in decreasing order of p/w:

object       B     C     A
profit (p)   13    9     19
weight (w)   8     6     13
p/w          1.62  1.5   1.46
x            1.0   1.0   0.46
index        1     2     3

Iteration 1: i = 1, U = 20; w[1] = 8 < 20, so x[1] = 1.0, U = 20 - 8 = 12
Iteration 2: i = 2, U = 12; w[2] = 6 < 12, so x[2] = 1.0, U = 12 - 6 = 6
Iteration 3: i = 3, U = 6; w[3] = 13 > 6, so x[3] = 6/13 = 0.46

Total profit earned = Σ xi·pi = 1×13 + 1×9 + 0.46×19 = 30.74
Minimum cost Spanning Tree (MST)
• A minimum spanning tree of G is a spanning tree of G
having a minimum cost.
• An MST can be generated using Prim’s or Kruskal’s Algorithm.
• Prim’s and Kruskal’s Algorithm are greedy algorithms.
• To apply these algorithms, the given graph must be
weighted, connected and undirected
Minimum cost Spanning Tree (MST)

• If all the edge weights are distinct, both algorithms are guaranteed to find the same MST.
• If the edge weights are not all distinct, the two algorithms may not produce the same MST.
• However, the cost of the MST produced is always the same in both cases.
Prim’s Algorithm-

Step-01:
Randomly choose any vertex.
• Usually a vertex incident to the least-weight edge is selected.

Step-02:
• Find all the edges that connect the tree to new vertices.
• Include the least-weight edge among those edges in the existing tree.
• If including that edge creates a cycle, reject it and look for the next least-weight edge.

Step-03:
Keep repeating Step-02 until all the vertices are included and the Minimum Spanning Tree (MST) is obtained.
Prim’s Algorithm-
Algorithm MST_PRIM(G, w, r)
{
1.   for (each u ∈ V[G])
     {
2.       key[u] = ∞
3.       P[u] = Nil
     }
4.   key[r] = 0
5.   Q = V[G]
6.   while (Q ≠ Φ)
     {
7.       u = Extract_Min(Q)
8.       for (each v ∈ Adj[u])
9.           if (v ∈ Q and w(u,v) < key[v])
             {
10.              P[v] = u
11.              key[v] = w(u,v)
             }
     }
}
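The pseudocode above can be sketched in Python, assuming the graph is given as an adjacency dict. Python's `heapq` has no decrease-key, so a lazy-deletion re-insert stands in for line 11's key update:

```python
import heapq

def prim_mst(graph, root):
    """Prim's algorithm with a binary heap as the min-priority queue.

    graph: adjacency dict {u: [(v, weight), ...]} of an undirected,
    connected, weighted graph.
    Returns (total_cost, parent) where parent[v] is v's predecessor
    in the MST (parent[root] is None).
    """
    key = {u: float('inf') for u in graph}   # best known edge weight to tree
    parent = {u: None for u in graph}
    key[root] = 0
    in_mst = set()
    heap = [(0, root)]
    while heap:
        k, u = heapq.heappop(heap)           # Extract_Min
        if u in in_mst:
            continue                         # stale entry (lazy deletion)
        in_mst.add(u)
        for v, w in graph[u]:
            if v not in in_mst and w < key[v]:
                key[v] = w                   # decrease-key via re-insert
                parent[v] = u
                heapq.heappush(heap, (w, v))
    total = sum(key.values())                # root contributes 0
    return total, parent
```

On the four-vertex example traced below (AB = 64, AC = 9, AD = 7, CD = 8, BD = 10, CB = 13) this returns cost 25 with P[B] = D, P[C] = D, P[D] = A, matching the final table.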
Prim’s Algorithm

Steps 1-4: Initialization

VERTEX   A    B    C    D
key[u]   0    ∞    ∞    ∞
P[u]     Nil  Nil  Nil  Nil

Step 5: Queue creation – min-priority queue Q

Vertex   A    B    C    D
key      0    ∞    ∞    ∞
Step 6 – while loop: constructing the MST. The algorithm exits the loop when Q becomes empty.

u = A (B, C, D still in queue):
    key[B] = ∞ → w(AB) = 64, P[B] = A
    key[C] = ∞ → w(AC) = 9,  P[C] = A
    key[D] = ∞ → w(AD) = 7,  P[D] = A
u = D (B, C still in queue; A not in queue):
    key[B] = 64 → w(DB) = 10, P[B] = D
    key[C] = 9  → w(DC) = 8,  P[C] = D
u = C (B still in queue; A and D not in queue):
    key[B] = 10, w(CB) = 13 – no change
u = B: A, C and D are not in queue – no change

Final result:
VERTEX   A    B    C    D
key[v]   0    10   8    7
P[v]     Nil  D    D    A
Time Complexity Analysis
• The running time of Prim’s algorithm depends on how we implement the priority queue.
• If we use a min-heap as the priority queue, operations like extracting the minimum element and decreasing a key value take O(log V) time.
• So the overall time complexity
  = O(V log V + E log V) = O(E log V)
Time Complexity Analysis
• This time complexity can be improved using a Fibonacci heap.
• If we use a Fibonacci heap as the priority queue, extracting the minimum element takes O(log V) time and decreasing a key value takes O(1) amortized time.
• So the overall time complexity
  = O(E + V log V)
Prim’s Algorithm Time Complexity-

Worst case time complexity of Prim’s Algorithm is-

• O(E log V) using a binary heap
• O(E + V log V) using a Fibonacci heap

Kruskal’s Algorithm
Step-1:
Sort all the edges from low weight to high weight.

Step-2:
Take the edge with the lowest weight and use it to connect the vertices of the graph.
If adding an edge creates a cycle, reject that edge and go for the next least-weight edge.

Step-3:
Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.
Kruskal’s Algorithm

Algorithm Kruskals(G, w)
{
1.   MST = Φ
2.   for each vertex v ∈ V(G)
3.       Make_Set(v)                       // fn to make disjoint sets
4.   sort the edges in E(G) by increasing weight w
5.   for each sorted edge (u,v) ∈ E(G)
     {
6.       if (Find_Set(u) ≠ Find_Set(v))    // find the representative element of a set
         {
7.           MST = MST U {(u,v)}
8.           Union(u,v)                    // combine two disjoint sets
         }
     }
9.   Return MST
}
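The pseudocode above can be sketched in Python with the disjoint sets implemented using union by rank and path compression, the heuristics assumed in the time analysis. The `(weight, u, v)` edge format is an illustrative choice:

```python
def kruskal_mst(vertices, edges):
    """Kruskal's algorithm with union-find (union by rank + path compression).

    vertices: iterable of vertex names.
    edges: list of (weight, u, v) tuples of an undirected weighted graph.
    Returns (total_cost, mst_edges).
    """
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):                      # Find_Set with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # halve the path as we climb
            x = parent[x]
        return x

    def union(a, b):                  # Union by rank
        ra, rb = find(a), find(b)
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1

    mst, total = [], 0
    for w, u, v in sorted(edges):     # Step 4: edges by increasing weight
        if find(u) != find(v):        # Step 6: the edge creates no cycle
            mst.append((u, v, w))
            total += w
            union(u, v)
    return total, mst
```

On the example traced below (AD = 7, CD = 8, AC = 9, BD = 10, BC = 13, AB = 64) it selects AD, CD and BD, rejects AC, and returns cost 25.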
Kruskal’s Algorithm – Example
Graph: vertices A, B, C, D with edges AD = 7, CD = 8, AC = 9, BD = 10, BC = 13, AB = 64.

Steps 2-3: { {A}, {B}, {C}, {D} }

Step 4: sorted edges
Edge  Cost
AD    7
CD    8
AC    9
BD    10
BC    13
AB    64

Steps 5-9:
Edge  Find_Set(u)  Find_Set(v)  Select edge  Union
AD    A            D            Y            {A,D} { {B}, {C} }
CD    C            D            Y            {A,C,D} { {B} }
AC    C            C            N
BD    B            C            Y            {A,B,C,D}
Kruskal’s Algorithm – Time Analysis
(suppose we implement the disjoint sets with union by rank and path compression heuristics)

Algorithm Kruskals(G,w)
{
    MST = Φ                                        --------- 1
    for each vertex v ∈ V(G)                       --------- V
        Make_Set(v)                                --------- V × O(log V)
    sort the edges in E(G) by increasing weight w  --------- O(E log E)
    for each sorted edge (u,v) ∈ E(G)              --------- E
    {
        if (Find_Set(u) ≠ Find_Set(v))             --------- E × O(log V)
        {
            MST = MST U {(u,v)}                    --------- E × O(1)
            Union(u,v)                             --------- E × O(log V)
        }
    }
    Return MST                                     --------- 1
}
Total = V + E log E + 2E + 2E log V + V log V + 2
T(n) = O(E log E)
Kruskal’s Algorithm Time Complexity-

Analysis-

• The edges are maintained as a min-heap.
• The sorting can be done in O(E log E) time if the graph has E edges.
• Reconstructing the MST (Step 5) takes O(E) time.
• So, Kruskal’s Algorithm takes O(E log E) time.
• The value of E can be at most O(V²).
• So, O(log V) and O(log E) are the same asymptotically.
Kruskal’s Algorithm – Time Analysis

Union operation
• The Union operation uses Find_Set() to determine the roots of the trees containing the vertices. If the roots are distinct, the trees are combined by attaching the root of one to the root of the other.
• With this naive method, the height of the trees can grow as O(V).
• We can optimize it by using union by rank: union by rank always attaches the root of the shorter tree to the root of the taller tree. The worst-case complexity of Union is then O(log V).
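The union-by-rank optimization can be sketched in Python. The function names mirror the pseudocode; the rank table is the addition that keeps tree height O(log V) (path compression is omitted here to match the discussion):

```python
def make_set(parent, rank, v):
    parent[v] = v       # each element starts as its own root
    rank[v] = 0         # rank: upper bound on the tree's height

def find_set(parent, x):
    # Follow parent pointers up to the representative (root) of x's set.
    while parent[x] != x:
        x = parent[x]
    return x

def union(parent, rank, u, v):
    # Union by rank: attach the root of the shorter tree to the taller one.
    ru, rv = find_set(parent, u), find_set(parent, v)
    if ru == rv:
        return                      # already in the same set
    if rank[ru] < rank[rv]:
        ru, rv = rv, ru             # ensure ru is the taller tree
    parent[rv] = ru
    if rank[ru] == rank[rv]:
        rank[ru] += 1               # equal heights: the merged tree grows by 1
```

Because the rank (tree height) only increases when two equal-rank trees merge, a tree of rank r contains at least 2^r elements, which is what bounds Find_Set and Union at O(log V).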
Kruskal’s Algorithm – Disjoint Set Operations

Algorithm Make_Set(v)
{
    P[v] = v
}

Algorithm Find_Set(x)
{
    if x ≠ P[x]
        Return Find_Set(P[x])
    else
        Return x
}

Algorithm Union(u,v)
{
    ui = Find_Set(u)
    vi = Find_Set(v)
    P[ui] = vi
}
Algorithms are preferred when-
• Kruskal’s Algorithm is preferred when-
  • The graph is sparse.
  • There are few edges in the graph, like E = O(V).
  • The edges are already sorted or can be sorted in linear time.
• Prim’s Algorithm is preferred when-
  • The graph is dense.
  • There are a large number of edges in the graph, like E = O(V²).
Difference between Prim’s Algorithm and Kruskal’s Algorithm

• Prim’s: the tree that we are growing always remains connected. Kruskal’s: the tree that we are growing usually remains disconnected (a forest).
• Prim’s grows a solution from a random vertex by adding the next cheapest vertex to the existing tree. Kruskal’s grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.
• Prim’s is faster for dense graphs. Kruskal’s is faster for sparse graphs.
Practice problem – Kruskal’s Algorithm

[Figure: graph with vertices 1-7 and edges 1-6 = 10, 3-4 = 12, 2-7 = 14, 2-3 = 16, 4-7 = 18, 4-5 = 22, 5-7 = 24, 5-6 = 25, 1-2 = 28]

Edges selected in increasing weight order: 1-6 (10), 3-4 (12), 2-7 (14), 2-3 (16), 4-5 (22), 5-6 (25). Edges 4-7 (18), 5-7 (24) and 1-2 (28) are rejected because they create cycles.

Cost = 99
Prim’s Algorithm
Practice problem (same graph)

Starting from vertex 1:
• Min(10, 28) = 10 → Edge 1-6
• Min(25, 28) = 25 → Edge 6-5
• Min(22, 24, 28) = 22 → Edge 5-4
• Min(24, 28, 18, 12) = 12 → Edge 4-3
• Min(24, 28, 18, 16) = 16 → Edge 3-2
• Min(24, 28, 18, 14) = 14 → Edge 2-7

Cost = 99
Job Sequencing with Deadlines

The job sequencing problem can be defined as follows:
• We have a set S = {1, 2, 3, …, n} of n jobs, each with a deadline di and profit pi.
• Each job takes one unit of time to complete on the only available single-processor machine.
• The greedy method is used to maximize the profit by finding the subset of jobs that can be completed on the machine without missing their deadlines.
Algorithm Jobsequencing(S)
{
/* The set S is a set of jobs, each with deadline di and profit pi.
   The jobs are arranged in descending order of profit and renamed 1, 2, 3, …, N,
   i.e. S = {1, 2, 3, …, N}.
   The output is the set A of selected jobs. */
Algorithm Jobsequencing(S)
{
    N = length(S)
    A = {1}
    Assign slot [d1-1, d1] to job 1
    for i = 2 to N
    {
        for m = di to 1 step -1
        {
            if (slot [m-1, m] is free)
            {
                Assign job i to slot [m-1, m]
                A = A U { i }
                break
            }
        }
        // if (m = 0) ignore the job
    }
    Return A
}
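The pseudocode above can be sketched in Python, assuming jobs arrive as `(name, profit, deadline)` tuples; the sort by decreasing profit is included here rather than presumed:

```python
def job_sequencing(jobs):
    """Greedy job sequencing with deadlines: consider jobs in decreasing
    profit order and place each in the latest free unit slot before its
    deadline; a job with no free slot is ignored.

    jobs: list of (name, profit, deadline) tuples.
    Returns (total_profit, schedule) where schedule maps slot index t
    (the unit interval [t, t+1]) to the job scheduled there.
    """
    ordered = sorted(jobs, key=lambda j: j[1], reverse=True)
    max_deadline = max(d for _, _, d in jobs)
    schedule = {}                          # slot index -> job name
    total = 0
    for name, profit, deadline in ordered:
        # Try the latest slot first so earlier slots stay free for
        # jobs with tighter deadlines.
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if t not in schedule:
                schedule[t] = name
                total += profit
                break                      # job placed; otherwise ignored
    return total, schedule
```

On Eg: 2 below (profits 100/19/27/25/15, deadlines 2/1/2/1/3) this selects A in slot [1-2], C in slot [0-1] and E in slot [2-3] for a total profit of 142, matching the worked example.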
Time analysis

Algorithm Jobsequencing(S)
{
    N = length(S)
    A = {1}
    Assign slot [d1-1, d1] to job 1
    for i = 2 to N                        ---------- n
    {
        for m = di to 1 step -1           ---------- n²
        {
            if (slot [m-1, m] is free)
            {
                Assign job i to slot [m-1, m]
                A = A U { i }
                break
            }
        }
    }
    Return A
}
Total = n + n²
T(n) = O(n²)
Eg: 1 Given

job          C    A    D    B    E
Profit Pi    10   30   5    25   3
Deadline Di  1    2    4    4    3

Arrange the jobs in decreasing order of profit & number the jobs

job          A    B    C    D    E
Number       1    2    3    4    5
Profit Pi    30   25   10   5    3
Deadline Di  2    4    1    4    3

Job selection and slots:

job          A      B      C      D      E
Number       1      2      3      4      5
Profit Pi    30     25     10     5      3
Deadline Di  2      4      1      4      3
Select       Yes    Yes    Yes    Yes    No
Slot         [1-2]  [3-4]  [0-1]  [2-3]
Eg: 2 Given

job          A    B    C    D    E
Profit Pi    100  19   27   25   15
Deadline Di  2    1    2    1    3

Arrange the jobs in decreasing order of profit & number the jobs

job          A    C    D    B    E
Job Number   1    2    3    4    5
Profit Pi    100  27   25   19   15
Deadline Di  2    2    1    1    3

Jobs selected and slots:

job          A      C      E
Job Number   1      2      5
Profit Pi    100    27     15
Deadline Di  2      2      3
Slot         [1-2]  [0-1]  [2-3]
Eg: 3 Given four jobs as follows

job          A    B    C    D
Profit Pi    20   10   40   30
Deadline Di  4    1    1    1

Arrange the jobs in decreasing order of profit & number the jobs

job          C    D    A    B
Job No.      1    2    3    4
Profit Pi    40   30   20   10
Deadline Di  1    1    4    1

Jobs selected and slots:

job          C      A
Job Number   1      3
Profit Pi    40     20
Deadline Di  1      4
Slot         [0-1]  [3-4]

You might also like