UNIT-IV

GREEDY ALGORITHM

A greedy algorithm always makes the choice that looks best at the moment. That is, it makes
a locally optimal choice in the hope that this choice leads to a globally optimal solution.
Greedy algorithms are simple and often more efficient than other optimization techniques.
However, this heuristic strategy does not always produce an optimal solution, though sometimes it does.
There are two key ingredients in greedy algorithm that will solve a particular optimization
problem.
1. Greedy choice property
2. Optimal substructure

1. Greedy choice property:

A globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
In other words, when a choice is to be made, the algorithm picks the choice that looks best
for the current problem, without considering results from the sub-problems. The choice that
seems best at the moment is made first, and the sub-problems are solved after the choice is made.
The choices made by a greedy algorithm may depend on the choices made so far, but they cannot
depend on any future choice or on the solutions to the sub-problems. The algorithm progresses in a
top-down manner, making one greedy choice after another and reducing each given
problem instance to a smaller one.

2. Optimal substructure:

A problem is said to have optimal substructure if an optimal solution can be constructed
efficiently from optimal solutions to its sub-problems. The optimal substructure varies across
problem domains in two ways-
i) How many sub-problems are used in an optimal solution to the original
problem.
ii) How many choices we have in determining which sub-problem(s) to use in an optimal
solution.
In a greedy algorithm a sub-problem is created by making the greedy choice in the
original problem. Here, an optimal solution to the sub-problem, combined with the greedy
choice already made, yields an optimal solution to the original problem.
KNAPSACK PROBLEM

There are n items; the ith item is worth vi dollars and weighs wi pounds, where vi and wi are
integers. Select items to put in a knapsack of capacity W pounds so that the total
value is maximized. This is called the knapsack problem: it asks which items to choose
from the n items so that the total weight is at most W and the profit is maximum.
The problem can be stated as follows-

A thief robbing a store finds n items; the ith item is worth vi dollars and weighs wi pounds,
where vi and wi are integers. He wants to take as valuable a load as possible, but he can
carry at most W pounds in his knapsack, where W is an integer. Which items should he take?

There are two types of knapsack problem.


1. 0-1 knapsack problem: In the 0-1 knapsack problem each item must either be taken whole
or left behind.

2. Fractional knapsack problem: In the fractional knapsack problem fractions of items may
be chosen.

Consider the following problem-

There are 3 items and the knapsack can hold 50 pounds. Item1 weighs 10 pounds and is
worth 60 dollars, item2 weighs 20 pounds and is worth 100 dollars, and item3 weighs 30
pounds and is worth 120 dollars. Find the selection of items with maximum profit that the
knapsack can carry.
Solution:

Here, W = 50 pounds

Item    Weight (w pounds)    Worth (v dollars)
Item1   10                   60
Item2   20                   100
Item3   30                   120

Let an item i have weight wi pounds and worth vi dollars. Value per pound of i = vi / wi. Thus,
the value per pound of each item is-
Item1 = v1 / w1 = 60 dollars / 10 pounds = 6 dollars/pound
Item2 = v2 / w2 = 100 dollars / 20 pounds = 5 dollars/pound
Item3 = v3 / w3 = 120 dollars / 30 pounds = 4 dollars/pound

We can select a maximum of 50 pounds. Using the greedy strategy on the 0-1 knapsack problem,
the 1st choice is Item1 and the 2nd choice is Item2.
Total weight = 10 + 20 pounds = 30 pounds
Total worth = 60 + 100 dollars = 160 dollars

But this is not the optimal choice. The optimal choice is to take items 2 and 3. Then,
Total weight = 20 + 30 pounds = 50 pounds
Total worth = 100 + 120 dollars = 220 dollars
Hence, the 0-1 knapsack problem is not solved optimally by the greedy strategy.

Now, using the greedy strategy on the fractional knapsack problem, the 1st choice is Item1
and the 2nd choice is Item2, giving a total weight of 30 pounds.
But the size of the knapsack is 50 pounds.
So, it takes the remaining 20 pounds from item3 (a fraction of item3), whose
worth is 4 x 20 = 80 dollars.
Hence, Total weight = 50 pounds
Total worth = 60 + 100 + 80 dollars = 240 dollars
Hence, an optimal solution to the fractional knapsack problem can be obtained using the greedy
strategy.
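The greedy strategy above can be sketched in Python. This is an illustrative sketch, not part of the original text; the item list and capacity are the values from the worked example.

```python
def fractional_knapsack(items, capacity):
    """items: list of (weight, value) pairs; returns the maximum total value."""
    # Greedy choice: take items in decreasing order of value per pound.
    items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    total_value = 0.0
    for weight, value in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)            # take only a fraction if needed
        total_value += value * (take / weight)
        capacity -= take
    return total_value

# Example from the text: W = 50, items (10, 60), (20, 100), (30, 120)
print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))  # 240.0
```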

JOB SEQUENCING WITH DEADLINE

Now, we will discuss the job sequencing problem. The problem is
stated as below-
1. There are n jobs to be processed on a machine.
2. Each job i has a deadline di ≥ 0 and profit pi ≥ 0.
3. pi is earned iff the job is completed by its deadline.
4. To complete a job, it is processed on one machine for one unit of time.
5. Only one machine is available for processing jobs.
6. Only one job is processed at a time on the machine.
7. A feasible solution is a subset of jobs J such that each job in it is completed by its
deadline.
8. An optimal solution is a feasible solution with maximum profit.

This problem can be solved by a greedy algorithm. For the optimal solution, after choosing a
job, it adds the next job to the subset such that ∑i∈J pi increases and the resulting subset
remains feasible; ∑i∈J pi is the total profit of the subset of jobs J. In other words, we have
to check all possible feasible subsets J with their total profit values, for a given set of jobs.
A set of jobs J is a feasible solution if the jobs of J can be processed in some
order without violating any deadline.

Example :

Let ,
no. of job, n = 4 and jobs are 1, 2, 3, 4
profit (p1,p2,p3,p4) = (100,10,15,27)
deadline (d1,d2,d3,d4) = (2,1,2,1).
Find the optimal solution set.
Solution:

Sl. No.   Feasible Solution   Processing Sequence   Profit
1         (1, 2)              (2, 1)                110
2         (1, 3)              (1, 3) or (3, 1)      115
3         (1, 4)              (4, 1)                127
4         (2, 3)              (2, 3)                25
5         (3, 4)              (4, 3)                42
6         (1)                 (1)                   100
7         (2)                 (2)                   10
8         (3)                 (3)                   15
9         (4)                 (4)                   27
Here solution 3 is optimal. The optimal solution is obtained by processing jobs 1 and 4 in
the order job 4 followed by job 1. The maximum profit is 127: job 4 begins at
time zero and job 1 ends at time 2.
Consider solution 3, i.e. the maximum-profit job subset J = (1, 4). At first J = Ø and ∑i∈J
pi = 0. Job 1 is added to J as it has the largest profit and gives a feasible solution. Next, job 4
is added; J = (1, 4) is still feasible because if the jobs are processed in the sequence (4, 1), then
job 4 starts at time zero and job 1 finishes at time 2, within its deadline. Next, if job 3 is
added then J = (1, 3, 4) is not feasible because jobs 1, 3 and 4 cannot all be completed within their
deadlines, so job 3 is not added to the set. Similarly, after adding job 2, J = (1, 2, 4) is not
feasible. Hence J = (1, 4) is a feasible solution set with maximum profit
127. This is an optimal solution.
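The greedy procedure described above can be sketched in Python. This is an illustrative sketch, not from the original text; it considers jobs in decreasing profit order and schedules each into the latest free unit-time slot on or before its deadline.

```python
def job_sequencing(jobs):
    """jobs: list of (profit, deadline) pairs; returns the maximum total profit."""
    # Greedy choice: consider jobs in decreasing order of profit.
    jobs = sorted(jobs, reverse=True)
    max_deadline = max(d for _, d in jobs)
    slot = [None] * (max_deadline + 1)    # slot[t] = profit of job finishing at time t
    total_profit = 0
    for profit, deadline in jobs:
        # Place the job in the latest free slot on or before its deadline.
        for t in range(deadline, 0, -1):
            if slot[t] is None:
                slot[t] = profit
                total_profit += profit
                break
    return total_profit

# Example from the text: profits (100, 10, 15, 27), deadlines (2, 1, 2, 1)
print(job_sequencing([(100, 2), (10, 1), (15, 2), (27, 1)]))  # 127
```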

Prim’s Algorithm for Minimum Spanning Tree (MST)

Introduction to Prim’s algorithm:


We have discussed Kruskal’s algorithm for Minimum Spanning Tree. Like Kruskal’s algorithm,
Prim’s algorithm is also a Greedy algorithm. This algorithm always starts with a single node
and moves through several adjacent nodes, in order to explore all of the connected edges
along the way.
The algorithm starts with an empty spanning tree. The idea is to maintain two sets of
vertices. The first set contains the vertices already included in the MST, and the other set
contains the vertices not yet included. At every step, it considers all the edges that connect
the two sets and picks the minimum weight edge from these edges. After picking the edge, it
moves the other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph
theory. So, at every step of Prim’s algorithm, find a cut, pick the minimum weight edge from
the cut, and include the new endpoint in the MST set (the set that contains the already included vertices).
How does Prim’s Algorithm Work?
The working of Prim’s algorithm can be described by using the following steps:
Step 1: Choose an arbitrary vertex as the starting vertex of the MST.
Step 2: Follow steps 3 to 5 while there are vertices that are not included in the MST.

Step 3: Find the edges connecting any tree vertex with the fringe vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any cycle.
Step 6: Return the MST and exit.

Note: For determining a cycle, we can divide the vertices into two sets [one set contains the
vertices included in MST and the other contains the fringe vertices.]

Illustration of Prim’s Algorithm:


Consider the following graph as an example for which we need to find the Minimum Spanning
Tree (MST).
Step 1: Firstly, we select an arbitrary vertex that acts as the starting vertex of the Minimum
Spanning Tree. Here we have selected vertex 0 as the starting vertex.

Step 2: All the edges connecting the incomplete MST and other vertices are the edges {0, 1}
and {0, 7}. Between these two the edge with minimum weight is {0, 1}. So include the edge
and vertex 1 in the MST.

Step 3: The edges connecting the incomplete MST to other vertices are {0, 7}, {1, 7} and
{1, 2}. Among these edges the minimum weight is 8 which is of the edges {0, 7} and {1, 2}.
Let us here include the edge {0, 7} and the vertex 7 in the MST. [We could have also
included edge {1, 2} and vertex 2 in the MST].

Step 4: The edges that connect the incomplete MST with the fringe vertices are {1, 2}, {7,
6} and {7, 8}. Add the edge {7, 6} and the vertex 6 in the MST as it has the least weight
(i.e., 1).
Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include edge {6, 5}
and vertex 5 in the MST as the edge has the minimum weight (i.e., 2) among them.


Step 6: Among the current connecting edges, the edge {5, 2} has the minimum weight. So
include that edge and the vertex 2 in the MST.


Step 7: The connecting edges between the incomplete MST and the other edges are {2, 8},
{2, 3}, {5, 3} and {5, 4}. The edge with minimum weight is edge {2, 8} which has weight 2.
So include this edge and the vertex 8 in the MST.

Step 8: See here that the edges {7, 8} and {2, 3} both have same weight which are
minimum. But 7 is already part of MST. So we will consider the edge {2, 3} and include that
edge and vertex 3 in the MST.


Step 9: Only the vertex 4 remains to be included. The minimum weighted edge from the
incomplete MST to 4 is {3, 4}.


The final structure of the MST is as follows and the weight of the edges of the MST is (4 + 8
+ 1 + 2 + 4 + 2 + 7 + 9) = 37.


Note: If we had selected the edge {1, 2} in the third step, an alternate MST of the same
total weight would have been formed.

How to implement Prim’s Algorithm?


Follow the given steps to implement Prim’s algorithm for finding the MST of a
graph:
 Create a set mstSet that keeps track of vertices already included in MST.
 Assign a key value to all vertices in the input graph. Initialize all key values as INFINITE.
Assign the key value as 0 for the first vertex so that it is picked first.
 While mstSet doesn’t include all vertices
 Pick a vertex u that is not there in mstSet and has a minimum key value.
 Include u in the mstSet.
 Update the key value of all adjacent vertices of u. To update the key values, iterate through
all adjacent vertices.
 For every adjacent vertex v, if the weight of edge u-v is less than the previous key value of v,
update the key value as the weight of u-v.
The idea of using key values is to pick the minimum weight edge from the cut. Key
values are used only for vertices that are not yet included in the MST; the key value of such a
vertex indicates the minimum weight of an edge connecting it to the set of vertices already included
in the MST.
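The steps above can be sketched in Python. This is an illustrative sketch, not the original pseudocode: it replaces the linear minimum-key scan with a min-heap (a common optimization with the same greedy choice), and the edge list below is reconstructed from the weights used in the illustration.

```python
import heapq

def prims_mst(adj, start=0):
    """adj: {u: [(weight, v), ...]} for an undirected weighted graph.
    Returns the total weight of the MST."""
    in_mst = set()
    heap = [(0, start)]                 # (key, vertex); the start vertex gets key 0
    total = 0
    while heap and len(in_mst) < len(adj):
        key, u = heapq.heappop(heap)    # minimum-key vertex across the cut
        if u in in_mst:
            continue                    # stale heap entry; u was already included
        in_mst.add(u)
        total += key
        for w, v in adj[u]:
            if v not in in_mst:
                heapq.heappush(heap, (w, v))
    return total

# Edges of the 9-vertex example graph, reconstructed from the illustration
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 5, 4), (2, 8, 2), (3, 4, 9), (3, 5, 14), (5, 4, 10),
         (6, 5, 2), (7, 6, 1), (7, 8, 7), (8, 6, 6)]
adj = {v: [] for v in range(9)}
for u, v, w in edges:
    adj[u].append((w, v))
    adj[v].append((w, u))
print(prims_mst(adj))  # 37, matching the total computed above
```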

Kruskal’s Minimum Spanning Tree (MST) Algorithm

A minimum spanning tree (MST), or minimum weight spanning tree, for a weighted, connected,
undirected graph is a spanning tree with a weight less than or equal to the weight of every
other spanning tree.

Introduction to Kruskal’s Algorithm:


Here we will discuss Kruskal’s algorithm to find the MST of a given weighted graph.
Kruskal’s algorithm first sorts all edges of the given graph in increasing order of weight. Then
it keeps adding new edges and nodes to the MST as long as the newly added edge does not form a
cycle. It picks the minimum weighted edge first and the maximum weighted edge last. Thus we
can say that it makes a locally optimal choice at each step in order to find the optimal solution.
Hence this is a greedy algorithm.
How to find MST using Kruskal’s algorithm?
Below are the steps for finding MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If the
cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
Step 2 uses the Union-Find algorithm to detect cycles.
So we recommend reading the following post as a prerequisite.
 Union-Find Algorithm | Set 1 (Detect Cycle in a Graph)
 Union-Find Algorithm | Set 2 (Union By Rank and Path Compression)

Kruskal’s algorithm to find the minimum cost spanning tree uses the greedy approach. The
Greedy Choice is to pick the smallest weight edge that does not cause a cycle in the MST
constructed so far. Let us understand it with an example:
Illustration:
Below is the illustration of the above approach:
Input Graph:

The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will
have (9 − 1) = 8 edges.
After sorting:

Weight   Source   Destination
1        7        6
2        8        2
2        6        5
4        0        1
4        2        5
6        8        6
7        2        3
7        7        8
8        0        7
8        1        2
9        3        4
10       5        4
11       1        7
14       3        5
Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6. No cycle is formed, include it.


Step 2: Pick edge 8-2. No cycle is formed, include it.


Step 3: Pick edge 6-5. No cycle is formed, include it.


Step 4: Pick edge 0-1. No cycle is formed, include it.



Step 5: Pick edge 2-5. No cycle is formed, include it.


Step 6: Pick edge 8-6. Since including this edge results in a cycle, discard it. Pick edge
2-3. No cycle is formed, include it.


Step 7: Pick edge 7-8. Since including this edge results in a cycle, discard it. Pick edge
0-7. No cycle is formed, include it.


Step 8: Pick edge 1-2. Since including this edge results in a cycle, discard it. Pick edge
3-4. No cycle is formed, include it. The spanning tree now has 8 edges, so the algorithm
stops; the total weight of the MST is 37, the same as obtained by Prim’s algorithm.
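The steps above can be sketched in Python. This is an illustrative sketch, not from the original text; cycle detection in step 2 uses a simple Union-Find with path compression, and the edge list is taken from the sorted table above.

```python
def kruskal_mst(n, edges):
    """n: number of vertices; edges: list of (weight, u, v) triples.
    Returns the total weight and the list of MST edges."""
    parent = list(range(n))

    def find(x):                       # root of x's component, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # step 1: non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # step 2: no cycle, so include the edge
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:      # step 3: stop at (V - 1) edges
                break
    return total, mst

# The 14 edges from the sorted table above, as (weight, source, destination)
edges = [(1, 7, 6), (2, 8, 2), (2, 6, 5), (4, 0, 1), (4, 2, 5), (6, 8, 6),
         (7, 2, 3), (7, 7, 8), (8, 0, 7), (8, 1, 2), (9, 3, 4), (10, 5, 4),
         (11, 1, 7), (14, 3, 5)]
total, mst = kruskal_mst(9, edges)
print(total)  # 37
```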
DIJKSTRA’S ALGORITHM

Shortest path problem:

For a given weighted, directed graph G = (V, E), the shortest path problem is the
problem of finding a shortest path between two vertices u, v ∈ V of G. Shortest paths have
the property that a shortest path between two vertices contains other
shortest paths within it, i.e. any sub-path of a shortest path is itself a shortest path.

Single source shortest path problem:


In the single source shortest path problem, there is a single source vertex s in the vertex
set V of the graph G = (V, E). The problem is to find the shortest
path from the source vertex s to every other vertex v ∈ V.

Optimal substructure of a shortest path:

Optimal substructure of a shortest path means that any sub-path of a
shortest path is also a shortest path. Here is the lemma-

Lemma:

Given a weighted, directed graph G = (V, E) with weight function w: E → R, let p = (v1, v2,
…, vk) be a shortest path from vertex v1 to vertex vk, and for any i and j such that 1 ≤
i ≤ j ≤ k, let pij = (vi, vi+1, …, vj) be the sub-path of p from vertex vi to vertex vj. Then pij is a
shortest path from vi to vj.

Dijkstra’s algorithm solves the single source shortest path problem. But the algorithm works
only on a directed, positively weighted graph. A positively weighted graph means one where
the weights of all edges are non-negative, i.e. if G = (V, E) is a positively weighted graph then
w(u, v) ≥ 0. Dijkstra’s algorithm is a greedy algorithm.
Dijkstra’s algorithm is as follows-

For a given graph G = (V, E) and a source vertex s, it maintains a set F of vertices. Initially no
vertex is in F. For a vertex u ∈ V − F (i.e. a vertex which is in V but not in F), if it has the
minimum shortest-path estimate from source s then u is added to F. This process continues
until V − F is empty.

DIJKSTRA(G, w, s)
1. INITIALIZE_SINGLE_SOURCE(G, s)
   1.1 for each vertex v ∈ V[G]
   1.2     do d[v] = ∞
   1.3        π[v] = NIL
   1.4 d[s] = 0
2. F = Ø
3. Q = V[G]
4. while Q ≠ Ø
5.     do u = EXTRACT_MIN(Q)
6.        F = F ∪ {u}
7.        for each vertex v ∈ Adj[u]
8.            do RELAX(u, v, w)
               8.1 if d[v] > d[u] + w(u, v)
               8.2    then d[v] = d[u] + w(u, v)
               8.3         π[v] = u

In line 1 (from line 1.1 to 1.4) the values of d and π are initialized. Here d[v] means the distance
from the source to vertex v and π[v] means the parent of vertex v. Initially the source-to-source
distance is 0, so d[s] = 0. For all other vertices v ∈ V, d[v] is set to ∞ and π[v] to NIL.
In line 2 the set F is initialized to the empty set, as initially no vertex has been added to it.
In line 3, Q is a min-priority queue which initially contains the whole vertex set V[G]
of graph G.
The while loop of lines 4-8 continues until the min-priority queue Q becomes
empty.
In line 5 the vertex u ∈ V − F with the minimum distance estimate d[u] from source s
is extracted.
In line 6, u is added to F.

In lines 7-8 (from line 8.1 to 8.3), for every vertex v adjacent to u, the
distance to v through vertex u is calculated. If this value is less than d[v], then d[v] is updated
to this new value and the parent of v is set as π[v] = u.
Example: Apply dijkstra algorithm for the following graph G.

Dijkstra’s algorithm applied on G: initially d[s] = 0, π[s] = NIL, and the distances of all other
vertices are set to ∞:
d[a] = ∞, π[a] = NIL
d[b] = ∞, π[b] = NIL
d[c] = ∞, π[c] = NIL
d[d] = ∞, π[d] = NIL
s is added to F, so F = {s}.

In the 2nd iteration, Adj[s] = {a, b} and neither is in F. Now
d[a] = d[s] + w(s, a) = 0 + 10 = 10
This new value is less than the previous d[a], i.e. 10 < ∞, so d[a] is updated to 10 and
π[a] = s. Similarly,
d[b] = d[s] + w(s, b) = 0 + 5 = 5 < ∞
so d[b] = 5, π[b] = s, while d[c] = ∞, π[c] = NIL and d[d] = ∞, π[d] = NIL.
Now, among d[a], d[b], d[c], d[d] the minimum value is d[b], i.e. vertex b has the
minimum distance from the source. Hence b is added to F, and F = {s, b}.

In the 3rd iteration, Adj[b] = {a, c, d} and none of them is in F.
For vertex a: d[a] = d[b] + w(b, a) = 5 + 3 = 8 < 10 (the previous d[a]), so d[a] = 8, π[a] = b.
For vertex c: d[c] = d[b] + w(b, c) = 5 + 9 = 14 < ∞, so d[c] = 14, π[c] = b.
For vertex d: d[d] = d[b] + w(b, d) = 5 + 2 = 7 < ∞, so d[d] = 7, π[d] = b.
Now, among d[a], d[c], d[d] the minimum value is d[d], so vertex d is added to F, and
F = {s, b, d}.

In the 4th iteration, Adj[d] = {c} and c is not in F. Now
d[c] = d[d] + w(d, c) = 7 + 6 = 13 < 14 (the previous d[c]), so d[c] = 13, π[c] = d,
while d[a] = 8, π[a] = b.

Now the minimum of d[a] and d[c] is d[a], so vertex a is added to F, and F = {s, b, d, a}.

In the 5th iteration the last added vertex is a, with Adj[a] = {b, c}. Here c is not in F, but b is
already in F, i.e. b has already been selected, so we consider vertex c only:
d[c] = d[a] + w(a, c) = 8 + 1 = 9 < 13 (the previous d[c]), so d[c] = 9, π[c] = a.
Hence c is added to F, and F = {s, b, d, a, c}.

There is no vertex left to add to F, so the algorithm terminates here.
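The algorithm can be sketched in Python with a binary min-heap standing in for the min-priority queue Q. This is an illustrative sketch, not from the original text; the graph literal encodes the edge weights used in the worked example, with edge directions assumed from the adjacency lists given there.

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, w), ...]} with non-negative edge weights.
    Returns the dictionary d of shortest distances from source."""
    d = {v: float('inf') for v in graph}
    d[source] = 0
    heap = [(0, source)]
    F = set()                            # vertices whose distance is final
    while heap:
        dist, u = heapq.heappop(heap)    # EXTRACT_MIN
        if u in F:
            continue                     # stale entry; u was already finalized
        F.add(u)
        for v, w in graph[u]:            # RELAX every edge (u, v)
            if d[v] > dist + w:
                d[v] = dist + w
                heapq.heappush(heap, (d[v], v))
    return d

# Edge weights from the worked example above
graph = {
    's': [('a', 10), ('b', 5)],
    'a': [('b', 3), ('c', 1)],
    'b': [('a', 3), ('c', 9), ('d', 2)],
    'd': [('c', 6)],
    'c': [],
}
print(dijkstra(graph, 's'))  # s=0, b=5, d=7, a=8, c=9, as derived above
```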
