UNIT 1 GREEDY TECHNIQUES

Structure
1.0 Introduction
1.1 Objectives
1.2 Some Examples to Understand Greedy Techniques
1.3 Formalization of Greedy Techniques
1.4 Knapsack (Fractional) Problem
1.5 Minimum Cost Spanning Tree (MCST) Problem
    1.5.1 Kruskal's Algorithm
    1.5.2 Prim's Algorithm
1.6 Single-Source Shortest Path Problem
    1.6.1 Bellman-Ford Algorithm
    1.6.2 Dijkstra's Algorithm
1.7 Summary
1.8 Solutions/Answers
1.9 Further Readings
1.0 INTRODUCTION
In this unit, we will discuss those problems for which the greedy technique gives an optimal solution, such as the Knapsack problem, the Minimum Cost Spanning Tree (MCST) problem and the Single-Source Shortest Path problem.
1.1 OBJECTIVES

1.2 SOME EXAMPLES TO UNDERSTAND GREEDY TECHNIQUES

In order to understand greedy algorithms better, let us consider some examples:
Suppose we are given Indian currency notes of all denominations, e.g.
{1,2,5,10,20,50,100,500,1000}. The problem is to find the minimum number of
currency notes to make the required amount A, for payment. Further, it is assumed
that currency notes of each denomination are available in sufficient numbers, so that
one may choose as many notes of the same denomination as are required for the
purpose of using the minimum number of notes to make the amount A.
In the following examples, we will notice that for the problem discussed above the greedy algorithm provides an optimal solution (see Example 1); in some other cases the greedy algorithm does not provide a solution even when a solution by some other method exists (see Example 2); and sometimes the greedy algorithm provides a solution that is not optimal (see Example 3).
Example 1

To make the amount Rs. 289, at each stage the greedy technique picks up a note of denomination D such that

i) D ≤ 289, and
ii) if D1 is another available denomination such that D1 ≤ 289, then D1 ≤ D.
In other words, the picked-up note's denomination D is the largest among all the denominations satisfying condition (i) above.
To deliver Rs. 289 with minimum number of currency notes, the notes of different
denominations are chosen and rejected as shown below:
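The choose-and-reject process described above can be sketched in Python. This is a minimal illustration added for clarity; the function name and structure are our own, not from the original text:

```python
def greedy_notes(amount, denominations):
    """Repeatedly pick the largest denomination note that still fits."""
    picked = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            picked.append(d)
            amount -= d
    return picked if amount == 0 else None  # None: greedy failed to make the amount

print(greedy_notes(289, [1, 2, 5, 10, 20, 50, 100, 500, 1000]))
# [100, 100, 50, 20, 10, 5, 2, 2]
print(greedy_notes(90, [20, 30, 50]))
# None
```

For the Indian denomination set the greedy choice happens to succeed, while for the denominations {20, 30, 50} and amount 90 the same procedure gets stuck (as Example 2 shows) and returns None.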
Example 2
Next, we consider an example in which for a given amount A and a set of available
denominations, the greedy algorithm does not provide a solution, even when a
solution by some other method exists.
Let us consider a hypothetical country in which notes available are of only the
denominations 20, 30 and 50. We are required to collect an amount of 90.
i) First, pick up a note of denomination 50, because 50 ≤ 90. The amount obtained by adding the denominations of all notes picked up so far is 50.
ii) Next, we cannot pick up a note of denomination 50 again, because if we picked up another note of denomination 50, the amount of the picked-up notes would become 100, which is greater than 90. Therefore, we do not pick up any more notes of denomination 50 or above.
iii) Therefore, we pick up a note of the next denomination, viz., 30. The amount made up by the sum of the denominations 50 and 30 is 80, which is less than 90. Therefore, we accept a note of denomination 30.
iv) Again, we cannot pick up another note of denomination 30, because otherwise the sum of the denominations of the picked-up notes becomes 80 + 30 = 110, which is more than 90. Therefore, we do not pick up any more notes of denomination 30 or above.
v) Next, we attempt to pick up a note of the next denomination, viz., 20. But in that case the sum of the denominations of the picked-up notes becomes 80 + 20 = 100, which is again greater than 90. Therefore, we do not pick up any note of denomination 20 or above.
vi) Next, we attempt to pick up a note of a still lesser denomination. However, no lesser denominations are available.
vi) Next, we attempt to pick up a note of still next lesser denomination. However,
there are no more lesser denominations available.
Thus, the greedy technique fails to make up the amount 90, even though solutions exist; for example, 50 + 20 + 20 = 90, and another alternative solution is to pick up 3 notes each of denomination 30.
Example 3
Next, we consider an example in which the greedy technique, of course, leads to a
solution, but the solution yielded by greedy technique is not optimal.
Again, we consider a hypothetical country in which notes available are of only the denominations 10, 40 and 60. We are required to collect an amount of 80.

Using the greedy technique, to make an amount of 80, we first use a note of denomination 60. For the remaining amount of 20, we can choose notes of only denomination 10; so we choose one note of denomination 10 and, finally, for the remaining amount of 10, another note of denomination 10. Thus, the greedy technique suggests the following solution using 3 notes: 80 = 60 + 10 + 10. This solution, however, is not optimal, since the amount can be made with only 2 notes: 80 = 40 + 40.
1.3 FORMALIZATION OF GREEDY TECHNIQUES

In order to solve an optimization problem using the greedy technique, we need the following data structures and functions:
1) A candidate set from which a solution is created; it may be a set of nodes, edges of a graph, etc. We call this set:
   C: set of given values or set of candidates.
2) A solution set S (with S ⊆ C), in which we build up the solution.
3) A function (say solution) to test whether a given set of candidates gives a solution (not necessarily optimal).
4) A selection function (say select) which chooses the best candidate from C to be added to the solution set S.
5) A feasibility function (say feasible) to test whether a selected candidate can be added to the partial solution set.
6) An objective function (say ObjF) which gives the value of a solution.
For a better understanding of all the above-mentioned data structures and functions, consider the minimum number of notes problem of Example 1. In that problem:

1) C = {1, 2, 5, 10, 20, 50, 100, 500, 1000}, the set of available denominations.
2) Suppose we want to collect an amount of Rs. 283 (with the minimum number of notes). If we allow a multiset rather than a set, in the sense that values may be repeated, then
   S = {100, 100, 50, 20, 10, 2, 1}.
4) The function select finds the "best" candidate value (say x) from C; this value x is then tried for addition to the set S. At any stage, x is added to S only if its addition leads to a partial (feasible) solution; otherwise, x is rejected. In the minimum number of notes problem, the "best" candidate is the largest denomination not yet rejected.
5) When we select a new value (say x) from C using the select function, then before adding x to S we check its feasibility using the function feasible: if its addition gives a partial solution, the value is added to S; otherwise it is rejected. For example, in the minimum number of notes problem, for collecting Rs. 283, at the stage when S = {100, 100, 50}, the function select first tries to add a note of Rs. 50 to S. The function feasible finds that adding Rs. 50 would lead to an infeasible solution, since the total value would become 300, which exceeds Rs. 283; so the value 50 is rejected. Next, the function select attempts the next lower denomination, 20. The value 20 is added to the set S, since after adding 20 the total sum in S is 270, which is less than Rs. 283. Hence 20 is feasible.
6) The objective function (say ObjF) gives the value of the solution. For example, in the minimum number of notes problem, for collecting Rs. 283, when S = {100, 100, 50, 20, 10, 2, 1} the sum of the values in S equals the required amount 283, and the function ObjF returns the number of notes in S, i.e., the number 7.
Algorithm Greedy(C, n)
/* Input: An input domain (or candidate set) C of size n, from which a solution is to be obtained. */
/* Output: A solution set S, where S ⊆ C, which maximizes or minimizes the selection criterion with respect to the given constraints. */
{
    S = Ø;
    while (C ≠ Ø and not solution(S))
    {
        x = select(C);
        C = C - {x};
        if (feasible(S ∪ {x}))
            S = S ∪ {x};
    } // end of while
    if (solution(S))
        return S;
    else
        return "No Solution";
}
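As a sketch only, the greedy schema above can be rendered in Python and instantiated for the minimum-number-of-notes problem. The helper names select, feasible and solution follow the text; the multiset encoding of the candidates is our own assumption:

```python
def greedy(candidates, solution, select, feasible):
    """Generic greedy schema: repeatedly select, test feasibility, accept or reject."""
    C = list(candidates)
    S = []
    while C and not solution(S):
        x = select(C)            # pick the "best" remaining candidate
        C.remove(x)
        if feasible(S + [x]):    # keep x only if the partial solution stays feasible
            S.append(x)
    return S if solution(S) else None

# Instantiation for collecting Rs. 283 with a minimum number of notes.
# Each denomination is repeated often enough to act as a multiset of candidates.
target = 283
notes = [d for d in (1, 2, 5, 10, 20, 50, 100, 500, 1000)
         for _ in range(target // d + 1)]
S = greedy(notes,
           solution=lambda S: sum(S) == target,
           select=max,                           # largest remaining denomination
           feasible=lambda S: sum(S) <= target)
print(S)  # [100, 100, 50, 20, 10, 2, 1]
```

The result matches the solution set S = {100, 100, 50, 20, 10, 2, 1} of the worked example above.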
Now, in the following sections, we apply the greedy method to solve some optimization problems, such as the knapsack (fractional) problem, the minimum cost spanning tree problem and the single-source shortest path problem.
1.4 KNAPSACK (FRACTIONAL) PROBLEM
We are given n objects and a knapsack of capacity M; object i has profit p_i and weight w_i. The problem (or objective) is to fill the knapsack (up to its maximum capacity M) so as to maximize the total profit earned.

Mathematically:

maximize Σ p_i x_i (for i = 1 to n)
subject to Σ w_i x_i ≤ M, with 0 ≤ x_i ≤ 1 for each i.

Note that the value of x_i may be any value between 0 and 1 (inclusive). If an object is completely placed into the knapsack, then its x_i value is 1; if we do not pick (or select) that object, then its x_i value is 0; otherwise, if we take a fraction of the object, its x_i value lies strictly between 0 and 1.
To solve this problem, the greedy method may apply any one of the following strategies:

1) From the remaining objects, select the object with maximum profit that fits into the knapsack.
2) From the remaining objects, select the object that has minimum weight and also fits into the knapsack.
3) From the remaining objects, select the object with maximum ratio p_i/w_i that fits into the knapsack.
Consider the knapsack instance n = 3, M = 20, (p1, p2, p3) = (25, 24, 15) and (w1, w2, w3) = (18, 15, 10). The three approaches give:

Approach                         Weights used          Total profit
1 (maximum profit first)         18 + 2 + 0 = 20       28.2
2 (minimum weight first)         0 + 10 + 10 = 20      31.0
3 (maximum p_i/w_i first)        0 + 15 + 5 = 20       31.5
Approach 1 (selection of objects in decreasing order of profit): In this approach, we first select the object with maximum profit, then the next maximum profit, and so on. Thus we first select the 1st object to fill into the knapsack (since its profit 25 is the maximum among all profits); after placing this object (x1 = 1), the remaining capacity is 2 (i.e. 20 - 18 = 2). Next we select the 2nd object, but its weight is 15 > 2, so we take a fraction of this object (x2 = 2/15). Now the knapsack is full, so the 3rd object is not selected (x3 = 0). Hence we get total profit = 25 + 24 × (2/15) = 28.2 units, and the solution set is x = (1, 2/15, 0).
Running time of the knapsack (fractional) problem: sorting the n objects in decreasing order of the ratio p_i/w_i takes O(n log n) time, and the subsequent single pass over the sorted objects takes O(n) time; hence the total running time is O(n log n).
Example 1: Find an optimal solution for the knapsack instance n = 7 and M = 15, with profits (p1, ..., p7) = (10, 5, 15, 7, 6, 18, 3) and weights (w1, ..., w7) = (2, 3, 5, 7, 1, 4, 1).

Solution:
The greedy algorithm gives an optimal solution for the (fractional) knapsack problem if we select the objects in decreasing order of the ratio p_i/w_i; that is, we first select the object which has the maximum value of the ratio p_i/w_i. This ratio is also called the profit per unit weight.
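A minimal Python sketch of this ratio-greedy strategy, applied to the instance of Example 1 (the function is our own, shown only to make the computation concrete):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy by profit-per-unit-weight ratio; returns (total_profit, fractions)."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * n
    total = 0.0
    for i in order:
        if capacity <= 0:
            break
        take = min(weights[i], capacity)   # whole object if it fits, else a fraction
        x[i] = take / weights[i]
        total += profits[i] * x[i]
        capacity -= take
    return total, x

total, x = fractional_knapsack([10, 5, 15, 7, 6, 18, 3], [2, 3, 5, 7, 1, 4, 1], 15)
print(round(total, 2))  # 55.33
```

The objects are taken in order of decreasing ratio (6, 5, 4.5, 3, 3, 5/3, 1), filling the knapsack completely and giving total profit 55.33.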
Approach: selection of objects in decreasing order of the ratio p_i/w_i
Weights used: 1 + 2 + 4 + 5 + 1 + 2 = 15
Total profit: 6 + 10 + 18 + 15 + 3 + 3.33 = 55.33
1.5 MINIMUM COST SPANNING TREE (MCST) PROBLEM

[Figure 1: a weighted graph on vertices a, b, c, d, e, shown in part (a), and four of its spanning trees, shown in parts (b)-(d).]
The sums of the weights of the edges in the spanning trees shown are 41, 37, 38 and 34 (some other spanning trees are also possible). We are interested in finding, out of all possible spanning trees T1, T2, ..., the spanning tree whose sum of edge weights is minimum. For the given graph G, the tree of total weight 34 is the MCST, since the sum of the weights of its edges is minimum among all possible spanning trees of G.
There are two well-known algorithms for finding an MCST of a given graph:

1. Kruskal's algorithm
2. Prim's algorithm

Both algorithms use the greedy approach. A greedy algorithm selects the edges one by one in some given order. The next edge to include is chosen according to some optimization criterion. The simplest such criterion would be to choose an edge (u, v) that results in a minimum increase in the sum of the costs (or weights) of the edges so far included.
In general, for constructing an MCST:

GENERIC_MCST(G, w)
{
    A = Ø
    while (A does not form a spanning tree)
    {
        find an edge (u, v) that is safe for A   // adding it keeps A a subset of some MCST
        A = A ∪ {(u, v)}
    }
    return A
}
The main difference between Kruskal's and Prim's algorithms for solving the MCST problem is the order in which the edges are selected.
For solving the MCST problem using a greedy algorithm, we use the following data structures and functions, as mentioned earlier:

i) C: the set of candidates (or given values). Here C = E, the set of edges of the graph G = (V, E).
ii) S: the set of selected candidates which is used to give an optimal solution. Here a subset S ⊆ E of edges is a solution if the graph T = (V, S) is a spanning tree of G.
iii) In the case of the MCST problem, the function solution checks whether a solution has been reached or not. This function basically checks that:
    a) all the edges in S form a tree;
    b) the set of vertices of the edges in S equals V;
    c) the sum of the weights of the edges in S is the minimum possible among the sets of edges which satisfy (a) and (b) above.
iv) The select function chooses the best candidate from C. In the case of Kruskal's algorithm, it selects an edge whose weight is smallest among the remaining candidates. In the case of Prim's algorithm, it selects a vertex which is added to the already selected vertices so as to minimize the cost of the spanning tree.
v) The function feasible checks the feasibility of the newly selected candidate (i.e. an edge (u, v)). It checks whether the newly selected edge (u, v) forms a cycle with the earlier selected edges. If the answer is "yes", then the edge (u, v) is rejected; otherwise, the edge (u, v) is added to the solution set S.
vi) Here the objective function ObjF gives the sum of the edge weights in a solution.
KRUSKAL_MCST(G, w)
/* Input: An undirected connected weighted graph G = (V, E). */
/* Output: A minimum cost spanning tree T = (V, E') of G. */
{
1.  sort the edges of E in order of increasing weight
2.  A = Ø
3.  for (each vertex v ∈ V)
4.      do MAKE_SET(v)
5.  for (each edge (u, v) ∈ E, taken in increasing order of weight)
    {
6.      if (FIND_SET(u) ≠ FIND_SET(v))
7.          then A = A ∪ {(u, v)}
8.               UNION(u, v)
    }
9.  return A
}
In lines 5-8, an edge (u, v) ∈ E of minimum weight is added to the set A if and only if it joins two nodes which belong to different components. To check this, we use the function FIND_SET, which returns the same representative for u and v if they belong to the same component (in which case adding (u, v) to A would create a cycle), and different representatives otherwise.

If an edge is added to A, then the two components containing its endpoints are merged into a single component.

Finally, the algorithm stops when there is just a single component.
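A compact Python sketch of Kruskal's algorithm using a simple union-find structure (the graph encoding and function name are our own assumptions, added for illustration):

```python
def kruskal(n, edges):
    """Kruskal's MCST: edges is a list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))

    def find(x):                      # FIND_SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: adding (u, v) makes no cycle
            parent[ru] = rv           # UNION
            tree.append((u, v, w))
    return tree

# A small sample graph: a weighted triangle plus a pendant vertex.
mst = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
print(sum(w for _, _, w in mst))  # 7
```

The heaviest triangle edge (weight 3) is rejected because it would close a cycle, exactly as the feasibility test above describes.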
Example 1: Apply Kruskal's algorithm on the following graph to find a minimum cost spanning tree (MCST).

Solution: First, we sort the edges of G = (V, E) in order of increasing weight; the sorted edge weights are 2, 3, 4, 4, 5, 5, 5, 6, 7, 8, 8, 9.
[Figure: a weighted graph on vertices 1-7, with edge weights 2, 3, 4, 4, 5, 5, 5, 6, 7, 8, 8, 9.]
[Table: step-by-step trace of Kruskal's algorithm. Step 1 adds edge (1, 2), giving components {1, 2}, {3}, {4}, {5}, {6}, {7}; each subsequent step adds one edge and merges two components, until a single component remains.]
1.5.2 Prim's Algorithm

PRIM's algorithm has the property that the edges in the set T (this set contains the edges of the minimum spanning tree as the algorithm proceeds step by step) always form a single tree, i.e., at each step we have only one connected component. This process is repeated until A = V, i.e., until all the vertices are in the set A.
PRIMS_MCST(G, w)
/* Input: An undirected connected weighted graph G = (V, E). */
/* Output: A minimum cost spanning tree T = (V, E') of G. */
{
1.  T = Ø                 // T contains the edges of the MST
2.  A = {s}               // s is an arbitrary starting vertex
3.  while (A ≠ V)
    {
4.      choose a minimum-weight edge (u, v) such that u ∈ V - A and v ∈ A
5.      A = A ∪ {u}
6.      T = T ∪ {(u, v)}
    }
7.  return T
}
1) Initially, the set A of nodes contains a single arbitrary node (the starting vertex) and the set T of edges is empty.
2) At each step, PRIM's algorithm looks for the shortest possible edge (u, v) such that u ∈ V - A and v ∈ A.
3) In this way, the edges in T form at any instant a minimal spanning tree for the nodes in A. We repeat this process until A = V.
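The three steps above can be sketched in Python with a binary heap of candidate edges crossing the cut (a simplified rendering; the adjacency-list encoding is our own assumption):

```python
import heapq

def prim(n, adj, start=0):
    """Prim's MCST on adjacency lists adj[u] = [(weight, v), ...]; vertices 0..n-1.
    Returns the total weight of the spanning tree."""
    in_tree = [False] * n
    in_tree[start] = True
    heap = list(adj[start])           # candidate edges leaving the tree
    heapq.heapify(heap)
    tree_weight, edges_used = 0, 0
    while heap and edges_used < n - 1:
        w, u = heapq.heappop(heap)    # lightest edge crossing the cut
        if in_tree[u]:
            continue                  # both endpoints already in the tree
        in_tree[u] = True
        tree_weight += w
        edges_used += 1
        for e in adj[u]:
            heapq.heappush(heap, e)
    return tree_weight

# Same sample graph as in the Kruskal sketch: triangle plus a pendant vertex.
adj = [[(1, 1), (3, 2)], [(1, 0), (2, 2)], [(2, 1), (3, 0), (4, 3)], [(4, 2)]]
print(prim(4, adj))  # 7
```

Both algorithms return the same tree weight on this graph, as theory guarantees for any connected graph.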
Example 2: Apply PRIM's algorithm on the graph of Example 1 to find a minimum cost spanning tree (MCST).

[Figure: the same weighted graph on vertices 1-7 as in Example 1.]
[Table: step-by-step trace of PRIM's algorithm starting from vertex 4; the final step 6 adds edge (6, 7), giving the vertex set {1, 2, 3, 4, 5, 6, 7}.]
Check Your Progress 1
Q.2: The running time of KRUSKAL's algorithm, where |E| is the number of edges and |V| is the number of nodes in a graph, is:
a) O(|E|)   b) O(|E| log |E|)   c) O(|E| log |V|)   d) O(|V| log |V|)
Q.3: The running time of PRIM's algorithm, where |E| is the number of edges and |V| is the number of nodes in a graph, is:
a) O(|E|²)   b) O(|V|²)   c) O(|E| log |V|)   d) O(|V| log |V|)
Q.4: The optimal solution to the knapsack instance n = 3, M = 20, (P1, P2, P3) = (25, 24, 15) and (W1, W2, W3) = (18, 15, 10) is:
a) 28.2   b) 31.0   c) 31.5   d) 41.5
Q.6: The total number of spanning trees in a complete graph with 5 nodes is:
a) 5²   b) 5³   c) 10   d) 100
Q.7: Let (i, j, C) denote an edge, where i and j are vertices of a graph and C denotes the cost of the edge between them. Consider the following edges and costs in order of increasing length: (b,e,3), (a,c,4), (e,f,4), (b,c,5), (f,g,5), (a,b,6), (c,d,6), (e,f,6), (b,d,7), (d,e,7), (d,f,7), (c,f,7). Which of the following is NOT the sequence of edges added to the minimum spanning tree using Kruskal's algorithm?
Q.8: Consider the graph given in Q.7. Apply Kruskal's algorithm to find the total cost of a minimum spanning tree.
Q.9: State whether the following Statements are TRUE or FALSE. Justify your
answer:
c) If edge weights of a connected weighted graph are all distinct, the graph must have
exactly one minimum spanning tree.
d) If the edge weights of a connected weighted graph are not all distinct, the graph must have more than one minimum spanning tree.
e) If the edge weights of a connected weighted graph are not all distinct, the minimum cost of every minimum spanning tree is the same.
Q.11: Differentiate between Kruskal's and Prim's algorithms for finding a minimum cost spanning tree of a graph G.
Q.12: Is the minimum spanning tree of a graph unique? Apply PRIM's algorithm to find a minimum cost spanning tree for the following graph (a is the starting vertex).
[Figure: a weighted undirected graph on vertices a, b, c, d, e, f, g, h, i.]
Q.13: Find the optimal solution to the knapsack instance n = 5, M = 10, (P1, ..., P5) = (12, 32, 40, 30, 50) and (W1, ..., W5) = (4, 8, 2, 6, 1).
1.6 SINGLE-SOURCE SHORTEST PATH PROBLEM

In the single-source shortest path (SSSP) problem, we are given a weighted directed graph G = (V, E) and a source vertex s ∈ V, and we have to find a shortest path from s to every other vertex v ∈ V.

The SSSP problem can also be used to solve some other related problems:

All pairs shortest path problem (APSPP): find a shortest path between every pair of vertices u and v. One technique is to use SSSP for each vertex, but there are more efficient algorithms (such as the Floyd-Warshall algorithm).
Two algorithms to solve the SSSP problem are:

1. Bellman-Ford algorithm
2. Dijkstra's algorithm

The Bellman-Ford algorithm allows negative-weight edges in the input graph. It either finds a shortest path from the source vertex s to every other vertex v ∈ V, or detects a negative-weight cycle reachable from s, in which case no shortest paths exist ("no solution").

Dijkstra's algorithm requires all edge weights in the input graph to be non-negative, and finds a shortest path from the source vertex s to every other vertex v ∈ V.
Case 1: A shortest path cannot contain a cycle; it is a simple path (i.e. one with no repeated vertex). If some path from s to v contains a negative-cost cycle, then no shortest path from s to v exists; otherwise, there exists a shortest path from s to v which is a simple path.
For example, consider a graph with negative-weight cycles: if there is a negative-weight cycle on some path from s to v, we define δ(s, v) = -∞.

[Figure: a directed graph with source s (δ(s, s) = 0). Vertices c and d lie on a positive-weight cycle; vertices e and f lie on a negative-weight cycle; g is reachable from f. The shortest-path weights shown are δ(s, c) = 5, δ(s, d) = 11, and δ(s, e) = δ(s, f) = δ(s, g) = -∞.]
There are infinitely many paths from s to c: 〈s, c〉, 〈s, c, d, c〉, 〈s, c, d, c, d, c〉
, and so on. Because the cycle 〈c, d, c〉 has weight 6 + (-3) = 3 > 0, the shortest path
from s to c is 〈s, c〉, with weight δ(s, c) = 5. Similarly, the shortest path from s to d
is 〈s, c, d〉, with weight δ(s, d) = w(s, c) + w(c, d) = 11. Analogously, there are
infinitely many paths from s to e: 〈s, e〉, 〈s, e, f, e〉, 〈s, e, f, e, f, e〉, and so on.
Since the cycle 〈e, f, e〉 has weight 3 + (-6) = -3 < 0, however, there is no shortest
path from s to e. By traversing the negative-weight cycle 〈e, f, e〉 arbitrarily many
times, we can find paths from s to e with arbitrarily large negative weights, and so δ(s,
e) = -∞. Similarly, δ(s, f) = -∞. Because g is reachable from f , we can also find paths
with arbitrarily large negative weights from s to g, and δ(s, g) = -∞. Vertices h, i, and j
also form a negative-weight cycle. They are not reachable from s, however, and so
δ(s, h) = δ(s, i) = δ(s, j) = ∞.
Some shortest-paths algorithms, such as Dijkstra's algorithm, assume that all edge weights in the input graph are non-negative, as in the road-map example. Others, such
as the Bellman-Ford algorithm, allow negative-weight edges in the input graph and
produce a correct answer as long as no negative-weight cycles are reachable from the
source. Typically, if there is such a negative-weight cycle, the algorithm can detect
and report its existence.
Negative-weight cycles reachable from the source vertex s are not allowed, since in that case no shortest path exists.

If a path contains a positive-weight cycle, then by removing the cycle we get a shorter path; so a shortest path never contains a positive-weight cycle.
For each vertex v ∈ V, the shortest-path algorithms maintain two attributes:

1. d[v], the shortest-path estimate, which is an upper bound on the weight of a shortest path from s to v.
2. π[v], the predecessor of v, whose value is either another vertex or NIL.
Initialization:

INITIALIZE_SINGLE_SOURCE(G, s)
1.  for each vertex v ∈ V[G]
2.      do d[v] = ∞
3.         π[v] = NIL
4.  d[s] = 0
Relaxing an edge (u, v):

The SSSP algorithms are based on the technique known as edge relaxation. The process of relaxing an edge (u, v) consists of testing whether we can improve (i.e. reduce) the shortest-path estimate of v found so far by going through u and taking the edge (u, v), and, if so, updating d[v] and π[v]. This is accomplished by the following procedure:
RELAX(u, v, w)
1.  if (d[v] > d[u] + w(u, v))
2.      then d[v] = d[u] + w(u, v)
3.           π[v] = u
[Figure: relaxing an edge (u, v) with weight w(u, v) = 2. Left: d[u] = 5 and d[v] = 9 before relaxation; since 5 + 2 < 9, d[v] decreases to 7. Right: d[u] = 5 and d[v] = 6 before relaxation; since 5 + 2 ≥ 6, d[v] stays 6.]
Note: In Dijkstra's algorithm and the shortest-path algorithm for directed acyclic graphs, each edge is relaxed exactly once. In the Bellman-Ford algorithm, each edge is relaxed |V| - 1 times.
1.6.1 Bellman-Ford Algorithm

BELLMAN_FORD(G, w, s)
1.  INITIALIZE_SINGLE_SOURCE(G, s)
2.  for i = 1 to |V| - 1
3.      do for each edge (u, v) ∈ E
4.          do RELAX(u, v, w)
5.  for each edge (u, v) ∈ E
6.      do if (d[v] > d[u] + w(u, v))
7.          then return FALSE   // we detect that a negative-weight cycle exists
8.  return TRUE
Running time of the Bellman-Ford algorithm:
1. The initialization at line 1 requires O(V) time.
2. The for loop at line 2 executes |V| - 1 times, and each iteration relaxes all the |E| edges; so lines 2-4 require O(V·E) time.
3. The for loop at line 5 checks for a negative-weight cycle over all the |E| edges, which requires O(E) time.
Hence the total running time is O(V·E).
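A minimal Python sketch of the algorithm. The sample graph is a reconstruction of the five-vertex example traced below (vertices A-E encoded as 0-4; the edge list is read off the figure and should be treated as illustrative):

```python
def bellman_ford(n, edges, s):
    """Bellman-Ford: edges is a list of directed (u, v, w) triples, vertices 0..n-1.
    Returns (d, ok); ok is False if a negative cycle is reachable from s."""
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):            # |V| - 1 passes
        for u, v, w in edges:         # relax every edge
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:             # one more pass detects negative cycles
        if d[u] + w < d[v]:
            return d, False
    return d, True

edges = [(0, 1, -1), (0, 2, 4), (1, 2, 3), (1, 3, 2), (1, 4, 2),
         (3, 2, 5), (3, 1, 1), (4, 3, -3)]
d, ok = bellman_ford(5, edges, 0)
print(d, ok)  # [0, -1, 2, -2, 1] True
```

The returned distances match the final row of the relaxation table in the worked example.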
Example: order of edge relaxation: (B, E), (D, B), (B, D), (A, C), (D, C), (B, C), (E, D)
[Figure: the example graph on vertices A, B, C, D, E, with source A. The edge weights, read off the figure, include A→B = -1, A→C = 4, B→C = 3, B→D = 2, B→E = 2, D→C = 5, D→B = 1 and E→D = -3.]

The shortest-path estimates d[A], d[B], d[C], d[D], d[E] after each successive relaxation step are:

A    B    C    D    E
0    ∞    ∞    ∞    ∞
0   -1    ∞    ∞    ∞
0   -1    4    ∞    ∞
0   -1    2    ∞    ∞
0   -1    2    1    ∞
0   -1    2    1    1
0   -1    2   -2    1

The final row gives the shortest-path weights from A: d[B] = -1, d[C] = 2, d[D] = -2 and d[E] = 1.
1.6.2 Dijkstra's Algorithm

Dijkstra's algorithm, named after its discoverer, the Dutch computer scientist Edsger Dijkstra, is a greedy algorithm that solves the single-source shortest path problem for a directed graph G = (V, E) with non-negative edge weights, i.e. we assume that w(u, v) ≥ 0 for each edge (u, v) ∈ E.
DIJKSTRA(G, w, s)
1  INITIALIZE-SINGLE-SOURCE(G, s)
2  S ← Ø
3  Q ← V[G]
4  while Q ≠ Ø
5      do u ← EXTRACT-MIN(Q)
6         S ← S ∪ {u}
7         for each vertex v ∈ Adj[u]
8             do RELAX(u, v, w)
Because Dijkstra's algorithm always chooses the "lightest" or "closest" vertex in V - S to insert into the set S, we say that it uses a greedy strategy.

Dijkstra's algorithm bears some similarity to both breadth-first search and Prim's algorithm for computing minimum spanning trees. It is like breadth-first search in that the set S corresponds to the set of black vertices in a breadth-first search; just as vertices in S have their final shortest-path weights, so do black vertices in a breadth-first search have their correct breadth-first distances.

Dijkstra's algorithm is like Prim's algorithm in that both algorithms use a min-priority queue to find the "lightest" vertex outside a given set (the set S in Dijkstra's algorithm, and the tree being grown in Prim's algorithm), add this vertex to the set, and adjust the weights of the remaining vertices outside the set accordingly.
The running time of Dijkstra's algorithm on a graph with edge set E and vertex set V can be expressed as a function of |E| and |V| using Big-O notation. The simplest implementation of Dijkstra's algorithm stores the vertices of set Q in an ordinary linked list or array, and the operation EXTRACT-MIN(Q) is simply a linear search through all vertices in Q; in this case, the running time is O(|V|² + |E|) = O(|V|²).

For sparse graphs, that is, graphs with many fewer than |V|² edges, Dijkstra's algorithm can be implemented more efficiently, storing the graph in the form of adjacency lists and using a binary heap or Fibonacci heap as a priority queue to implement the EXTRACT-MIN function. With a binary heap, the algorithm requires O((|E| + |V|) log |V|) time (which is dominated by O(|E| log |V|), assuming every vertex is reachable from the source), and the Fibonacci heap improves this to O(|E| + |V| log |V|).
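A Python sketch of the binary-heap implementation just described. The sample graph is the directed graph of Example 1 below, with vertices A-E encoded as 0-4 (our own encoding, added for illustration):

```python
import heapq

def dijkstra(n, adj, s):
    """Dijkstra with a binary heap; adj[u] = [(v, w), ...], all weights >= 0."""
    INF = float("inf")
    d = [INF] * n
    d[s] = 0
    pq = [(0, s)]                     # min-priority queue of (estimate, vertex)
    done = [False] * n                # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)     # EXTRACT-MIN
        if done[u]:
            continue                  # stale queue entry; skip it
        done[u] = True
        for v, w in adj[u]:           # relax all edges leaving u
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

adj = {0: [(1, 10), (2, 3)], 1: [(2, 1), (3, 2)],
       2: [(1, 4), (3, 8), (4, 2)], 3: [(4, 7)], 4: [(3, 9)]}
print(dijkstra(5, adj, 0))  # [0, 7, 3, 9, 5]
```

Instead of decreasing keys in place, this sketch pushes duplicate entries and skips the stale ones, which keeps the code short while preserving the O(|E| log |V|) bound.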
Example 1: Apply Dijkstra's algorithm to find the shortest path from the source vertex A to each of the other vertices of the following directed graph.

[Figure: a directed graph on vertices A, B, C, D, E with edge weights A→B = 10, A→C = 3, B→C = 1, B→D = 2, C→B = 4, C→D = 8, C→E = 2, D→E = 7, E→D = 9.]
Solution:
Dijkstra's algorithm repeatedly selects the vertex u ∈ V - S with the minimum shortest-path estimate, inserts u into S, and relaxes all edges leaving u. We maintain a min-priority queue Q that contains all the vertices in V - S, keyed by their d values.
The run of the algorithm is summarized below. At each step, the vertex with the minimum shortest-path estimate is extracted from Q and added to S, and all edges leaving it are relaxed:

Step                          S                  d[A]  d[B]  d[C]  d[D]  d[E]
Initialize                    {}                 0     ∞     ∞     ∞     ∞
Extract A; relax edges of A   {A}                0     10    3     ∞     ∞
Extract C; relax edges of C   {A, C}             0     7     3     11    5
Extract E; relax edges of E   {A, C, E}          0     7     3     11    5
Extract B; relax edges of B   {A, C, E, B}       0     7     3     9     5
Extract D                     {A, C, E, B, D}    0     7     3     9     5

Hence the shortest-path weights from A are d[B] = 7, d[C] = 3, d[D] = 9 and d[E] = 5.
A second example, with source vertex 1 in a directed graph on six vertices, is traced below. The table shows the shortest-path estimates after each step; the bracketed value is the estimate of the vertex finalized at that step, and the last column is the vertex chosen next:

Step      S                    d[1]  d[2]  d[3]  d[4]  d[5]  d[6]   Next
Initial   {}                   0     ∞     ∞     ∞     ∞     ∞      1
1         {1}                  [0]   2     ∞     6     20    10     2
2         {1, 2}                     [2]   12    6     20    10     4
3         {1, 2, 4}                        10    [6]   18    10     3
4         {1, 2, 4, 3}                     [10]        18    10     6
5         {1, 2, 4, 3, 6}                              12    [10]   5
6         {1, 2, 4, 3, 6, 5}                           [12]         -
Check Your Progress 2
Q.1: The running time of Dijkstra's algorithm, where n is the number of nodes in a graph, is:
a) O(n²)   b) O(n³)   c) O(n)   d) O(n log n)
Q.2: Which of the following algorithms allows negative edge weights in a graph when finding shortest paths?
Q.4: Consider a weighted undirected graph with positive edge weights, and let (u, v) be an edge in the graph. It is known that the shortest path from the source vertex s to u has weight 60 and the shortest path from s to v has weight 75. Which statement is always true?
Q.5: Which data structure is used to maintain the distance of each node in Dijkstra's algorithm?
a) Stack   b) Queue   c) Priority Queue   d) Tree
Q.7: Find the minimum distance of each station from New York (NY) using
Dijkstra’s algorithm. Show all the steps.
[Figure: a network of stations NY (New York), Boston, LA, CK and WN, with the given edge distances.]
1.7 SUMMARY
Generally, an optimization problem has n inputs (call this set the input domain or candidate set, C); we are required to obtain a subset of C (call it the solution set S, where S ⊆ C) that satisfies the given constraints or conditions. Any subset S ⊆ C which satisfies the given constraints is called a feasible solution. We need to find a feasible solution that maximizes or minimizes a given objective function; the feasible solution that does this is called an optimal solution.
A greedy algorithm always makes the choice that looks best at the moment; that is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

A greedy algorithm does not always yield an optimal solution, but for many problems it does.
The (fractional) knapsack problem is to fill a knapsack or bag (up to its maximum capacity M) with fractions of the given objects so as to maximize the total profit earned.

There are two algorithms to find an MCST of a given undirected graph G, namely Kruskal's algorithm and Prim's algorithm.
The basic difference between Kruskal's and Prim's algorithms is that in Kruskal's algorithm it is not necessary to choose a vertex adjacent to the already selected vertices in any successive step; thus, at an intermediate step of the algorithm, there may be more than one connected component of trees. But in Prim's algorithm it is necessary to select a vertex adjacent to the already selected vertices in each successive step; thus, at every intermediate step of the algorithm, there is only one connected component.
The single-source shortest path (SSSP) problem is to find a shortest path from a source vertex s to every other vertex v in a given graph G = (V, E).
1.8 SOLUTIONS/ANSWERS
Solution 8:

[Figure: the minimum spanning tree for the graph of Q.7, built by Kruskal's algorithm from the edges (b,e,3), (a,c,4), (e,f,4), (b,c,5), (f,g,5) and (c,d,6); total cost = 3 + 4 + 4 + 5 + 5 + 6 = 27.]
Solution 9:
(a) FALSE: an edge with the smallest weight will be part of every minimum spanning tree.
(b) TRUE: an edge with the smallest weight will be part of every minimum spanning tree.
(c) TRUE: if all the edge weights are distinct, then at each step of Kruskal's algorithm there is a unique lightest acceptable edge, so the minimum spanning tree is unique.
(d) TRUE: since more than one edge in a graph may have the same weight.
(e) TRUE: in a connected weighted graph in which the edge weights are not all distinct, the graph may have more than one minimum spanning tree, but the minimum cost of those spanning trees will be the same.
Solution 10:

Algorithm Greedy(C, n)
/* Input: An input domain (or candidate set) C of size n, from which a solution is to be obtained. */
// function select(C: candidate_set) returns an element (or candidate)
// function solution(S: candidate_set) returns Boolean
// function feasible(S: candidate_set) returns Boolean
/* Output: A solution set S, where S ⊆ C, which maximizes or minimizes the selection criterion with respect to the given constraints. */
{
    S = Ø;
    while (C ≠ Ø and not solution(S))
    {
        x = select(C);
        C = C - {x};
        if (feasible(S ∪ {x}))
            S = S ∪ {x};
    } // end of while
    if (solution(S))
        return S;
    else
        return "No Solution";
}
Solution 11:
The main difference between Kruskal's and Prim's algorithms for the MCST problem is the order in which the edges are selected: Kruskal's algorithm always selects the globally smallest remaining edge that does not form a cycle, while Prim's algorithm selects the smallest edge connecting the tree built so far to a new vertex.
Solution 12: No, the minimum spanning tree of a graph is not unique in general, because more than one edge of a graph may have the same weight.

[Figure: the minimum cost spanning tree produced by Prim's algorithm starting from vertex a.]
Solution 13:
The greedy algorithm selects the items in decreasing order of the ratio p_i/w_i. The ratios are 3, 4, 20, 5 and 50, so the items are placed into the knapsack in the order: 5th item first, then the 3rd item, then the 4th item, then the 2nd and then the 1st item (if capacity of the knapsack remains). With M = 10, the 5th item (weight 1), the 3rd item (weight 2) and the 4th item (weight 6) are taken completely, and a 1/8 fraction of the 2nd item fills the remaining capacity, giving total profit 50 + 40 + 30 + 32/8 = 124.
Solution 14:
The item which has the maximum value of the ratio p_i/w_i is placed into the knapsack first. Thus the sequence of items placed into the knapsack is: 6th, 5th, 1st, 2nd, 3rd, 4th and then the 7th item; the resulting total profit is 49.4.
Check Your Progress 2
Solution 6:
The Bellman-Ford algorithm allows negative-weight edges in the input graph. It either finds a shortest path from the source vertex s to every other vertex v ∈ V, or detects a negative-weight cycle reachable from s, in which case no shortest paths exist ("no solution").

Dijkstra's algorithm requires all edge weights in the input graph to be non-negative, and finds a shortest path from the source vertex s to every other vertex v ∈ V.
Solution 7:
Following Table summarizes the Computation of Dijkstra’s algorithm for the given
digraph of Question 7.
Initial {} 0 ∞ ∞ ∞ ∞ 1
1 (1} [0] 10 5 ∞ ∞ 3
2 {1,3} 8 [5] 14 7 2
3 (1,3,2} [8] 14 7 5
4 {1,3,2,5} 13 [7] 4
5 {1,3,2,5,4) [13]
Solution 8: The running time of Dijkstra's algorithm on a graph with edge set E and vertex set V can be expressed as a function of |E| and |V| using Big-O notation. The simplest implementation stores the vertices of set Q in an ordinary linked list or array, and the operation EXTRACT-MIN(Q) is simply a linear search through all vertices in Q. In this case, the running time is O(|V|² + |E|) = O(|V|²).
1.9 FURTHER READINGS