Unit 3 – DAA

The document provides information about greedy algorithms and dynamic programming. It defines greedy algorithms as finding the locally optimal choice at each step and dynamic programming as breaking problems into overlapping subproblems and storing solutions in a table to avoid recomputing them. The key aspects of each approach are described, including properties like optimal substructure for dynamic programming. Examples of problems that can be solved with each method are also listed.

UNIT 3
Greedy & Dynamic Programming

Session Outcome:
Apply both dynamic programming and greedy algorithms to solve optimization problems.

Greedy Algorithm

Greedy Method
• The greedy method considers many options, but at each step you choose the best option.
• A greedy algorithm solves problems by making the choice that seems best at the particular moment. Many optimization problems can be solved using a greedy algorithm. Some problems have no efficient solution, but a greedy algorithm may provide a solution that is close to optimal. A greedy algorithm works if a problem exhibits the following two properties:
• Greedy Choice Property: a globally optimal solution can be arrived at by making locally optimal choices. In other words, an optimal solution can be obtained by making "greedy" choices.
• Optimal Substructure: optimal solutions contain optimal sub-solutions. In other words, the answers to sub-problems of an optimal solution are optimal.



Greedy Method (Cont.)
Steps for achieving a greedy algorithm:
• Feasible: check whether the choice satisfies all possible constraints, so that at least one solution to the problem can be obtained.
• Locally Optimal Choice: the choice made should be the best among the options currently available.
• Unalterable: once a decision is made, it is not altered at any subsequent step.

Greedy Method (Cont.)

Example:

•Machine scheduling
• Fractional Knapsack Problem
•Minimum Spanning Tree
•Huffman Code
•Job Sequencing
•Activity Selection Problem
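To make the greedy strategy concrete, the sketch below (an illustration added here, not taken from the slides) solves the activity-selection problem from the list above: sort the activities by finish time and repeatedly pick the first activity that starts after the previously chosen one finishes. The Activity struct and the sample data are hypothetical.

#include <algorithm>
#include <iostream>
#include <vector>

struct Activity { int start, finish; };

// Greedy activity selection: always pick the compatible activity
// that finishes earliest (the locally optimal choice).
int maxActivities(std::vector<Activity> acts) {
    std::sort(acts.begin(), acts.end(),
              [](const Activity& a, const Activity& b) { return a.finish < b.finish; });
    int count = 0, lastFinish = -1;
    for (const Activity& a : acts) {
        if (a.start >= lastFinish) {   // feasible: does not overlap the previous choice
            ++count;                   // unalterable: the choice is never revised
            lastFinish = a.finish;
        }
    }
    return count;
}

int main() {
    std::vector<Activity> acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {5, 9}, {8, 9}};
    std::cout << maxActivities(acts) << "\n";   // prints 3, e.g. (1,4), (5,7), (8,9)
}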


Dynamic Programming

Dynamic Programming
• Dynamic programming is one of the most powerful design techniques for solving optimization problems.
• A divide & conquer algorithm partitions the problem into disjoint sub-problems, solves the sub-problems recursively, and then combines their solutions to solve the original problem.
• Dynamic programming is used when the sub-problems are not independent, e.g. when they share sub-subproblems. In this case, divide and conquer may do more work than necessary, because it solves the same sub-problem multiple times.

Dynamic Programming (Cont.)
• Dynamic programming solves each sub-problem just once and stores the result in a table so that it can be retrieved if it is needed again.
• Dynamic programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems.
• Dynamic programming is a paradigm of algorithm design in which an optimization problem is solved by a combination of caching sub-problem solutions and appealing to the "principle of optimality".

Characteristics of Dynamic Programming

Dynamic programming works when a problem has the following features:
• Optimal Substructure: if an optimal solution contains optimal sub-solutions, then the problem exhibits optimal substructure.
• Overlapping sub-problems: when a recursive algorithm would visit the same sub-problems repeatedly, the problem has overlapping sub-problems.
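A minimal sketch (not from the slides) of how overlapping sub-problems are handled: a naive recursive Fibonacci recomputes the same values repeatedly, while a memo table stores each sub-problem result the first time it is computed.

#include <iostream>
#include <vector>

// Naive recursion: fib(n-1) and fib(n-2) share sub-problems, so the work is exponential.
long long fibNaive(int n) {
    return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
}

// Top-down dynamic programming (memoization): each sub-problem is solved once and stored.
long long fibMemo(int n, std::vector<long long>& memo) {
    if (n < 2) return n;
    if (memo[n] != -1) return memo[n];          // reuse the stored solution
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

int main() {
    int n = 40;
    std::vector<long long> memo(n + 1, -1);
    std::cout << fibMemo(n, memo) << "\n";      // 102334155, computed with O(n) recursive calls
}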



Characteristics of Dynamic Programming

•If a problem has optimal substructure, then we can recursively define an optimal solution. If a problem has overlapping sub-problems, then we can improve on a recursive implementation by computing each sub-problem only once.
•If a problem doesn't have optimal substructure, there is no basis for defining a recursive algorithm to find the optimal solutions. If a problem doesn't have overlapping sub-problems, we don't have anything to gain by using dynamic programming.
•If the space of sub-problems is small enough (i.e. polynomial in the size of the input), dynamic programming can be much more efficient than plain recursion.


Elements of Dynamic Programming
• Substructure: decompose the given problem into smaller sub-problems. Express the solution of the original problem in terms of the solutions of the smaller problems.
• Table Structure: after solving the sub-problems, store the results of the sub-problems in a table. This is done because sub-problem solutions are reused many times, and we do not want to repeatedly solve the same problem over and over again.
• Bottom-up Computation: using the table, combine the solutions of smaller sub-problems to solve larger sub-problems, and eventually arrive at a solution to the complete problem (a code sketch follows this list).
Note: Bottom-up means:
• Start with the smallest sub-problems.
• By combining their solutions, obtain the solutions to sub-problems of increasing size.
• Continue until arriving at the solution of the original problem.
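The three elements above can be seen in a small bottom-up sketch (an illustration, not from the slides), using the same Fibonacci example as earlier but now without recursion: the array fib[] is the table structure, each entry is a sub-problem, and entries are combined in order of increasing size until the original problem is reached.

#include <iostream>
#include <vector>

// Bottom-up computation: solve the smallest sub-problems first,
// then combine them to obtain solutions of increasing size.
long long fibBottomUp(int n) {
    std::vector<long long> fib(n + 1);   // table structure
    fib[0] = 0;
    if (n > 0) fib[1] = 1;
    for (int i = 2; i <= n; ++i)
        fib[i] = fib[i - 1] + fib[i - 2];   // combine two smaller sub-problems
    return fib[n];
}

int main() {
    std::cout << fibBottomUp(50) << "\n";   // 12586269025
}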


Components of Dynamic Programming

1. Stages: The problem can be divided into several sub-problems, which are called stages. A stage is a small portion of the given problem. For example, in the shortest path problem, the stages are defined by the structure of the graph.
2. States: Each stage has several states associated with it. The states for the shortest path problem are the nodes reached.
3. Decision: At each stage, there can be multiple choices, out of which one of the best decisions should be taken. The decision taken at every stage should be optimal; this is called a stage decision.
4. Optimal policy: It is a rule which determines the decision at each stage; a policy is called an optimal policy if it is globally optimal. This is known as the Bellman principle of optimality.
5. Given the current state, the optimal choices for each of the remaining states do not depend on the previous states or decisions. In the shortest path problem, it is not necessary to know how we got to a node, only that we did.
6. There exists a recursive relationship that identifies the optimal decisions for stage j, given that stage j + 1 has already been solved.
7. The final stage must be solved by itself.



Development of a Dynamic Programming Algorithm
It can be broken into four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of the optimal solution. Like divide and conquer, divide the problem into two or more optimal parts recursively. This helps to determine what the solution will look like.
3. Compute the value of the optimal solution from the bottom up (starting with the smallest sub-problems).
4. Construct the optimal solution for the entire problem from the computed values of smaller sub-problems.

Applications of Dynamic Programming


1. 0/1 knapsack problem
2. Mathematical optimization problem
3. All pair Shortest path problem
4. Reliability design problem
5. Longest common subsequence (LCS)
6. Flight control and robotics control
7. Time-sharing: it schedules jobs to maximize CPU usage.
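As a sketch of the first application listed above (illustrative only, with hypothetical item data), the 0/1 knapsack problem can be solved bottom-up with a table indexed by the remaining capacity.

#include <algorithm>
#include <iostream>
#include <vector>

// Bottom-up 0/1 knapsack: dp[w] holds the best value achievable with capacity w
// after considering the items processed so far.
int knapsack01(const std::vector<int>& weight, const std::vector<int>& value, int capacity) {
    std::vector<int> dp(capacity + 1, 0);
    for (std::size_t i = 0; i < weight.size(); ++i)
        for (int w = capacity; w >= weight[i]; --w)   // iterate downwards so each item is used at most once
            dp[w] = std::max(dp[w], dp[w - weight[i]] + value[i]);
    return dp[capacity];
}

int main() {
    std::vector<int> weight = {1, 3, 4, 5};
    std::vector<int> value  = {1, 4, 5, 7};
    std::cout << knapsack01(weight, value, 7) << "\n";   // 9 (take the items of weight 3 and 4)
}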

Divide & Conquer Method vs Dynamic Programming

Divide & Conquer Method:
1. It involves three steps at each level of recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively; combine the solutions to the subproblems into the solution for the original problem.
2. It is recursive.
3. It does more work on subproblems and hence has more time consumption.
4. It is a top-down approach.
5. In this, subproblems are independent of each other.
6. For example: Merge Sort & Binary Search, etc.

Dynamic Programming:
1. It involves a sequence of four steps: characterize the structure of optimal solutions; recursively define the values of optimal solutions; compute the value of optimal solutions in a bottom-up fashion; construct an optimal solution from computed information.
2. It is non-recursive.
3. It solves each subproblem only once and then stores it in a table.
4. It is a bottom-up approach.
5. In this, subproblems are interdependent.
6. For example: Matrix Chain Multiplication.


Greedy vs Dynamic Programming

Feasibility:
• Greedy method: we make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution.
• Dynamic programming: we make a decision at each step considering the current problem and the solutions to previously solved sub-problems.

Optimality:
• Greedy method: there is sometimes no guarantee of getting an optimal solution.
• Dynamic programming: it is guaranteed to generate an optimal solution, as it generally considers all possible cases and then chooses the best.

Recursion:
• Greedy method: follows the problem-solving heuristic of making the locally optimal choice at each stage.
• Dynamic programming: an algorithmic technique which is usually based on a recurrent formula that uses some previously calculated states.

Memoization:
• Greedy method: more efficient in terms of memory, as it never looks back or revises previous choices.
• Dynamic programming: requires a DP table for memoization, which increases its memory complexity.

Time complexity:
• Greedy methods are generally faster; for example, Dijkstra's shortest path algorithm takes O(E log V + V log V) time.
• Dynamic programming is generally slower; for example, the Bellman-Ford algorithm takes O(VE) time.

Fashion:
• Greedy method: computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
• Dynamic programming: computes its solution bottom-up or top-down by synthesizing it from smaller optimal sub-problem solutions.

Example:
• Greedy method: fractional knapsack.
• Dynamic programming: 0/1 knapsack problem.

Course Learning Outcome:
Apply the greedy approach to solve Huffman coding; compare the brute-force and Huffman methods of encoding.

Topics
• Huffman Problem
• Problem Analysis (Real-Time Example)
• Binary coding techniques
• Prefix codes
• Algorithm of the Huffman Coding Problem
• Time Complexity
• Analysis and Greedy Algorithm
• Conclusion

Huffman Coding using Greedy Algorithm

Text Encoding using ASCII Code
• Our objective is to develop a code that represents a given text as compactly as possible.
• A standard encoding is ASCII, which represents every character using 7 bits.
Example
Represent “An English sentence” using ASCII code –
1000001 (A) 1101110 (n) 0100000 ( ) 1000101 (E)
1101110 (n) 1100111 (g) 1101100 (l) 1101001 (i)
1110011 (s) 1101000 (h) 0100000 ( ) 1110011 (s)
1100101 (e) 1101110 (n) 1110100 (t) 1100101 (e)
1101110 (n) 1100011 (c) 1100101 (e)
= 133 bits ≈ 17 bytes
Refinement in Text Encoding
• Now a better code is given by the following encoding: ‹space› = 000, A = 0010, E = 0011, s = 010, c = 0110, g = 0111, h = 1000, i = 1001, l = 1010, t = 1011, e = 110, n = 111
• Then we encode the phrase as
0010 (A) 111 (n) 000 ( ) 0011 (E) 111 (n) 0111 (g) 1010 (l) 1001 (i) 010 (s) 1000 (h) 000 ( ) 010 (s) 110 (e) 111 (n) 1011 (t) 110 (e) 111 (n) 0110 (c) 110 (e)
• This requires 65 bits ≈ 9 bytes, a big improvement.
• The technique behind this improvement is Huffman coding, which we will discuss later on.
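A small sketch (not part of the slides) that reproduces the two bit counts above: it sums the codeword lengths of each character of “An English sentence” under the 7-bit ASCII encoding and under the variable-length code given above.

#include <iostream>
#include <map>
#include <string>

int main() {
    // The variable-length code from the refinement above (only the lengths matter here).
    std::map<char, std::string> code = {
        {' ', "000"}, {'A', "0010"}, {'E', "0011"}, {'s', "010"}, {'c', "0110"},
        {'g', "0111"}, {'h', "1000"}, {'i', "1001"}, {'l', "1010"}, {'t', "1011"},
        {'e', "110"}, {'n', "111"}};

    std::string text = "An English sentence";
    int fixedBits = 7 * static_cast<int>(text.size());   // 7-bit ASCII per character
    int variableBits = 0;
    for (char ch : text) variableBits += static_cast<int>(code.at(ch).size());

    std::cout << "ASCII: " << fixedBits << " bits, variable-length: "
              << variableBits << " bits\n";              // 133 bits vs 65 bits
}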
Major Types of Binary Coding
There are many ways to represent a file of information.
Binary Character Code (or Code)
• Each character is represented by a unique binary string.
Fixed-Length Code
• If Σ = {0, 1}, then all possible combinations of two-bit strings are Σ × Σ = {00, 01, 10, 11}.
• If there are at most four characters, two-bit strings are enough.
• If there are fewer than three characters, two-bit strings are not economical.
• All possible combinations of three-bit strings are Σ × Σ × Σ = {000, 001, 010, 011, 100, 101, 110, 111}.
• If there are at most eight characters, three-bit strings are enough.
• If there are fewer than five characters, three-bit strings are not economical, and two-bit strings can be used instead.
• If there are six characters, 3 bits are needed to represent them; the following could be one representation: a = 000, b = 001, c = 010, d = 011, e = 100, f = 101.
Variable-Length Code
• Better than a fixed-length code.
• It gives short codewords for frequent characters and long codewords for infrequent characters.
• Assigning variable-length codes requires some skill.
• Before we use variable-length codes, we have to discuss prefix codes in order to assign variable-length codes to a given set of characters.
Prefix Code (Variable-Length Code)
• A prefix code is a code, typically a variable-length code, with the “prefix property”.
• The prefix property is defined as: no codeword is a prefix of any other codeword in the set.
Examples
1. The code words {0, 10, 11} have the prefix property.
2. A code consisting of {0, 1, 10, 11} does not, because “1” is a prefix of both “10” and “11”.
Other names
• Prefix codes are also known as prefix-free codes, prefix condition codes, comma-free codes, instantaneous codes, etc.
Why prefix codes?
• Encoding is simple for any binary character code.
• Decoding is also easy in prefix codes, because no codeword is a prefix of any other.
Example 1
• If a = 0, b = 101, and c = 100 in a prefix code, then the string 0101100 is decoded as 0·101·100.
Example 2
• In the code {0, 1, 10, 11}, a receiver reading “1” at the start of a codeword would not know whether
  – that was the complete codeword “1”, or
  – a prefix of the codeword “10” or of “11”.
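A sketch of why decoding a prefix code is easy (illustrative only, using the code from Example 1): read bits left to right and emit a character as soon as the bits read so far match a codeword; because no codeword is a prefix of another, the match is unambiguous.

#include <iostream>
#include <map>
#include <string>

// Decode a bit string with a prefix code by greedily matching codewords.
std::string decode(const std::string& bits, const std::map<std::string, char>& codeToChar) {
    std::string result, current;
    for (char bit : bits) {
        current += bit;
        auto it = codeToChar.find(current);
        if (it != codeToChar.end()) {      // a full codeword has been read
            result += it->second;
            current.clear();
        }
    }
    return result;
}

int main() {
    std::map<std::string, char> codeToChar = {{"0", 'a'}, {"101", 'b'}, {"100", 'c'}};
    std::cout << decode("0101100", codeToChar) << "\n";   // prints "abc"
}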
Prefix Codes and Binary Trees
Tree representation of prefix codes:
[Figure: a binary tree in which left edges are labelled 0 and right edges 1; each character sits at a leaf, giving the codewords A = 00, B = 010, C = 0110, D = 0111, E = 10, F = 11.]

Huffman Codes
• In Huffman coding, a variable-length code is used.
• The data is considered to be a sequence of characters.
• Huffman codes are a widely used and very effective technique for
compressing data
• Savings of 20% to 90% are typical, depending on the
characteristics of the data being compressed.
• Huffman’s greedy algorithm uses a table of the frequencies of
occurrence of the characters to build up an optimal way of
representing each character as a binary string.
• Now let us see an example to understand the concepts used in
Huffman coding
Example: Huffman Codes
                                a      b      c      d      e      f
Frequency (in thousands)       45     13     12     16      9      5
Fixed-length codeword         000    001    010    011    100    101
Variable-length codeword        0    101    100    111   1101   1100

A data file of 100,000 characters contains only the characters a–f, with the frequencies indicated above.
• If each character is assigned a 3-bit fixed-length codeword, the file can be encoded in 300,000 bits.
• Using the variable-length code:
  (45 · 1 + 13 · 3 + 12 · 3 + 16 · 3 + 9 · 4 + 5 · 4) · 1,000 = 224,000 bits
which shows a savings of approximately 25%.
Binary Tree: Variable-Length Codewords
The tree corresponding to the variable-length code is shown for the data in the table.
[Figure: binary tree with leaves a:45 (codeword 0), b:13 (101), c:12 (100), d:16 (111), e:9 (1101), f:5 (1100); left edges are labelled 0 and right edges 1.]
Cost of Tree Corresponding to a Prefix Code
• Given a tree T corresponding to a prefix code, for each character c in the alphabet C:
• let f(c) denote the frequency of c in the file, and
• let dT(c) denote the depth of c's leaf in the tree; dT(c) is also the length of the codeword for character c.
• The number of bits required to encode the file is
  B(T) = Σ over c in C of f(c) · dT(c)
which we define as the cost of the tree T.


Algorithm: Constructing a Huffman Code
Huffman(C)
1  n ← |C|
2  Q ← C
3  for i ← 1 to n − 1
4      do allocate a new node z
5         left[z] ← x ← Extract-Min(Q)
6         right[z] ← y ← Extract-Min(Q)
7         f[z] ← f[x] + f[y]
8         Insert(Q, z)
9  return Extract-Min(Q)    // return the root of the tree
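The pseudocode above maps directly onto a priority queue. Below is a self-contained C++ sketch (names such as Node and buildHuffman are my own, not from the slides) that builds the tree with a min-heap keyed on frequency and prints the resulting codewords; depending on how ties are broken, the printed codes may differ from the table by swapping some 0/1 labels while keeping the same lengths.

#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Node {
    long long freq;
    char ch;                       // meaningful only for leaves
    Node *left = nullptr, *right = nullptr;
};

struct ByFreq {                    // min-heap ordering on frequency
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Huffman(C): repeatedly extract the two least-frequent nodes and merge them.
Node* buildHuffman(const std::vector<std::pair<char, long long>>& freqs) {
    std::priority_queue<Node*, std::vector<Node*>, ByFreq> q;
    for (auto& [c, f] : freqs) q.push(new Node{f, c});
    while (q.size() > 1) {
        Node* x = q.top(); q.pop();                       // Extract-Min
        Node* y = q.top(); q.pop();                       // Extract-Min
        q.push(new Node{x->freq + y->freq, 0, x, y});     // f[z] = f[x] + f[y]; Insert(Q, z)
    }
    return q.top();                                       // root of the Huffman tree
}

void printCodes(const Node* n, const std::string& code) {
    if (!n->left && !n->right) { std::cout << n->ch << ": " << code << "\n"; return; }
    printCodes(n->left, code + "0");
    printCodes(n->right, code + "1");
}

int main() {
    // Frequencies (in thousands) from the worked example, characters a..f.
    Node* root = buildHuffman({{'a', 45}, {'b', 13}, {'c', 12}, {'d', 16}, {'e', 9}, {'f', 5}});
    printCodes(root, "");
}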
Example: Constructing a Huffman Code
Q:  f:5  e:9  c:12  b:13  d:16  a:45
The initial set of n = 6 nodes, one for each letter. The number of iterations of the loop is n − 1 (6 − 1 = 5).
Constructing a Huffman Code (continued)
for i ← 1:
  allocate a new node z
  left[z] ← x ← Extract-Min(Q) = f:5
  right[z] ← y ← Extract-Min(Q) = e:9
  f[z] ← f[x] + f[y] = 5 + 9 = 14; Insert(Q, z)
  Q:  c:12  b:13  z:14  d:16  a:45
Constructing a Huffman Code (continued)
for i ← 2:
  allocate a new node z
  left[z] ← x ← Extract-Min(Q) = c:12
  right[z] ← y ← Extract-Min(Q) = b:13
  f[z] ← f[x] + f[y] = 12 + 13 = 25; Insert(Q, z)
  Q:  z:14  d:16  z:25  a:45
for i ← 3:
  allocate a new node z
  left[z] ← x ← Extract-Min(Q) = z:14
  right[z] ← y ← Extract-Min(Q) = d:16
  f[z] ← f[x] + f[y] = 14 + 16 = 30; Insert(Q, z)
  Q:  z:25  z:30  a:45
Constructing a Huffman Code (continued)
for i ← 4:
  allocate a new node z
  left[z] ← x ← Extract-Min(Q) = z:25
  right[z] ← y ← Extract-Min(Q) = z:30
  f[z] ← f[x] + f[y] = 25 + 30 = 55; Insert(Q, z)
  Q:  a:45  z:55
Constructing a Huffman Code (continued)
for i ← 5:
  allocate a new node z
  left[z] ← x ← Extract-Min(Q) = a:45
  right[z] ← y ← Extract-Min(Q) = z:55
  f[z] ← f[x] + f[y] = 45 + 55 = 100; Insert(Q, z)
The single node remaining in Q is the root of the Huffman tree; labelling each left edge 0 and each right edge 1 gives the codewords a = 0, b = 101, c = 100, d = 111, e = 1101, f = 1100 from the table above.

Lemma 1 (Greedy Choice): There exists an optimal prefix code such that the two characters with smallest frequency are siblings and have maximal depth in T.
Proof: Let x and y be two such characters (the two with smallest frequency), and let T be a tree representing an optimal prefix code. Let a and b be two sibling leaves of maximal depth in T, and assume without loss of generality that f(x) ≤ f(y) and f(a) ≤ f(b). This implies that f(x) ≤ f(a) and f(y) ≤ f(b). Let T' be the tree obtained from T by exchanging a and x, and b and y.
[Figure: trees T and T', where x and y are swapped into the positions of the deepest siblings a and b.]
Conclusion
• The Huffman algorithm has been analyzed.
• The design of the algorithm has been discussed.
• Its computational time has been given.
• Applications can be observed in various domains, e.g. data compression and defining unique codes; the generalized algorithm can be used for other optimization problems as well.
Two questions
• Why does the algorithm produce the best tree?
• How do you implement it efficiently?


Knapsack Problem using Greedy Approach

Session Learning Outcome (SLO)
Greedy is used in optimization problems. The algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.
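The detailed slides for this section were not reproduced in the extract, so here is a hedged sketch (with hypothetical item data) of the usual greedy solution to the fractional knapsack problem: sort the items by value-to-weight ratio and take as much as possible of the best ratio first.

#include <algorithm>
#include <iostream>
#include <vector>

struct Item { double weight, value; };

// Fractional knapsack: greedily take items in decreasing order of value/weight,
// taking a fraction of the last item if it does not fit completely.
double fractionalKnapsack(std::vector<Item> items, double capacity) {
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;
    });
    double total = 0.0;
    for (const Item& it : items) {
        if (capacity <= 0) break;
        double take = std::min(it.weight, capacity);   // whole item, or only the part that fits
        total += it.value * (take / it.weight);
        capacity -= take;
    }
    return total;
}

int main() {
    std::vector<Item> items = {{10, 60}, {20, 100}, {30, 120}};
    std::cout << fractionalKnapsack(items, 50) << "\n";   // 240: all of the first two, 2/3 of the third
}

Unlike the 0/1 knapsack, this greedy strategy is optimal here precisely because fractions of items are allowed.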

INTRODUCTION – GREEDY
• Greedy algorithms are simple and straightforward. They are shortsighted in their approach: at each stage they make the choice that looks best at the moment.
• A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the sub-problems do not have to be known at each stage; instead a "greedy" choice can be made of what looks best for the moment.
• Many real-world problems are optimization problems, in that they attempt to find an optimal solution among many possible candidate solutions.
• An optimization problem is one in which you want to find, not just a solution, but the best solution.
• A "greedy algorithm" sometimes works well for optimization problems.
• A greedy algorithm works in phases. At each phase you take the best you can get right now, without regard for future consequences, hoping that by choosing a local optimum at each step, you will end up at a global optimum.


APPLICATIONS OF GREEDY
APPROACH

Home assignment /Questions


Tree Traversals
Session Learning Outcome (SLO)
These usage patterns can be divided into the three ways that we access the nodes of the tree.

Tree Traversal
•Traversal is the process of visiting every node once
•Visiting a node entails doing some processing at that node, but when
describing a traversal strategy, we need not concern ourselves with
what that processing is

Binary Tree Traversal Techniques
•Three recursive techniques for binary tree traversal
•In each technique, the left subtree is traversed recursively, the right
subtree is traversed recursively, and the root is visited
•What distinguishes the techniques from one another is the order of
those 3 tasks

Preorder, Inorder, Postorder
• In Preorder, the root is visited before (pre) the subtree traversals.
• In Inorder, the root is visited in between the left and right subtree traversals.
• In Postorder, the root is visited after (post) the subtree traversals.

Preorder Traversal:
1. Visit the root
2. Traverse left subtree
3. Traverse right subtree

Inorder Traversal:
1. Traverse left subtree
2. Visit the root
3. Traverse right subtree

Postorder Traversal:
1. Traverse left subtree
2. Traverse right subtree
3. Visit the root

Illustrations for Traversals
• Assume: visiting a node is printing its label.
[Figure: a binary tree with root 1 and nodes labelled 1–12.]
• Preorder: 1 3 5 4 6 7 8 9 10 11 12
• Inorder: 4 5 6 3 1 8 7 9 11 10 12
• Postorder: 4 6 5 3 8 11 12 10 9 7 1
Illustrations for Traversals (Contd.)
• Assume: visiting a node is printing its data.
[Figure: a binary search tree with root 15 holding the keys 2, 3, 6, 7, 8, 10, 11, 12, 14, 15, 20, 22, 27, 30.]
• Preorder: 15 8 2 6 3 7 11 10 12 14 20 27 22 30
• Inorder: 2 3 6 7 8 10 11 12 14 15 20 22 27 30
• Postorder: 3 7 6 2 10 14 12 11 8 22 30 27 20 15

Code for the Traversal Techniques
• The code for visit is up to you to provide, depending on the application.
• A typical example for visit(...) is to print out the data part of its input node.

void preOrder(Tree *tree) {
    if (tree->isEmpty()) return;
    visit(tree->getRoot());
    preOrder(tree->getLeftSubtree());
    preOrder(tree->getRightSubtree());
}

void inOrder(Tree *tree) {
    if (tree->isEmpty()) return;
    inOrder(tree->getLeftSubtree());
    visit(tree->getRoot());
    inOrder(tree->getRightSubtree());
}

void postOrder(Tree *tree) {
    if (tree->isEmpty()) return;
    postOrder(tree->getLeftSubtree());
    postOrder(tree->getRightSubtree());
    visit(tree->getRoot());
}
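The functions above assume a Tree class (isEmpty, getRoot, getLeftSubtree, getRightSubtree) that the slides do not define. As a self-contained alternative, here is a minimal sketch using a plain pointer-based Node struct (my own names, not from the slides); running it on a reconstruction of the tree from the first illustration, inferred from its preorder and inorder sequences, reproduces the listed orders.

#include <iostream>

struct Node {
    int label;
    Node *left = nullptr, *right = nullptr;
    Node(int v, Node* l = nullptr, Node* r = nullptr) : label(v), left(l), right(r) {}
};

// Visiting a node here simply prints its label.
void preOrder(Node* n)  { if (!n) return; std::cout << n->label << ' '; preOrder(n->left);  preOrder(n->right); }
void inOrder(Node* n)   { if (!n) return; inOrder(n->left);  std::cout << n->label << ' '; inOrder(n->right); }
void postOrder(Node* n) { if (!n) return; postOrder(n->left); postOrder(n->right); std::cout << n->label << ' '; }

int main() {
    // Reconstruction of the tree from the first traversal illustration.
    Node* root =
        new Node(1,
            new Node(3, new Node(5, new Node(4), new Node(6)), nullptr),
            new Node(7, new Node(8),
                        new Node(9, nullptr,
                                 new Node(10, new Node(11), new Node(12)))));
    preOrder(root);  std::cout << "\n";   // 1 3 5 4 6 7 8 9 10 11 12
    inOrder(root);   std::cout << "\n";   // 4 5 6 3 1 8 7 9 11 10 12
    postOrder(root); std::cout << "\n";   // 4 6 5 3 8 11 12 10 9 7 1
}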

Home assignment /Questions

Write the preorder, inorder and postorder traversals of the binary tree shown
below.
Minimum Spanning Tree
Agenda
• Spanning Tree
• Minimum Spanning Tree
• Prim’s Algorithm
• Kruskal’s Algorithm
• Summary
• Assignment
Session Outcomes
• Apply and analyse different problems using greedy techniques
• Evaluate real-time problems
• Analyse the different trees and find the low-cost spanning tree

Spanning Tree
•Given an undirected and connected graph G=(V,E), a spanning tree
of the graph G is a tree that spans G (that is, it includes every vertex
of G) and is a subgraph of G (every edge in the tree belongs to G)

[Figure: an undirected graph and four of the spanning trees of the graph.]

Minimum Spanning Tree
• The cost of a spanning tree is the sum of the weights of all the edges in the tree. There can be many spanning trees. A minimum spanning tree is the spanning tree whose cost is minimum among all the spanning trees. There can also be many minimum spanning trees.
[Figure: an undirected weighted graph, one of its spanning trees with cost 11 (4 + 5 + 2), and a minimum spanning tree with cost 7 (4 + 1 + 2).]
Applications – MST
• Minimum spanning tree has direct application in the design of networks. It is used in algorithms approximating the travelling salesman problem, the multi-terminal minimum cut problem and minimum-cost weighted perfect matching. Other practical applications are:
•Cluster Analysis
•Handwriting recognition
•Image segmentation

Prim’s Algorithm
• Prim’s Algorithm also uses the greedy approach to find the minimum spanning tree. In Prim’s Algorithm we grow the spanning tree from a starting vertex. Unlike Kruskal’s, which adds an edge to the growing forest, Prim’s adds a vertex to the growing spanning tree.
• Prim’s Algorithm is preferred when:
  • The graph is dense.
  • There is a large number of edges in the graph, i.e. E = O(V²).
Steps for finding an MST – Prim’s Algorithm
Step 1: Select any vertex.
Step 2: Select the shortest edge connected to that vertex.
Step 3: Select the shortest edge connected to any vertex already connected.
Step 4: Repeat Step 3 until all vertices have been connected.
Prim’s Algorithm (pseudocode)
Algorithm Prim(G)
    VT ← {v0}
    ET ← {} // empty set
    for i ← 1 to |V| − 1 do
        find a minimum-weight edge e* = (v*, u*) such that v* is in VT and u* is in V − VT
        VT ← VT ∪ {u*}
        ET ← ET ∪ {e*}
    return ET

Time Complexity – Prim’s Algorithm
• If an adjacency list is used to represent the graph, then using breadth-first search, all the vertices can be traversed in O(V + E) time.
• We traverse all the vertices of the graph using breadth-first search and use a min-heap for storing the vertices not yet included in the MST.
• To get the minimum-weight edge, we use the min-heap as a priority queue.
• Min-heap operations like extracting the minimum element and decreasing a key value take O(log V) time.
• So, the overall time complexity
  = O(E + V) × O(log V)
  = O((E + V) log V)
  = O(E log V)
This time complexity can be improved and reduced to O(E + V log V) using a Fibonacci heap.
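A hedged C++ sketch matching the O(E log V) analysis above (the graph representation and function names are my own, not from the slides): a "lazy" variant of Prim's algorithm that uses std::priority_queue as the min-heap and skips stale entries instead of decreasing keys.

#include <iostream>
#include <queue>
#include <vector>

// Lazy Prim's algorithm with a binary min-heap: O(E log V).
// adj[u] holds pairs (weight, v) for every edge (u, v); the graph is undirected.
long long primMST(const std::vector<std::vector<std::pair<int, int>>>& adj, int start) {
    int n = static_cast<int>(adj.size());
    std::vector<bool> inTree(n, false);
    // Heap entries are (edge weight, vertex); the smallest weight is on top.
    std::priority_queue<std::pair<int, int>, std::vector<std::pair<int, int>>, std::greater<>> pq;
    pq.push({0, start});
    long long cost = 0;
    while (!pq.empty()) {
        auto [w, u] = pq.top(); pq.pop();
        if (inTree[u]) continue;          // stale entry: u was already added to the tree
        inTree[u] = true;
        cost += w;                        // weight of the edge that brought u into the tree
        for (auto [wt, v] : adj[u])
            if (!inTree[v]) pq.push({wt, v});
    }
    return cost;
}

int main() {
    // Small hypothetical example: 4 vertices, undirected weighted edges.
    std::vector<std::vector<std::pair<int, int>>> adj(4);
    auto addEdge = [&](int u, int v, int w) { adj[u].push_back({w, v}); adj[v].push_back({w, u}); };
    addEdge(0, 1, 4); addEdge(0, 2, 1); addEdge(2, 1, 2); addEdge(2, 3, 5); addEdge(1, 3, 7);
    std::cout << primMST(adj, 0) << "\n";   // 8 (edges 0-2, 2-1, 2-3)
}

The textbook version stores one heap entry per vertex and decreases its key; the lazy version trades a slightly larger heap for simpler code while keeping the same asymptotic bound.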

Prim’s Algorithm – Example
[Figure: step-by-step construction of the minimum spanning tree for a weighted undirected graph on the vertices a–g (edge weights between 5 and 15), starting from vertex ‘d’.]
