Unit 3: Design and Analysis of Algorithms (DAA)
Session Outcome:
Apply both dynamic programming and greedy algorithms to solve optimization problems.
Greedy Method
• "Greedy Method finds out of many options, but you have to choose the best
option.“
• Greedy Algorithm solves problems by making the best choice that seems
best at the particular moment. Many optimization problems can be
determined using a greedy algorithm. Some issues have no efficient solution,
but a greedy algorithm may provide a solution that is close to optimal. A
greedy algorithm works if a problem exhibits the following two properties:
• Greedy choice property: a globally optimal solution can be arrived at by making locally optimal choices. In other words, an optimal solution can be obtained by making "greedy" choices.
• Optimal substructure: optimal solutions contain optimal sub-solutions. In other words, the answers to subproblems of an optimal solution are themselves optimal.
Examples:
• Machine scheduling
• Fractional knapsack problem (a sketch appears below)
• Minimum spanning tree
• Huffman code
• Job sequencing
• Activity selection problem
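To make the greedy choice concrete, here is a minimal Python sketch of the fractional knapsack problem from the list above; the item values and weights are invented for illustration.

```python
def fractional_knapsack(values, weights, capacity):
    """Greedy fractional knapsack: repeatedly take the item with the
    highest value-to-weight ratio, splitting the last item if needed."""
    # Sort items by value/weight ratio, best first (the greedy choice).
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)     # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

# Hypothetical data: items worth 60, 100, 120 weighing 10, 20, 30; capacity 50.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```

Because items can be split, the locally best ratio-first choice is also globally optimal here, which is exactly the greedy choice property.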
Dynamic Programming
• Dynamic programming is one of the most powerful design techniques for solving optimization problems.
• A divide & conquer algorithm partitions the problem into disjoint subproblems, solves the subproblems recursively, and then combines their solutions to solve the original problem.
• Dynamic programming is used when the subproblems are not independent, i.e. when subproblems share subsubproblems. In this case, divide and conquer may do more work than necessary, because it solves the same subproblem multiple times.
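The overlap is easiest to see on a small example. The following sketch (not from the original slides) contrasts a plain divide-and-conquer recursion, which recomputes shared subproblems, with a memoized version that solves each subproblem once:

```python
from functools import lru_cache

def fib_naive(n):
    # Divide & conquer: the two recursive calls share subproblems,
    # so the same value is recomputed many times (exponential time).
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    # Dynamic programming via memoization: each subproblem is solved once.
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(50))  # 12586269025, computed in linear time
```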
Elements of Dynamic Programming
1. Stages: the problem can be divided into several subproblems, which are called stages. A stage is a small portion of a given problem. For example, in the shortest path problem, the stages were defined by the structure of the graph.
2. States: each stage has several states associated with it. The states for the shortest path problem were the nodes reached.
3. Decision: at each stage, there can be multiple choices, out of which the best decision should be taken. The decision taken at every stage should be optimal; this is called a stage decision.
4. Optimal policy: a rule which determines the decision at each stage; a policy is called an optimal policy if it is globally optimal. This is known as the Bellman principle of optimality.
5. Given the current state, the optimal choices for each of the remaining states do not depend on the previous states or decisions. In the shortest path problem, it was not necessary to know how we got to a node, only that we did.
6. There exists a recursive relationship that identifies the optimal decisions for stage j, given that stage j + 1 has already been solved (see the sketch after this list).
7. The final stage must be solved by itself.
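As a sketch of how stages, states, decisions and the recurrence fit together, consider a shortest path through a layered graph: each layer is a stage, the nodes in a layer are its states, and stage j is solved from the already-solved stage j + 1. All names and numbers below are hypothetical, for illustration only.

```python
# Each layer maps (state in stage j, state in stage j+1) -> edge cost.
stages = [
    {("s", "A"): 2, ("s", "B"): 5},                                # stage 0 -> 1
    {("A", "C"): 4, ("A", "D"): 1, ("B", "C"): 1, ("B", "D"): 3},  # stage 1 -> 2
    {("C", "t"): 3, ("D", "t"): 6},                                # stage 2 -> t
]

def shortest_path(stages, source="s", target="t"):
    # Backward (Bellman) recursion: stage j is solved using the
    # already-solved stage j + 1, exactly as element 6 above describes.
    best = {target: 0}                      # optimal cost-to-go per state
    for layer in reversed(stages):
        nxt, best = best, {}
        for (u, v), w in layer.items():     # decision: take edge (u, v)
            if v in nxt and w + nxt[v] < best.get(u, float("inf")):
                best[u] = w + nxt[v]
    return best[source]

print(shortest_path(stages))  # 9, e.g. via s -> B -> C -> t (5 + 1 + 3)
```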
Greedy vs Dynamic Programming
Feature: Feasibility
• Greedy method: we make whatever choice seems best at the moment, in the hope that it will lead to a globally optimal solution.
• Dynamic programming: we make a decision at each step considering the current problem and the solutions to previously solved subproblems, to calculate the optimal solution.

Feature: Optimality
• Greedy method: sometimes there is no guarantee of getting an optimal solution.
• Dynamic programming: it is guaranteed to generate an optimal solution, as it generally considers all possible cases and then chooses the best.

Feature: Memoization
• Greedy method: more efficient in terms of memory, as it never looks back at or revises previous choices.
• Dynamic programming: needs a table to memoize subproblem solutions, which increases its memory cost.

Feature: Time complexity
• Greedy method: generally faster. For example, Dijkstra's shortest path algorithm takes O(E log V + V log V) time.
• Dynamic programming: generally slower. For example, the Bellman-Ford algorithm takes O(VE) time.

Feature: Fashion
• Greedy method: computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
• Dynamic programming: computes its solution bottom-up or top-down by synthesizing it from smaller optimal sub-solutions.

Feature: Example
• Greedy method: fractional knapsack.
• Dynamic programming: 0/1 knapsack problem (a sketch appears below).
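For contrast with the greedy fractional knapsack sketched earlier, here is a minimal bottom-up dynamic-programming sketch of the 0/1 knapsack problem, using the same hypothetical item data.

```python
def knapsack_01(values, weights, capacity):
    """Bottom-up 0/1 knapsack: dp[c] is the best value achievable
    with capacity c using the items considered so far."""
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

# Greedy-by-ratio would take the weight-10 item first and end with 160,
# but the optimal whole-item choice is the two larger items (100 + 120).
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```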
Huffman Problem
[Figure: a Huffman code tree assigning A = 00, B = 010, C = 0110, D = 0111, E = 10, F = 11]
Huffman Codes
• In Huffman coding, variable-length codes are used.
• The data is considered to be a sequence of characters.
• Huffman codes are a widely used and very effective technique for
compressing data
• Savings of 20% to 90% are typical, depending on the
characteristics of the data being compressed.
• Huffman’s greedy algorithm uses a table of the frequencies of
occurrence of the characters to build up an optimal way of
representing each character as a binary string.
• Now let us see an example to understand the concepts used in
Huffman coding
Example: Huffman Codes

                             a    b    c    d    e    f
Frequency (in thousands)    45   13   12   16    9    5
Fixed-length codeword      000  001  010  011  100  101
Variable-length codeword     0  101  100  111 1101 1100
A data file of 100,000 characters contains only the characters a–f,
with the frequencies indicated above.
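A quick check of the savings: the fixed-length code needs 3 × 100,000 = 300,000 bits for the file, while the variable-length code needs (45·1 + 13·3 + 12·3 + 16·3 + 9·4 + 5·4) × 1,000 = 224,000 bits, a saving of roughly 25%.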
Constructing a Huffman Code
The tree is built bottom-up. Starting from a min-priority queue Q of the n characters keyed on frequency, the two least-frequent nodes are repeatedly extracted and merged:

for i ← 1 to n − 1
    allocate a new node z
    left[z] ← x ← Extract-Min(Q)
    right[z] ← y ← Extract-Min(Q)
    f[z] ← f[x] + f[y]
    Insert(Q, z)

Trace on the example (nodes shown by frequency):
• Initially Q = f:5, e:9, c:12, b:13, d:16, a:45.
• i = 1: merge f:5 and e:9 into a node of frequency 14.
• i = 2: merge c:12 and b:13 into a node of frequency 25.
• i = 3: merge 14 and d:16 into a node of frequency 30.
• i = 4: merge 25 and 30 into a node of frequency 55 (25 + 30 = 55).
• i = 5: merge a:45 and 55 into the root of frequency 100 (45 + 55 = 100); Q now holds only the root, which is the Huffman tree.

Labelling each left edge 0 and each right edge 1 yields the variable-length codewords in the table above.
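A minimal Python sketch of this construction, using the standard heapq module as the min-priority queue; the tuple layout and the tie-breaking counter are implementation choices, not part of the slides.

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman code from a {symbol: frequency} map. Each heap entry
    is (frequency, tie_breaker, tree); a tree is a symbol or a (left, right) pair."""
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Extract the two least-frequent trees and merge them: f[z] = f[x] + f[y].
        fx, _, x = heapq.heappop(heap)
        fy, _, y = heapq.heappop(heap)
        heapq.heappush(heap, (fx + fy, counter, (x, y)))
        counter += 1
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):        # internal node: 0 = left, 1 = right
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"    # single-symbol edge case
    walk(heap[0][2])
    return codes

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
# {'a': '0', 'c': '100', 'b': '101', 'f': '1100', 'e': '1101', 'd': '111'}
```

Run on the example frequencies, this reproduces the variable-length codewords from the table above.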
Greedy-choice property (proof sketch): let x and y be the two characters of lowest frequency, and let a and b be two sibling leaves of maximum depth in an optimal tree. This implies that f(x) ≤ f(a) and f(y) ≤ f(b). Let T' be the tree obtained by exchanging a and x, and b and y.
Conclusion
• The Huffman algorithm is analysed.
• The design of the algorithm is discussed.
• The computational time is given.
• Applications can be observed in various domains, e.g. data compression and defining unique codes; the generalized algorithm can be used for other optimization problems as well.
Introduction: Greedy
• Greedy algorithms are simple and straightforward.

Applications of Greedy Approach
[Figure slides: application examples of the greedy approach]
Tree access patterns can be divided into three ways of accessing the nodes of the tree.
Tree Traversal
• Traversal is the process of visiting every node once.
• Visiting a node entails doing some processing at that node, but when describing a traversal strategy, we need not concern ourselves with what that processing is.
Binary Tree Traversal Techniques
• Three recursive techniques for binary tree traversal.
• In each technique, the left subtree is traversed recursively, the right subtree is traversed recursively, and the root is visited.
• What distinguishes the techniques from one another is the order of those three tasks.
Preorder, Inorder, Postorder
• In preorder, the root is visited before (pre) the subtree traversals.
• In inorder, the root is visited in-between (in) the left and right subtree traversals.
• In postorder, the root is visited after (post) the subtree traversals.
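A minimal recursive sketch of the three traversals in Python; the Node class and the tiny example tree are illustrative.

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(n, out):        # root, then left subtree, then right subtree
    if n:
        out.append(n.data)
        preorder(n.left, out)
        preorder(n.right, out)

def inorder(n, out):         # left subtree, then root, then right subtree
    if n:
        inorder(n.left, out)
        out.append(n.data)
        inorder(n.right, out)

def postorder(n, out):       # left subtree, then right subtree, then root
    if n:
        postorder(n.left, out)
        postorder(n.right, out)
        out.append(n.data)

# Tiny example tree with root 2, left child 1, right child 3.
root = Node(2, Node(1), Node(3))
for f in (preorder, inorder, postorder):
    seq = []
    f(root, seq)
    print(f.__name__, seq)   # preorder [2,1,3], inorder [1,2,3], postorder [1,3,2]
```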
Illustrations for Traversals (Contd.)
• Assume: visiting a node is printing its data.
[Figure: binary tree with root 15; children of 15 are 8 and 20; children of 8 are 2 and 11; 2 has right child 6 with children 3 and 7; children of 11 are 10 and 12; 12 has right child 14; 20 has right child 27 with children 22 and 30]
• Preorder: 15 8 2 6 3 7 11 10 12 14 20 27 22 30
• Inorder: 2 3 6 7 8 10 11 12 14 15 20 22 27 30
• Postorder: 3 7 6 2 10 14 12 11 8 22 30 27 20 15
Exercise: write the preorder, inorder and postorder traversals of the binary tree shown below.
[Figure: exercise binary tree]
Minimum Spanning Tree

Agenda
• Spanning Tree
• Minimum Spanning Tree
• Prim's Algorithm
• Kruskal's Algorithm
• Summary
• Assignment
Session Outcomes
• Apply and analyse different problems using greedy techniques
• Evaluate real-time problems
• Analyse different trees and find the lowest-cost spanning tree
Spanning Tree
•Given an undirected and connected graph G=(V,E), a spanning tree
of the graph G is a tree that spans G (that is, it includes every vertex
of G) and is a subgraph of G (every edge in the tree belongs to G)
Minimum Spanning Tree
• The cost of the spanning tree is the sum of the weights of all the edges in the
tree. There can be many spanning trees. Minimum spanning tree is the spanning
tree where the cost is minimum among all the spanning trees. There also can be
many minimum spanning trees.
[Figure: an undirected weighted graph and several of its spanning trees with different total costs]
Prim's Algorithm
• Prim's algorithm also uses the greedy approach to find the minimum spanning tree. In Prim's algorithm we grow the spanning tree from a starting vertex: unlike Kruskal's, which adds an edge at each step, Prim's adds a vertex to the growing spanning tree.
• Prim's algorithm is preferred when:
  • the graph is dense;
  • there is a large number of edges in the graph, i.e. E = O(V²).
Steps for finding MST - Prim's Algorithm
Step 1: Select a starting vertex.
Step 2: Select the shortest edge connected to that vertex.
Step 3: Repeat, adding the shortest edge that connects a tree vertex to a non-tree vertex, until all vertices are in the tree.
VT ← {v0}  // start from an arbitrary vertex v0
ET ← {}    // empty set of tree edges
for i ← 1 to |V| − 1 do
    find the minimum-weight edge e* = (v*, u*) such that v* is in VT and u* is in V − VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
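A hedged Python sketch of this pseudocode, using heapq as the min-heap with lazy deletion of stale entries; the adjacency-list format and the example graph are assumptions for illustration, not the lecture's figure.

```python
import heapq

def prim_mst(graph, start):
    """graph: {vertex: [(weight, neighbor), ...]} for an undirected graph.
    Returns the MST edge set, grown one vertex at a time from 'start'."""
    visited = {start}                       # VT
    # Heap of candidate edges (weight, u, v) crossing the cut (VT, V - VT).
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    mst = []                                # ET
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)       # minimum-weight crossing edge e*
        if v in visited:
            continue                        # stale entry: v already joined VT
        visited.add(v)                      # VT <- VT U {u*}
        mst.append((u, v, w))               # ET <- ET U {e*}
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst

# Hypothetical graph (not the lecture's figure):
g = {
    "a": [(2, "b"), (3, "c")],
    "b": [(2, "a"), (1, "c"), (4, "d")],
    "c": [(3, "a"), (1, "b"), (5, "d")],
    "d": [(4, "b"), (5, "c")],
}
print(prim_mst(g, "a"))  # [('a', 'b', 2), ('b', 'c', 1), ('b', 'd', 4)]
```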
Time Complexity - Prim's Algorithm
• If an adjacency list is used to represent the graph, all the vertices can be traversed in O(V + E) time.
• We traverse all the vertices of the graph and use a min-heap to store the vertices not yet included in the MST.
• To get the minimum-weight edge, we use the min-heap as a priority queue.
• Min-heap operations such as extracting the minimum element and decreasing a key value take O(log V) time, giving O((V + E) log V) = O(E log V) overall. This can be improved to O(E + V log V) using a Fibonacci heap.
[Figure: step-by-step trace of Prim's algorithm on a weighted graph with vertices a-g (edge weights including 5, 6, 7, 8, 9, 11, 15), starting from vertex d; at each step the minimum-weight edge crossing the cut is added to the tree]