ADA Module 4 - Full

Analysis and Design of Algorithms module 4

Uploaded by rashmi gs

DYNAMIC PROGRAMMING: Three basic examples, The Knapsack Problem and Memory Functions, Warshall’s and Floyd’s Algorithms.

THE GREEDY METHOD: Prim’s Algorithm, Kruskal’s Algorithm, Dijkstra’s Algorithm, Huffman Trees and Codes.
The greedy technique is a problem-solving approach that builds a solution piece by piece, always choosing the next piece that offers the most immediate benefit, or is the most optimal according to some criterion.

The goal is to find a globally optimal solution by making a series of locally optimal choices.

On each step, it does a “greedy” grab of the best alternative available.
Greedy: General method
A greedy algorithm is an algorithm that always tries to find the best solution for each sub-problem.
In this method, we have to find the best option out of the many available.

This method may or may not give the best output.

• A greedy algorithm solves problems by making the choice that seems best at the particular moment.
Feasible solution: Any solution that satisfies the given constraints is called a feasible solution.
Optimal solution: The best choice among all feasible solutions is called the optimal solution.
In the greedy method, at each step, the choice made must be:
• feasible, i.e., it has to satisfy the problem’s constraints;
• locally optimal, i.e., it has to be the best local choice among all feasible choices available on that step;
• irrevocable, i.e., once made, it cannot be changed on subsequent steps.

These requirements explain the technique’s name: on each step, it suggests a “greedy” grab of the best alternative available.
Minimum cost spanning tree algorithms:

1. Kruskal’s Algorithm

2. Prim’s Algorithm
Kruskal’s Algorithm

Kruskal’s algorithm is used to find the minimum spanning tree of a connected weighted graph.

A minimum spanning tree has (V − 1) edges, where V is the number of vertices in the given graph.

Steps for finding the MST using Kruskal’s algorithm:

1. Sort all the edges in ascending order of their weight.

2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If a cycle is not formed, include this edge; else, discard it.

3. Repeat step 2 until there are (V − 1) edges in the spanning tree.
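The steps above can be sketched in Python with a simple union–find (disjoint-set) structure. This is an illustrative sketch, not the slides’ code, and the example edge list is hypothetical:

```python
def kruskal(num_vertices, edges):
    """Return the MST edges, given edges as (weight, u, v) tuples."""
    parent = list(range(num_vertices))

    def find(x):
        # Find the root representative of x's component (with path halving).
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # step 1: sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # step 2: skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
        if len(mst) == num_vertices - 1:    # step 3: stop at V - 1 edges
            break
    return mst

# Hypothetical graph: vertices 0..4, edges as (weight, u, v)
edges = [(1, 0, 1), (3, 0, 3), (4, 1, 2), (5, 1, 3), (7, 3, 4)]
tree = kruskal(5, edges)
print(sum(w for _, _, w in tree))  # → 15 (total MST weight for this example)
```

With edges already sorted, the running time is dominated by the sort, giving the O(E log E) bound stated below.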
Obtain the minimum spanning tree by applying Kruskal’s algorithm for the following graph.

[Figure: example graph with vertices a, b, c, d, e and edge weights including 1, 3, 4 and 7.]
Time complexity of Kruskal’s algorithm is: O(E log E)
Apply Kruskal’s algorithm to obtain minimum spanning tree
Prim’s algorithm
• Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding subtrees.

• The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph’s vertices.

• On each iteration, the algorithm expands the current tree in the greedy manner by simply attaching to it the nearest vertex not yet in the tree.

• The algorithm stops after all the graph’s vertices have been included in the tree being constructed.

Time complexity of Prim’s algorithm (with a min-heap) is: O(E log V)
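A minimal Python sketch of this expanding-subtree process, using a min-heap of candidate edges. The adjacency-list graph g is a hypothetical example, not the slides’ figure:

```python
import heapq

def prim(graph, start):
    """graph: dict mapping vertex -> list of (weight, neighbor).
    Returns the total weight of the minimum spanning tree."""
    visited = {start}                   # the initial subtree: a single vertex
    heap = list(graph[start])           # candidate edges leaving the tree
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)      # greedily grab the nearest vertex
        if v in visited:
            continue                    # edge leads back into the tree; skip
        visited.add(v)
        total += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total

# Hypothetical undirected graph as adjacency lists of (weight, neighbor)
g = {
    'a': [(1, 'b'), (3, 'd')],
    'b': [(1, 'a'), (4, 'c'), (5, 'd')],
    'c': [(4, 'b')],
    'd': [(3, 'a'), (5, 'b'), (7, 'e')],
    'e': [(7, 'd')],
}
print(prim(g, 'a'))  # → 15
```

Each edge enters the heap at most twice, giving the O(E log V) bound stated above.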
Apply Prim’s algorithm to obtain the minimum spanning tree.

Apply both Kruskal’s and Prim’s algorithms to the graphs below:

[Figure: two example graphs, with stated answers 99 units and 26.]
Dijkstra’s Algorithm
• It is also called the single-source shortest-path algorithm: for a given vertex called the source in a weighted connected graph, find the shortest paths to all its other vertices.

• Dijkstra’s algorithm is very similar to Prim’s algorithm for the minimum spanning tree; it is used to find the shortest paths from one node to all other nodes.

• It is applicable to both directed and undirected graphs.

• It is not applicable to a graph with negative edge weights.

The following steps are performed:

1. The source node ‘s’ is chosen; dist(s) is initialized to 0, and the dist values of all other nodes are initialized to ∞.

2. On each run, the unprocessed node u with the smallest dist value is chosen.

3. Update the dist values of the nodes v adjacent to the current node u as follows:
 • if dist(u) + weight(u, v) < dist(v), a new minimal distance has been found for v, so update dist(v) to this new value;
 • otherwise, no update is made to dist(v).

4. Repeat steps 2–3 for all remaining nodes.
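Steps 1–4 can be sketched in Python with a min-heap. The example graph matches the worked trace in these slides (edges a–b = 1, a–d = 3, b–c = 5, d–e = 7):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (weight, neighbor).
    Returns a dict of shortest distances from source."""
    dist = {v: float('inf') for v in graph}   # step 1: all nodes start at infinity
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)            # step 2: node with smallest dist
        if d > dist[u]:
            continue                          # stale heap entry; skip
        for w, v in graph[u]:
            if d + w < dist[v]:               # step 3: relax adjacent edges
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {
    'a': [(1, 'b'), (3, 'd')],
    'b': [(1, 'a'), (5, 'c')],
    'c': [(5, 'b')],
    'd': [(3, 'a'), (7, 'e')],
    'e': [(7, 'd')],
}
print(dijkstra(g, 'a'))  # → {'a': 0, 'b': 1, 'c': 6, 'd': 3, 'e': 10}
```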


Obtain the single-source shortest paths by applying Dijkstra’s algorithm to the following graph (take the initial vertex as ‘a’).

[Figure: graph with edges a–b = 1, a–d = 3, b–c = 5, d–e = 7.]

Tree vertices | Remaining vertices
a(-, 0)       | b(a, 1), c(-, ∞), d(a, 3), e(-, ∞)
b(a, 1)       | c(b, 6) [5 + 1], d(a, 3), e(-, ∞)
d(a, 3)       | c(b, 6), e(d, 10) [7 + 3]
c(b, 6)       | e(d, 10)
e(d, 10)      | ---
The shortest distances from node a to all other nodes are:

a → b = 1
a → b → c = 6
a → d = 3
a → d → e = 10

The time complexity of Dijkstra’s Algorithm in


best and average case is Ꝋ(Elog2V)

The time complexity of Dijkstra’s Algorithm in


worst case is Ꝋ(V2)
Obtain single source shortest path by applying Dijkstra’s algorithm for the
following graph. Consider the source vertex as ‘s’.
Huffman Code and Huffman Tree
• Huffman coding was developed by David Huffman in 1951.

• Huffman coding is used to encode the given data. (We usually encode data during compression, to reduce file size, or for data integrity purposes.)

• Encoding can be fixed-length or variable-length.

• It is also called a data compression technique.

• A tree constructed using the Huffman algorithm is called a Huffman tree.
Steps to construct a Huffman tree:

1. Calculate the frequency of each character (if not given).

2. Sort the characters in ascending order of their frequency.

3. Extract the two characters with minimum frequency and make them children of a new binary tree node.

4. Give the new internal node a frequency equal to the sum of the two nodes’ frequencies.

5. Add this node back to the table.

6. Repeat steps 2–5 until only one node remains.

7. The final constructed tree is the Huffman tree.
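The construction steps above can be sketched in Python with a min-heap. This is an illustrative sketch, not the slides’ code; the tie-breaking counter is an implementation detail, so individual codes may differ from the worked example even though the code lengths match:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """freq: dict mapping character -> frequency. Returns dict character -> code."""
    tiebreak = count()  # makes heap entries comparable when frequencies tie
    heap = [(f, next(tiebreak), {ch: ''}) for ch, f in freq.items()]
    heapq.heapify(heap)                       # step 2: order nodes by frequency
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # step 3: two minimum-frequency nodes
        f2, _, right = heapq.heappop(heap)
        merged = {}
        for ch, code in left.items():
            merged[ch] = '0' + code           # left branch labelled 0
        for ch, code in right.items():
            merged[ch] = '1' + code           # right branch labelled 1
        # steps 4-5: new node with the summed frequency goes back on the heap
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_codes({'A': 0.5, 'B': 0.35, 'C': 0.5, 'D': 0.1, 'E': 0.4, '_': 0.2})
encoded = ''.join(codes[ch] for ch in 'BAD_AC')
print(len(encoded))  # 17 bits, versus 48 bits at a fixed 8 bits per character
```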


Construct the Huffman tree for the following data and obtain its codes.

Character:               A     B     C    D    E    _
Probability (frequency): 0.5   0.35  0.5  0.1  0.4  0.2

a) Encode the text BAD_AC
b) Decode the text whose encoding is 1100110110

[Figure: step-by-step construction of the Huffman tree. The two lowest-frequency nodes, D (0.1) and _ (0.2), are merged first into a node of weight 0.3; this node is merged with B (0.35) into 0.65; E (0.4) and A (0.5) are merged into 0.9; C (0.5) and the 0.65 node are merged into 1.15; finally 0.9 and 1.15 are merged into the root, 2.05.]

Give numbering: each left branch is numbered 0 and each right branch 1. Reading the branch labels from the root to each leaf gives the codes:

Character: A    B    C    D     E    _
Code:      01   111  10   1100  00   1101

1) Encode the text BAD_AC:
BAD_AC = 111 01 1100 1101 01 10 = 11101110011010110

2) Decode the text whose encoding is 1100110110:
1100 1101 10 = D_C

BAD_AC in a fixed-length 8-bit encoding = 48 bits; the Huffman encoding 11101110011010110 = 17 bits.
Introduction to Dynamic Programming
General Method
Dynamic programming is a technique of
• breaking down a problem into smaller sub-problems,
• solving each sub-problem only once and storing its solution, and
• eventually finding a solution to the original problem.

Dynamic Programming approaches

There are two methods of implementing dynamic programming:
1. Top-down approach
2. Bottom-up approach
Top-Down approach
• This approach solves the bigger problem by recursively solving smaller sub-problems.
• As sub-problems are solved, their results are stored for later use.
• This way, we don’t need to solve the same sub-problem more than once. This method of saving intermediate results is called Memoization (not memorization).
Bottom-Up Approach
• The bottom-up method is an iterative version of the top-down approach.
• This approach starts with the smallest sub-problems and works upwards to the largest.
• Thus, when solving a particular sub-problem, we already have the results of the smaller sub-problems it depends on.
Three Basic Examples

EXAMPLE 1: Coin-row problem

EXAMPLE 2: Change-making problem

EXAMPLE 3: Coin-collecting problem


EXAMPLE 1: Coin-row problem

There is a row of n coins whose values are some positive integers c1, c2, ..., cn, not necessarily distinct.

The goal is to pick up the maximum amount of money subject to the constraint that no two coins adjacent in the initial row can be picked up.

Let F(n) be the maximum amount that can be picked up from the row of n coins.
To derive a recurrence for F(n), we partition all the allowed coin selections into two groups: those that include the last coin and those without it.

The largest amount we can get from the first group is equal to cn + F(n − 2) — the value of the nth coin plus the maximum amount we can pick up from the first n − 2 coins.

The largest amount we can get from the second group is equal to F(n − 1), by the definition of F(n).

Thus we have:

F(n) = max{cn + F(n − 2), F(n − 1)} for n > 1, with F(0) = 0 and F(1) = c1.
The application of the algorithm to the coin row of denominations 5, 1, 2, 10, 6, 2 is shown in the figure beside: the computed values are F(0), ..., F(6) = 0, 5, 5, 7, 15, 15, 17, so the maximum amount is 17.

Using CoinRow to find F(n), the largest amount of money that can be picked up, takes Θ(n) time and Θ(n) space.

This is by far superior to the alternatives:

- the straightforward top-down application of the recurrence, and
- solving the problem by exhaustive search.
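A bottom-up sketch of this recurrence in Python (illustrative, not the slides’ code), applied to the slides’ coin row 5, 1, 2, 10, 6, 2:

```python
def coin_row(coins):
    """Maximum money pickable from non-adjacent coins, via the recurrence
    F(n) = max(c_n + F(n-2), F(n-1))."""
    prev2, prev1 = 0, 0           # F(n-2) and F(n-1), starting from F(0) = 0
    for c in coins:
        prev2, prev1 = prev1, max(c + prev2, prev1)
    return prev1

print(coin_row([5, 1, 2, 10, 6, 2]))  # → 17 (picking coins 5, 10, 2)
```

Keeping only the last two F values reduces the space from Θ(n) to Θ(1) while preserving the Θ(n) running time.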
EXAMPLE 2: Change-making problem

Give change for amount n using the minimum number of coins of denominations d1 < d2 < ... < dm, where d1 = 1.

Let F(n) be the minimum number of coins whose values add up to n.

The amount n can only be obtained by adding one coin of denomination dj to the amount n − dj, for j = 1, 2, ..., m such that n ≥ dj. Thus:

F(n) = min{F(n − dj) : j such that n ≥ dj} + 1 for n > 0, with F(0) = 0.
The application of the algorithm to amount n = 6 and denominations 1, 3, 4 is shown in the figure beside: F(0), ..., F(6) = 0, 1, 2, 1, 1, 2, 2.

The minimum-coin set for n = 6 is two 3’s.
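A bottom-up Python sketch of this recurrence, applied to the slides’ instance n = 6 with denominations 1, 3, 4:

```python
def change_making(n, denominations):
    """Minimum number of coins adding up to n; d1 = 1 guarantees a solution."""
    F = [0] * (n + 1)                         # F(0) = 0
    for amount in range(1, n + 1):
        # try every denomination that fits, keeping the cheapest remainder
        F[amount] = min(F[amount - d] for d in denominations if d <= amount) + 1
    return F[n]

print(change_making(6, [1, 3, 4]))  # → 2 (two 3's)
```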
EXAMPLE 3: Coin-collecting problem

Several coins are placed in cells of an n × m board, no more than one coin per cell.

A robot, located in the upper left cell of the board, needs to collect as many of the coins as
possible and bring them to the bottom right cell.

On each step, the robot can move either one cell to the right or one cell down from its current
location.

When the robot visits a cell with a coin, it always picks up that coin.

Design an algorithm to find the maximum number of coins the robot can collect and a path it needs to follow to do this.
Let F (i, j ) be the largest number of coins the robot can collect and bring to the cell
(i, j ) in the ith row and jth column of the board.

It can reach this cell either from the adjacent cell (i − 1, j) above it
or from the adjacent cell (i, j − 1) to the left of it.

The largest numbers of coins that can be brought to these cells are F(i − 1, j) and F(i, j − 1), respectively.

Base Case: There are no adjacent cells above the cells in the first row, and there are no adjacent cells to the left of the cells in the first column. For those cells, we assume that F(i − 1, j) and F(i, j − 1) are equal to 0 for their nonexistent neighbors.

Therefore, the largest number of coins the robot can bring to cell (i, j) is the maximum of these two numbers plus one possible coin at cell (i, j) itself:

F(i, j) = max{F(i − 1, j), F(i, j − 1)} + cij,

where cij = 1 if there is a coin in cell (i, j), and cij = 0 otherwise.
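The recurrence can be sketched bottom-up in Python. The 3 × 4 board here is a hypothetical example, not the one from the slides:

```python
def coin_collecting(board):
    """board: n x m grid of 0/1 coin indicators. Returns the maximum number of
    coins collectable moving only right or down, from top-left to bottom-right."""
    n, m = len(board), len(board[0])
    F = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            above = F[i - 1][j] if i > 0 else 0   # nonexistent neighbors count as 0
            left = F[i][j - 1] if j > 0 else 0
            F[i][j] = max(above, left) + board[i][j]
    return F[n - 1][m - 1]

# Hypothetical 3x4 board: 1 marks a cell holding a coin
board = [
    [0, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]
print(coin_collecting(board))  # → 2
```

A path achieving the maximum can be recovered by back-tracing F from the bottom-right cell, always stepping to the larger of the two neighbors it was computed from.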


0/1 Knapsack Problem
• Given n items of known weights w1, ..., wn and values (profits) v1, ..., vn, and
• a knapsack of capacity W,
• find the most valuable subset (optimal solution) of the items that fits into the knapsack.

The dynamic programming algorithm works as follows:
• Consider an instance defined by the first i items, 1 ≤ i ≤ n, with weights w1, ..., wi, values v1, ..., vi, and knapsack capacity j, 1 ≤ j ≤ W.
• Let V(i, j) be the value of an optimal solution to this instance, i.e., the value of the most valuable subset of the first i items that fits into the knapsack of capacity j.
• Subsets of the first i items that fit the knapsack of capacity j can be divided into two categories, considered next.
0/1 Knapsack Problem
Using the recurrence equation, for 0 < i ≤ n (number of items) and 0 < j ≤ W (knapsack capacity), the value in the ith row and jth column, i.e., V(i, j), is computed.

[Figure: table for solving the knapsack problem by dynamic programming. Row i = 0 and column j = 0 hold 0s; the entry V[i, j] for item (wi, vi) is computed from V[i − 1, j] and V[i − 1, j − wi] in the previous row; the goal is V[n, W].]

0/1 Knapsack Problem
1. Among the subsets that do not include the ith item, the value of an optimal subset is V(i − 1, j). When j − wi < 0, the ith item cannot fit, so this is the only possibility.
2. Among the subsets that do include the ith item (possible only when j − wi ≥ 0), an optimal subset is made up of this item and an optimal subset of the first i − 1 items that fits into the knapsack of capacity j − wi. The value of such an optimal subset is vi + V(i − 1, j − wi).

The above two observations lead to the following recurrence:

V[i, j] = max{ V[i − 1, j], vi + V[i − 1, j − wi] }   if j − wi ≥ 0
V[i, j] = V[i − 1, j]                                 if j − wi < 0

with initial conditions V(0, j) = 0 for j ≥ 0 and V(i, 0) = 0 for i ≥ 0.


// Back-tracing the table to find the optimal solution.
while (i >= 1 && j >= 1)
{
    if (v[i][j] != v[i-1][j])
    {
        // include item i in the optimal solution
        j = j - w[i];   // remaining capacity
    }
    i = i - 1;          // check whether item i-1 fits the remaining capacity
}
0/1 Knapsack Problem

Example: n = 5, W = 10

Item   Weight   Profit
1      4        15
2      3        10
3      5        18
4      7        12
5      2        8

       j=0   1   2   3    4    5    6    7    8    9    10
i=0     0    0   0   0    0    0    0    0    0    0     0
i=1     0    0   0   0   15   15   15   15   15   15    15
i=2     0    0   0  10   15   15   15   25   25   25    25
i=3     0    0   0  10   15   18   18   25   28   33    33
i=4     0    0   0  10   15   18   18   25   28   33    33
i=5     0    0   8  10   15   18   23   26   28   33    36

Back-tracing from V[5][10], the optimal solution is { item 5, item 3, item 2 }.

Maximum profit is 36.
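The table-filling and back-tracing steps can be sketched together in Python (an illustrative sketch, applied to the example’s data):

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack. Returns (max profit, chosen 1-based item indices)."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if j - wi >= 0:
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]           # item i cannot fit
    # Back-trace the table to recover the optimal subset.
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:              # item i was included
            items.append(i)
            j -= weights[i - 1]                 # remaining capacity
    return V[n][W], items

profit, items = knapsack([4, 3, 5, 7, 2], [15, 10, 18, 12, 8], 10)
print(profit, sorted(items))  # → 36 [2, 3, 5]
```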
Efficiency of the 0/1 Knapsack algorithm
• The time efficiency and space efficiency of this algorithm are both in Θ(n · W).
• The time needed to find the items of an optimal solution set is in O(n).

Knapsack using Memory Functions

• The memory function technique combines the top-down and bottom-up approaches: it solves only the sub-problems that are actually needed (like top-down), and records their solutions in a table (like bottom-up).
• Memory functions are generally recursive in nature.
Knapsack using Memory Functions
Tasks (DIY):
• Solve an instance of the knapsack problem by the dynamic programming algorithm.
• Solve an instance of the knapsack problem by the memory function algorithm.
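A minimal Python sketch of the memory-function version: entries initialized to −1 mean “not yet computed”, and each is computed at most once (illustrative, applied to the same example instance):

```python
def knapsack_memo(weights, values, W):
    """Memory-function (memoized, top-down) 0/1 knapsack. Returns max profit."""
    n = len(weights)
    # Row 0 and column 0 are base cases; -1 marks entries not yet computed.
    V = [[0 if i == 0 or j == 0 else -1 for j in range(W + 1)]
         for i in range(n + 1)]

    def mf(i, j):
        """Value of an optimal subset of the first i items with capacity j."""
        if V[i][j] < 0:                          # compute each entry only once
            if j < weights[i - 1]:
                V[i][j] = mf(i - 1, j)           # item i cannot fit
            else:
                V[i][j] = max(mf(i - 1, j),
                              values[i - 1] + mf(i - 1, j - weights[i - 1]))
        return V[i][j]

    return mf(n, W)

print(knapsack_memo([4, 3, 5, 7, 2], [15, 10, 18, 12, 8], 10))  # → 36
```

Unlike the bottom-up version, only the table entries actually reached by the recursion are filled in.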
Introduction to Warshall’s algorithm
Transitive Closure
Definition: The transitive closure of a directed graph with n vertices
can be defined as the n × n boolean matrix T = {tij }, in which the
element in the ith row and the jth column is 1 if there exists a nontrivial
path (i.e., directed path of a positive length) from the ith vertex to the jth
vertex; otherwise, tij is 0.
Introduction to Warshall’s algorithm
Transitive Closure
Example:

(a) Digraph. (b) Its adjacency matrix. (c) Its transitive closure.
Warshall’s algorithm works as follows:
• Warshall’s algorithm constructs the transitive closure through a series of n × n Boolean matrices:
R(0), ..., R(k−1), R(k), ..., R(n).
• Each of these matrices provides certain information about directed paths in the digraph.
Introduction to Warshall’s algorithm
Transitive Closure
• Specifically, the element R(k)[i, j] in the ith row and jth column of matrix R(k) (for i, j = 1, 2, ..., n and k = 0, 1, ..., n) is equal to 1 if and only if there exists a directed path of a positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
• R(0) is the Boolean matrix that contains the information about paths with no intermediate vertices — it is the adjacency matrix.
• R(1) contains the information about paths that can use the first vertex as an intermediate; it may contain more 1’s than R(0).
• R(2) contains the information about paths that can use the first two vertices as intermediates; it may contain more 1’s than R(1).
Transitive Closure
• R(n) contains the information about paths that can use all n vertices as intermediates; it may contain more 1’s than R(n−1).
• A path from vertex i to j exists via intermediate vertex k if R(k−1)[i, k] = 1 and R(k−1)[k, j] = 1; then R(k)[i, j] = 1, as shown in the figure below.

Figure: Rule for changing zeros to 1’s in Warshall’s algorithm.


Transitive Closure
The recurrence equation for generating the elements of matrix R(k) from the elements of matrix R(k−1) is:

R(k)[i, j] = R(k−1)[i, j] or ( R(k−1)[i, k] and R(k−1)[k, j] )

The above formula implies the following rules for applying Warshall’s algorithm:
1. If an element R[i, j] is 1 in R(k−1), it remains 1 in R(k).
2. If an element R[i, j] is 0 in R(k−1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1’s in R(k−1).
Pseudocode of Warshall’s algorithm

ALGORITHM Warshall(A[1..n, 1..n])
// Input: The adjacency matrix A of a digraph with n vertices
// Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
return R(n)
Introduction to Warshall’s algorithm

Application of Warshall’s algorithm to the digraph shown. Boldface 1’s mark the elements changed from 0 in R(k−1).
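The recurrence can be implemented directly in Python; updating the matrix in place is safe here because entries only ever change from 0 to 1. The 4-vertex digraph below is a hypothetical example:

```python
def warshall(A):
    """A: n x n Boolean adjacency matrix (lists of 0/1).
    Returns the transitive closure R(n)."""
    n = len(A)
    R = [row[:] for row in A]                 # R(0) is the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Hypothetical digraph on vertices 0..3 with edges 0->1, 1->3, 3->0
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 0, 0]]
T = warshall(A)
print(T)  # vertices 0, 1, 3 lie on a cycle, so each reaches all three; 2 reaches none
```

The three nested loops give the algorithm’s Θ(n³) running time.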
Dynamic Programming: Floyd’s algorithm
All-Pairs Shortest Paths problem
Find the shortest-path distances from each vertex to all other vertices of a given directed or undirected weighted graph.

Floyd’s algorithm for finding all-pairs shortest paths:
• It is applicable to both undirected and directed weighted graphs that do not contain a cycle of negative length.
• Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices:
D(0), ..., D(k−1), D(k), ..., D(n).
• Each of these matrices contains the lengths of shortest paths with certain constraints, i.e.:
• for k = 0, the matrix contains the lengths of shortest paths with no intermediate vertices;
• for k = 1, the matrix contains the lengths of shortest paths that may use the first vertex as an intermediate.
All-Pairs Shortest Paths problem
• The last matrix in the series, D(n), contains the lengths of the shortest paths among all paths that can use all n vertices as intermediates.

The recurrence equation for generating the elements of matrix D(k) from the elements of matrix D(k−1) is:

D(k)[i, j] = min{ D(k−1)[i, j], D(k−1)[i, k] + D(k−1)[k, j] }

That is, the element in row i and column j of the current distance matrix D(k−1) is replaced by the sum of the elements in the same row i and column k and in the same column j and row k if and only if that sum is smaller than its current value.
All-Pairs Shortest Paths problem

The efficiency of the algorithm is Θ(n³).

Application of Floyd’s algorithm to the digraph shown. Updated elements are shown in bold.
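A direct Python sketch of the recurrence. The weight matrix below is a hypothetical 4-vertex example, with ∞ marking missing edges and 0 on the diagonal:

```python
INF = float('inf')

def floyd(W):
    """W: n x n weight matrix with INF for missing edges and 0 on the diagonal.
    Returns the all-pairs shortest-distance matrix D(n)."""
    n = len(W)
    D = [row[:] for row in W]                 # D(0) is the weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # D(k)[i,j] = min(D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j])
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Hypothetical weighted digraph on 4 vertices
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
D = floyd(W)
print(D[0])  # → [0, 10, 3, 4]
```

Like Warshall’s algorithm, the three nested loops give a Θ(n³) running time.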
Solve using Floyd’s Algorithm
Solution:
THANK YOU
