Cse-IV-Design and Analysis of Algorithms (10cs43) - Notes
UNIT 1
INTRODUCTION
An algorithm is composed of a finite set of steps, each of which may require one or more
operations. The possibility of a computer carrying out these operations necessitates that certain
constraints be placed on the type of operations an algorithm can include.
The fourth criterion for algorithms we assume in this book is that they terminate after a finite
number of operations.
Criterion 5 requires that each operation be effective: each step must be such that it can, at
least in principle, be done by a person using pencil and paper in a finite amount of time. Performing
arithmetic on integers is an example of an effective operation, but arithmetic with real numbers is
not, since some values may be expressible only by an infinitely long decimal expansion. Adding two
such numbers would violate the effectiveness property.
Algorithms that are definite and effective are also called computational procedures.
Example:
Problem: GCD of two numbers m, n
Input specification: two inputs, nonnegative, not both zero
Euclid's algorithm:
gcd(m, n) = gcd(n, m mod n)
Repeat until m mod n = 0, since gcd(m, 0) = m
Another way of representing the same algorithm:
Euclid's algorithm
Step 1: If n = 0, return the value of m and stop; else proceed to Step 2
Step 2: Divide m by n and assign the value of the remainder to r
Step 3: Assign the value of n to m and the value of r to n; go to Step 1
Another algorithm to solve the same problem (consecutive integer checking):
Step 1: Assign the value of min(m, n) to t
Step 2: Divide m by t; if the remainder is 0, go to Step 3, else go to Step 4
Step 3: Divide n by t; if the remainder is 0, return the value of t as the answer and
stop, otherwise proceed to Step 4
Step 4: Decrease the value of t by 1; go to Step 2
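For concreteness, a minimal Python sketch of both procedures (the function names euclid_gcd and cic_gcd are illustrative, and cic_gcd assumes both inputs are positive):

def euclid_gcd(m, n):
    # gcd(m, n) = gcd(n, m mod n); gcd(m, 0) = m
    while n != 0:
        m, n = n, m % n
    return m

def cic_gcd(m, n):
    # consecutive integer checking: try t = min(m, n), t - 1, ... until t divides both
    t = min(m, n)
    while m % t != 0 or n % t != 0:
        t -= 1
    return t

print(euclid_gcd(60, 24), cic_gcd(60, 24))   # both print 12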
Results may not be indicative of the running time on other inputs not included in
the experiment.
The total time taken by the algorithm is expressed as a function of its input size.
The input size is typically measured by the number of input elements.
Units for Measuring Running Time: Count the number of times the algorithm's basic operation
is executed. (Basic operation: the most important operation of the algorithm, the operation
contributing the most to the total running time. It is usually the most time-consuming operation
in the algorithm's innermost loop.)
Consider the following example:
ALGORITHM sum_of_numbers ( A[0 … n-1] )
// Functionality: finds the sum of n numbers
// Input: array A of n numbers
// Output: sum of the n numbers
i ← 0
sum ← 0
while i < n do
    sum ← sum + A[i]
    i ← i + 1
return sum
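A direct Python equivalent of this pseudocode (the addition inside the loop is the basic operation and executes exactly n times):

def sum_of_numbers(a):
    # returns the sum of the numbers in list a
    i = 0
    total = 0
    while i < len(a):
        total = total + a[i]   # basic operation, executed n times
        i = i + 1
    return total

print(sum_of_numbers([3, 1, 4, 1, 5]))   # 14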
Worst-case efficiency: Efficiency (number of times the basic operation will be executed) for
the worst case input of size n. i.e. The algorithm runs the longest among all possible inputs
of size n.
Best-case efficiency: Efficiency (number of times the basic operation will be executed) for
the best case input of size n. i.e. The algorithm runs the fastest among all possible inputs
of size n.
Average-case efficiency: Average time taken (number of times the basic operation will be
executed) to solve all the possible instances (random) of the input. NOTE: NOT the
average of worst and best case
Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors and small
input sizes. Three notations are used to compare orders of growth of an algorithm's basic operation
count: O (big oh), Ω (big omega), and Θ (big theta).
Big Oh- O notation
Definition:
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0
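For example, 100n + 5 ∈ O(n): for all n ≥ 5 we have 100n + 5 ≤ 100n + n = 101n, so the definition is satisfied with c = 101 and n0 = 5.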
Basic asymptotic efficiency classes (in increasing order of growth, from fast to slow):
1         constant
log n     logarithmic
n         linear
n log n   n-log-n
n^2       quadratic
n^3       cubic
2^n       exponential
n!        factorial
Analysis:
1. Input size: number of elements = n (size of the array)
2. Basic operation:
a) Comparison
b) Assignment
3. NO best, worst, average cases.
4. Let C(n) denote the number of comparisons: the algorithm makes one comparison on
each execution of the loop, which is repeated for each value of the loop's variable
i between 1 and n - 1.
Analysis
1. Input size: number of elements = n (size of the array)
2. Basic operation: Comparison
3. Best, worst and average cases EXIST.
The worst-case input is an array giving the largest number of comparisons:
an array in which the last two elements are the only pair of equal elements.
4. Let C(n) denote the number of comparisons in the worst case: the algorithm makes one
comparison for each repetition of the innermost loop, i.e., for each value of the
loop's variable j between its limits i + 1 and n - 1; and this is repeated for each
value of the outer loop, i.e., for each value of the loop's variable i between its
limits 0 and n - 2.
M(n) = M(n - 1) + 1 for n > 0,  M(0) = 0 (initial condition)

Solving by backward substitution:
M(n) = M(n - 1) + 1
     = [ M(n - 2) + 1 ] + 1 = M(n - 2) + 2
     = [ M(n - 3) + 1 ] + 2 = M(n - 3) + 3
     = …
     = M(n - i) + i = … = M(0) + n = n
M(n) = n ∈ Θ(n)
Example: Find the number of binary digits in the binary representation of a positive
decimal integer
ALGORITHM BinRec (n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n = 1
    return 1
else
    return BinRec( ⌊n/2⌋ ) + 1
Analysis:
1. Input size: given number = n
2. Basic operation: addition
3. NO best, worst, average cases.
4. Let A(n) denote the number of additions.
A(n) = A( ⌊n/2⌋ ) + 1   for n > 1 (recurrence)
A(1) = 0                (initial condition)
For n = 2^k the recurrence becomes A(2^k) = A(2^(k-1)) + 1, and backward substitution gives
A(2^k) = A(2^(k-i)) + i
When i = k, we have
A(2^k) = A(2^(k-k)) + k = A(2^0) + k
Since A(2^0) = 0,
A(2^k) = k
Since n = 2^k, hence k = log2 n and
A(n) = log2 n ∈ Θ(log n)
Selection Sort
ALGORITHM SelectionSort(A[0..n - 1])
//The algorithm sorts a given array by selection sort
//Input: An array A[0..n - 1] of orderable elements
//Output: Array A[0..n - 1] sorted in ascending order
for i ← 0 to n - 2 do
    min ← i
    for j ← i + 1 to n - 1 do
        if A[j] < A[min]  min ← j
    swap A[i] and A[min]
Example:
Thus, selection sort is a Θ(n²) algorithm on all inputs. The number of key swaps is only
Θ(n) or, more precisely, n - 1 (one for each repetition of the i loop). This property distinguishes
selection sort positively from many other sorting algorithms.
Bubble Sort
Compare adjacent elements of the list and exchange them if they are out of order. Then
repeat the process. By doing this repeatedly, we end up bubbling up the largest element to the last
position on the list.
ALGORITHM
BubbleSort(A[0..n - 1])
//The algorithm sorts array A[0..n - 1] by bubble sort
//Input: An array A[0..n - 1] of orderable elements
//Output: Array A[0..n - 1] sorted in ascending order
for i ← 0 to n - 2 do
    for j ← 0 to n - 2 - i do
        if A[j + 1] < A[j]
            swap A[j] and A[j + 1]
Example
The first 2 passes of bubble sort on the list 89, 45, 68, 90, 29, 34, 17. A new line is shown
after a swap of two elements is done. The elements to the right of the vertical bar are in their final
positions and are not considered in subsequent iterations of the algorithm
Bubble sort: analysis
Clearly, the outer loop runs n times. The only complexity in this analysis is the inner loop.
If we think about a single time the inner loop runs, we can get a simple bound by noting that it can
never loop more than n times. Since the outer loop makes the inner loop run n times, the
comparison cannot happen more than O(n²) times.
The number of key comparisons for the bubble sort version given above is the same for all
arrays of size n.
The number of key swaps depends on the input. For the worst case of decreasing arrays, it
is the same as the number of key comparisons.
Observation: if a pass through the list makes no exchanges, the list has been sorted and we
can stop the algorithm. Though this improved version runs faster on some inputs, it is still in O(n²) in the
worst and average cases. Bubble sort is not very good for a big set of inputs; however, bubble sort is
very simple to code.
General Lesson From the Brute Force Approach
A first application of the brute-force approach often results in an algorithm that can be
improved with a modest amount of effort.
Sequential search: compare successive elements of a given list with a
given search key until either a match is encountered (successful search) or the list is exhausted
without finding a match (unsuccessful search).
Brute-force string matching: find a pattern P of length m in a text T of length n.
1. Pattern: 001011
   Text: 10010101101001100101111010
2. Pattern: happy
   Text: It is never too late to have a happy childhood
The algorithm almost always shifts the pattern after a single character comparison. In the
worst case, however, the algorithm may have to make all m comparisons before shifting the pattern, and this
can happen for each of the n - m + 1 tries. Thus, in the worst case, the algorithm is in Θ(nm).
UNIT - 2
DIVIDE & CONQUER
2.1 Divide and Conquer
Definition:
Divide & conquer is a general algorithm design strategy with a general plan as follows:
1. DIVIDE:
A problem's instance is divided into several smaller instances of the same
problem, ideally of about the same size.
2. RECUR:
Solve the sub-problems recursively.
3. CONQUER:
If necessary, the solutions obtained for the smaller instances are combined to get a
solution to the original instance.
NOTE:
The base case for the recursion is a sub-problem of constant size.
Advantages of Divide & Conquer technique:
For solving conceptually difficult problems like Tower Of Hanoi, divide &
conquer is a powerful tool
Results in efficient algorithms
Divide & conquer algorithms are adapted for execution on multi-processor
machines
Results in algorithms that use memory cache efficiently.
Limitations of divide & conquer technique:
Recursion is slow
For a very simple problem it may be more complicated than an iterative approach
(example: adding n numbers).
Master theorem
Theorem: If f(n) ∈ Θ(n^d) with d ≥ 0 in the recurrence equation
T(n) = aT(n/b) + f(n),
then
T(n) ∈ Θ(n^d)            if a < b^d
T(n) ∈ Θ(n^d log n)      if a = b^d
T(n) ∈ Θ(n^(log_b a))    if a > b^d
Example:
Let T(n) = 2T(n/2) + 1, solve using master theorem.
Solution:
Here: a = 2, b = 2, f(n) = Θ(1), d = 0
Therefore:
a > b^d, i.e., 2 > 2^0
Case 3 of the master theorem holds. Therefore:
T(n) ∈ Θ(n^(log_b a)) = Θ(n^(log_2 2)) = Θ(n)
Binary Search
Algorithm:
Binary search can be implemented as a recursive or a non-recursive algorithm.
ALGORITHM BinSrch ( A[0 … n-1], key )
//implements non-recursive binary search
//i/p: Array A in ascending order, key k
//o/p: Returns position of the key if matched, else -1
l ← 0
r ← n - 1
while l ≤ r do
    m ← ⌊( l + r ) / 2⌋
    if key = A[m]
        return m
    else
        if key < A[m]
            r ← m - 1
        else
            l ← m + 1
return -1
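A minimal Python sketch of the same non-recursive binary search (the sample call is illustrative):

def bin_search(a, key):
    # a must be sorted in ascending order; returns index of key or -1
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1
        else:
            l = m + 1
    return -1

print(bin_search([3, 14, 27, 31, 39, 42, 55, 70], 31))   # prints 3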
Analysis:
Advantages:
Limitations:
Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.
Features:
Is a comparison based algorithm
Is a stable algorithm
Is a perfect example of divide & conquer algorithm design strategy
It was invented by John Von Neumann
Algorithm:
ALGORITHM Mergesort ( A[0 … n-1] )
//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order
if n > 1
    copy A[0 … ⌊n/2⌋ - 1] to B[0 … ⌊n/2⌋ - 1]
    copy A[⌊n/2⌋ … n - 1] to C[0 … ⌈n/2⌉ - 1]
    Mergesort ( B[0 … ⌊n/2⌋ - 1] )
    Mergesort ( C[0 … ⌈n/2⌉ - 1] )
    Merge ( B, C, A )
ALGORITHM Merge ( B[0 … p-1], C[0 … q-1], A[0 … p+q-1] )
//merges two sorted arrays into one sorted array
//i/p: arrays B, C, both sorted
//o/p: sorted array A of the elements from B & C
i ← 0
j ← 0
k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]
        i ← i + 1
    else
        A[k] ← C[j]
        j ← j + 1
    k ← k + 1
if i = p
    copy C[ j … q-1 ] to A[ k … p+q-1 ]
else
    copy B[ i … p-1 ] to A[ k … p+q-1 ]
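A minimal Python sketch of the two procedures above (merge_sort and merge are illustrative names; Python lists play the role of the arrays A, B, C):

def merge_sort(a):
    # sorts list a in place by recursive mergesort
    if len(a) > 1:
        mid = len(a) // 2
        b, c = a[:mid], a[mid:]
        merge_sort(b)
        merge_sort(c)
        merge(b, c, a)

def merge(b, c, a):
    # merges sorted lists b and c back into a
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    # copy the remaining tail of whichever list is not exhausted
    a[k:] = b[i:] if i < len(b) else c[j:]

lst = [6, 3, 7, 8, 2, 4, 5, 1]
merge_sort(lst)
print(lst)   # [1, 2, 3, 4, 5, 6, 7, 8]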
Example:
Apply merge sort for the following list of elements: 6, 3, 7, 8, 2, 4, 5, 1
Analysis:
C(n) ∈ Θ(n log n)
Advantages:
Limitations:
Quick Sort
The Partition procedure rearranges the sub-array around a pivot element and returns the split position j.
Example: Sort by quick sort the following list: 5, 3, 1, 9, 8, 2, 4, 7, show recursion tree.
Analysis:
C(1) = 1
Best case:
C(n) = 2C(n/2) + Θ(n), which gives C(n) ∈ Θ(n log n)
NOTE:
The quick sort efficiency in the average case is Θ(n log n) on random input.
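Since the Partition pseudocode is not reproduced above, the following Python sketch assumes a Hoare-style partition around the first element, which is one common way to realize quick sort:

def quick_sort(a, l=0, r=None):
    # sorts a[l..r] in place
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)       # split position
        quick_sort(a, l, s - 1)
        quick_sort(a, s + 1, r)

def partition(a, l, r):
    # partitions a[l..r] around the pivot a[l]; returns the split position j
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:   # scan from the left for an element >= pivot
            i += 1
        j -= 1
        while a[j] > p:              # scan from the right for an element <= pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]          # put the pivot into its final position
    return j

lst = [5, 3, 1, 9, 8, 2, 4, 7]
quick_sort(lst)
print(lst)   # [1, 2, 3, 4, 5, 7, 8, 9]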
UNIT - 3
THE GREEDY METHOD
3.1 The General Method
Definition:
Greedy technique is a general algorithm design strategy, built on following elements:
configurations: the different choices or values to find
objective function: a quantity over configurations to be either maximized or minimized
The method:
NOTE:
Greedy method works best when applied to problems with the greedy-choice
property
A globally-optimal solution can always be found by a series of local
improvements from a starting configuration.
Optimal solutions:
Change making
Minimum Spanning Tree (MST)
Single-source shortest paths
Huffman codes
Approximations:
Traveling Salesman Problem (TSP)
Fractional Knapsack problem
0-1: If item j is removed from an optimal packing, the remaining packing is an optimal
packing with weight at most W-wj
Fractional knapsack (greedy strategy): take as much as possible of the item with the greatest value per
pound. If the supply of that item is exhausted and there is still more room, take as much as
possible of the item with the next greatest value per pound, and so forth, until there is no more
room.
For the 0-1 knapsack problem this greedy strategy can fail: it may be unable to fill the knapsack to
capacity, and the empty space lowers the effective value per pound of the packing.
We must compare the solution to the sub-problem in which the item is included with the
solution to the sub-problem in which the item is excluded before we can make the choice.
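A minimal Python sketch of the greedy strategy for the fractional knapsack problem (the items are given here as illustrative (value, weight) pairs):

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; returns the total value taken
    # greedy: take items in nonincreasing order of value per unit weight
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)          # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0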
A feasible solution is a subset of jobs J such that each job is completed by its deadline.
Consider the jobs in nonincreasing order of profits, subject to the constraint that the resulting
job sequence J is a feasible solution.
[Figure: an instance with n = 4 jobs, their profits p1 … p4 (with p2 = 10, p3 = 15) and deadlines
d1 … d4 = 2, 1, 2, 1.]
J = { 1} is a feasible one
J = { 1, 4} is a feasible one with processing sequence ( 4,1)
J = { 1, 3, 4} is not feasible
J = { 1, 2, 4} is not feasible
J = { 1, 4} is optimal
Theorem: J is a feasible solution iff the jobs in J can be processed in the order σ of nondecreasing
deadlines without violating any deadline.
Proof:
By definition of a feasible solution, if the jobs in J can be processed in the order σ without
violating any deadline then J is a feasible solution.
So we only have to prove that if J is a feasible solution, then σ represents a possible order in which
the jobs may be processed.
Let σ' = (r1, r2, …, rk) be any feasible order for J and σ = (i1, i2, …, ik) the order of nondecreasing
deadlines. Assume σ' ≠ σ. Then let a be the least index in which σ' and σ differ, i.e., a is such that
ra ≠ ia. Let rb = ia; then b > a (because for all indices j less than a, rj = ij).
σ  = ( i1, i2, …, ia, …, ib, …, ik )
σ' = ( r1, r2, …, ra, …, rb, …, rk )
Since d(rb) = d(ia) ≤ d(ra), if we interchange ra and rb in σ', the resulting permutation
σ'' = (s1, …, sk) represents a feasible order in which the least index where σ'' and σ differ is
incremented by one.
Continuing in this way, σ' can be transformed into σ without violating any deadline.
// J may be represented by a one-dimensional array //
High-level description of the greedy method:
Procedure GREEDY-JOB (D, J, n)
// J is the set of jobs that can be completed by their deadlines //
J ← { 1 }
for i ← 2 to n do
    if all jobs in J ∪ { i } can be completed by their deadlines
    then J ← J ∪ { i }
    end if
repeat
end GREEDY-JOB
Procedure JS(D, J, n, k)
// D(i) ≥ 1, 1 ≤ i ≤ n are the deadlines //
// the jobs are ordered such that p1 ≥ p2 ≥ … ≥ pn //
// in the optimal solution, D(J(i)) ≤ D(J(i+1)), 1 ≤ i < k //
integer D(0:n), J(0:n), i, k, n, r
D(0) ← J(0) ← 0 // J(0) is a fictitious job with D(0) = 0 //
k ← 1; J(1) ← 1 // job one is inserted into J //
for i ← 2 to n do // consider jobs in nonincreasing order of pi //
    // find the position of i and check feasibility of insertion //
    r ← k // r and k are indices for existing jobs in J //
    // find r such that i can be inserted after r //
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        // job J(r) can be processed after i and //
        // the deadline of job J(r) is not exactly r //
        r ← r - 1 // consider whether job J(r-1) can be processed after i //
    repeat
    if D(J(r)) ≤ D(i) and D(i) > r then
        // the new job i can come after existing job J(r); insert i into J at position r + 1 //
        for l ← k to r + 1 by -1 do
            J(l + 1) ← J(l) // shift jobs (r + 1) to k right by one position //
        repeat
        J(r + 1) ← i; k ← k + 1
    end if
repeat
end JS
Let n be the number of jobs and s be the number of jobs included in the solution.
The time needed by the algorithm is O(sn); since s ≤ n, the worst-case time is O(n²).
If di = n - i + 1 for 1 ≤ i ≤ n, JS takes Θ(n²) time.
D and J need Θ(s) amount of space.
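A minimal Python sketch of greedy job sequencing with deadlines; it uses a simple latest-free-slot scan rather than the JS procedure's array shifting, and the sample profits and deadlines are illustrative:

def job_sequencing(profits, deadlines):
    # jobs are numbered 1..n; returns (scheduled jobs in slot order, total profit)
    n = len(profits)
    order = sorted(range(n), key=lambda j: profits[j], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)          # slot[t] = job scheduled in time slot t
    total = 0
    for j in order:                      # jobs in nonincreasing order of profit
        # place job j in the latest free slot not after its deadline
        for t in range(min(deadlines[j], max_d), 0, -1):
            if slot[t] is None:
                slot[t] = j + 1
                total += profits[j]
                break
    selected = [slot[t] for t in range(1, max_d + 1) if slot[t] is not None]
    return selected, total

print(job_sequencing([30, 10, 15, 25], [2, 1, 2, 1]))   # ([4, 1], 55)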
[Figure: a weighted graph on vertices a, b, c, d and three of its spanning trees T1, T2, T3, with
Weight(T1) = 9, Weight(T2) = 8, Weight(T3) = 6; T3 has the minimum weight of the three.]
Minimum Spanning Tree (MST)
Definition:
MST of a weighted, connected graph G is defined as: A spanning tree of G with
minimum total weight.
MST Applications:
Network design.
Telephone, electrical, hydraulic, TV cable, computer, road
Approximation algorithms for NP-hard problems.
Traveling salesperson problem, Steiner tree
Cluster analysis.
Reducing data storage in sequencing amino acids in a protein
Learning salient features for real-time face verification
Auto config protocol for Ethernet bridging to avoid cycles in a network, etc
Fringe edge: an edge with one vertex in the partially constructed tree Ti and
the other vertex not in it.
Unseen edge: an edge with both vertices not in Ti.
Algorithm:
ALGORITHM Prim (G)
//Prim's algorithm for constructing a MST
//Input: A weighted connected graph G = { V, E }
//Output: ET, the set of edges composing a MST of G
// the set of tree vertices can be initialized with any vertex
VT ← { v0 }
ET ← ∅
for i ← 1 to |V| - 1 do
    Find a minimum-weight edge e* = (v*, u*) among all the edges (v, u) such
    that v is in VT and u is in V - VT
    VT ← VT ∪ { u* }
    ET ← ET ∪ { e* }
return ET
The method:
STEP 1: Start with a tree, T0, consisting of one vertex
STEP 2: Grow tree one vertex/edge at a time
Construct a series of expanding sub-trees T1, T2, …, Tn-1.
At each stage construct Ti + 1 from Ti by adding the minimum weight edge
connecting a vertex in tree (Ti) to one vertex not yet in tree, choose from
fringe edges (this is the greedy step!)
Algorithm stops when all vertices are included
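A minimal Python sketch of Prim's algorithm using a plain scan over the fringe edges instead of a priority queue; the graph g below assumes the edge list used by the example that follows:

def prim_mst(graph, start):
    # graph: dict vertex -> dict of neighbor -> weight (undirected)
    # returns a list of MST edges (u, v, weight)
    in_tree = {start}
    mst_edges = []
    while len(in_tree) < len(graph):
        # pick the minimum-weight fringe edge (one end in the tree, the other outside)
        best = None
        for u in in_tree:
            for v, w in graph[u].items():
                if v not in in_tree and (best is None or w < best[2]):
                    best = (u, v, w)
        if best is None:
            break                        # graph is not connected
        mst_edges.append(best)
        in_tree.add(best[1])
    return mst_edges

g = {'a': {'b': 3, 'e': 6, 'f': 5}, 'b': {'a': 3, 'c': 1, 'f': 4},
     'c': {'b': 1, 'd': 6, 'f': 4}, 'd': {'c': 6, 'e': 8, 'f': 5},
     'e': {'a': 6, 'd': 8, 'f': 2}, 'f': {'a': 5, 'b': 4, 'c': 4, 'd': 5, 'e': 2}}
print(prim_mst(g, 'a'))   # a minimum spanning tree of total weight 15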
Example:
Apply Prim's algorithm to the following graph to find a MST.
[Figure: weighted connected graph on vertices a-f with edges ab = 3, ae = 6, af = 5, bc = 1, bf = 4,
cd = 6, cf = 4, de = 8, df = 5, ef = 2.]
Solution:
Tree vertices        Remaining vertices (fringe)
a ( -, - )           b(a, 3)  c(-, ∞)  d(-, ∞)  e(a, 6)  f(a, 5)
b ( a, 3 )           c(b, 1)  d(-, ∞)  e(a, 6)  f(b, 4)
c ( b, 1 )           d(c, 6)  e(a, 6)  f(b, 4)
f ( b, 4 )           d(f, 5)  e(f, 2)
e ( f, 2 )           d(f, 5)
d ( f, 5 )
[Figures of the partially constructed tree at each step are omitted.]
Efficiency:
Efficiency of Prim's algorithm depends on the data structure used to store the priority queue.
Unordered array: Θ(n²)
Binary heap: O(m log n)
Min-heap: for a graph with n nodes and m edges: O((n + m) log n)
Conclusion:
The method:
STEP 1: Sort the edges by increasing weight
STEP 2: Start with a forest having |V| number of trees.
STEP 3: The number of trees is reduced by ONE at every inclusion of an edge
At each stage:
Among the edges which are not yet included, select the one with minimum
weight AND which does not form a cycle.
the edge will reduce the number of trees by one by combining two trees of
the forest
Algorithm stops when |V| -1 edges are included in the MST i.e : when the number of
trees in the forest is reduced to ONE.
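A minimal Python sketch of Kruskal's algorithm using a simple union-find structure to detect cycles (the edge list matches the example below):

def kruskal_mst(vertices, edges):
    # edges: list of (weight, u, v); returns the list of MST edges (u, v, weight)
    parent = {v: v for v in vertices}

    def find(x):                         # root of x's tree, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):        # STEP 1: edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # the edge joins two different trees: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
        if len(mst) == len(vertices) - 1:
            break                        # |V| - 1 edges included: stop
    return mst

edges = [(3, 'a', 'b'), (5, 'a', 'f'), (6, 'a', 'e'), (1, 'b', 'c'), (4, 'b', 'f'),
         (4, 'c', 'f'), (6, 'c', 'd'), (5, 'd', 'f'), (8, 'd', 'e'), (2, 'e', 'f')]
print(kruskal_mst('abcdef', edges))      # 5 edges of total weight 15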
Example:
Apply Kruskal's algorithm to the following graph to find a MST.
[Figure: the same weighted connected graph on vertices a-f as in the Prim's example; its edge list
is given below.]
Solution:
The list of edges is:

Edge:    ab  af  ae  bc  bf  cf  cd  df  de  ef
Weight:   3   5   6   1   4   4   6   5   8   2

Sorted in increasing order of weight:

Edge:    bc  ef  ab  bf  cf  af  df  ae  cd  de
Weight:   1   2   3   4   4   5   5   6   6   8
Processing the sorted edges:

Edge:              bc   ef   ab   bf   cf   af   df
Weight:             1    2    3    4    4    5    5
Insertion status:  YES  YES  YES  YES  NO   NO   YES
Insertion order:    1    2    3    4    -    -    5

After the fifth insertion the forest has been reduced to a single tree containing all |V| = 6 vertices,
so the algorithm stops. The MST consists of the edges bc, ef, ab, bf, df with total weight 15.
[Figures of the growing forest at each step are omitted.]
Efficiency:
Efficiency of Kruskal's algorithm is dominated by the time needed for sorting the edge
weights of the given graph.
With an efficient sorting algorithm: Efficiency: O(|E| log |E|)
Conclusion:
Algorithm
ALGORITHM Dijkstra(G, s)
//Input: Weighted connected graph G and source vertex s
//Output: The length Dv of a shortest path from s to v and its penultimate vertex Pv for
//        every vertex v in V
//initialize vertex priorities in the priority queue
Initialize (Q)
for every vertex v in V do
    Dv ← ∞;  Pv ← null // Pv is the parent of v
    insert(Q, v, Dv) //initialize vertex priority in the priority queue
Ds ← 0
//update priority of s with Ds, making Ds the minimum
Decrease(Q, s, Ds)
VT ← ∅
for i ← 0 to |V| - 1 do
    u* ← DeleteMin(Q)
    //expanding the tree, choosing the locally best vertex
    VT ← VT ∪ { u* }
    for every vertex u in V - VT that is adjacent to u* do
        if Du* + w(u*, u) < Du
            Du ← Du* + w(u*, u);  Pu ← u*
            Decrease(Q, u, Du)
The method
Dijkstra's algorithm solves the single-source shortest path problem in 2 stages.
Stage 1: A greedy algorithm computes the shortest distance from the source to all other
nodes in the graph and saves them in a data structure.
Stage 2: Uses the data structure to report a shortest path from the source to any vertex v.
At each step, and for each vertex x, keep track of a distance D(x)
and a directed path P(x) from the root to vertex x of length D(x).
Scan first from the root and take initial paths P(r, x) = (r, x) with
D(x) = w(rx) when rx is an edge,
D(x) = ∞ when rx is not an edge.
For each temporary vertex y distinct from x, set
D(y) = min{ D(y), D(x) + w(xy) }
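A minimal Python sketch of Dijkstra's algorithm using Python's heapq module as the priority queue (the graph g assumes the same edge list as the Prim's example):

import heapq

def dijkstra(graph, source):
    # graph: dict vertex -> dict of neighbor -> nonnegative weight
    # returns (D, P): shortest distances and penultimate vertices
    D = {v: float('inf') for v in graph}
    P = {v: None for v in graph}
    D[source] = 0
    pq = [(0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)         # locally best (closest) unprocessed vertex
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():
            if d + w < D[v]:             # relax edge (u, v)
                D[v] = d + w
                P[v] = u
                heapq.heappush(pq, (D[v], v))
    return D, P

g = {'a': {'b': 3, 'e': 6, 'f': 5}, 'b': {'a': 3, 'c': 1, 'f': 4},
     'c': {'b': 1, 'd': 6, 'f': 4}, 'd': {'c': 6, 'e': 8, 'f': 5},
     'e': {'a': 6, 'd': 8, 'f': 2}, 'f': {'a': 5, 'b': 4, 'c': 4, 'd': 5, 'e': 2}}
D, P = dijkstra(g, 'a')
print(D)   # {'a': 0, 'b': 3, 'c': 4, 'd': 10, 'e': 6, 'f': 5}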
Example:
Apply Dijkstra's algorithm to find single-source shortest paths with vertex a as the
source.
[Figure: the same weighted graph on vertices a-f as in the Prim's example.]
Solution:
Length Dv of shortest path from source (s) to other vertices v and Penultimate vertex Pv
for every vertex v in V:
Da = 0,   Pa = null
Db = ∞,   Pb = null
Dc = ∞,   Pc = null
Dd = ∞,   Pd = null
De = ∞,   Pe = null
Df = ∞,   Pf = null
Tree vertices (in order of inclusion, each shown with its penultimate vertex and distance):
a(-, 0),  b(a, 3),  c(b, 4),  f(a, 5),  e(a, 6),  d(c, 10)

Resulting shortest paths and their lengths from source a:
Da = 0     Pa = a
Db = 3     Pb = [ a, b ]
Dc = 4     Pc = [ a, b, c ]
Dd = 10    Pd = [ a, b, c, d ]
De = 6     Pe = [ a, e ]
Df = 5     Pf = [ a, f ]

[Figures of the intermediate shortest-path trees are omitted.]
Conclusion:
UNIT - 4
Dynamic Programming
4.1 The General Method
Definition
Dynamic programming (DP) is a general algorithm design technique for solving
problems with overlapping sub-problems. This technique was invented by the American
mathematician Richard Bellman in the 1950s.
Key Idea
The key idea is to save answers of overlapping smaller sub-problems to avoid recomputation.
Dynamic Programming
F(n) = 0                    if n = 0
F(n) = 1                    if n = 1
F(n) = F(n-1) + F(n-2)      if n > 1
Algorithm F(n)
// Computes the nth Fibonacci number recursively by using its definition
// Input: A non-negative integer n
// Output: The nth Fibonacci number
if n == 0 || n == 1 then
    return n
else
    return F(n-1) + F(n-2)
Algorithm F(n): Analysis
Is too expensive as it has repeated calculation of smaller Fibonacci numbers.
Exponential order of growth.
[Figure: recursion tree for F(n): F(n) calls F(n-1) and F(n-2); F(n-1) calls F(n-2) and F(n-3);
F(n-2) calls F(n-3) and F(n-4); the same values are recomputed many times.]
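A minimal Python sketch of the dynamic programming (bottom-up) computation of F(n), which avoids the repeated calculations shown in the recursion tree:

def fib(n):
    # bottom-up DP: each F(i) is computed exactly once
    if n <= 1:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]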
Directed Graph: A graph whose every edge is directed is called directed graph
OR digraph
Adjacency matrix: The adjacency matrix A = {aij} of a directed graph is the
boolean matrix that has
o 1 - if there is a directed edge from ith vertex to the jth vertex
o 0 - Otherwise
Transitive Closure: the transitive closure of a directed graph with n vertices can be
defined as the n-by-n matrix T = {tij}, in which the element in the ith row (1 ≤ i ≤ n)
and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a
directed path of positive length) from the ith vertex to the jth vertex; otherwise,
tij is 0.
The transitive closure provides reachability information about a digraph.
Algorithm:
Algorithm Warshall(A[1..n, 1..n])
// Computes the transitive closure matrix
// Input: Adjacency matrix A
// Output: Transitive closure matrix R
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k-1)[i, j] OR ( R(k-1)[i, k] AND R(k-1)[k, j] )
return R(n)
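A minimal Python sketch of Warshall's algorithm using 0-indexed lists; the sample matrix is the R(0) of the example below:

def warshall(adj):
    # adj: n x n boolean (0/1) adjacency matrix; returns the transitive closure
    n = len(adj)
    r = [row[:] for row in adj]                    # R(0) <- A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

a = [[0, 1, 0, 0],    # A -> B
     [0, 0, 0, 1],    # B -> D
     [1, 0, 0, 0],    # C -> A
     [0, 1, 0, 0]]    # D -> B
for row in warshall(a):
    print(row)
# [0, 1, 0, 1]
# [0, 1, 0, 1]
# [1, 1, 0, 1]
# [0, 1, 0, 1]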
Example: Find the transitive closure of the given digraph using Warshall's algorithm.
[Figure: digraph on vertices A, B, C, D with edges A→B, B→D, C→A, D→B, as encoded in R(0) below.]
Solution:
R(0) =
      A  B  C  D
A     0  1  0  0
B     0  0  0  1
C     1  0  0  0
D     0  1  0  0
k = 1 (vertex 1, i.e. A, can be an intermediate node):
R(1)[3,2] = R(0)[3,2] OR ( R(0)[3,1] AND R(0)[1,2] ) = 0 OR ( 1 AND 1 ) = 1

R(1) =
      A  B  C  D
A     0  1  0  0
B     0  0  0  1
C     1  1  0  0
D     0  1  0  0

k = 2 (vertices {1, 2} can be intermediate nodes):
R(2)[1,4] = R(1)[1,4] OR ( R(1)[1,2] AND R(1)[2,4] ) = 0 OR ( 1 AND 1 ) = 1
R(2)[3,4] = R(1)[3,4] OR ( R(1)[3,2] AND R(1)[2,4] ) = 0 OR ( 1 AND 1 ) = 1
R(2)[4,4] = R(1)[4,4] OR ( R(1)[4,2] AND R(1)[2,4] ) = 0 OR ( 1 AND 1 ) = 1

R(2) =
      A  B  C  D
A     0  1  0  1
B     0  0  0  1
C     1  1  0  1
D     0  1  0  1

k = 3 (vertices {1, 2, 3} can be intermediate nodes): NO CHANGE, since no vertex has an edge into C.

R(3) = R(2)

k = 4 (vertices {1, 2, 3, 4} can be intermediate nodes):
R(4)[2,2] = R(3)[2,2] OR ( R(3)[2,4] AND R(3)[4,2] ) = 0 OR ( 1 AND 1 ) = 1

R(4) =
      A  B  C  D
A     0  1  0  1
B     0  1  0  1
C     1  1  0  1
D     0  1  0  1

R(4) is the TRANSITIVE CLOSURE of the given digraph.
Efficiency: Θ(n³), because of the three nested loops over the n vertices.
Weighted Graph: Each edge has a weight (associated numerical value). Edge
weights may represent costs, distance/lengths, capacities, etc. depending on the
problem.
Weight matrix: W(i, j) is
o 0 if i = j
o ∞ if there is no edge between i and j
o the weight of the edge if there is an edge between i and j
Problem statement:
Given a weighted graph G(V, E), the all-pairs shortest-paths problem is to find the
shortest path between every pair of vertices (vi, vj) ∈ V.
Solution:
A number of algorithms are known for solving All pairs shortest path problem
Matrix multiplication based algorithm
Dijkstra's algorithm
Bellman-Ford algorithm
Floyd's algorithm
Algorithm:
Algorithm Floyd(W[1..n, 1..n])
// Implements Floyd's algorithm
// Input: Weight matrix W
// Output: Distance matrix of shortest path lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }
return D
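A minimal Python sketch of Floyd's algorithm (0-indexed, with float('inf') playing the role of ∞); the sample matrix is the D(0) of the example below:

INF = float('inf')

def floyd(w):
    # w: n x n weight matrix (INF where there is no edge); returns the distance matrix
    n = len(w)
    d = [row[:] for row in w]                      # D <- W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

w = [[0,   4,   INF],    # A
     [2,   0,   3  ],    # B
     [5,   INF, 0  ]]    # C
for row in floyd(w):
    print(row)
# [0, 4, 7]
# [2, 0, 3]
# [5, 9, 0]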
Example:
Find all-pairs shortest paths for the given weighted connected graph using Floyd's
algorithm.
[Figure: digraph on vertices A, B, C with edges A→B = 4, B→A = 2, B→C = 3, C→A = 5, as encoded in
D(0) below.]
Solution:
D(0) =
      A  B  C
A     0  4  ∞
B     2  0  3
C     5  ∞  0
www.rejinpaul.com
D(1)
D(2)
k=1
Vertex 1
can be
intermediate
node
k=2
Vertex 1,2
can be
intermediate
nodes
k=3
Vertex 1,2,3
can be
intermediate
nodes
A
B
C
A
0
4
10CS43
B
2
0
3
C
5
A
B
C
A
0
4
B
2
0
3
C
5
9
0
A
B
C
A
0
4
7
B
2
0
3
C
5
9
0
A
B
C
A
0
4
7
B
2
0
3
C
5
9
0
D1[2,3]
= min { D0 [2,3],
D0 [2,1] + D0 [1,3] }
= min { , ( 4 + 5) }
=9
A
B
C
A
0
4
B
2
0
3
C
5
9
0
D2[3,1]
= min { D1 [3,1],
D1 [3,2] + D1 [2,1] }
= min { , ( 4 + 3) }
=7
A
B
C
A
0
4
7
B
2
0
3
C
5
9
0
NO Change
D(3)
A
B
C
A
0
4
7
B
2
0
3
C
5
9
0
Definition:
Given a set of n items of known weights w1, …, wn and values v1, …, vn and a knapsack
of capacity W, the problem is to find the most valuable subset of the items that fits into the
knapsack.
The knapsack problem is an OPTIMIZATION PROBLEM.
Step 1: Initial conditions:
V[ 0, j ] = 0   for j ≥ 0
V[ i, 0 ] = 0   for i ≥ 0
Step 2: Recursive step:
V[ i, j ] = max { V[ i-1, j ], vi + V[ i-1, j - wi ] }   if j - wi ≥ 0
V[ i, j ] = V[ i-1, j ]                                   if j - wi < 0
Step 3:
Bottom-up computation using iteration
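A minimal Python sketch of this bottom-up computation (the call uses the instance from the question below):

def knapsack(weights, values, W):
    # V[i][j] = best value using the first i items with capacity j
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j - weights[i - 1] >= 0:
                V[i][j] = max(V[i - 1][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
            else:
                V[i][j] = V[i - 1][j]
    return V

V = knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(V[4][5])   # 7, the maximal value for the instance below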
Question:
Apply the bottom-up dynamic programming algorithm to the following instance of the
knapsack problem, with capacity W = 5:

Item #        1   2   3   4
Weight (kg)   2   3   4   5
Value (Rs.)   3   4   5   6
Solution:
Using dynamic programming approach, we have:
Step-by-step calculation:

Step 1: Initial conditions:
V[ 0, j ] = 0 for j ≥ 0
V[ i, 0 ] = 0 for i ≥ 0

Step 2: w1 = 2, available knapsack capacity WA = 1.
w1 > WA, so CASE 1 holds: V[ i, j ] = V[ i-1, j ]
V[ 1, 1 ] = V[ 0, 1 ] = 0

Step 3: w1 = 2, available knapsack capacity WA = 2.
w1 = WA, so CASE 2 holds: V[ i, j ] = max { V[ i-1, j ], vi + V[ i-1, j - wi ] }
V[ 1, 2 ] = max { V[ 0, 2 ], 3 + V[ 0, 0 ] } = max { 0, 3 + 0 } = 3

Step 4: w1 = 2, available knapsack capacity WA = 3, 4, 5.
w1 < WA, so CASE 2 holds:
V[ 1, 3 ] = max { V[ 0, 3 ], 3 + V[ 0, 1 ] } = max { 0, 3 + 0 } = 3
(similarly, V[ 1, 4 ] = V[ 1, 5 ] = 3)

Step 5: w2 = 3, available knapsack capacity WA = 1.
w2 > WA, so CASE 1 holds:
V[ 2, 1 ] = V[ 1, 1 ] = 0
[The V[i, j] table after each of the above steps is omitted; the completed table is shown at the end of the trace.]
Step 6: w2 = 3, available knapsack capacity WA = 2.
w2 > WA, so CASE 1 holds: V[ i, j ] = V[ i-1, j ]
V[ 2, 2 ] = V[ 1, 2 ] = 3

Step 7: w2 = 3, available knapsack capacity WA = 3.
w2 = WA, so CASE 2 holds: V[ i, j ] = max { V[ i-1, j ], vi + V[ i-1, j - wi ] }
V[ 2, 3 ] = max { V[ 1, 3 ], 4 + V[ 1, 0 ] } = max { 3, 4 + 0 } = 4

Step 8: w2 = 3, available knapsack capacity WA = 4.
w2 < WA, so CASE 2 holds:
V[ 2, 4 ] = max { V[ 1, 4 ], 4 + V[ 1, 1 ] } = max { 3, 4 + 0 } = 4

Step 9: w2 = 3, available knapsack capacity WA = 5.
w2 < WA, so CASE 2 holds:
V[ 2, 5 ] = max { V[ 1, 5 ], 4 + V[ 1, 2 ] } = max { 3, 4 + 3 } = 7

Step 10: w3 = 4, available knapsack capacity WA = 1, 2, 3.
w3 > WA, so CASE 1 holds: V[ i, j ] = V[ i-1, j ]
V[ 3, 1 ] = 0,  V[ 3, 2 ] = 3,  V[ 3, 3 ] = 4
[The intermediate V[i, j] tables for steps 6 to 10 are omitted.]
Step 11: w3 = 4, available knapsack capacity WA = 4.
w3 = WA, so CASE 2 holds: V[ i, j ] = max { V[ i-1, j ], vi + V[ i-1, j - wi ] }
V[ 3, 4 ] = max { V[ 2, 4 ], 5 + V[ 2, 0 ] } = max { 4, 5 + 0 } = 5

Step 12: w3 = 4, available knapsack capacity WA = 5.
w3 < WA, so CASE 2 holds:
V[ 3, 5 ] = max { V[ 2, 5 ], 5 + V[ 2, 1 ] } = max { 7, 5 + 0 } = 7

Step 13: w4 = 5, available knapsack capacity WA = 1, 2, 3, 4.
w4 > WA, so CASE 1 holds: V[ i, j ] = V[ i-1, j ]
V[ 4, 1 ] = 0,  V[ 4, 2 ] = 3,  V[ 4, 3 ] = 4,  V[ 4, 4 ] = 5

Step 14: w4 = 5, available knapsack capacity WA = 5.
w4 = WA, so CASE 2 holds:
V[ 4, 5 ] = max { V[ 3, 5 ], 6 + V[ 3, 0 ] } = max { 7, 6 + 0 } = 7
The completed table:

V[i, j]   j=0   1   2   3   4   5
i=0         0   0   0   0   0   0
  1         0   0   3   3   3   3
  2         0   0   3   4   4   7
  3         0   0   3   4   5   7
  4         0   0   3   4   5   7

The maximal value is V[ 4, 5 ] = 7 (Rs.).
Finding the composition of the optimal subset by tracing back from V[ 4, 5 ]:

Step 1: V[ 4, 5 ] = V[ 3, 5 ] = 7, so ITEM 4 is NOT included in the subset.
Step 2: V[ 3, 5 ] = V[ 2, 5 ] = 7, so ITEM 3 is NOT included in the subset.
Step 3: V[ 2, 5 ] ≠ V[ 1, 5 ], so ITEM 2 is included in the subset; the remaining capacity is 5 - 3 = 2.
Step 4: V[ 1, 2 ] ≠ V[ 0, 2 ], so ITEM 1 is included in the subset.

The optimal subset is { item 1, item 2 }, with total weight 5 and total value 7 (Rs.).
Efficiency: Θ(nW) time and Θ(nW) space for the table.
Memory function
The method: combines the top-down (recursive) approach with the bottom-up table. The problem is
solved recursively, but the result of each sub-problem is recorded in a table the first time it is
computed and is simply retrieved from the table on every later call; only the sub-problems that are
actually needed are ever solved.
Algorithm:
Algorithm MFKnap( i, j )
if V[ i, j ] < 0            // value not yet computed
    if j < Weights[ i ]
        value ← MFKnap( i-1, j )
    else
        value ← max { MFKnap( i-1, j ),
                      Values[ i ] + MFKnap( i-1, j - Weights[ i ] ) }
    V[ i, j ] ← value
return V[ i, j ]
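A minimal Python sketch of the memory function algorithm, with the table initialization made explicit (row 0 and column 0 set to 0, all other entries to -1):

def mf_knapsack(weights, values, W):
    n = len(weights)
    # V[i][j] = -1 means "not yet computed"; row 0 and column 0 are 0
    V = [[0 if i == 0 or j == 0 else -1 for j in range(W + 1)] for i in range(n + 1)]

    def mfknap(i, j):
        if V[i][j] < 0:
            if j < weights[i - 1]:
                value = mfknap(i - 1, j)
            else:
                value = max(mfknap(i - 1, j),
                            values[i - 1] + mfknap(i - 1, j - weights[i - 1]))
            V[i][j] = value
        return V[i][j]

    return mfknap(n, W)

print(mf_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # 7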
Example:
Apply the memory function method to the following instance of the knapsack problem, with
capacity W = 5:

Item #        1   2   3   4
Weight (kg)   2   3   4   5
Value (Rs.)   3   4   5   6
Solution:
Using memory function approach, we have:
Computation:

The table V is initialized with V[ 0, j ] = 0 and V[ i, 0 ] = 0, and every other entry set to -1
("not yet computed"). The call MFKnap( 4, 5 ) unfolds as follows:

MFKnap( 4, 5 ) = max { MFKnap( 3, 5 ), 6 + MFKnap( 3, 0 ) }
MFKnap( 3, 5 ) = max { MFKnap( 2, 5 ), 5 + MFKnap( 2, 1 ) }
MFKnap( 2, 5 ) = max { MFKnap( 1, 5 ), 4 + MFKnap( 1, 2 ) }
MFKnap( 1, 5 ) = max { MFKnap( 0, 5 ), 3 + MFKnap( 0, 3 ) } = max { 0, 3 + 0 } = 3,  so V[ 1, 5 ] = 3
MFKnap( 1, 2 ) = max { MFKnap( 0, 2 ), 3 + MFKnap( 0, 0 ) } = max { 0, 3 + 0 } = 3,  so V[ 1, 2 ] = 3
MFKnap( 2, 5 ) = max { 3, 4 + 3 } = 7,                                               so V[ 2, 5 ] = 7
MFKnap( 2, 1 ) = MFKnap( 1, 1 ) = MFKnap( 0, 1 ) = 0
MFKnap( 3, 5 ) = max { 7, 5 + 0 } = 7,                                               so V[ 3, 5 ] = 7
MFKnap( 4, 5 ) = max { 7, 6 + 0 } = 7,                                               so V[ 4, 5 ] = 7

Only the table entries that are actually needed (V[1,2], V[1,5], V[2,5], …) are ever computed; the
remaining entries keep the value -1. The maximal value, 7, is the same as the one obtained by the
bottom-up algorithm.
Efficiency:
Time efficiency is in the same class as the bottom-up algorithm: O(n·W) + O(n + W).
Only a constant-factor gain is obtained by using the memory function.
It is less space efficient than a space-efficient version of the bottom-up algorithm.
UNIT-5
DECREASE-AND-CONQUER APPROACHES, SPACE-TIME TRADEOFFS
5.1 Decrease and conquer :Introduction
Decrease & conquer is a general algorithm design strategy based on exploiting the relationship between
a solution to a given instance of a problem and a solution to a smaller instance of the same problem.
The exploitation can be either top-down (recursive) or bottom-up (non-recursive).
The major variations of decrease and conquer are
1. Decrease by a constant :(usually by 1):
a. insertion sort
b. graph traversal algorithms (DFS and BFS)
c. topological sorting
d. algorithms for generating permutations, subsets
2. Decrease by a constant factor (usually by half)
a. binary search and bisection method
3. Variable size decrease
a. Euclid's algorithm
[Figure: the major variations of the decrease & conquer approach.]
Algorithm:
ALGORITHM InsertionSort(A[0 … n-1])
//sorts a given array by insertion sort
//i/p: array A[0 … n-1]
//o/p: sorted array A[0 … n-1] in ascending order
for i ← 1 to n - 1 do
    v ← A[i]
    j ← i - 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j - 1
    A[j + 1] ← v
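A minimal Python sketch of the same insertion sort:

def insertion_sort(a):
    # sorts list a in place in ascending order
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]          # shift larger elements one position right
            j -= 1
        a[j + 1] = v

lst = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(lst)
print(lst)   # [17, 29, 34, 45, 68, 89, 90]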
Analysis:
Example:
Sort the following list of elements using insertion sort: 89, 45, 68, 90, 29, 34, 17
(the elements to the left of | are already sorted)

89 | 45  68  90  29  34  17
45  89 | 68  90  29  34  17
45  68  89 | 90  29  34  17
45  68  89  90 | 29  34  17
29  45  68  89  90 | 34  17
29  34  45  68  89  90 | 17
17  29  34  45  68  89  90
Tree edges: edges used by DFS traversal to reach previously unvisited vertices
Back edges: edges connecting vertices to previously visited vertices other than
their immediate predecessor in the traversals
Cross edges: edges that connect a vertex to a previously visited vertex other than its
immediate predecessor (they connect siblings).
DAG: Directed acyclic graph
Algorithm:
ALGORITHM DFS (G)
//implements DFS traversal of a given graph
//i/p: graph G = { V, E }
//o/p: DFS tree
Mark each vertex in V with 0 as a mark of being unvisited
count ← 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
count ← count + 1
mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
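A minimal Python sketch of DFS for a graph stored as adjacency lists (the small sample graph is illustrative, not the example graph of the notes):

def dfs(graph):
    # graph: dict vertex -> list of adjacent vertices
    # returns dict vertex -> visit number (the order in which vertices are reached)
    count = 0
    mark = {v: 0 for v in graph}          # 0 = unvisited

    def visit(v):
        nonlocal count
        count += 1
        mark[v] = count
        for w in graph[v]:
            if mark[w] == 0:
                visit(w)

    for v in graph:
        if mark[v] == 0:
            visit(v)
    return mark

g = {'A': ['B', 'E'], 'B': ['A', 'F'], 'E': ['A', 'F'], 'F': ['B', 'E']}
print(dfs(g))   # {'A': 1, 'B': 2, 'E': 4, 'F': 3}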
Example:
Starting at vertex A, traverse the following graph using the DFS traversal method:
[Figure: graph on the eight vertices A-H.]
Solution (each vertex is shown with its push order and, once deleted, its pop order):

Insert A into the stack:    A(1)
Insert B into the stack:    B(2), A(1)
Insert F into the stack:    F(3), B(2), A(1)
Insert E into the stack:    E(4), F(3), B(2), A(1)
Delete E from the stack:    E(4, 1)
Delete F from the stack:    F(3, 2)
Insert G into the stack:    G(5), B(2), A(1)
Insert C into the stack:    C(6), G(5), B(2), A(1)
Insert D into the stack:    D(7), C(6), G(5), B(2), A(1)
Insert H into the stack:    H(8), D(7), C(6), G(5), B(2), A(1)
Delete H from the stack:    H(8, 3)
Delete D from the stack:    D(7, 4)
Delete C from the stack:    C(6, 5)
Delete G from the stack:    G(5, 6)
Delete B from the stack:    B(2, 7)
Delete A from the stack:    A(1, 8)

The vertices become dead ends (are popped off the stack) in the order E, F, H, D, C, G, B, A.
[Figures of the graph at each step and of the resulting DFS tree are omitted.]
Applications of DFS: checking connectivity, finding connected components, checking acyclicity,
finding articulation points, topological sorting of a DAG.
Efficiency: Θ(|V|²) with an adjacency matrix; Θ(|V| + |E|) with adjacency lists.
BFS visits the graph's vertices by moving across to all the neighbors of the last visited vertex,
marking each vertex as visited.
Instead of a stack, BFS uses a queue.
Similar to level-by-level tree traversal.
Redraws the graph in tree-like fashion (with tree edges and cross edges
for an undirected graph).
Algorithm:
ALGORITHM BFS (G)
//implements BFS traversal of a given graph
//i/p: graph G = { V, E }
//o/p: BFS tree/forest
Mark each vertex in V with 0 as a mark of being unvisited
count ← 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)

bfs(v)
count ← count + 1
mark v with count and initialize a queue with v
while the queue is NOT empty do
    for each vertex w in V adjacent to the front vertex v do
        if w is marked with 0
            count ← count + 1
            mark w with count
            add w to the queue
    remove vertex v from the front of the queue
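A minimal Python sketch of BFS for a graph stored as adjacency lists (the sample graph is illustrative):

from collections import deque

def bfs(graph):
    # graph: dict vertex -> list of adjacent vertices
    # returns dict vertex -> visit number (BFS order)
    count = 0
    mark = {v: 0 for v in graph}          # 0 = unvisited

    def traverse(start):
        nonlocal count
        count += 1
        mark[start] = count
        queue = deque([start])
        while queue:
            v = queue[0]                  # front vertex
            for w in graph[v]:
                if mark[w] == 0:
                    count += 1
                    mark[w] = count
                    queue.append(w)
            queue.popleft()               # remove v from the front of the queue

    for v in graph:
        if mark[v] == 0:
            traverse(v)
    return mark

g = {'A': ['B', 'E'], 'B': ['A', 'F'], 'E': ['A', 'F'], 'F': ['B', 'E']}
print(bfs(g))   # {'A': 1, 'B': 2, 'E': 3, 'F': 4}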
Example:
Starting at vertex A traverse the following graph using BFS traversal method:
Solution:
[Figures of the queue and graph at each step are omitted.]
F(3), G(4) are in the queue; delete F from the queue, leaving G(4).
Step 8: NO unvisited adjacent vertex for H, backtrack.
Step 9: NO unvisited adjacent vertex for D, backtrack.
The queue becomes empty. The algorithm stops, as all the nodes in the given graph are visited.

The BFS tree is as follows (dotted lines are cross edges):
[Figure: BFS tree rooted at A, containing vertices A, B, F, C, ….]
Applications of BFS: checking connectivity and acyclicity, finding connected components,
finding paths with the minimum number of edges.
Efficiency: Θ(|V|²) with an adjacency matrix; Θ(|V| + |E|) with adjacency lists.
Main facts about DFS and BFS:

                                    DFS                         BFS
Data structure                      a stack                     a queue
Number of vertex orderings          two orderings               one ordering
Edge types (undirected graphs)      tree and back edges         tree and cross edges
Applications                        connectivity, acyclicity,   connectivity, acyclicity,
                                    articulation points         minimum-edge paths
Efficiency with adjacency matrix    Θ(n²)                       Θ(n²)
Efficiency with adjacency lists     Θ(n + e)                    Θ(n + e)
DFS Method:
Perform a DFS traversal and note the order in which vertices become dead ends (i.e., are popped
off the traversal stack). Reversing this order yields a solution to the topological sorting problem
(provided the digraph is a DAG).
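A minimal Python sketch of this DFS-based method; the sample digraph is an assumed prerequisite graph (C1→C3, C2→C3, C3→C4, C3→C5, C4→C5) chosen so that it reproduces the popping-off order of the example below:

def topological_sort(digraph):
    # digraph: dict vertex -> list of successors; assumes the digraph is a DAG
    visited, order = set(), []

    def visit(v):
        visited.add(v)
        for w in digraph[v]:
            if w not in visited:
                visit(w)
        order.append(v)               # v becomes a dead end: record the pop-off order

    for v in digraph:
        if v not in visited:
            visit(v)
    return order[::-1]                # reverse of the popping-off order

g = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}
print(topological_sort(g))   # ['C2', 'C1', 'C3', 'C4', 'C5']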
Example:
Apply the DFS-based algorithm to solve the topological sorting problem for the given graph:
[Figure: a digraph on the five vertices C1, C2, C3, C4, C5.]
Step 1: Insert C1 into the stack:    C1(1)
Step 2: Insert C3 into the stack:    C3(2), C1(1)
Step 3: Insert C4 into the stack:    C4(3), C3(2), C1(1)
Step 4: Insert C5 into the stack:    C5(4), C4(3), C3(2), C1(1)
Step 5: Delete C5 from the stack:    C5(4, 1)
Step 6: Delete C4 from the stack:    C4(3, 2)
Step 7: Delete C3 from the stack:    C3(2, 3)
Step 8: Delete C1 from the stack:    C1(1, 4)
The stack becomes empty, but there is still an unvisited node, so the DFS is started
again from an arbitrarily selected unvisited node as the source.
Step 9: Insert C2 into the stack:    C2(5)
Step 10: Delete C2 from the stack:   C2(5, 5)
The stack becomes empty and NO unvisited node is left, so the algorithm stops.

The popping-off order is: C5, C4, C3, C1, C2
Topologically sorted list (reverse of the pop order): C2, C1, C3, C4, C5
Example:
Apply the source-removal-based algorithm to solve the topological sorting problem for the
given graph:
[Figure: the same digraph on vertices C1-C5 as above.]
Solution:
Repeatedly identify a source (a vertex with no incoming edges) in the remaining digraph and delete
it together with all the edges outgoing from it:
Delete C1, then C2, then C3, then C4, then C5.
Topologically sorted list: C1, C2, C3, C4, C5
[Figures of the shrinking digraph at each step are omitted.]
Two space-for-time techniques:
input enhancement: preprocess the input (or its part) to store some info to be used later in
solving the problem; example: counting (distribution) sorts
prestructuring: preprocess the input to make accessing its elements easier; example: hashing

Distribution counting sort first initializes the frequencies of the element values
(// init frequencies) and then computes their distribution (prefix sums) to place each element
directly into its final position.
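A minimal Python sketch of distribution counting sort under the assumption that the values lie in a known range [l, u]:

def distribution_counting_sort(a, l, u):
    # sorts a list of integers whose values are known to lie in [l, u]
    freq = [0] * (u - l + 1)            # init frequencies
    for x in a:
        freq[x - l] += 1
    for j in range(1, len(freq)):       # compute the distribution (prefix sums)
        freq[j] += freq[j - 1]
    s = [0] * len(a)
    for x in reversed(a):               # place each element in its final position
        freq[x - l] -= 1
        s[freq[x - l]] = x
    return s

print(distribution_counting_sort([13, 11, 12, 13, 12, 12], 11, 13))
# [11, 12, 12, 12, 13, 13]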