DAA Unit-II
Divide and Conquer: General Method, Defective chessboard, Binary Search, finding the maximum and
minimum, Merge sort, Quick sort.
The Greedy Method: The general Method, knapsack problem, minimum-cost spanning Trees, Optimal
Merge Patterns, Single Source Shortest Paths.
…………………………………………………………………………………………………………………………….
Master Theorem
The master method is a formula for solving recurrence relations of the form:

T(n) = aT(n/b) + f(n), with f(n) = θ(n^k log^p n)

where,
n = size of input
n/b = size of each subproblem. All subproblems are assumed to have the same size.
f(n) = cost of the work done outside the recursive call, which includes the cost of dividing the problem and the cost of merging the solutions.
Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.
An asymptotically positive function means that for a sufficiently large value of n, we have f(n) > 0.
The master theorem is used to calculate the time complexity of recurrence relations (divide and conquer algorithms) in a simple and quick way. The solution depends on comparing a with b^k:
Case 1: If a > b^k, then T(n) = θ(n^(log_b a)).
Case 2: If a = b^k and p > -1, then T(n) = θ(n^(log_b a) . log^(p+1) n).
Case 3: If a < b^k and p ≥ 0, then T(n) = θ(n^k log^p n).
Problem-01:
Solve the following recurrence relation using Master's theorem: T(n) = 3T(n/2) + n^2
Solution-
Then, we have-
a=3
b=2
k=2
p=0
Now, a = 3 and b^k = 2^2 = 4. Clearly, a < b^k.
Since a < b^k and p = 0, we have-
T(n) = θ(n^k log^p n)
T(n) = θ(n^2 log^0 n)
Thus,
T(n) = θ(n^2)
Problem-02:
Solve the following recurrence relation using Master's theorem: T(n) = 2T(n/2) + n log n
Solution-
Then, we have-
a=2
b=2
k=1
p=1
Now, a = 2 and b^k = 2^1 = 2. Clearly, a = b^k.
Since a = b^k and p = 1 > -1, we have-
T(n) = θ(n^(log_b a) . log^(p+1) n)
T(n) = θ(n^(log_2 2) . log^(1+1) n)
Thus,
T(n) = θ(n log^2 n)
Problem-03:
Solve the following recurrence relation using Master's theorem: T(n) = 2T(n/4) + n^0.51
Solution-
Then, we have-
a=2
b=4
k = 0.51
p=0
Now, a = 2 and b^k = 4^0.51 ≈ 2.03. Clearly, a < b^k.
Since a < b^k and p = 0, we have-
T(n) = θ(n^k log^p n)
T(n) = θ(n^0.51 log^0 n)
Thus,
T(n) = θ(n^0.51)
Problem-04:
Solve the following recurrence relation using Master's theorem: T(n) = √2 T(n/2) + log n
Solution-
Then, we have-
a = √2
b=2
k=0
p=1
Now, a = √2 and b^k = 2^0 = 1. Clearly, a > b^k.
Since a > b^k, we have-
T(n) = θ(n^(log_b a))
T(n) = θ(n^(log_2 √2))
T(n) = θ(n^(1/2))
Thus,
T(n) = θ(√n)
Problem-05:
Solve the following recurrence relation using Master's theorem: T(n) = 8T(n/4) - n^2 log n
Solution-
• The given recurrence relation does not correspond to the general form of Master's theorem: the function f(n) = -n^2 log n is negative, while f(n) must be asymptotically positive. Hence, Master's theorem cannot be applied.
Problem-06:
Solve the following recurrence relation using Master’s theorem- T(n) = 3T(n/3) + n/2
Solution-
• We can write the given recurrence as T(n) = 3T(n/3) + θ(n). This is because in the general form, the θ used for the function f(n) hides constant factors, so the constant 1/2 in n/2 can be ignored.
Then, we have-
a=3
b=3
k=1
p=0
Clearly, a = b^k (3 = 3^1).
Since a = b^k and p = 0 > -1, we have-
T(n) = θ(n^(log_b a) . log^(p+1) n)
T(n) = θ(n^(log_3 3) . log^(0+1) n)
T(n) = θ(n^1 . log^1 n)
Thus,
T(n) = θ(n log n)
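As a quick self-check of these three cases, here is a small illustrative Python sketch; the function master_theorem and its printed strings are my own naming, not part of any standard library. It classifies a recurrence T(n) = aT(n/b) + θ(n^k log^p n) exactly as the worked problems above do.

import math

# Illustrative sketch: classify T(n) = a*T(n/b) + theta(n^k * log^p n).
def master_theorem(a, b, k, p):
    log_b_a = math.log(a, b)
    if a > b ** k:                        # Case 1: the recursion dominates
        return "theta(n^%.2f)" % log_b_a
    if a == b ** k:                       # Case 2: both parts balanced
        if p > -1:
            return "theta(n^%s * log^%s n)" % (k, p + 1)
        return "theta(n^%s * log log n)" % k if p == -1 else "theta(n^%s)" % k
    # Case 3 (a < b^k): the work outside the recursion dominates
    return "theta(n^%s * log^%s n)" % (k, p) if p >= 0 else "theta(n^%s)" % k

print(master_theorem(3, 2, 2, 0))             # Problem-01: theta(n^2 * log^0 n) = theta(n^2)
print(master_theorem(2, 2, 1, 1))             # Problem-02: theta(n^1 * log^2 n)
print(master_theorem(math.sqrt(2), 2, 0, 1))  # Problem-04: theta(n^0.50) = theta(sqrt(n))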
In the divide and conquer approach, the problem at hand is divided into smaller sub-problems and then each sub-problem is solved independently. If we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. Those "atomic", smallest possible sub-problems are solved directly. The solutions of all sub-problems are finally merged to obtain the solution of the original problem.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of
the original problem. This step generally takes a recursive approach to divide the problem until no sub-problem
is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the
actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems are
considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they form the solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.
Examples
• Merge Sort
• Quick Sort
• Binary Search
There are various ways available to solve any computer problem, but the ones mentioned above are good examples of the divide and conquer approach.
Advantages of the divide and conquer approach:
• Divide and conquer successfully solves some famously hard problems, such as the Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems for which you have no basic idea, but the divide and conquer approach lessens the effort, as it works by dividing the main problem into halves and then solving them recursively. The resulting algorithms are often much faster than naive alternatives.
• It efficiently uses cache memory without occupying much space because it solves simple subproblems
within the cache memory instead of accessing the slower main memory.
• Since these algorithms exhibit parallelism, they can be handled by systems with parallel processing without much modification.
Disadvantages:
• Since most of these algorithms are designed using recursion, they require careful memory management.
• Deep recursion may even crash the program if the recursion depth exceeds the available stack space (a stack overflow).
1. General Method
Divide and conquer is a design strategy which is well known for breaking down efficiency barriers. When the method applies, it often leads to a large improvement in time complexity, for example from O(n^2) to O(n log n) for sorting the elements.
Divide and conquer strategy is as follows: divide the problem instance into two or more smaller instances of
the same problem, solve the smaller instances recursively, and assemble the solutions to form a solution of the
original instance. The recursion stops when an instance is reached which is too small to divide. When dividing
the instance, one can either use whatever division comes most easily to hand or invest time in making the
division carefully so that the assembly is simplified.
A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified
by other procedures whose precise meanings are left undefined. The control abstraction for divide and conquer
technique is DANDC(P), where P is the problem to be solved.
DANDC(P)
{
    if SMALL(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DANDC to each of these sub problems;
        return COMBINE(DANDC(P1), DANDC(P2), ..., DANDC(Pk));
    }
}
SMALL(P) is a Boolean-valued function which determines whether the input size is small enough that the answer can be computed without splitting. If this is so, the function S is invoked; otherwise the problem P is divided into smaller sub-problems. These sub-problems P1, P2, ..., Pk are solved by recursive application of DANDC.
If the sizes of the two sub-problems are approximately equal, then the computing time of DANDC is described by the recurrence:

T(n) = g(n)               if n is small
T(n) = 2T(n/2) + f(n)     otherwise

where g(n) is the time to compute the answer directly for small inputs and f(n) is the time to divide P and to combine the solutions of the sub-problems.
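To make this control abstraction concrete, the following is a minimal Python sketch; the names dandc, small, solve_small, divide and combine are assumptions of mine mirroring SMALL, S, the division step and COMBINE above. It is instantiated with merging-based sorting as the example problem.

# Generic divide-and-conquer control abstraction (illustrative sketch).
def dandc(p, small, solve_small, divide, combine):
    if small(p):                 # SMALL(P): solvable without splitting?
        return solve_small(p)    # S(P): solve directly
    subproblems = divide(p)      # split P into P1, P2, ..., Pk
    solutions = [dandc(q, small, solve_small, divide, combine)
                 for q in subproblems]
    return combine(solutions)    # COMBINE the sub-solutions

# Example instantiation: sorting a list by merging two sorted halves.
def merge(parts):
    left, right = parts
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

print(dandc([5, 2, 8, 1],
            small=lambda p: len(p) <= 1,
            solve_small=lambda p: p,
            divide=lambda p: [p[:len(p)//2], p[len(p)//2:]],
            combine=merge))      # -> [1, 2, 5, 8]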
2. Defective chessboard
This problem can be solved using Divide and Conquer. Below is the recursive algorithm.
Tile(int n, Point p)
1) Base case: n = 2, A 2 x 2 square with one cell missing is nothing but a tile and can be filled with a single
tile.
2) Place a L shaped tile at the center such that it does not cover the n/2 * n/2 sub-square that has a missing
square. Now all four Sub-squares of size n/2 x n/2 have a missing cell (a cell that doesn't need to be filled).
See figure 2 below.
3) Solve the problem recursively for the following four sub-squares. Let p1, p2, p3 and p4 be the positions of the 4 missing cells in the 4 sub-squares.
a) Tile(n/2, p1)
b) Tile(n/2, p2)
c) Tile(n/2, p3)
d) Tile(n/2, p4)
Figure 2: After placing the first tile.
Figure 3: Recursing on the first sub-square.
Time Complexity:
The recurrence relation for the above recursive algorithm can be written as below, where C is a constant.
T(n) = 4T(n/2) + C
The above recurrence can be solved using the Master method, and the time complexity is O(n^2).
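A runnable sketch of the tiling algorithm, under the assumption of a board represented as a matrix in which each tromino is written as a distinct integer id (the function and variable names are mine, not from the notes):

# Tile a 2^n x 2^n board with one defective cell using L-shaped trominoes.
def tile(board, top, left, size, dr, dc, next_id=[1]):
    """Tile the size x size sub-board at (top, left) whose defective
    (already-filled) cell is at (dr, dc)."""
    if size == 1:
        return
    tid = next_id[0]; next_id[0] += 1        # id for this central tromino
    half = size // 2
    mid_r, mid_c = top + half, left + half
    quads = [(top, left), (top, mid_c), (mid_r, left), (mid_r, mid_c)]
    centers = [(mid_r - 1, mid_c - 1), (mid_r - 1, mid_c),
               (mid_r, mid_c - 1), (mid_r, mid_c)]
    for (qr, qc), (cr, cc) in zip(quads, centers):
        if qr <= dr < qr + half and qc <= dc < qc + half:
            tile(board, qr, qc, half, dr, dc, next_id)  # defect lies here
        else:
            board[cr][cc] = tid                         # place a tromino arm
            tile(board, qr, qc, half, cr, cc, next_id)  # arm acts as defect

n = 4                                   # a 4 x 4 board
board = [[0] * n for _ in range(n)]
board[0][0] = -1                        # mark the defective cell
tile(board, 0, 0, n, 0, 0)
for row in board:
    print(row)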
3. Binary Search
The binary search algorithm finds a given element in a list of elements with O(log n) time complexity, where n is the total number of elements in the list. Binary search can be used only with a sorted list of elements, that is, only with a list of elements that are already arranged in order.
This search process starts by comparing the search element with the middle element of the list. If both match, the result is "element found". Otherwise, we check whether the search element is smaller or larger than the middle element. If the search element is smaller, we repeat the same process on the left sublist of the middle element.
If the search element is larger, we repeat the same process on the right sublist of the middle element. We repeat this process until we find the search element in the list or until we are left with a sublist of only one element. If that element also doesn't match the search element, then the result is "Element not found in the list".
Iterative method:

binarySearch(arr, x):
    low = 0, high = n - 1
    while low <= high:
        mid = (low + high) / 2
        if x == arr[mid]:
            return mid
        else if x < arr[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return False
The recurrence for binary search is:

T(n) = 1              if n = 1
T(n) = T(n/2) + 1     if n > 1
To perform binary search time complexity analysis, we apply the master theorem to the equation and get
O(log n).
The binary search algorithm assumes the input data is sorted. It takes the following steps to find a key in the input data.
1. Compare the search key with the middle element of the current sublist.
2. If they do not match, find whether the key will be present in the left or the right half by comparing the search key with the current data item.
3. Repeat the process on that half until the key is found or the sublist becomes empty.
As we can see from the above steps, the binary search algorithm breaks the list in half in each iteration. So how many times do we need to divide by 2 until we have only one element?

n / 2^k = 1

We can rewrite it as:

2^k = n
k = log2 n

So, in the average and worst case, the time complexity of the binary search algorithm is O(log n).
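The recurrence T(n) = T(n/2) + 1 above corresponds to the recursive formulation of binary search. A small runnable sketch of that recursive variant (names are illustrative):

# Recursive binary search matching T(n) = T(n/2) + 1.
def binary_search_rec(arr, x, low, high):
    if low > high:
        return -1                       # element not found
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    if x < arr[mid]:                    # search the left sublist
        return binary_search_rec(arr, x, low, mid - 1)
    return binary_search_rec(arr, x, mid + 1, high)   # right sublist

data = [3, 7, 11, 19, 24, 27, 29]
print(binary_search_rec(data, 19, 0, len(data) - 1))  # -> 3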
Finding the Maximum and Minimum
Problem Statement
The Max-Min problem in algorithm analysis is finding the maximum and the minimum value in an array.
Solution
To find the maximum and minimum numbers in a given array numbers[] of size n, the following algorithms can be used. First we present the naive method, and then we present the divide and conquer approach.
Naïve Method
Naïve method is a basic method to solve any problem. In this method, the maximum and minimum number
can be found separately. To find the maximum and minimum numbers, the following straightforward
algorithm can be used.
max := numbers[1]
min := numbers[1]
for i := 2 to n do
    if numbers[i] > max then max := numbers[i]
    if numbers[i] < min then min := numbers[i]
Analysis
The naive method makes 2(n - 1) comparisons in the worst case, two for each of the remaining n - 1 elements.
The number of comparisons can be reduced using the divide and conquer approach. Following is the
technique.
In this approach, the array is divided into two halves. Then using recursive approach maximum and
minimum numbers in each halves are found. Later, return the maximum of two maxima of each half and the
minimum of two minima of each half.
In this given problem, the number of elements in an array is y−x+1 , where y is greater than or equal to x.
Max−Min(x,y) will return the maximum and minimum values of an array numbers[x...y]
Algorithm: Max-Min(x, y)
if y - x ≤ 1 then
    return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
else
    (max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)
    (max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)
    return (max(max1, max2), min(min1, min2))
Analysis
Let T(n) be the number of comparisons made by Max-Min(x, y), where the number of elements is n = y - x + 1.
If T(n) represents the number of comparisons, then the recurrence relation can be represented as

T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2   for n > 2
T(2) = 1
T(1) = 0

Solving this recurrence gives T(n) = (3n/2) - 2 comparisons, fewer than the 2(n - 1) of the naive method.
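A runnable Python sketch of the Max-Min algorithm above (the function name max_min is mine):

# Return (max, min) of numbers[x..y] using pairwise recursion.
def max_min(numbers, x, y):
    if y - x <= 1:                       # one or two elements: compare directly
        return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
    mid = (x + y) // 2
    max1, min1 = max_min(numbers, x, mid)        # left half
    max2, min2 = max_min(numbers, mid + 1, y)    # right half
    return (max(max1, max2), min(min1, min2))    # combine with 2 comparisons

print(max_min([22, 13, -5, -8, 15, 60, 17, 31, 47], 0, 8))  # -> (60, -8)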
4. Merge sort
Merge sort is a sorting algorithm that uses the divide and conquer strategy. In this method, the division is carried out dynamically.
Sorting by merging is a recursive, divide-and-conquer strategy. In the base case, we have a sequence with exactly one element in it. Since such a sequence is already sorted, there is nothing to be done. To sort a sequence of n elements (n > 1):
• Divide the sequence into two sequences of length ⌈n/2⌉ and ⌊n/2⌋;
• Recursively sort each of the two subsequences; and then
• Merge the sorted subsequences to obtain the final sorted list.
Conquer: We recursively solve two subproblems, each of size n/2, which contributes T(n/2) + T(n/2) = 2T(n/2) to the running time.
Merge sort is a stable sorting algorithm. A sorting algorithm is said to be stable if it preserves the relative order of equal elements after sorting. Merge sort preserves this ordering; hence merge sort is a stable sorting algorithm.
Drawbacks: Merge sort requires O(n) additional space for the temporary arrays used while merging, and it is slower than simpler algorithms for very small inputs.
Time Complexity: O(n log n) in all cases. Merge sort is a recursive algorithm, and its time complexity can be expressed by the following recurrence relation:

T(n) = 2T(n/2) + θ(n)
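A compact runnable sketch of merge sort matching this recurrence (illustrative naming):

# Merge sort: T(n) = 2T(n/2) + theta(n).
def merge_sort(a):
    if len(a) <= 1:                     # base case: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: theta(n)
        if left[i] <= right[j]:               # <= keeps the sort stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # -> [3, 9, 10, 27, 38, 43, 82]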
5. Quick Sort
Like merge sort, quicksort is a divide and conquer algorithm. It picks an element as a pivot and partitions the given array around the picked pivot. There are many different versions of quicksort that pick the pivot in different ways.
The key process in quicksort is the partition() function. The goal of partition is: given an array and an element x of the array as the pivot, put x at its correct position in the sorted array, put all smaller elements (smaller than x) before x, and put all greater elements (greater than x) after x. All this should be done in linear time.
Partition Algorithm:
There can be many ways to do partition; the following pseudo-code adopts the method given in the CLRS book. The logic is simple: we start from the leftmost element and keep track of the index of smaller (or equal) elements as i. While traversing, if we find a smaller element, we swap the current element with arr[i]. Otherwise, we ignore the current element.
Pseudo code for partition()
/* This function takes last element as pivot, places the pivot element at its correct position in sorted array,
and places all smaller (smaller than pivot) to left of pivot and all greater elements to right of pivot */
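Below is a runnable sketch of that partition routine (the Lomuto scheme with the last element as pivot, as in CLRS), together with a quicksort driver; the Python names are illustrative. Note that the worked example that follows uses a different variant, with the leftmost element as pivot.

# Lomuto partition: last element as pivot.
def partition(arr, low, high):
    pivot = arr[high]                 # pivot value
    i = low - 1                       # index of the last "small" element
    for j in range(low, high):
        if arr[j] <= pivot:           # found an element <= pivot
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # pivot to final place
    return i + 1

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sort elements before the pivot
        quick_sort(arr, p + 1, high)  # sort elements after the pivot

a = [10, 80, 30, 90, 40, 50, 70]
quick_sort(a, 0, len(a) - 1)
print(a)                              # -> [10, 30, 40, 50, 70, 80, 90]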
To understand the working of quick sort, let's take an unsorted array: a = [24, 9, 29, 14, 19, 27]. It will make the concept clearer and more understandable.
In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24, a[right] = 27 and a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
Now, a[pivot] < a[right], so the algorithm moves one position towards the left, and the right pointer now points to 19.
Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to the right.
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm starts from the left and moves to the right.
Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves one position to the right.
Since the pivot is at the right, the algorithm continues from the left. Now, a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one position to the left.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot] and a[right]; the pivot is now at the right.
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm starts from the left and moves to the right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right all point to the same element. This marks the termination of the procedure.
Element 24, the pivot element, is placed at its exact position.
The elements to the right of element 24 are greater than it, and the elements to the left of element 24 are smaller than it.
Worst Case Analysis: This is the case when the items are already in sorted form and we try to sort them again: every partition is maximally unbalanced, which takes a lot of time and stack space.
Equation:
T(n) = T(1) + T(n-1) + n
where n is the number of comparisons required for the pivot to identify its exact position. For example, comparing a first-element pivot with the other elements of a 6-element list takes 5 comparisons. Solving this recurrence gives a worst-case time complexity of O(n^2).
Average Case Complexity - This occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of quicksort is O(n log n).
• Best case scenario: The best case occurs when the partitions are as evenly balanced as possible, i.e. their sizes on either side of the pivot element are either equal or differ by 1.
o Case 1: The sizes of the sublists on either side of the pivot become equal when the subarray has an odd number of elements and the pivot lands right in the middle after partitioning. Each partition then has (n-1)/2 elements.
o Case 2: The sizes differ by 1 when the subarray has an even number n of elements. One partition has n/2 elements and the other has (n/2) - 1.
In either of these cases, each partition has at most n/2 elements, so the recursion tree of the subproblem sizes has depth O(log n), and the best-case running time is O(n log n).
The Greedy Method: The general Method, knapsack problem, minimum-cost spanning Trees, Optimal
Merge Patterns, Single Source Shortest Paths
……………………………………………………………………………………………………………………………..
1. The General Method
Among all the algorithmic approaches, the simplest and most straightforward is the Greedy method. In this approach, the decision is taken on the basis of the currently available information, without worrying about the effect of the current decision in the future.
Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an immediate benefit. This approach never reconsiders the choices taken previously. It is mainly used to solve optimization problems. The Greedy method is easy to implement and quite efficient in most cases. Hence, we can say that the Greedy algorithm is an algorithmic paradigm based on a heuristic that follows the locally optimal choice at each step, with the hope of finding a globally optimal solution.
In many problems, it does not produce an optimal solution though it gives an approximate (near optimal)
solution in a reasonable time.
Most greedy algorithms consist of the following components:
• A selection function − Used to choose the best candidate to be added to the solution.
• A feasibility function − Used to determine whether a candidate can be used to contribute to the
solution.
• A solution function − Used to indicate whether a complete solution has been reached.
Areas of Application
• Finding the shortest path between two vertices using Dijkstra’s algorithm.
• Finding the minimal spanning tree in a graph using Prim’s /Kruskal’s algorithm, etc.
In many problems, a Greedy algorithm fails to find an optimal solution; moreover, it may even produce the worst solution. Problems like Travelling Salesman and Knapsack cannot be solved optimally using this approach.
2. Knapsack Problem
Given a set of items, each with a weight and a value, determine a subset of items to include in a collection so
that the total weight is less than or equal to a given limit and the total value is as large as possible.
The knapsack problem is a combinatorial optimization problem. It appears as a subproblem in many more complex mathematical models of real-world problems. One general approach to difficult problems is to identify the most restrictive constraint, ignore the others, solve a knapsack problem, and somehow adjust the solution to satisfy the ignored constraints.
Applications
In many cases of resource allocation with some constraint, the problem can be modelled in a way similar to the Knapsack problem. Following is a set of examples.
• Finding the least wasteful way to cut raw materials
• portfolio optimization
• Cutting stock problems
Problem Scenario
A thief is robbing a store and can carry a maximal weight of W into his knapsack. There are n items available
in the store and weight of ith item is wi and its profit is pi. What items should the thief take?
In this context, the items should be selected in such a way that the thief will carry those items for which he will
gain maximum profit. Hence, the objective of the thief is to maximize the profit.
Based on the nature of the items, Knapsack problems are categorized as
• Fractional Knapsack
• 0/1 Knapsack (items cannot be broken)
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions of items.
According to the problem statement,
• There are n items in the store,
• Weight of the ith item wi > 0,
• Profit of the ith item pi > 0, and
• Capacity of the Knapsack is W.
In this version of the Knapsack problem, items can be broken into smaller pieces. So, the thief may take only a fraction xi of the ith item, where
0 ⩽ xi ⩽ 1
The ith item contributes the weight xi.wi to the total weight in the knapsack and the profit xi.pi to the total profit.
Hence, the objective of this algorithm is to maximize ∑(xi.pi) subject to the constraint ∑(xi.wi) ≤ W.
Let us consider that the capacity of the knapsack is W = 60 and the list of provided items is shown in the following table −

Item            A     B     C     D
Profit          280   100   120   120
Weight          40    10    20    24
Ratio (pi/wi)   7     10    6     5
Solution
First, all of B is chosen, as the weight of B is less than the capacity of the knapsack (and its profit-to-weight ratio is the highest). Next, item A is chosen, as the available capacity of the knapsack is greater than the weight of A. Now, C is chosen as the next item. However, the whole item cannot be chosen, as the remaining capacity of the knapsack is less than the weight of C.
Now, the capacity of the knapsack is equal to the total weight of the selected items. Hence, no more items can be selected.
The total profit is 100 + 280 + 120 × (10/20) = 380 + 60 = 440.
This is the optimal solution. We cannot gain more profit by selecting any different combination of items.
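A greedy fractional-knapsack sketch using the item values from the worked example above (treat the reconstructed table data and the names as assumptions of this illustration):

# Greedy fractional knapsack: pick by highest profit/weight ratio.
def fractional_knapsack(items, capacity):
    """items: list of (name, profit, weight). Returns total profit."""
    items = sorted(items, key=lambda t: t[1] / t[2], reverse=True)
    total = 0.0
    for name, profit, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)          # whole item or a fraction
        total += profit * (take / weight)
        capacity -= take
    return total

items = [("A", 280, 40), ("B", 100, 10), ("C", 120, 20), ("D", 120, 24)]
print(fractional_knapsack(items, 60))          # -> 440.0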
3. Minimum-Cost Spanning Trees
A spanning tree is a subgraph of graph G which has all the vertices covered with the minimum possible number of edges. Hence, a spanning tree does not have cycles and cannot be disconnected.
From this definition, we can draw the conclusion that every connected and undirected graph G has at least one spanning tree. A disconnected graph does not have any spanning tree, as it cannot be spanned to all its vertices.
We found three spanning trees of one complete graph. A complete undirected graph can have a maximum of n^(n-2) spanning trees, where n is the number of nodes. In the above example, n = 3, hence 3^(3-2) = 3 spanning trees are possible.
• Spanning tree has n-1 edges, where n is the number of nodes (vertices).
• From a complete graph, by removing maximum e - n + 1 edges, we can construct a spanning tree.
Thus, we can conclude that spanning trees are subgraphs of a connected graph G, and that disconnected graphs do not have spanning trees.
In a weighted graph, a minimum spanning tree is a spanning tree that has the minimum total weight among all spanning trees of the same graph. In real-world situations, this weight can be measured as distance, congestion, traffic load or any arbitrary value assigned to the edges.
We shall learn about two most important spanning tree algorithms here −
• Kruskal's Algorithm
• Prim's Algorithm
1. Prim’s Algorithm
• It is used for finding the Minimum Spanning Tree (MST) of a given graph.
• To apply Prim’s algorithm, the given graph must be weighted, connected and undirected.
Step-01:
• Randomly choose any vertex. The vertex connecting to the edge having the least weight is usually selected.
Step-02:
• Find all the edges that connect the tree to new vertices.
• Find the least weight edge among those edges and include it in the existing tree.
• If including that edge creates a cycle, then reject that edge and look for the next least weight edge.
Step-03:
• Keep repeating step-02 until all the vertices are included and Minimum Spanning Tree (MST) is
obtained.
Step-1: Start from any vertex; the vertex connecting to the edge having the least weight is selected first.
Step-2: Now we are at node 6. It has two adjacent edges; one is already selected, so select the second one.
Step-3: Now we are at node 5. It has three connected edges; one is already selected, so from the remaining two, select the minimum cost edge (the one having minimum weight), such that no loop is formed by adding it.
Step-4: Now we are at node 4. Select the minimum cost edge from the edges connected to this node, such that no loop is formed by adding it.
Step-6: Now we are at node 2. Select the minimum cost edge from the edges attached to this node, such that no loop is formed by adding it.
Since all the vertices have been included in the MST, we stop.
Cost of the Minimum Spanning Tree = sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units
Time Complexity: O(V^2) with an adjacency matrix. If the input graph is represented using an adjacency list, then the time complexity of Prim's algorithm can be reduced to O(E log V) with the help of a binary heap. In this implementation, we always consider the spanning tree to start from the root of the graph.
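A compact sketch of the heap-based O(E log V) variant mentioned above; the adjacency-list format and the name prims_mst are assumptions of this illustration:

import heapq

# Prim's MST: graph maps vertex -> [(weight, neighbour)].
def prims_mst(graph, start):
    visited = {start}
    heap = list(graph[start])            # edges leaving the start vertex
    heapq.heapify(heap)
    mst_cost, mst_edges = 0, []
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)       # least-weight edge out of the tree
        if v in visited:
            continue                     # would create a cycle: reject it
        visited.add(v)
        mst_cost += w
        mst_edges.append((w, v))
        for edge in graph[v]:            # push the new frontier edges
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst_cost, mst_edges

g = {0: [(4, 1), (8, 2)], 1: [(4, 0), (2, 2)],
     2: [(8, 0), (2, 1), (3, 3)], 3: [(3, 2)]}
print(prims_mst(g, 0))                   # -> (9, [(4, 1), (2, 2), (3, 3)])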
2. Kruskal's Algorithm
• It is used for finding the Minimum Spanning Tree (MST) of a given graph.
• To apply Kruskal's algorithm, the given graph must be weighted, connected and undirected.
Step-01: Sort all the edges from low weight to high weight.
Step-02:
• Take the edge with the lowest weight and use it to connect the vertices of graph.
• If adding an edge creates a cycle, then reject that edge and go for the next least weight edge.
Step-03:
Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.
• The worst-case running time is O(E log E), dominated by sorting the edges. If we ignore isolated vertices, each of which forms its own component of the minimum spanning forest, V ≤ 2E, so log V is O(log E), and the bound can equally be written as O(E log V).
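A runnable Kruskal sketch using a union-find structure to implement the cycle check of Step-02 (the names and the edge format are illustrative):

# Kruskal's MST over n vertices 0..n-1; edges: list of (weight, u, v).
def kruskal_mst(n, edges):
    parent = list(range(n))
    def find(x):                          # find the set representative
        while parent[x] != x:
            parent[x] = parent[parent[x]] # path compression
            x = parent[x]
        return x
    cost, chosen = 0, []
    for w, u, v in sorted(edges):         # Step-01: sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # Step-02: skip cycle-forming edges
            parent[ru] = rv               # merge the two components
            cost += w
            chosen.append((u, v, w))
    return cost, chosen

edges = [(4, 0, 1), (8, 0, 2), (2, 1, 2), (3, 2, 3)]
print(kruskal_mst(4, edges))  # -> (9, [(1, 2, 2), (2, 3, 3), (0, 1, 4)])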
4. Optimal Merge Patterns
Given n sorted files, the task is to find the minimum number of computations needed to reach the optimal merge pattern.
When two or more sorted files are to be merged together to form a single file, the minimum computations required to reach this single file are known as the optimal merge pattern.
If more than two files need to be merged, it can be done in pairs. For example, if we need to merge four files A, B, C, D, we may first merge A with B to get X1, merge X1 with C to get X2, and merge X2 with D to get X3 as the output file.
If we have two files of sizes m and n, the total computation time of the merge is m + n. Here, we use the greedy strategy of always merging the two smallest files among all the files present.
Examples:
Given 3 files with sizes 2, 3, 4 units, find an optimal way to combine these files.
Optimal method: merge the two smallest files first. Merge the files of sizes 2 and 3 (cost 2 + 3 = 5), then merge the resulting file of size 5 with the file of size 4 (cost 5 + 4 = 9). Total cost = 5 + 9 = 14.
Observations:
From the above results, we may conclude that to find the minimum cost of computation, we need to keep our collection of file sizes sorted at all times, i.e. repeatedly add the minimum possible computation cost and remove the merged files from the collection. We can achieve this efficiently using a min-heap (priority queue) data structure.
Each node represents a file with a given size; the number of nodes is greater than 2.
1. Add all the nodes to a priority queue (min-heap), keyed by file size.
2. While the priority queue contains more than one node:
   a. weight = pq.poll(); remove the smallest node from the queue.
   b. weight += pq.poll(); add the second smallest node's size and remove it too.
   c. count += weight; add this merge's cost to the total, and push a node of size weight back into the queue.
3. count is the minimum total computation cost.
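A runnable min-heap sketch of these steps (the name optimal_merge_cost is mine):

import heapq

# Optimal merge pattern: repeatedly merge the two smallest files.
def optimal_merge_cost(sizes):
    heapq.heapify(sizes)                   # step 1: all sizes into a min-heap
    total = 0
    while len(sizes) > 1:
        a = heapq.heappop(sizes)           # two smallest files
        b = heapq.heappop(sizes)
        total += a + b                     # cost of this merge
        heapq.heappush(sizes, a + b)       # merged file goes back in
    return total

print(optimal_merge_cost([2, 3, 4]))       # -> 14  (merge 2+3=5, then 5+4=9)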
Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters; the lengths of the assigned codes are based on the frequencies of the corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code.
A Huffman tree (Huffman coding tree) is defined as a full binary tree in which each leaf of the tree corresponds to a letter in the given alphabet.
The Huffman tree is the binary tree with minimum external path weight, that is, the one with the minimum sum of weighted path lengths for the given set of leaves. So the goal is to construct a tree with the minimum external path weight.
Consider the following alphabet and frequencies:

Letter     z   k   m    c    u    d    l    e
Frequency  2   7   24   32   37   42   42   120

The resulting Huffman codes are:

Letter  Frequency  Huffman Code  Code Length
e       120        0             1
d       42         101           3
l       42         110           3
u       37         100           3
c       32         1110          4
m       24         11111         5
k       7          111101        6
z       2          111100        6
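A runnable Huffman sketch with a min-heap; tie-breaking may differ from the table above, but the code lengths come out the same (the names are illustrative):

import heapq
from itertools import count

# Build Huffman codes from a {char: frequency} mapping.
def huffman_codes(freqs):
    tick = count()                      # tie-breaker keeps tuples comparable
    heap = [(f, next(tick), ch) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent trees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):          # internal node: recurse
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:
            codes[node] = code or "0"        # leaf: record its code
    walk(heap[0][2], "")
    return codes

freqs = {"z": 2, "k": 7, "m": 24, "c": 32, "u": 37, "d": 42, "l": 42, "e": 120}
print(huffman_codes(freqs))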
5. Single Source Shortest Paths (Dijkstra's Algorithm)
Dijkstra's algorithm allows us to find the shortest path between any two vertices of a graph.
It differs from the minimum spanning tree because the shortest path between two vertices might not include all the vertices of the graph.
Dijkstra's Algorithm works on the basis that any subpath B -> D of the shortest path A -> D between vertices
A and D is also the shortest path between vertices B and D.
The algorithm uses a greedy approach in the sense that we find the next best solution hoping that the end
result is the best solution for the whole problem.
It is easier to start with an example and then think about the algorithm. The algorithm proceeds as follows:
• Choose a starting vertex; assign it a path value of 0 and assign infinity path values to all other vertices.
• Go to each vertex adjacent to the current vertex and update its path length (the current path length plus the edge weight).
• If the path length already recorded at the adjacent vertex is less than the new path length, don't update it.
• Avoid updating the path lengths of already visited vertices, and at each step visit the unvisited vertex with the smallest recorded path length.
Notice that a vertex can have its path length updated more than once as shorter routes are discovered.
We need to maintain the path distance of every vertex. We can store that in an array of size v, where v is
the number of vertices.
We also want to be able to get the shortest path, not only know the length of the shortest path. For this, we
map each vertex to the vertex that last updated its path length.
Once the algorithm is over, we can backtrack from the destination vertex to the source vertex to find the
path.
A minimum priority queue can be used to efficiently receive the vertex with least path distance.
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinity
        previous[V] <- NULL
        if V != S, add V to priority queue Q
    distance[S] <- 0
    while Q is not empty
        U <- extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
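A runnable Python version of this pseudocode using heapq as the min-priority queue (the graph format and names are assumptions of this illustration):

import heapq

# Dijkstra: graph maps vertex -> [(neighbour, edge_weight)].
def dijkstra(graph, source):
    distance = {v: float("inf") for v in graph}
    previous = {v: None for v in graph}     # for backtracking the path
    distance[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)            # unvisited vertex, least distance
        if d > distance[u]:
            continue                        # stale queue entry, skip it
        for v, w in graph[u]:
            if d + w < distance[v]:         # found a shorter path to v
                distance[v] = d + w
                previous[v] = u
                heapq.heappush(pq, (d + w, v))
    return distance, previous

g = {"A": [("B", 4), ("C", 2)], "B": [("C", 3), ("D", 1)],
     "C": [("B", 1), ("D", 5)], "D": []}
print(dijkstra(g, "A")[0])   # -> {'A': 0, 'B': 3, 'C': 2, 'D': 4}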
……………………………………………………………………………………………………………………………
1. Explain the general principle of Greedy method and also list the applications of Greedy method.
2. Solve the following instance of knapsack problem using greedy method. n=7(objects), m=15, profits
are (P1,P2,P3,P4,P5,P6,P7)=(10,5,15,7,6,18,3) and its corresponding weights are (W1,W2,W3,W4,
W5, W6, W7 )=(2,3,5,7,1,4,1).
3. State the Greedy Knapsack. Find an optimal solution to the Knapsack instance n=3, m=20, (P1, P2,
P3) = (25, 24, 15) and (W1, W2, W3) = (18, 15, 10)
4. Find an optimal solution to the knapsack instance n=7 objects and the capacity of knapsack m=15.
The profits and weights of the objects are (P1,P2,P3,P4,P5,P6,P7)=(10,5,15,7,6,18,3),
(W1,W2,W3,W4, W5,W6,W7)=(2,3,5,7,1,4,1) respectively.
5. Write and explain Prim's algorithm for finding the minimum cost spanning tree of a graph, with an example.
6. What is a Minimum Cost Spanning Tree? Explain Kruskal's minimum cost spanning tree algorithm with a suitable example.
7. What is a Spanning Tree? Explain Prim's minimum cost spanning tree algorithm with a suitable example.
8. What is an optimal merge pattern? Find the optimal merge pattern for ten files whose record lengths are 28, 32, 12, 5, 84, 53, 91, 35, 3, and 11.
9. Discuss the Dijkstra’s single source shortest path algorithm and derive its time complexity.
10. A motorist wishes to drive from city A to city B. Formulate greedy-based algorithms to generate the shortest path and explain with an example graph.
11. Discuss the single-source shortest paths algorithm with a suitable example