
UNIT II

Divide and Conquer: General Method, Defective chessboard, Binary Search, finding the maximum and
minimum, Merge sort, Quick sort.

The Greedy Method: The general Method, knapsack problem, minimum-cost spanning Trees, Optimal
Merge Patterns, Single Source Shortest Paths.

…………………………………………………………………………………………………………………………….

Master Theorem

The master method is a formula for solving recurrence relations of the form:

T(n) = aT(n/b) + f(n),

where,

n = size of input

a = number of subproblems in the recursion

n/b = size of each subproblem. All subproblems are assumed to have the same size.

f(n) = cost of the work done outside the recursive call, which includes the cost of dividing the problem and the cost of merging the solutions; f(n) = θ(n^k log^p n)

Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.

An asymptotically positive function means that for a sufficiently large value of n, we have f(n) > 0.

The master theorem is used in calculating the time complexity of recurrence relations (divide and conquer
algorithms) in a simple and quick way.

© www.tutorialtpoint.net. Prepared by D. Venkata Reddy, M.Tech (Ph.D), UGC NET, AP SET Qualified


Problem-01:

Solve the following recurrence relation using Master’s theorem: T(n) = 3T(n/2) + n^2

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

Then, we have-

a=3

b=2

k=2

p=0

Now, a = 3 and b^k = 2^2 = 4.

Clearly, a < b^k.

So, we follow case-03.

Since p = 0, we have-

T(n) = θ(n^k log^p n)

T(n) = θ(n^2 log^0 n)

Thus,

T(n) = θ(n^2)

Problem-02:

Solve the following recurrence relation using Master’s theorem-

T(n) = 2T(n/2) + n log n

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

Then, we have-

a=2

b=2

k=1

p=1

Now, a = 2 and b^k = 2^1 = 2.



Clearly, a = b^k.

So, we follow case-02.

Since p = 1, we have-

T(n) = θ(n^(log_b a) · log^(p+1) n)

T(n) = θ(n^(log_2 2) · log^(1+1) n)

Thus,

T(n) = θ(n log^2 n)

Problem-03:

Solve the following recurrence relation using Master’s theorem: T(n) = 2T(n/4) + n^0.51

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

Then, we have-

a=2

b=4

k = 0.51

p=0

Now, a = 2 and b^k = 4^0.51 ≈ 2.0279.

Clearly, a < b^k.

So, we follow case-03.

Since p = 0, we have-

T(n) = θ(n^k log^p n)

T(n) = θ(n^0.51 log^0 n)

Thus,

T(n) = θ(n^0.51)

Problem-04:

Solve the following recurrence relation using Master’s theorem-

T(n) = √2 T(n/2) + log n

Solution-

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).



Then, we have-

a = √2

b=2

k=0

p=1

Now, a = √2 = 1.414 and b^k = 2^0 = 1.

Clearly, a > b^k.

So, we follow case-01.

So, we have-

T(n) = θ(n^(log_b a))

T(n) = θ(n^(log_2 √2))

T(n) = θ(n^(1/2))

Thus,

T(n) = θ(√n)

Problem-05:

Solve the following recurrence relation using Master’s theorem: T(n) = 8T(n/4) − n^2 log n

Solution-

• The given recurrence relation does not correspond to the general form of Master’s theorem, because f(n) = −n^2 log n is not an asymptotically positive function.

• So, it cannot be solved using Master’s theorem.

Problem-06:

Solve the following recurrence relation using Master’s theorem: T(n) = 3T(n/3) + n/2

Solution-

• We write the given recurrence relation as T(n) = 3T(n/3) + n.

• This is because in the general form, we have θ for function f(n) which hides constants in it.

• Now, we can easily apply Master’s theorem.

We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

Then, we have-

a=3

b=3

k=1

p=0



Now, a = 3 and b^k = 3^1 = 3.

Clearly, a = b^k.

So, we follow case-02.

Since p = 0, we have-

T(n) = θ(n^(log_b a) · log^(p+1) n)

T(n) = θ(n^(log_3 3) · log^(0+1) n)

T(n) = θ(n^1 · log^1 n)

Thus,

T(n) = θ(n log n)
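The case analysis above is mechanical, so the six worked problems can be cross-checked with a small helper. The sketch below is my own rendering, not part of the theorem: the function name and output strings are illustrative, and only the p ≥ 0 sub-cases used in these problems are handled.

```python
import math

def master_theorem(a, b, k, p):
    """Classify T(n) = a*T(n/b) + theta(n^k log^p n) by the extended
    master theorem (p >= 0 only) and return the bound as a string."""
    bk = b ** k
    if a > bk:                       # case 1: recursion dominates
        e = math.log(a, b)
        return f"theta(n^{e:.2f})"
    if a == bk:                      # case 2: both contribute equally
        return f"theta(n^{k} * log^{p + 1} n)"
    # a < bk, case 3: the f(n) term dominates
    return f"theta(n^{k} * log^{p} n)" if p > 0 else f"theta(n^{k})"

print(master_theorem(3, 2, 2, 0))   # Problem-01 -> theta(n^2)
print(master_theorem(2, 2, 1, 1))   # Problem-02 -> theta(n^1 * log^2 n)
print(master_theorem(3, 3, 1, 0))   # Problem-06 -> theta(n^1 * log^1 n)
```

Running it on Problems 1, 2 and 6 reproduces θ(n^2), θ(n log^2 n) and θ(n log n) from the solutions above.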



Introduction to divide and conquer approach

In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and each sub-problem is solved independently. If we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. These "atomic", smallest possible sub-problems are solved directly. The solutions of all the sub-problems are finally merged to obtain the solution of the original problem.

Broadly, we can understand divide-and-conquer approach in a three-step process.

Divide/Break

This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of
the original problem. This step generally takes a recursive approach to divide the problem until no sub-problem
is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the
actual problem.

Conquer/Solve

This step receives many smaller sub-problems to be solved. Generally, at this level, the sub-problems are small enough to be considered 'solved' on their own.

Merge/Combine

When the smaller sub-problems are solved, this stage recursively combines them until they form the solution of the original problem. The approach works recursively, and the conquer and merge steps work so closely together that they often appear as one.

Examples



The following computer algorithms are based on divide-and-conquer programming approach −

• Merge Sort

• Quick Sort

• Binary Search

• Strassen's Matrix Multiplication

• Closest pair (points)

There are various ways available to solve any computer problem, but the mentioned are a good example of
divide and conquer approach.

Advantages of Divide and Conquer

• Divide and conquer successfully solves problems that are hard to attack directly, such as the Tower of Hanoi, a mathematical puzzle. It lessens the effort by dividing the main problem into halves and solving them recursively, and the resulting algorithms are often much faster than their straightforward counterparts.

• It uses cache memory efficiently, because small sub-problems can be solved entirely within the cache instead of repeatedly accessing the slower main memory.

• It is usually more efficient than its counterpart, the brute force technique.

• Since the sub-problems are independent, these algorithms naturally exhibit parallelism and can be handled by systems incorporating parallel processing with little modification.

Disadvantages of Divide and Conquer

• Since most of its algorithms are designed using recursion, the approach necessitates careful memory management.

• An explicit stack may overuse space.

• It may even crash the system if the recursion depth exceeds the capacity of the call stack.

1. General Method

Divide and conquer is a design strategy well known for breaking down efficiency barriers. When the method applies, it often leads to a large improvement in time complexity, for example from O(n^2) to O(n log n) for sorting n elements.

Divide and conquer strategy is as follows: divide the problem instance into two or more smaller instances of
the same problem, solve the smaller instances recursively, and assemble the solutions to form a solution of the
original instance. The recursion stops when an instance is reached which is too small to divide. When dividing
the instance, one can either use whatever division comes most easily to hand or invest time in making the
division carefully so that the assembly is simplified.

Control Abstraction of Divide and Conquer

A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified
by other procedures whose precise meanings are left undefined. The control abstraction for divide and conquer
technique is DANDC(P), where P is the problem to be solved.



DANDC (P)
{
    if SMALL (P) then
        return S (P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DANDC to each of these sub-problems;
        return COMBINE (DANDC (P1), DANDC (P2), ..., DANDC (Pk));
    }
}

SMALL (P) is a Boolean-valued function which determines whether the input size is small enough that the answer can be computed without splitting. If so, the function S is invoked; otherwise, the problem P is divided into smaller sub-problems. These sub-problems P1, P2, ..., Pk are solved by recursive applications of DANDC.

If the sizes of the two sub-problems are approximately equal, then the computing time of DANDC is:

T(n) = g(n)               if n is small
T(n) = 2T(n/2) + f(n)     otherwise

where,

T(n) is the time for DANDC on n inputs,

g(n) is the time to compute the answer directly for small inputs, and

f(n) is the time for the divide and combine steps.

2. Defective chessboard

• A chessboard is an n x n grid, where n = 2^k


• A defective chessboard is a chessboard that has one unavailable (defective) position.



• The problem is to tile (cover) all non-defective cells using a triomino.
• A triomino is an L shaped object that can cover three squares of a chessboard.

This problem can be solved using Divide and Conquer. Below is the recursive algorithm.

// n is the size of the given square, p is the location of the missing cell

Tile(int n, Point p)

1) Base case: n = 2. A 2 x 2 square with one cell missing is nothing but an L shape and can be filled with a single tile.

2) Place an L shaped tile at the center such that it does not cover the n/2 x n/2 sub-square that has the missing cell. Now all four sub-squares of size n/2 x n/2 have a missing cell (a cell that doesn't need to be filled). See figure 2 below.

3) Solve the problem recursively for the following four sub-squares. Let p1, p2, p3 and p4 be the positions of the 4 missing cells in the 4 squares.

a) Tile(n/2, p1)

b) Tile(n/2, p2)

c) Tile(n/2, p3)

d) Tile(n/2, p4)

The diagrams below show the working of the above algorithm.

Figure 2: After placing the first tile.

Figure 3: Recurring for the first sub-square.

Figure 4: Shows the first step in all four sub-squares.



In general, a 2^k x 2^k defective chessboard can be divided as –

Time Complexity:
The recurrence relation for the above recursive algorithm can be written as below, where C is a constant.
T(n) = 4T(n/2) + C

The above recurrence can be solved using the master method (a = 4, b = 2, k = 0, so a > b^k), and the time complexity is O(n^2).
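The tiling steps above can be sketched in runnable Python. This is an illustrative rendering under my own conventions: tile_board is a made-up name, the defective cell is marked -1, and the three cells of each tromino share a numeric id so the tiling can be checked.

```python
def tile_board(n, miss_r, miss_c):
    """Tile an n x n board (n a power of 2) with one defective cell at
    (miss_r, miss_c). Returns the board: -1 marks the defective cell,
    and every other cell carries the id of the tromino covering it."""
    board = [[0] * n for _ in range(n)]
    board[miss_r][miss_c] = -1
    counter = [0]                         # tromino id counter

    def tile(top, left, size, mr, mc):
        if size == 1:
            return
        counter[0] += 1
        t = counter[0]
        half = size // 2
        cr, cc = top + half, left + half  # centre of this sub-board
        # each quadrant's top-left corner, paired with its cell nearest the centre
        parts = [((top, left), (cr - 1, cc - 1)),
                 ((top, cc),   (cr - 1, cc)),
                 ((cr, left),  (cr, cc - 1)),
                 ((cr, cc),    (cr, cc))]
        for (r1, c1), (kr, kc) in parts:
            if r1 <= mr < r1 + half and c1 <= mc < c1 + half:
                tile(r1, c1, half, mr, mc)   # quadrant with the missing cell
            else:
                board[kr][kc] = t            # one arm of the central tromino
                tile(r1, c1, half, kr, kc)   # that arm is now this quadrant's "hole"

    tile(0, 0, n, miss_r, miss_c)
    return board

board = tile_board(8, 0, 0)
print(all(board[r][c] != 0 for r in range(8) for c in range(8)))  # True: fully tiled
```

For an 8 x 8 board this places (64 − 1)/3 = 21 trominoes, each covering exactly three cells.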



3. Binary Search

The binary search algorithm finds a given element in a list of elements with O(log n) time complexity, where n is the total number of elements in the list. Binary search can be used only with a sorted list of elements, that is, a list whose elements are already arranged in order.

The search process starts by comparing the search element with the middle element of the list. If both match, the result is "element found". Otherwise, we check whether the search element is smaller or larger than the middle element. If the search element is smaller, we repeat the same process on the left sublist of the middle element.

If the search element is larger, we repeat the same process on the right sublist of the middle element. We repeat this process until we find the search element or until we are left with a sublist of only one element. If that element also doesn't match the search element, the result is "Element not found in the list".

Iterative method:

binarySearch(arr, x, low, high)

    repeat until the pointers low and high meet each other
        mid = (low + high) / 2
        if (x == arr[mid])
            return mid
        else if (x > arr[mid])   // x is on the right side
            low = mid + 1
        else                     // x is on the left side
            high = mid - 1
    return -1                    // element not found
Recursive method:

binarySearch(arr, x, low, high)

if (low > high)

return False

else

mid = (low + high) / 2

if x == arr[mid]

return mid

else if x > arr[mid] // x is on the right side

return binarySearch(arr, x, mid + 1, high)

else // x is on the left side

return binarySearch(arr, x, low, mid - 1)

The recurrence relation for the above recursive algorithm is

T(n) = 1              if n = 1
T(n) = T(n/2) + 1     if n > 1

To perform binary search time complexity analysis, we apply the master theorem to the equation and get
O(log n).

Binary search algorithm assumes the input data to be sorted. It takes following steps to find some key in the
input data.



1. To search for a key in the search space, i.e. some list of data points, we find the mid point of the data and check whether the key is present there. If the key is found, we stop further iteration; in this case the time complexity is O(1), the best case. Otherwise, we move to the next step.

2. Determine whether the key lies in the left or the right half by comparing it with the current data item.

3. Repeat steps 1 and 2 until there is a match or there are no further points to search.

As we can see from the above steps, the binary search algorithm breaks the search space in half in each iteration.

So how many times do we need to divide by 2 until we have only one element?

n / 2^k = 1

We can rewrite it as

2^k = n

By taking log on both sides, we get

k = log2 n

So, in the average and worst cases, the time complexity of the binary search algorithm is O(log n).
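The iterative pseudocode above translates almost line for line into runnable Python. This is an illustrative sketch (binary_search is my own name, and the list must already be sorted in ascending order):

```python
def binary_search(arr, x):
    """Iterative binary search on a sorted list.
    Returns the index of x, or -1 if x is not present."""
    low, high = 0, len(arr) - 1
    while low <= high:                # until the pointers meet
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        elif x > arr[mid]:            # x lies in the right half
            low = mid + 1
        else:                         # x lies in the left half
            high = mid - 1
    return -1                         # element not found

data = [10, 20, 30, 40, 50, 60, 70]
print(binary_search(data, 40))        # 3
print(binary_search(data, 35))        # -1
```

Each iteration halves the remaining search space, so at most ⌈log2 n⌉ + 1 comparisons of x against array elements are performed.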

4. Finding the maximum and minimum

Problem Statement

The Max-Min Problem in algorithm analysis is finding the maximum and minimum value in an array.

Solution

To find the maximum and minimum numbers in a given array numbers[] of size n, the following algorithms can be used. First we present the naïve method, and then the divide and conquer approach.

Naïve Method

Naïve method is a basic method to solve any problem. In this method, the maximum and minimum number
can be found separately. To find the maximum and minimum numbers, the following straightforward
algorithm can be used.

Algorithm: Max-Min-Element (numbers[])

max := numbers[1]

min := numbers[1]

for i = 2 to n do

    if numbers[i] > max then
        max := numbers[i]

    if numbers[i] < min then
        min := numbers[i]

return (max, min)

Analysis

The number of comparisons in the naïve method is 2n − 2.

The number of comparisons can be reduced using the divide and conquer approach. Following is the
technique.

Divide and Conquer Approach

In this approach, the array is divided into two halves. Then using recursive approach maximum and
minimum numbers in each halves are found. Later, return the maximum of two maxima of each half and the
minimum of two minima of each half.

In this given problem, the number of elements in an array is y−x+1 , where y is greater than or equal to x.

Max−Min(x,y) will return the maximum and minimum values of an array numbers[x...y]
Algorithm: Max - Min(x, y)

if y − x ≤ 1 then

    return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))

else

    (max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)

    (max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)

    return (max(max1, max2), min(min1, min2))

Analysis

Let T(n) be the number of comparisons made by Max−Min(x, y), where the number of elements n = y − x + 1.

If T(n) represents the number of comparisons, then the recurrence relation can be represented as

T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2     for n > 2
T(2) = 1
T(1) = 0

When n is a power of 2, this solves to T(n) = 3n/2 − 2 comparisons.


Compared to Naïve method, in divide and conquer approach, the number of comparisons is less. However,
using the asymptotic notation both of the approaches are represented by O(n).
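The Max−Min algorithm above can be rendered as runnable Python. This is a sketch under my own conventions (max_min is an illustrative name, and indices follow Python's 0-based convention rather than the pseudocode's 1-based one):

```python
def max_min(nums, x, y):
    """Return (maximum, minimum) of nums[x..y] by divide and conquer."""
    if y - x <= 1:
        # base case: one or two elements
        return (max(nums[x], nums[y]), min(nums[x], nums[y]))
    mid = (x + y) // 2
    max1, min1 = max_min(nums, x, mid)       # left half
    max2, min2 = max_min(nums, mid + 1, y)   # right half
    # combine: one comparison for the max, one for the min
    return (max(max1, max2), min(min1, min2))

data = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(data, 0, len(data) - 1))       # (60, -8)
```

The two comparisons in the combine step correspond to the "+ 2" term of the recurrence above.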

5. Merge sort

Merge sort is a sorting algorithm that uses the divide and conquer strategy. In this method, the division is carried out dynamically.

Sorting by merging is a recursive, divide-and-conquer strategy. In the base case, we have a sequence with exactly one element in it. Since such a sequence is already sorted, there is nothing to be done. To sort a sequence of n elements (n > 1):

• Divide the sequence into two sequences of length ⌈n/2⌉ and ⌊n/2⌋;
• Recursively sort each of the two subsequences; and then
• Merge the sorted subsequences to obtain the final list.



Divide: The divide step just computes the middle of the subarray, which takes constant time, O(1).

Conquer: We recursively solve two subproblems, each of size n/2, which contributes T(n/2) + T(n/2).

Combine: The merge procedure on an n-element subarray takes time O(n).

void merge_sort (int A[], int start, int end)
{
    if (start < end) {

        int mid = (start + end) / 2;   // divides the current array into 2 parts

        merge_sort (A, start, mid);    // sort the 1st part of the array

        merge_sort (A, mid + 1, end);  // sort the 2nd part of the array

        // merge both parts by comparing elements of both the parts
        merge (A, start, mid, end);
    }
}

Merge sort is a stable sorting algorithm. A sorting algorithm is said to be stable if it preserves the relative order of equal elements after sorting, and merge sort preserves this ordering. Hence merge sort is a stable sorting algorithm.

Drawbacks:

• This algorithm requires extra storage to execute this method


• This method is slower than the quick sort method
• This method is complicated to code.

Time Complexity: O(n log n). Merge sort is a recursive algorithm, and its time complexity can be expressed by the following recurrence relation:

T(n) = 2T(n/2) + θ(n)
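As a runnable sketch of the divide, conquer and combine steps, here is a Python version that also spells out the merge procedure the C fragment above delegates to merge(). It is illustrative, not the textbook's exact routine: merge_sort here returns a new sorted list rather than sorting in place.

```python
def merge_sort(a):
    """Sort list a by divide and conquer; returns a new sorted list."""
    if len(a) <= 1:                      # base case: already sorted
        return a
    mid = len(a) // 2                    # divide: O(1)
    left = merge_sort(a[:mid])           # conquer the 1st half
    right = merge_sort(a[mid:])          # conquer the 2nd half
    # combine: merge the two sorted halves in O(n)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # append whichever half remains
    merged.extend(right[j:])
    return merged

print(merge_sort([43, 32, 22, 78, 63, 57, 91, 13]))
# [13, 22, 32, 43, 57, 63, 78, 91]
```

The slices left and right are the extra O(n) storage mentioned under the drawbacks above.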



6. Quick sort

Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot and partitions
the given array around the picked pivot. There are many different versions of quickSort that pick pivot in
different ways.

• Always pick the first element as a pivot.


• Always pick the last element as a pivot (implemented below)
• Pick a random element as a pivot.
• Pick median as the pivot.

The key process in quickSort is the partition() procedure. Given an array and an element x of the array as the pivot, the target of partition is to put x at its correct position in the sorted array, with all smaller elements (smaller than x) before x and all greater elements (greater than x) after x. All this should be done in linear time.

Partition Algorithm:

There can be many ways to do partition, following pseudo-code adopts the method given in the CLRS book.
The logic is simple, we start from the leftmost element and keep track of the index of smaller (or equal to)
elements as i. While traversing, if we find a smaller element, we swap the current element with arr[i].
Otherwise, we ignore the current element.

Pseudo Code for recursive QuickSort function:

/* low -> starting index, high -> ending index */

quickSort(arr[], low, high) {

    if (low < high) {

        /* pi is the partitioning index, arr[pi] is now at the right place */
        pi = partition(arr, low, high);

        quickSort(arr, low, pi - 1);    // Before pi
        quickSort(arr, pi + 1, high);   // After pi
    }
}
Pseudo code for partition()

/* This function takes last element as pivot, places the pivot element at its correct position in sorted array,
and places all smaller (smaller than pivot) to left of pivot and all greater elements to right of pivot */

partition (arr[], low, high)
{
    // pivot (element to be placed at its right position)
    pivot = arr[high];

    i = low - 1    // index of the smaller element; indicates the
                   // right position of the pivot found so far

    for (j = low; j <= high - 1; j++) {

        // If the current element is smaller than the pivot
        if (arr[j] < pivot) {
            i++;   // increment index of smaller element
            swap arr[i] and arr[j]
        }
    }

    swap arr[i + 1] and arr[high]

    return (i + 1)
}

To understand the working of quick sort, let's take an unsorted array. It will make the concept more clear
and understandable.

Let the elements of the array be: 24, 9, 29, 14, 19, 27.

In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24, a[right] = 27 and
a[pivot] = 24.

Since the pivot is at the left, the algorithm starts from the right and moves towards the left.

Now, a[pivot] < a[right], so the algorithm moves one position towards the left, i.e. -



Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to the right, as -

Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since, pivot is at right, so algorithm starts from left and
moves to right.

As a[pivot] > a[left], so algorithm moves one position to right as -

Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], so algorithm moves one position
to right as -



Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], so, swap a[pivot] and a[left],
now pivot is at left, i.e. -

Since, pivot is at left, so algorithm starts from right, and move to left. Now, a[left] = 24, a[right] = 29, and
a[pivot] = 24. As a[pivot] < a[right], so algorithm moves one position to left, as -

Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap a[pivot] and a[right],
now pivot is at right, i.e. -

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm starts from left and
move to right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right all point to the same element. This represents the termination of the procedure.

Element 24, which is the pivot element is placed at its exact position.

Elements on the right side of element 24 are greater than it, and elements on the left side of element 24 are smaller than it.



Now, in a similar manner, the quick sort algorithm is applied separately to the left and right sub-arrays. After the sorting is done, the array will be 9, 14, 19, 24, 27, 29.

Analysis of quick sort

Worst Case Analysis: This is the case when the items are already sorted and we try to sort them again; every partition is then maximally unbalanced, which takes a lot of time.

Equation:

T(n) = T(1) + T(n - 1) + n

T(1) is the time taken by the pivot element.

T(n - 1) is the time taken by the remaining elements except the pivot element.

n is the number of comparisons required to place the pivot at its exact position.

For example, if we compare the first-element pivot of a 6-element array with the other elements, there are 5 comparisons; in general, there are n - 1 comparisons for n items. Expanding the recurrence gives n + (n - 1) + ... + 1, so the worst-case time complexity is O(n^2).

Here's a tree of the subproblem sizes with their partitioning times:



Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is the middle element or
near to the middle element. The best-case time complexity of quicksort is O(n*logn).

Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly
ascending and not properly descending. The average case time complexity of quicksort is O(n*logn).

• Best case scenario: The best case occurs when the partitions are as evenly balanced as possible, i.e. their sizes on either side of the pivot element are equal or differ by 1.

o Case 1: The sizes of the sublists on either side of the pivot become equal when the subarray has an odd number of elements and the pivot is right in the middle after partitioning. Each partition then has (n-1)/2 elements.

o Case 2: The sizes differ by 1 when the subarray has an even number n of elements. One partition has n/2 elements and the other has (n/2)-1.

In either of these cases, each partition will have at most n/2 elements, and the tree representation of the
subproblem sizes will be as below:

The best-case complexity of the quick sort algorithm is O(n logn)
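The quickSort and partition pseudocode above (last element as pivot, CLRS-style partition) can be rendered as runnable Python. This is an illustrative sketch; it sorts the list in place.

```python
def partition(arr, low, high):
    """Partition arr[low..high] around arr[high] as pivot.
    Returns the final index of the pivot."""
    pivot = arr[high]
    i = low - 1                          # boundary of elements smaller than pivot
    for j in range(low, high):
        if arr[j] < pivot:               # current element belongs left of pivot
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place pivot
    return i + 1

def quick_sort(arr, low, high):
    """In-place quicksort of arr[low..high]."""
    if low < high:
        pi = partition(arr, low, high)   # pivot is now in its final place
        quick_sort(arr, low, pi - 1)     # before pi
        quick_sort(arr, pi + 1, high)    # after pi

data = [25, 10, 72, 18, 40, 11, 64, 58, 32, 9]
quick_sort(data, 0, len(data) - 1)
print(data)   # [9, 10, 11, 18, 25, 32, 40, 58, 64, 72]
```

Note this differs from the two-pointer walkthrough traced above; both place the pivot at its exact position in linear time per call.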



University Previous Year Questions topic wise

Divide and Conquer

1. Write the general method of Divide – And – Conquer approach.


2. What is the Defective Chessboard problem and give its solution using divide and conquer method?
3. Describe binary search in detail and provide time complexity analysis with an example
4. Write a recursive algorithm for binary search and also bring out its efficiency.
5. Discuss the time complexity of Binary search algorithm for best and worst case.
6. With a suitable algorithm, explain the problem of finding the maximum and minimum items in a set
of n elements.
7. Explain divide-and-conquer technique; write a recursive algorithm for finding the maximum and
minimum element from the list
8. Given 2 sorted lists of numbers. Write the algorithm to merge them and analyze its time complexity.
9. Discuss the working strategy of merge sort and illustrate the process of merge sort algorithm for the
given data: 43, 32, 22, 78, 63, 57, 91 and 13
10. Apply Merge Sort to sort the list a[1:10] = (31, 28, 17, 65, 35, 42, 86, 25, 45, 52). Draw the tree of recursive calls of the merge sort and merge functions.
11. Illustrate the tracing of the quick sort algorithm for the following set of numbers: 25, 10, 72, 18, 40, 11, 64, 58, 32, 9.
12. Write Divide – And – Conquer recursive Quick sort algorithm and analyze the algorithm for average
time complexity.
13. Derive the time complexity of the Quicksort algorithm for the worst case.
14. For T(n) = 7T(n/2) + 18n^2, solve the recurrence relation and find the time complexity.
15. What are different approaches of writing randomized algorithm? Write randomized sort algorithms



Chapter-II

The Greedy Method: The general Method, knapsack problem, minimum-cost spanning Trees, Optimal
Merge Patterns, Single Source Shortest Paths

……………………………………………………………………………………………………………………………..

Among all the algorithmic approaches, the simplest and most straightforward is the Greedy method. In this approach, decisions are taken on the basis of currently available information, without worrying about the effect of the current decision in the future.

Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an immediate benefit. This approach never reconsiders the choices taken previously. It is mainly used to solve optimization problems, is easy to implement, and is quite efficient in most cases. Hence, we can say that a greedy algorithm is an algorithmic paradigm based on a heuristic that follows the locally optimal choice at each step with the hope of finding a globally optimal solution.

In many problems, it does not produce an optimal solution though it gives an approximate (near optimal)
solution in a reasonable time.

Components of Greedy Algorithm

Greedy algorithms have the following five components −

• A candidate set − A solution is created from this set.

• A selection function − Used to choose the best candidate to be added to the solution.

• A feasibility function − Used to determine whether a candidate can be used to contribute to the
solution.

• An objective function − Used to assign a value to a solution or a partial solution.

• A solution function − Used to indicate whether a complete solution has been reached.

Areas of Application

Greedy approach is used to solve many problems, such as

• Finding the shortest path between two vertices using Dijkstra’s algorithm.

• Finding the minimal spanning tree in a graph using Prim’s /Kruskal’s algorithm, etc.

Where Greedy Approach Fails

In many problems, the greedy algorithm fails to find an optimal solution and may even produce a poor solution. Problems like Travelling Salesman and 0/1 Knapsack cannot be solved optimally using this approach.

General method of greedy


Algorithm Greedy(a, n)
{
    solution := ∅;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := solution + x;
    }
    return solution;
}

2. Knapsack problem
Given a set of items, each with a weight and a value, determine a subset of items to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
The knapsack problem is a combinatorial optimization problem. It appears as a subproblem in many more complex mathematical models of real-world problems. One general approach to difficult problems is to identify the most restrictive constraint, ignore the others, solve a knapsack problem, and somehow adjust the solution to satisfy the ignored constraints.
Applications
In many cases of resource allocation under some constraint, the problem can be formulated in a way similar to the knapsack problem. Following is a set of examples.
• Finding the least wasteful way to cut raw materials
• portfolio optimization
• Cutting stock problems
Problem Scenario
A thief is robbing a store and can carry a maximal weight of W into his knapsack. There are n items available
in the store and weight of ith item is wi and its profit is pi. What items should the thief take?
In this context, the items should be selected in such a way that the thief will carry those items for which he will
gain maximum profit. Hence, the objective of the thief is to maximize the profit.
Based on the nature of the items, Knapsack problems are categorized as
• Fractional Knapsack
• Knapsack
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions of items.
According to the problem statement,
• There are n items in the store
• The weight of the ith item is wi > 0
• The profit of the ith item is pi > 0
• The capacity of the knapsack is W
In this version of the knapsack problem, items can be broken into smaller pieces. So, the thief may take only a fraction xi of the ith item, where 0 ≤ xi ≤ 1.
The ith item contributes the weight xi·wi to the total weight in the knapsack and the profit xi·pi to the total profit.
Hence, the objective of this algorithm is to maximize Σ pi·xi subject to Σ wi·xi ≤ W and 0 ≤ xi ≤ 1.



It is clear that an optimal solution must fill the knapsack exactly; otherwise we could add a fraction of one of the remaining items and increase the overall profit.
Thus, an optimal solution can be obtained by the following algorithm.

Algorithm: Greedy-Fractional-Knapsack (w[1..n], p[1..n], W)
// items are assumed to be sorted in non-increasing order of p[i]/w[i]

for i = 1 to n
    do x[i] = 0
weight = 0
for i = 1 to n
    if weight + w[i] ≤ W then
        x[i] = 1
        weight = weight + w[i]
    else
        x[i] = (W - weight) / w[i]
        weight = W
        break
return x
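The greedy procedure above can be sketched in Python as follows. This is a minimal sketch: the item weights and profits in the usage example are taken from the worked example below (W = 60, weights 10/40/20, profits 100/280/120, as implied by its totals), and the sort step is made explicit rather than assumed.

```python
# Greedy fractional knapsack: sort items by profit/weight ratio, then
# fill the knapsack greedily, taking a fraction of the last item if needed.
def fractional_knapsack(weights, profits, capacity):
    n = len(weights)
    # Indices sorted by non-increasing profit-to-weight ratio
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * n            # fraction of each item taken
    remaining = capacity
    total_profit = 0.0
    for i in order:
        if weights[i] <= remaining:
            x[i] = 1.0                       # whole item fits
            remaining -= weights[i]
            total_profit += profits[i]
        else:
            x[i] = remaining / weights[i]    # take only a fraction
            total_profit += profits[i] * x[i]
            break
    return x, total_profit

# Values from the worked example: W = 60
x, profit = fractional_knapsack([10, 40, 20], [100, 280, 120], 60)
print(profit)   # 440.0
```

Sorting dominates the running time, so the total time is O(n log n), matching the analysis below.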



Analysis
If the provided items are already sorted into a non-increasing order of pi/wi, then the loop takes time in
O(n); therefore, the total time including the sort is in O(n log n).
Example

Let us consider that the capacity of the knapsack W = 60 and the list of provided items are shown in the
following table –

Solution

After sorting all the items according to pi/wi

First all of B is chosen as weight of B is less than the capacity of the knapsack. Next, item A is chosen, as the
available capacity of the knapsack is greater than the weight of A. Now, C is chosen as the next item.
However, the whole item cannot be chosen as the remaining capacity of the knapsack is less than the weight
of C.

Hence, fraction of C (i.e. (60 − 50)/20) is chosen.

Now, the capacity of the Knapsack is equal to the selected items. Hence, no more item can be selected.

The total weight of the selected items is 10 + 40 + 20 * (10/20) = 60

And the total profit is 100 + 280 + 120 * (10/20) = 380 + 60 = 440

This is the optimal solution. We cannot gain more profit selecting any different combination of items.



3. minimum-cost spanning Trees
What is spanning Tree?

A spanning tree is a subset of graph G that covers all the vertices with the minimum possible number of
edges. Hence, a spanning tree does not have cycles and cannot be disconnected.

By this definition, we can draw a conclusion that every connected and undirected Graph G has at least one
spanning tree. A disconnected graph does not have any spanning tree, as it cannot be spanned to all its
vertices.

We can find three spanning trees from one complete graph. A complete undirected graph can have at most
n^(n-2) spanning trees, where n is the number of nodes. In the above example, n is 3, hence

3^(3-2) = 3 spanning trees are possible.

Properties of Spanning Tree

• A spanning tree has n - 1 edges, where n is the number of nodes (vertices).

• From a complete graph, by removing at most e - n + 1 edges, we can construct a spanning tree.

• A complete graph can have at most n^(n-2) spanning trees.

Thus, we can conclude that spanning trees are a subset of connected Graph G and disconnected graphs do
not have spanning tree.



Minimum Spanning Tree (MST)

In a weighted graph, a minimum spanning tree is a spanning tree whose total weight is no larger than that of
any other spanning tree of the same graph. In real-world situations, this weight can be measured as distance,
congestion, traffic load or any arbitrary value assigned to the edges.

Minimum Spanning-Tree Algorithm

We shall learn about the two most important spanning tree algorithms here −

• Kruskal's Algorithm

• Prim's Algorithm

Both are greedy algorithms.

1. Prim’s Algorithm

• Prim’s Algorithm is a famous greedy algorithm.

• It is used for finding the Minimum Spanning Tree (MST) of a given graph.

• To apply Prim’s algorithm, the given graph must be weighted, connected and undirected.

Prim’s Algorithm Implementation-

The implementation of Prim’s Algorithm is explained in the following steps-

Step-01:

• Randomly choose any vertex.

• The vertex incident to the edge with the least weight is usually selected.

Step-02:

• Find all the edges that connect the tree to new vertices.

• Find the least weight edge among those edges and include it in the existing tree.

• If including that edge creates a cycle, then reject that edge and look for the next least weight edge.

Step-03:

• Keep repeating step-02 until all the vertices are included and Minimum Spanning Tree (MST) is
obtained.

Prim’s Algorithm Time Complexity-

Worst case time complexity of Prim’s Algorithm is-

• O(ElogV) using binary heap

• O(E + VlogV) using Fibonacci heap
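The steps above can be sketched in Python using a binary heap, which gives the O(E log V) bound. This is a sketch: the adjacency-list representation and the small example graph are assumptions, not the graph from the worked example below.

```python
import heapq

# Prim's algorithm with a binary heap (O(E log V)).
# graph: dict mapping vertex -> list of (weight, neighbour) pairs,
# describing a connected, weighted, undirected graph.
def prims_mst(graph, start):
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    cost, tree = 0, []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)    # least-weight edge leaving the tree
        if v in visited:
            continue                     # rejecting this edge avoids a cycle
        visited.add(v)
        cost += w
        tree.append((u, v, w))
        for w2, nxt in graph[v]:         # offer the new vertex's edges
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return cost, tree

# Small example graph (each undirected edge listed in both directions)
graph = {
    "a": [(1, "b"), (4, "c")],
    "b": [(1, "a"), (2, "c")],
    "c": [(4, "a"), (2, "b")],
}
print(prims_mst(graph, "a")[0])   # 3
```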



Example: Construct the minimum spanning tree (MST) for the given graph using Prim’s Algorithm-



Step-1:

• Randomly choose any vertex. (Here vertex 1)

• The vertex incident to the edge with the least weight is usually selected.

Step-2: Now we are at node 6. It has two adjacent edges; one is already selected, so select the second one.

Step-3: Now we are at node 5. It has three edges connected; one is already selected. From the remaining two,
select the minimum cost edge (that is, the one having minimum weight) such that no loop is formed by adding
it.

Step-4: Now we are at node 4. Select the minimum cost edge from the edges connected to this node, such
that no loop is formed by adding it.



Step-5: Now we are at node 3. Since its minimum cost edge is already selected, to reach node 2 we select the
edge with cost 16. Then the MST is

Step-6: Now we are at node 2. Select the minimum cost edge from the edges attached to this node, such that
no loop is formed by adding it.

Since all the vertices have been included in the MST, so we stop.

Now, Cost of Minimum Spanning Tree

= Sum of all edge weights

= 10 + 25 + 22 + 12 + 16 + 14

= 99 units

Time Complexity: O(V²) using an adjacency matrix. If the input graph is represented using an adjacency list,
the time complexity of Prim's algorithm can be reduced to O(E log V) with the help of a binary heap. In this
implementation, we always consider the spanning tree to start from the root of the graph.



2. Kruskal’s Algorithm-

• Kruskal’s Algorithm is a famous greedy algorithm.

• It is used for finding the Minimum Spanning Tree (MST) of a given graph.

• To apply Kruskal’s algorithm, the given graph must be weighted, connected and undirected.

Kruskal’s Algorithm Implementation-

The implementation of Kruskal’s Algorithm is explained in the following steps-

Step-01: Sort all the edges from low weight to high weight.

Step-02:

• Take the edge with the lowest weight and use it to connect the vertices of graph.

• If adding an edge creates a cycle, then reject that edge and go for the next least weight edge.

Step-03:

Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.
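The three steps above can be sketched in Python using a disjoint-set (union-find) structure to detect cycles. The edge-list representation and the small usage example are assumptions for illustration.

```python
# Kruskal's algorithm with union-find (path halving for near-constant finds).
def kruskal(n, edges):
    """edges: list of (weight, u, v) tuples; vertices numbered 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    cost, tree = 0, []
    for w, u, v in sorted(edges):           # Step-01: sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # Step-02: skip cycle-forming edges
            parent[ru] = rv                 # union the two components
            cost += w
            tree.append((u, v, w))
    return cost, tree

edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2)]
print(kruskal(3, edges)[0])   # 3
```

Sorting the edges costs O(E log E), which dominates the union-find operations, matching the analysis below.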



Analysis: Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm
can be shown to run in O(E log E) time, or equivalently O(E log V) time, all with simple data structures. These
running times are equivalent because:

• E is at most V², and log V² = 2 log V is O(log V).

• If we ignore isolated vertices, which will each form their own component of the minimum spanning
forest, V ≤ 2E, so log V is O(log E).

Thus, the total time is

O(E log E) = O(E log V).



4. Optimal Merge Patterns

Given n number of sorted files, the task is to find the minimum computations done to reach the Optimal
Merge Pattern.
When two or more sorted files are to be merged to form a single file, the minimum total computation needed
to produce this file is known as the Optimal Merge Pattern.

If more than 2 files need to be merged, it can be done in pairs. For example, if we need to merge 4 files A,
B, C, D: first merge A with B to get X1, merge X1 with C to get X2, and merge X2 with D to get X3 as the output
file.

If we have two files of sizes m and n, the total computation time will be m+n. Here, we use the greedy
strategy by merging the two smallest size files among all the files present.

Examples:
Given 3 files with sizes 2, 3, 4 units. Find an optimal way to combine these files

Input: n = 3, size = {2, 3, 4}


Output: 14
Explanation: There are different ways to combine these files:
Method 1: Optimal method

Method 2:



Method 3:

Input: n = 6, size = {2, 3, 4, 5, 6, 7}


Output: 68
Explanation: Optimal way to combine these files

Input: n = 5, size = {5,10,20,30,30}


Output: 205

Input: n = 5, size = {8,8,8,8,8}


Output: 96

Observations:

From the above results, we may conclude that for finding the minimum cost of computation we need to
have our array always sorted, i.e., add the minimum possible computation cost and remove the files from
the array. We can achieve this optimally using a min-heap(priority-queue) data structure.



Approach:

A node represents a file with a given size; the number of files is assumed to be greater than 2.

1. Add all the nodes to a priority queue (min-heap), keyed by file size.

2. Initialize count = 0 (a variable to store the total computation cost).

3. Repeat while the size of the priority queue is greater than 1:

1. weight = extract-min from the queue (remove the smallest file).

2. weight += extract-min from the queue (remove the second smallest file).

3. count += weight.

4. Add the combined size weight back to the priority queue.

4. count is the final answer.

Time Complexity: O(nlogn)

Auxiliary Space: O(n)
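The approach above can be sketched in Python with the standard heapq module; the usage lines reproduce the example inputs from this section.

```python
import heapq

# Optimal merge pattern: repeatedly merge the two smallest files,
# maintained in a min-heap, and accumulate the merge costs.
def optimal_merge_cost(sizes):
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)       # smallest file
        b = heapq.heappop(heap)       # second smallest file
        total += a + b                # cost of this merge
        heapq.heappush(heap, a + b)   # merged file goes back into the heap
    return total

print(optimal_merge_cost([2, 3, 4]))             # 14
print(optimal_merge_cost([2, 3, 4, 5, 6, 7]))    # 68
print(optimal_merge_cost([5, 10, 20, 30, 30]))   # 205
print(optimal_merge_cost([8, 8, 8, 8, 8]))       # 96
```

Each of the n - 1 merges performs O(log n) heap operations, giving the O(n log n) time stated above.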

4.1 Huffman coding

Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input
characters, lengths of the assigned codes are based on the frequencies of corresponding characters. The most
frequent character gets the smallest code and the least frequent character gets the largest code.

A Huffman tree (or Huffman coding tree) is defined as a full binary tree in which each leaf of the tree
corresponds to a letter in the given alphabet.

The Huffman tree is the binary tree with minimum external path weight, that is, the one with the minimum
sum of weighted path lengths for the given set of leaves. So the goal is to construct a tree with minimum
external path weight.

An example is given below-

Letter frequency table

Letter     z  k  m   c   u   d   l   e
Frequency  2  7  24  32  37  42  42  120

Huffman code

Letter  Freq  Code    Bits

e       120   0       1
d       42    101     3
l       42    110     3
u       37    100     3
c       32    1110    4
m       24    11111   5
k       7     111101  6
z       2     111100  6

The Huffman tree (for the above example) is given below -
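The construction can be sketched in Python with a min-heap, repeatedly merging the two least-frequent nodes. This is a sketch: the tuple-based node representation and the tie-breaking counter are implementation choices, not part of the original notes.

```python
import heapq
from itertools import count

# Huffman coding via a min-heap: repeatedly merge the two nodes with the
# smallest frequencies, then read codes off the resulting tree.
def huffman_codes(freq):
    """freq: dict letter -> frequency. Returns dict letter -> code string."""
    tiebreak = count()                      # keeps heap entries comparable
    heap = [(f, next(tiebreak), ch) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # least frequent node
        f2, _, right = heapq.heappop(heap)  # second least frequent node
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                               # leaf: record the code
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

freq = {"z": 2, "k": 7, "m": 24, "c": 32, "u": 37, "d": 42, "l": 42, "e": 120}
codes = huffman_codes(freq)
print(len(codes["e"]), len(codes["z"]))   # 1 6
```

The exact bit patterns depend on how ties are broken, but the code lengths (1, 3, 3, 3, 4, 5, 6, 6) agree with the table above.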

5. Single Source Shortest Paths.

Dijkstra's algorithm allows us to find the shortest path from a source vertex to every other vertex of a graph.

It differs from the minimum spanning tree because the shortest path between two vertices might not
include all the vertices of the graph.

How Dijkstra's Algorithm works

Dijkstra's Algorithm works on the basis that any subpath B -> D of the shortest path A -> D between vertices
A and D is also the shortest path between vertices B and D.

Each subpath is the shortest path



Dijkstra used this property in the opposite direction, i.e., we overestimate the distance of each vertex from the
starting vertex. Then we visit each node and its neighbours to find the shortest subpath to those neighbours.

The algorithm uses a greedy approach in the sense that we find the next best solution hoping that the end
result is the best solution for the whole problem.

Example of Dijkstra's algorithm

It is easier to start with an example and then think about the algorithm.

Start with a weighted graph

Choose a starting vertex and assign infinity path values to all other devices



Go to each vertex and update its path length

If the existing path length of the adjacent vertex is less than the new path length, don't update it

Avoid updating path lengths of already visited vertices



After each iteration, we pick the unvisited vertex with the least path length. So we choose 5 before 7

Notice how the rightmost vertex has its path length updated twice

Repeat until all the vertices have been visited



Dijkstra's algorithm pseudocode

We need to maintain the path distance of every vertex. We can store that in an array of size v, where v is
the number of vertices.

We also want to be able to get the shortest path, not only know the length of the shortest path. For this, we
map each vertex to the vertex that last updated its path length.

Once the algorithm is over, we can backtrack from the destination vertex to the source vertex to find the
path.

A minimum priority queue can be used to efficiently retrieve the vertex with the least path distance.

function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to Priority Queue Q
    distance[S] <- 0

    while Q IS NOT EMPTY
        U <- Extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
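The pseudocode above can be sketched in Python with heapq acting as the min-priority queue. The adjacency-list representation and the small example graph are assumptions for illustration; edge weights are assumed non-negative, as Dijkstra's algorithm requires.

```python
import heapq

# Dijkstra's algorithm with a min-priority queue of (distance, vertex) pairs.
# graph: dict mapping vertex -> list of (neighbour, weight) pairs.
def dijkstra(graph, source):
    distance = {v: float("inf") for v in graph}   # overestimate all distances
    previous = {v: None for v in graph}           # last updater of each vertex
    distance[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)      # unvisited vertex with least path length
        if d > distance[u]:
            continue                  # stale queue entry, skip it
        for v, w in graph[u]:
            alt = d + w
            if alt < distance[v]:     # found a shorter path to v through u
                distance[v] = alt
                previous[v] = u
                heapq.heappush(pq, (alt, v))
    return distance, previous

graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("C", 3), ("D", 1)],
    "C": [("B", 1), ("D", 5)],
    "D": [],
}
dist, prev = dijkstra(graph, "A")
print(dist["D"])   # 4
```

Backtracking through `previous` from a destination to the source recovers the shortest path itself, as described above.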

……………………………………………………………………………………………………………………………

University Previous Year Questions topic wise

Divide and Conquer

16. Write the general method of Divide – And – Conquer approach.


17. What is the Defective Chessboard problem and give its solution using divide and conquer method?
18. Describe binary search in detail and provide time complexity analysis with an example
19. Write a recursive algorithm for binary search and also bring out its efficiency.
20. Discuss the time complexity of Binary search algorithm for best and worst case.
21. With a suitable algorithm, explain the problem of finding the maximum and minimum items in a set
of n elements.
22. Explain divide-and-conquer technique; write a recursive algorithm for finding the maximum and
minimum element from the list
23. Given 2 sorted lists of numbers. Write the algorithm to merge them and analyze its time complexity.
24. Discuss the working strategy of merge sort and illustrate the process of merge sort algorithm for the
given data: 43, 32, 22, 78, 63, 57, 91 and 13
25. Apply Merge Sort to sort the list a[1:10]=(31,28,17,65,35,42,86,25,45,52). Draw the tree of
recursive calls of merge sort, merge functions.



26. Illustrate the tracing of quick sort algorithm for the following set of numbers: 25, 10, 72, 18, 40, 11,
64, 58, 32, 9.
27. Write Divide – And – Conquer recursive Quick sort algorithm and analyze the algorithm for average
time complexity.
28. Derive the time complexity of the Quicksort algorithm for the worst case.
29. For T(n)=7T(n/2)+18n2 Solve the recurrence relation and find the time complexity.
30. What are different approaches of writing randomized algorithm? Write randomized sort algorithms.

The Greedy Method

1. Explain the general principle of Greedy method and also list the applications of Greedy method.
2. Solve the following instance of knapsack problem using greedy method. n=7(objects), m=15, profits
are (P1,P2,P3,P4,P5,P6,P7)=(10,5,15,7,6,18,3) and its corresponding weights are (W1,W2,W3,W4,
W5, W6, W7 )=(2,3,5,7,1,4,1).
3. State the Greedy Knapsack. Find an optimal solution to the Knapsack instance n=3, m=20, (P1, P2,
P3) = (25, 24, 15) and (W1, W2, W3) = (18, 15, 10)
4. Find an optimal solution to the knapsack instance n=7 objects and the capacity of knapsack m=15.
The profits and weights of the objects are (P1,P2,P3,P4,P5,P6,P7)=(10,5,15,7,6,18,3),
(W1,W2,W3,W4, W5,W6,W7)=(2,3,5,7,1,4,1) respectively.
5. Write and explain Prim’s algorithm for finding minimum cost spanning tree of a graph with an
example.
6. What is a Minimum Cost Spanning tree? Explain Kruskal’s minimum cost spanning tree algorithm
with a suitable example.
7. What is a Spanning tree? Explain Prim’s Minimum cost spanning tree algorithm with suitable example
8. What is optimal merge pattern? Find optimal merge pattern for ten files whose record lengths are 28,
32, 12, 5, 84, 53, 91, 35, 3, and 11
9. Discuss the Dijkstra’s single source shortest path algorithm and derive its time complexity.
10. A motorist wishing to ride from city A to B. Formulate greedy based algorithms to generate shortest
path and explain with an example graph.
11. Discuss the single-source shortest paths algorithm with a suitable example

