Chapter 2 Divide and Conquer

The document discusses the Divide-and-Conquer algorithm design strategy, explaining its general method and advantages, such as supporting parallelism, as well as disadvantages like high memory management. It covers specific algorithms like Binary Search, Selection Sort, and Merge Sort, detailing their implementations, time complexities, and comparisons with Quick Sort. Additionally, it includes an assignment prompt for implementing sorting algorithms in C++.


Design And Analysis of

Algorithm

Chapter 2
Divide – and – conquer
Divide – and – conquer
 Divide-and-conquer is the most well-known algorithm design strategy.
 It works by recursively breaking a problem down into two or more sub-problems of the same type until these become simple enough to be solved directly.
The General Method
 Divide the problem instance into two or more smaller instances. (Divide)
 Solve the smaller instances recursively. (Conquer)
 Obtain the solution to the original (larger) instance by combining these solutions. (Combine)

2
Divide – and – conquer
Pros and cons of Divide and Conquer Approach
 Pros
 The divide-and-conquer approach supports parallelism, as the sub-problems are independent.
 The sub-problems can therefore be solved simultaneously on a multiprocessor system or on different machines.

 Cons
 Memory management overhead is high.
 Recursive functions use the stack, where each function's state needs to be stored.

3
Divide – and – conquer
Binary Search :- a technique for finding the position of a specified value within a sorted array.

Algorithm: binary_search(A, n, x) {
    low = first_index;
    high = last_index;
    while (low <= high) {
        mid = (low + high) / 2;
        if (A[mid] == x)
            return mid;
        else if (x < A[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;    // not found
}

Note
 x is the element to be searched for, and A is an array holding the list of sorted elements.
 If the low index becomes greater than the high index, the element does not exist in the array.
 low, high, and mid are integers.

4
Recurrence Solution for Decreasing Function

5
Master Theorem For Decreasing Function

6
Recurrence Solution for Dividing Function

7
Master Theorem For Dividing Function

8
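For reference, the standard statement of the master theorem for dividing functions (this is the textbook result, not taken from the slide itself): for a recurrence of the form T(n) = a T(n/b) + O(n^k) with a ≥ 1 and b > 1,

```latex
% Master theorem (dividing form): T(n) = a\,T(n/b) + O(n^k),\ a \ge 1,\ b > 1
T(n) =
\begin{cases}
O\!\left(n^{\log_b a}\right) & \text{if } a > b^k \\[2pt]
O\!\left(n^k \log n\right)   & \text{if } a = b^k \\[2pt]
O\!\left(n^k\right)          & \text{if } a < b^k
\end{cases}
```

Binary search satisfies T(n) = T(n/2) + O(1), i.e. a = 1, b = 2, k = 0, so a = b^k and T(n) = O(log n).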
Binary Search
Binary search or Half-interval search algorithm:

1. This algorithm finds the position of a specified input value (the search "key") within an array sorted by key
value.

2. In each step, the algorithm compares the search key value with the key value of the middle element of the
array.

3. If the keys match, then a matching element has been found and its index, or position, is returned.

4. Otherwise, if the search key is less than the middle element's key, the algorithm repeats its action on the sub-array to the left of the middle element; or, if the search key is greater, it repeats on the sub-array to the right of the middle element.

5. If the search element is less than the element at the minimum position or greater than the element at the maximum position, the algorithm returns "not found".

9
Binary search algorithm by using recursive
methodology:
int binary_search(int A[], int key, int imin, int imax)
{
    if (imax < imin)
        return -1;                  // array range is empty
    if (key < A[imin] || key > A[imax])
        return -1;                  // element not in array list
    int imid = (imin + imax) / 2;
    if (A[imid] > key)
        return binary_search(A, key, imin, imid - 1);
    else if (A[imid] < key)
        return binary_search(A, key, imid + 1, imax);
    else
        return imid;
}

10
Binary search algorithm by using
recursive methodology:
For a successful search:
 Worst case: O(log n) or θ(log n)
 Average case: O(log n) or θ(log n)
 Best case: O(1) or θ(1)

For an unsuccessful search:
 θ(log n) for all cases.

11
Binary search algorithm by using iterative methodology

int binary_search(int A[], int key, int imin, int imax)
{
    while (imax >= imin) {
        int imid = midpoint(imin, imax);
        if (A[imid] == key)
            return imid;
        else if (A[imid] < key)
            imin = imid + 1;
        else
            imax = imid - 1;
    }
    return -1;    // not found
}

12
Graphical Representation of Binary Search
 In binary search, each array index becomes a node in the graph (decision-tree) representation.
Example: A[15] = {3, 6, 8, 12, 14, 17, 25, 29, 31, 36, 42, 47, 53, 55, 62}
Index = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}, assuming indices start from 1 for illustration

                  8
         4                12
     2       6       10        14
   1   3   5   7   9   11   13   15

Graphical Representation of Binary Search

13
Time Complexity for Binary Search
 Discuss: in which case is the time complexity of binary search equal to constant 1?

 Worst case = O(log n) / Big-oh(log n)
 Best case = Ω(1) / Omega(1)
 Average case = ϴ(log n) / theta(log n)

14
Selection Sort
 In the 1st pass, the smallest element of the array is found along with its index pos; then swap(A[0], A[pos]). Thus A[0] is sorted, and we now have n-1 elements to be sorted.
 In the 2nd pass, n-2 elements are left to be sorted.
 In the (n-1)th pass, one element is left, i.e., all elements are sorted.

Algorithm: selection_sort_algorithm(A, n) {
    for (i = 0; i < n-1; i++) {
        min = i;
        for (j = i+1; j < n; j++) {
            if (A[j] < A[min])
                min = j;
        }
        if (min != i)
            swap(A[i], A[min]);
    }
}

Example: Array A = {6, 2, 11, 7, 5}
Solution:
{6, 2, 11, 7, 5}   min = 2:  swap(6, 2)
{2, 6, 11, 7, 5}   min = 5:  swap(6, 5)
{2, 5, 11, 7, 6}   min = 6:  swap(11, 6)
{2, 5, 6, 7, 11}   min = 7:  no swap

15
Time and Space Complexity for Selection Sort
 Discuss the time and space complexity of the selection sort algorithm.

 Worst case = O(n²) / Big-oh(n²)
 Best case = Ω(n²) / Omega(n²)
 Average case = ϴ(n²) / theta(n²)
 Because selection sort is an in-place algorithm, the space complexity of the algorithm is O(1).
16
Merge Sort
 Merging two lists of one element each is the same as sorting them.
 Merge sort divides an unsorted list until the above condition is met, then merges the divided parts back together in pairs.
 Specifically, this is done by recursively dividing the unsorted list in half, merge sorting the right side, then the left side, and then merging the right and left back together.

Algorithm: Given a list L with length k:

If k == 1, the list is sorted.
Else:
 Merge sort the left side (0 to k/2)
 Merge sort the right side (k/2 + 1 to k)
 Merge the right side with the left side

17
Merge Sort Example
Unsorted list: 99 6 86 15 58 35 86 4 0

Split recursively until each sub-list has one element:
[99 6 86 15] [58 35 86 4 0]
[99 6] [86 15] [58 35] [86 4 0]
[99] [6] [86] [15] [58] [35] [86] [4 0]
[4] [0]
18
Merge Sort Example
Merge the single elements 4 and 0:
[99] [6] [86] [15] [58] [35] [86] [0 4]
19
Merge Sort Example
Merge neighboring sub-lists:
[6 99] [15 86] [35 58] [0 4 86]
20
Merge Sort Example
Merge again:
[6 15 86 99] [0 4 35 58 86]
21
Merge Sort Example
Final merge:
[0 4 6 15 35 58 86 86 99]
22
Merge Sort Example
Sorted list: 0 4 6 15 35 58 86 86 99
23
Merge Sort Implementation
 There are two basic ways to implement merge sort:
 In Place: Merging is done with only the input array
 Pro: Requires only the space needed to hold the array
 Con: Takes longer to merge because, if the next element is in the right side, then all of the elements must be moved down.
 Double Storage: Merging is done with a temporary array of the same size as the input array.
 Pro: Faster than In Place, since the temp array holds the resulting array until both left and right sides are merged into the temp array; the temp array is then copied over the input array.
 Con: The memory requirement is doubled.
24
Merge Sort Analysis
 The time complexity of merge sort is O(n log n) in all cases.

 Divide = O(1): computing the midpoint takes constant time.
 Conquer = two sub-problems of size n/2 at each step.
 Combine = merging the sub-lists takes at most O(n).

 Because merge sort is a double-storage algorithm, the space
complexity of the algorithm is O(2n), double the array size.
25
Advantages of Merge Sort
1. Marginally faster than heap sort for larger sets.
2. Merge sort always performs fewer comparisons than quick sort: merge sort's worst
case makes about 39% fewer comparisons than quick sort's average case.
3. Merge sort is often the best choice for sorting a linked list, because the slow
random-access performance of a linked list makes some other algorithms
(such as quick sort) perform poorly, and others (such as heap sort) completely
impossible.

26
Time Complexity

Name        Best Case     Average Case   Worst Case    Space Complexity
Bubble      O(n)          O(n²)          O(n²)         O(n)
Insertion   O(n)          O(n²)          O(n²)         O(n)
Selection   O(n²)         O(n²)          O(n²)         O(n)
Quick       O(n log n)    O(n log n)     O(n²)         O(n + log n)
Merge       O(n log n)    O(n log n)     O(n log n)    O(2n)
Heap        O(n log n)    O(n log n)     O(n log n)    O(n)

27
Assignment 1
1. Write a C++ program that stores student
information and sorts it using Merge Sort and
Quick Sort.
Submission is next week, and there may be a
presentation on your code and how it works.

28
Quick Sort
Quick Sort is an algorithm based on the DIVIDE-AND-CONQUER paradigm
that selects a pivot element and reorders the given list in such a way that all
elements smaller than it are on one side and those bigger than it are on the other.
Then the sub-lists are recursively sorted until the list gets completely sorted. The
average time complexity of this algorithm is O(n log n).
 Auxiliary space used in the average case for implementing recursive function
calls is O(log n), and hence it proves to be a bit space-costly, especially when it
comes to large data sets.
 Its worst case has a time complexity of O(n²), which can prove very fatal for
large data sets when compared with competitive sorting algorithms.
29
Algorithm for Quick Sort
Algorithm quickSort(a, low, high)
{
    if (high > low) then
    {
        m = partition(a, low, high);
        if (low < m) then quickSort(a, low, m);
        if (m + 1 < high) then quickSort(a, m + 1, high);
    }
}

Algorithm partition(a, low, high)
{
    mid = (low + high) / 2;
    pivot = a[mid];
    i = low - 1; j = high + 1;
    while (true)
    {
        do { i = i + 1; } while (a[i] < pivot);
        do { j = j - 1; } while (a[j] > pivot);
        if (i >= j) then return j;
        temp = a[i]; a[i] = a[j]; a[j] = temp;
    }
}

30
Randomized Quick Sorting Algorithm
While sorting the array a[p:q], instead of always picking a[m], pick a random element (from
among a[p], a[p+1], ..., a[q]) as the partition element.

The resulting randomized algorithm works on any input and runs in an expected O(n log n) time.

Algorithm RquickSort(a, p, q)
{
    if (q > p) then
    {
        if ((q - p) > 5) then
            Interchange(a, (Random() mod (q - p + 1)) + p, p);
        m = partition(a, p, q + 1);
        RquickSort(a, p, m - 1);
        RquickSort(a, m + 1, q);
    }
}
31
Comparison between Merge and Quick
Sort:
1. Both follow the divide-and-conquer rule.
2. Statistically, both merge sort and quick sort have the same average-case time, i.e., O(n
log n).
3. Merge sort requires additional memory. The pros of merge sort are: it is a stable sort,
and there is no worst case (meaning its average-case and worst-case time complexities
are the same).
4. Quick sort is often implemented in place, saving performance and memory by not
creating extra storage space.
5. But quick sort's performance falls on already sorted or almost sorted lists if the pivot
is not randomized. That is why its worst-case time is O(n²).
32
Find Maximum and Minimum
 Find the minimum and maximum of the given array (list) recursively.
 Divide the list into sub-lists until the number of elements in each sub-list is at most two.
 Find the min and max of each sub-list and combine the results while merging.
Example:
[4 8 3 6 1 5 2]
[4 8 3 6] [1 5 2]
[4 8] [3 6] [1 5] [2]

33
Find Maximum and Minimum Algorithm
min_max(i, j, max, min, A[ ]) {
    if (i == j) {
        max = A[i];
        min = A[i];
    }
    else if (i == j - 1) {
        if (A[i] < A[j]) {
            max = A[j];
            min = A[i];
        }
        else {
            max = A[i];
            min = A[j];
        }
    }
    else {
        mid = (i + j) / 2;
        min_max(i, mid, max, min);
        min_max(mid + 1, j, max_new, min_new);
        if (max < max_new)
            max = max_new;
        if (min > min_new)
            min = min_new;
    }
}

34
Time and Space Complexity for Min & Max
 Discuss the time and space complexity of the min_max algorithm.

 Worst case = O(3n/2 - 2) = O(n) / Big-oh(n)
 Best case = Ω(n) / Omega(n)
 Average case = ϴ(n) / theta(n)

 The space complexity of the algorithm is O(1).

35
Individual Assignment (20%)
1. Discuss quick sort with its time and space complexity, with proof.
2. Discuss single-source shortest path with its time complexity.
3. Discuss depth-first search with its time complexity.
4. Discuss graph coloring and Hamiltonian cycles with their time complexity.
5. Discuss disconnected components with its time complexity.

Criteria:
1. Hand-write your assignment.
2. Provide at least one example for each question.
3. Don't use a black pen to write your assignment.
4. Write only in your own handwriting.
5. Submit one week before the final exam date.
36
