Sorting & Searching

Sorting

⮚ A fundamental application for computers


⮚ Done to make finding data (searching) faster
⮚ Many different algorithms for sorting
⮚ One of the difficulties with sorting is working with a fixed-size storage container (array)
⮚ Resizing the array is expensive (slow)
⮚ The "simple" sorts run in quadratic time O(N²)
Why Sorting?
⮚ Practical application
▪ People by last name
▪ Countries by population
▪ Search engine results by relevance

⮚ Fundamental to other algorithms

⮚ Different algorithms have different asymptotic and constant-factor trade-offs
⮚ No single ‘best’ sort for all scenarios
⮚ Knowing one way to sort just isn’t enough

⮚ Many approaches to sorting can be reused for other problems


Problem statement
There are n comparable elements in an array and we want to rearrange them to be in increasing
order

Pre:
An array A of data records
A value in each data record
A comparison function
<, =, >, compareTo

Post:
For each pair of distinct positions i and j of A, if i < j then A[i] ≤ A[j]
A has all the same data it started with
Sorting Classification

In-memory sorting
  Comparison sorting — lower bound Ω(N log N)
    O(N²): Bubble Sort, Selection Sort, Insertion Sort
    O(N log N): Merge Sort, Quick Sort, Heap Sort
  Specialized sorting — O(N): Bucket Sort, Radix Sort
External sorting — measured in # of tape accesses
  Simple External Merge Sort
Bubble Sort
• Bubble sort is the easiest sorting algorithm to implement.
• It is inspired by observing the behavior of air bubbles over foam.
• It is an in-place sorting algorithm.
• It uses no auxiliary data structures (extra space) while sorting.

How Bubble Sort Works?

•Bubble sort uses multiple passes (scans) through an array.


•In each pass, bubble sort compares the adjacent elements of the array.
•It then swaps the two elements if they are in the wrong order.
•In each pass, bubble sort places the next largest element at its proper position.
•In short, it bubbles the largest remaining element up to its correct position at the end of the array.
Bubble Sort Algorithm
for(int pass=1 ; pass<=n-1 ; ++pass)   // Making passes through array
{
    for(int i=0 ; i<=n-2 ; ++i)
    {
        if(A[i] > A[i+1])    // If adjacent elements are in wrong order
            swap(i, i+1, A); // Swap them
    }
}

// swap function : Exchange elements from array A at positions x, y
void swap(int x, int y, int[] A)
{
    int temp = A[x];
    A[x] = A[y];
    A[y] = temp;
}
Bubble Sort Example

Consider the following array A- 6, 2, 11, 7, 5

Now, we shall implement the above bubble sort algorithm on this array.

Step-01:

•We have pass=1 and i=0.
•We perform the comparison A[0] > A[1] and swap if the 0th element is greater than the 1st element.
•Since 6 > 2, we swap the two elements.

Step-02:

•We have pass=1 and i=1.
•We perform the comparison A[1] > A[2] and swap if the 1st element is greater than the 2nd element.
•Since 6 < 11, no swapping is required.
Bubble Sort Example

Step-03:

•We have pass=1 and i=2.
•We perform the comparison A[2] > A[3] and swap if the 2nd element is greater than the 3rd element.
•Since 11 > 7, we swap the two elements.

Step-04:

•We have pass=1 and i=3.
•We perform the comparison A[3] > A[4] and swap if the 3rd element is greater than the 4th element.
•Since 11 > 5, we swap the two elements.

Finally after the first pass, we see that the largest element 11 reaches its correct position.
Bubble Sort Example

Step-05:

•Similarly after pass=2, element 7 reaches its correct position.
•The modified array after pass=2 is shown below-

Step-06:

•Similarly after pass=3, element 6 reaches its correct position.
•The modified array after pass=3 is shown below-

Step-07:

•No further improvement is made in pass=4.
•This is because at this point, elements 2 and 5 are already present at their correct positions.
•The loop terminates after pass=4.
•Finally, the array after pass=4 is shown below-
Bubble Sort
Time Complexity Analysis-

•Bubble sort uses two loops- inner loop and outer loop.
•The inner loop deterministically performs O(n) comparisons.

Worst Case-

•In the worst case, the outer loop runs O(n) times.
•Hence, the worst case time complexity of bubble sort is O(n × n) = O(n²).

Best Case-

•In the best case, the array is already sorted; with an early-exit check (stop when a pass performs no swaps), bubble sort performs only O(n) comparisons.
•Hence, the best case time complexity of bubble sort is O(n).

Average Case-

•In the average case, bubble sort may require n/2 passes and O(n) comparisons for each pass.
•Hence, the average case time complexity of bubble sort is Θ(n/2 × n) = Θ(n²).
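The O(n) best case depends on an early-exit check that the algorithm shown earlier omits: if a full pass performs no swaps, the array is already sorted. A minimal Python sketch of bubble sort with this flag (function name and test values are illustrative):

```python
def bubble_sort(a):
    """Bubble sort with an early-exit flag: if a full pass makes no
    swaps, the array is already sorted and we stop (best case O(n))."""
    a = list(a)
    n = len(a)
    for p in range(n - 1):              # at most n-1 passes
        swapped = False
        for i in range(n - 1 - p):      # last p elements are already in place
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:                 # no swap in this pass -> sorted
            break
    return a
```

On an already sorted input the first pass makes no swaps, so the loop exits after n-1 comparisons.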
Bubble Sort
PRACTICE PROBLEMS BASED ON BUBBLE SORT ALGORITHM-

Problem-01:

The number of swapping needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order using bubble sort is- (ISRO CS 2017)
1. 11
2. 12
3. 13
4. 10

Solution-

In bubble sort, Number of swaps required = Number of inversion pairs.


Here, there are 10 inversion pairs present, which are-
(8,7), (22,7), (22,9), (8,5), (22,5), (7,5), (9,5), (31,5), (22,13), (31,13)

Thus, Option (D) is correct.
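The swaps-equal-inversions fact can be checked directly; a small Python sketch counting inversion pairs for the given input (the helper name is my own):

```python
def count_inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j]; in bubble sort
    the number of swaps equals the number of inversion pairs."""
    return sum(1
               for i in range(len(a))
               for j in range(i + 1, len(a))
               if a[i] > a[j])
```

For the array 8, 22, 7, 9, 31, 5, 13 this yields 10, confirming option (D).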


Bubble Sort
Problem-02:

When will bubble sort take worst-case time complexity?


1. The array is sorted in ascending order.
2. The array is sorted in descending order.
3. Only the first half of the array is sorted.
4. Only the second half of the array is sorted.

Solution-

•In bubble sort, Number of swaps required = Number of inversion pairs.


•When an array is sorted in descending order, the number of inversion pairs = n(n-1)/2 which is maximum for any permutation of array.

Thus, Option (B) is correct.


Insertion Sort
•Insertion sort is an in-place sorting algorithm.
•It uses no auxiliary data structures while sorting.
•It is inspired by the way in which we sort playing cards.
How Insertion Sort Works?

Consider the following elements are to be sorted in ascending order- 6, 2, 11, 7, 5

Firstly, it selects the second element (2). It checks whether it is smaller than any of the elements before it. Since 2 < 6, it shifts 6 towards the right and places 2 before it. The resulting list is 2, 6, 11, 7, 5.

Secondly, it selects the third element (11). It checks whether it is smaller than any of the elements before it. Since 11 > (2, 6), no shifting takes place. The resulting list remains the same.

Thirdly, it selects the fourth element (7). It checks whether it is smaller than any of the elements before it. Since 7 < 11, it shifts 11 towards the right and places 7 before it. The resulting list is 2, 6, 7, 11, 5.

Fourthly, it selects the fifth element (5). It checks whether it is smaller than any of the elements before it. Since 5 < (6, 7, 11), it shifts (6, 7, 11) towards the right and places 5 before them. The resulting list is 2, 5, 6, 7, 11.

As a result, sorted elements in ascending order are-
2, 5, 6, 7, 11
Insertion Sort Algorithm
for (i = 1 ; i < n ; i++)
{
    key = A[i];
    j = i - 1;
    while (j >= 0 && A[j] > key)  // shift elements greater than key one step right
    {
        A[j+1] = A[j];
        j--;
    }
    A[j+1] = key;
}

Here,
•i = variable to traverse the array A
•key = variable to store the new number to be inserted into the sorted sub-array
•j = variable to traverse the sorted sub-array
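The pseudocode above translates almost line for line into runnable Python (this sketch returns a new list rather than sorting in place, a small deviation from the in-place original):

```python
def insertion_sort(a):
    """Insertion sort: grow a sorted prefix, inserting each new key by
    shifting larger elements one position to the right."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift elements greater than key
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop key into the gap
    return a
```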
Insertion Sort Example

Consider the following elements are to be sorted in ascending order- 6, 2, 11, 7, 5

The above insertion sort algorithm works as illustrated below-

Step-01: For i = 1

Step-02: For i = 2

Step-03: For i = 3 (working of inner loop when i = 3)
Insertion Sort Example

Step-04: For i = 4

Loop gets terminated as ‘i’ becomes 5. The state of array after the loops are finished-

With each loop cycle,
•One element is placed at the correct location in the sorted sub-array until array A is completely sorted.

Important Notes-
•Insertion sort is not a very efficient algorithm when data sets are large.
•This is indicated by the average and worst case complexities.
•Insertion sort is adaptive; the number of comparisons is less if the array is partially sorted.
Quick Sort
•Quick Sort is a famous sorting algorithm.
•It sorts the given data items in ascending order.
•It uses the idea of the divide and conquer approach.
•It follows a recursive algorithm.

Quick Sort Algorithm-

Consider-
•a = Linear Array in memory
•beg = Lower bound of the sub array in question
•end = Upper bound of the sub array in question

Partition_Array (a , beg , end , loc)
Begin
    Set left = beg , right = end , loc = beg
    Set done = false
    While (not done) do
        While ( (a[loc] <= a[right]) and (loc ≠ right) ) do
            Set right = right - 1
        end while
        if (loc = right) then
            Set done = true
        else if (a[loc] > a[right]) then
            Interchange a[loc] and a[right]
            Set loc = right
        end if
        if (not done) then
            While ( (a[loc] >= a[left]) and (loc ≠ left) ) do
                Set left = left + 1
            end while
            if (loc = left) then
                Set done = true
            else if (a[loc] < a[left]) then
                Interchange a[loc] and a[left]
                Set loc = left
            end if
        end if
    end while
End
Quick Sort
How Does Quick Sort Work?

•Quick Sort follows a recursive algorithm.


•It divides the given array into two sections using a partitioning element called the pivot.

The division performed is such that-

•All the elements to the left side of pivot are smaller than pivot.
•All the elements to the right side of pivot are greater than pivot.

After dividing the array into two sections, the pivot is set at its correct position.
Then, sub arrays are sorted separately by applying quick sort algorithm recursively.
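The divide step described above can be sketched compactly in Python. Note this is a simplified list-based partition around the first element, not the exact loc/left/right procedure given earlier:

```python
def quick_sort(a):
    """Recursive quick sort sketch: partition around the first element
    (the pivot), then sort the two sections independently."""
    if len(a) <= 1:
        return list(a)
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]    # left section
    larger = [x for x in a[1:] if x >= pivot]    # right section
    # pivot lands at its final position between the two sorted sections
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```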
Quick Sort Example
Consider the following array has to be sorted in ascending order using quick sort algorithm-

Quick Sort Algorithm works in the following steps-

Step-01:

Initially-

•Left and Loc (pivot) point to the first element of the array.

•Right points to the last element of the array.

So to begin with, we set loc = 0, left = 0 and right = 5 as-


Quick Sort Example

Step-02:

Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] < a[right], the algorithm moves right one position towards left as-

Now, loc = 0, left = 0 and right = 4.

Step-03:

Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] > a[right], the algorithm swaps a[loc] and a[right] and loc points at right as-

Now, loc = 4, left = 0 and right = 4.
Quick Sort Example

Step-04:

Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] > a[left], the algorithm moves left one position towards right as-

Now, loc = 4, left = 1 and right = 4.

Step-05:

Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] > a[left], the algorithm moves left one position towards right as-

Now, loc = 4, left = 2 and right = 4.
Quick Sort Example

Step-06:

Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] < a[left], the algorithm swaps a[loc] and a[left] and loc points at left as-

Now, loc = 2, left = 2 and right = 4.

Step-07:

Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] < a[right], the algorithm moves right one position towards left as-

Now, loc = 2, left = 2 and right = 3.
Quick Sort Example

Step-08:

Since loc points at left, the algorithm starts from right and moves towards left.
As a[loc] > a[right], the algorithm swaps a[loc] and a[right] and loc points at right as-

Now, loc = 3, left = 2 and right = 3.

Step-09:

Since loc points at right, the algorithm starts from left and moves towards right.
As a[loc] > a[left], the algorithm moves left one position towards right as-

Now, loc = 3, left = 3 and right = 3.
Quick Sort Example
Now,
•loc, left and right point at the same element.
•This indicates the termination of the procedure.
•The pivot element 25 is placed at its final position.
•All elements to the right side of element 25 are greater than it.
•All elements to the left side of element 25 are smaller than it.

Now, quick sort algorithm is applied on the left and right sub arrays separately in the similar manner.
Quick Sort Analysis
•To find the location of an element that splits the array into two parts, O(n) operations are required.
•This is because every element in the array is compared to the partitioning element.

•After the division, each section is examined separately.
•If the array is split approximately in half (which is not always the case), then there will be log₂n splits.

•Therefore, total comparisons required are f(n) = n × log₂n = O(n log₂n).

Order of Quick Sort = O(n log₂n)

Worst Case-

•Quick Sort is sensitive to the order of input data.


•It gives the worst performance when elements are already in ascending order.
•It then divides the array into sections of 1 and (n-1) elements in each call.
•Then, there are (n-1) divisions in all.
•Therefore, here total comparisons required are f(n) = n × (n-1) = O(n²)

Order of Quick Sort in worst case = O(n²)
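The worst-case argument can be checked numerically: with the first element as pivot, a sorted input makes each recursive call strip off just one element. A hedged Python sketch that counts partition comparisons (simplified list-based partition, not the in-place procedure above):

```python
def quick_sort_comparisons(a):
    """Count comparisons made by a first-element-pivot quick sort;
    on an already sorted array every call peels off one element,
    giving the quadratic worst case n(n-1)/2."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    # len(a) - 1 comparisons against the pivot at this level
    return (len(a) - 1) + quick_sort_comparisons(smaller) \
                        + quick_sort_comparisons(larger)
```

For a sorted input of size 10 this gives 9 + 8 + … + 1 = 45 comparisons, matching n(n-1)/2.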


Quick Sort
Advantages of Quick Sort-

The advantages of quick sort algorithm are-

•Quick Sort is an in-place sort, so it requires no temporary memory.
•Quick Sort is typically faster than other algorithms.
(because its inner loop can be efficiently implemented on most architectures)
•Quick Sort tends to make excellent usage of the memory hierarchy like virtual memory or caches.
•Quick Sort can be easily parallelized due to its divide and conquer nature.

Disadvantages of Quick
Sort-
•The worst case complexity of quick sort is O(n²).
•This complexity is worse than O(nlogn) worst case complexity of algorithms like merge sort, heap sort etc.
•It is not a stable sort i.e. the order of equal elements may not be preserved.
Selection Sort
•Selection sort is one of the easiest approaches to sorting.
•It is inspired by the way in which we sort things out in day to day life.
•It is an in-place sorting algorithm because it uses no auxiliary data structures while sorting.

How Selection Sort


Works?
6, 2, 11, 7, 5
•It finds the first smallest element (2).
•It swaps it with the first element of the unordered list.
•It finds the second smallest element (5).
•It swaps it with the second element of the unordered list.
•Similarly, it continues to sort the given elements.

As a result, sorted elements in ascending order are-


2, 5, 6, 7, 11
Selection Sort Algorithm
for (i = 0 ; i < n-1 ; i++)
{
    index = i;
    for(j = i+1 ; j < n ; j++)
    {
        if(A[j] < A[index])
            index = j;
    }
    temp = A[i];
    A[i] = A[index];
    A[index] = temp;
}

Here,
•i = variable to traverse the array A
•index = variable to store the index of minimum element
•j = variable to traverse the unsorted sub-array
•temp = temporary variable used for swapping
Selection Sort Example

Consider the following elements are to be sorted in ascending order- 6, 2, 11, 7, 5

Step-01: For i = 0

Step-02: For i = 1

Step-03: For i = 2

Step-04: For i = 3
Selection Sort Example
Step-05: For i = 4

Loop gets terminated as ‘i’ becomes 4.

The state of array after the loops are finished is as shown-

With each loop cycle,


•The minimum element in unsorted sub-array is selected.
•It is then placed at the correct location in the sorted sub-array until array A
is completely sorted.
Selection Sort
Time Complexity Analysis-

•Selection sort algorithm consists of two nested loops.
•Owing to the two nested loops, it has O(n²) time complexity.

Important Notes-

•Selection sort is not a very efficient algorithm when data sets are large.
•This is indicated by the average and worst case complexities.
•Selection sort uses the minimum number of swap operations, O(n), among all the sorting algorithms.
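The O(n) swap bound is visible in code: at most one swap per outer-loop iteration. A Python rendering of the algorithm above (returning a new list; the in-place original modifies A directly):

```python
def selection_sort(a):
    """Selection sort: repeatedly select the minimum of the unsorted
    suffix and swap it into place; at most n-1 swaps in total."""
    a = list(a)
    for i in range(len(a) - 1):
        index = i
        for j in range(i + 1, len(a)):
            if a[j] < a[index]:     # remember position of smallest so far
                index = j
        a[i], a[index] = a[index], a[i]  # one swap per outer iteration
    return a
```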
Merge Sort
•Merge sort is a famous sorting algorithm.
•It uses a divide and conquer paradigm for sorting.
•It divides the problem into sub problems and solves them individually.

•It then combines the results of sub problems to get the solution of the original problem.

How Merge Sort Works?
The merge procedure of merge sort algorithm is used to merge two sorted arrays into a third array in sorted order.

Consider we want to merge the following two sorted sub arrays into a third array in sorted order-
Merge Sort Algorithm
// L : Left Sub Array , R : Right Sub Array , A : Array
merge(L, R, A)
{
    nL = length(L)   // Size of Left Sub Array
    nR = length(R)   // Size of Right Sub Array
    i = j = k = 0
    while(i<nL && j<nR)
    {
        /* When both i and j are valid i.e. when both the sub arrays have elements to insert in A */
        if(L[i] <= R[j])
        {
            A[k] = L[i]
            k = k+1
            i = i+1
        }
        else
        {
            A[k] = R[j]
            k = k+1
            j = j+1
        }
    }
    // Adding remaining elements from left sub array to array A
    while(i<nL)
    {
        A[k] = L[i]
        i = i+1
        k = k+1
    }
    // Adding remaining elements from right sub array to array A
    while(j<nR)
    {
        A[k] = R[j]
        j = j+1
        k = k+1
    }
}
Merge Sort Algorithm
Merge Sort Algorithm works in the following steps-
•It divides the given unsorted array into two halves- left and right sub arrays.
•The sub arrays are divided recursively.
•This division continues until the size of each sub array becomes 1.
.
•After each sub array contains only a single element, each sub array is sorted trivially.
•Then, the above discussed merge procedure is called.
•The merge procedure combines these trivially sorted arrays to produce a final sorted array.

MergeSort(A)
{
    n = length(A)
    if n<2 return
    mid = n/2
    left = new_array_of_size(mid)     // Creating temporary array for left
    right = new_array_of_size(n-mid)  // and right sub arrays
    for(int i=0 ; i<=mid-1 ; ++i)
    {
        left[i] = A[i]                // Copying elements from A to left
    }
    for(int i=mid ; i<=n-1 ; ++i)
    {
        right[i-mid] = A[i]           // Copying elements from A to right
    }
    MergeSort(left)                   // Recursively solving for left sub array
    MergeSort(right)                  // Recursively solving for right sub array
    merge(left, right, A)             // Merging two sorted left/right sub arrays into final array
}
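The two pseudocode procedures above combine into a short runnable Python sketch (this version returns a new list instead of merging back into A):

```python
def merge_sort(a):
    """Merge sort: split in half, sort each half recursively, then
    merge the two sorted halves."""
    if len(a) < 2:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # merge procedure: repeatedly take the smaller front element
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])    # leftover elements are already sorted
    out.extend(right[j:])
    return out
```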
Merge Sort Example

Step-01:
•Create two variables i and j for left and right sub arrays.
•Create variable k for sorted output array.

Step-02:
•We have i = 0, j = 0, k = 0.
•Since L[0] < R[0], so we perform A[0] = L[0] i.e. we copy the first element from left sub array to our sorted output array.
•Then, we increment i and k by 1.

Then, we have-
Merge Sort Example

Step-03:
•We have i = 1, j = 0, k = 1.
•Since L[1] > R[0], so we perform A[1] = R[0] i.e. we copy the first element from right sub array to our sorted output array.
•Then, we increment j and k by 1.

Then, we have-

Step-04:
•We have i = 1, j = 1, k = 2.
•Since L[1] > R[1], so we perform A[2] = R[1].
•Then, we increment j and k by 1.

Then, we have-
Merge Sort Example

Step-05:
•We have i = 1, j = 2, k = 3.
•Since L[1] < R[2], so we perform A[3] = L[1].
•Then, we increment i and k by 1.

Step-06:
•We have i = 2, j = 2, k = 4.
•Since L[2] > R[2], so we perform A[4] = R[2].
•Then, we increment j and k by 1.

Then, we have-
Merge Sort Example
Step-07:

•Clearly, all the elements from the right sub array have been added to the sorted output array.
•So, we exit the first while loop (condition while(i<nL && j<nR)) since now j = nR.
•Then, we add the remaining elements from the left sub array to the sorted output array using the next while loop.

Finally, our sorted output array is-

Basically,
•After finishing elements from any of the sub arrays, we can add the remaining elements from the other sub array to our sorted output array as it is.
•This is because left and right sub arrays are already sorted.
Merge Sort Example

Merge Sort
Time Complexity Analysis-

In merge sort, we divide the array into two (nearly) equal halves and solve them recursively using merge sort only.
So, we have-

Finally, we merge these two sub arrays using merge procedure which takes Θ(n) time as explained above.

If T(n) is the time required by merge sort for sorting an array of size n, then the recurrence relation for time complexity of merge sort is-

On solving this recurrence relation, we get T(n) = Θ(nlogn).


Thus, time complexity of merge sort algorithm is T(n) = Θ(nlogn).
Merge Sort
Properties-

Some of the important properties of merge sort algorithm are-

. •Merge sort uses a divide and conquer paradigm for sorting.


•Merge sort is a recursive sorting algorithm.
•Merge sort is a stable sorting algorithm.
•Merge sort is not an in-place sorting algorithm.
•The time complexity of merge sort algorithm is Θ(nlogn).
•The space complexity of merge sort algorithm is Θ(n).
Merge Sort
PRACTICE PROBLEMS BASED ON MERGE SORT ALGORITHM-

Problem-

Assume that a merge sort algorithm in the worst case takes 30 seconds for an input of size 64. Which of the following most closely approximates the maximum input size of a problem that can be solved in 6 minutes? (GATE 2015)
1. 256
2. 512
3. 1024
4. 2048

Solution-

We know, time complexity of merge sort algorithm is Θ(nlogn).

Step-01:

It is given that a merge sort algorithm in the worst case takes 30 seconds for an input of size 64.
So, we have-
k x nlogn = 30 (for n = 64)
k x 64 log64 = 30
k x 64 x 6 = 30
From here, k = 5 / 64

Step-02:

Let n be the maximum input size of a problem that can be solved in 6 minutes (or 360 seconds).
Then, we have-
k x nlogn = 360
(5/64) x nlogn = 360 { Using Result of Step-01 }
nlogn = 72 x 64 = 4608
On solving this equation, we get n = 512.
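The arithmetic in Step-01 and Step-02 can be verified mechanically. The helper below is hypothetical (not from the slides): it calibrates k from the given data point and doubles n until the modeled cost k·n·log₂n exceeds the time budget:

```python
from math import log2

def max_input_size(base_n, base_time, budget):
    """Hypothetical check for the worked problem: with cost model
    k * n * log2(n) calibrated at (base_n, base_time), return the
    largest power-of-two n whose cost fits within budget seconds."""
    k = base_time / (base_n * log2(base_n))  # here k = 30/(64*6) = 5/64
    n = base_n
    while k * (2 * n) * log2(2 * n) <= budget:
        n *= 2
    return n
```

With base_n=64, base_time=30 and budget=360, this returns 512, confirming option (2).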
Heap Sort
Heap sort basically performs two main operations-
• Build a heap H using the elements of the array.
• Repeatedly delete the root element of the heap formed in the 1st phase.
Before knowing more about the heap sort, let's first see a brief description of Heap.

What is a heap?
A heap is a complete binary tree; a binary tree is a tree in which each node can have at most two children. A complete binary tree is a binary tree in which all levels except the last are completely filled, and all nodes in the last level are left-justified.
What is heap sort?
Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to eliminate the elements one by one from the
heap part of the list, and then insert them into the sorted part of the list.
Heapsort is an in-place sorting algorithm.
Now, let's see the algorithm of heap sort.
Heap Sort
Algorithm

HeapSort(arr)
    BuildMaxHeap(arr)
    for i = length(arr) downto 2
        swap arr[1] with arr[i]
        heap_size[arr] = heap_size[arr] - 1
        MaxHeapify(arr, 1)
End

BuildMaxHeap(arr)
    heap_size[arr] = length(arr)
    for i = length(arr)/2 downto 1
        MaxHeapify(arr, i)
End
Heap Sort
MaxHeapify(arr, i)
    L = left(i)
    R = right(i)
    if L ≤ heap_size[arr] and arr[L] > arr[i]
        largest = L
    else
        largest = i
    if R ≤ heap_size[arr] and arr[R] > arr[largest]
        largest = R
    if largest != i
        swap arr[i] with arr[largest]
        MaxHeapify(arr, largest)
End
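The two procedures above translate to runnable Python with 0-based indexing (the pseudocode is 1-based, so the child formulas shift to 2i+1 and 2i+2):

```python
def max_heapify(a, i, heap_size):
    """Sift a[i] down until the subtree rooted at i is a max heap
    (0-based indexing, unlike the 1-based pseudocode above)."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heap_sort(a):
    """Build a max heap, then repeatedly swap the root (maximum) with
    the last heap element and shrink the heap by one."""
    a = list(a)
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # build-max-heap phase
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):      # repeated delete-root phase
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)
    return a
```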
Heap Sort
Working of Heap sort Algorithm
Now, let's see the working of the Heapsort Algorithm.
In heap sort, basically, there are two phases involved in the sorting of elements. By using the heap sort algorithm, they are
as follows -
•The first step includes the creation of a heap by adjusting the elements of the array.
•After the creation of the heap, remove the root element of the heap repeatedly by shifting it to the end of the array, and then restore the heap structure with the remaining elements.

First, we have to construct a heap from the given array and convert it into max heap.
Heap Sort
After converting the given heap into max heap, the array elements are -

Next, we have to delete the root element (89) from the max heap. To delete this node, we have to swap it with the last node,
i.e. (11). After deleting the root element, we again have to heapify it to convert it into max heap.

After swapping the array element 89 with 11, and converting the heap into max-heap, the elements of array are -
Heap Sort
In the next step, again, we have to delete the root element (81) from the max heap. To delete this node, we have to swap it with the
last node, i.e. (54). After deleting the root element, we again have to heapify it to convert it into max heap.

After swapping the array element 81 with 54 and converting the heap into max-heap, the elements of array are -
Heap Sort
In the next step, we have to delete the root element (76) from the max heap again. To delete this node, we have to swap it with the
last node, i.e. (9). After deleting the root element, we again have to heapify it to convert it into max heap.

After swapping the array element 76 with 9 and converting the heap into max-heap, the elements of array are -
Heap Sort
In the next step, again we have to delete the root element (54) from the max heap. To delete this node, we have to swap it with the
last node, i.e. (14). After deleting the root element, we again have to heapify it to convert it into max heap.

After swapping the array element 54 with 14 and converting the heap into max-heap, the elements of array are -
Heap Sort
In the next step, again we have to delete the root element (22) from the max heap. To delete this node, we have to swap it with the
last node, i.e. (11). After deleting the root element, we again have to heapify it to convert it into max heap.

After swapping the array element 22 with 11 and converting the heap into max-heap, the elements of array are -

In the next step, again we have to delete the root element (14) from the max heap. To delete this node, we have to swap it with
the last node, i.e. (9). After deleting the root element, we again have to heapify it to convert it into max heap.
Heap Sort
After swapping the array element 14 with 9 and converting the heap into max-heap, the elements of array are -

In the next step, again we have to delete the root element (11) from the max heap. To delete this node, we have to swap it with the
last node, i.e. (9). After deleting the root element, we again have to heapify it to convert it into max heap.

After swapping the array element 11 with 9, the elements of array are -

Now, heap has only one element left. After deleting it, heap will
be empty.
Heap Sort

After completion of sorting, the array elements are -

Now, the array is completely sorted.


Heap Sort
Heap sort complexity
Now, let's see the time complexity of Heap sort in the best case, average case, and worst case. We will also see the space complexity of Heapsort.

Time Complexity
•Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case time complexity of heap sort is O(n log n).
•Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly ascending and not properly descending. The
average case time complexity of heap sort is O(n log n).
•Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means suppose you have to sort the array
elements in ascending order, but its elements are in descending order. The worst-case time complexity of heap sort is O(n log n).

Case            Time Complexity
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n log n)
SEARCHING
Searching-

•Searching is a process of finding a particular element among several given elements.


•The search is successful if the required element is found.
•Otherwise, the search is unsuccessful.

Searching Algorithms-

Searching Algorithms are a family of algorithms used for the purpose of searching.
LINEAR SEARCHING
•Linear Search is the simplest searching algorithm.
•It traverses the array sequentially to locate the required element.
•It searches for an element by comparing it with each element of the array one by one.
•So, it is also called Sequential Search.

Linear Search Algorithm is applied when-


• No information is given about the array.
• The given array is unsorted or the elements are unordered.
• The list of data items is smaller.
LINEAR SEARCH ALGORITHM
Consider-
•There is a linear array ‘a’ of size ‘n’.
•Linear search algorithm is being used to search an element ‘item’ in this linear array.
•If search ends in success, it sets loc to the index of the element otherwise it sets loc to -1.

Then, Linear Search Algorithm is as follows-


Linear_Search (a , n , item , loc)

Begin
for i = 0 to (n - 1) by 1 do
if (a[i] = item) then
set loc = i
Exit
endif
endfor
set loc = -1
End
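The pseudocode maps directly onto a few lines of Python (returning -1 for failure, matching loc in the algorithm above):

```python
def linear_search(a, item):
    """Compare item with each element in turn; return the index of
    the first match, or -1 if the search fails."""
    for i, value in enumerate(a):
        if value == item:
            return i
    return -1
```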
LINEAR SEARCHING
Time Complexity Analysis-

Linear Search time complexity analysis is done below-

Time Complexity of Linear Search Algorithm is O(n).
Here, n is the number of elements in the linear array.

Best Case-

In the best possible case,
•The element being searched may be found at the first position.
•In this case, the search terminates in success with just one comparison.
•Thus in the best case, linear search algorithm takes O(1) operations.

Worst Case-

• The element being searched may be present at the last position or not present in the array at all.
• In the former case, the search terminates in success with n comparisons.
• In the latter case, the search terminates in failure with n comparisons.
• Thus in the worst case, linear search algorithm takes O(n) operations.
LINEAR SEARCHING
Linear Search Efficiency-

•Linear Search is less efficient when compared with other algorithms like Binary Search and hash tables.
•The other algorithms allow significantly faster searching.

Linear Search Example-

Consider-
•We are given the following linear array.
•Element 15 has to be searched in it using Linear Search Algorithm.

Now,
•Linear Search algorithm compares element 15 with all the elements of the array one by one.
•It continues searching until either the element 15 is found or all the elements are searched.
LINEAR SEARCHING

Step-01:
•It compares element 15 with the 1st element 92.
•Since 15 ≠ 92, the required element is not found.
•So, it moves to the next element.

Step-02:
•It compares element 15 with the 2nd element 87.
•Since 15 ≠ 87, the required element is not found.
•So, it moves to the next element.

Step-03:
•It compares element 15 with the 3rd element 53.
•Since 15 ≠ 53, the required element is not found.
•So, it moves to the next element.

Step-04:
•It compares element 15 with the 4th element 10.
•Since 15 ≠ 10, the required element is not found.
•So, it moves to the next element.

Step-05:
•It compares element 15 with the 5th element 15.
•Since 15 = 15, the required element is found.
•Now, it stops the comparison and returns index 4 at which element 15 is present.
BINARY SEARCHING
• Binary Search is one of the fastest searching algorithms.
• It is used for finding the location of an element in a linear array.
• It works on the principle of divide and conquer technique.

Binary Search Algorithm can be applied only on Sorted arrays.

So, the elements must be arranged in-

Either ascending order if the elements are numbers.


Or dictionary order if the elements are strings.

To apply binary search on an unsorted array,

First, sort the array using some sorting technique.


Then, use binary search algorithm.
BINARY SEARCHING ALGORITHM
Consider-
•There is a linear array ‘a’ of size ‘n’.
•Binary search algorithm is being used to search an element ‘item’ in this linear array.
•If search ends in success, it sets loc to the index of the element otherwise it sets loc to -1.
•Variables beg and end keeps track of the index of the first and last element of the array or sub array in which the element is being searched at that
instant.
•Variable mid keeps track of the index of the middle element of that array or sub array in which the element is being searched at that instant.
Begin
    Set beg = 0
    Set end = n-1
    Set mid = (beg + end) / 2
    while ( (beg <= end) and (a[mid] ≠ item) ) do
        if (item < a[mid]) then
            Set end = mid - 1
        else
            Set beg = mid + 1
        endif
        Set mid = (beg + end) / 2
    endwhile
    if (beg > end) then
        Set loc = -1
    else
        Set loc = mid
    endif
End
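The same algorithm in runnable Python; the test array used here is an assumption consistent with the worked example later (a sorted array with a[1]=10, a[2]=15, a[3]=20):

```python
def binary_search(a, item):
    """Iterative binary search on a sorted array: halve the interval
    [beg, end] each step; return the index of item, or -1 if absent."""
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if a[mid] == item:
            return mid
        elif item < a[mid]:
            end = mid - 1    # continue in the left sub array
        else:
            beg = mid + 1    # continue in the right sub array
    return -1
```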
BINARY SEARCHING ALGORITHM
Explanation

Binary Search Algorithm searches an element by comparing it with the middle most element of the array.
Then, following three cases are possible-

Case-01

If the element being searched is found to be the middle most element, its index is returned.

Case-02

If the element being searched is found to be greater than the middle most element,
then its search is further continued in the right sub array of the middle most element.

Case-03

If the element being searched is found to be smaller than the middle most element,
then its search is further continued in the left sub array of the middle most element.

This iteration keeps on repeating on the sub arrays until the desired element is found
or size of the sub array reduces to zero.
BINARY SEARCHING ALGORITHM
Time Complexity Analysis-

Binary Search time complexity analysis is done below-


•In each iteration or in each recursive call, the search gets reduced to half of the array.
•So for n elements in the array, there are log₂n iterations or recursive calls.

Thus, we have-

Time Complexity of Binary Search Algorithm is O(log2n).


Here, n is the number of elements in the sorted linear array.
BINARY SEARCHING EXAMPLE
Consider-
•We are given the following sorted linear array.
•Element 15 has to be searched in it using Binary Search Algorithm.
BINARY SEARCHING EXAMPLE

Step-01:
•To begin with, we take beg=0 and end=6.
•We compute the location of the middle element as-
mid = (beg + end) / 2 = (0 + 6) / 2 = 3
•Here, a[mid] = a[3] = 20 ≠ 15 and beg < end.
•So, we start the next iteration.

Step-02:
•Since a[mid] = 20 > 15, we take end = mid - 1 = 3 - 1 = 2 whereas beg remains unchanged.
•We compute the location of the middle element as-
mid = (beg + end) / 2 = (0 + 2) / 2 = 1
•Here, a[mid] = a[1] = 10 ≠ 15 and beg < end.
•So, we start the next iteration.

Step-03:
•Since a[mid] = 10 < 15, we take beg = mid + 1 = 1 + 1 = 2 whereas end remains unchanged.
•We compute the location of the middle element as-
mid = (beg + end) / 2 = (2 + 2) / 2 = 2
•Here, a[mid] = a[2] = 15, which matches the element being searched.
•So, our search terminates in success and index 2 is returned.
BINARY SEARCHING
Binary Search Algorithm Advantages-

The advantages of binary search algorithm are-


•It eliminates half of the list from further searching by using the result of each comparison.
•It indicates whether the element being searched is before or after the current position in the list.
•This information is used to narrow the search.
•For large lists of data, it works significantly better than linear search.
Important Note-
For in-memory searching, if the interval to be searched is small,
• Linear search may exhibit better performance than binary search.
• This is because it exhibits better locality of reference.

Binary Search Algorithm Disadvantages-

The disadvantages of binary search algorithm are-
•It employs a recursive approach which requires more stack space.
•Programming binary search algorithm is error prone and difficult.
•The interaction of binary search with the memory hierarchy (caching) is poor, because of its random access nature.
