Searching and Sorting
Quick sort is a fast sorting algorithm used to sort a list of elements. The quick sort algorithm was invented by C. A. R. Hoare.
The quick sort algorithm separates the list of elements into two parts and then sorts each part recursively. That means it uses a divide and
conquer strategy. In quick sort, the partition of the list is performed based on an element called the pivot. Here the pivot element is one of the elements in the
list.
The list is divided into two partitions such that "all elements to the left of the pivot are smaller than the pivot, and all elements to the right of the pivot are
greater than or equal to the pivot".
What is Quick Sort and how is it associated with Algorithms?
Quick Sort is a sorting algorithm commonly used in computer science. Quick Sort is a divide and conquer algorithm. It creates two partitions
holding the elements less than the pivot value and the elements greater than the pivot value, and then recursively sorts the subarrays. There are two basic
operations in the algorithm: swapping items in place and partitioning a section of the array.
Quick sort
It is an algorithm of the Divide & Conquer type.
Divide: Rearrange the elements and split the array into two subarrays, with the pivot element in between, such that each element in the left subarray is less than or
equal to the pivot and each element in the right subarray is greater than the pivot.
Conquer: Recursively sort the two subarrays.
Combine: Combine the already sorted subarrays.
Quick Sort Algorithm: Steps on how it works:
1. Find a “pivot” item in the array. This item is the basis for comparison for a single round.
2. Start a pointer (the left pointer) at the first item in the array.
3. Start a pointer (the right pointer) at the last item in the array.
4. While the value at the left pointer is less than the pivot value, move the left pointer to the right (add 1). Continue until the value at the left pointer is greater than or equal to the pivot value.
5. While the value at the right pointer is greater than the pivot value, move the right pointer to the left (subtract 1). Continue until the value at the right pointer is less than or equal to the pivot value.
6. If the left pointer is less than or equal to the right pointer, swap the values at these locations in the array.
7. Move the left pointer to the right by one and the right pointer to the left by one.
8. If the left pointer and right pointer haven’t crossed, go back to step 4.
Below is an array that needs to be sorted; we will use the quick sort algorithm to sort it.
Step by Step Process
In the quick sort algorithm, partitioning of the list is performed using the following steps...
Step 1 - Consider the first element of the list as pivot (i.e., Element at first position in the list).
Step 2 - Define two variables i and j. Set i and j to first and last elements of the list respectively.
Step 3 - Increment i until list[i] > pivot then stop.
Step 4 - Decrement j until list[j] < pivot then stop.
Step 5 - If i < j then exchange list[i] and list[j].
Step 6 - Repeat steps 3,4 & 5 until i > j.
Step 7 - Exchange the pivot element with list[j] element.
Complexity of the Quick Sort Algorithm
To sort an unsorted list with 'n' elements, we need to make ((n-1)+(n-2)+(n-3)+...+1) = n(n-1)/2 comparisons in the worst case. Note that with the first element as pivot, an already sorted list is itself a worst case, since every partition is maximally unbalanced.
Worst Case: O(n²)
Let 44 be the pivot element, with scanning done from right to left.
Compare 44 with the elements to its right; if a right-side element is smaller than 44, swap them. As 22 is smaller than 44, we swap them.
22 33 11 55 77 90 40 60 99 44 88
Now compare 44 with the elements to its left; a left-side element must be greater than 44 to be swapped. As 55 is greater than 44, we swap them.
22 33 11 44 77 90 40 60 99 55 88
Recursively repeating steps 1 & 2, we eventually get two lists: one to the left of the pivot element 44 and one to its right.
22 33 11 40 77 90 44 60 99 55 88
Now, the elements on the right side of 44 are greater than it, and the elements on the left side are smaller than it.
Merge Sort
Merge sort is yet another sorting algorithm that falls under the category of the Divide and Conquer
technique. It is one of the best sorting techniques, built on a recursive algorithm.
Divide and Conquer Strategy
In this technique, we segment a problem into two halves and solve them individually. After
finding the solution of each half, we merge them back to represent the solution of the main
problem.
Suppose we have an array A, such that our main concern will be to sort the subsection, which
starts at index p and ends at index r, represented by A[p..r].
Divide
If q is assumed to be the central point somewhere between p and r, then we fragment the
subarray A[p..r] into two arrays A[p..q] and A[q+1..r].
Conquer
After splitting the array into two halves, the next step is to conquer. In this step, we individually sort
both of the subarrays A[p..q] and A[q+1..r]. If we have not yet reached the base case, we
again follow the same procedure, i.e., we further segment these subarrays and sort them
separately.
Combine
Once the conquer step reaches the base case, we have our sorted
subarrays A[p..q] and A[q+1..r], which we merge back to form a single sorted
array A[p..r].
Merge Sort algorithm
The MergeSort function keeps on splitting an array into two halves until a condition is met where we try to perform
MergeSort on a subarray of size 1, i.e., p == r.
Merge sort is another sorting technique whose algorithm has an efficient time complexity of O(n log n) and is quite simple to apply. The algorithm is
based on splitting a list into two comparably sized lists, i.e., left and right, sorting each list, and then merging the two sorted lists back together as one.
Merge sort can be done in two types both having similar logic and way of implementation. These are:
•Top down implementation
•Bottom up implementation
The figure below shows how merge sort works:
As you can see in the image given below, the merge sort algorithm recursively divides the array into halves until the base condition is
met, where we are left with only 1 element in the array. Then, the merge function picks up the sorted subarrays and merges them
back to sort the entire array.
The following figure illustrates the dividing (splitting) procedure.
Merge( ) Function Explained Step-By-Step
Consider the following example of an unsorted array, which we are going to sort with the
help of the Merge Sort algorithm.
A= (36,25,40,2,7,80,15)
Step 1: The merge sort algorithm recursively divides the array into equal halves until we
reach atomic values. If there are an odd number of elements in an array, then
one of the halves will have one more element than the other.
Step 2: After dividing the array into two subarrays, we will notice that the order of the elements
within each half is unchanged from the original array. Next, we will further divide these
two arrays into halves.
Step 3: Again, we will divide these arrays until we achieve atomic values, i.e., values that
cannot be further divided.
Step 4: Next, we will merge them back in the same way as they were broken down.
Step 5: For each pair of lists, we first compare the front elements and then combine the lists to form a
new sorted list.
Step 6: In the next iteration, we compare the lists of two data values and merge them
back into lists of four data values, all placed in sorted order.
Merge Sort Applications
The concept of merge sort is applicable in the
following areas:
•Inversion count problem
•External sorting
•E-commerce applications
Bubble Sort
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing two adjacent items at a time and swapping them if
they are in the wrong order. The pass through the list is repeated until no swaps are needed,
which means the list is sorted.
This is the easiest method among all sorting algorithms.
Therefore, the bubble sort algorithm has a time complexity of O(n²) and a space
complexity of O(1), since it needs only a constant amount of extra memory: a temp variable for
swapping.
Space Complexity of Bubble sort
The space complexity of the algorithm is O(1), because only a single additional memory location is required, i.e., for the
temporary variable used for swapping.
FAQs
•What is the best case time complexity of bubble sort?
The time complexity in the best case scenario is O(n) because it has to traverse through all the elements once to recognize that the array is already sorted.
•What is the advantage of bubble sort over other sorting techniques?
•The built-in ability to detect whether the list is sorted efficiently is the only advantage of bubble sort over other sorting techniques.
•When the list is already sorted (which is the best-case scenario), the complexity of bubble sort is only O(n).
•It is faster than other algorithms on an already sorted array, and takes less time to detect whether the input array is sorted or not.
•Why bubble sort is called “bubble” sort?
The “bubble” sort is called so because list elements with greater values than their surrounding elements “bubble” towards the end of the list. For example, after the first
pass, the largest element is bubbled to the rightmost position. After the second pass, the second largest element is bubbled to the second-last position in the list,
and so on.
•Is bubble sort a stable algorithm?
•Bubble sort is a stable algorithm.
•A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.
•Is bubble sort an in-place algorithm?
•Bubble sort is an in-place algorithm.
•An algorithm is said to be in-place if it does not need extra space and produces the output in the same memory that contains the data, by transforming the input
‘in-place’. However, a small constant amount of extra space used for variables is allowed.
•Is Bubble sort slow?
•Bubble sort is slower than the other O(n²) sorting algorithms.
•It is about four times as slow as insertion sort and twice as slow as selection sort.
•It has a good best-case behavior, but is impractically slow on almost all real world data sets and is not considered for implementation in real applications.
•Can bubble sort be implemented recursively?
•Yes.
•Recursive Bubble Sort has no additional performance/implementation advantages, but can be used to understand recursion and sorting concepts better.
•Base Case: If array size is 1, return.
•Do One Pass of normal Bubble Sort. This pass bubbles largest element of current subarray to correct position.
•Recur for all elements except last of current subarray.
Implementation of Bubble sort in C++
#include <iostream>
#include <vector>
using namespace std;

void BubbleSort(vector<int> &arr, int n)
{
    for (int i = 0; i < n - 1; ++i)
    {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; ++j)
        {
            if (arr[j] > arr[j + 1])   // check if adjacent elements are not in order
            {
                swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }   // The value at index n-i-1 is now the maximum of all the values below this index.
        if (!swapped) break;   // no swaps in this pass, so the array is already sorted
    }
}

int main()
{
    vector<int> arr = {5, 6, 2, 6, 9, 0, -1};
    int n = 7;
    BubbleSort(arr, n);
    // Display the sorted data.
    cout << "\nSorted Data: ";
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}
Advantages of Bubble Sort
1. Easily understandable.
2. Does not necessitate any extra memory.
3. The code can be written easily for this algorithm.
4. Minimal space requirement compared with other sorting algorithms.
Disadvantages of Bubble Sort
1. It does not work well when we have large unsorted lists; it necessitates more resources and ends up taking a lot of time.
2. It is only meant for academic purposes, not for practical implementations.
3. It involves on the order of n² steps to sort a list.
Application: Selection Sort
A[] = (7, 4, 3, 6, 5)
1st Iteration:
Set minimum = 7
•Compare a0 and a1
4th Iteration:
Set minimum = 6
•Compare a3 and a4
Therefore, the selection sort algorithm has a time complexity of O(n²) and a space
complexity of O(1), since it needs only a constant amount of extra memory: a temp variable for
swapping.
Insertion Sort
Insertion sort is one of the simplest sorting algorithms because it sorts a single element
at a time. It is not the best sorting algorithm in terms of performance, but it's slightly
more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive sorting
technique.
Let's consider the example of cards to have a better understanding of the logic behind the insertion
sort.
Suppose we have a set of cards in our hand, such that we want to arrange these cards in ascending
order. To sort these cards, we have a number of intuitive ways.
One approach is to initially hold all of the cards in our left hand and start taking cards
one after the other from the left hand, building a sorted arrangement in the right hand.
Assuming the first card is already sorted, we select the next unsorted card. If the unsorted
card is greater than the card in hand, we place it on the right side; otherwise, we place it on
the left side. At any stage during this whole process, the left hand is unsorted, and the right
hand is sorted.
In the same way, we will sort the rest of the unsorted cards by placing them in the correct position.
At each iteration, the insertion algorithm places an unsorted element at its right place.
Insertion Sort works as follows:
1. The first step involves the comparison of the element in question with its adjacent element.
2. If the comparison reveals that the element in question can be inserted at a particular position, then space is created for it by shifting
the other elements one position to the right and inserting the element at the suitable position.
3. The above procedure is repeated until all the elements in the array are at their correct positions.
Let us now understand working with the following example:
Consider the following array: 25, 17, 31, 13, 2
First Iteration: Compare 25 with 17. The comparison shows 17 < 25. Hence, swap 17 and 25.
The array now looks like:
17, 25, 31, 13, 2
First Iteration
Second Iteration: Begin with the second element (25), but it was already swapped into the correct position, so we move ahead to the next
element.
Now hold on to the third element (31) and compare with the ones preceding it.
Since 31> 25, no swapping takes place.
Also, 31> 17, no swapping takes place and 31 remains at its position.
The array after the Second iteration looks like:
17, 25, 31, 13, 2
Second Iteration
Third Iteration: Start the following Iteration with the fourth element (13), and compare it with its preceding elements.
Since 13< 31, we swap the two.
Array now becomes: 17, 25, 13, 31, 2.
But there still exist elements that we haven’t yet compared with 13. Now the comparison takes place between 25 and 13. Since, 13 < 25, we swap
the two.
The array becomes 17, 13, 25, 31, 2.
The last comparison for the iteration is now between 17 and 13. Since 13 < 17, we swap the two.
The array now becomes 13, 17, 25, 31, 2.
Third Iteration
Fourth Iteration: The last iteration calls for the comparison
of the last element (2) with all the preceding elements,
making the appropriate swaps between elements.
Since, 2< 31. Swap 2 and 31.
Array now becomes: 13, 17, 25, 2, 31.
Compare 2 with 25, 17, 13.
Since, 2< 25. Swap 25 and 2.
13, 17, 2, 25, 31.
Compare 2 with 17 and 13.
Since, 2<17. Swap 2 and 17.
Array now becomes:
13, 2, 17, 25, 31.
The last comparison for the Iteration is to compare 2 with
13.
Since 2< 13. Swap 2 and 13.
The array now becomes:
2, 13, 17, 25, 31.
This is the final array after all the corresponding iterations
and swapping of elements.
Fourth Iteration
Insertion sort Implementation in C++:
#include <stdlib.h>
#include <iostream>
using namespace std;
//member functions declaration
void insertionSort(int arr[], int length);
void printArray(int array[], int size);
// main function
int main()
{
int array[6] = {5, 1, 6, 2, 4, 3};
// calling insertion sort function to sort the array
insertionSort(array, 6);
return 0;
}
void insertionSort(int arr[], int length)
{
int i, j, key;
for (i = 1; i < length; i++)
{
key = arr[i];
j = i-1;
while (j >= 0 && arr[j] >key)
{
arr[j+1] = arr[j];
j--;
}
arr[j +1] = key;
}
cout << "Sorted Array: ";
// print the sorted array
printArray(arr, length);
}
// function to print the given array
void printArray(int array[], int size)
{
    for (int j = 0; j < size; j++)
        cout << array[j] << " ";
    cout << endl;
}
Time Complexity Analysis:
Even if we provide an already sorted array to the insertion sort algorithm, it will still execute
the outer for loop, thereby requiring n steps to sort an already sorted array of n elements,
which makes its best-case time complexity a linear function of n.
For an unsorted array, each element may have to be compared with all the other elements, which
means every one of the n elements is compared with up to n other elements. Thus we get n × n, i.e., n²,
comparisons. One can also take a look at other sorting algorithms such as merge sort, quick sort,
selection sort, etc. and understand their complexities.
Worst Case Time Complexity [Big-O]: O(n²)
1st Iteration:
Set key = 22
Compare a1 with a0
2nd Iteration:
Set key = 63
Compare a2 with a1 and a0
Since a2 > a1 > a0, keep the array as it is.
3rd Iteration:
Set key = 14
Compare a3 with a2, a1 and a0
Since a3 is the smallest among all the elements on the left-hand side, place a3 at the beginning of the array.
4th Iteration:
Set key = 55
Compare a4 with a3, a2, a1 and a0.
As a4 < a3, swap both of them.
5th Iteration:
Set key = 36
Compare a5 with a4, a3, a2, a1 and a0.
Since a5 < a2, we will place the elements in their correct positions.
Heapify
Heapify is the procedure that restores the max-heap property at a node by pushing its value down the tree. Since heapify uses recursion, it can be difficult to grasp. So let's first think about how you would heapify a tree
with just three elements.
Heapify base cases
Now let's think of another scenario in which there is more than one level.
The top element isn't a max-heap but all the sub-trees are max-heaps.
To maintain the max-heap property for the entire tree, we will have to keep pushing 2 downwards until it reaches its correct
position.
Build max-heap
To build a max-heap from any tree, we can thus start heapifying each sub-tree from the bottom up and end
up with a max-heap after the function is applied to all the elements including the root element.
In the case of a complete tree, the index of the last non-leaf node is given by n/2 - 1. All nodes after that
are leaf nodes and thus don't need to be heapified.
So, we can build a max-heap by running heapify on every non-leaf node, starting from index n/2 - 1 and moving up to the root.