Searching and Sorting

Uploaded by Anshu Jayswal

Quick Sort Algorithm

Quick sort is a fast sorting algorithm used to sort a list of elements. The quick sort algorithm was invented by C. A. R. Hoare.
The quick sort algorithm separates the list of elements into two parts and then sorts each part recursively; that is, it uses a divide and conquer strategy. In quick sort, the partition of the list is performed based on an element called the pivot, where the pivot element is one of the elements in the list.
The list is divided into two partitions such that "all elements to the left of the pivot are smaller than the pivot and all elements to the right of the pivot are greater than or equal to the pivot".
What is Quick Sort and how is it associated with Algorithms?
Quick Sort is a sorting algorithm commonly used in computer science. Quick Sort is a divide and conquer algorithm. It creates two subarrays to hold elements less than the pivot value and elements greater than the pivot value, and then recursively sorts the subarrays. There are two basic operations in the algorithm: swapping items in place and partitioning a section of the array.

Quick sort
It is an algorithm of the Divide & Conquer type.
Divide: Rearrange the elements and split the array into two sub-arrays with a pivot element in between, such that each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-array is greater than the pivot element.
Conquer: Recursively sort the two sub-arrays.
Combine: Combine the already sorted arrays.
Quick Sort Algorithm: Steps on how it works:
1. Find a "pivot" item in the array. This item is the basis for comparison for a single round.
2. Start a pointer (the left pointer) at the first item in the array.
3. Start a pointer (the right pointer) at the last item in the array.
4. While the value at the left pointer in the array is less than the pivot value, move the left pointer to the right (add 1). Continue until the value at the left pointer is greater than or equal to the pivot value.
5. While the value at the right pointer in the array is greater than the pivot value, move the right pointer to the left (subtract 1). Continue until the value at the right pointer is less than or equal to the pivot value.
6. If the left pointer is less than or equal to the right pointer, then swap the values at these locations in the array.
7. Move the left pointer to the right by one and the right pointer to the left by one.
8. If the left pointer and right pointer don't meet, go to step 4.
Below is an image of an array that needs to be sorted. We will use the Quick Sort Algorithm to sort this array:
Step by Step Process
In the Quick sort algorithm, partitioning of the list is performed using the following steps...
•Step 1 - Consider the first element of the list as the pivot (i.e., the element at the first position in the list).
•Step 2 - Define two variables i and j. Set i and j to the first and last elements of the list respectively.
•Step 3 - Increment i until list[i] > pivot, then stop.
•Step 4 - Decrement j until list[j] < pivot, then stop.
•Step 5 - If i < j, then exchange list[i] and list[j].
•Step 6 - Repeat steps 3, 4 & 5 until i > j.
•Step 7 - Exchange the pivot element with list[j].
Complexity of the Quick Sort Algorithm
To sort an unsorted list with 'n' elements, we need to make ((n-1)+(n-2)+(n-3)+...+1) = n(n-1)/2 comparisons in the worst case. The worst case occurs when the partitions are maximally unbalanced, for example when the first element is chosen as the pivot and the list is already sorted.

Worst Case : O(n²)

Best Case : O(n log n)

Average Case : O(n log n)
Example of Quick Sort:
44 33 11 55 77 90 40 60 99 22 88

Let 44 be the pivot element, with scanning done from right to left.
Compare 44 with the elements on its right; if a right-side element is smaller than 44, swap them. As 22 is smaller than 44, swap them.
22 33 11 55 77 90 40 60 99 44 88

Now compare 44 with the elements on its left; a left-side element must be swapped if it is greater than 44. As 55 is greater than 44, swap them.
22 33 11 44 77 90 40 60 99 55 88

Recursively repeat steps 1 & 2 until we get two lists: one to the left of the pivot element 44 and one to its right.
22 33 11 40 77 90 44 60 99 55 88

Swap with 77:
22 33 11 40 44 90 77 60 99 55 88

Now, the elements on the right side and left side of 44 are greater than and smaller than 44 respectively.
Merge Sort
Merge sort is yet another sorting algorithm that falls under the category of Divide and Conquer
technique. It is one of the best sorting techniques that successfully build a recursive algorithm.
Divide and Conquer Strategy
In this technique, we segment a problem into two halves and solve them individually. After finding the solution of each half, we merge them back to represent the solution of the main problem.
Suppose we have an array A, where our main concern is to sort the subsection that starts at index p and ends at index r, represented by A[p..r].
Divide
If q is assumed to be the central point somewhere between p and r, then we fragment the subarray A[p..r] into two arrays A[p..q] and A[q+1..r].
Conquer
After splitting the array into two halves, the next step is to conquer. In this step, we individually sort both of the subarrays A[p..q] and A[q+1..r]. If we have not yet reached the base case, we again follow the same procedure, i.e., we further segment these subarrays and sort them separately.
Combine
Once the conquer step reaches the base step and we have obtained the sorted subarrays A[p..q] and A[q+1..r], we merge them back to form a single sorted array A[p..r].
Merge Sort algorithm
The MergeSort function keeps on splitting an array into two halves until a condition is met where we try to perform
MergeSort on a subarray of size 1, i.e., p == r.
Merge sort is another sorting technique whose algorithm has an efficient space-time complexity of O(n log n), and it is fairly straightforward to implement. This algorithm is based on splitting a list into two comparably sized lists, i.e., left and right, sorting each list, and then merging the two sorted lists back together as one.
Merge sort can be done in two types both having similar logic and way of implementation. These are:
•Top down implementation
•Bottom up implementation
Below given figure shows how Merge Sort works:
As you can see in the image given below, the merge sort algorithm recursively divides the array into halves until the base condition is met, where we are left with only 1 element in the array. Then the merge function picks up the sorted sub-arrays and merges them back to sort the entire array.
The following figure illustrates the dividing (splitting) procedure.
Merge( ) Function Explained Step-By-Step
Consider the following example of an unsorted array, which we are going to sort with the
help of the Merge Sort algorithm.
A= (36,25,40,2,7,80,15)
Step 1: The merge sort algorithm iteratively divides the array into equal halves until we achieve an atomic value. If there is an odd number of elements in the array, then one of the halves will have more elements than the other half.
Step 2: After dividing the array into two subarrays, we will notice that the order of elements is unchanged from the original array. Now we will further divide these two arrays into halves.
Step 3: Again, we will divide these arrays until we achieve an atomic value, i.e., a value that cannot be further divided.
Step 4: Next, we will merge them back in the same way as they were broken down.
Step 5: For each list, we will first compare the elements and then combine them to form a new sorted list.
Step 6: In the next iteration, we will compare the lists of two data values and merge them back into lists of four data values, all placed in sorted order.
Merge Sort Applications
The concept of merge sort is applicable in the
following areas:
•Inversion count problem
•External sorting
•E-commerce applications
Bubble Sort
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which means the list is sorted.
This is the easiest method among all sorting algorithms.

How Bubble Sort Works


The bubble sort starts with the very first index and makes its element the bubble element. Then it compares the bubble element, which is currently our first index element, with the next element. If the bubble element is greater and the second element is smaller, the two are swapped.
After swapping, the second element becomes the bubble element. Now we compare the second element with the third as we did in the earlier step, and swap them if required. The same process is followed until the last element.
We follow the same process for the rest of the iterations. After each iteration, we will notice that the largest element present in the unsorted part has reached the last index.
For each iteration, the bubble sort will compare up to the last unsorted element.
Once all the elements are sorted in ascending order, the algorithm terminates.
Consider the following example of an unsorted array that we will sort with the help of the Bubble Sort algorithm.
Initially:

Pass 1:
•Compare a0 and a1. As a0 < a1, the array will remain as it is.
•Compare a1 and a2.

Pass 2:
•Compare a0 and a1. As a0 < a1, the array will remain as it is.
•Compare a1 and a2. Here a1 < a2, so the array will remain as it is.
•Compare a2 and a3. In this case, a2 > a3, so both of them will get swapped.

Pass 3:
•Compare a0 and a1. As a0 < a1, the array will remain as it is.
•Compare a1 and a2. Now a1 > a2, so both of them will get swapped.

Pass 4:
•Compare a0 and a1. Here a0 > a1, so we will swap both of them.

Complexity Analysis
Time Complexity of Bubble sort
•Best case scenario: The best case occurs when the array is already sorted. In this case, no swapping happens in the first iteration (the swapped variable stays false), so we break out of the loop after the very first pass. Hence, the time complexity in the best case is O(n), because the algorithm still has to traverse all the elements once.
•Worst case and average case scenario: In Bubble Sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons will be: Sum = (n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1 = n(n-1)/2

Complexity Analysis of Bubble Sort


Input: Given n input elements.
Output: Number of steps incurred to sort the list.
Logic: If we are given n elements, then in the first pass it will do n-1 comparisons; in the second pass, n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is n(n-1)/2.

Therefore, the bubble sort algorithm has a time complexity of O(n²) and a space complexity of O(1), because it needs only a constant amount of extra memory (a temp variable for swapping).
Space Complexity of Bubble sort
The space complexity of the algorithm is O(1), because only a single additional memory location is required, i.e., for the temporary variable used for swapping.
FAQs
•What is the best case time complexity of bubble sort?
The time complexity in the best case scenario is O(n) because it has to traverse through all the elements once to recognize that the array is already sorted.
•What is the advantage of bubble sort over other sorting techniques?
•The built-in ability to detect efficiently whether the list is already sorted is the main advantage of bubble sort over other sorting techniques.
•When the list is already sorted (the best-case scenario), the complexity of bubble sort is only O(n).
•It is faster than other O(n²) algorithms in the case of a sorted array and takes less time to detect whether the input array is sorted or not.
•Why is bubble sort called "bubble" sort?
The "bubble" sort is called so because list elements with greater values than their surrounding elements "bubble" towards the end of the list. For example, after the first pass, the largest element is bubbled to the rightmost position; after the second pass, the second largest element is bubbled to the second-last position in the list, and so on.
•Is bubble sort a stable algorithm?
•Bubble sort is a stable algorithm.
•A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.
•Is bubble sort an in-place algorithm?
•Bubble sort is an in-place algorithm.
•An algorithm is said to be in-place if it does not need extra space and produces the output in the same memory that contains the data, by transforming the input 'in-place'. However, a small constant amount of extra space for variables is allowed.
•Is Bubble sort slow?
•Bubble sort is slower than the other O(n2) sorting algorithms.
•It is about four times as slow as insertion sort and twice as slow as selection sort.
•It has a good best-case behavior, but is impractically slow on almost all real world data sets and is not considered for implementation in real applications.
•Can bubble sort be implemented recursively?
•Yes.
•Recursive Bubble Sort has no additional performance/implementation advantages, but can be used to understand recursion and sorting concepts better.
•Base Case: If array size is 1, return.
•Do One Pass of normal Bubble Sort. This pass bubbles largest element of current subarray to correct position.
•Recur for all elements except last of current subarray.
Implementation of Bubble sort in C++
#include <iostream>
#include <vector>
using namespace std;

void BubbleSort(vector<int> &arr, int n)
{
    for (int i = 0; i < n - 1; ++i)
    {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; ++j)
        {
            if (arr[j] > arr[j + 1]) // check if adjacent elements are not in order
            {
                swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        } // Value at n-i-1 will be the maximum of all the values below this index.
        if (!swapped) break;
    }
}

int main()
{
    vector<int> arr = {5, 6, 2, 6, 9, 0, -1};
    int n = 7;
    BubbleSort(arr, n);
    // Display the sorted data.
    cout << "\nSorted Data: ";
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}
Advantages of Bubble Sort
1. Easily understandable.
2. Does not require any extra memory.
3. The code can be written easily for this algorithm.
4. Minimal space requirement compared with other sorting algorithms.
Disadvantages of Bubble Sort
1. It does not work well on large unsorted lists; it requires more resources and ends up taking too much time.
2. It is only meant for academic purposes, not for practical implementations.
3. It involves on the order of n² steps to sort a list.
Application

Understand the working of Bubble sort


•Bubble sort is mainly used for educational purposes, helping students understand the foundations of sorting.
•It is used to identify whether a list is already sorted. When the list is already sorted (the best-case scenario), the complexity of bubble sort is only O(n).
•In real life, bubble sort can be visualised as people in a queue who want to stand in height-wise sorted order swapping positions among themselves until everyone is standing in increasing order of height.
Selection Sort
The selection sort improves on the bubble sort by making only a single swap for each pass through the list. In order to do this, a selection sort searches for the biggest value as it makes a pass and, after finishing the pass, places it in its proper location. As with a bubble sort, after the first pass the biggest item is in the right place; after the second pass, the next biggest is in place. This procedure continues and requires n-1 passes to sort n items, since the final item must be in place after the (n-1)th pass.
How Selection Sort works
In the selection sort, first of all, we set the initial element as the minimum.
Now we compare the minimum with the second element. If the second element turns out to be smaller than the minimum, we make it the new minimum and then move on to compare it with the third element.
Otherwise, if the second element is greater than the minimum (which is our first element), we do nothing and move on to the third element, comparing it with the minimum.
We repeat this process until we reach the last element; at the end of the pass, the minimum is swapped into place.
After the completion of each iteration, we will notice that the minimum has reached the start of the unsorted list.
For each iteration, we start the indexing from the first element of the unsorted list. We repeat these steps until the list gets sorted and all the elements are correctly positioned.
Consider the following example of an unsorted array that we will sort with the help of the Selection Sort algorithm.

A[] = (7, 4, 3, 6, 5)
1st Iteration:
Set minimum = 7
•Compare a0 and a1

As, a0 > a1, set minimum = 4.


•Compare a1 and a2
As, a1 > a2, set minimum = 3.
•Compare a2 and a3

As, a2 < a3, set minimum= 3.


•Compare a2 and a4

As, a2 < a4, set minimum =3.


Since 3 is the smallest element, we will swap a0 and a2.
2nd Iteration:
Set minimum = 4
•Compare a1 and a2

As, a1 < a2, set minimum = 4.


•Compare a1 and a3
As, A[1] < A[3], set minimum = 4.
•Compare a1 and a4

Again, a1 < a4, set minimum = 4.


Since the minimum is already placed in the correct position, there will be no swapping.
3rd Iteration:
Set minimum = 7
•Compare a2 and a3

As, a2 > a3, set minimum = 6.


•Compare a3 and a4
As, a3 > a4, set minimum = 5.
Since 5 is the smallest element among the leftover unsorted elements, we will swap 7 and 5.

4th Iteration:
Set minimum = 6
•Compare a3 and a4

As a3 < a4, set minimum = 6.


Since the minimum is already placed in the correct position, there will be no swapping.
Complexity Analysis of Selection Sort
Input: Given n input elements.
Output: Number of steps incurred to sort a list.
Logic: If we are given n elements, then in the first pass it will do n-1 comparisons; in the second pass, n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is n(n-1)/2.

Therefore, the selection sort algorithm has a time complexity of O(n²) and a space complexity of O(1), because it needs only a constant amount of extra memory (a temp variable for swapping).
Insertion Sort
Insertion sort is one of the simplest sorting algorithms because it sorts a single element at a time. It is not the best sorting algorithm in terms of performance, but it is slightly more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive sorting technique.
Let's consider the example of cards to have a better understanding of the logic behind the insertion
sort.
Suppose we have a set of cards in our hand, such that we want to arrange these cards in ascending
order. To sort these cards, we have a number of intuitive ways.
One such way is to initially hold all of the cards in our left hand and take cards one after another from the left hand, building a sorted arrangement in the right hand.
Assuming the first card to be already sorted, we select the next unsorted card. If the unsorted card is greater than the cards already held, we place it on the right side; otherwise, to the left side. At any stage during this whole process, the left hand holds the unsorted cards and the right hand holds the sorted ones.
In the same way, we will sort the rest of the unsorted cards by placing them in the correct position.
At each iteration, the insertion algorithm places an unsorted element at its right place.
Insertion Sort works as follows:
1. The first step involves the comparison of the element in question with its adjacent element.
2. If the comparison reveals that the element in question can be inserted at a particular position, then space is created for it by shifting the other elements one position to the right and inserting the element at the suitable position.
3. The above procedure is repeated until all the elements in the array are at their proper positions.
Let us now understand working with the following example:
Consider the following array: 25, 17, 31, 13, 2
First Iteration: Compare 25 with 17. The comparison shows 17< 25. Hence swap 17 and 25.
The array now looks like:
17, 25, 31, 13, 2
First Iteration
Second Iteration: Begin with the second element (25), but it was already swapped into the correct position, so we move ahead to the next element.
Now hold on to the third element (31) and compare with the ones preceding it.
Since 31> 25, no swapping takes place.
Also, 31> 17, no swapping takes place and 31 remains at its position.
The array after the Second iteration looks like:
17, 25, 31, 13, 2

Second Iteration
Third Iteration: Start the next iteration with the fourth element (13) and compare it with its preceding elements.
Since 13 < 31, we swap the two.
The array now becomes: 17, 25, 13, 31, 2.
But there still exist elements that we haven't yet compared with 13. Now the comparison takes place between 25 and 13. Since 13 < 25, we swap the two.
The array becomes 17, 13, 25, 31, 2.
The last comparison for the iteration is now between 17 and 13. Since 13 < 17, we swap the two.
The array now becomes 13, 17, 25, 31, 2.
Third Iteration
Fourth Iteration: The last iteration calls for the comparison of the last element (2) with all the preceding elements, making the appropriate swaps between elements.
Since 2 < 31, swap 2 and 31.
The array now becomes: 13, 17, 25, 2, 31.
Compare 2 with 25, 17, 13.
Since 2 < 25, swap 25 and 2.
13, 17, 2, 25, 31.
Compare 2 with 17 and 13.
Since 2 < 17, swap 2 and 17.
The array now becomes:
13, 2, 17, 25, 31.
The last comparison for the iteration is to compare 2 with 13.
Since 2 < 13, swap 2 and 13.
The array now becomes:
2, 13, 17, 25, 31.
This is the final array after all the corresponding iterations
and swapping of elements.
Fourth Iteration
Insertion sort Implementation in C++:
#include <iostream>
using namespace std;

// member function declarations
void insertionSort(int arr[], int length);
void printArray(int array[], int size);

// main function
int main()
{
    int array[6] = {5, 1, 6, 2, 4, 3};
    // calling insertion sort function to sort the array
    insertionSort(array, 6);
    return 0;
}

void insertionSort(int arr[], int length)
{
    int i, j, key;
    for (i = 1; i < length; i++)
    {
        key = arr[i];
        j = i - 1;
        // shift elements greater than key one position to the right
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
    cout << "Sorted Array: ";
    // print the sorted array
    printArray(arr, length);
}

// function to print the given array
void printArray(int array[], int size)
{
    for (int j = 0; j < size; j++)
        cout << array[j] << " ";
}
Time Complexity Analysis:
Even though insertion sort is efficient, if we provide an already sorted array to the insertion sort algorithm, it will still execute the outer for loop, requiring n steps to sort an already sorted array of n elements. This makes its best-case time complexity a linear function of n.
For an unsorted array, each element may have to be compared with all the other elements, which means up to n comparisons for each of the n elements, i.e., n x n = n² comparisons in total. One can also look at other sorting algorithms such as Merge Sort, Quick Sort, Selection Sort, etc. and understand their complexities.
Worst Case Time Complexity [Big-O]: O(n²)

Best Case Time Complexity [Big-omega]: O(n)

Average Time Complexity [Big-theta]: O(n²)
Selection Sort
Selection sort is a simple comparison-based sorting algorithm. It is in-place and needs no extra memory.
The idea behind this algorithm is pretty simple. We divide the array into two parts: sorted and unsorted. The left part is the sorted subarray and the right part is the unsorted subarray. Initially, the sorted subarray is empty and the unsorted subarray is the complete given array.
We perform the steps given below until the unsorted subarray becomes empty:
1.Pick the minimum element from the unsorted subarray.
2.Swap it with the leftmost element of the unsorted subarray.
3.Now the leftmost element of the unsorted subarray becomes the rightmost element of the sorted subarray and is no longer part of the unsorted subarray.

A selection sort works as follows (in each step below, the original figures marked the sorted and unsorted parts of the array, the leftmost element of the unsorted part, and the minimum element of the unsorted part):
This is our initial array A = [5, 2, 6, 7, 2, 1, 0, 3].

Leftmost element of unsorted part = A[0]; minimum element of unsorted part = A[6].
We will swap A[0] and A[6], then make A[0] part of the sorted subarray.

Leftmost element of unsorted part = A[1]; minimum element of unsorted part = A[5].
We will swap A[1] and A[5], then make A[1] part of the sorted subarray.

Leftmost element of unsorted part = A[2]; minimum element of unsorted part = A[4].
We will swap A[2] and A[4], then make A[2] part of the sorted subarray.

Leftmost element of unsorted part = A[3]; minimum element of unsorted part = A[5].
We will swap A[3] and A[5], then make A[3] part of the sorted subarray.

Leftmost element of unsorted part = A[4]; minimum element of unsorted part = A[7].
We will swap A[4] and A[7], then make A[4] part of the sorted subarray.

Leftmost element of unsorted part = A[5]; minimum element of unsorted part = A[6].
We will swap A[5] and A[6], then make A[5] part of the sorted subarray.

Leftmost element of unsorted part = A[6]; minimum element of unsorted part = A[7].
We will swap A[6] and A[7], then make A[6] part of the sorted subarray.

This is the final sorted array: A = [0, 1, 2, 2, 3, 5, 6, 7].


Selection sort program in C++:
#include <iostream>
#include <vector>
using namespace std;

int findMinIndex(vector<int> &A, int start)
{
    int min_index = start;
    ++start;
    while (start < (int)A.size())
    {
        if (A[start] < A[min_index])
            min_index = start;
        ++start;
    }
    return min_index;
}

void selectionSort(vector<int> &A)
{
    for (int i = 0; i < (int)A.size(); ++i)
    {
        int min_index = findMinIndex(A, i);
        if (i != min_index)
            swap(A[i], A[min_index]);
    }
}

int main()
{
    vector<int> A = {5, 2, 6, 7, 2, 1, 0, 3};
    selectionSort(A);
    for (int num : A)
        cout << num << ' ';
    return 0;
}
How Insertion Sort Works
1. We will start by assuming the very first element of the array is already sorted. Inside the key, we will store the second element. Next, we will compare our first element with the key; if the key is found to be smaller than the first element, we will interchange their indexes, i.e., place the key at the first index. After doing this, we will notice that the first two elements are sorted.
2. Now we will move on to the third element and compare it with the elements on its left. If it is the smallest, we will place it at the first index. Otherwise, if it is greater than the first element and smaller than the second, we will place it between the first and second elements. After doing this, we will have our first three elements sorted.
3. Similarly, we will sort the rest of the elements and place them in their correct positions.
Consider the following example of an unsorted array that we will sort with the help of the Insertion Sort algorithm.
Initially,

1st Iteration:
Set key = 22
Compare a1 with a0

Since a0 > a1, swap both of them.

2nd Iteration:
Set key = 63
Compare a2 with a1 and a0
Since a2 > a1 > a0, keep the array as it is.

3rd Iteration:
Set key = 14
Compare a3 with a2, a1 and a0
Since a3 is the smallest among all the elements on the left-hand side, place a3 at the beginning of the array.

4th Iteration:
Set key = 55
Compare a4 with a3, a2, a1 and a0.
As a4 < a3, swap both of them.
5th Iteration:
Set key = 36
Compare a5 with a4, a3, a2, a1 and a0.
Since a5 < a2, we will place the elements in their correct positions.

Hence the array is arranged in ascending order, so no more swapping is required.


Therefore, the insertion sort algorithm has a time complexity of O(n²) and a space complexity of O(1), because it needs only a constant amount of extra memory (a key variable to perform the insertions).
Insertion Sort Applications
The insertion sort algorithm is used in the following
cases:
•When the array contains only a few elements to sort.
•When the array is almost sorted, since only a few shifts are needed.
Advantages of Insertion Sort
1.It is simple to implement.
2.It is efficient on small datasets.
3.It is stable (does not change the relative order of
elements with equal keys)
4.It is in-place (only requires a constant amount O
(1) of extra memory space).
5.It is an online algorithm: it can sort a list as the elements are received.
Linear Search Algorithm
What is Search?
Search is the process of finding a value in a list of values. In other words, searching is the process of locating the position of a given value in a list of values.
Linear Search Algorithm (Sequential Search Algorithm)
The linear search algorithm finds a given element in a list of elements with O(n) time complexity, where n is the total number of elements in the list. The search process starts by comparing the search element with the first element in the list. If both match, the result is "element found"; otherwise, the search element is compared with the next element in the list. This is repeated until the search element has been compared with the last element in the list; if that last element also doesn't match, the result is "Element not found in the list". That means the search element is compared with the list element by element.

Linear search is implemented using following steps...


•Step 1 - Read the search element from the user.
•Step 2 - Compare the search element with the first element in the list.
•Step 3 - If both match, then display "Given element is found!!!" and terminate the function.
•Step 4 - If both do not match, then compare the search element with the next element in the list.
•Step 5 - Repeat steps 3 and 4 until the search element has been compared with the last element in the list.
•Step 6 - If the last element in the list also doesn't match, then display "Element is not found in the list" and terminate the function.
Example
Consider the following list of elements and the element to be searched...
Binary Search Algorithm
In this section, we will discuss the Binary Search Algorithm. Searching is the process of finding a particular element in a list. If the element is present in the list, the process is called successful and returns the location of that element; otherwise, the search is called unsuccessful.
Linear Search and Binary Search are the two popular searching techniques. Here we will discuss the Binary Search
Algorithm.
Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in some list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two
halves, and the item is compared with the middle element of the list. If the match is found
then, the location of the middle element is returned. Otherwise, we search into either of the
halves depending upon the result produced through the match.
What is Heap Data Structure?
Heap is a special tree-based data structure. A binary tree is said to follow the heap data structure if
•it is a complete binary tree
•all nodes in the tree satisfy the property that they are greater than or equal to their children, i.e., the largest element is at the root, both its children are smaller than the root, and so on. Such a heap is called a max-heap. If instead all nodes are smaller than or equal to their children, it is called a min-heap.
The following example diagram shows Max-Heap and Min-Heap.
Max Heap and Min Heap
How to "heapify" a tree
Starting from a complete binary tree, we can modify it to become a Max-Heap by running a
function called heapify on all the non-leaf elements of the heap.

Since heapify uses recursion, it can be difficult to grasp. So let's first think about how you would heapify a tree
with just three elements.
Heapify base cases
Now let's think of another scenario in which there is more than one level.
The top element isn't a max-heap but all the sub-trees are max-heaps.
To maintain the max-heap property for the entire tree, we will have to keep pushing 2 downwards until it reaches its correct
position.
Build max-heap
To build a max-heap from any tree, we can thus start heapifying each sub-tree from the bottom up and end
up with a max-heap after the function is applied to all the elements including the root element.
In the case of a complete tree, the index of the last non-leaf node is given by n/2 - 1. All nodes after that are leaf nodes and thus don't need to be heapified.
So, we can build a maximum heap as

Create array and calculate i


Steps to build max heap for heap sort
As shown in the above diagram, we start by heapifying the lowest, smallest sub-trees and gradually move up until we reach the root element.
Working of Heap Sort
1.Since the tree satisfies the Max-Heap property, the largest item is stored at the root node.
2.Swap: Remove the root element and put it at the end of the array (nth position). Put the last item of the tree (heap) in the vacant place.
3.Remove: Reduce the size of the heap by 1.
4.Heapify: Heapify the root element again so that we have the highest element at the root.
5.The process is repeated until all the items of the list are sorted.
