
Sorting

Bubble Sort Algorithm with Complexity

Introduction to Bubble Sort

 Definition: Bubble Sort is a simple comparison-based sorting algorithm. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. This process is repeated until the list is sorted.
 Origin of Name: The algorithm gets its name because smaller elements "bubble" to the top of the list, while larger elements sink to the bottom.

Bubble Sort Algorithm

Algorithm Steps:

1. Start at the beginning of the list.
2. Compare the first two elements.
3. If the first element is greater than the second, swap them.
4. Move to the next pair of elements and repeat the comparison and swapping steps.
5. Continue this process until you reach the end of the list.
6. Repeat the entire process for the remaining unsorted elements.
7. The process is repeated until no more swaps are needed, indicating that the list is sorted.

Bubble Sort Example

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break
    return arr

# Example usage
arr = [64, 34, 25, 12, 22, 11, 90]
sorted_arr = bubble_sort(arr)
print("Sorted array is:", sorted_arr)

Explanation:

o The outer loop runs n times, where n is the length of the list.
o The inner loop runs n-i-1 times for each outer loop iteration.
o Elements are compared and swapped if needed.
o The swapped flag is used to optimize the algorithm by stopping early if no swaps are made during a full inner-loop pass.
Complexity Analysis

 Time Complexity:
o Worst Case: O(n^2)
 Occurs when the list is in reverse order.
 Each element needs to be compared with every other element.
o Average Case: O(n^2)
 Comparisons and swaps are performed on average, leading to quadratic
time complexity.
o Best Case: O(n)
 Occurs when the list is already sorted.
 Only one pass is needed to confirm the list is sorted.
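These cases can be checked empirically with a small instrumented version of bubble sort (an illustrative sketch; the comparison counter is added here for demonstration and is not part of the standard algorithm):

```python
def bubble_sort_count(arr):
    # Bubble sort that also counts comparisons, to illustrate
    # the best-case vs worst-case behavior described above.
    arr = list(arr)
    comparisons = 0
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(n - i - 1):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break
    return arr, comparisons

# Already sorted: one pass of n-1 comparisons (best case, O(n))
print(bubble_sort_count([1, 2, 3, 4, 5])[1])   # 4
# Reverse order: n(n-1)/2 comparisons (worst case, O(n^2))
print(bubble_sort_count([5, 4, 3, 2, 1])[1])   # 10
```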

Advantages and Disadvantages

Advantages:

o Simple and easy to implement.
o Requires only a small amount of additional memory space.
o Can be efficient for small datasets or nearly sorted lists.
o Stable sort: does not change the relative order of equal elements.

Disadvantages:

o Inefficient for large datasets.
o High time complexity (O(n^2)) makes it impractical for most real-world applications.

Insertion Sort with Complexity

Introduction to Insertion Sort

 Definition: Insertion Sort is a simple, comparison-based sorting algorithm that builds the final sorted array one item at a time. It is much like sorting playing cards in your hands.
 Approach: It iterates through the list, taking one element at a time and inserting it into its correct position relative to the already sorted part of the list.

Insertion Sort Algorithm

Algorithm Steps:

1. Assume the first element is sorted.
2. Take the next element.
3. Compare it with the elements in the sorted part of the list (moving from right to left).
4. Shift all the sorted elements that are greater than the new element to the right.
5. Insert the new element into its correct position.
6. Repeat the process for all elements.

Insertion Sort in C

void insertionSort(int arr[], int n) {
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

Explanation:

o Start from the second element (index 1) and assume the first element is sorted.
o For each element, compare it with elements in the sorted part.
o Shift elements in the sorted part to the right if they are greater than the current
element.
o Insert the current element into its correct position.
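The same logic can be sketched in Python as a direct translation of the C function above (names are illustrative):

```python
def insertion_sort(arr):
    # Build the sorted portion one element at a time.
    for i in range(1, len(arr)):
        key = arr[i]          # element to insert
        j = i - 1
        # Shift larger sorted elements one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key      # insert into its correct position
    return arr

# Example usage
print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```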

Complexity Analysis

 Time Complexity:
o Worst Case: O(n^2)
 Occurs when the list is in reverse order.
 Each element needs to be compared with every other element in the
sorted part.
o Average Case: O(n^2)
 Comparisons and shifts are performed on average, leading to quadratic
time complexity.
o Best Case: O(n)
 Occurs when the list is already sorted.
 Only one pass is needed to confirm the list is sorted, with no shifting
required.

Advantages and Disadvantages

Advantages:
o Simple and easy to implement.
o Efficient for small datasets and nearly sorted lists.
o Stable sort: does not change the relative order of equal elements.
o In-place sort: requires only a small amount of additional memory.

Disadvantages:

o Inefficient for large datasets due to its O(n^2) time complexity.


o Performance degrades quickly as the size of the input grows.

Selection Sort Algorithm

Selection sort is a simple comparison-based sorting algorithm. It has an average, worst-case, and best-case time complexity of O(n^2), where n is the number of items being sorted. Despite its simplicity, it is not suitable for large data sets due to its quadratic time complexity.

How Selection Sort Works

1. Initial Setup: Begin with the entire list unsorted.
2. Finding the Minimum: Find the smallest (or largest, depending on sorting order) element in the unsorted portion of the list.
3. Swapping: Swap the found minimum element with the first element of the unsorted portion.
4. Update and Repeat: Consider the first element of the unsorted portion as sorted, then repeat the process for the remaining unsorted portion of the list.
5. Termination: Continue until the entire list is sorted.

Algorithm Steps

For an array A of length n:

1. Outer Loop: Iterate over the array from the first to the second-last element (index i
from 0 to n-2).
2. Find Minimum: For each i, find the minimum element in the subarray from A[i] to
A[n-1].
3. Swap: Swap the found minimum element with A[i].

function selectionSort(A: array of n items)
    for i = 0 to n-2 do
        minIndex = i
        for j = i+1 to n-1 do
            if A[j] < A[minIndex] then
                minIndex = j
        swap A[i] with A[minIndex]
end function
Selection sort function in C
void selection_sort(int arr[], int n) {
    int i, j, min_index;
    // Traverse through all array elements
    for (i = 0; i < n - 1; i++) {
        // Find the minimum element in the unsorted part of the array
        min_index = i;
        for (j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_index]) {
                min_index = j;
            }
        }
        // Swap the found minimum element with the first element
        int temp = arr[i];
        arr[i] = arr[min_index];
        arr[min_index] = temp;
    }
}

Example

Consider sorting the array [29, 10, 14, 37, 13]:

1. Initial array: [29, 10, 14, 37, 13]
2. First pass: Find the minimum element from index 0 to 4 (minimum is 10).
o Swap 10 with 29.
o Array after first pass: [10, 29, 14, 37, 13]
3. Second pass: Find the minimum element from index 1 to 4 (minimum is 13).
o Swap 13 with 29.
o Array after second pass: [10, 13, 14, 37, 29]
4. Third pass: Find the minimum element from index 2 to 4 (minimum is 14).
o 14 is already in the correct position.
o Array after third pass: [10, 13, 14, 37, 29]
5. Fourth pass: Find the minimum element from index 3 to 4 (minimum is 29).
o Swap 29 with 37.
o Array after fourth pass: [10, 13, 14, 29, 37]
6. Final sorted array: [10, 13, 14, 29, 37]
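The passes above can be reproduced with a short Python sketch (an illustrative translation of the C function; the per-pass print is added for demonstration):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the minimum element in the unsorted part arr[i:]
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it into position i
        arr[i], arr[min_index] = arr[min_index], arr[i]
        print(f"After pass {i + 1}: {arr}")
    return arr

selection_sort([29, 10, 14, 37, 13])
# Final array: [10, 13, 14, 29, 37]
```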

Complexity Analysis

1. Time Complexity:
o Best Case: O(n^2)
o Average Case: O(n^2)
o Worst Case: O(n^2)
o The time complexity is dominated by the nested loops: the outer loop runs n-1 times, and the inner loop runs an average of n/2 times.

Advantages and Disadvantages

Advantages:

 Simple to understand and implement.
 Performs well on small lists.

Disadvantages:

 Inefficient on large lists due to quadratic time complexity.
 Not a stable sort; equal elements may not retain their original relative order.

Heapsort

Heapsort is an efficient comparison-based sorting algorithm. It is based on the binary heap data structure and has a time complexity of O(n log n). Heapsort is not a stable sort, but it is in-place, meaning it requires only a constant amount of additional storage space.

How Heapsort Works

1. Build a Max-Heap: Transform the array into a max-heap, a complete binary tree
where the value of each node is greater than or equal to the values of its children.
2. Extract Maximum Element: Swap the root of the max-heap with the last element of
the heap and reduce the heap size by one. Restore the max-heap property by
heapifying the root.
3. Repeat: Repeat the extraction process until the heap size is reduced to one.

Algorithm Steps

1. Build Max-Heap:
o Start from the last non-leaf node and heapify each node up to the root.
2. Heapsort:
o Swap the root (maximum value) with the last element of the heap.
o Reduce the heap size by one.
o Heapify the root to restore the max-heap property.
o Repeat the process for the remaining elements.

Pseudocode

function heapsort(A: array of n items)
    buildMaxHeap(A)
    for i = n-1 downto 1 do
        swap A[0] with A[i]
        heapSize = heapSize - 1
        maxHeapify(A, 0)
end function

function buildMaxHeap(A: array of n items)
    heapSize = n
    for i = ⌊n/2⌋ - 1 downto 0 do
        maxHeapify(A, i)
end function

function maxHeapify(A: array of n items, i: index)
    left = 2*i + 1
    right = 2*i + 2
    largest = i
    if left < heapSize and A[left] > A[largest] then
        largest = left
    if right < heapSize and A[right] > A[largest] then
        largest = right
    if largest ≠ i then
        swap A[i] with A[largest]
        maxHeapify(A, largest)
end function
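The pseudocode above can be sketched as a runnable Python version (an illustrative translation; heapSize is passed as a parameter here rather than kept as a shared variable):

```python
def max_heapify(arr, heap_size, i):
    # Sift arr[i] down until the subtree rooted at i is a max-heap.
    left = 2 * i + 1
    right = 2 * i + 2
    largest = i
    if left < heap_size and arr[left] > arr[largest]:
        largest = left
    if right < heap_size and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        max_heapify(arr, heap_size, largest)

def heapsort(arr):
    n = len(arr)
    # Build a max-heap: heapify every non-leaf node, bottom up
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(arr, n, i)
    # Repeatedly move the maximum (root) to the end of the array
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]
        max_heapify(arr, i, 0)   # heap now occupies arr[0..i-1]
    return arr

print(heapsort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```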

Complexity Analysis

1. Time Complexity:
o Building the Max-Heap: O(n)
 Although each individual heapify call can take O(log n) time, most nodes sit near the leaves of the tree, and a tighter analysis shows the total work is O(n).
o Heapsort Process: O(n log n)
 Each of the n elements is extracted from the heap, and each extraction involves O(log n) time for heapifying the root.

Therefore, the overall time complexity is O(n log n).


(Figures illustrating the operation of BUILD-MAX-HEAP and HEAPSORT omitted.)

Merge Sort with Complexity

The concept of Divide and Conquer involves three steps:

1. Divide the problem into multiple subproblems.
2. Solve the subproblems. The idea is to break the problem down into atomic subproblems, which are solved directly.
3. Combine the solutions of the subproblems to find the solution of the original problem.

So, the merge sort working rule involves the following steps:

1. Divide the unsorted array into subarrays, each containing a single element.
2. Take adjacent pairs of single-element subarrays and merge them to form sorted arrays of 2 elements.
3. Repeat the process until a single sorted array is obtained.
ALGORITHM MERGE-SORT(A, p, r)
    if p < r
        then q ← ⌊(p + r)/2⌋
             MERGE-SORT(A, p, q)
             MERGE-SORT(A, q+1, r)
             MERGE(A, p, q, r)

Here we call MERGE-SORT(A, 0, length(A)-1) to sort the complete array.
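A complete recursive version can be sketched in Python (an illustrative implementation of the pseudocode above; the helper copies each half into temporary lists before merging):

```python
def merge_sort(arr, p, r):
    # Sort arr[p..r] in place, following the pseudocode above.
    if p < r:
        q = (p + r) // 2           # midpoint
        merge_sort(arr, p, q)      # sort left half
        merge_sort(arr, q + 1, r)  # sort right half
        merge(arr, p, q, r)        # merge the two sorted halves

def merge(arr, p, q, r):
    # Merge the sorted runs arr[p..q] and arr[q+1..r].
    left = arr[p:q + 1]
    right = arr[q + 1:r + 1]
    i = j = 0
    k = p
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # Copy any leftover elements (only one of the runs can be non-empty)
    arr[k:r + 1] = left[i:] + right[j:]

a = [38, 27, 43, 3, 9, 82, 10]
merge_sort(a, 0, len(a) - 1)
print(a)  # [3, 9, 10, 27, 38, 43, 82]
```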


# Python program to merge two sorted arrays

# Merge arr1[0..n1-1] and arr2[0..n2-1] into arr3[0..n1+n2-1]
def mergeArrays(arr1, arr2, n1, n2):
    arr3 = [None] * (n1 + n2)
    i = 0
    j = 0
    k = 0

    # Traverse both arrays, copying the smaller element each time
    while i < n1 and j < n2:
        if arr1[i] < arr2[j]:
            arr3[k] = arr1[i]
            k = k + 1
            i = i + 1
        else:
            arr3[k] = arr2[j]
            k = k + 1
            j = j + 1

    # Store remaining elements of first array
    while i < n1:
        arr3[k] = arr1[i]
        k = k + 1
        i = i + 1

    # Store remaining elements of second array
    while j < n2:
        arr3[k] = arr2[j]
        k = k + 1
        j = j + 1

    print("Array after merging")
    for i in range(n1 + n2):
        print(str(arr3[i]), end=" ")
At each level of the merge recursion, about n comparisons are needed in total, and there are log2 n levels, so there are roughly n ⋅ log2 n comparison operations overall.

Time complexity can be calculated from the number of split operations and the number of merge operations:

O((n-1) + n ⋅ log2 n) = O(n ⋅ log2 n)

The number of splitting operations, n-1, can be dropped from the Big O expression because n ⋅ log2 n dominates for large n.
