
UNIT-5: SEARCHING AND SORTING

SEARCHING:
1. Linear Search in an Array

How It Works:
1. The function linear_Search iterates through each
element of the array.
2. If the element matches the target, the function
immediately returns the index of the element.
3. If the function completes the loop without finding the
target, it returns -1 to indicate that the element is not
present in the array.
Algorithm for Linear Search:

1. Start with the first element of the array.
2. Set a loop counter i = 0.
3. Repeat the following steps while i < n:
o Check if arr[i] == target.
o If true, return i (index of the target element).
o Otherwise, increment i by 1.
4. If the loop ends and the target is not found, return -1.
Complexity Analysis:
 Time Complexity:
o Best Case: O(1) (target is the first element).
o Worst Case: O(n) (target is not in the array or is the last element).
o Average Case: O(n), where n is the size of the array.
 Space Complexity:
o O(1), as no extra space is used except for variables.
PROGRAM:
#include <stdio.h>
int linearSearch(int arr[], int size, int target) {
for (int i = 0; i < size; i++) {
if (arr[i] == target) {
return i;
}
}
return -1;
}
int main() {
int arr[] = {10, 20, 30, 40, 50};
int size = sizeof(arr) / sizeof(arr[0]);
int target;
printf("Enter the element to search: ");
scanf("%d", &target);
int result = linearSearch(arr, size, target);
if (result != -1) {
printf("Element %d found at index %d.\n", target, result);
}
else {
printf("Element %d not found in the array.\n", target);
}
return 0;
}
OUTPUT:
Enter the element to search: 30
Element 30 found at index 2.
Enter the element to search: 60
Element 60 not found in the array.
Steps:

1. Start at the first element of the array.


2. Compare the current element with the target value.
3. If it matches, return the index of the element.
4. If it doesn't match, move to the next element.
5. Repeat steps 2-4 until you either find the target or reach
the end of the array.
6. If the element is not found by the time the end of the
array is reached, return -1 or an indication that the
element is not present.

Example:

Let's search for the number 25 in the array [10, 22, 35,
40, 25, 60]:

1. Start at the first element (10), no match.


2. Move to the second element (22), no match.
3. Move to the third element (35), no match.
4. Move to the fourth element (40), no match.
5. Move to the fifth element (25), match found!

Thus, the element 25 is found at index 4.


Advantages:

1. Simplicity: Easy to implement and understand.


2. No Prerequisite Sorting: Works on unsorted and
unordered arrays.
3. Versatility: Can be used with any type of data
structure (arrays, linked lists, etc.).
4. Small Dataset Efficiency: Suitable for small datasets
where the overhead of sorting is unnecessary.
Disadvantages:

1. Inefficient for Large Datasets: Time complexity of
O(n) makes it slow for large arrays.
2. Sequential Access: Each element must be checked
one by one, which is time-consuming.
3. No Early Exit on Ordered Data: Cannot take
advantage of patterns or ordered data for
optimization.
Applications:

1. Searching in small datasets.


2. Searching in unsorted or unordered data.
3. Suitable when the cost of sorting outweighs the
benefits of faster searching.
4. Useful in linked lists or other structures where random
access is not possible.
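Application 4 above can be made concrete: linear search is the natural fit for a singly linked list, where binary search's random access is unavailable. A minimal sketch (struct Node and listSearch are illustrative names, not from the notes):

```c
#include <stddef.h>

/* A minimal singly linked list node. */
struct Node {
    int data;
    struct Node *next;
};

/* Linear search over a linked list: walk node by node, just as the
   array version walks index by index. Returns the zero-based position
   of target, or -1 if it is not in the list. */
int listSearch(struct Node *head, int target) {
    int pos = 0;
    for (struct Node *cur = head; cur != NULL; cur = cur->next) {
        if (cur->data == target)
            return pos;
        pos++;
    }
    return -1;
}
```

The traversal is still O(n) time and O(1) extra space, matching the array analysis above.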

2. Binary Search:
Algorithm
Input:

1. A sorted array arr of size n.


2. A target value target.
Output:

1. Index of the target if found.


2. -1 if the target is not found.
Steps:

1. Start with the entire array and determine the middle


element.
2. If the middle element is the target, return its index.
3. If the target is smaller than the middle element, narrow
the search to the left half of the array.
4. If the target is larger than the middle element, narrow the
search to the right half of the array.
5. Repeat steps 1-4 until the element is found or the search
space is reduced to zero.

Example:

Let's search for the number 35 in the sorted array [10, 22,
25, 35, 40, 60]:

1. low = 0, high = 5, so the middle element is 25 (index 2).
Since 35 is larger than 25, we search the right half
(indices 3 to 5).
2. Now low = 3, high = 5, so the middle element is 40
(index 4). Since 35 is smaller than 40, we search the left
half (indices 3 to 3).
3. low = 3, high = 3, so the middle element is 35 (index 3).
Match found!

Thus, the element 35 is found at index 3.


PROGRAM:
#include <stdio.h>
int binarySearch(int arr[], int size, int target) {
int low = 0, high = size - 1;
while (low <= high) {
int mid = low + (high - low) / 2;
if (arr[mid] == target) {
return mid;
}
else if (arr[mid] < target) {
low = mid + 1;
}
else {
high = mid - 1;
}

}
return -1;
}
int main() {
int arr[] = {1, 3, 5, 7, 9, 11};
int size = sizeof(arr) / sizeof(arr[0]);
int target;
printf("Enter the element to search: ");
scanf("%d", &target);
int result = binarySearch(arr, size, target);
if (result != -1) {
printf("Element %d found at index %d.\n", target, result);
}
else {
printf("Element %d not found in the array.\n", target);
}
return 0;
}
OUTPUT:
Enter the element to search: 7
Element 7 found at index 3.
Advantages:

1. High Efficiency: Significantly faster than linear
search, with a time complexity of O(log n).
2. Predictable Performance: The divide-and-conquer
approach ensures fewer comparisons.
3. Suitable for Large Datasets: Excellent performance
for large arrays when the data is sorted.
Disadvantages:

1. Requires Sorted Data: Only works on sorted arrays;
preprocessing is required.
2. Not Suitable for Linked Structures: Binary search
relies on random access, which is not efficient in
linked lists.
3. Overhead of Sorting: If the dataset is not already
sorted, sorting adds extra overhead (O(n log n)).
Applications:

1. Searching in sorted datasets like arrays or files.


2. Used in search engines to locate information in sorted
indices.
3. Databases to perform efficient lookups.
4. Games and applications where sorted leaderboards
or rankings are used.
5. Efficiently finding elements in read-only or static data.
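Binary search can also be written recursively, mirroring the "narrow to one half" description in the steps above. A sketch (binarySearchRec is an illustrative name, not from the notes):

```c
/* Recursive binary search over a sorted array. Same O(log n) behaviour
   as the loop version in the program above; each call discards half of
   the remaining search space. */
int binarySearchRec(int arr[], int low, int high, int target) {
    if (low > high)
        return -1;                      /* search space is empty */
    int mid = low + (high - low) / 2;   /* overflow-safe midpoint */
    if (arr[mid] == target)
        return mid;
    if (arr[mid] < target)              /* target in the right half */
        return binarySearchRec(arr, mid + 1, high, target);
    return binarySearchRec(arr, low, mid - 1, target);  /* left half */
}
```

Call it as binarySearchRec(arr, 0, size - 1, target); the recursion depth is O(log n).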
When to Use Which?
 Use Linear Search when:
o The dataset is small.
o The dataset is unsorted or unordered.
o Data is frequently modified, making sorting impractical.
 Use Binary Search when:
o The dataset is large.
o The data is sorted or can be sorted efficiently.
o Random access to elements (as in arrays) is available.

Feature              | Linear Search            | Binary Search
---------------------+--------------------------+-------------------------
Best Case            | O(1)                     | O(1)
Average Case         | O(n)                     | O(log n)
Worst Case           | O(n)                     | O(log n)
Works on Sorted Data | No                       | Yes
Implementation       | Simple                   | Slightly complex
Data Requirements    | Unsorted or sorted       | Must be sorted
Efficiency           | Slow for large datasets  | Fast for large datasets
SORTING:
Sorting is the process of arranging elements in a
specific order, typically ascending or descending.
Sorting algorithms are fundamental in computer science
and are used in applications like searching, data
analysis, and optimizing systems.

Types of Sorting
 Bubble Sort
 Selection Sort
 Insertion Sort
 Merge Sort
 Quick Sort
1. Bubble Sort

 How it Works: Repeatedly compares adjacent


elements and swaps them if they are in the wrong
order.
 Time Complexity:
o Best Case: O(n)
o Worst Case: O(n²)
 Space Complexity: O(1)
 Usage: Rarely used in practice due to inefficiency.
 Advantages:
1. Simple Implementation: Easy to implement and
understand.
2. Stable: Maintains the relative order of equal
elements.
3. Detects Sorted Data: Best case is O(n) when the
array is already sorted (with an early-exit check).
 Disadvantages:
1. Inefficient: Very slow for large datasets, with a worst-
case time complexity of O(n²).
2. High Number of Comparisons and Swaps:
Inefficient due to repeated passes over the data.
3. Not Practical: Rarely used in real-world applications.
PROGRAM:
#include <stdio.h>
int main() {
int n = 5, i, j;
int a[5] = {44, 11, 33, 22, 56};
for (i = 0; i < n - 1; i++) {
for (j = 0; j < n - i - 1; j++) {
if (a[j] > a[j + 1]) {
int temp;
temp = a[j];
a[j] = a[j + 1];
a[j + 1] = temp;
}
}
}
printf("After bubble sorting: ");
for (i = 0; i < n; i++) {
printf("%d ", a[i]);
}
return 0;
}
OUTPUT:
After bubble sorting: 11 22 33 44 56
Bubble Sort Steps:
1. Compare adjacent elements: Compare the first element
with the second. If the first element is greater than the
second, swap them.
2. Repeat for all pairs of adjacent elements: Move to the
next pair and repeat the comparison and swap if necessary.
3. Repeat the entire process for each element: After the
first pass, the largest element is in its correct position at the
end of the array.
4. Continue until the array is sorted: If no swaps were made
in a pass, the array is sorted.

Steps for Bubble Sort:


1. Start at the beginning of the array and compare each pair
of adjacent elements.
2. Swap the elements if the current element is greater than
the next element.
3. Repeat the process for all elements in the array, each time
reducing the range of comparison as the largest element
moves to the end.
4. Stop when no more swaps are needed.
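The stopping condition in step 4 can be implemented with a "swapped" flag, which the program above omits; this is what gives bubble sort its O(n) best case on already-sorted input. A sketch (bubbleSortEarlyExit is an illustrative name, not from the notes):

```c
/* Bubble sort with an early-exit check: if a full pass over the array
   makes no swaps, the array is already sorted and we can stop. */
void bubbleSortEarlyExit(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;                     /* no swaps seen this pass */
        for (int j = 0; j < n - i - 1; j++) {
            if (a[j] > a[j + 1]) {
                int temp = a[j];             /* swap adjacent pair */
                a[j] = a[j + 1];
                a[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped)                        /* clean pass: sorted */
            break;
    }
}
```

On a sorted array the first pass makes no swaps, so the function returns after n-1 comparisons.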
Example Problem:
Sort the array {64, 25, 12, 22, 11} using Bubble Sort.
Step-by-Step Breakdown
Step 1: Initial Array

Initial array: {64, 25, 12, 22, 11}


1. First Pass:
o Compare 64 and 25. Since 64 > 25, swap them.

The array becomes: {25, 64, 12, 22, 11}.


o Compare 64 and 12. Since 64 > 12, swap them.

The array becomes: {25, 12, 64, 22, 11}.


o Compare 64 and 22. Since 64 > 22, swap them.

The array becomes: {25, 12, 22, 64, 11}.


o Compare 64 and 11. Since 64 > 11, swap them.

The array becomes: {25, 12, 22, 11, 64}.


o After the first pass, the largest element 64 is in its

correct position.
2. Second Pass:
o Compare 25 and 12. Since 25 > 12, swap them.

The array becomes: {12, 25, 22, 11, 64}.


o Compare 25 and 22. Since 25 > 22, swap them.

The array becomes: {12, 22, 25, 11, 64}.


o Compare 25 and 11. Since 25 > 11, swap them.

The array becomes: {12, 22, 11, 25, 64}.


o After the second pass, the second largest element 25

is in its correct position.


3. Third Pass:
o Compare 12 and 22. No swap needed since 12 <

22.
o Compare 22 and 11. Since 22 > 11, swap them.

The array becomes: {12, 11, 22, 25, 64}.


o After the third pass, the third largest element 22 is in
its correct position.
4. Fourth Pass:
o Compare 12 and 11. Since 12 > 11, swap them.

The array becomes: {11, 12, 22, 25, 64}.


o After the fourth pass, no further swaps are needed,

and the array is sorted.

Final Sorted Array: {11, 12, 22, 25, 64}

2. Selection Sort

 How it Works: Selects the smallest (or largest)


element and places it in its correct position.
 Time Complexity:
o Best/Worst Case: O(n²)
 Space Complexity: O(1)
 Usage: Used for small datasets or when memory is
limited.
 Advantages:
1. Simplicity: Easy to implement and understand.
2. Memory Efficient: Does not require additional
memory; space complexity is O(1).
3. Works Well for Small Datasets: Can be useful for
small arrays where performance is not critical.
 Disadvantages:
1. Inefficient for Large Datasets: Time complexity is
O(n²) for all cases.
2. Not Stable: May not maintain the relative order of
equal elements.
3. No Early Exit: Always iterates through the entire
dataset, even if sorted.
PROGRAM:
#include <stdio.h>
void selectionSort(int arr[], int n) {
for (int i = 0; i < n - 1; i++) {
int minIndex = i;
for (int j = i + 1; j < n; j++) {
if (arr[j] < arr[minIndex]) {
minIndex = j;
}
}
int temp = arr[minIndex];
arr[minIndex] = arr[i];
arr[i] = temp;
}
}
int main() {
int arr[] = {5, 2, 9, 1, 5, 6};
int n = sizeof(arr) / sizeof(arr[0]);
selectionSort(arr, n);
for (int i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
return 0;
}
OUTPUT:
1 2 5 5 6 9
Selection Sort Steps:
1. Start with the first element: The first element is
considered to be the smallest.
2. Find the smallest element in the unsorted portion:
Compare the selected element with all the remaining
elements.
3. Swap the smallest element with the current element:
Once the smallest element is found, swap it with the first
unsorted element.
4. Repeat: Move to the next element and repeat the process
for the remaining unsorted part of the array.


3. Insertion Sort

 How it Works: Builds the sorted array one element at


a time by comparing and inserting elements into their
correct position.
 Time Complexity:
o Best Case: O(n) (nearly sorted input)
o Worst Case: O(n²)
 Space Complexity: O(1)
 Usage: Efficient for small or nearly sorted datasets.
 Advantages:
1. Efficient for Small or Nearly Sorted Data: Best
case is O(n) when the data is already sorted or almost
sorted.
2. Stable: Maintains the relative order of equal
elements.
3. In-Place Sorting: Requires no additional memory.
 Disadvantages:
1. Inefficient for Large Datasets: Worst-case time
complexity is O(n²).
2. Performance Degrades with Size: Performance
degrades significantly as the array size increases.
PROGRAM:
#include <stdio.h>
void insertionSort(int arr[], int n) {
for (int i = 1; i < n; i++) {
int key = arr[i];
int j = i - 1;
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j--;
}
arr[j + 1] = key;
}
}
int main() {
int arr[] = {5, 2, 9, 1, 5, 6};
int n = sizeof(arr) / sizeof(arr[0]);
insertionSort(arr, n);
for (int i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
return 0;
}
OUTPUT:
1 2 5 5 6 9
Insertion Sort Steps:
1. Start with the second element: Consider the first element
as already sorted.
2. Compare the current element with the sorted part: For
each new element, compare it with the elements before it.
3. Shift elements: If the current element is smaller than the
element being compared, shift the larger element to the
right.
4. Insert the current element: Place the current element in
its correct position in the sorted part of the array.
5. Repeat: Repeat steps 2 to 4 for each element until the
whole array is sorted.

Steps for Insertion Sort:


1. Start with the second element and compare it with the
first element.
2. Insert the element into the sorted part of the array.
3. Shift the elements to make room for the new element if
necessary.
4. Repeat the process for all elements until the entire array is
sorted.
Example Problem:
Sort the array {64, 25, 12, 22, 11} using Insertion
Sort.
Step-by-Step Breakdown
Step 1: Initial Array

Initial array: {64, 25, 12, 22, 11}


1. Start with the second element 25:
o Compare 25 with 64. Since 25 < 64, shift 64 to the

right.
o Insert 25 in the correct position. The array becomes

{25, 64, 12, 22, 11}.


2. Move to the third element 12:
o Compare 12 with 64. Since 12 < 64, shift 64 to the

right.
o Compare 12 with 25. Since 12 < 25, shift 25 to the

right.
o Insert 12 in the correct position. The array becomes

{12, 25, 64, 22, 11}.


3. Move to the fourth element 22:
o Compare 22 with 64. Since 22 < 64, shift 64 to the

right.
o Compare 22 with 25. Since 22 < 25, shift 25 to the

right.
o Insert 22 in the correct position. The array becomes

{12, 22, 25, 64, 11}.


4. Move to the fifth element 11:
o Compare 11 with 64. Since 11 < 64, shift 64 to the
right.
o Compare 11 with 25. Since 11 < 25, shift 25 to the
right.
o Compare 11 with 22. Since 11 < 22, shift 22 to the
right.
o Compare 11 with 12. Since 11 < 12, shift 12 to the
right.
o Insert 11 in the correct position. The array becomes
{11, 12, 22, 25, 64}.

Final Sorted Array: {11, 12, 22, 25, 64}

4. Merge Sort

 How it Works: Divides the array into halves,


recursively sorts them, and merges the sorted halves.
 Time Complexity: O(n log n)
 Space Complexity: O(n)
 Usage: Preferred for large datasets or when stable
sorting is needed.
 Advantages:
1. Stable: Maintains the relative order of equal
elements.
2. Consistent Time Complexity: Always runs in
O(n log n), regardless of the input data.
3. Efficient for Large Datasets: Performs well on
datasets too large to fit in memory (external sorting).
4. Divide and Conquer: Naturally suited for parallel
processing.
 Disadvantages:
1. High Memory Usage: Requires additional space
O(n) for temporary arrays.
2. Slower for Small Datasets: Overhead of merging
makes it slower for small arrays.
3. Complex Implementation: More challenging to
implement compared to simpler algorithms like bubble
or insertion sort.

PROGRAM:
#include <stdio.h>
void merge(int arr[], int left, int mid, int right) {
int n1 = mid - left + 1;
int n2 = right - mid;
int L[n1], R[n2];
for (int i = 0; i < n1; i++) {
L[i] = arr[left + i];
}
for (int j = 0; j < n2; j++) {
R[j] = arr[mid + 1 + j];
}

int i = 0, j = 0, k = left;
while (i < n1 && j < n2) {
if (L[i] <= R[j]) {
arr[k] = L[i];
i++;
} else {
arr[k] = R[j];
j++;
}
k++;
}
while (i < n1) {
arr[k] = L[i];
i++;
k++;
}

while (j < n2) {
arr[k] = R[j];
j++;
k++;
}
}
void mergeSort(int arr[], int left, int right) {
if (left < right) {
int mid = left + (right - left) / 2;
mergeSort(arr, left, mid);
mergeSort(arr, mid + 1, right);
merge(arr, left, mid, right);
}
}
int main() {
int arr[] = {5, 2, 9, 1, 5, 6};
int n = sizeof(arr) / sizeof(arr[0]);
mergeSort(arr, 0, n - 1);
for (int i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
return 0;
}

OUTPUT:
1 2 5 5 6 9

Merge Sort Algorithm


1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves to produce a
sorted array.
Steps for Merge Sort
1. Base Case: If the array has one or zero elements, it is
already sorted, so return the array.
2. Divide the Array: Divide the array into two halves.
3. Recursively Sort: Sort both halves by calling merge sort on
them.
4. Merge the Sorted Halves: Combine the two sorted halves
into one sorted array.
Example Problem:
Sort the array {64, 25, 12, 22, 11} using Merge Sort.
Step-by-Step Breakdown
Step 1: Initial Array

Initial array: {64, 25, 12, 22, 11}

1. Divide the Array into Two Halves:


o Left half: {64, 25, 12}

o Right half: {22, 11}

Step 2: Sorting Left Half {64, 25, 12}

1. Divide {64, 25, 12} into two halves:


o Left half: {64}

o Right half: {25, 12}


2. Sorting Left Half {64}:
o It's already sorted since it has one element.

3. Sorting Right Half {25, 12}:


o Divide it into {25} and {12}.

o Both are sorted since they have only one element

each.
o Merge {25} and {12}: Compare elements:

 12 < 25, so merge them as {12, 25}.

4. Merge {64} and {12, 25}:


o Compare the elements:

 12 < 64, so 12 goes into the merged array.

 25 < 64, so 25 goes into the merged array.

 Finally, 64 goes into the merged array.

o Result of merging: {12, 25, 64}

Step 3: Sorting Right Half {22, 11}

1. Divide {22, 11} into two halves:


o Left half: {22}

o Right half: {11}

2. Sorting Left Half {22}:


o It's already sorted since it has one element.

3. Sorting Right Half {11}:


o It's already sorted since it has one element.

4. Merge {22} and {11}:


o Compare the elements:

 11 < 22, so 11 goes into the merged array.

 Then 22 goes into the merged array.

o Result of merging: {11, 22}


Step 4: Merging the Two Sorted Halves

1. Merge the two sorted halves {12, 25, 64} and {11,
22}:
o Compare the elements:

 11 < 12, so 11 goes into the merged array.

 12 < 22, so 12 goes into the merged array.

 22 < 25, so 22 goes into the merged array.

 25 < 64, so 25 goes into the merged array.

 Finally, 64 goes into the merged array.

o Final merged sorted array: {11, 12, 22, 25,

64}

5. Quick Sort

 How it Works: Selects a pivot, partitions the array


into two halves around the pivot, and sorts them
recursively.
 Time Complexity:
o Best/Average Case: O(n log n)
o Worst Case: O(n²) (can be avoided with good pivot
selection)
 Space Complexity: O(log n) (recursive stack).
 Usage: One of the fastest sorting algorithms for large
datasets.
 Advantages:
o Highly Efficient: Average time complexity is
O(n log n), making it faster than many other
algorithms.
o In-Place Sorting: Requires little additional
memory, O(log n) for the recursive stack.
o Flexible Partitioning: Can be optimized with
good pivot selection (e.g., random pivot or
median-of-three).
 Disadvantages:
o Worst-Case Performance: Can degrade to
O(n²) if the pivot is poorly chosen (e.g., always
choosing the first or last element in a nearly
sorted array).
o Not Stable: Does not maintain the relative order
of equal elements.
o Recursive Nature: Can cause stack overflow for
very large datasets if not implemented carefully.
PROGRAM:
#include <stdio.h>
int partition(int arr[], int low, int high) {
int pivot = arr[high];
int i = low - 1;
for (int j = low; j < high; j++) {
if (arr[j] < pivot) {
i++;
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
}
int temp = arr[i + 1];
arr[i + 1] = arr[high];
arr[high] = temp;
return i + 1;
}
void quickSort(int arr[], int low, int high) {
if (low < high) {
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
int main() {
int arr[] = {5, 2, 9, 1, 5, 6};
int n = sizeof(arr) / sizeof(arr[0]);
quickSort(arr, 0, n - 1);
for (int i = 0; i < n; i++) {
printf("%d ", arr[i]);
}
return 0;
}
OUTPUT:
1 2 5 5 6 9
Quick Sort Steps:
1. Pick a pivot element: Choose an element from the array as
the pivot. Different strategies can be used to select the
pivot, such as picking the first element, the last element, the
middle element, or a random element.
2. Partition the array: Re-arrange the array so that all
elements smaller than the pivot are placed before it, and all
elements greater than the pivot are placed after it. The pivot
will be placed at its correct position.
3. Recursively sort the sub-arrays: Apply the same process
to the sub-arrays to the left and right of the pivot.
4. Stop when sub-arrays have fewer than two elements:
Once the sub-arrays contain only one element or none, the
array is sorted.
Steps for Quick Sort:
1. Base Case: If the array has one or zero elements, it is
already sorted.
2. Partitioning: Select a pivot and rearrange the elements.
3. Recursive Sorting: Recursively sort the two sub-arrays
formed by partitioning.
Example Problem:
Sort the array {64, 25, 12, 22, 11} using Quick Sort.
Step-by-Step Breakdown
Step 1: Initial Array

Initial array: {64, 25, 12, 22, 11}

1. Choose a Pivot: Let's choose the last element, 11, as the


pivot.
2. Partitioning:
o Initially, we have i = -1 (an index for elements

smaller than the pivot).


o We will iterate through the array and compare each

element with the pivot:


 Compare 64 with 11: No swap, since 64 > 11.

 Compare 25 with 11: No swap, since 25 > 11.

 Compare 12 with 11: No swap, since 12 > 11.


 Compare 22 with 11: No swap, since 22 > 11.
 The pivot 11 itself is not compared; the loop
stops just before it.

After partitioning, we swap the pivot 11 with 64 (the


element at i+1), resulting in the partitioned array {11,
25, 12, 22, 64}.

Now, the pivot 11 is in its correct position, and we


recursively apply Quick Sort to the sub-arrays left and right
of the pivot.
Step 2: Sorting the Left Sub-array {}

Since the left sub-array is empty ({}), we don't need to do


anything for it.
Step 3: Sorting the Right Sub-array {25, 12, 22, 64}

1. Choose a Pivot: Let's choose 64 as the pivot.


2. Partitioning:
o Initially, i = 0 (one position before the start of the
sub-array).
o Iterate through the sub-array and compare each
element with the pivot 64:
 Compare 25 with 64: 25 < 64, so i advances and
25 swaps with itself (no change).
 Compare 12 with 64: 12 < 64, so i advances and
12 swaps with itself (no change).
 Compare 22 with 64: 22 < 64, so i advances and
22 swaps with itself (no change).
After partitioning, the pivot 64 swaps with the element
at i+1, which is 64 itself, leaving the array {11, 25,
12, 22, 64}.

Now, the pivot 64 is in its correct position, and we


recursively apply Quick Sort to the sub-arrays left and right
of the pivot.
Step 4: Sorting the Left Sub-array {25, 12, 22}

1. Choose a Pivot: Let's choose 22 as the pivot.


2. Partitioning:
o Initially, i = 0 (one position before the start of the
sub-array).
o Compare 25 with 22: No swap (since 25 > 22).
o Compare 12 with 22: Since 12 < 22, i advances and
12 swaps with 25. The array becomes {11, 12, 25,
22, 64}.
o Finally, the pivot 22 swaps with the element at i+1
(25). The array becomes {11, 12, 22, 25, 64}.

The pivot 22 is now in its correct position.

Step 5: Sorting the Sub-array {25}

Since the sub-array {25} contains only one element, it is


already sorted.
Final Sorted Array: {11, 12, 22, 25, 64}
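The random-pivot optimization mentioned in the advantages can be sketched by swapping a randomly chosen element into the pivot slot before the usual partition; randomizedPartition and quickSortRandom are illustrative names, not from the notes:

```c
#include <stdlib.h>

/* Tiny swap helper used by the partition routine. */
static void swapInts(int *a, int *b) {
    int t = *a;
    *a = *b;
    *b = t;
}

/* Same last-element partition scheme as the program above, except a
   random element is first moved into the pivot slot. This makes the
   O(n^2) worst case unlikely for any fixed input order. */
int randomizedPartition(int arr[], int low, int high) {
    int r = low + rand() % (high - low + 1);  /* random index in [low, high] */
    swapInts(&arr[r], &arr[high]);            /* move it into the pivot slot */
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swapInts(&arr[i], &arr[j]);
        }
    }
    swapInts(&arr[i + 1], &arr[high]);        /* place pivot at its position */
    return i + 1;
}

void quickSortRandom(int arr[], int low, int high) {
    if (low < high) {
        int pi = randomizedPartition(arr, low, high);
        quickSortRandom(arr, low, pi - 1);
        quickSortRandom(arr, pi + 1, high);
    }
}
```

Seeding with srand() is optional here; the point is only that the pivot choice no longer depends on the input's existing order.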

Comparison of Sorting Algorithms


Algorithm      | Best Case  | Worst Case | Stability
---------------+------------+------------+------------
Bubble Sort    | O(n)       | O(n²)      | Stable
Selection Sort | O(n²)      | O(n²)      | Not Stable
Insertion Sort | O(n)       | O(n²)      | Stable
Merge Sort     | O(n log n) | O(n log n) | Stable
Quick Sort     | O(n log n) | O(n²)      | Not Stable

Which Algorithm to Use?


 Small datasets: Insertion Sort or Selection Sort.
 Nearly sorted datasets: Insertion Sort.
 Large datasets:
o If memory is not a concern: Merge Sort.

o If memory is limited: Quick Sort (with optimized

pivot selection).
 Real-time systems: Quick Sort, as it is faster on
average.
Comparison Table

Algorithm      | Advantages                                     | Disadvantages
---------------+------------------------------------------------+----------------------------------------------------
Bubble Sort    | Simple, stable, detects sorted data            | Slow, inefficient, not suitable for large datasets
Selection Sort | Memory efficient, simple for small data        | Not stable, slow for large datasets
Insertion Sort | Efficient for small/nearly sorted data, stable | Inefficient for large datasets
Quick Sort     | Fast, efficient for large datasets, in-place   | Worst-case O(n²), not stable, recursive
Merge Sort     | Consistent O(n log n), stable                  | High memory usage, slower for small datasets

Applications of Sorting
1. Searching: Sorted arrays allow for efficient searching
using algorithms like binary search.
2. Data Organization: Sorting is fundamental in
organizing data for processing.
3. Data Compression: Helps in data deduplication and
compression techniques.
4. Databases: Sorting optimizes query performance.
5. Computational Geometry: Sorting is used in solving
problems like finding the convex hull.
6. Machine Learning: Sorting is used for preprocessing
data or ranking results.
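Application 1 above ties the two halves of this unit together: sort once, then answer lookups in O(log n). A sketch combining the insertion sort and binary search functions from the programs earlier in this unit:

```c
/* Insertion sort, as in the program earlier in this unit. */
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {  /* shift larger elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                 /* insert into the sorted part */
    }
}

/* Binary search, as in the program earlier in this unit. */
int binarySearch(int arr[], int size, int target) {
    int low = 0, high = size - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}
```

The O(n²) sorting cost is paid once; every subsequent lookup costs only O(log n), which is why databases and indices keep data sorted.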
