
FUNDAMENTALS OF DATA STRUCTURES

UNIT-4

Sorting Algorithms
Sorting is the process of arranging the elements of an array so that they are placed in either
ascending or descending order. For example, consider an array A = {A1, A2, A3, A4, ..., An};
the array is said to be in ascending order if the elements of A are arranged such that A1 ≤ A2 ≤
A3 ≤ A4 ≤ ... ≤ An.

Consider an array;
int A[10] = { 5, 4, 10, 2, 30, 45, 34, 14, 18, 9 };
The Array sorted in ascending order will be given as;
A[] = { 2, 4, 5, 9, 10, 14, 18, 30, 34, 45 }
There are many techniques by which sorting can be performed. In this section of the tutorial, we
will discuss each method in detail.
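For reference, the C++ standard library already provides std::sort; the short sketch below (not one of the methods discussed in this unit) simply verifies the expected ascending result for the example array.

// Sketch: checking the expected output above with std::sort
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int A[10] = { 5, 4, 10, 2, 30, 45, 34, 14, 18, 9 };
    sort(A, A + 10);            // sorts in ascending order
    for (int x : A)
        cout << x << " ";       // 2 4 5 9 10 14 18 30 34 45
    cout << endl;
    return 0;
}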

Types of Sorting Algorithms


The common sorting algorithms are summarized below, each with a short description.

1. Bubble Sort: The simplest sorting method. It repeatedly compares each element with its
adjacent element and swaps them if they are out of order, so that on each pass the largest
remaining element moves to the highest index of the array.

2. Insertion Sort: As the name suggests, insertion sort inserts each element of the array into
its proper place within an already-sorted portion. It is a very simple method, similar to the
way a hand of playing cards is arranged while playing bridge.

3. Merge Sort: Merge sort follows the divide and conquer approach: the list is first divided
into two halves, each half is sorted recursively using merge sort, and the two sorted halves
are then merged to form the final sorted array.

4. Quick Sort: Quick sort is a highly efficient divide and conquer algorithm that performs
O(n log n) comparisons on average. Like merge sort, it works by partitioning the array
around a pivot element.

5. Heap Sort: In heap sort, a min heap or max heap is built from the array elements
(depending on the desired order), and the elements are sorted by repeatedly deleting the
root element of the heap.

6. Radix Sort: In radix sort, elements are sorted digit by digit, much like sorting names
alphabetically letter by letter. It is a linear-time sorting algorithm used for integers.

7. Selection Sort: Selection sort finds the smallest element in the array and places it in the
first position, then finds the second smallest element and places it in the second position.
This process continues until all elements are in their correct order. Its running time is
O(n²), and in practice it is typically slower than insertion sort.

Bubble sort
Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent
elements if they are in the wrong order. This algorithm is not suitable for large data sets because
its average and worst-case time complexity is O(n²).

Bubble Sort Algorithm


Let’s assume that arr is an array with n members (elements):

1. begin BubbleSort(arr)
2.    for pass = 1 to n-1
3.       for i = 0 to n-pass-1
4.          if arr[i] > arr[i+1]
5.             swap(arr[i], arr[i+1])
6.          end if
7.       end for
8.    end for
9.    return arr
10. end BubbleSort

Implementation:
// C++ program for implementation
// of Bubble sort
#include <bits/stdc++.h>
using namespace std;

// A function to implement bubble sort
void bubbleSort(int arr[], int n)
{
    int i, j;
    for (i = 0; i < n - 1; i++) {
        // Last i elements are already in place
        for (j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1])
                swap(arr[j], arr[j + 1]);
    }
}

// Function to print an array
void printArray(int arr[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        cout << arr[i] << " ";
    cout << endl;
}

// Driver code
int main()
{
    int arr[] = { 5, 1, 4, 2, 8 };
    int N = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, N);
    cout << "Sorted array: \n";
    printArray(arr, N);
    return 0;
}

Working of Bubble Sort algorithm:


Let's consider the following array as an example: arr[] = {5, 1, 4, 2, 8}

First Pass:
Bubble sort starts with the very first two elements, comparing them to check which one is
greater.
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two elements, and swaps since 5
> 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm
does not swap them.

Second Pass:
Now, during second iteration it should look like this:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Third Pass:
Now, the array is already sorted, but our algorithm does not know this.
It needs one whole pass without any swap to conclude that the array is sorted (a sketch of this
optimization is given after the passes below).
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
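The implementation above always makes n-1 passes, even when the array becomes sorted early. A small refinement based on the observation above is to track whether a pass performed any swap. The sketch below is illustrative (the name bubbleSortOptimized is not from the notes) and assumes the same headers as the program above.

// Sketch: bubble sort with early exit. If a full pass makes no swap,
// the array is already sorted and the remaining passes are skipped.
void bubbleSortOptimized(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped)   // no swap in this pass: already sorted
            break;
    }
}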

Insertion Sort
Insertion sort is a simple sorting algorithm that works similarly to the way you sort playing cards
in your hands. The array is virtually split into a sorted and an unsorted part. Values from the
unsorted part are picked and placed at the correct position in the sorted part.

Working of Insertion Sort algorithm:


Consider an example: arr[]: {12, 11, 13, 5, 6}

12 11 13 5 6

First Pass:
● Initially, the first two elements of the array are compared in insertion sort.

12 11 13 5 6
● Here, 12 is greater than 11, hence they are not in ascending order and 12 is not at its
correct position. Thus, swap 11 and 12.
● So, for now, 11 is stored in the sorted sub-array.

11 12 13 5 6

Second Pass:
● Now, move to the next two elements and compare them

11 12 13 5 6

● Here, 13 is greater than 12, so both elements are already in ascending order; hence, no
swapping will occur. 12 is also stored in the sorted sub-array along with 11.

Third Pass:
● Now, two elements are present in the sorted sub-array which are 11 and 12
● Moving forward to the next two elements which are 13 and 5

11 12 13 5 6

● Both 5 and 13 are not present at their correct place so swap them

11 12 5 13 6

● After swapping, elements 12 and 5 are not sorted, thus swap again

11 5 12 13 6

● Here, again 11 and 5 are not sorted, hence swap again

5 11 12 13 6

● Here, 5 is now at its correct position.

Fourth Pass:
● Now, the elements which are present in the sorted sub-array are 5, 11 and 12
● Moving to the next two elements 13 and 6

5 11 12 13 6

● Clearly, they are not in order, so swap them.

5 11 12 6 13
● Now, 6 is smaller than 12, hence, swap again

5 11 6 12 13

● Here again, 11 and 6 are not in order, so swap once more.

5 6 11 12 13

● Finally, the array is completely sorted.

Implementation

// C++ program for insertion sort
#include <bits/stdc++.h>
using namespace std;

// Function to sort an array using
// insertion sort
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;

        // Shift elements of the sorted part that are
        // greater than key one position to the right
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

// A utility function to print an array
// of size n
void printArray(int arr[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        cout << arr[i] << " ";
    cout << endl;
}

// Driver code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6 };
    int N = sizeof(arr) / sizeof(arr[0]);

    insertionSort(arr, N);
    printArray(arr, N);

    return 0;
}
Quick sort
Quicksort is a sorting algorithm based on the divide and conquer approach where an array is
divided into subarrays by selecting a pivot element (element selected from the array).
1. While dividing the array, the pivot element should be positioned in such a way that
elements less than the pivot are kept on the left side, and elements greater than the pivot
are kept on the right side of the pivot.
2. The left and right subarrays are also divided using the same approach. This process
continues until each subarray contains a single element.
3. At this point, elements are already sorted. Finally, elements are combined to form a
sorted array.

Working of Quick Sort algorithm:


To know the functioning of Quick sort, let’s consider an array arr[] = {10, 80, 30, 90, 40, 50, 70}
● Indexes: 0 1 2 3 4 5 6
● low = 0, high = 6, pivot = arr[high] = 70
● Initialize index of smaller element, i = -1

Step 1
● Traverse elements from j = low to high-1
● j = 0: Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
● i=0
● arr[] = {10, 80, 30, 90, 40, 50, 70} // No change as i and j are same
● j = 1: Since arr[j] > pivot, do nothing
Step 2
● j = 2 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
● i=1
● arr[] = {10, 30, 80, 90, 40, 50, 70} // We swap 80 and 30

Step 3
● j = 3 : Since arr[j] > pivot, do nothing // No change in i and arr[]
● j = 4 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
● i=2
● arr[] = {10, 30, 40, 90, 80, 50, 70} // 80 and 40 Swapped
Step 4
● j = 5 : Since arr[j] <= pivot, do i++ and swap arr[i] with arr[j]
● i=3
● arr[] = {10, 30, 40, 50, 80, 90, 70} // 90 and 50 Swapped

Step 5
● We come out of the loop after processing j = high-1.
● Finally we place pivot at correct position by swapping arr[i+1] and arr[high] (or pivot)
● arr[] = {10, 30, 40, 50, 70, 90, 80} // 80 and 70 Swapped
Step 6
● Now 70 is at its correct place. All elements smaller than 70 are before it and all elements
greater than 70 are after it.
● Since quick sort is recursive, we call quickSort again on the left and right
partitions.

Step 7
● Again call function at right part and swap 80 and 90
#include <bits/stdc++.h>
using namespace std;

// Partition the array using the last element as the pivot
int partition(int arr[], int low, int high)
{
    // Choose the pivot
    int pivot = arr[high];

    // Index of the smaller element; indicates the
    // right position of the pivot found so far
    int i = (low - 1);

    for (int j = low; j <= high - 1; j++) {
        // If the current element is smaller than the pivot
        if (arr[j] < pivot) {
            // Increment index of smaller element
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return (i + 1);
}

// The QuickSort function
void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        // pi is the partitioning index; arr[pi] is now at its right place
        int pi = partition(arr, low, high);

        // Recursively sort the elements before and after the partition:
        // smaller elements go left of the pivot, larger ones go right
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Driver code
int main()
{
    int arr[] = { 10, 7, 8, 9, 1, 5 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function call
    quickSort(arr, 0, n - 1);

    // Print the sorted array
    cout << "Sorted Array\n";
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}
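The partition above always picks the last element as the pivot, which triggers the O(n²) worst case on already-sorted input. A common variation is to swap a randomly chosen element into the last position before partitioning. The sketch below is illustrative (the name randomizedPartition is not from the notes) and reuses the partition() function above.

// Sketch: randomized pivot selection for quick sort
int randomizedPartition(int arr[], int low, int high)
{
    int r = low + rand() % (high - low + 1);   // random index in [low, high]
    swap(arr[r], arr[high]);                   // move it into the pivot position
    return partition(arr, low, high);          // then partition as before
}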

Selection sort
Selection sort is another sorting technique in which we find the minimum element in every
iteration and place it in the array beginning from the first index. Thus, a selection sort also gets
divided into a sorted and unsorted subarray.

The following is the selection sort algorithm:

● Set the first element of the unsorted segment as the current minimum (or maximum, depending on the sorting order).
● In the unsorted segment, compare the minimum element to the following
element.
● If a smaller (or larger) element is discovered, the index of the minimum (or
maximum) element is updated.
● Continue comparing and updating the minimum (or maximum) element until the
unsorted segment is finished.
● Swap the identified minimum (or maximum) element with the unsorted part’s
initial element.
● Move the sorted part’s boundary one element to the right to expand it.
● Steps 2-6 should be repeated until the full list is sorted.

Working of Selection Sort algorithm:


Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}

First pass:
For the first position in the sorted array, the whole array is traversed from index 0 to 4
sequentially. The first position currently holds 64; after traversing the whole array, it is
clear that 11 is the lowest value.

64 25 12 22 11

Thus, swap 64 with 11. After one iteration, 11, which is the least value in the array,
appears in the first position of the sorted list.

11 25 12 22 64

Second Pass:
For the second position, where 25 is present, again traverse the rest of the array in a sequential
manner.

11 25 12 22 64

After traversing, we found that 12 is the second lowest value in the array and it should appear at
the second place in the array, thus swap these values.

11 12 25 22 64

Third Pass:
Now, for third place, where 25 is present again, traverse the rest of the array and find the third
least value present in the array.

11 12 25 22 64

While traversing, 22 turns out to be the third least value, and it should appear at the third place
in the array; thus, swap 22 with the element present at the third position.
11 12 22 25 64

Fourth pass:
Similarly, for the fourth position, traverse the rest of the array and find the fourth least element
in the array.
As 25 is the fourth lowest value, it is placed at the fourth position.

11 12 22 25 64

Fifth Pass:
At last, the largest value present in the array automatically gets placed at the last position in the
array. The resulting array is the sorted array.

11 12 22 25 64
// C++ program for implementation of
// selection sort
#include <bits/stdc++.h>
using namespace std;

// Function for Selection sort
void selectionSort(int arr[], int n)
{
    int i, j, min_idx;

    // One by one move boundary of
    // unsorted subarray
    for (i = 0; i < n - 1; i++) {

        // Find the minimum element in
        // unsorted array
        min_idx = i;
        for (j = i + 1; j < n; j++) {
            if (arr[j] < arr[min_idx])
                min_idx = j;
        }

        // Swap the found minimum element
        // with the first element
        if (min_idx != i)
            swap(arr[min_idx], arr[i]);
    }
}

// Function to print an array
void printArray(int arr[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        cout << arr[i] << " ";
    cout << endl;
}

// Driver program
int main()
{
    int arr[] = { 64, 25, 12, 22, 11 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function Call
    selectionSort(arr, n);
    cout << "Sorted array: \n";
    printArray(arr, n);
    return 0;
}

Merge Sort
The Merge Sort algorithm is a sorting algorithm based on the Divide and Conquer
paradigm. In this algorithm, the array is first divided into two equal halves, and the halves are
then combined in a sorted manner.

Let’s see how Merge Sort uses Divide and Conquer:


The merge sort algorithm is an implementation of the divide and conquer technique. Thus, it
completes in three steps:
1. Divide: In this step, the array/list divides itself recursively into sub-arrays until the base
case is reached.
2. Conquer: Here, the sub-arrays are sorted using recursion.
3. Combine: This step makes use of the merge( ) function to combine the sub-arrays into the
final sorted array.

Working of Merge Sort algorithm:


To understand how merge sort works, let's consider an array arr[] = {38, 27, 43, 3, 9, 82, 10}.
First, check whether the left index of the array is less than the right index; if yes, calculate its
mid point.
As we already know, merge sort first divides the whole array recursively into halves until
single-element (atomic) sub-arrays are reached.
Here, we see that an array of 7 items is divided into two arrays of size 4 and 3 respectively.

Now, again check whether the left index is less than the right index for both sub-arrays, and if
so, calculate the mid points for both of them.
These sub-arrays are divided into further halves, until the atomic units of the array are
reached and further division is not possible.

After dividing the array into its smallest units, start merging the elements again based on a
comparison of the elements' values.
First, compare the elements of each pair of lists and combine them into another list in sorted
order.
After the final merging, the list is fully sorted: {3, 9, 10, 27, 38, 43, 82}.
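The notes above describe merge sort but do not include a program. Below is a minimal C++ sketch in the same style as the other programs in this unit; the helper names merge() and mergeSort() are illustrative.

// C++ sketch of merge sort (illustrative)
#include <bits/stdc++.h>
using namespace std;

// Merge the two sorted halves arr[l..m] and arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    vector<int> left(arr + l, arr + m + 1);
    vector<int> right(arr + m + 1, arr + r + 1);

    int i = 0, j = 0, k = l;
    while (i < (int)left.size() && j < (int)right.size())
        arr[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    while (i < (int)left.size())
        arr[k++] = left[i++];
    while (j < (int)right.size())
        arr[k++] = right[j++];
}

// Recursively divide, sort each half, then combine
void mergeSort(int arr[], int l, int r)
{
    if (l >= r)
        return;
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    merge(arr, l, m, r);
}

// Driver code
int main()
{
    int arr[] = { 38, 27, 43, 3, 9, 82, 10 };
    int n = sizeof(arr) / sizeof(arr[0]);
    mergeSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";   // 3 9 10 27 38 43 82
    cout << endl;
    return 0;
}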

Radix Sort Algorithm


The key idea behind Radix Sort is to exploit the concept of place value. It assumes that sorting
numbers digit by digit will eventually result in a fully sorted list. Radix Sort can be performed
using different variations, such as Least Significant Digit (LSD) Radix Sort or Most Significant
Digit (MSD) Radix Sort.

How does Radix Sort Algorithm work?


To perform radix sort on the array [170, 45, 75, 90, 802, 24, 2, 66], we follow these steps:

Step 1: Find the largest element in the array, which is 802. It has three digits, so we will iterate
three times, once for each significant place.

Step 2: Sort the elements based on the unit (ones) place digits. We use a stable sorting technique,
such as counting sort, to sort the digits at each significant place.

Sorting based on the unit place:

● Perform counting sort on the array based on the unit place digits.
● The sorted array based on the unit place is [170, 90, 802, 2, 24, 45, 75, 66].
Step 3: Sort the elements based on the tens place digits.

Sorting based on the tens place:

● Perform counting sort on the array based on the tens place digits.
● The sorted array based on the tens place is [802, 2, 24, 45, 66, 170, 75, 90].

Step 4: Sort the elements based on the hundreds place digits.

Sorting based on the hundreds place:

● Perform counting sort on the array based on the hundreds place digits.
● The sorted array based on the hundreds place is [2, 24, 45, 66, 75, 90, 170, 802].
Step 5: The array is now sorted in ascending order.

The final sorted array using radix sort is [2, 24, 45, 66, 75, 90, 170, 802].
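A minimal C++ sketch of the procedure above for non-negative integers, using counting sort as the stable per-digit pass (the function names are illustrative).

// C++ sketch of LSD radix sort (illustrative)
#include <bits/stdc++.h>
using namespace std;

// Stable counting sort of arr by the digit at the given place value (1, 10, 100, ...)
void countingSortByDigit(vector<int>& arr, int place)
{
    vector<int> output(arr.size());
    int count[10] = { 0 };

    for (int x : arr)                        // count occurrences of each digit
        count[(x / place) % 10]++;
    for (int d = 1; d < 10; d++)             // prefix sums give end positions
        count[d] += count[d - 1];
    for (int i = (int)arr.size() - 1; i >= 0; i--) {   // backward pass keeps it stable
        int d = (arr[i] / place) % 10;
        output[--count[d]] = arr[i];
    }
    arr = output;
}

void radixSort(vector<int>& arr)
{
    int maxVal = *max_element(arr.begin(), arr.end());
    for (int place = 1; maxVal / place > 0; place *= 10)
        countingSortByDigit(arr, place);
}

int main()
{
    vector<int> arr = { 170, 45, 75, 90, 802, 24, 2, 66 };
    radixSort(arr);
    for (int x : arr)
        cout << x << " ";   // 2 24 45 66 75 90 170 802
    cout << endl;
    return 0;
}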

Heap sort
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is
similar to the selection sort where we first find the minimum element and place the minimum
element at the beginning. Repeat the same process for the remaining elements.
Working of Heap Sort algorithm:
To understand heap sort more clearly, let’s take an unsorted array and try to sort it using heap
sort. Consider the array: arr[] = {4, 10, 3, 5, 1}.
Build Complete Binary Tree: Build a complete binary tree from the array.



Transform into max heap: After that, the task is to construct a tree from that unsorted array and
try to convert it into max heap.
● To transform a heap into a max-heap, the parent node should always be greater than or
equal to the child nodes
● Here, in this example, as the parent node 4 is smaller than the child node 10, thus,
swap them to build a max-heap.
● Transform it into a max heap

● Now, as seen, 4 as a parent is smaller than the child 5, thus swap both of these again and
the resulted heap and array should be like this:
Make the tree a max heap
Perform heap sort: Remove the maximum element in each step (i.e., move it to the end position
and remove that) and then consider the remaining elements and transform it into a max heap.
● Delete the root element (10) from the max heap. In order to delete this node, try to swap
it with the last node, i.e. (1). After removing the root element, again heapify it to convert
it into max heap.
● Resulted heap and array should look like this:
Remove 10 and perform heapify
● Repeat the above steps and it will look like the following:
Remove 5 and perform heapify
● Now remove the root (i.e. 4) again and perform heapify.

Remove 4 and perform heapify


● Now, when the root is removed once more, the array is sorted:
arr[] = {1, 3, 4, 5, 10}.
The sorted array
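A minimal C++ sketch of heap sort following the steps above, using a max heap (the names heapify() and heapSort() are illustrative).

// C++ sketch of heap sort using a max heap (illustrative)
#include <bits/stdc++.h>
using namespace std;

// Sift the element at index i down so the subtree rooted at i is a max heap;
// n is the size of the heap currently stored in arr[0..n-1]
void heapify(int arr[], int n, int i)
{
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;

    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);   // continue sifting down
    }
}

void heapSort(int arr[], int n)
{
    // Build the max heap from the unsorted array
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);

    // Repeatedly move the root (maximum) to the end and re-heapify the rest
    for (int i = n - 1; i > 0; i--) {
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

int main()
{
    int arr[] = { 4, 10, 3, 5, 1 };
    int n = sizeof(arr) / sizeof(arr[0]);
    heapSort(arr, n);
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";   // 1 3 4 5 10
    cout << endl;
    return 0;
}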

Searching

Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored.

Linear Search is defined as a sequential search algorithm that starts at one end and goes through
each element of a list until the desired element is found, otherwise the search continues till the
end of the data set.

Linear Search Algorithm

How Does Linear Search Algorithm Work?


In Linear Search Algorithm,
● Every element is considered as a potential match for the key and checked for the same.
● If any element is found equal to the key, the search is successful and the index of that
element is returned.
● If no element is found equal to the key, the search yields “No match found”.

For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30
Step 1: Start from the first element (index 0) and compare key with each element (arr[i]).
● Comparing key with the first element arr[0]. Since they are not equal, the iterator moves to
the next element as a potential match.

Compare key with arr[0]


● Comparing key with the next element arr[1]. Since they are not equal, the iterator moves to
the next element as a potential match.

Compare key with arr[1]


Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is found
(here 2).

Compare key with arr[2]

Implementation of Linear Search Algorithm:



// C++ code to linearly search x in arr[].

#include <bits/stdc++.h>
using namespace std;

int search(int arr[], int N, int x)
{
for (int i = 0; i < N; i++)
if (arr[i] == x)
return i;
return -1;
}

// Driver code
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int N = sizeof(arr) / sizeof(arr[0]);

// Function call
int result = search(arr, N, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}

Output
Element is present at index 3

Complexity Analysis of Linear Search:


Time Complexity:
● Best Case: In the best case, the key might be present at the first index. So the best case
complexity is O(1)
● Worst Case: In the worst case, the key might be present at the last index, i.e., at the
opposite end from where the search started. So the worst-case complexity is
O(N), where N is the size of the list.
● Average Case: O(N)
Auxiliary Space: O(1) as except for the variable to iterate through the list, no other variable is
used.
Advantages of Linear Search:
● Linear search can be used irrespective of whether the array is sorted or not. It can be used
on arrays of any data type.
● Does not require any additional memory.
● It is a well-suited algorithm for small datasets.

Drawbacks of Linear Search:


● Linear search has a time complexity of O(N), which in turn makes it slow for large
datasets.
● Not suitable for large arrays.

Binary Search

Binary Search is defined as a searching algorithm used in a sorted array by repeatedly dividing the search
interval in half. The idea of binary search is to use the information that the array is sorted and reduce the
time complexity to O(log N).

Example of Binary Search Algorithm

Conditions for when to apply Binary Search in a Data Structure:


To apply Binary Search algorithm:
● The data structure must be sorted.
● Access to any element of the data structure takes constant time.
Binary Search Algorithm:
In this algorithm,
● Divide the search space into two halves by finding the middle index “mid”.

Finding the middle index “mid” in Binary Search Algorithm


● Compare the middle element of the search space with the key.
● If the key is found at middle element, the process is terminated.
● If the key is not found at middle element, choose which half will be used as the next search space.
● If the key is smaller than the middle element, then the left side is used for next search.
● If the key is larger than the middle element, then the right side is used for next search.
● This process is continued until the key is found or the total search space is exhausted.

How does Binary Search work?


To understand the working of binary search, consider the following illustration:
Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.
First Step: Calculate the mid and compare the mid element with the key. If the key is less than mid
element, move to left and if it is greater than the mid then move search space to the right.
● Key (i.e., 23) is greater than current mid element (i.e., 16). The search space moves to the right.

Binary Search Algorithm : Compare key with 16


● Key is less than the current mid 56. The search space moves to the left.
Binary Search Algorithm : Compare key with 56
Second Step: If the key matches the value of the mid element, the element is found and stop search.

How to Implement Binary Search?


The Binary Search Algorithm can be implemented in the following two ways:
● Iterative Binary Search Algorithm (a sketch of this version is given after the recursive implementation below)
● Recursive Binary Search Algorithm
Implementation of Recursive Binary Search Algorithm:

// C++ program to implement recursive Binary Search
#include <bits/stdc++.h>
using namespace std;

// A recursive binary search function. It returns the
// location of x in the given array arr[l..r] if present,
// otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;

        // If the element is present at the middle itself
        if (arr[mid] == x)
            return mid;

        // If the element is smaller than mid, then
        // it can only be present in the left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present
        // in the right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // We reach here when the element is not
    // present in the array
    return -1;
}

// Driver code
int main()
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}
Output
Element is present at index 3
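For the iterative variant listed above, here is a minimal sketch with the same parameters and return value as the recursive function (the name binarySearchIterative is illustrative).

// Iterative binary search sketch: same arguments and return value
// as the recursive version above
int binarySearchIterative(int arr[], int l, int r, int x)
{
    while (l <= r) {
        int mid = l + (r - l) / 2;   // avoids overflow of (l + r) / 2
        if (arr[mid] == x)
            return mid;              // key found
        if (arr[mid] < x)
            l = mid + 1;             // search the right half
        else
            r = mid - 1;             // search the left half
    }
    return -1;                       // key not present
}

Calling binarySearchIterative(arr, 0, n - 1, x) on the same driver array would print the same result, index 3.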

Complexity Analysis of Binary Search:


● Time Complexity:
● Best Case: O(1)
● Average Case: O(log N)
● Worst Case: O(log N)
● Auxiliary Space: O(1) for the iterative version; if the recursive call stack is considered,
the auxiliary space is O(log N).

Advantages of Binary Search:


● Binary search is faster than linear search, especially for large arrays.
● More efficient than other searching algorithms with a similar time complexity, such as
interpolation search or exponential search.
● Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.

Drawbacks of Binary Search:


● The array should be sorted.
● Binary search requires that the data structure being searched be stored in contiguous
memory locations.
● Binary search requires that the elements of the array be comparable, meaning that they
must be able to be ordered.

Applications of Binary Search:


● Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the optimal
hyperparameters for a model.
● It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
● It can be used for searching a database.



COMPARISON OF DIFFERENT SEARCH

Linear Search
Linear search is the simplest search algorithm. It works by iterating through a list of items,
one by one, until it finds the item it is searching for. If the item is not found, the algorithm
returns -1.

Binary Search

Binary search is a more efficient search algorithm than linear search, but it requires the list to
be sorted. It works by comparing the target with the middle element and then searching only the
half that can contain the target. This process is repeated until the item is found or the search
space is empty.

Comparison

Linear search is the simplest search algorithm, but it is also the least efficient. Binary search
is more efficient than linear search, but it is also more complex.

Here is a table that summarizes the key differences between linear search and binary search:

Feature Linear Search Binary Search

Time Complexity O(n) O(log n)

Space Complexity O(1) O(1)

Best Case O(1) O(1)

Worst Case O(n) O(log n)

Conclusion

Use binary search when the data is sorted (or can be sorted cheaply) and lookups are frequent;
use linear search for small or unsorted data sets, where its simplicity outweighs its O(n) cost.

Hashing
Hashing refers to the process of generating a fixed-size output from an input of variable size
using the mathematical formulas known as hash functions. This technique determines an index or
location for the storage of an item in a data structure.
Need for Hash data structure
The amount of data on the internet is growing exponentially every day, making it difficult to
store it all effectively. In day-to-day programming, this amount of data might not be that big, but
still, it needs to be stored, accessed, and processed easily and efficiently. A very common data
structure that is used for such a purpose is the Array data structure.
Now the question arises: if arrays already exist, why do we need a new data structure? The
answer lies in the word "efficiency". Although accessing an array element by index takes O(1)
time, searching for a value takes O(n) time in an unsorted array (or O(log n) with binary search
in a sorted one). For a large data set this cost adds up, which makes a plain array inefficient for
frequent lookups.
So now we are looking for a data structure that can store the data and search in it in constant
time, i.e. in O(1) time. This is how Hashing data structure came into play. With the introduction
of the Hash data structure, it is now possible to easily store data in constant time and retrieve it in
constant time as well.
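In C++, this idea is available directly as std::unordered_map, a hash-table-backed container; the small sketch below shows average O(1) insertion and lookup (the keys and values here are just example data).

// Sketch: std::unordered_map is a hash-table-backed key-value container
#include <bits/stdc++.h>
using namespace std;

int main()
{
    unordered_map<string, int> marks;   // key -> value
    marks["Alice"] = 91;                // insert in O(1) on average
    marks["Bob"] = 78;

    if (marks.find("Alice") != marks.end())         // lookup in O(1) on average
        cout << "Alice: " << marks["Alice"] << endl;
    return 0;
}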
Components of Hashing
There are majorly three components of hashing:

1. Key: A key can be anything, a string or an integer, which is fed as input to the hash
function, the technique that determines an index or location for storage of an item in a
data structure.
2. Hash Function: The hash function receives the input key and returns the index of an
element in an array called a hash table. The index is known as the hash index.
3. Hash Table: A hash table is a data structure that maps keys to values using a special
function called a hash function. It stores the data in an associative manner in an
array where each data value has its own unique index.

Components of Hashing

What is Collision?
The hashing process generates a small number for a big key, so there is a possibility that two
keys produce the same index. The situation where a newly inserted key maps to an already
occupied slot of the hash table is called a collision, and it must be handled using some
collision-handling technique, such as separate chaining (a minimal sketch follows below).
Collision in Hashing
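One standard collision-handling technique is separate chaining: each slot of the hash table keeps a list of all keys that hash to that index. The sketch below uses a simple modulo hash function; the class and method names are illustrative.

// Sketch of a hash table with separate chaining (illustrative).
// Keys that collide on the same index are kept in a list at that slot.
#include <bits/stdc++.h>
using namespace std;

class ChainedHashTable {
    int buckets;                 // number of slots
    vector<list<int>> table;     // one chain per slot

    int hashFunction(int key) const { return key % buckets; }   // simple modulo hash

public:
    ChainedHashTable(int b) : buckets(b), table(b) {}

    void insertKey(int key) { table[hashFunction(key)].push_back(key); }

    bool searchKey(int key) const {
        const list<int>& chain = table[hashFunction(key)];
        return find(chain.begin(), chain.end(), key) != chain.end();
    }
};

int main()
{
    ChainedHashTable h(7);
    h.insertKey(15);             // 15 % 7 == 1
    h.insertKey(22);             // 22 % 7 == 1 -> collides with 15, chained in slot 1
    cout << boolalpha << h.searchKey(22) << endl;   // true
    return 0;
}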

Advantages of Hashing in Data Structures


● Key-value support: Hashing is ideal for implementing key-value data structures.
● Fast data retrieval: Hashing allows for quick access to elements with constant-time
complexity.
● Efficiency: Insertion, deletion, and searching operations are highly efficient.
● Memory usage reduction: Hashing requires less memory as it allocates a fixed space
for storing elements.
● Scalability: Hashing performs well with large data sets, maintaining constant access
time.
● Security and encryption: Hashing is essential for secure data storage and integrity
verification.
