
Unit-4

What is searching?
Searching is the process of finding a particular item in a collection of items. A search typically
answers whether the item is present in the collection or not. Searching requires a key field,
such as a name, ID, or code, which is related to the target item. When the key field of a target
item is found, a pointer to the target item is returned. The pointer may be an address, an index
into a vector or array, or some other indication of where to find the target. If a matching key
field is not found, the user is informed.

The most common searching algorithms are:

 Linear search
 Binary search

Linear Search Algorithm

Linear Search Algorithm is the simplest search algorithm. In this search algorithm, a sequential
pass is made over all the items one by one, looking for the targeted item. Each item is
checked in sequence until a match is found. If a match is found, that item is returned;
otherwise the search continues till the end.

Algorithm

LinearSearch ( Array A, Value x)

Step 1: Set i to 1

Step 2: if i > n then go to step 7

Step 3: if A[i] = x then go to step 6

Step 4: Set i to i + 1

Step 5: Go to Step 2

Step 6: Print Element x Found at index i and go to step 8

Step 7: Print element not found

Step 8: Exit
Linear Search Example

Let us take an example of an array A[7]={5,2,1,6,3,7,8}. Array A has 7 items. Let us assume
we are looking for 7 in the array. Targeted item=7.

Here, we have

A[7]={5,2,1,6,3,7,8}

X=7

At first, when i=0 (A[0]=5; X=7) not matched

i++ now, i=1 (A[1]=2; X=7) not matched

i++ now, i=2 (A[2]=1; X=7) not matched

….

i++ now, i=5 (A[5]=7; X=7) match found

Hence, element X=7 is found at index 5.

Linear search is rarely used in practice for large collections, because the time complexity of the above algorithm is O(n).
1.1. A non-recursive program for Linear Search:

#include<stdio.h>
#include<conio.h>

main()
{
    int number[25], n, data, i, flag = 0;
    clrscr();
    printf("\nEnter the number of elements:");
    scanf("%d", &n);
    printf("\nEnter the elements:");
    for (i = 0; i < n; i++)
        scanf("%d", &number[i]);
    printf("\nEnter the element to be searched:");
    scanf("%d", &data);
    for (i = 0; i < n; i++)
    {
        if (number[i] == data)
        {
            flag = 1;
            break;
        }
    }
    if (flag == 1)
        printf("\nData found at location: %d", i + 1);
    else
        printf("\nData not found");
}

Output:
Enter the number of elements:5
Enter the elements:3
5
6
8
2
Enter the element to be searched: 6
Data found at location: 3

A recursive program for linear search:

#include<stdio.h>
#include<conio.h>

void linear_search(int a[], int data, int position, int n)
{
    if (position < n)
    {
        if (a[position] == data)
            printf("\nData found at %d", position);
        else
            linear_search(a, data, position + 1, n);
    }
    else
        printf("\nData not found");
}

void main()
{
    int a[25], i, n, data;
    clrscr();
    printf("\nEnter the number of elements:");
    scanf("%d", &n);
    printf("\nEnter the elements:");
    for (i = 0; i < n; i++)
    {
        scanf("%d", &a[i]);
    }
    printf("\nEnter the element to be searched:");
    scanf("%d", &data);
    linear_search(a, data, 0, n);
    getch();
}
Output:
Enter the number of elements:4
Enter the elements:3
4
6
8
Enter the element to be searched: 6
Data found at 2

Advantages of Linear Search:


 Linear search can be used irrespective of whether the array is sorted or not. It can be used on arrays of any data type.
 Does not require any additional memory.
 It is a well-suited algorithm for small datasets.

Drawbacks of Linear Search:


 Linear search has a time complexity of O(N), which in turn makes it slow for large datasets.
 Not suitable for large arrays.

When to use Linear Search?


 When we are dealing with a small dataset.
 When you are searching for a dataset stored in contiguous memory.
Binary Search Algorithm

The Binary Search Algorithm is fast in terms of run-time complexity. It works on the divide-and-conquer
principle. The data collection must first be sorted in ascending order; we then search for the targeted item by
comparing it with the middle item of the collection. If a match is found, the index of the item is returned. If the middle
item is greater than the targeted item, the search continues in the sub-array to the left of the middle item; otherwise,
it continues in the sub-array to the right of the middle item. This process repeats on the sub-array until a match is
found or the size of the sub-array reduces to zero.

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element
in some list using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is divided into two halves, and
the item is compared with the middle element of the list. If the match is found then, the location of the
middle element is returned. Otherwise, we search into either of the halves depending upon the result
produced through the match.

NOTE: Binary search can be implemented on sorted array elements. If the list elements are not arranged in
a sorted manner, we have first to sort them.

Now, let's see the algorithm of Binary Search.

Algorithm
Binary_Search(a, lower_bound, upper_bound, item) // 'a' is the given array, 'lower_bound' is the index of the first array el
ement, 'upper_bound' is the index of the last array element, 'item' is the value to search
Step 1: set beg = lower_bound, end = upper_bound, mid = int((beg + end)/2)
Step 2: repeat steps 3 and 4 while beg <= end and a[mid] != item
Step 3: if item < a[mid] then
set end = mid - 1
else
set beg = mid + 1
[end of if]
Step 4: set mid = int((beg + end)/2)
[end of step 2 loop]
Step 5: if a[mid] = item then
set loc = mid
else
set loc = NULL
[end of if]
Step 6: exit

Working of Binary search


Now, let's see the working of the Binary Search Algorithm.
To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to
understand the working of Binary search with an example.

There are two methods to implement the binary search algorithm -

o Iterative method

o Recursive method

The recursive method of binary search follows the divide and conquer approach.

Let the elements of the array be a sorted array of 9 elements, indexed 0 to 8.

Let the element to search be K = 56.

We have to use the below formula to calculate the mid of the array -

1. mid = (beg + end)/2  

So, in the given array -

beg = 0

end = 8

mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.


Now, the element to search is found. So algorithm will return the index of the element matched.

Binary Search complexity


Now, let's see the time complexity of Binary search in the best case, average case, and worst case. We
will also see the space complexity of Binary search.

1. Time Complexity

Case Time Complexity

Best Case O(1)

Average Case O(log n)

Worst Case O(log n)

o Best Case Complexity - In Binary search, the best case occurs when the element to search is found in the first
comparison, i.e., when the middle element itself is the element to be searched. The best-case time
complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is O(log n).

o Worst Case Complexity - In Binary search, the worst case occurs when we have to keep reducing the search
space till it has only one element. The worst-case time complexity of Binary search is O(log n).
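The O(log n) bound can be seen from a one-line derivation (a sketch, not part of the original text): each comparison halves the remaining search space, so after k comparisons roughly n/2^k candidates remain, and the search must stop once only one remains:

```latex
\frac{n}{2^k} = 1 \quad\Longrightarrow\quad k = \log_2 n
```

Hence the worst case performs about log2 n comparisons, i.e. O(log n).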

2. Space Complexity

Space Complexity O(1)

o The space complexity of iterative binary search is O(1); a recursive implementation uses O(log n) stack space.

2.1. A non-recursive program for binary search:

#include<stdio.h>
#include<conio.h>

void main()
{
    int number[25], n, data, i, flag = 0, low, high, mid;

    clrscr();
    printf("\nEnter the number of elements:");
    scanf("%d", &n);
    printf("\nEnter the elements in ascending order:");
    for (i = 0; i < n; i++)
        scanf("%d", &number[i]);
    printf("\nEnter the element to be searched:");
    scanf("%d", &data);
    low = 0; high = n - 1;
    while (low <= high)
    {
        mid = (low + high) / 2;
        if (number[mid] == data)
        {
            flag = 1;
            break;
        }
        else
        {
            if (data < number[mid])
                high = mid - 1;
            else
                low = mid + 1;
        }
    }
    if (flag == 1)
        printf("\nData found at location: %d", mid + 1);
    else
        printf("\nData not found");
}

A recursive program for binary search:

#include<stdio.h>
#include<conio.h>

void bin_search(int a[], int data, int low, int high)
{
    int mid;
    if (low <= high)
    {
        mid = (low + high) / 2;
        if (a[mid] == data)
            printf("\nElement found at location: %d", mid + 1);
        else
        {
            if (data < a[mid])
                bin_search(a, data, low, mid - 1);
            else
                bin_search(a, data, mid + 1, high);
        }
    }
    else
        printf("\nElement not found");
}

void main()
{
    int a[25], i, n, data;
    clrscr();
    printf("\nEnter the number of elements:");
    scanf("%d", &n);
    printf("\nEnter the elements in ascending order:");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("\nEnter the element to be searched:");
    scanf("%d", &data);
    bin_search(a, data, 0, n - 1);
    getch();
}

Output:
Enter the number of elements:4
Enter the elements in ascending order:3
4
6
8
Enter the element to be searched: 6
Element found at location: 3
3. Bubble Sort:

Bubble sort is easy to understand and program. The basic idea of bubble sort is to
pass through the file sequentially several times. In each pass, we compare each element
in the file with its successor, i.e., X[i] with X[i+1], and interchange the two elements when they
are not in the proper order. We will illustrate this sorting technique with a specific
example. Bubble sort is also called exchange sort.

Example:

Consider the array X[6] which is stored in memory as shown below:

X[0] X[1] X[2] X[3] X[4] X[5]

33   44   22   11   66   55

Suppose we want our array to be sorted in ascending order. Then we pass through the
array 5 times as described below:

Pass 1: We compare X[i] and X[i+1] for i = 0, 1, 2, 3 and 4, and interchange X[i] and X[i+1]
if X[i] > X[i+1]. The process is shown below:

33 44 22 11 66 55
33 22 44 11 66 55   (44 and 22 interchanged)
33 22 11 44 66 55   (44 and 11 interchanged)
33 22 11 44 55 66   (66 and 55 interchanged)

The biggest number 66 is moved (bubbled up) to the rightmost position in the array.

Pass 2: We compare X[i] and X[i+1] for i = 0, 1, 2 and 3, and interchange X[i] and X[i+1]
if X[i] > X[i+1]. The process is shown below:

33 22 11 44 55 66
22 33 11 44 55 66   (33 and 22 interchanged)
22 11 33 44 55 66   (33 and 11 interchanged)

The second biggest number 55 is now guaranteed to be in X[4].

Pass 3: We repeat the same process, but this time we leave out X[4] and X[5]:

22 11 33 44 55 66
11 22 33 44 55 66   (22 and 11 interchanged)

Pass 4: We repeat the process leaving out X[3], X[4] and X[5]. The elements are already
in order, so no interchange is made.

Pass 5: We repeat the process leaving out X[2], X[3], X[4] and X[5]. Again, no interchange
is needed. At this time, we have the smallest number 11 in X[0]. Thus, we see that we can
sort the array of size 6 in 5 passes.

For an array of size n, we require at most (n-1) passes.


3.1. Program for Bubble Sort:

#include<stdio.h>
#include<conio.h>
void bubblesort(int x[], int n)
{
    int i, j, temp;
    for (i = 0; i < n; i++)
    {
        for (j = 0; j < n - i - 1; j++)
        {
            if (x[j] > x[j+1])
            {
                temp = x[j];
                x[j] = x[j+1];
                x[j+1] = temp;
            }
        }
    }
}

main()
{
    int i, n, x[25];
    clrscr();
    printf("\nEnter the number of elements:");
    scanf("%d", &n);
    printf("\nEnter data:");
    for (i = 0; i < n; i++)
        scanf("%d", &x[i]);
    bubblesort(x, n);
    printf("\nArray elements after sorting:");
    for (i = 0; i < n; i++)
        printf("%5d", x[i]);
}

Algorithm
This algorithm sorts the array arr with n elements.
Step 1: [initialization] Set i = 0
Step 2: Repeat steps 3 to 5 while i < n
Step 3: Set j = 0
Step 4: Repeat step 5 while j < n-i-1
Step 5: If arr[j] > arr[j+1], then
          Set temp = arr[j]
          Set arr[j] = arr[j+1]
          Set arr[j+1] = temp
        [end of if]
        Set j = j+1
        [end of step 4 loop]
        Set i = i+1
        [end of step 2 loop]
Step 6: Exit
   
Bubble sort complexity
Now, let's see the time complexity of bubble sort in the best case, average case, and worst case. We
will also see the space complexity of bubble sort.

1. Time Complexity

Case Time Complexity

Best Case O(n)

Average Case O(n^2)

Worst Case O(n^2)

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already
sorted. The best-case time complexity of bubble sort is O(n) (for the optimized version that
stops when a pass makes no swap).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average case time complexity of bubble
sort is O(n^2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order, but
its elements are in descending order. The worst-case time complexity of bubble sort is O(n^2).

2. Space Complexity

Space Complexity O(1)

Stable YES

o The space complexity of bubble sort is O(1), because only a single extra variable is
required for swapping.
o The space complexity of optimized bubble sort is also O(1); it merely uses one more extra
variable (a flag that records whether any swap occurred in a pass).

Now, let's discuss the optimized bubble sort algorithm.


Quick sort

QuickSort is a sorting algorithm based on the Divide and Conquer algorithm  that picks an
element as a pivot and partitions the given array around the picked pivot by placing the pivot
in its correct position in the sorted array.

How does QuickSort work?


The key process in quickSort is the partition() function. The goal of partition() is to place the pivot (any
element can be chosen as the pivot) at its correct position in the sorted array, with all
smaller elements to the left of the pivot and all greater elements to the right.
Partitioning is then applied recursively to each side, which finally sorts the array.

Choice of Pivot:
There are many different choices for picking pivots. 
 Always pick the first element as a pivot .
 Always pick the last element as a pivot (implemented below)
 Pick a random element as a pivot .
 Pick the middle as the pivot.

Algorithm of quick sort


Quick_sort (a, l, h)
where
a represents the list of elements,
l represents the position of the first element in the list (only at the starting point; its value
changes during the execution of the function),
h represents the position of the last element in the list (only at the starting point; its value
changes during the execution of the function).
step 1: [initially]
low = l
high = h
key = a[(l+h)/2] [middle element of the list]
step 2: Repeat through step 7 while (low <= high)
step 3: Repeat step 4 while (a[low] < key)
step 4: low = low + 1
step 5: Repeat step 6 while (a[high] > key)
step 6: high = high - 1
step 7: if (low <= high)
i) temp = a[low]
ii) a[low] = a[high]
iii) a[high] = temp
iv) low = low + 1
v) high = high - 1
step 8: if (l < high) Quick_sort(a, l, high)
step 9: if (low < h) Quick_sort(a, low, h)
step 10: exit

EXAMPLE :Consider: arr[] = {10, 80, 30, 90, 40, 50, 70}
 Indexes:  0   1   2   3   4   5   6 
 low = 0, high =  6, pivot = arr[h] = 70
 Initialize index of smaller element, i = -1

Compare pivot with first element

 Traverse elements from j = low to high-1


 j = 0: Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
 i = 0 
 arr[] = {10, 80, 30, 90, 40, 50, 70} // No change as i and j are same
 j = 1: Since arr[j] > pivot, do nothing
Compare pivot with arr[1]

 j = 2 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])


 i=1
 arr[] = {10, 30, 80, 90, 40, 50, 70} // We swap 80 and 30 

Compare pivot with arr[2]

 j = 3 : Since arr[j] > pivot, do nothing // No change in i and arr[]


 j = 4 : Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
 i=2
 arr[] = {10, 30, 40, 90, 80, 50, 70} // 80 and 40 Swapped

Compare pivot with arr[4]

 j = 5 : Since arr[j] <= pivot, do i++ and swap arr[i] with arr[j] 
 i = 3 
 arr[] = {10, 30, 40, 50, 80, 90, 70} // 90 and 50 Swapped 
Compare pivot with arr[5]

 We come out of the loop after j has covered low to high-1.


 Finally we place pivot at correct position by swapping arr[i+1] and arr[high] (or
pivot) 
 arr[] = {10, 30, 40, 50, 70, 90, 80} // 80 and 70 Swapped 

Swap arr[i+1] with pivot

 Now 70 is at its correct place. All elements smaller than 70 are before it and all elements
greater than 70 are after it.
 Since quick sort is a recursive function, we call the partition function again at left and right
partitions

Recursively sort the left side of pivot

 Again call function at right part and swap 80 and 90


Recursively sort the right side of pivot

Program to implement QuickSort:


Follow the below steps to implement the algorithm:
 Create a recursive function (say quicksort()) to implement the quicksort.
 Partition the range to be sorted (initially the range is from 0 to N-1) and return the correct
position of the pivot (say pi).
 Select the rightmost value of the range to be the pivot.
 Iterate from the left and compare the element with the pivot and perform the
partition as shown above.
 Return the correct position of the pivot.
 Recursively call the quicksort for the left and the right part of the pi.
Below is the implementation of the Quicksort:
// C code to implement quicksort
 
#include <stdio.h>
 
// Function to swap two elements
void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}
 // Partition the array using the last element as the pivot
int partition(int arr[], int low, int high)
{
    // Choosing the pivot
    int pivot = arr[high];
     
    // Index of smaller element and indicates
    // the right position of pivot found so far
    int i = (low - 1);
 
    for (int j = low; j <= high - 1; j++) {
         
        // If current element is smaller than the pivot
        if (arr[j] < pivot) {
             
            // Increment index of smaller element
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}
 
// The main function that implements QuickSort
  
// arr[] --> Array to be sorted,
// low --> Starting index,
// high --> Ending index
void quickSort(int arr[], int low, int high)
{
    if (low < high) {
         
        // pi is partitioning index, arr[p]
        // is now at right place
        int pi = partition(arr, low, high);
         
        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
 
// Driver code
int main()
{
    int arr[] = { 10, 7, 8, 9, 1, 5 };
    int N = sizeof(arr) / sizeof(arr[0]);
   
    // Function call
    quickSort(arr, 0, N - 1);
    printf("Sorted array: \n");
    for (int i = 0; i < N; i++)
        printf("%d ", arr[i]);
    return 0;
}

Output
Sorted array:
1 5 7 8 9 10
Insertion sort
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing
cards in your hands. The array is virtually split into a sorted and an unsorted part. Values
from the unsorted part are picked and placed at the correct position in the sorted part.

Characteristics of Insertion Sort:


 This algorithm is one of the simplest algorithm with simple implementation
 Basically, Insertion sort is efficient for small data values
 Insertion sort is adaptive in nature, i.e. it is appropriate for data sets which are already
partially sorted.

Working of Insertion Sort algorithm:


Consider an example: arr[]: {12, 11, 13, 5, 6}
   12       11       13      5      6   

First Pass:
 Initially, the first two elements of the array are compared in insertion sort.

   12    11    13    5    6

 Here, 12 is greater than 11 hence they are not in the ascending order and 12 is not at its
correct position. Thus, swap 11 and 12.
 So, for now 11 is stored in a sorted sub-array.
   11    12    13    5    6

Second Pass:
  Now, move to the next two elements and compare them

   11    12    13    5    6

 Here, 13 is greater than 12, thus both elements seems to be in ascending order, hence,
no swapping will occur. 12 also stored in a sorted sub-array along with 11

Third Pass:
 Now, two elements are present in the sorted sub-array which are 11 and 12
 Moving forward to the next two elements which are 13 and 5

   11    12    13    5    6

 Both 5 and 13 are not present at their correct place so swap them
   11    12    5    13    6

 After swapping, elements 12 and 5 are not sorted, thus swap again
   11    5    12    13    6

 Here, again 11 and 5 are not sorted, hence swap again


   5    11    12    13    6
 Here, 5 is at its correct position

Fourth Pass:
 Now, the elements which are present in the sorted sub-array are 5, 11 and 12
 Moving to the next two elements 13 and 6

   5    11    12    13    6

 Clearly, they are not sorted, thus perform swap between both
   5    11    12    6    13

 Now, 6 is smaller than 12, hence, swap again

   5    11    6    12    13

 Here, also swapping makes 11 and 6 unsorted hence, swap again

   5    6    11    12    13

 Finally, the array is completely sorted.

Program for insertion sort
#include <stdio.h>  
  
void insert(int a[], int n) /* function to sort an array with insertion sort */  
{  
    int i, j, temp;  
    for (i = 1; i < n; i++) {  
        temp = a[i];  
        j = i - 1;  
  
        while(j>=0 && temp <= a[j])  /* Move the elements greater than temp to one 
position ahead from their current position*/  
        {    
            a[j+1] = a[j];     
            j = j-1;    
        }    
        a[j+1] = temp;    
    }  
}  
  
void printArr(int a[], int n) /* function to print the array */  
{  
    int i;  
    for (i = 0; i < n; i++)  
        printf("%d ", a[i]);  
}  
  
int main()  
{  
    int a[] = { 12, 31, 25, 8, 32, 17 };  
    int n = sizeof(a) / sizeof(a[0]);  
    printf("Before sorting array elements are - \n");  
    printArr(a, n);  
    insert(a, n);  
    printf("\nAfter sorting array elements are - \n");    
    printArr(a, n);  
  
    return 0;  
}    
Algorithm

Insertion_sort(A[maxsize])

Let A be an array of n elements which we want to sort, temp be a temporary variable to
interchange two values, k be the total number of passes, and j be another control variable.
Step 1: Set k = 1
Step 2: For k = 1 to n-1
Set temp = a[k]
Set j = k-1
While temp < a[j] and (j >= 0), perform the following steps:
Set a[j+1] = a[j]
Set j = j-1
[end of loop structure]
Assign the value of temp to a[j+1]
[end of for loop structure]
Step 3: Exit

Time Complexity of Insertion Sort


 The worst case time complexity of Insertion sort is O(N^2)
 The average case time complexity of Insertion sort is O(N^2)
 The time complexity of the best case is O(N).

Space Complexity of Insertion Sort


The auxiliary space complexity of the iterative insertion sort shown above is O(1);
a recursive implementation would use O(n) due to the recursion stack.

Table for all algorithms' complexity:

Algorithm        Best Case     Average Case    Worst Case    Space
Linear search    O(1)          O(n)            O(n)          O(1)
Binary search    O(1)          O(log n)        O(log n)      O(1)
Bubble sort      O(n)          O(n^2)          O(n^2)        O(1)
Quick sort       O(n log n)    O(n log n)      O(n^2)        O(log n) (avg.)
Insertion sort   O(n)          O(n^2)          O(n^2)        O(1)
Order of Complexity

Order of Complexity is a term used in computer science to measure the efficiency of
an algorithm or a program. It refers to the amount of time and resources required to
solve a problem or perform a task. In programming, the Order of Complexity is
usually expressed in terms of Big O notation, which gives an upper bound on the
time or space requirements of an algorithm. In this article, we will discuss the Order
of Complexity in the C programming language and its significance.

Order of Complexity in C Programming Language:


In C programming, the Order of Complexity of an algorithm depends on the number
of operations performed by the program. For example, if we have an array of size n
and we want to search for a particular element in the array, the Order of Complexity
of the algorithm will depend on the number of elements in the array. If we perform
a Linear Search through the array, the Order of Complexity will be O(n), which
means that the time taken to search for the element will increase linearly with the
size of the array. If we use a Binary Search Algorithm instead, the Order of
Complexity will be O(log n), which means that the time taken to search for the
element will increase logarithmically with the size of the array.

Similarly, the Order of Complexity of other algorithms, such as Sorting
Algorithms, Graph Algorithms, and Dynamic Programming Algorithms, also
depends on the number of operations the program performs. The Order of
Complexity of these algorithms can be expressed using Big O notation.

Let's take a look at some common orders of complexity and their corresponding
algorithms:

o O(1) - Constant Time Complexity:

This means that the algorithm takes a constant amount of time, regardless of the
input size. For example, accessing an element in an array takes O(1) time, as the
element can be accessed directly using its index.

o O(log n) - Logarithmic Time Complexity:

This means that the algorithm's time taken increases logarithmically with the input
size. This is commonly seen in Divide-and-Conquer Algorithms like Binary Search,
which divide the input into smaller parts to solve the problem.

o O(n) - Linear Time Complexity:

This means that the algorithm's time taken increases linearly with the input size.
Examples of such algorithms are Linear Search and Bubble Sort.
o O(n log n) - Linearithmic Time Complexity:

This means that the algorithm's time taken increases by n multiplied by the
logarithm of n. Examples of such algorithms are Quicksort and Mergesort.

o O(n^2) - Quadratic Time Complexity:

This means that the algorithm's time taken increases quadratically with the input
size. Examples of such algorithms are Bubble Sort and Insertion Sort.

o O(2^n) - Exponential Time Complexity:

This means that the algorithm's time taken doubles with each increase in the input
size. This is commonly seen in recursive algorithms like the naive recursive
computation of the Fibonacci series.

It is important to know that the Order of Complexity only provides an upper bound
on the time taken by the algorithm. The actual time taken may be much less than
this bound, depending on the input data and the implementation of the algorithm.

In C programming, the Order of Complexity of an algorithm can be determined by
analyzing the code and counting the number of operations performed. For example,
if we have a loop that iterates through an array of size n, the time complexity of the
loop will be O(n). Similarly, if we have a recursive function that makes two recursive
calls per invocation down to a depth of k, the time complexity of the function will be O(2^k).

To optimize the performance of a program, it is important to choose algorithms with
a lower Order of Complexity. For example, if we need to sort an array, we should use
a sorting algorithm with a lower order of complexity, such
as Quicksort or Mergesort, rather than Bubble Sort, which has a higher order of
complexity.

There are two such methods used, time complexity and space complexity, which are
discussed below:

Time complexity is the amount of time taken to run an algorithm. It is the measure
of the number of elementary operations performed by an algorithm and an estimate of
the time required for that operation. It also depends on external factors such as the
compiler, the processor’s speed, etc.

If we ask you how much time it takes to add the first five natural numbers (you start
counting 1+2+3+4+5), assume that it took you 3 seconds. But how will you calculate
this for the computer? We cannot! And thus, computer scientists have come up with an
approach to calculate a rough estimate of the time taken to execute a code, called Time
Complexity.

Consider an algorithm which searches a container from first to last looking for one data
item. In that case:

 The Best case complexity is O(1) - the item being searched for is found in the
very first item in the container. In this case the time complexity is not
dependent on the amount of data in the container.
 The Worst case is that every item will need to be looked at - in this case the
time complexity is O(n) (where n is the number of data items).
 The Average case is that the item is found approximately 1/2 way through - in
which case the time complexity is again O(n); it isn’t O(n/2) since in big O
notation constants are removed.
Note here that the average and worst cases are the same - O(n) - this is not uncommon
(in fact most algorithms have this characteristic) and should not cause any significant
concern. All it means is that in theory both the graphs of average and worst case
execution times will be linearly dependent on n.

Space complexity is the amount of memory space an algorithm/program uses
during its entire execution. It measures the number of variables created to store values,
including both the inputs and the outputs.

In simple terms, it is a rough estimation of how much storage your code will take in
RAM.

Anyone dreaming of getting into a product-based industry should not just be able to
write code but write efficient code which takes the least time and memory to execute.
So, let’s begin to establish a solid foundation for this concept.

Asymptotic Notations

Asymptotic Analysis is defined as the big idea that handles the above issues
in analyzing algorithms. In Asymptotic Analysis, we evaluate the
performance of an algorithm in terms of input size (we don’t measure
the actual running time). We calculate, how the time (or space) taken by an
algorithm increases with the input size. 
Asymptotic notation is a way to describe the running time or space
complexity of an algorithm based on the input size. It is commonly used in
complexity analysis to describe how an algorithm performs as the size of the
input grows.
The main idea of asymptotic analysis is to have a measure of the efficiency
of algorithms that don’t depend on machine-specific constants and don’t
require algorithms to be implemented and time taken by programs to be
compared. Asymptotic notations are mathematical tools to represent the
time complexity of algorithms for asymptotic analysis.
Asymptotic notations allow you to analyze an algorithm’s running time by identifying
its behavior as its input size grows. This is also referred to as an algorithm’s growth rate.
You can’t compare two algorithms head to head by raw running times; instead, you
compare their space and time complexity using asymptotic analysis, which compares
two algorithms based on changes in their performance as the input size is increased
or decreased.
There are mainly three asymptotic notations:
1. Theta Notation (Θ-notation)
2. Big-O Notation (O-notation)
3. Omega Notation (Ω-notation)

1. Theta Notation (Θ-Notation) :


Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an
algorithm.
.Theta (average case): you add the running times for each possible input combination and take the average.
Let g and f be the function from the set of natural numbers to itself. The
function f is said to be Θ(g), if there are constants c1, c2 > 0 and a natural
number n0 such that c1* g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0

Theta notation

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1
* g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
Note: Θ(g) is a set

The above expression can be read as: if f(n) is theta of g(n), then the value of f(n) is always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) be non-negative for values of n greater than n0.
The execution time serves as both a lower and an upper bound on the algorithm's time complexity.
A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3); dropping the lower-order terms is always fine because there will always be a number n0 after which n^3 has higher values than n^2, irrespective of the constants involved. For a given function g(n), Θ(g(n)) denotes the following set of functions. Examples:
{ 100 , log(2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)

Note: Θ provides exact bounds.
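As a small illustrative sketch (our own example, not from the text), the function below performs exactly n loop iterations for every input of size n, regardless of the data. Its best case equals its worst case, so its running time is bounded both above and below by c·n, i.e. it is Θ(n):

```c
/* Sums an array of n ints. The loop always runs exactly n times,
 * so best case = worst case = Θ(n): the bound holds from above
 * (c2*n) and from below (c1*n) for all sufficiently large n. */
int sum_array(const int a[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}
```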

2. Big-O Notation (O-notation) :


Big-O notation represents the upper bound of the running time of an
algorithm. Therefore, it gives the worst-case complexity of an algorithm.
.It is the most widely used notation for asymptotic analysis.
.It specifies an upper bound on a function.
.It describes the maximum time required by an algorithm, i.e., the worst-case time complexity.
.It returns the highest possible output value (big-O) for a given input.
.Big-O (worst case): it bounds the longest amount of time an algorithm can take to complete its execution.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive constant c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤
cg(n) for all n ≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2).
Note: O(n^2) also covers linear time.

If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n^2).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.
Examples:
{ 100 , log(2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O provides exact or upper bounds.
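A minimal sketch of the Insertion Sort discussed above (standard textbook algorithm; variable names are ours) makes the two bounds visible in the code: on already-sorted input the inner while loop never runs, giving the Θ(n) best case, while on reverse-sorted input it runs i times for each i, giving the O(n^2) worst case:

```c
/* Insertion sort. Worst case O(n^2) (reverse-sorted input: the inner
 * loop shifts i elements at step i); best case Θ(n) (already-sorted
 * input: the inner loop body never executes). */
void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        /* Shift elements larger than key one slot right. */
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;   /* drop key into its sorted position */
    }
}
```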

3. Omega Notation (Ω-Notation):


Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s time
complexity.
It is defined as the condition that allows an algorithm to complete
statement execution in the shortest amount of time.
Let g and f be the function from the set of natural numbers to itself. The
function f is said to be Ω(g), if there is a constant c > 0 and a natural number
n0 such that c*g(n) ≤ f(n) for all n ≥ n0

Mathematical Representation of Omega notation :


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤
f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time complexity
of Insertion Sort can be written as Ω(n), but it is not very useful information
about insertion sort, as we are generally interested in worst-case and
sometimes in the average case. 

Examples:
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log(2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner because Ω provides exact or lower bounds.
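As an illustrative sketch (our own example), linear search shows why a lower bound alone is weak information: its best case is Ω(1), because the key may sit at index 0 and the loop exits immediately, while its worst case is O(n), when the key is absent or at the last index:

```c
/* Linear search: returns the index of key in a[0..n-1], or -1.
 * Best case Ω(1): key found at index 0, one comparison.
 * Worst case O(n): key at the end or not present, n comparisons. */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* early exit gives the Ω(1) best case */
    return -1;
}
```

Saying only "linear search is Ω(1)" is true but tells us little, which is exactly the point made above about Ω(n) for Insertion Sort.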

Write a C program to calculate the root of a Quadratic Equation

#include <stdio.h>
#include <math.h>

int main()
{
    int a, b, c, d;
    float x1, x2;

    printf("Input the values of a, b & c: ");
    scanf("%d%d%d", &a, &b, &c);

    d = b * b - 4 * a * c;              /* discriminant */
    if (d == 0)
    {
        printf("Both roots are equal.\n");
        x1 = -b / (2.0 * a);
        x2 = x1;
        printf("First Root Root1 = %f\n", x1);
        printf("Second Root Root2 = %f\n", x2);
    }
    else if (d > 0)
    {
        printf("Both roots are real and different.\n");
        x1 = (-b + sqrt(d)) / (2.0 * a);
        x2 = (-b - sqrt(d)) / (2.0 * a);
        printf("First Root Root1 = %f\n", x1);
        printf("Second Root Root2 = %f\n", x2);
    }
    else
    {
        printf("Roots are imaginary;\nNo Solution.\n");
    }
    return 0;
}
Output:
Input the values of a, b & c: 1 5 7

Roots are imaginary;

No Solution.
Flowchart

Flowchart: Calculate root of Quadratic Equation.
