Unit-4 Pps
What is searching?
Searching is the process of finding a particular item in a collection of items. A search typically
answers whether the item is present in the collection or not. Searching requires a key field,
such as a name, ID, or code, that is related to the target item. When the key field of a target item
is found, a pointer to the target item is returned. The pointer may be an address, an index into
a vector or array, or some other indication of where to find the target. If a matching key field
is not found, the user is informed.
There are two basic search techniques:
o Linear search
o Binary search
Linear Search is the simplest search algorithm. In this algorithm, a sequential search is made
over all the items one by one to look for the targeted item. Each item is checked in sequence
until a match is found. If a match is found, the position of that particular item is returned;
otherwise the search continues till the end of the collection.
Algorithm
Step 1: Set i to 1
Step 2: If i > n, go to Step 7
Step 3: If A[i] = x, go to Step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print "Element x found at index i"; go to Step 8
Step 7: Print "Element not found"
Step 8: Exit
Linear Search Example
Let us take an example of an array A[7]={5,2,1,6,3,7,8}. Array A has 7 items. Let us assume
we are looking for 7 in the array. Targeted item=7.
Here, we have
A[7]={5,2,1,6,3,7,8}
X=7
We compare X with each element in turn: A[0]=5, A[1]=2, A[2]=1, A[3]=6 and A[4]=3 do not
match, but A[5]=7 matches, so the search stops and the position of the match (index 5, i.e.
the 6th location) is reported.
Linear search is rarely used in practice, because more efficient techniques such as binary search
exist. The time complexity of the above algorithm is O(n).
1.1. A non-recursive program for Linear Search:
#include <stdio.h>

int main()
{
    int number[25], n, data, i, flag = 0;
    printf("\nEnter the number of elements: ");
    scanf("%d", &n);
    printf("\nEnter the elements: ");
    for (i = 0; i < n; i++)
        scanf("%d", &number[i]);
    printf("\nEnter the element to be searched: ");
    scanf("%d", &data);
    for (i = 0; i < n; i++)
    {
        if (number[i] == data)
        {
            flag = 1;
            break;
        }
    }
    if (flag == 1)
        printf("\nData found at location: %d", i + 1);
    else
        printf("\nData not found");
    return 0;
}
Output:
Enter the number of elements: 5
Enter the elements: 3
5
6
8
2
Enter the element to be searched: 6
Data found at location: 3
1.2. A recursive program for Linear Search:
#include <stdio.h>

void linear_search(int a[], int data, int position, int n)
{
    if (position < n)
    {
        if (a[position] == data)
            printf("\nData found at location: %d", position + 1);
        else
            linear_search(a, data, position + 1, n);
    }
    else
        printf("\nData not found");
}

int main()
{
    int a[25], i, n, data;
    printf("\nEnter the number of elements: ");
    scanf("%d", &n);
    printf("\nEnter the elements: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("\nEnter the element to be searched: ");
    scanf("%d", &data);
    linear_search(a, data, 0, n);
    return 0;
}
Output:
Enter the number of elements: 4
Enter the elements: 3
4
6
8
Enter the element to be searched: 6
Data found at location: 3
2. Binary Search:
Binary Search is fast in terms of run-time complexity. It works on the divide and conquer principle.
The data collection must first be sorted in ascending order; we then search for the targeted item by
comparing it with the middle-most item of the collection. If a match is found, the index of the item is
returned. If the middle item is greater than the targeted item, the item is searched for in the sub-array
to the left of the middle item; otherwise, it is searched for in the sub-array to the right of the middle
item. This process continues on the sub-array until the size of the sub-array reduces to zero.
Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an
element in some list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two halves, and
the item is compared with the middle element of the list. If the match is found then, the location of the
middle element is returned. Otherwise, we search into either of the halves depending upon the result
produced through the match.
NOTE: Binary search can be implemented on sorted array elements. If the list elements are not arranged in
a sorted manner, we have first to sort them.
Algorithm
Binary_Search(a, lower_bound, upper_bound, item) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'item' is the value to search for
Step 1: set beg = lower_bound, end = upper_bound, mid = int((beg + end)/2)
Step 2: repeat steps 3 and 4 while beg <= end and a[mid] != item
Step 3:     if item < a[mid] then
                set end = mid - 1
            else
                set beg = mid + 1
            [end of if]
Step 4:     set mid = int((beg + end)/2)
        [end of step 2 loop]
Step 5: if a[mid] = item then
            set loc = mid
        else
            set loc = NULL
        [end of if]
Step 6: exit
Binary search can be implemented in the following two ways:
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.
The middle element is computed as mid = (beg + end)/2. For example, with beg = 0 and
end = 8, mid = 4, so the item is first compared with the element at index 4.
1. Time Complexity
o Best Case Complexity - In Binary search, best case occurs when the element to search is found in first
comparison, i.e., when the first middle element itself is the element to be searched. The best-case time
complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we have to keep reducing the search
space till it has only one element. The worst-case time complexity of Binary search is O(log n).
2. Space Complexity
o The space complexity of the iterative binary search is O(1), since it needs only a constant number of index
variables. The recursive version takes O(log n) space for the call stack.
#include <stdio.h>

int main()
{
    int a[25], n, data, i, low, high, mid, flag = 0;
    printf("\nEnter the number of elements: ");
    scanf("%d", &n);
    printf("\nEnter the elements in ascending order: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("\nEnter the element to be searched: ");
    scanf("%d", &data);
    low = 0;
    high = n - 1;
    while (low <= high)
    {
        mid = (low + high) / 2;
        if (a[mid] == data)
        {
            flag = 1;
            break;
        }
        else if (a[mid] < data)
            low = mid + 1;
        else
            high = mid - 1;
    }
    if (flag == 1)
        printf("\nData found at location: %d", mid + 1);
    else
        printf("\nData not found");
    return 0;
}
Output:
Enter the number of elements: 4
Enter the elements in ascending order: 3
4
6
8
Enter the element to be searched: 6
Data found at location: 3
3. Bubble Sort:
The bubble sort is easy to understand and program. The basic idea of bubble sort is to
pass through the file sequentially several times. In each pass, we compare each element
in the file with its successor, i.e., X[i] with X[i+1], and interchange the two elements when they
are not in the proper order. We will illustrate this sorting technique by taking a specific
example. Bubble sort is also called exchange sort.
Example:
33 44 22 11 66 55
Suppose we want our array to be stored in ascending order. Then we pass through the
array 5 times as described below.

Pass 1: We compare X[i] and X[i+1] for i = 0, 1, 2, 3 and 4, and interchange X[i] and X[i+1]
if X[i] > X[i+1]:

33 44 22 11 66 55
33 22 44 11 66 55   (44 and 22 interchanged)
33 22 11 44 66 55   (44 and 11 interchanged)
33 22 11 44 55 66   (66 and 55 interchanged)

The biggest number 66 is moved to (bubbled up to) the rightmost position in the array.

Pass 2: We repeat the same process, but this time we leave X[5] alone. The second biggest
number 55 is already at X[4], and 44 and 33 also reach their final positions:

33 22 11 44 55 66
22 33 11 44 55 66   (33 and 22 interchanged)
22 11 33 44 55 66   (33 and 11 interchanged)

Pass 3: We repeat the process leaving X[4] and X[5]:

22 11 33 44 55 66
11 22 33 44 55 66   (22 and 11 interchanged)

At this point the smallest number 11 is in X[0] and the array is sorted; passes 4 and 5 make
no interchanges. Thus, we see that we can sort an array of size 6 in at most 5 passes; in
general, an array of size n is sorted in at most n-1 passes.
#include <stdio.h>

void bubblesort(int x[], int n)
{
    int i, j, temp;
    for (i = 0; i < n; i++)
    {
        for (j = 0; j < n - i - 1; j++)
        {
            if (x[j] > x[j+1])
            {
                temp = x[j];
                x[j] = x[j+1];
                x[j+1] = temp;
            }
        }
    }
}

int main()
{
    int i, n, x[25];
    printf("\nEnter the number of elements: ");
    scanf("%d", &n);
    printf("\nEnter data: ");
    for (i = 0; i < n; i++)
        scanf("%d", &x[i]);
    bubblesort(x, n);
    printf("\nArray elements after sorting: ");
    for (i = 0; i < n; i++)
        printf("%5d", x[i]);
    return 0;
}
Algorithm
This algorithm sorts the array arr with n elements.
Step 1: Set i = 0
Step 2: Repeat steps 3 to 6 while i < n-1
Step 3:     Set j = 0
Step 4:     Repeat steps 5 and 6 while j < n-i-1
Step 5:         If arr[j] > arr[j+1] then
                    Set temp = arr[j]
                    Set arr[j] = arr[j+1]
                    Set arr[j+1] = temp
                [End if]
Step 6:         Set j = j + 1
            [End of step 4 loop]
            Set i = i + 1
        [End of step 2 loop]
Step 7: Exit
Bubble sort complexity
Now, let's see the time complexity of bubble sort in the best case, average case, and worst case. We
will also see the space complexity of bubble sort.
1. Time Complexity
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average case time complexity of bubble
sort is O(n^2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order, but
its elements are in descending order. The worst-case time complexity of bubble sort is O(n^2).
2. Space Complexity
o The space complexity of bubble sort is O(1), because only a single extra variable is
required for swapping.
o The space complexity of the optimized bubble sort is also O(1); it merely needs one more
variable, a flag that records whether any swap occurred in a pass.
Bubble sort is a stable sorting algorithm.
4. Quick Sort:
QuickSort is a sorting algorithm based on the divide and conquer approach. It picks an
element as a pivot and partitions the given array around the picked pivot by placing the pivot
in its correct position in the sorted array.
Choice of Pivot:
There are many different choices for picking pivots:
o Always pick the first element as the pivot.
o Always pick the last element as the pivot (used in the example below).
o Pick a random element as the pivot.
o Pick the middle element as the pivot.
EXAMPLE :Consider: arr[] = {10, 80, 30, 90, 40, 50, 70}
Indexes: 0 1 2 3 4 5 6
low = 0, high = 6, pivot = arr[h] = 70
Initialize index of smaller element, i = -1
j = 0 : Since arr[j] <= pivot, do i++ (i = 0) and swap arr[i] with arr[j] (no change)
j = 1 : Since arr[j] > pivot, do nothing
j = 2 : Since arr[j] <= pivot, do i++ (i = 1) and swap arr[i] with arr[j]
arr[] = {10, 30, 80, 90, 40, 50, 70} // 80 and 30 swapped
j = 3 : Since arr[j] > pivot, do nothing
j = 4 : Since arr[j] <= pivot, do i++ (i = 2) and swap arr[i] with arr[j]
arr[] = {10, 30, 40, 90, 80, 50, 70} // 80 and 40 swapped
j = 5 : Since arr[j] <= pivot, do i++ (i = 3) and swap arr[i] with arr[j]
arr[] = {10, 30, 40, 50, 80, 90, 70} // 90 and 50 swapped
Finally, swap the pivot with arr[i+1], i.e. compare pivot with arr[6] and place it:
arr[] = {10, 30, 40, 50, 70, 90, 80} // 70 and 80 swapped
Now 70 is at its correct place. All elements smaller than 70 are before it and all elements
greater than 70 are after it.
Since quick sort is a recursive function, we then call the partition function again on the left
and right partitions.
Output
Sorted array:
1 5 7 8 9 10
5. Insertion Sort:
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing
cards in your hands. The array is virtually split into a sorted and an unsorted part. Values
from the unsorted part are picked and placed at the correct position in the sorted part.
First Pass:
Consider the array {12, 11, 13, 5, 6}. Initially, the first two elements of the array are compared.
12 11 13 5 6
Here, 12 is greater than 11, hence they are not in ascending order and 12 is not at its
correct position. Thus, swap 11 and 12.
So, for now, 11 is stored in a sorted sub-array.
11 12 13 5 6
Second Pass:
Now, move to the next two elements and compare them.
11 12 13 5 6
Here, 13 is greater than 12, thus both elements seem to be in ascending order; hence,
no swapping will occur. 12 is also stored in the sorted sub-array along with 11.
Third Pass:
Now, two elements are present in the sorted sub-array: 11 and 12.
Moving forward to the next two elements, which are 13 and 5.
Both 5 and 13 are not present at their correct places, so swap them.
11 12 5 13 6
After swapping, elements 12 and 5 are not sorted, thus swap again.
11 5 12 13 6
Then 11 and 5 are also not sorted, hence swap once more.
5 11 12 13 6
Fourth Pass:
Now, the elements which are present in the sorted sub-array are 5, 11 and 12.
Moving to the next two elements, 13 and 6.
Clearly, they are not sorted, thus perform a swap between both.
5 11 12 6 13
After swapping, 12 and 6 are not sorted, so swap again; then 11 and 6 are not sorted, so
swap once more.
5 6 11 12 13
Now the array is completely sorted.
Program for insertion sort
#include <stdio.h>
void insert(int a[], int n) /* function to sort an array with insertion sort */
{
int i, j, temp;
for (i = 1; i < n; i++) {
temp = a[i];
j = i - 1;
while(j>=0 && temp <= a[j]) /* Move the elements greater than temp one position ahead of their current position */
{
a[j+1] = a[j];
j = j-1;
}
a[j+1] = temp;
}
}
void printArr(int a[], int n) /* function to print the array */
{
int i;
for (i = 0; i < n; i++)
printf("%d ", a[i]);
}
int main()
{
int a[] = { 12, 31, 25, 8, 32, 17 };
int n = sizeof(a) / sizeof(a[0]);
printf("Before sorting array elements are - \n");
printArr(a, n);
insert(a, n);
printf("\nAfter sorting array elements are - \n");
printArr(a, n);
return 0;
}
Algorithm
This algorithm sorts the array a with n elements.
Step 1: Set i = 1
Step 2: Repeat steps 3 to 7 while i < n
Step 3:     Set temp = a[i]
Step 4:     Set j = i - 1
Step 5:     Repeat while j >= 0 and a[j] > temp
                Set a[j+1] = a[j]
                Set j = j - 1
            [End of step 5 loop]
Step 6:     Set a[j+1] = temp
Step 7:     Set i = i + 1
        [End of step 2 loop]
Step 8: Exit
Table for all algorithms' complexity:

Algorithm        Best Case    Average Case   Worst Case   Space
Linear Search    O(1)         O(n)           O(n)         O(1)
Binary Search    O(1)         O(log n)       O(log n)     O(1)
Bubble Sort      O(n)         O(n^2)         O(n^2)       O(1)
Insertion Sort   O(n)         O(n^2)         O(n^2)       O(1)
Quick Sort       O(n log n)   O(n log n)     O(n^2)       O(log n)
Order of Complexity
Let's take a look at some common orders of complexity and their corresponding
algorithms:
o O(1) - Constant Time Complexity:
This means that the algorithm takes a constant amount of time, regardless of the
input size. For example, accessing an element in an array takes O(1) time, as the
element can be accessed directly using its index.
o O(log n) - Logarithmic Time Complexity:
This means that the algorithm's time taken increases logarithmically with the input
size. This is commonly seen in Divide-and-Conquer Algorithms like Binary Search,
which divide the input into smaller parts to solve the problem.
o O(n) - Linear Time Complexity:
This means that the algorithm's time taken increases linearly with the input size.
Examples of such algorithms are Linear Search and Bubble Sort.
o O(n log n) - Linearithmic Time Complexity:
This means that the algorithm's time taken increases by n multiplied by the
logarithm of n. Examples of such algorithms are Quicksort and Mergesort.
o O(n^2) - Quadratic Time Complexity:
This means that the algorithm's time taken increases quadratically with the input
size. Examples of such algorithms are Bubble Sort and Insertion Sort.
o O(2^n) - Exponential Time Complexity:
This means that the algorithm's time taken doubles with each increase in the input
size. This is commonly seen in recursive algorithms like the naive Fibonacci computation.
It is important to know that the Order of Complexity only provides an upper bound
on the time taken by the algorithm. The actual time taken may be much less than
this bound, depending on the input data and the implementation of the algorithm.
Time Complexity
Time complexity is the amount of time taken to run an algorithm. It is a measure
of the number of elementary operations performed by an algorithm and an estimate of
the time required for those operations. It also depends on external factors such as the
compiler, the processor's speed, etc.
Suppose we ask how much time it takes you to add the first five natural numbers (you
start counting 1+2+3+4+5); assume that it took you 3 seconds. But how will you calculate
this for the computer? We cannot! Thus, computer scientists have come up with an
approach to calculate a rough estimate of the time taken to execute a code, called Time
Complexity.
Consider an algorithm which searches a container from first to last looking for one data
item :
In that case :
The Best case complexity is O(1) - the item being searched for is found in the
very first item in the container. In this case the time complexity is not
dependent on the amount of data in the container.
The Worst case is that every item will need to be looked at - in this case the
time complexity is O(n) (where n is the number of data items).
The Average case is that the item is found approximately half way through - in
which case the time complexity is again O(n); it isn't O(n/2), since in big-O
notation constants are removed.
Note here that the average and worst cases are the same - O(n) - this is not uncommon
(in fact most algorithms have this characteristic) and should not cause any significant
concern. All it means is that in theory both the graphs of average and worst case
execution times will be linearly dependent on n.
Space Complexity
In simple terms, space complexity is a rough estimation of how much storage your code
will take in RAM.
Anyone dreaming of getting into a product-based industry should not just be able to
write code but write efficient code which takes the least time and memory to execute.
So, let’s begin to establish a solid foundation for this concept.
Asymptotic Notations
Asymptotic Analysis is defined as the big idea that handles the above issues
in analyzing algorithms. In Asymptotic Analysis, we evaluate the
performance of an algorithm in terms of input size (we don’t measure
the actual running time). We calculate, how the time (or space) taken by an
algorithm increases with the input size.
Asymptotic notation is a way to describe the running time or space
complexity of an algorithm based on the input size. It is commonly used in
complexity analysis to describe how an algorithm performs as the size of the
input grows.
The main idea of asymptotic analysis is to have a measure of the efficiency
of algorithms that don’t depend on machine-specific constants and don’t
require algorithms to be implemented and time taken by programs to be
compared. Asymptotic notations are mathematical tools to represent the
time complexity of algorithms for asymptotic analysis.
o Asymptotic notations allow you to analyze an algorithm's running time by identifying
its behavior as its input size grows.
o This is also referred to as an algorithm's growth rate.
o You can't compare two algorithms head to head directly.
o You compare their space and time complexity using asymptotic analysis.
o It compares two algorithms based on changes in their performance as the
input size is increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
Big-O notation
Big-O notation represents the upper bound of the running time of an algorithm. For a
given function g(n), O(g(n)) = { f(n) : there exist positive constants c and n0 such
that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0 }.
Theta notation
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0 }
The above expression can be described as if f(n) is theta of g(n), then the
value f(n) is always between c1 * g(n) and c2 * g(n) for large values of n (n ≥
n0). The definition of theta also requires that f(n) must be non-negative for
values of n greater than n0.
The execution time serves as both a lower and upper bound on the
algorithm’s time complexity.
It exists as both the most (upper) and least (lower) boundary for a given input value.
A simple way to get the Theta notation of an expression is to drop low-order
terms and ignore leading constants. For example, consider the
expression 3n^3 + 6n^2 + 6000 = Θ(n^3); dropping the lower-order terms is
always fine because there will always be a number n after which n^3 has
higher values than n^2, irrespective of the constants involved. For a given
function g(n), Θ(g(n)) denotes the following set of functions.
Examples:
{ 100, log(2000), 10^4 } belongs to Θ(1)
{ (n/4), (2n+3), (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Θ(n^2)
Omega notation
Omega notation represents the lower bound of the running time of an algorithm. For a
given function g(n), Ω(g(n)) = { f(n) : there exist positive constants c and n0 such
that 0 ≤ c * g(n) ≤ f(n) for all n ≥ n0 }.
Examples:
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4), (2n+3), (n/100 + log(n)) } belongs to Ω(n)
U { 100, log(2000), 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner
because Ω provides exact or lower bounds.
Program: Calculate the roots of a quadratic equation.

#include <stdio.h>
#include <math.h>

int main()
{
    int a, b, c, d;
    float x1, x2;
    printf("Input the value of a, b & c: ");
    scanf("%d%d%d", &a, &b, &c);
    d = b * b - 4 * a * c;
    if (d == 0)
    {
        printf("Both roots are equal.\n");
        x1 = -b / (2.0 * a);
        x2 = x1;
        printf("First Root Root1=%f\n", x1);
        printf("Second Root Root2=%f\n", x2);
    }
    else if (d > 0)
    {
        printf("Both roots are real and different.\n");
        x1 = (-b + sqrt(d)) / (2 * a);
        x2 = (-b - sqrt(d)) / (2 * a);
        printf("First Root Root1=%f\n", x1);
        printf("Second Root Root2=%f\n", x2);
    }
    else
        printf("No Solution.\n");
    return 0;
}
Output:
Input the value of a, b & c: 1 5 7
No Solution.
Flowchart: Calculate root of Quadratic Equation.