DS Unit 6 SearchSort
Searching is used to find the location where an element is available. There are two
types of search techniques. They are:
1. Linear or sequential search and
2. Binary search

Sorting is the technique of arranging the elements in some order. In this unit we study
the following sorting techniques:
1. Bubble sort
2. Quick sort
3. Selection sort and
4. Heap sort

Depending on where the data to be sorted resides, there are two types of sorting:
1. Internal sorting
2. External sorting
If all the elements to be sorted are present in the main memory, then such sorting is
called internal sorting. On the other hand, if some of the elements to be sorted are
kept in secondary storage, it is called external sorting. Here we study only
internal sorting techniques.
Algorithm:
Let array a[n] store n elements. Determine whether element 'x' is present or not.
linsrch(a[n], x)
{
index = 0;
flag = 0;
while (index < n) do
{
if (x == a[index])
{
flag = 1;
break;
}
index ++;
}
if (flag == 1)
printf("Data found at %d position", index);
else
printf("Data not found");
}
Example 1:
Suppose we have the following unsorted list: 45, 39, 8, 54, 77, 38, 24, 16, 4, 7, 9, 20
For any element not in the list, we’ll look at 12 elements before failure.
Example 2:
Consider the following 9 elements:
Index     0    1   2   3   4   5    6    7    8
Elements -15   -6  0   7   9   23   54   82   101
# include <stdio.h>
# include <conio.h>
main()
{
int number[25], n, data, i, flag = 0;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements: ");
for(i = 0; i < n; i++)
scanf("%d", &number[i]);
printf("\n Enter the element to be Searched: ");
scanf("%d", &data);
for( i = 0; i < n; i++)
{
if(number[i] == data)
{
flag = 1;
break;
}
}
if(flag == 1)
printf("\n Data found at location: %d", i+1);
else
printf("\n Data not found ");
}
# include <stdio.h>
# include <conio.h>
void main()
{
int a[25], i, n, data;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements: ");
for(i = 0; i < n; i++)
{
scanf("%d", &a[i]);
}
printf("\n Enter the element to be seached: ");
scanf("%d", &data);
linear_search(a, data, 0, n);
getch();
}
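The program above calls linear_search(a, data, 0, n), but the function itself is not shown in this
section. The following is a minimal recursive sketch consistent with that call (array, key, current
position, number of elements); the printed messages are illustrative, not the notes' original listing.
Place the function before main() or add a prototype for it.

/* Recursively search a[position..n-1] for 'data' and report the result. */
void linear_search(int a[], int data, int position, int n)
{
    if (position == n)                               /* reached the end: element not found */
        printf("\n Data not found");
    else if (a[position] == data)
        printf("\n Data found at location: %d", position + 1);
    else
        linear_search(a, data, position + 1, n);     /* try the next position */
}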
Suppose we have 'n' records which have been ordered by their keys so that x1 < x2 < . . . < xn.
When we are given an element 'x', binary search is used to find the corresponding element
in the list. In case 'x' is present, we have to determine a value 'j' such that a[j] = x
(successful search). If 'x' is not in the list then j is set to zero (unsuccessful search).
In Binary search we jump into the middle of the file, where we find key a[mid], and
compare ‘x’ with a[mid]. If x = a[mid] then the desired record has been found.
If x < a[mid] then ‘x’ must be in that portion of the file that precedes a[mid]. Similarly,
if a[mid] > x, then further search is only necessary in that part of the file which follows
a[mid].
If we use recursive procedure of finding the middle key a[mid] of the un-searched
portion of a file, then every un-successful comparison of ‘x’ with a[mid] will eliminate
roughly half the un-searched portion from consideration.
Since the array size is roughly halved after each comparison between ‘x’ and a[mid],
and since an array of length ‘n’ can be halved only about log2n times before reaching a
trivial length, the worst case complexity of Binary search is about log2n.
Algorithm:
Let array a[n] of elements in increasing order, n ≥ 0, determine whether ‘x’ is present,
and if so, set j such that x = a[j] else return 0.
low and high are integer variables such that each time through the loop either ‘x’ is
found or low is increased by at least one or high is decreased by at least one. Thus we
have two sequences of integers approaching each other and eventually low will become
greater than high causing termination in a finite number of steps if ‘x’ is not present.
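As an illustration of the algorithm described above, here is a minimal recursive C sketch. The
function name binSearch and its parameters are illustrative and assume a 0-based array sorted in
ascending order, with -1 (rather than 0) returned for an unsuccessful search.

#include <stdio.h>

/* Return the index of 'x' in the sorted array a[low..high], or -1 if it is absent. */
int binSearch(int a[], int low, int high, int x)
{
    int mid;
    if (low > high)                                 /* the interval is empty: unsuccessful search */
        return -1;
    mid = (low + high) / 2;
    if (x == a[mid])
        return mid;
    else if (x < a[mid])
        return binSearch(a, low, mid - 1, x);       /* search the left half */
    else
        return binSearch(a, mid + 1, high, x);      /* search the right half */
}

int main(void)
{
    int a[] = {-15, -6, 0, 7, 9, 23, 54, 82, 101};  /* the 9-element array of Example 2 below */
    printf("%d\n", binSearch(a, 0, 8, 23));         /* prints 5 */
    return 0;
}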
Example 1:
Index 1 2 3 4 5 6 7 8 9 10 11 12
Elements 4 7 8 9 16 20 24 38 39 45 54 77
20 – requires 1 comparison;
8 and 39 – requires 2 comparisons;
4, 9, 24, 54 – requires 3 comparisons and
7, 16, 38, 45, 77 – requires 4 comparisons
Summing the comparisons needed to find all twelve items and dividing by 12 yields
37/12, or approximately 3.08 comparisons per successful search on the average.
Example 2:
Index 0 1 2 3 4 5 6 7 8
Elements -15 -6 0 7 9 23 54 82 101
Solution:
The number of element comparisons needed to find each of the nine elements is:
Index 1 2 3 4 5 6 7 8 9
Elements -15 -6 0 7 9 23 54 82 101
Comparisons 3 2 3 4 1 3 2 3 4
There are ten possible ways that an un-successful search may terminate depending
upon the value of x.
If x < a(1), a(1) < x < a(2), a(2) < x < a(3), a(5) < x < a(6), a(6) < x < a(7) or a(7)
< x < a(8) the algorithm requires 3 element comparisons to determine that ‘x’ is not
present. For all of the remaining possibilities BINSRCH requires 4 element comparisons.
Thus the average number of element comparisons for an unsuccessful search is:
(3 + 3 + 3 + 4 + 4 + 3 + 3 + 3 + 4 + 4) / 10 = 34/10 = 3.4
Time Complexity:
The time complexity of binary search in a successful search is O(log n) and for an
unsuccessful search is O(log n).
# include <stdio.h>
# include <conio.h>
main()
{
int number[25], n, data, i, flag = 0, low, high, mid;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements in ascending order: ");
for(i = 0; i < n; i++)
scanf("%d", &number[i]);
printf("\n Enter the element to be searched: ");
scanf("%d", &data);
low = 0; high = n-1;
while(low <= high)
{
mid = (low + high)/2;
if(number[mid] == data)
{
flag = 1;
break;
}
else
{
if(data < number[mid])
high = mid - 1;
else
low = mid + 1;
}
}
if(flag == 1)
printf("\n Data found at location: %d", mid + 1);
else
printf("\n Data Not Found ");
}
The bubble sort is easy to understand and program. The basic idea of bubble sort is to
pass through the file sequentially several times. In each pass, we compare each
element in the file with its successor, i.e., X[i] with X[i+1], and interchange the two
elements when they are not in the proper order. We will illustrate this sorting technique
by taking a specific example. Bubble sort is also called exchange sort.
Example:
Consider the array X with the elements 33, 44, 22, 11, 66, 55. Suppose we want the array to be
sorted in ascending order. Then we pass through the array 5 times as described below:

Pass 1:
We compare X[i] and X[i+1] for i = 0, 1, 2, 3 and 4, and interchange X[i] and X[i+1]
if X[i] > X[i+1]: 44 and 22 are interchanged, then 44 and 11, and finally 66 and 55, giving
33 22 11 44 55 66
The biggest number 66 is moved to (bubbled up) the rightmost position in the array.
Pass 2:
We repeat the same process, but this time we do not include X[5] in our comparisons,
i.e., we compare X[i] with X[i+1] for i = 0, 1, 2 and 3 and interchange X[i] and X[i+1]
if X[i] > X[i+1]: 33 and 22 are interchanged, then 33 and 11, giving
22 11 33 44 55 66
The second biggest number 55 is now fixed in X[4].

Pass 3:
We repeat the same process, but this time we leave both X[4] and X[5]: 22 and 11 are
interchanged, giving
11 22 33 44 55 66
By doing this, the third biggest number 44 is fixed in X[3].

Pass 4:
We repeat the process leaving X[3], X[4] and X[5]: no interchanges are needed, and the
fourth biggest number 33 stays in X[2].

Pass 5:
We repeat the process leaving X[2], X[3], X[4] and X[5]: again no interchange is needed, so
the fifth biggest number 22 is in X[1]. At this time, we have the smallest number 11 in
X[0]. Thus, we see that we can sort the array of size 6 in 5 passes.
#include <stdio.h>
#include <conio.h>
void bubblesort(int x[], int n)
{
int i, j, temp;
for (i = 0; i < n - 1; i++)          /* (n-1) passes */
{
for (j = 0; j < n - i - 1; j++)      /* compare adjacent elements in the unsorted part */
{
if (x[j] > x[j+1])
{
temp = x[j];
x[j] = x[j+1];
x[j+1] = temp;
}
}
}
}
main()
{
int i, n, x[25];
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter Data:");
for(i = 0; i < n ; i++)
scanf("%d", &x[i]);
bubblesort(x, n);
printf ("\n Array Elements after sorting: ");
for (i = 0; i < n; i++)
printf ("%5d", x[i]);
}
Time Complexity:
The bubble sort method of sorting an array of size n requires (n-1) passes and at most (n-1)
comparisons on each pass. Thus the total number of comparisons is (n-1) * (n-1) = n² - 2n + 1,
which is O(n²). Therefore bubble sort is very inefficient when there are many elements to be
sorted.
Selection sort requires no more than (n-1) interchanges. Suppose x is an array of
size n stored in memory. The selection sort algorithm first selects the smallest element
in the array x and places it at array position 0; then it selects the next smallest element
in the array x and places it at array position 1. It simply continues this procedure until it
places the biggest element in the last position of the array.
The array is passed through (n-1) times and the smallest element is placed in its
respective position in the array as detailed below:
Pass 1: Find the location j of the smallest element in the array x[0], x[1], . . . . x[n-1],
and then interchange x[j] with x[0]. Then x[0] is sorted.
Pass 2: Leave the first element and find the location j of the smallest element in the
sub-array x[1], x[2], . . . . x[n-1], and then interchange x[1] with x[j]. Then
x[0], x[1] are sorted.
Pass 3: Leave the first two elements and find the location j of the smallest element in
the sub-array x[2], x[3], . . . . x[n-1], and then interchange x[2] with x[j].
Then x[0], x[1], x[2] are sorted.
Pass (n-1): Find the location j of the smaller of the elements x[n-2] and x[n-1], and
then interchange x[j] and x[n-2]. Then x[0], x[1], . . . . x[n-2] are sorted. Of
course, during this pass x[n-1] will be the biggest element and so the entire
array is sorted.
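For reference, a compact iterative C sketch of the pass structure just described (the function
name selection_sort and its parameters are illustrative; the programs given later in this section
use recursive versions of the same idea):

/* Selection sort: on pass i, place the smallest of x[i..n-1] at position i. */
void selection_sort(int x[], int n)
{
    int i, k, minindex, temp;
    for (i = 0; i < n - 1; i++)            /* (n-1) passes */
    {
        minindex = i;
        for (k = i + 1; k < n; k++)        /* find the smallest element in x[i..n-1] */
            if (x[k] < x[minindex])
                minindex = k;
        temp = x[i];                       /* interchange x[i] and x[minindex] */
        x[i] = x[minindex];
        x[minindex] = temp;
    }
}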
Time Complexity:
In general we prefer selection sort in cases where the insertion sort or the bubble sort
requires excessive swapping. In spite of the superiority of the selection sort over bubble
sort and insertion sort (there is a significant decrease in run time), its efficiency is
also O(n²) for n data items.
# include<stdio.h>
# include<conio.h>
int a[25];
int main()
{
int num, i= 0;
clrscr();
printf( "Enter the number of elements: " );
scanf("%d", &num);
printf( "\nEnter the elements:\n" );
for(i=0; i < num; i++)
scanf( "%d", &a[i] );
selectionSort( 0, num - 1 );
printf( "\nThe elements after sorting are: " );
for( i=0; i< num; i++ )
printf( "%d ", a[i] );
return 0;
}
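The program above calls selectionSort(0, num - 1), but the function definition is missing from
this section. A minimal recursive sketch consistent with that call, using the global array a[]
declared above, might look as follows (place it before main() or add a prototype):

/* Recursively sort a[low..high]: put the smallest element at 'low', then sort the rest. */
void selectionSort(int low, int high)
{
    int i, minindex, temp;
    if (low < high)
    {
        minindex = low;
        for (i = low + 1; i <= high; i++)      /* locate the smallest element in a[low..high] */
        {
            if (a[i] < a[minindex])
                minindex = i;
        }
        temp = a[low];                         /* interchange a[low] and a[minindex] */
        a[low] = a[minindex];
        a[minindex] = temp;
        selectionSort(low + 1, high);          /* sort the remaining sub-array */
    }
}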
#include <stdio.h>
#include <conio.h>

int x[5];                        /* global array sorted by selectionSort() */

void selectionSort(int n);

void main()
{
int i;
clrscr();
printf("\n Enter 5 elements: ");
for (i = 0; i < 5; i++)
scanf("%d", &x[i]);
printf("\n Array Elements before sorting: ");
for (i = 0; i < 5; i++)
printf("%d ", x[i]);
selectionSort(0);                /* sort x[0..4] recursively */
printf("\n Array Elements after sorting: ");
for (i = 0; i < 5; i++)
printf("%d ", x[i]);
getch();
}

/* Recursive selection sort: place the smallest of x[n..4] at position n. */
void selectionSort(int n)
{
int k, p, temp, min;
if (n == 4)                      /* only one element remains, so the array is sorted */
return;
min = x[n];
p = n;
for (k = n + 1; k < 5; k++)      /* find the smallest element in x[n..4] */
{
if (x[k] < min)
{
min = x[k];
p = k;
}
}
temp = x[n];                     /* interchange x[n] and x[p] */
x[n] = x[p];
x[p] = temp;
selectionSort(n + 1);            /* sort the remaining sub-array */
}
The quick sort was invented by Prof. C. A. R. Hoare in the early 1960's. It is one of
the most efficient sorting algorithms and is an example of the class of algorithms that
work by the "divide and conquer" technique.
The quick sort algorithm partitions the original array by rearranging it into two groups.
The first group contains those elements less than some arbitrary chosen value taken
from the set, and the second group contains those elements greater than or equal to
the chosen value. The chosen value is known as the pivot element. Once the array has
been rearranged in this way with respect to the pivot, the same partitioning procedure
is recursively applied to each of the two subsets. When all the subsets have been
partitioned and rearranged, the original array is sorted.
The recursive function quicksort() works as follows:
1. It terminates when the condition low >= high is satisfied, i.e., when the sub-array
to be sorted contains at most one element.
2. Here we choose the first element as the 'pivot'. So, pivot = x[low]. Now it
calls the partition function to find the proper position j of the element x[low]
i.e. pivot. Then we will have two sub-arrays x[low], x[low+1], . . . . . . x[j-1]
and x[j+1], x[j+2], . . . x[high].
3. It calls itself recursively to sort the left sub-array x[low], x[low+1], . . x[j-1]
between positions low and j-1.
4. It calls itself recursively to sort the right sub-array x[j+1], x[j+2], . . x[high]
between positions j+1 and high.
The function partition() makes use of two pointers 'up' and 'down' which are moved
toward each other as illustrated in the example below.
Algorithm
Sorts the elements a[p], . . . . . , a[q] which reside in the global array a[n] into
ascending order. a[n + 1] is considered to be defined and must be greater than or equal to
all elements in a[n]; a[n + 1] = +∞ acts as a sentinel.
quicksort (p, q)
{
if ( p < q ) then
{
call j = PARTITION(a, p, q+1); // j is the position of the partitioning element
call quicksort(p, j – 1);
call quicksort(j + 1 , q);
}
}
partition(a, m, p)
{
v = a[m]; up = m; down = p; // a[m] is the partition element
do
{
repeat
up = up + 1;
until (a[up] > v);
repeat
down = down - 1;
until (a[down] < v);
if (up < down) then call interchange(a, up, down);
} while (up < down);
a[m] = a[down];
a[down] = v;
return (down);
}
Example:
Select first element as the pivot element. Move ‘up’ pointer from left to right in search
of an element larger than pivot. Move the ‘down’ pointer from right to left in search of
an element smaller than pivot. If such elements are found, the elements are swapped.
This process continues till the ‘up’ pointer crosses the ‘down’ pointer. If ‘up’ pointer
crosses ‘down’ pointer, the position for pivot is found and interchange pivot and
element at ‘down’ position.
Let us consider the following example with 13 elements to analyze quick sort:
38 08 16 06 79 57 24 56 02 58 04 70 45      pivot = 38; up stops at 79, down stops at 04: swap up & down
38 08 16 06 04 57 24 56 02 58 79 70 45      up stops at 57, down stops at 02: swap up & down
38 08 16 06 04 02 24 56 57 58 79 70 45      up crosses down (at 24): swap pivot & down
(24 08 16 06 04 02) 38 (56 57 58 79 70 45)

The same procedure is now applied to the left sub-array:

(02 08 16 06 04) 24                         pivot 24 is exchanged with 02 at the 'down' position
02 (08 16 06 04)                            pivot 02 is already in its final position
(06 04) 08 (16)                             within (08 16 06 04): 16 and 04 are swapped, then pivot 08 and 06
(04) 06                                     pivot 06 is exchanged with 04
04                                          single element
16                                          single element
(02 04 06 08 16 24) 38

and then to the right sub-array (56 57 58 79 70 45), which finally yields the sorted list:

02 04 06 08 16 24 38 45 56 57 58 70 79
# include<stdio.h>
# include<conio.h>
int array[25];
int partition(int low, int high);
void quicksort(int low, int high);    /* see the sketch following this program */
void interchange(int i, int j);
int main()
{
int num, i = 0;
clrscr();
printf( "Enter the number of elements: " );
scanf( "%d", &num);
printf( "Enter the elements: " );
for(i=0; i < num; i++)
scanf( "%d", &array[i] );
quicksort(0, num - 1);
printf( "\nThe elements after sorting are: " );
for(i=0; i < num; i++)
printf( "%d ", array[i] );
return 0;
}
/* Partition array[low..high] around the pivot array[low]; return the pivot's final position. */
int partition(int low, int high)
{
int pivot = array[low];
int up = low, down = high + 1;
do
{
do
up = up + 1;
while(up <= high && array[up] < pivot);
do
down = down - 1;
while(array[down] > pivot);
if(up < down)
interchange(up, down);
} while(up < down);
array[low] = array[down];    /* place the pivot in its final position */
array[down] = pivot;
return down;
}
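The partition() function above needs the quicksort() and interchange() routines it works with.
A minimal sketch consistent with the pseudocode given earlier, operating on the same global
array, is shown below; treat it as an illustrative completion of the listing, not the notes'
original code.

/* Exchange array[i] and array[j]. */
void interchange(int i, int j)
{
    int temp = array[i];
    array[i] = array[j];
    array[j] = temp;
}

/* Sort array[low..high] by partitioning around array[low] and recursing on both halves. */
void quicksort(int low, int high)
{
    int j;
    if (low < high)
    {
        j = partition(low, high);   /* final position of the pivot */
        quicksort(low, j - 1);      /* sort the left sub-array */
        quicksort(j + 1, high);     /* sort the right sub-array */
    }
}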
A heap is a data structure which permits one to insert elements into a set and also to
find the largest element efficiently. A data structure which provides these two
operations is called a priority queue.
A max heap is an almost complete binary tree such that the value of each node is
greater than or equal to those in its children. For example, the tree with root 95,
children 85 and 45, and leaves 75, 25, 35 and 15 is a max heap.

A min heap is an almost complete binary tree such that the value of each node is less
than or equal to those in its children. For example, the tree with root 15, children
45 and 25, and leaves 55, 65, 35 and 75 is a min heap.
Since a heap is a complete binary tree, a heap tree can be efficiently represented using a
one-dimensional array. If the root is stored at x[1], then for any node at index i its left
child is at x[2i], its right child is at x[2i+1], and its parent is at x[⌊i/2⌋]. This provides
a very convenient way of figuring out where children belong.

The elements of the array can be thought of as lying in a tree structure. For example, the
heap tree with root 65 is represented by the array:

x[1]  x[2]  x[3]  x[4]  x[5]  x[6]  x[7]  x[8]
 65    45    60    40    25    50    55    30
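As a small self-contained illustration of this indexing (a minimal sketch; the variable names
are illustrative), the program below prints each node of the heap above together with its
children, computed directly from the array index:

#include <stdio.h>

/* 1-based heap indexing: for node i, parent = i/2, children = 2i and 2i+1. */
int main(void)
{
    int x[] = {0, 65, 45, 60, 40, 25, 50, 55, 30};  /* x[1..8]: the heap shown above */
    int n = 8, i;

    for (i = 1; i <= n; i++)
    {
        printf("node x[%d] = %d", i, x[i]);
        if (2 * i <= n)     printf("  left child x[%d] = %d", 2 * i, x[2 * i]);
        if (2 * i + 1 <= n) printf("  right child x[%d] = %d", 2 * i + 1, x[2 * i + 1]);
        printf("\n");
    }
    return 0;
}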
The major operations required to be performed on a heap tree are:
1. Insertion,
2. Deletion and
3. Merging.
This operation is used to insert a node into an existing heap tree satisfying the
properties of heap tree. Using repeated insertions of data, starting from an empty heap
tree, one can build up a heap tree.
Let us consider the heap (max) tree. The principle of insertion is that, first we have to
adjoin the data in the complete binary tree. Next, we have to compare it with the data
in its parent; if the value is greater than that at parent then interchange the values.
This will continue between two nodes on path from the newly inserted node to the root
node till we get a parent whose value is greater than its child or we reached the root.
For illustration, consider inserting 35 into a heap whose root is 80: 35 is added as the
right child of 80. Its value is compared with its parent's value; since the parent's value (80)
is greater than the child's value (35), the max heap property is satisfied, so no interchange
or further comparison is required.
As another illustration, let us consider inserting 90 into the resultant heap tree. First, 90
is added as the left child of 40; when 90 is compared with 40 it requires an interchange. Next,
90 is compared with 80, and another interchange takes place. The process stops here, as 90 is
now in the root node.
The algorithm Max_heap_insert to insert a data into a max heap tree is as follows:
Max_heap_insert (a, n)
{
//inserts the value in a[n] into the heap which is stored at a[1] to a[n-1]
int i;
i = n;
item = a[n];
while ( (i > 1) and (a[ ⎣ i/2 ⎦ ] < item) ) do
{
a[i] = a[ ⎣ i/2 ⎦ ] ; // move the parent down
i = ⎣ i/2 ⎦ ;
}
a[i] = item ;
return true ;
}
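For reference, here is a minimal C sketch of the same insertion logic, assuming a 1-based array
a[1..n] and the illustrative function name maxHeapInsert (it mirrors the pseudocode above rather
than any particular library routine). The demonstration in main() builds the heap for the data
used in the example that follows.

#include <stdio.h>

/* Insert 'item' as the n-th element of the max heap stored in a[1..n-1] (1-based). */
void maxHeapInsert(int a[], int n, int item)
{
    int i = n;
    while (i > 1 && a[i / 2] < item)   /* move smaller ancestors down the path to the root */
    {
        a[i] = a[i / 2];
        i = i / 2;
    }
    a[i] = item;                       /* final position of the new element */
}

int main(void)
{
    int data[] = {40, 80, 35, 90, 45, 50, 70};   /* data used in the example below */
    int a[8], n, i;

    for (n = 1; n <= 7; n++)
        maxHeapInsert(a, n, data[n - 1]);

    for (i = 1; i <= 7; i++)                     /* prints: 90 80 70 40 45 35 50 */
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}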
Example:
Form a heap using the above algorithm for the data: 40, 80, 35, 90, 45, 50, 70.
1. Insert 40: the heap contains the single element 40.
2. Insert 80: 80 is added as the left child of 40; since 80 > 40 they are interchanged.
   Heap (level by level): 80, 40.
3. Insert 35: 35 is added as the right child of 80; no interchange is needed.
   Heap: 80, 40, 35.
4. Insert 90: 90 is added as the left child of 40; 90 > 40, so they are interchanged;
   90 > 80, so they are interchanged again and 90 becomes the root.
   Heap: 90, 80, 35, 40.
5. Insert 45: 45 is added as the right child of 80; 45 < 80, so no interchange is needed.
   Heap: 90, 80, 35, 40, 45.
6. Insert 50: 50 is added as the left child of 35; 50 > 35, so they are interchanged;
   50 < 90, so the process stops. Heap: 90, 80, 50, 40, 45, 35.
7. Insert 70: 70 is added as the right child of 50; 70 > 50, so they are interchanged;
   70 < 90, so the process stops. Heap: 90, 80, 70, 40, 45, 35, 50.
Any node can be deleted from a heap tree. But from the application point of view,
deleting the root node has some special importance. The principle of deletion is as
follows:
• Replace the root node by the last node in the heap tree. Then re-heap the
tree as stated below:
• Let the newly modified root node be the current node. Compare its value
with the values of its two children. Let X be the child whose value is the
largest. If the value of X is greater than the value of the current node,
interchange the value of X with the value of the current node.
• Make X the current node and continue the re-heap until the current node is
not smaller than either of its children, or it becomes a leaf.
delmax (a, n, x)
// delete the maximum from the heap a[n] and store it in x
{
if (n = 0) then
{
write (“heap is empty”);
return false;
}
x = a[1]; a[1] = a[n];
adjust (a, 1, n-1);
return true;
}
adjust (a, i, n)
// The complete binary trees with roots a[2*i] and a[2*i + 1] are combined with a[i] to
// form a single heap, 1 ≤ i ≤ n. No node has an address greater than n or less than 1.
{
j = 2 * i;
item = a[i];
while (j ≤ n) do
{
if ( (j < n) and (a[j] < a[j + 1]) ) then
j = j + 1; // compare left and right child and let j be the larger child
if (item ≥ a[j]) then break; // a position for item is found
a[ ⎣ j / 2 ⎦ ] = a[j]; // move the larger child up a level
j = 2 * j;
}
a[ ⎣ j / 2 ⎦ ] = item;
}
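As a companion to the delmax/adjust pseudocode, the following is a minimal C sketch assuming the
same 1-based array representation a[1..n]; the function names delMax and adjust and their exact
signatures are illustrative, not routines defined verbatim in these notes. The demonstration uses
the heap from the example that follows.

#include <stdio.h>

/* Move a[i] down until the sub-tree rooted at i satisfies the max-heap property (1-based). */
static void adjust(int a[], int i, int n)
{
    int j = 2 * i, item = a[i];
    while (j <= n)
    {
        if (j < n && a[j] < a[j + 1]) j++;   /* pick the larger child */
        if (item >= a[j]) break;             /* item's position found */
        a[j / 2] = a[j];                     /* move the larger child up a level */
        j = 2 * j;
    }
    a[j / 2] = item;
}

/* Delete the maximum from the heap a[1..*n] and store it in *x; returns 0 if the heap is empty. */
static int delMax(int a[], int *n, int *x)
{
    if (*n == 0)
    {
        printf("heap is empty\n");
        return 0;
    }
    *x = a[1];
    a[1] = a[*n];         /* move the last node to the root */
    (*n)--;
    adjust(a, 1, *n);     /* re-heap the tree */
    return 1;
}

int main(void)
{
    int a[] = {0, 99, 45, 63, 27, 12, 24, 26};   /* heap from the example below */
    int n = 7, max;

    if (delMax(a, &n, &max))
        printf("deleted %d, new root %d\n", max, a[1]);  /* deleted 99, new root 63 */
    return 0;
}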
Here the root node is 99. The last node is 26; it is in level 3. So, 99 is replaced by
26 and the node with data 26 is removed from the tree. Next, 26 at the root node is
compared with its two children 45 and 63. As 63 is greater, they are interchanged. Now,
26 is compared with its remaining child 24; since 26 is greater, the re-heap stops.

Before deletion of the node with data 99:  99 / 45 63 / 27 12 24 26
After deletion of the node with data 99:   63 / 45 26 / 27 12 24
Consider two heap trees H1 and H2. Merging the tree H2 with H1 means to include all
the nodes from H2 in H1. H2 may be a min heap or a max heap, and the resultant tree will
be a min heap if H1 is a min heap, else it will be a max heap. The merging operation
consists of two steps. Continue steps 1 and 2 while H2 is not empty:
1. Delete the root node, say x, from H2 and re-heap H2.
2. Insert the node x into H1, satisfying the property of H1.

For example, merging the heap H2 (root 13, children 19 and 80, leaves 92, 93 and 96) into
the max heap H1 (root 92, children 59 and 67, leaves 38 and 45) produces a max heap whose
root is 96, with 93 and 67 as its children.
A heap sort algorithm works by first organizing the data to be sorted into a special type
of binary tree called a heap. Any kind of data can be sorted either in ascending order or
in descending order using a heap tree. It does this with the following steps:
1. Build a heap tree with the given set of data.
2. a. Remove the top most item (the largest) and replace it with the last element in the heap.
   b. Re-heapify the complete binary tree.
   c. Place the deleted node in the output (the position just vacated at the end of the heap).
3. Continue step 2 until the heap tree is empty.
Algorithm:
This algorithm sorts the elements a[n]. Heap sort rearranges them in-place in non-
decreasing order. First transform the elements into a heap.
heapsort(a, n)
{
heapify(a, n);
for i = n to 2 by -1 do
{
temp = a[i];
a[i] = a[1];
a[1] = temp;
adjust (a, 1, i – 1);
}
}
heapify (a, n)
// Readjust the elements in a[n] to form a heap.
{
for i = ⎣ n/2 ⎦ to 1 by -1 do
adjust (a, i, n);
}
adjust (a, i, n)
// The complete binary trees with roots a[2*i] and a[2*i + 1] are combined with a[i] to
// form a single heap, 1 ≤ i ≤ n. No node has an address greater than n or less than 1.
{
j = 2 * i;
item = a[i];
while (j ≤ n) do
{
if ( (j < n) and (a[j] < a[j + 1]) ) then
j = j + 1; // compare left and right child and let j be the larger child
if (item ≥ a[j]) then break; // a position for item is found
a[ ⎣ j / 2 ⎦ ] = a[j]; // move the larger child up a level
j = 2 * j;
}
a[ ⎣ j / 2 ⎦ ] = item;
}
Each of the 'n' insertion operations takes O(log k) time, where 'k' is the number of elements
in the heap at the time. Likewise, each of the 'n' remove operations also runs in time
O(log k), where 'k' is the number of elements in the heap at the time.
Since we always have k ≤ n, each such operation runs in O(log n) time in the worst
case.
Thus, for ‘n’ elements it takes O(n log n) time, so the priority queue sorting algorithm
runs in O(n log n) time when we use a heap to implement the priority queue.
Example 1:
Form a heap from the set of elements (40, 80, 35, 90, 45, 50, 70) and sort the data
using heap sort.
Solution:
First form a heap tree from the given set of data and then sort by repeated deletion
operation:
Building the heap: the initial complete binary tree for 40, 80, 35, 90, 45, 50, 70 is
re-adjusted from the bottom up (as heapify does):
adjust the sub-tree rooted at 35:  40 80 70 90 45 50 35
adjust the sub-tree rooted at 80:  40 90 70 80 45 50 35
adjust the sub-tree rooted at 40:  90 80 70 40 45 50 35

The repeated exchange-and-re-heapify steps are then:
1. Exchange root 90 with the last element 35 of the array and re-heapify: 80 45 70 40 35 50 | 90
2. Exchange root 80 with the last element 50 of the array and re-heapify: 70 45 50 40 35 | 80 90
3. Exchange root 70 with the last element 35 of the array and re-heapify: 50 45 35 40 | 70 80 90
4. Exchange root 50 with the last element 40 of the array and re-heapify: 45 40 35 | 50 70 80 90
5. Exchange root 45 with the last element 35 of the array and re-heapify: 40 35 | 45 50 70 80 90
6. Exchange root 40 with the last element 35 of the array and re-heapify: 35 | 40 45 50 70 80 90

The sorted elements are: 35 40 45 50 70 80 90.
#include <stdio.h>
#include <conio.h>

void heapsort(int n, int a[]);   /* see the sketch following this program */

void main()
{
int i, n, a[20];
clrscr();
printf("\n How many element you want: ");
scanf("%d", &n);
printf("Enter %d elements: ", n);
for (i=1; i<=n; i++)
scanf("%d", &a[i]);
heapsort(n, a);
printf("\n The sorted elements are: \n");
for (i=1; i<=n; i++)
printf("%5d", a[i]);
getch();
}
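The program above calls heapsort(n, a), but its C definition is not shown in this section. Below
is a minimal sketch that follows the heapsort/heapify/adjust pseudocode given earlier, assuming
the 1-based indexing (a[1..n]) used by the program; treat it as an illustrative completion rather
than the notes' original listing.

/* Move a[i] down until the sub-tree rooted at i is a max heap (1-based indexing). */
void adjust(int a[], int i, int n)
{
    int j = 2 * i;
    int item = a[i];
    while (j <= n)
    {
        if (j < n && a[j] < a[j + 1])   /* let j be the larger child */
            j = j + 1;
        if (item >= a[j])               /* a position for item is found */
            break;
        a[j / 2] = a[j];                /* move the larger child up a level */
        j = 2 * j;
    }
    a[j / 2] = item;
}

/* Readjust the elements in a[1..n] to form a max heap. */
void heapify(int a[], int n)
{
    int i;
    for (i = n / 2; i >= 1; i--)
        adjust(a, i, n);
}

/* Sort a[1..n] into non-decreasing order. */
void heapsort(int n, int a[])
{
    int i, temp;
    heapify(a, n);
    for (i = n; i >= 2; i--)
    {
        temp = a[i];                    /* exchange the root with the last element */
        a[i] = a[1];
        a[1] = temp;
        adjust(a, 1, i - 1);            /* re-heapify the remaining elements */
    }
}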
A priority queue can be implemented using a circular array, a linked list, etc. Another
simplified implementation is possible using a heap tree; the heap itself can be
represented using an array. This implementation is therefore free from the complexities
of the circular array and linked list, while retaining the simplicity of an array.
Heap trees allow duplicate data. Elements are stored in the form of a heap tree together
with their priority values, and the heap is formed based on those priority values. The top
priority element that has to be processed first is at the root; so it can be deleted and the
heap can be rebuilt to get the next element to be processed, and so on. As an illustration,
consider the following processes with their priorities:
Process P1 P2 P3 P4 P5 P6 P7 P8 P9 P10
Priority 5 4 3 4 5 5 3 2 1 5
These processes enter the system in the order as listed above at time 0, say. Assume
that a process having higher priority value will be serviced first. The heap tree can be
formed considering the process priority values. The order of servicing the process is
successive deletion of roots from the heap.
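To make the servicing order concrete, here is a minimal C sketch assuming a max heap keyed on the
priority value; the Process type and variable names are illustrative. It builds the heap from the
table above and then deletes the root repeatedly; ties between equal priorities are broken
arbitrarily by the heap, which is acceptable here since the notes do not prescribe a tie-breaking
rule.

#include <stdio.h>

typedef struct { int id; int priority; } Process;

/* Move a[i] down until the sub-tree rooted at i is a max heap on 'priority' (1-based). */
static void adjust(Process a[], int i, int n)
{
    int j = 2 * i;
    Process item = a[i];
    while (j <= n)
    {
        if (j < n && a[j].priority < a[j + 1].priority) j++;   /* larger-priority child */
        if (item.priority >= a[j].priority) break;
        a[j / 2] = a[j];
        j = 2 * j;
    }
    a[j / 2] = item;
}

int main(void)
{
    /* Priorities of P1..P10 from the table above (index 0 unused). */
    int prio[] = {0, 5, 4, 3, 4, 5, 5, 3, 2, 1, 5};
    Process a[11];
    int n = 10, i;

    for (i = 1; i <= n; i++)
    {
        a[i].id = i;
        a[i].priority = prio[i];
    }
    for (i = n / 2; i >= 1; i--)      /* heapify on priority values */
        adjust(a, i, n);

    printf("Service order: ");
    while (n > 0)                     /* successive deletion of roots */
    {
        printf("P%d(%d) ", a[1].id, a[1].priority);
        a[1] = a[n--];
        adjust(a, 1, n);
    }
    printf("\n");
    return 0;
}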
Exercises
1. Write a recursive “C” function to implement binary search and compute its
time complexity.
7. Here is an array which has just been partitioned by the first step of
quicksort: 3, 0, 2, 4, 5, 8, 7, 6, 9. Which of these elements could be the
pivot? (There may be more than one possibility!)
8. Show the result of inserting 10, 12, 1, 14, 6, 5, 8, 15, 3, 9, 7, 4, 11, 13,
and 2, one at a time, into an initially empty binary heap.
1. What is the worst-case time for serial search finding a single item in an [ D ]
array?
A. Constant time C. Logarithmic time
B. Quadratic time D. Linear time
2. What is the worst-case time for binary search finding a single item in an [ C ]
array?
A. Constant time C. Logarithmic time
B. Quadratic time D. Linear time
7. Selection sort and quick sort both fall into the same category of sorting [ B ]
algorithms. What is this category?
A. O(n log n) sorts C. Divide-and-conquer sorts
B. Interchange sorts D. Average time is quadratic
11. Suppose we are sorting an array of eight integers using quick sort, and we [ A ]
have just finished the first partitioning with the array looking like this:
2 5 1 7 9 12 11 10 Which statement is correct?
A. The pivot could be either the 7 or the 9.
B. The pivot is not the 7, but it could be the 9.
C. The pivot could be the 7, but it is not the 9.
D. Neither the 7 nor the 9 is the pivot
12. What is the worst-case time for heap sort to sort an array of n elements? [ C ]
A. O(log n) C. O(n log n)
B. O(n) D. O(n²)
13. Suppose we are sorting an array of eight integers using heap sort, and we [B ]
have just finished one of the reheapifications downward. The array now
looks like this: 6 4 5 1 2 7 8
How many reheapifications downward have been performed so far?
A. 1 C. 2
B. 3 or 4 D. 5 or 6
15. A min heap is the tree structure where smallest element is available at the [B ]
A. leaf C. intermediate parent
B. root D. any where
16. In the quick sort method, a desirable choice for the partitioning element will [C ]
be
A. first element of list C. median of list
B. last element of list D. any element of list
19. Which among the following is fastest sorting technique (for unordered data) [ C ]
A. Heap sort C. Quick Sort
B. Selection Sort D. Bubble sort
20. In which searching technique are elements eliminated by half in each pass? [ C ]
A. Linear search C. Binary search
B. both D. none
24. The Max heap constructed from the list of numbers 30,10,80,60,15,55 is [ C ]
A. 60,80,55,30,10,15 C. 80,55,60,15,10,30
B. 80,60,55,30,10,15 D. none