Binary Search


Explain Time and Space Complexity Analysis for Linear Data Structures.

Space Complexity:
The space complexity of an algorithm is the amount of memory the algorithm needs over its life cycle.

The space needed by an algorithm is the sum of the following two components:

A fixed part: the space required to store certain data and variables (simple variables, constants, program size, etc.) that does not depend on the size of the problem.

A variable part: the space required by variables whose size depends entirely on the size of the problem, for example recursion stack space and dynamically allocated memory.

Consider the example of computing the sum of the first N numbers.

Here the input value 'n' is an integer and takes 4 bytes of space; the same holds for 'i' and 'sum'. Thus a total of 12 bytes of fixed space is used.
Dropping the constants and keeping the highest-order term, Space complexity = O(1).
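This analysis can be sketched in Java (the method name and print statement are illustrative assumptions):

```java
// Space analysis sketch for summing the first n numbers.
// Only three int variables are used (n, i, sum), so the memory needed
// stays fixed at 12 bytes regardless of n: space complexity O(1).
public class SumFirstN {
    static int sumFirstN(int n) {         // n: 4 bytes
        int sum = 0;                      // sum: 4 bytes
        for (int i = 1; i <= n; i++)      // i: 4 bytes
            sum += i;
        return sum;                       // fixed space = 12 bytes -> O(1)
    }

    public static void main(String[] args) {
        System.out.println(sumFirstN(10)); // prints 55
    }
}
```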
Consider another example: adding the values in an array.
Here the fixed variables are 'sum' and 'i', both of integer type.
There is also temporary (extra) space used by the algorithm while 'return' executes. This temporary space is called auxiliary (secondary) space and is counted as fixed space.
Thus the fixed part is 3 variables × 4 bytes each = 12 bytes.
The array holds N integers, so it takes 4 × N bytes.
Therefore, total space = 4N + 12.
Dropping the constants and keeping the highest-order term, Space complexity = O(N).
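A Java sketch of this array-based version (the helper name is an assumption):

```java
// Space analysis sketch: storing n values in an array before summing.
// The array itself takes 4 * n bytes, dominating the fixed variables,
// so the space complexity is O(n).
public class SumArray {
    static int sumArray(int n) {
        int[] arr = new int[n];          // variable part: 4 * n bytes
        for (int i = 0; i < n; i++)      // fixed part: n, i, sum
            arr[i] = i + 1;
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += arr[i];
        return sum;                      // total = 4n + constant -> O(n)
    }

    public static void main(String[] args) {
        System.out.println(sumArray(10)); // prints 55
    }
}
```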

Time Complexity:
Time complexity measures how many operations an algorithm performs as a function of the input size. It lets us analyze how the algorithm's performance scales as the input grows. Big O notation, O(), is the notation most frequently used to express time complexity; it gives an upper bound on how fast an algorithm's execution time grows.
Best, Worst, and Average Case Complexity:
In analyzing algorithms, we consider three kinds of time complexity:
Best-case complexity: the minimum time an algorithm takes when given the most favorable input. It represents the algorithm operating at peak efficiency under ideal conditions.
Worst-case complexity: the maximum time an algorithm can take over all inputs of a given size. It represents the scenario where the algorithm encounters the most unfavorable input.
Average-case complexity: the typical running time averaged over all possible inputs. It provides a more realistic evaluation of an algorithm's performance.

Time Complexity of Different Data Structures:


Here are the time complexities associated with common data structures:
Arrays:
Access: O(1)
Search: O(n)
Insertion (at the end): O(1)
Insertion (at the beginning or middle): O(n)
Deletion (from the end): O(1)
Deletion (from the beginning or middle): O(n)

Linked Lists:
Access: O(n)
Search: O(n)
Insertion (at the beginning): O(1)
Insertion (at the end, with a tail pointer): O(1) 
Insertion (at the end, without a tail pointer): O(n) 
Insertion (in the middle): O(n)
Deletion (from the beginning): O(1)
Deletion (from the end): O(n) — even with a tail pointer, the node before the tail must be found by traversal
Deletion (from the middle): O(n)
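The head-versus-tail difference above can be illustrated with a minimal singly linked list sketch (the class and method names here are assumptions for illustration, not from the text):

```java
// Sketch of a singly linked list showing why insertion at the head is
// O(1) while insertion at the tail (without a tail pointer) is O(n).
public class SinglyList {
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    Node head;

    // O(1): only the head reference changes, no traversal needed.
    void insertAtHead(int value) {
        Node node = new Node(value);
        node.next = head;
        head = node;
    }

    // O(n): must walk the whole list to reach the last node.
    void insertAtTail(int value) {
        Node node = new Node(value);
        if (head == null) { head = node; return; }
        Node cur = head;
        while (cur.next != null)
            cur = cur.next;
        cur.next = node;
    }
}
```

The traversal loop in insertAtTail is exactly what a tail pointer would eliminate, turning that insertion into O(1).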

Doubly Linked List:


Accessing an element by index: O(n)
Searching for an element: O(n)
Insertion (at the beginning): O(1)
Insertion (at the end, with a tail pointer): O(1) 
Insertion (at the end, without a tail pointer): O(n) 
Insertion (in the middle): O(n)
Deletion (from the beginning): O(1)
Deletion (from the end, with a tail pointer): O(1) 
Deletion (from the end, without a tail pointer): O(n) 
Deletion (from the middle): O(n)

Stacks:
Push: O(1)
Pop: O(1)
Peek: O(1)

Queues:
Enqueue: O(1)
Dequeue: O(1)
Peek: O(1)

Hash Tables:
Search: O(1) - on average, assuming a good hash function and minimal
collisions
Insertion: O(1) - on average, assuming a good hash function and minimal
collisions
Deletion: O(1) - on average, assuming a good hash function and minimal
collisions
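These average-case bounds can be exercised with Java's built-in HashMap (the keys and values here are illustrative):

```java
import java.util.HashMap;

// Demonstration of average-case O(1) hash table operations using
// java.util.HashMap: insertion, search, and deletion.
public class HashTableDemo {
    static boolean demo() {
        HashMap<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);          // insertion: O(1) on average
        ages.put("bob", 25);
        int a = ages.get("alice");      // search: O(1) on average
        ages.remove("bob");             // deletion: O(1) on average
        return a == 30 && !ages.containsKey("bob");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```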

Binary Search Trees (BSTs):


Search: O(log n) - on average for balanced BST, O(n) worst case for
unbalanced BST
Insertion: O(log n) - on average for balanced BST, O(n) worst case for
unbalanced BST
Deletion: O(log n) - on average for balanced BST, O(n) worst case for
unbalanced BST

Binary Search

For sorted arrays, binary search is more efficient than linear search.
The process starts from the middle of the input array:
 If the target equals the element in the middle, return its index.
 If the target is larger than the element in the middle, search the right half.
 If the target is smaller, search the left half.
In the following binarySearch() method, the two index variables first and
last indicate the search boundary in each round.
int binarySearch(int arr[], int target)
{
    int first = 0, last = arr.length - 1;

    while (first <= last)
    {
        // first + (last - first) / 2 avoids int overflow for large indexes
        int mid = first + (last - first) / 2;
        if (target == arr[mid])
            return mid;
        if (target > arr[mid])
            first = mid + 1;
        else
            last = mid - 1;
    }
    return -1;
}

arr: {3, 9, 10, 27, 38, 43, 82}

target: 10
first: 0, last: 6, mid: 3, arr[mid]: 27 -- go left
first: 0, last: 2, mid: 1, arr[mid]: 9 -- go right
first: 2, last: 2, mid: 2, arr[mid]: 10 -- found

target: 40
first: 0, last: 6, mid: 3, arr[mid]: 27 -- go right
first: 4, last: 6, mid: 5, arr[mid]: 43 -- go left
first: 4, last: 4, mid: 4, arr[mid]: 38 -- go right
first: 5, last: 4 -- not found

Binary search divides the array in the middle at each round of the loop.
Suppose the array has length n and the loop runs for t rounds; then
n * (1/2)^t = 1, since each round halves the remaining length. Thus t =
log₂(n). Each round's loop body takes constant time. Therefore, binary
search runs in logarithmic time, O(log n).
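The logarithmic bound can be checked empirically. The sketch below adds a round counter (an illustrative addition, not part of the original method) to the iterative search; for an unsuccessful search in an array of length 1024, the loop runs ⌊log₂ 1024⌋ + 1 = 11 rounds:

```java
// Instrumented binary search that counts loop rounds, illustrating the
// O(log n) bound: for n = 1024 an unsuccessful search takes 11 rounds.
public class BinarySearchRounds {
    static int countRounds(int[] arr, int target) {
        int first = 0, last = arr.length - 1, rounds = 0;
        while (first <= last) {
            rounds++;
            int mid = first + (last - first) / 2;
            if (target == arr[mid]) return rounds;
            if (target > arr[mid]) first = mid + 1;
            else last = mid - 1;
        }
        return rounds;
    }

    public static void main(String[] args) {
        int[] arr = new int[1024];
        for (int i = 0; i < arr.length; i++) arr[i] = i;
        System.out.println(countRounds(arr, 5000)); // absent target: 11 rounds
    }
}
```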

The following code implements binary search using recursion. To call
the method, we must provide the boundary indexes, for example:

binarySearch(arr, 0, arr.length - 1, target);

int binarySearch(int arr[], int first, int last, int target)
{
    if (first > last)
        return -1;

    int mid = first + (last - first) / 2;

    if (target == arr[mid])
        return mid;
    if (target > arr[mid])
        return binarySearch(arr, mid + 1, last, target);
    // target < arr[mid]
    return binarySearch(arr, first, mid - 1, target);
}

Algorithm for Bubble Sort:

We assume list is an array of n elements. We further assume that the swap function
swaps the values of the given array elements. Note that a single pass only bubbles
the largest remaining element to the end, so the pass must be repeated n-1 times:
begin BubbleSort(list)
   for pass = 1 to n-1
      for i = 0 to n-1-pass
         if list[i] > list[i+1]
            swap(list[i], list[i+1])
         end if
      end for
   end for
   return list
end BubbleSort
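The pseudocode above can be fleshed out in Java; the early-exit flag is a common refinement (an assumption here, not part of the pseudocode) that yields the O(n) best case on an already-sorted array:

```java
import java.util.Arrays;

// Java sketch of bubble sort: repeated passes of adjacent-element swaps,
// with an early-exit flag when a full pass performs no swaps.
public class BubbleSort {
    static void bubbleSort(int[] list) {
        int n = list.length;
        for (int pass = 0; pass < n - 1; pass++) {
            boolean swapped = false;
            // After each pass, the largest remaining element is in place,
            // so the inner loop can stop one position earlier each time.
            for (int i = 0; i < n - 1 - pass; i++) {
                if (list[i] > list[i + 1]) {
                    int tmp = list[i];        // swap adjacent elements
                    list[i] = list[i + 1];
                    list[i + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break; // already sorted: best case O(n)
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8};
        bubbleSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 4, 5, 8]
    }
}
```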

Time Complexity:
Best Case Complexity - occurs when no sorting is required, i.e. the
array is already sorted. The best-case time complexity of bubble sort is O(n).
Average Case Complexity - occurs when the array elements are in jumbled
order, neither properly ascending nor properly descending. The average-case
time complexity of bubble sort is O(n²).
Worst Case Complexity - occurs when the array elements must be sorted in
reverse order; that is, you have to sort the array in ascending order,
but its elements are in descending order. The worst-case time complexity of bubble sort is
O(n²).
