DS-Notes-Unit-1
UNIT I
Introduction to Linear Data Structures: Definition and importance of linear data structures,
Abstract data types (ADTs) and their implementation, Overview of time and space complexity analysis
for linear data structures. Searching Techniques: Linear & Binary Search, Sorting Techniques: Bubble
sort, Selection sort, Insertion Sort.
Non-Primitive Data Structures
1. Non-Primitive Data Structures are those data structures derived from Primitive Data
Structures.
2. These data structures can't be manipulated or operated directly by machine-level instructions.
3. These data structures focus on forming a set of data elements that is either homogeneous (same
data type) or heterogeneous (different data types).
4. Based on the structure and arrangement of data, we can divide these data structures into two
sub-categories:
a. Linear Data Structures
b. Non-Linear Data Structures
Major Operations we can perform on Data Structures:
The major or the common operations that can be performed on the data structures are:
o Searching: We can search for any element in a data structure.
o Sorting: We can sort the elements of a data structure either in an ascending or
descending order.
o Insertion: We can insert a new element into a data structure.
o Updation: We can update an element, i.e., replace an element with
another element.
o Deletion: We can perform the delete operation to remove an element
from the data structure.
Advantages of Data structures:
The following are the advantages of a data structure:
o Efficiency: If the choice of a data structure for implementing a particular ADT is
proper, it makes the program very efficient in terms of time and space.
o Reusability: Data structures provide reusability, meaning that multiple client
programs can use the same data structure.
o Abstraction: The data structure specified by an ADT also provides the level of
abstraction. The client cannot see the internal working of the data structure, so it
does not have to worry about the implementation part. The client can only see the
interface.
1) Linear data structure: A data structure in which the elements are arranged
sequentially (linearly), with each element attached to its previous and next adjacent
elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.
Based on memory allocation, the Linear Data Structures are further classified into two types:
Static data structure: Static data structure has a fixed memory size. It is easy to access
the elements in a static data structure.
An example of this data structure is an array.
Dynamic data structure: In a dynamic data structure, the size is not fixed. It can be
updated during runtime. It is efficient with respect to the memory (space)
complexity of the code.
Examples of this data structure are queue, stack, linked list etc.
2) Non-linear data structure: A data structure in which the elements are not placed sequentially
or linearly is called a non-linear data structure.
In a non-linear data structure, we can’t traverse all the elements in a single run.
Examples of non-linear data structures are trees and graphs.
1. Array:
o An array is a collection of items stored at contiguous memory locations.
o Elements in an array are of the same type.
o Accessing elements is efficient because you can calculate their position by adding an offset
to the base memory location.
o Example: If you want to access the fourth element, you use index notation with a zero-based index (e.g., arr[3]).
o Arrays have a fixed size.
o Types of arrays include One-Dimensional Arrays (1D Array), Two-Dimensional Arrays
(2D Array), and Multi-Dimensional Arrays.
a. We can store a list of data elements belonging to the same data type.
b. An array acts as auxiliary storage for other data structures.
c. An array can also store the data elements of a binary tree with a fixed node count.
d. An array also acts as storage for matrices.
2. Queue:
o A queue is a linear structure that follows the First In First Out (FIFO) order.
o Elements are inserted at the rear and removed from the front.
o Think of it like a queue of consumers waiting for a resource—the first consumer served is the
one who arrived first.
o Queues can be implemented using linked lists or arrays.
o Types of queues include circular queues, priority queues, and doubly-ended queues.
3. Stack:
o A stack is a linear data structure where elements are inserted and deleted only from one
end (the top).
o It follows the Last In First Out (LIFO) principle.
o The last element inserted is the first to be removed.
o Stacks can be implemented using linked lists or arrays.
o Common operations are push (insertion) and pop (deletion).
4. Linked List:
o A linked list is a collection of nodes, where each node contains data and a reference to the
next node.
o Linked lists allow dynamic memory allocation.
o Types of linked lists include singly linked lists, doubly linked lists, and circular linked lists.
Remember, the choice between these data structures depends on the specific use case and the desired
behaviour. For example:
Use a queue when you need elements in the order they were added.
Use a stack when you want to reverse the order of elements.
Use an array when you need a fixed-size collection of elements.
Use a linked list when you require dynamic memory allocation and flexibility.
Task Management: Linear data structures like queues are applied in task management
systems. Queues allow tasks to be added in a First-In-First-Out (FIFO) manner, ensuring that
the oldest task is processed first.
Non-Linear Data Structures:
Non-Linear Data Structures are data structures in which the data elements are not arranged in
sequential order. Here, the insertion and removal of data are not feasible in a linear manner. There
exists a hierarchical relationship between the individual data items.
Types of Non-Linear Data Structures
The following is the list of Non-Linear Data Structures that we generally use:
1. Trees
A Tree is a Non-Linear Data Structure and a hierarchy containing a collection of nodes such that
each node of the tree stores a value and a list of references to other nodes (the "children").
The Tree data structure is a specialized method to arrange and collect data in the computer so that it
can be utilized more effectively. It contains a central (root) node, structural nodes, and sub-nodes connected via
edges. We can also say that the tree data structure consists of a root, branches, and leaves connected to one another.
Graphs:
A graph is a non-linear data structure that consists of vertices (also
known as nodes) and edges. Here are the key components:
1. Vertices (V): These are the fundamental units of the graph. Each vertex
represents an entity or an object. Vertices can be labelled or unlabelled.
2. Edges (E): Edges connect pairs of vertices. They represent relationships or
connections between nodes. Edges can be directed (with a specific direction)
or undirected (bidirectional).
Abstract data types (ADTs) and their implementation
Abstract Data Type (ADT):
Before learning about Abstract Data Types, recall the built-in data types which
we have already learned. Data types such as int, float, double, long, etc. are considered built-in data
types, and we can perform basic operations with them such as addition, subtraction, division,
multiplication, etc.
But there may be a situation when we need operations for a user-defined data type,
and these operations have to be defined. Such operations can be defined only as and when we require them.
We can create data structures along with their operations; these data structures are not built-in and
are known as Abstract Data Types (ADT).
Abstract Data type (ADT) Definition: ADT is a type (or class) for objects
whose behaviour is defined by a set of values and a set of operations.
Or
Set of values and set of operations for a specific behaviour is called the ADT
(Abstract Data Type)
An ADT only specifies what operations are to be performed, not how these operations will be
implemented. It does not specify how data will be organized in memory or what algorithms will be
used for implementing the operations.
ADTs can be implemented using various data structures. The commonly studied ADTs are the List ADT, Stack ADT, and Queue ADT.
Array Implementation:
o Use an array to store elements in contiguous memory locations.
o Define an array of a fixed maximum length.
o Allocate storage for all elements before runtime.
o Index elements within the array to simulate the list.
o This approach is straightforward but has limitations on dynamic resizing.
Linked List Implementation:
o Use singly linked lists or doubly linked lists.
o Each node contains the data and a pointer to the next (and possibly previous) node.
o Allows dynamic resizing and efficient insertions/deletions.
o Requires additional memory for pointers.
Hash Table Implementation (for Sets):
o Create an array of “buckets” (lists).
o To add an element, hash it to find the appropriate bucket and insert it.
o To remove an element, locate the bucket and remove it if present.
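The bucket scheme above can be sketched in C; the bucket count, hash function, and names below are illustrative assumptions, not a prescribed implementation:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define BUCKETS 7   /* illustrative bucket count */

struct SetNode
{
    int key;
    struct SetNode* next;
};

/* Each bucket is a linked list of the keys that hash to that index */
struct SetNode* table[BUCKETS];

int hash(int key)
{
    return ((key % BUCKETS) + BUCKETS) % BUCKETS;  /* non-negative bucket index */
}

bool contains(int key)
{
    /* Locate the bucket, then scan its list for the key */
    for (struct SetNode* p = table[hash(key)]; p != NULL; p = p->next)
        if (p->key == key)
            return true;
    return false;
}

void add(int key)
{
    if (contains(key))
        return;                        /* a set keeps only one copy of each element */
    struct SetNode* node = (struct SetNode*)malloc(sizeof(struct SetNode));
    node->key = key;
    node->next = table[hash(key)];     /* insert at the head of the bucket */
    table[hash(key)] = node;
}
```

With a good hash function, each bucket stays short, so add and contains take roughly constant time on average.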
1. List ADT:
A list typically stores data elements in a linear order.
The head structure contains information like the count, pointers, and an address
of a compare function (used for data comparison).
Each data node points to a data structure and has a self-referential pointer to the
next node in the list.
A list ADT is a type of data structure that encapsulates a collection of data elements with operations
to access, insert, remove, and replace them. A list ADT can be implemented using different underlying
structures, such as arrays, linked lists, or dynamic arrays.
#include <stdio.h>
#include <stdlib.h>
struct Array
{
    int* A;
    int size;
    int length;
};
int main( )
{
    struct Array arr;
    printf("Enter Size of an Array: ");
    scanf("%d", &arr.size);
    arr.A = (int*)malloc(sizeof(int) * arr.size);
    arr.length = 0;   /* the array starts empty */
    free(arr.A);
    return 0;
}
Operations on Arrays:
We can perform different operations on arrays, such as:
1) Display( ),
2) Append(n), and
3) Insert(index, n)
Example:
First, we create an array of size 10 and length 0, as there is no element in the array yet.
Then, we fill this array with some elements, i.e., 8, 3, 7, 12, 6, 9 (the length becomes 6).
To insert a new element at index 4, each element from index 4 onwards is shifted one position to the
right; for example, 6 is shifted from index 4 to index 5.
Finally, once all elements have shifted, we insert the new element at index 4.
Code:
C program to implement the List ADT by using Arrays:
#include <stdio.h>
#include <stdlib.h>
struct Array
{
    int* A;
    int size;
    int length;
};
void Display(struct Array arr)
{
    printf("\nElements are\n");
    for (int i = 0; i < arr.length; i++)
        printf("%d ", arr.A[i]);
}
void Append(struct Array* arr, int x)
{
    if (arr->length < arr->size)
        arr->A[arr->length++] = x;
}
void Insert(struct Array* arr, int index, int x)
{
    if (index >= 0 && index <= arr->length && arr->length < arr->size)
    {
        for (int i = arr->length; i > index; i--)
        {
            arr->A[i] = arr->A[i - 1];
        }
        arr->A[index] = x;
        arr->length++;
    }
}
int main( )
{
    struct Array arr;
    printf("Enter Size of an Array: ");
    scanf("%d", &arr.size);
    arr.A = (int*)malloc(sizeof(int) * arr.size);
    arr.length = 0;
    printf("Enter Number of Elements: ");
    scanf("%d", &arr.length);
    printf("Enter All Elements: \n");
    for (int i = 0; i < arr.length; i++)
    {
        scanf("%d", &arr.A[i]);
    }
    printf("Elements from the array:");
    for (int i = 0; i < arr.length; i++)
    {
        printf("\n%d", arr.A[i]);
    }
    Append(&arr, 10);
    Insert(&arr, 0, 12);
    Display(arr);
    free(arr.A);
    return 0;
}
2. Stack ADT
A stack is a linear data structure. It allows data to be accessed from the top only. It
simply has two operations: push (to insert data to the top of the stack) and pop (to
remove data from the stack). (Used to remove data from the stack top).
Some of the most essential operations defined in Stack ADT are listed below.
o push( ): When we insert an element in a stack then the operation is known
as a push. If the stack is full then the overflow condition occurs.
o pop( ): When we delete an element from the stack, the operation is known
as a pop. If the stack is empty means that no element exists in the stack,
this state is known as an underflow state.
o isEmpty( ): It determines whether the stack is empty or not.
o isFull( ): It determines whether the stack is full or not.
o peek( ): It returns the element at the top of the stack without removing it.
o count( ): It returns the total number of elements available in a stack.
o change( ): It changes the element at the given position.
o display( ): It prints all the elements available in the stack.
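The core operations listed above can be sketched with a fixed-size array in C; the capacity and names are illustrative:

```c
#include <stdio.h>
#include <stdbool.h>

#define MAX 100   /* illustrative fixed capacity */

int stack[MAX];
int top = -1;     /* -1 means the stack is empty */

bool isEmpty(void) { return top == -1; }
bool isFull(void)  { return top == MAX - 1; }

void push(int x)
{
    if (isFull()) { printf("Overflow\n"); return; }   /* overflow condition */
    stack[++top] = x;              /* insert at the top */
}

int pop(void)
{
    if (isEmpty()) { printf("Underflow\n"); return -1; }  /* underflow condition */
    return stack[top--];           /* remove from the top: Last In, First Out */
}

int peek(void)
{
    return stack[top];             /* read the top element without removing it */
}
```

A linked-list implementation replaces the array and top index with a head pointer, removing the fixed-capacity limit.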
3. Queue ADT
A queue is a linear data structure that allows data to be accessed from both ends. There
are two main operations in the queue: Insertion: this operation inserts data to the back
of the queue. And Deletion: this operation is used to remove data from the front of the
queue.
1. A queue can be defined as an ordered list which enables insert operations to be
performed at one end called REAR and delete operations to be performed at another
end called FRONT.
2.Queue is referred to be as First In First Out list.
3.For example, people waiting in line for a rail ticket form a queue.
Some of the most essential operations defined in Queue ADT are listed below.
o Enqueue: The Enqueue operation is used to insert the element at the rear end of
the queue. It returns void.
o Dequeue: It performs the deletion from the front-end of the queue. It also returns
the element which has been removed from the front-end. It returns an integer
value.
o Peek: This is the third operation that returns the element, which is pointed by the
front pointer in the queue but does not delete it.
o Queue overflow (isfull): It shows the overflow condition when the queue is
completely full.
o Queue underflow (isempty): It shows the underflow condition when the Queue
is empty, i.e., no elements are in the Queue.
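The Enqueue and Dequeue operations above can be sketched with a circular array in C; the capacity and names are illustrative:

```c
#include <stdio.h>
#include <stdbool.h>

#define MAX 100   /* illustrative fixed capacity */

int queue[MAX];
int front = 0, rear = -1, count = 0;

bool isEmpty(void) { return count == 0; }
bool isFull(void)  { return count == MAX; }

void enqueue(int x)
{
    if (isFull()) { printf("Overflow\n"); return; }   /* queue overflow */
    rear = (rear + 1) % MAX;   /* circular increment of the rear index */
    queue[rear] = x;           /* insert at the rear end */
    count++;
}

int dequeue(void)
{
    if (isEmpty()) { printf("Underflow\n"); return -1; }  /* queue underflow */
    int x = queue[front];      /* FIFO: remove from the front end */
    front = (front + 1) % MAX;
    count--;
    return x;
}
```

The modulo arithmetic lets the front and rear indices wrap around, so freed slots at the start of the array are reused.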
Advantages of ADT in Data Structures
The advantages of ADT in Data Structures are:
Provides abstraction, which simplifies the complexity of the data structure and
allows users to focus on the functionality.
Enhances program modularity by allowing the data structure implementation to be
separate from the rest of the program.
Enables code reusability as the same data structure can be used in multiple
programs with the same interface.
Promotes the concept of data hiding by encapsulating data and operations into a
single unit, which enhances security and control over the data.
Supports polymorphism, which allows the same interface to be used with different
underlying data structures, providing flexibility and adaptability to changing
requirements.
Dataflow of an Algorithm:
o Problem: A problem can be a real-world problem or any instance of a real-world
problem. To solve the problem we need to create a program, i.e., a set of
instructions. The set of instructions/steps to solve the problem is known as an
algorithm.
o Algorithm: An algorithm will be designed for a problem which is a step by step
procedure.
o Input: After designing an algorithm, the required and the desired inputs are
provided to the algorithm.
o Processing unit: The input will be given to the processing unit, and the
processing unit will produce the desired output.
o Output: The output is the outcome or the result of the program.
2) Space Complexity: Space complexity of an algorithm is the amount of memory it needs to run
to complete the task.
The following factors will be considered for space complexity:
No. of variables, data structures, allocations, function calls.
Generally, the space occupied by a program is as follows:
A fixed amount of memory space allotted to data types.
Space occupied by the code and the variables of the program.
The space increases or decreases depending upon whether the program uses iterative or
recursive procedures.
Asymptotic Analysis:
Asymptotic analysis of an algorithm refers to defining the mathematical
foundation/framing of its run-time performance.
Using asymptotic analysis, we can conclude the best case, average case,
and worst case scenario of an algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the
algorithm, it is concluded to work in a constant time. Other than the "input" all
other factors are considered constant.
Asymptotic analysis refers to computing the running time of any
operation in mathematical units of computation.
For example, the running time of one operation may be computed as f(n) = n,
and for another operation it may be computed as g(n) = n².
This means the running time of the first operation will increase linearly with
the increase in n, while the running time of the second operation will
increase quadratically as n increases. Similarly, the running times of
both operations will be nearly the same if n is significantly small.
The time complexity of an algorithm is classified into three cases:
Best Case − Minimum time required for program execution.
Average Case − Average time required for program execution.
Worst Case − Maximum time required for program execution.
Asymptotic Notations:
Different types of asymptotic notations are used to represent the
complexity of an algorithm. Following asymptotic notations are used to calculate
the running time complexity of an algorithm.
1. O − Big Oh Notation
2. Ω − Omega Notation
3. θ − Theta Notation
Execution time of an algorithm depends on the instruction set,
processor speed, disk I/O speed, etc. Hence, we estimate the efficiency of an
algorithm asymptotically.
1. Big Oh, O: Asymptotic Upper Bound
A function f(n) can be represented as the order of g(n), written O(g(n)), if there exist a
positive integer n0 and a positive constant k such that
f(n) ⩽ k.g(n) for all n > n0
Hence, function g(n) is an upper bound for function f(n), since k.g(n) eventually dominates f(n).
Example
Let us consider a given function, f(n) = 4.n³ + 10.n² + 5.n + 1. Considering g(n) = n³,
f(n) ⩽ 5.g(n) for all values of n ⩾ 11.
Hence, the complexity of f(n) can be represented as O(g(n)), i.e. O(n³).
2. Omega, Ω: Asymptotic Lower Bound
The notation Ω(n) is the formal way to express the lower bound of an algorithm's
running time. It measures the best-case time complexity, i.e., the least amount of time
an algorithm can possibly take to complete.
We say that f(n) = Ω(g(n)) when there exists a constant c > 0 such that f(n) ⩾ c.g(n) for all
sufficiently large values of n. Here n is a positive integer. It means function g is a lower
bound for function f; after a certain value of n, f will never go below c.g.
Example
Let us consider a given function, f(n) = 4.n³ + 10.n² + 5.n + 1. Considering g(n) = n³,
f(n) ⩾ 4.g(n) for all values of n > 0.
Hence, the complexity of f(n) can be represented as Ω(g(n)), i.e. Ω(n³).
3. Theta, θ: Asymptotic Tight Bound
The notation θ(n) is the formal way to express both the lower bound and the upper
bound of an algorithm's running time. Someone may confuse the theta notation as the
average case time complexity; while theta notation could be almost accurately used to
describe the average case, other notations could be used as well.
We say that f(n)=θ(g(n))
When there exist constants c1 and c2 that c1.g(n)⩽f(n)⩽c2.g(n) for all sufficiently
large value of n. Here n is a positive integer.
This means function g is a tight bound for function f.
Example
Let us consider a given function, f(n) = 4.n³ + 10.n² + 5.n + 1. Considering g(n) = n³,
4.g(n) ⩽ f(n) ⩽ 5.g(n) for all sufficiently large values of n.
Hence, the complexity of f(n) can be represented as θ(g(n)), i.e. θ(n³).
Searching: Searching is the process of finding some particular element in a list. If the element is
present in the list, the search is called successful and returns the location of that
element; otherwise (if the element is not found), the search is called unsuccessful.
There are two popular search methods:
o Linear Search
o Binary Search.
1) Linear Search:
Linear search is also called the sequential search algorithm. It is the simplest searching algorithm.
Linear search is the process of searching for the required element from one end of the list to the
other in sequential order, i.e., checking the list one element at a time. It checks every element of
the list until the required element is found.
Algorithm for Linear Search in C
Consider the key variable as the element we are searching for.
1. Compare each element in the list with the key, one by one.
2. If an element equals the key, return its position.
3. If we reach the end of the list without finding an element equal to the key, return some value
(e.g., -1) to represent that the element is not found.
To understand the working of the linear search algorithm, let's take an unsorted array:
70, 40, 30, 11, 57, 41, 25, 14, 52.
The value of the key K, i.e., 41, is not matched with the first element of the array. So, move to the next
element, and follow the same process until the respective element is found.
Now, the element to be searched is found. So the algorithm will return the position of the matched element.
Advantages of Linear Search:
Linear search can be used irrespective of whether the array is sorted or not. It can
be used on arrays of any data type.
Does not require any additional memory.
It is a well-suited algorithm for small datasets.
Drawbacks of Linear Search:
Linear search has a time complexity of O(N), which makes it slow for large
datasets; it is not suitable for large arrays.
Output:
Enter the element to be search:41
The elements of the array are - 70, 40, 30, 11, 57, 41, 25, 14, 52
Element to be searched is -41
Element is present at 6 position of array
Binary Search:
Binary search is a search technique that works efficiently on sorted lists. To search for an element
in a list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach. In this approach the list is divided
into two halves, and the item is compared with the middle element of the list. If a match is found,
the location of the middle element is returned. If not, the search continues in one of the two
halves (the left part or the right part), depending on the result of the comparison.
NOTE: Binary search can be implemented on sorted array elements. If the list elements are not
arranged in a sorted manner, we need to sort them first.
In this algorithm,
1) Divide the search space into two halves by finding the middle index “mid”.
2) Compare the middle element of the search space with the key.
3) If the key is found / matched with the middle element, the process is terminated.
4) If the key is not found at middle element, choose which half will be used as the next search
space.
If the key is smaller than the middle element, then the left side is used for next search.
If the key is larger than the middle element, then the right side is used for next search.
5) This process is continued until the key is found or the search space is exhausted.
To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to
understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method (with looping statement)
o Recursive method (with recursion concept)
The recursive method of binary search follows the divide and conquer approach.
Let the elements of the array be: 11, 14, 25, 30, 40, 41, 52, 57, 70, and let the element to be
searched be 40.
So, in the given array -
beg = 0
end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid index of the array, and a[mid] = 40 matches the key.
Now, the element to search is found. So algorithm will return the index of the element matched.
Binary search complexity
1. Time Complexity
Case Time Complexity
Best Case O(1)
Average Case O(log n)
Worst Case O(log n)
o Best Case Complexity - In Binary search, the best case occurs when the element to
search is found in the first comparison, i.e., when the first middle element itself is the
element to be searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is
O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we have
to keep reducing the search space till it has only one element. The worst-case time
complexity of Binary search is O(log n).
2. Space Complexity
o The space complexity of the iterative binary search is O(1); the recursive version uses
O(log n) auxiliary space for the call stack.
C program to implement Binary Search using recursion:
#include <stdio.h>
int binarySearch(int a[ ], int beg, int end, int val)
{
    int mid;
    if (end >= beg)
    {
        mid = (beg + end) / 2;
        /* if the item to be searched is present at the middle */
        if (a[mid] == val)
        {
            return mid + 1;
        }
        /* if the item to be searched is greater than the middle, then it can only be in the right subarray */
        else if (a[mid] < val)
        {
            return binarySearch(a, mid + 1, end, val);
        }
        /* if the item to be searched is smaller than the middle, then it can only be in the left subarray */
        else
        {
            return binarySearch(a, beg, mid - 1, val);
        }
    }
    return -1;
}
int main( )
{
    int a[ ] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
    int val = 40; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = binarySearch(a, 0, n - 1, val); // store result
    printf("The elements of the array are : ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is : %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}
Output:
The elements of the array are : 11, 14, 25, 30, 40, 41, 52, 57, 70
Element to be searched is : 40
Element is present at 5 position of array
Sorting Techniques:
A sorting algorithm/technique is used to rearrange a given array or list of elements into either
ascending or descending order.
Here, ascending or descending order is determined by a comparison operator on the elements.
We have different types of sorting techniques; they are as follows:
1) Bubble sort
2) Selection sort
3) Insertion sort
4) Merge sort
5) Quick sort
6) Heap sort
and a few more.
(In your syllabus, only the Bubble sort, Selection sort and Insertion sort topics are covered.)
Bubble Sort (also called Exchange Sort):
Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm:
each pair of adjacent elements is compared, and the elements are swapped if they are not in order.
This algorithm is not suitable for large data sets, as its average and worst case complexity are
O(n²), where n is the number of items.
After multiple iterations, the elements are arranged in ascending order.
// C program for implementation of Bubble sort
#include <stdio.h>
// Swap function
void swap(int* arr, int i, int j)
{
    int temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
}
// A function to implement bubble sort
void bubbleSort(int arr[ ], int n)
{
    int i, j;
    for (i = 0; i < n - 1; i++)
    {
        // Last i elements are already in place
        for (j = 0; j < n - i - 1; j++)
        {
            if (arr[j] > arr[j + 1])
                swap(arr, j, j + 1);
        }
    }
}
// A function to print the array
void printArray(int arr[ ], int n)
{
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
// Driver code
int main( )
{
    int arr[ ] = { 5, 1, 4, 2, 8 };
    int N = sizeof(arr) / sizeof(arr[0]);
    printf("Array elements before sorting: ");
    printArray(arr, N);
    bubbleSort(arr, N);
    printf("Sorted array: ");
    printArray(arr, N);
    return 0;
}
OUTPUT:
Array elements before sorting: 5 1 4 2 8
Sorted array: 1 2 4 5 8
Bubble sort complexity
Now, let's see the time complexity of bubble sort in the best case, average case, and
worst case. We will also see the space complexity of bubble sort.
1. Time Complexity
Case Time Complexity
Best Case O(n)
Average Case O(n²)
Worst Case O(n²)
2. Space Complexity
Space Complexity O(1)
Stable YES
o The space complexity of bubble sort is O(1), because only a single extra
variable is required for swapping.
Advantages of Bubble Sort:
Bubble sort is easy to understand and implement.
It does not require any additional memory space.
It is a stable sorting algorithm, meaning that elements with the same key value
maintain their relative order in the sorted output.
Disadvantages of Bubble Sort:
Bubble sort has a time complexity of O(n²), which makes it very slow for large
data sets. Bubble sort is a comparison-based sorting algorithm, which means that it
requires a comparison operator to determine the relative order of elements in the
input data set. This can limit the efficiency of the algorithm in certain cases.
Selection Sort:
Selection sort is a simple sorting algorithm. The selection Sort algorithm is used to arrange a
list of elements in a particular order either in Ascending or in Descending.
In selection sort, the first element in the list is selected and compared repeatedly with
all the remaining elements in the list. If any element is smaller than the selected element (for
ascending order), the two are swapped. Then the element at the second position in the list is selected
and compared with all the remaining elements in the list. If any element is smaller than the selected
element, the two are swapped. This procedure is repeated till the entire list is sorted.
This sorting algorithm is a comparison-based algorithm in which the list is divided into two
parts, the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is
empty and the unsorted part is the entire list.
This process starts from the leftmost element and continues, moving the unsorted array
boundary one element to the right.
Selection sort complexity
Now, let's see the time complexity of selection sort in the best case, average case, and
worst case. We will also see the space complexity of selection sort.
1. Time Complexity
Case Time Complexity
Best Case O(n²)
Average Case O(n²)
Worst Case O(n²)
o Best Case Complexity - Selection sort performs the same comparisons even on an
already sorted array, so the best-case time complexity of selection sort is O(n²).
o Average Case Complexity - It occurs when the array elements are in jumbled
order, that is, not properly ascending and not properly descending. The average
case time complexity of selection sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be
sorted in reverse order. That means, suppose you have to sort the array elements
in ascending order, but its elements are in descending order. The worst-case time
complexity of selection sort is O(n²).
2. Space Complexity
Space Complexity O(1)
Stable NO (the straightforward swapping implementation is not stable)
o The space complexity of selection sort is O(1), because only a single extra
variable is required for swapping.
Example program for Selection Sort:
#include <stdio.h>
void selection(int arr[ ], int n)
{
    int i, j, small, temp;
    for (i = 0; i < n - 1; i++) // One by one move boundary of unsorted subarray
    {
        small = i; // index of the minimum element in the unsorted part
        for (j = i + 1; j < n; j++)
        {
            if (arr[j] < arr[small])
                small = j;
        }
        // Swap the minimum element with the first element of the unsorted part
        temp = arr[small];
        arr[small] = arr[i];
        arr[i] = temp;
    }
}
int main( )
{
    int arr[ ] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(arr) / sizeof(arr[0]);
    selection(arr, n);
    printf("Sorted array: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}
Insertion Sort:
Sorting is the process of arranging a list of elements in a particular order (ascending or descending).
The insertion sort algorithm arranges a list of elements in a particular order. In insertion sort,
every iteration moves one element from the unsorted portion to the sorted portion, until all the
elements in the list are sorted.
Step by Step Process:
The insertion sort algorithm is performed using following steps...
Step 1: Assume that the first element in the list is in the sorted portion of the list and all the
remaining elements are in the unsorted portion.
Step 2: Take the first element from the unsorted portion and insert it into the sorted portion
at the correct position.
Step 3: Repeat the above process until all the elements from the unsorted portion are moved
into the sorted portion.
#include <stdio.h>
int main( )
{
    int n, i, j, temp;
    int arr[64];
    printf("Enter number of elements:\n");
    scanf("%d", &n);
    printf("Enter %d integers:\n", n);
    for (i = 0; i < n; i++)
    {
        scanf("%d", &arr[i]);
    }
    for (i = 1; i <= n - 1; i++)
    {
        j = i;
        while (j > 0 && arr[j-1] > arr[j])
        {
            temp = arr[j];
            arr[j] = arr[j-1];
            arr[j-1] = temp;
            j--;
        }
    }
    printf("Sorted list in ascending order:\n");
    for (i = 0; i <= n - 1; i++)
    {
        printf("%d\n", arr[i]);
    }
    return 0;
}
OUTPUT:
Enter number of elements:
5
Enter 5 integers:
7
59
12
1
64
Sorted list in ascending order:
1
7
12
59
64
Insertion sort complexity
Now, let's see the time complexity of insertion sort in best case, average case, and in
worst case. We will also see the space complexity of insertion sort.
1. Time Complexity
Case Time Complexity
Best Case O(n)
Average Case O(n²)
Worst Case O(n²)
o Best Case Complexity - It occurs when no sorting is required, i.e., the array
is already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled
order, that is, not properly ascending and not properly descending. The average
case time complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be
sorted in reverse order. That means, suppose you have to sort the array elements
in ascending order, but its elements are in descending order. The worst-case time
complexity of insertion sort is O(n²).
2. Space Complexity
Space Complexity O(1)
Stable YES