ADSA
A data structure is a particular way of organising data in a computer so that it can be used effectively. The
idea is to reduce the space and time complexities of different tasks.
An efficient data structure also uses minimum memory space and execution time to process the structure. A
data structure is not only used for organising the data. It is also used for processing, retrieving, and storing
data. There are different basic and advanced types of data structures that are used in almost every program or
software system that has been developed.
Need for Data Structures:
The structure of the data and the design of the algorithm go hand in hand. The data presentation must
be easy to understand so that both the developer and the user can implement the required operations
efficiently.
Data structures provide an easy way of organising, retrieving, managing, and storing data.
Here is a list of the main needs for data structures:
Data structure modification is easy.
It requires less time.
It saves storage/memory space.
Data representation is easy.
It provides easy access to large databases.
Classification/Types of Data Structures:
1. Linear Data Structure
2. Non-Linear Data Structure.
Linear Data Structure:
Elements are arranged in one dimension, also known as the linear dimension.
Example: lists, stack, queue, etc.
Non-Linear Data Structure
Elements are arranged in one-many, many-one and many-many dimensions.
Example: tree, graph, table, etc.
Most Popular Data Structures:
1. Array:
An array is a collection of data items stored at contiguous memory locations. The idea is to store multiple
items of the same type together. This makes it easier to calculate the position of each element by simply
adding an offset to a base value, i.e., the memory location of the first element of the array (generally denoted
by the name of the array).
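For instance, the following small Java sketch (the array name and values are illustrative) shows how contiguous storage gives constant-time access by index:

public class ArrayDemo {
    public static void main(String[] args) {
        int[] marks = {70, 85, 90, 65}; // items of the same type, stored contiguously
        // Element i lives at (base address) + i * (element size),
        // so marks[i] is computed in constant time.
        for (int i = 0; i < marks.length; i++) {
            System.out.println("marks[" + i + "] = " + marks[i]);
        }
    }
}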
2. Linked Lists:
Like arrays, Linked List is a linear data structure. Unlike arrays, linked list elements are not stored at a
contiguous location; the elements are linked using pointers.
3. Stack:
Stack is a linear data structure which follows a particular order in which the operations are performed. The
order may be LIFO(Last In First Out) or FILO(First In Last Out). In stack, all insertion and deletion are
permitted at only one end of the list.
Stack Operations:
push(): When this operation is performed, an element is inserted into the stack.
pop(): When this operation is performed, an element is removed from the top of the stack and is
returned.
top(): This operation will return the last inserted element that is at the top without removing it.
size(): This operation will return the size of the stack i.e. the total number of elements present in the
stack.
isEmpty(): This operation indicates whether the stack is empty or not.
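As a minimal sketch of these operations (the class name, capacity, and values are illustrative, not from the source), an array-based stack in Java could look like this:

public class ArrayStack {
    private int[] items = new int[100]; // fixed capacity, for simplicity
    private int count = 0;              // number of elements on the stack

    public void push(int x) {
        if (count == items.length) throw new RuntimeException("stack overflow");
        items[count++] = x;             // insert at the top (the only open end)
    }

    public int pop() {
        if (isEmpty()) throw new RuntimeException("stack underflow");
        return items[--count];          // remove and return the top element
    }

    public int top() {
        if (isEmpty()) throw new RuntimeException("stack is empty");
        return items[count - 1];        // return the top element without removing it
    }

    public int size() { return count; }

    public boolean isEmpty() { return count == 0; }

    public static void main(String[] args) {
        ArrayStack s = new ArrayStack();
        s.push(10);
        s.push(20);
        s.push(30);
        System.out.println(s.pop());  // 30: last in, first out
        System.out.println(s.top());  // 20
        System.out.println(s.size()); // 2
    }
}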
4. Queue:
Like Stack, Queue is a linear structure which follows a particular order in which the operations are
performed. The order is First In First Out (FIFO). In the queue, items are inserted at one end and deleted from
the other end. A good example of a queue is any queue of consumers for a resource where the consumer
that came first is served first. The difference between stacks and queues is in removing: in a stack, we
remove the item most recently added; in a queue, we remove the item least recently added.
Queue Operations:
Enqueue(): Adds (or stores) an element at the end of the queue.
Dequeue(): Removes an element from the front of the queue.
Peek() or front(): Acquires the data element available at the front node of the queue without deleting it.
rear(): This operation returns the element at the rear end without removing it.
isFull(): Validates if the queue is full.
isNull(): Checks if the queue is empty.
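Similarly, here is a minimal array-based circular queue sketch in Java (the class name and capacity are illustrative) implementing the operations listed above:

public class CircularQueue {
    private int[] items;
    private int front = 0, count = 0;

    public CircularQueue(int capacity) { items = new int[capacity]; }

    public boolean isFull()  { return count == items.length; }
    public boolean isEmpty() { return count == 0; }

    public void enqueue(int x) {
        if (isFull()) throw new RuntimeException("queue is full");
        items[(front + count) % items.length] = x; // insert at the rear
        count++;
    }

    public int dequeue() {
        if (isEmpty()) throw new RuntimeException("queue is empty");
        int x = items[front];                      // remove from the front
        front = (front + 1) % items.length;
        count--;
        return x;
    }

    public int peek() {                            // front element, not removed
        if (isEmpty()) throw new RuntimeException("queue is empty");
        return items[front];
    }

    public int rear() {                            // rear element, not removed
        if (isEmpty()) throw new RuntimeException("queue is empty");
        return items[(front + count - 1) % items.length];
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(5);
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        System.out.println(q.dequeue()); // 1: first in, first out
        System.out.println(q.peek());    // 2
        System.out.println(q.rear());    // 3
    }
}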
5. Binary Tree:
Unlike Arrays, Linked Lists, Stack and queues, which are linear data structures, trees are hierarchical data
structures. A binary tree is a tree data structure in which each node has at most two children, which are
referred to as the left child and the right child. It is implemented mainly using Links.
A Binary Tree is represented by a pointer to the topmost node in the tree. If the tree is empty, then the value
of root is NULL. A Binary Tree node contains the following parts.
1. Data
2. Pointer to the left child
3. Pointer to the right child
Linked List:
o A linked list can be defined as a collection of objects called nodes that are randomly stored in the
memory.
o A node contains two fields, i.e. the data stored at that particular address and a pointer which contains
the address of the next node in the memory.
o The last node of the list contains a pointer to null.
o The list is not required to be contiguously present in the memory. A node can reside anywhere in the
memory and be linked to the others to make a list. This achieves optimized utilization of space.
o The list size is limited only by the available memory and doesn't need to be declared in advance.
o An empty node cannot be present in the linked list.
o We can store values of primitive types or objects in the singly linked list.
Until now, we were using the array data structure to organize a group of elements stored individually in the
memory. However, arrays have several advantages and disadvantages which must be known in order to
decide which data structure will be used throughout the program.
An array has the following limitations:
1. The size of the array must be known in advance before using it in the program.
2. Increasing the size of the array is a time-consuming process. It is almost impossible to expand the size of the
array at run time.
3. All the elements in the array need to be stored contiguously in memory. Inserting an element in
the array requires shifting all the elements that come after it.
A linked list is a data structure that can overcome all the limitations of an array. Using a linked list is useful
because:
1. It allocates memory dynamically. All the nodes of a linked list are stored non-contiguously in the
memory and linked together with the help of pointers.
2. Sizing is no longer a problem since we do not need to define the size at the time of declaration. The list
grows as per the program's demand and is limited only by the available memory space.
A singly linked list can be defined as a collection of an ordered set of elements. The number of elements may
vary according to the needs of the program. A node in a singly linked list consists of two parts: a data part and a
link part. The data part of the node stores the actual information that is to be represented by the node, while the
link part stores the address of its immediate successor.
A one-way chain, or singly linked list, can be traversed only in one direction. In other words, each node contains
only a next pointer; therefore, we cannot traverse the list in the reverse direction.
Consider an example where the marks obtained by a student in three subjects are stored in a linked list. The
arrows represent the links between the nodes. The data part of every node contains the marks obtained by the
student in a different subject. The last node in the list is identified by the null pointer present in the address part
of the last node. We can have as many elements as we require in the data part of the list.
Operations on Singly Linked List
There are various operations which can be performed on singly linked list. A list of all such operations is given
below.
Node Creation
struct node
{
    int data;
    struct node *next;
};
struct node *head, *ptr;
ptr = (struct node *) malloc(sizeof(struct node)); /* allocate space for a whole node, not just a pointer */
Insertion
The insertion into a singly linked list can be performed at different positions. Based on the position of the new
node being inserted, the insertion is categorized into the following categories.
The Deletion of a node from a singly linked list can be performed at different positions. Based on the position of
the node being deleted, the operation is categorized into the following categories.
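To make these cases concrete, here is a minimal Java sketch (class and method names are illustrative) of insertion at the beginning and at the end of a singly linked list:

public class SinglyLinkedList {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;

    // Insert at the beginning: the new node becomes the head.
    void insertAtBeginning(int data) {
        Node n = new Node(data);
        n.next = head;
        head = n;
    }

    // Insert at the end: walk to the last node, then link the new node.
    void insertAtEnd(int data) {
        Node n = new Node(data);
        if (head == null) { head = n; return; }
        Node p = head;
        while (p.next != null) p = p.next;
        p.next = n;
    }

    void printList() {
        for (Node p = head; p != null; p = p.next)
            System.out.print(p.data + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        SinglyLinkedList list = new SinglyLinkedList();
        list.insertAtEnd(29);
        list.insertAtEnd(25);
        list.insertAtBeginning(12);
        list.printList(); // 12 29 25
    }
}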
A doubly linked list is a more complex type of linked list in which a node contains a pointer to the previous as
well as the next node in the sequence. Therefore, in a doubly linked list, a node consists of three parts: the node
data, a pointer to the next node in the sequence (the next pointer), and a pointer to the previous node (the
previous pointer).
Consider, for example, a doubly linked list containing three nodes having the numbers 1 to 3 in their data parts.
A node can be declared as follows:
struct node
{
    struct node *prev;
    int data;
    struct node *next;
};
The prev part of the first node and the next part of the last node will always contain null, indicating the end of
the list in each direction.
In a singly linked list, we could traverse only in one direction, because each node contains the address of the
next node and has no record of its previous nodes. A doubly linked list overcomes this limitation. Since each
node of the list contains the address of its previous node, we can find all the details about the previous node as
well by using the previous address stored inside the prev part of each node.
As a memory representation, suppose the first element of the list, 13, is stored at address 1. The head pointer
points to the starting address 1. Since this is the first element added to the list, the prev part of this node contains
null. The next node of the list resides at address 4; therefore, the first node contains 4 in its next pointer.
We can traverse the list in this way until we find a node containing null (or -1) in its next part.
Node Creation
struct node
{
    struct node *prev;
    int data;
    struct node *next;
};
struct node *head;
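Here is a minimal Java sketch of the same idea (building the three-node list 1, 2, 3 by hand and traversing it in both directions; all names are illustrative):

public class DoublyLinkedList {
    static class Node {
        Node prev;
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    public static void main(String[] args) {
        // Build the three-node list 1 <-> 2 <-> 3 by hand.
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b; b.prev = a;
        b.next = c; c.prev = b;

        // Forward traversal: follow next pointers until null.
        for (Node p = a; p != null; p = p.next) System.out.print(p.data + " ");
        System.out.println();

        // Backward traversal: follow prev pointers from the last node until null.
        for (Node p = c; p != null; p = p.prev) System.out.print(p.data + " ");
        System.out.println();
    }
}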
We traverse a circular singly linked list until we reach the same node where we started. The circular singly
linked list has no beginning and no end, and there is no null value present in the next part of any of the nodes.
Circular linked lists are mostly used for task maintenance in operating systems. There are many examples where
circular linked lists are used in computer science, including browser surfing, where a record of the pages visited
in the past by the user is maintained in the form of a circular linked list and can be accessed again by clicking
the previous button.
As a memory representation, consider a circular linked list containing the marks of a student in 4 subjects. The
start or head of the list points to the element at index 1, which contains 13 marks in its data part and 4 in its
next part, meaning that it is linked with the node stored at the 4th index of the list.
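A minimal Java sketch of such a circular singly linked list (the marks other than 13 are illustrative), traversed until we return to the starting node:

public class CircularListDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    public static void main(String[] args) {
        // Marks in 4 subjects, linked in a circle.
        Node n1 = new Node(13), n2 = new Node(15), n3 = new Node(12), n4 = new Node(17);
        n1.next = n2; n2.next = n3; n3.next = n4;
        n4.next = n1; // the last node points back to the first: no null anywhere

        // Traverse until we come back to the node where we started.
        Node p = n1;
        do {
            System.out.print(p.data + " ");
            p = p.next;
        } while (p != n1);
        System.out.println();
    }
}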
A circular doubly linked list is a more complex type of data structure in which a node contains pointers to its
previous node as well as the next node. A circular doubly linked list doesn't contain NULL in any of its nodes.
The last node of the list contains the address of the first node of the list, and the first node of the list also
contains the address of the last node in its previous pointer.
Consider the way in which memory is allocated for a circular doubly linked list. Suppose the variable head
contains the address of the first element of the list, i.e. 1; hence the starting node of the list, containing data A,
is stored at address 1. Since each node of the list is supposed to have three parts, the starting node of the list
contains the address of the last node, i.e. 8, and of the next node, i.e. 4. The last node of the list, which is stored
at address 8 and contains data 6, holds the address of the first node of the list, i.e. 1. In a circular doubly linked
list, the last node is identified by the address of the first node stored in its next part; therefore, the node which
contains the address of the first node is actually the last node of the list.
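A minimal Java sketch of a circular doubly linked list (values are illustrative), where no node contains null in either direction:

public class CircularDoublyDemo {
    static class Node {
        Node prev;
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    public static void main(String[] args) {
        // Three nodes linked in a circle in both directions.
        Node a = new Node(10), b = new Node(20), c = new Node(30);
        a.next = b; b.next = c; c.next = a;   // last node points to the first
        a.prev = c; b.prev = a; c.prev = b;   // first node points back to the last

        // Forward traversal stops when we reach the starting node again.
        Node p = a;
        do {
            System.out.print(p.data + " ");
            p = p.next;
        } while (p != a);
        System.out.println();
    }
}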
The following Java program implements linear search, which scans the array sequentially until the value is
found:

class LinearSearch {
    static int linearSearch(int a[], int n, int val) {
        // Go through the array sequentially
        for (int i = 0; i < n; i++) {
            if (a[i] == val)
                return i + 1; // return the 1-based position
        }
        return -1; // not found
    }

    public static void main(String args[]) {
        int a[] = {55, 29, 10, 40, 57, 41, 20, 24, 45}; // given array
        int val = 10;                                   // value to be searched
        int n = a.length;                               // size of the array
        int res = linearSearch(a, n, val);              // store the result
        System.out.print("The elements of the array are -");
        for (int i = 0; i < n; i++)
            System.out.print(" " + a[i]);
        System.out.println();
        System.out.println("Element to be searched is - " + val);
        if (res == -1)
            System.out.println("Element is not present in the array");
        else
            System.out.println("Element is present at position " + res + " of the array");
    }
}
The next program implements binary search, which works on a sorted array by repeatedly halving the search
interval:

class BinarySearch {
    static int binarySearch(int a[], int beg, int end, int val) {
        if (end >= beg) {
            int mid = (beg + end) / 2;
            if (a[mid] == val)
                return mid + 1;                            // 1-based position
            else if (a[mid] < val)
                return binarySearch(a, mid + 1, end, val); // search the right half
            else
                return binarySearch(a, beg, mid - 1, val); // search the left half
        }
        return -1; // not found
    }

    public static void main(String args[]) {
        int a[] = {8, 10, 22, 27, 37, 44, 49, 55, 69}; // must be sorted
        int val = 37;
        int n = a.length;
        int res = binarySearch(a, 0, n - 1, val);
        System.out.print("The elements of the array are: ");
        for (int i = 0; i < n; i++)
            System.out.print(a[i] + " ");
        System.out.println();
        System.out.println("Element to be searched is: " + val);
        if (res == -1)
            System.out.println("Element is not present in the array");
        else
            System.out.println("Element is present at position " + res + " of the array");
    }
}
Bubble sort works by repeatedly swapping adjacent elements that are in the wrong order until the whole array is
sorted. It is called bubble sort because, with each complete pass, the largest remaining element "bubbles up" to
its correct position at the end of the array.
Although it is simple to use, it is primarily used as an educational tool because the performance of bubble sort is
poor in the real world. It is not suitable for large data sets. The average and worst-case complexity of bubble
sort is O(n²), where n is the number of items.
Algorithm
In the algorithm given below, suppose arr is an array of n elements. The assumed swap function in the
algorithm will swap the values of given array elements.
begin BubbleSort(arr)
    for all array elements
        if arr[i] > arr[i+1]
            swap(arr[i], arr[i+1])
        end if
    end for
    return arr
end BubbleSort
To understand the working of the bubble sort algorithm, let's take a short unsorted array, since we know the
complexity of bubble sort is O(n²).
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The
best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. The average case time complexity of bubble sort
is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order.
That means suppose you have to sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of bubble sort is O(n²).
2. Space Complexity
o The space complexity of bubble sort is O(1), because only a single extra variable is required for
swapping.
o The optimized version of bubble sort uses two extra variables, which is still constant space, i.e. O(1).
public class Bubble {
    static void print(int a[]) {
        int n = a.length;
        for (int i = 0; i < n; i++)
            System.out.print(a[i] + " ");
    }

    static void bubbleSort(int a[]) {
        int n = a.length;
        int i, j, temp;
        for (i = 0; i < n - 1; i++) {
            // After each pass, the largest remaining element bubbles to the end.
            for (j = 0; j < n - i - 1; j++) {
                if (a[j] > a[j + 1]) { // swap adjacent elements in the wrong order
                    temp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int a[] = {35, 10, 31, 11, 26};
        System.out.println("Before sorting array elements are - ");
        print(a);
        bubbleSort(a);
        System.out.println();
        System.out.println("After sorting array elements are - ");
        print(a);
    }
}
In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and
moved to its appropriate position in the array. It is one of the simplest sorting algorithms, and it is an in-place
comparison sort. In this algorithm, the array is divided into two parts: the sorted part and the unsorted part.
Initially, the sorted part is empty and the unsorted part is the given array. The sorted part is placed at the left,
while the unsorted part is placed at the right.
In selection sort, the smallest element is selected from the unsorted array and placed at the first position. After
that, the second smallest element is selected and placed in the second position. The process continues until the
array is entirely sorted.
The average and worst-case complexity of selection sort is O(n²), where n is the number of items. Due to this, it
is not suitable for large data sets.
Algorithm
SELECTION SORT(arr, n)

Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
Step 2: CALL SMALLEST(arr, i, n, pos)
Step 3: SWAP arr[i] with arr[pos]
[END OF LOOP]
Step 4: EXIT

SMALLEST(arr, i, n, pos)
Step 1: [INITIALIZE] SET SMALL = arr[i]
Step 2: [INITIALIZE] SET pos = i
Step 3: Repeat for j = i+1 to n-1
            if (SMALL > arr[j])
                SET SMALL = arr[j]
                SET pos = j
        [END OF if]
[END OF LOOP]
Step 4: RETURN pos
Working of Selection sort Algorithm
Now, let's see the working of the Selection sort Algorithm.
To understand the working of the Selection sort algorithm, let's take an unsorted array. It will be easier to
understand the Selection sort via an example.
Let the elements of the array be:
12 29 25 8 32 17 40
Now, for the first position in the sorted array, the entire array is to be scanned sequentially.
At present, 12 is stored at the first position. After searching the entire array, it is found that 8 is the smallest
value.
12 29 25 8 32 17 40
So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted array.
8 29 25 12 32 17 40
For the second position, where 29 is currently stored, we again sequentially scan the rest of the items of the
unsorted array. After scanning, we find that 12 is the second lowest element in the array, which should appear
at the second position.
8 29 25 12 32 17 40
Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the sorted array. So,
after two iterations, the two smallest values are placed at the beginning in sorted order.
8 12 25 29 32 17 40
The same process is applied to the rest of the array elements. After the remaining passes, the array is
completely sorted:
8 12 17 25 29 32 40
Now, let's see the time complexity of selection sort in best case, average case, and in worst case. We will also
see the space complexity of the selection sort.
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The
best-case time complexity of selection sort is O(n²).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. The average case time complexity of selection sort
is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order.
That means suppose you have to sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of selection sort is O(n²).
2. Space Complexity
The space complexity of selection sort is O(1), as it sorts the array in place using only a constant amount of
extra memory.
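For completeness, here is a minimal Java implementation of selection sort along the lines of the pseudocode above (the class name is illustrative; the array is the one from the worked example):

public class SelectionSort {
    static void selectionSort(int[] a) {
        int n = a.length;
        for (int i = 0; i < n - 1; i++) {
            // Find the index of the smallest element in the unsorted part a[i..n-1].
            int pos = i;
            for (int j = i + 1; j < n; j++) {
                if (a[j] < a[pos])
                    pos = j;
            }
            // Swap it into position i, growing the sorted part by one element.
            int temp = a[i];
            a[i] = a[pos];
            a[pos] = temp;
        }
    }

    public static void main(String[] args) {
        int[] a = {12, 29, 25, 8, 32, 17, 40};
        selectionSort(a);
        for (int x : a) System.out.print(x + " "); // 8 12 17 25 29 32 40
        System.out.println();
    }
}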
Quicksort is a widely used sorting algorithm that makes O(n log n) comparisons in the average case when
sorting an array of n elements. It is a fast and highly efficient sorting algorithm that follows the divide-and-
conquer approach. Divide and conquer is a technique of breaking down an algorithm into subproblems, solving
the subproblems, and combining the results back together to solve the original problem.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two sub-arrays such
that each element in the left sub-array is less than or equal to the pivot element and each element in the right
sub-array is larger than the pivot element.
Choosing the pivot
Picking a good pivot is necessary for a fast implementation of quicksort. However, it can be difficult to
determine a good pivot in advance. Some of the ways of choosing a pivot are as follows -
o The pivot can be random, i.e. select a random pivot from the given array.
o The pivot can be either the rightmost element or the leftmost element of the given array.
o Select the median as the pivot element.
Algorithm
QUICKSORT (array A, start, end)
{
    if (start < end)
    {
        p = partition(A, start, end)
        QUICKSORT (A, start, p - 1)
        QUICKSORT (A, p + 1, end)
    }
}
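Here is a minimal Java sketch of this scheme, using the Lomuto partition with the rightmost element as the pivot (a common textbook variant; the worked example below instead moves pointers around the leftmost pivot):

public class QuickSort {
    // Rearranges a[start..end] so the pivot lands at its final sorted position
    // and returns that position.
    static int partition(int[] a, int start, int end) {
        int pivot = a[end];          // rightmost element as the pivot
        int i = start - 1;           // boundary of the "<= pivot" region
        for (int j = start; j < end; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[end]; a[end] = t; // place the pivot
        return i + 1;
    }

    static void quickSort(int[] a, int start, int end) {
        if (start < end) {
            int p = partition(a, start, end);
            quickSort(a, start, p - 1);  // sort the left sub-array
            quickSort(a, p + 1, end);    // sort the right sub-array
        }
    }

    public static void main(String[] args) {
        int[] a = {24, 9, 29, 14, 19, 27};
        quickSort(a, 0, a.length - 1);
        for (int x : a) System.out.print(x + " "); // 9 14 19 24 27 29
        System.out.println();
    }
}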
Working of Quick Sort Algorithm
Now, let's see the working of the Quicksort Algorithm.
To understand the working of quick sort, let's take an unsorted array. It will make the concept more clear and
understandable.
Let the elements of the array be:
24 9 29 14 19 27
In the given array, we take the leftmost element as the pivot. So, in this case, a[left] = 24, a[right] = 27 and
a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
24(left/pivot) 9 29 14 19 27(right)
Now, a[pivot] < a[right], so the algorithm moves the right pointer one position towards the left:
24(left/pivot) 9 29 14 19(right) 27
Now, a[right] = 19. As a[pivot] > a[right], swap a[pivot] and a[right]; the pivot is now at the right:
19(left) 9 29 14 24(pivot/right) 27
Since the pivot is at the right, the algorithm starts from the left and moves to the right. As a[pivot] > a[left] = 19,
the left pointer moves one position to the right:
19 9(left) 29 14 24(pivot/right) 27
Now, a[left] = 9. As a[pivot] > a[left], the left pointer again moves one position to the right:
19 9 29(left) 14 24(pivot/right) 27
Now, a[left] = 29. As a[pivot] < a[left], swap a[pivot] and a[left]; the pivot is now at the left:
19 9 24(pivot/left) 14 29(right) 27
Since the pivot is at the left, the algorithm starts from the right and moves to the left. As a[pivot] < a[right] = 29,
the right pointer moves one position to the left:
19 9 24(pivot/left) 14(right) 29 27
Now, a[right] = 14. As a[pivot] > a[right], swap a[pivot] and a[right]; the pivot is now at the right:
19 9 14(left) 24(pivot/right) 29 27
The left pointer now moves to the right until pivot, left and right all point to the same element:
19 9 14 24(left/right/pivot) 29 27
Since pivot, left and right are pointing to the same element, this represents the termination of the procedure.
Element 24, the pivot element, is now placed at its exact position: the elements to the right of 24 are greater
than it, and the elements to the left of 24 are smaller than it.
Now, in a similar manner, the quicksort algorithm is separately applied to the left sub-array (19 9 14) and the
right sub-array (29 27). After sorting is done, the array will be:
9 14 19 24 27 29
Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case. We will also see
the space complexity of quicksort.
1. Time Complexity
o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the middle
element or near to the middle element. The best-case time complexity of quicksort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. The average case time complexity of quicksort
is O(n log n).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is either the greatest or
smallest element. Suppose the pivot element is always the last element of the array; the worst case
would then occur when the given array is already sorted in ascending or descending order. The worst-case
time complexity of quicksort is O(n²).
2. Space Complexity
o Quicksort sorts in place, but its recursion stack uses O(log n) space on average and O(n) in the worst
case.
o Quicksort is not a stable sorting algorithm.
Binary Tree
Binary tree means that a node can have at most two children. The name 'binary' itself suggests 'two';
therefore, each node can have either 0, 1 or 2 children.
o Consider, as an example, a tree in which node 1 is the root with two pointers, i.e., a left and a right
pointer pointing to the left and right child respectively. Node 2 contains both children (left and right
nodes); therefore, it also has two pointers (left and right). Nodes 3, 5 and 6 are leaf nodes, so all these
nodes contain a NULL pointer in both the left and right parts.
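A binary tree node with the three parts listed above can be written in Java as follows (a minimal sketch; names are illustrative):

public class BinaryTree {
    static class Node {
        int data;     // the data part
        Node left;    // pointer to the left child
        Node right;   // pointer to the right child
        Node(int data) { this.data = data; }
    }

    public static void main(String[] args) {
        // Build the top of a tree by hand: node 1 as the root with children 2 and 3.
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        System.out.println("Root: " + root.data
                + ", left child: " + root.left.data
                + ", right child: " + root.right.data);
    }
}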
The term 'tree traversal' means traversing or visiting each node of a tree. There is only a single way to traverse
a linear data structure such as a linked list, queue, or stack, whereas there are multiple ways to traverse a tree,
listed as follows -
o Preorder traversal
o Inorder traversal
o Postorder traversal
So, in this article, we will discuss the above-listed techniques of traversing a tree. Now, let's start discussing the
ways of tree traversal.
Preorder traversal
This technique follows the 'root left right' policy. It means that the root node is visited first, then the left subtree
is traversed recursively, and finally the right subtree is traversed recursively. As the root node is traversed before
(or 'pre') the left and right subtrees, it is called preorder traversal.
So, in a preorder traversal, each node is visited before both of its subtrees.
The applications of preorder traversal include -
o It is used to create a copy of the tree.
o It can also be used to get the prefix expression of an expression tree.
Algorithm
Until all nodes of the tree are visited:

Step 1 - Visit the root node.
Step 2 - Traverse the left subtree recursively.
Step 3 - Traverse the right subtree recursively.
Postorder traversal
This technique follows the 'left right root' policy. It means that the left subtree of the root node is traversed first,
then the right subtree is traversed recursively, and finally the root node is visited. As the root node is traversed
after (or 'post') the left and right subtrees, it is called postorder traversal.
So, in a postorder traversal, each node is visited after both of its subtrees.
The applications of postorder traversal include -
o It is used to delete the tree.
o It can also be used to get the postfix expression of an expression tree.
Algorithm
Until all nodes of the tree are visited:
Step 1 - Traverse the left subtree recursively.
Step 2 - Traverse the right subtree recursively.
Step 3 - Visit the root node.
Inorder traversal
This technique follows the 'left root right' policy. It means that the left subtree is traversed first, then the root
node is visited, and finally the right subtree is traversed. As the root node is traversed between the left and right
subtrees, it is named inorder traversal.
So, in the inorder traversal, each node is visited in between of its subtrees.
The applications of inorder traversal include -
o It is used to get the nodes of a binary search tree (BST) in increasing order.
o It can also be used to get the infix expression of an expression tree.
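All three traversals can be implemented recursively in Java as follows (a minimal, self-contained sketch; the tree built in main is illustrative):

public class TreeTraversal {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    static void preorder(Node n) {   // root, left, right
        if (n == null) return;
        System.out.print(n.data + " ");
        preorder(n.left);
        preorder(n.right);
    }

    static void inorder(Node n) {    // left, root, right
        if (n == null) return;
        inorder(n.left);
        System.out.print(n.data + " ");
        inorder(n.right);
    }

    static void postorder(Node n) {  // left, right, root
        if (n == null) return;
        postorder(n.left);
        postorder(n.right);
        System.out.print(n.data + " ");
    }

    public static void main(String[] args) {
        //        1
        //       / \
        //      2   3
        //     / \
        //    4   5
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        root.left.left = new Node(4);
        root.left.right = new Node(5);

        preorder(root);  System.out.println(); // 1 2 4 5 3
        inorder(root);   System.out.println(); // 4 2 5 1 3
        postorder(root); System.out.println(); // 4 5 2 3 1
    }
}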