ADSA: Advanced Data Structures (M.Tech)

What is Data Structure?

A data structure is a particular way of organising data in a computer so that it can be used effectively. The
idea is to reduce the space and time complexity of different tasks.
An efficient data structure uses minimal memory space and execution time to process the structure. A
data structure is not only used for organising data; it is also used for processing, retrieving, and storing
data. Basic and advanced types of data structures are used in almost every program or
software system that has been developed.
Need Of Data Structure:
The structure of the data and the design of the algorithm are closely related to each other. The data
representation must be easy to understand so that both the developer and the user can implement the
operations efficiently.
Data structures provide an easy way of organising, retrieving, managing, and storing data.
Here is a list of reasons why data structures are needed.
 Data structure modification is easy.
 They require less time.
 They save storage memory space.
 Data representation is easy.
 They allow easy access to large databases.
Classification/Types of Data Structures:
1. Linear Data Structure
2. Non-Linear Data Structure.
Linear Data Structure:
 Elements are arranged in one dimension, also known as the linear dimension.
 Example: lists, stack, queue, etc.
Non-Linear Data Structure
 Elements are arranged in one-many, many-one and many-many dimensions.
 Example: tree, graph, table, etc.
Most Popular Data Structures:
1. Array:
An array is a collection of data items stored at contiguous memory locations. The idea is to store multiple
items of the same type together. This makes it easier to calculate the position of each element by simply
adding an offset to a base value, i.e., the memory location of the first element of the array (generally denoted
by the name of the array).

2. Linked Lists:
Like arrays, Linked List is a linear data structure. Unlike arrays, linked list elements are not stored at a
contiguous location; the elements are linked using pointers.
3. Stack:
A stack is a linear data structure that follows a particular order in which the operations are performed. The
order may be LIFO (Last In First Out) or FILO (First In Last Out). In a stack, all insertions and deletions are
permitted at only one end of the list.
Stack Operations:
 push(): When this operation is performed, an element is inserted into the stack.
 pop(): When this operation is performed, an element is removed from the top of the stack and is
returned.
 top(): This operation will return the last inserted element that is at the top without removing it.
 size(): This operation will return the size of the stack i.e. the total number of elements present in the
stack.
 isEmpty(): This operation indicates whether the stack is empty or not.
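As a quick illustration, these five operations can be sketched in Java with a simple array-backed stack (the class name, the fixed capacity, and the absence of overflow/underflow checks are simplifying assumptions, not part of the text):

class ArrayStack {
    private int[] items = new int[100];   // fixed capacity, kept small for brevity
    private int count = 0;                // number of elements currently in the stack

    void push(int x)  { items[count++] = x; }       // insert an element at the top
    int pop()         { return items[--count]; }    // remove and return the top element
    int top()         { return items[count - 1]; }  // return the top element without removing it
    int size()        { return count; }             // total number of elements present
    boolean isEmpty() { return count == 0; }        // true when the stack holds no elements
}

In practice, java.util.ArrayDeque offers the same operations with automatic resizing.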
4. Queue:
Like a stack, a queue is a linear structure that follows a particular order in which the operations are
performed. The order is First In First Out (FIFO). In a queue, items are inserted at one end and deleted from
the other end. A good example of a queue is any queue of consumers for a resource where the consumer
that came first is served first. The difference between stacks and queues is in how items are removed: in a stack
we remove the most recently added item; in a queue, we remove the least recently added item.

Queue Operations:
 Enqueue(): Adds (or stores) an element at the end of the queue.
 Dequeue(): Removes an element from the front of the queue.
 Peek() or front(): Acquires the data element available at the front node of the queue without deleting it.
 rear(): This operation returns the element at the rear end without removing it.
 isFull(): Validates if the queue is full.
 isNull(): Checks if the queue is empty.
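These operations can likewise be sketched in Java with a circular array-backed queue (again, the class name and the missing overflow/underflow checks are simplifying assumptions, not part of the text):

class ArrayQueue {
    private int[] items;
    private int front = 0;   // index of the front element
    private int count = 0;   // number of stored elements

    ArrayQueue(int capacity) { items = new int[capacity]; }

    void enqueue(int x) { items[(front + count) % items.length] = x; count++; }  // add at the rear
    int dequeue()       { int x = items[front]; front = (front + 1) % items.length; count--; return x; }
    int peek()          { return items[front]; }                               // front element, not removed
    int rear()          { return items[(front + count - 1) % items.length]; }  // rear element, not removed
    boolean isFull()    { return count == items.length; }
    boolean isNull()    { return count == 0; }
}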
5. Binary Tree:
Unlike arrays, linked lists, stacks and queues, which are linear data structures, trees are hierarchical data
structures.
referred to as the left child and the right child. It is implemented mainly using Links.
A Binary Tree is represented by a pointer to the topmost node in the tree. If the tree is empty, then the value
of root is NULL. A Binary Tree node contains the following parts.
1. Data
2. Pointer to the left child
3. Pointer to the right child
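In Java, where pointers become object references, such a node might be sketched as follows (the class name is illustrative):

class TreeNode {
    int data;         // data held by the node
    TreeNode left;    // reference to the left child
    TreeNode right;   // reference to the right child

    TreeNode(int data) { this.data = data; }   // both children start as null
}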
Linked List:
o A linked list can be defined as a collection of objects called nodes that are stored at arbitrary locations in
memory.
o A node contains two fields, i.e. the data stored at that particular address and a pointer which contains the
address of the next node in memory.
o The last node of the list contains a null pointer.

Uses of Linked List

o The list is not required to be contiguous in memory. A node can reside anywhere in memory and be
linked to the others to make a list. This achieves optimized utilization of space.
o The list size is limited only by the available memory and doesn't need to be declared in advance.
o An empty node cannot be present in a linked list.
o We can store values of primitive types or objects in a singly linked list.

Why use linked list over array?

Until now, we have used the array data structure to organize a group of elements stored individually
in memory. However, arrays have several advantages and disadvantages which must be known in order to
decide which data structure will be used throughout the program.
Arrays have the following limitations:
1. The size of an array must be known in advance, before it is used in the program.
2. Increasing the size of an array is a time-consuming process. It is almost impossible to expand the size of an
array at run time.
3. All the elements of an array need to be stored contiguously in memory. Inserting an element into
the array requires shifting all the elements that follow it.

A linked list is a data structure that can overcome all the limitations of an array. Using a linked list is useful
because:

1. It allocates memory dynamically. The nodes of a linked list are stored non-contiguously in
memory and linked together with the help of pointers.
2. Sizing is no longer a problem since we do not need to define the size at the time of declaration. The list
grows as the program demands, limited only by the available memory space.

Singly linked list or One way chain

A singly linked list can be defined as a collection of an ordered set of elements. The number of elements may vary
according to the needs of the program. A node in a singly linked list consists of two parts: a data part and a link part.
The data part of the node stores the actual information that is to be represented by the node, while the link part
stores the address of its immediate successor.
A one-way chain, or singly linked list, can be traversed in only one direction. In other words, each
node contains only a next pointer, so we cannot traverse the list in the reverse direction.
Consider an example where the marks obtained by a student in three subjects are stored in a linked list, the
arrows representing the links between nodes. The data part of each node
contains the marks obtained by the student in one subject. The last node of the list is identified by the
null pointer in its address part. We can have as many elements as we require in
the data part of the list.
Operations on Singly Linked List

There are various operations which can be performed on a singly linked list. A list of all such operations is given
below.

Node Creation
#include <stdlib.h>

struct node
{
    int data;              /* data stored in this node */
    struct node *next;     /* address of the next node */
};

struct node *head, *ptr;

int main(void)
{
    ptr = (struct node *)malloc(sizeof(struct node));  /* allocate space for one node */
    return 0;
}
Insertion

The insertion into a singly linked list can be performed at different positions. Based on the position of the new
node being inserted, the insertion is categorized into categories such as insertion at the beginning, insertion at
the end, and insertion after a specified node.

Deletion and Traversing

The deletion of a node from a singly linked list can be performed at different positions. Based on the position of
the node being deleted, the operation is categorized into categories such as deletion at the beginning, deletion
at the end, and deletion of a specified node. A minimal sketch of these basic operations is given below.
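The following is a minimal Java sketch of the most common cases - insertion at the beginning, deletion at the beginning, and traversal - assuming the two-field node layout described above (class and method names are illustrative, not from the text):

class SinglyLinkedList {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;   // first node of the list; null when the list is empty

    // Insertion at the beginning: the new node becomes the head.
    void insertAtBeginning(int value) {
        Node n = new Node(value);
        n.next = head;
        head = n;
    }

    // Deletion at the beginning: the head moves to the second node.
    void deleteFromBeginning() {
        if (head != null)
            head = head.next;
    }

    // Traversal: follow the next pointers until the null link of the last node.
    void traverse() {
        for (Node p = head; p != null; p = p.next)
            System.out.print(p.data + " ");
        System.out.println();
    }
}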

Doubly linked list

A doubly linked list is a more complex type of linked list in which a node contains a pointer to the previous as well as
the next node in the sequence. Therefore, in a doubly linked list, a node consists of three parts: the node data, a
pointer to the next node in the sequence (the next pointer), and a pointer to the previous node (the previous pointer).
A sample node in a doubly linked list is shown in the figure.
A doubly linked list containing three nodes having the numbers 1 to 3 in their data parts is shown in the
following image.

In C, the structure of a node in a doubly linked list can be given as:

struct node
{
    struct node *prev;   /* address of the previous node */
    int data;            /* data stored in this node */
    struct node *next;   /* address of the next node */
};

The prev part of the first node and the next part of the last node always contain null, indicating the end in each
direction.

In a singly linked list, we could traverse in only one direction, because each node contains the address of the next
node and has no record of its previous nodes. A doubly linked list overcomes this limitation
of the singly linked list. Because each node of the list contains the address of its previous node, we can
find all the details of the previous node as well by using the previous address stored inside the prev part
of each node.

Memory Representation of a doubly linked list


The memory representation of a doubly linked list is shown in the following image. In general, a doubly linked list
consumes more space for every node and therefore causes more expensive basic operations such as insertion
and deletion. However, we can easily manipulate the elements of the list since the list maintains pointers in both
directions (forward and backward).

In the following image, the first element of the list, 13, is stored at address 1. The head pointer points to
the starting address 1. Since this is the first element added to the list, the prev part of the
node contains null. The next node of the list resides at address 4, so the first node contains 4 in its next
pointer.

We can traverse the list in this way until we find a node containing null (or -1) in its next part.

Operations on doubly linked list

Node Creation

struct node
{
    struct node *prev;
    int data;
    struct node *next;
};
struct node *head;
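A small Java sketch of insertion at the beginning of a doubly linked list, assuming the three-field node layout shown above (names are illustrative), highlights the extra pointer bookkeeping:

class DoublyLinkedList {
    static class Node {
        Node prev;
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;

    // Insert at the beginning: set the next pointer of the new node and
    // the prev pointer of the old head; the prev of the new head stays null.
    void insertAtBeginning(int value) {
        Node n = new Node(value);
        n.next = head;
        if (head != null)
            head.prev = n;
        head = n;
    }
}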

Circular Singly Linked List


In a circular singly linked list, the last node of the list contains a pointer to the first node of the list. We can have a
circular singly linked list as well as a circular doubly linked list.

We traverse a circular singly linked list until we reach the same node where we started. The circular singly linked
list has no beginning and no end; there is no null value present in the next part of any of the nodes. A traversal
sketch is given below.
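A minimal Java sketch of this traversal, assuming a circular list has already been built (names are illustrative), uses a do-while loop so that the starting node is visited exactly once:

class CircularSinglyLinkedList {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;

    // Visit every node, stopping when we arrive back at the starting node;
    // no node contains null in its next part.
    void traverse() {
        if (head == null) return;
        Node p = head;
        do {
            System.out.print(p.data + " ");
            p = p.next;
        } while (p != head);
        System.out.println();
    }
}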

Circular linked lists are mostly used for task maintenance in operating systems. There are many other examples of
circular linked lists in computer science, including browser history, where a record of pages visited
in the past by the user is maintained in the form of a circular linked list and can be accessed again by clicking
the previous button.

Memory Representation of circular linked list:

The following image shows the memory representation of a circular linked list containing the marks of a student in 4
subjects. The image gives a glimpse of how the circular list is stored in memory. The start
or head of the list points to the element at index 1, containing 13 marks in the data part and 4 in
the next part, which means that it is linked with the node stored at index 4 of the list.

Circular Doubly Linked List

A circular doubly linked list is a more complex type of data structure in which a node contains pointers to its
previous node as well as its next node. A circular doubly linked list does not contain null in any of its nodes. The
last node of the list contains the address of the first node of the list, and the first node of the list also contains the
address of the last node in its previous pointer.

A circular doubly linked list is shown in the following figure.


Because a circular doubly linked list contains three parts in its node structure, it demands more
space per node and more expensive basic operations. However, a circular doubly linked list provides easy
manipulation of the pointers, and traversal can proceed in either direction.

Memory Management of Circular Doubly linked list

The following figure shows the way in which memory is allocated for a circular doubly linked list. The
variable head contains the address of the first element of the list, i.e. 1; hence the starting node of the list,
containing data A, is stored at address 1. Since each node of the list has three parts, the starting
node of the list contains the address of the last node, i.e. 8, and of the next node, i.e. 4. The last node of the list,
stored at address 8 and containing data 6, contains the address of the first node of the list, i.e. 1, as shown in the
image. In a circular doubly linked list, the last node is identified by the address of the first node stored in
the next part of the last node; therefore, the node which contains the address of the first node is actually the last
node of the list.

Write a program to implement linear search in Java.

class LinearSearch {
    static int linearSearch(int a[], int n, int val) {
        // Going through the array sequentially
        for (int i = 0; i < n; i++) {
            if (a[i] == val)
                return i + 1;                           // 1-based position
        }
        return -1;
    }

    public static void main(String args[]) {
        int a[] = {55, 29, 10, 40, 57, 41, 20, 24, 45}; // given array
        int val = 10;                                   // value to be searched
        int n = a.length;                               // size of array
        int res = linearSearch(a, n, val);              // store result
        System.out.println();
        System.out.print("The elements of the array are - ");
        for (int i = 0; i < n; i++)
            System.out.print(" " + a[i]);
        System.out.println();
        System.out.println("Element to be searched is - " + val);
        if (res == -1)
            System.out.println("Element is not present in the array");
        else
            System.out.println("Element is present at " + res + " position of array");
    }
}

Write a program to implement binary search in Java.

class BinarySearch {
    static int binarySearch(int a[], int beg, int end, int val) {
        int mid;
        if (end >= beg) {
            mid = (beg + end) / 2;
            if (a[mid] == val) {
                return mid + 1;                            // 1-based position
            } else if (a[mid] < val) {
                return binarySearch(a, mid + 1, end, val); // search the right half
            } else {
                return binarySearch(a, beg, mid - 1, val); // search the left half
            }
        }
        return -1;
    }

    public static void main(String args[]) {
        int a[] = {8, 10, 22, 27, 37, 44, 49, 55, 69};     // sorted array
        int val = 37;                                      // value to be searched
        int n = a.length;
        int res = binarySearch(a, 0, n - 1, val);
        System.out.print("The elements of the array are: ");
        for (int i = 0; i < n; i++) {
            System.out.print(a[i] + " ");
        }
        System.out.println();
        System.out.println("Element to be searched is: " + val);
        if (res == -1)
            System.out.println("Element is not present in the array");
        else
            System.out.println("Element is present at " + res + " position of array");
    }
}

Bubble sort works by repeatedly swapping adjacent elements until they are in the intended order. It is
called bubble sort because, with each pass, the smaller (or larger) elements "bubble" towards their correct end of
the array.

Although it is simple to use, it is primarily used as an educational tool because the real-world performance of
bubble sort is poor. It is not suitable for large data sets. The average and worst-case complexity of bubble
sort is O(n²), where n is the number of items.

Bubble sort is mainly used where -

o complexity does not matter
o simple and short code is preferred

Algorithm
In the algorithm given below, suppose arr is an array of n elements. The assumed swap function in the
algorithm will swap the values of given array elements.

begin BubbleSort(arr)
    for all array elements
        if arr[i] > arr[i+1]
            swap(arr[i], arr[i+1])
        end if
    end for
    return arr
end BubbleSort

Working of Bubble sort Algorithm

To understand the working of the bubble sort algorithm, let's take an unsorted array. We keep the array
short, since we know the complexity of bubble sort is O(n²).

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The
best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of bubble sort
is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order;
that is, suppose you have to sort the array elements in ascending order, but the elements are in
descending order. The worst-case time complexity of bubble sort is O(n²).

2. Space Complexity
o The space complexity of bubble sort is O(1), because only a single extra variable is required
for swapping.
o The optimized bubble sort uses two extra variables, but its space complexity is still O(1), since
the number of extra variables is constant.
public class Bubble {
    static void print(int a[]) {
        int n = a.length;
        for (int i = 0; i < n; i++) {
            System.out.print(a[i] + " ");
        }
    }

    static void bubbleSort(int a[]) {
        int n = a.length;
        int i, j, temp;
        for (i = 0; i < n - 1; i++) {
            // After each pass, the largest remaining element settles at the end,
            // so the inner loop can stop one position earlier each time.
            for (j = 0; j < n - i - 1; j++) {
                if (a[j] > a[j + 1]) {        // compare adjacent elements
                    temp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int a[] = {35, 10, 31, 11, 26};
        System.out.println("Before sorting array elements are - ");
        print(a);
        bubbleSort(a);
        System.out.println();
        System.out.println("After sorting array elements are - ");
        print(a);
    }
}

o In selection sort, the smallest value among the unsorted elements of the array is selected in every pass
and moved to its appropriate position in the array. It is also one of the simplest algorithms. It is an in-place
comparison sorting algorithm. In this algorithm, the array is divided into two parts: the first is the sorted part,
and the other is the unsorted part. Initially, the sorted part of the array is empty, and the unsorted part is
the given array. The sorted part is placed at the left, while the unsorted part is placed at the right.

o In selection sort, the first smallest element is selected from the unsorted array and placed at the first
position. After that, the second smallest element is selected and placed at the second position. The process
continues until the array is entirely sorted.

o The average and worst-case complexity of selection sort is O(n²), where n is the number of items. Due
to this, it is not suitable for large data sets.


Selection sort is generally used when -

o A small array is to be sorted
o Swapping cost doesn't matter
o It is compulsory to check all elements

Now, let's see the algorithm of selection sort.

Algorithm

SELECTION SORT(arr, n)

Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
Step 2:     CALL SMALLEST(arr, i, n, pos)
Step 3:     SWAP arr[i] with arr[pos]
        [END OF LOOP]
Step 4: EXIT

SMALLEST(arr, i, n, pos)
Step 1: [INITIALIZE] SET SMALL = arr[i]
Step 2: [INITIALIZE] SET pos = i
Step 3: Repeat for j = i+1 to n-1
            if (SMALL > arr[j])
                SET SMALL = arr[j]
                SET pos = j
            [END OF if]
        [END OF LOOP]
Step 4: RETURN pos
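The pseudocode above translates directly into Java. The following sketch is one possible rendering (method names mirror the pseudocode; the test array is the one used in the walkthrough below):

class SelectionSort {
    // Mirrors SMALLEST(arr, i, n, pos): returns the index of the smallest
    // element in a[i..n-1].
    static int smallest(int[] a, int i, int n) {
        int pos = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[pos])
                pos = j;
        return pos;
    }

    static void selectionSort(int[] a) {
        int n = a.length;
        for (int i = 0; i < n - 1; i++) {
            int pos = smallest(a, i, n);   // find the minimum of the unsorted part
            int temp = a[i];               // swap it into position i
            a[i] = a[pos];
            a[pos] = temp;
        }
    }

    public static void main(String[] args) {
        int[] a = {12, 29, 25, 8, 32, 17, 40};
        selectionSort(a);
        for (int x : a)
            System.out.print(x + " ");     // prints 8 12 17 25 29 32 40
    }
}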
Working of Selection sort Algorithm
Now, let's see the working of the selection sort algorithm.
To understand the working of the selection sort algorithm, let's take an unsorted array. It will be easier to
understand selection sort via an example.
Let the elements of the array be -
12 29 25 8 32 17 40
Now, for the first position in the sorted array, the entire array is scanned sequentially.
At present, 12 is stored at the first position. After searching the entire array, it is found that 8 is the smallest
value.
So, swap 12 with 8. After the first iteration, 8 appears at the first position in the sorted array.
8 29 25 12 32 17 40
For the second position, where 29 is currently stored, we again sequentially scan the rest of the items of the
unsorted array. After scanning, we find that 12 is the second lowest element in the array and should appear
at the second position.
Now, swap 29 with 12. After the second iteration, 12 appears at the second position in the sorted array. So,
after two iterations, the two smallest values are placed at the beginning in sorted order.
8 12 25 29 32 17 40
The same process is applied to the rest of the array elements until the array is completely sorted:
8 12 17 25 29 32 40

Selection sort complexity

Now, let's see the time complexity of selection sort in best case, average case, and in worst case. We will also
see the space complexity of the selection sort.

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The
best-case time complexity of selection sort is O(n²).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of selection sort
is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order;
that is, suppose you have to sort the array elements in ascending order, but the elements are in
descending order. The worst-case time complexity of selection sort is O(n²).

2. Space Complexity
The space complexity of selection sort is O(1), since it sorts the array in place using only a constant number
of extra variables.

Quicksort is a widely used sorting algorithm that makes n log n comparisons in the average case when sorting an
array of n elements. It is a fast and highly efficient sorting algorithm. This algorithm follows the divide and
conquer approach. Divide and conquer is a technique of breaking down a problem into subproblems, then
solving the subproblems, and combining the results back together to solve the original problem.
Divide: First pick a pivot element. After that, partition or rearrange the array into two sub-arrays such
that each element in the left sub-array is less than or equal to the pivot element and each element in the right
sub-array is larger than the pivot element.
Choosing the pivot
Picking a good pivot is necessary for a fast implementation of quicksort. However, it can be difficult to determine a
good pivot. Some of the ways of choosing a pivot are as follows -
o The pivot can be random, i.e. select a random pivot from the given array.
o The pivot can be either the rightmost element or the leftmost element of the given array.
o Select the median as the pivot element.
Algorithm

QUICKSORT(array A, start, end)
{
    if (start < end)
    {
        p = partition(A, start, end)
        QUICKSORT(A, start, p - 1)
        QUICKSORT(A, p + 1, end)
    }
}
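The partition routine is not spelled out above. The sketch below uses the common Lomuto scheme with the rightmost element as the pivot; note that this differs from the leftmost-pivot, pointer-swapping scheme used in the walkthrough that follows, though both are valid partition strategies:

class QuickSort {
    // Lomuto partition: uses a[end] as the pivot and returns its final index.
    static int partition(int[] a, int start, int end) {
        int pivot = a[end];
        int i = start - 1;                 // boundary of the "<= pivot" region
        for (int j = start; j < end; j++) {
            if (a[j] <= pivot) {
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[end]; a[end] = t;   // place the pivot
        return i + 1;
    }

    static void quickSort(int[] a, int start, int end) {
        if (start < end) {
            int p = partition(a, start, end);
            quickSort(a, start, p - 1);    // sort the left sub-array
            quickSort(a, p + 1, end);      // sort the right sub-array
        }
    }

    public static void main(String[] args) {
        int[] a = {24, 9, 29, 14, 19, 27};   // array from the walkthrough below
        quickSort(a, 0, a.length - 1);
        for (int x : a)
            System.out.print(x + " ");       // prints 9 14 19 24 27 29
    }
}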
Working of Quick Sort Algorithm
Now, let's see the working of the quicksort algorithm.
To understand the working of quicksort, let's take an unsorted array. It will make the concept clearer and
more understandable.
Let the elements of the array be: 24 9 29 14 19 27
In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24, a[right] = 27 and
a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left:
24(left/pivot) 9 29 14 19 27(right)

Now, a[pivot] < a[right], so the algorithm moves the right pointer one position towards the left, i.e. -
24(left/pivot) 9 29 14 19(right) 27

Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to the right, as -
19(left) 9 29 14 24(pivot/right) 27

Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm starts from the left
and moves to the right.
As a[pivot] > a[left], the algorithm moves the left pointer one position to the right, as -
19 9(left) 29 14 24(pivot/right) 27

Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves the left pointer one
position to the right, as -
19 9 29(left) 14 24(pivot/right) 27

Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot] and a[left]; now the pivot
is at the left, i.e. -
19 9 24(pivot/left) 14 29(right) 27

Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now, a[left] = 24,
a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves the right pointer one position to the
left, as -
19 9 24(pivot/left) 14(right) 29 27

Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot] and a[right]; now the
pivot is at the right, i.e. -
19 9 14(left) 24(pivot/right) 29 27

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm starts from the left
and moves to the right:
19 9 14 24(left/right/pivot) 29 27

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right all point to the same element. This
marks the termination of the procedure.
Element 24, the pivot element, is now placed at its exact position.
The elements to the right of 24 are greater than it, and the elements to the left of 24 are smaller than it.

Now, in a similar manner, the quicksort algorithm is separately applied to the left and right sub-arrays. After
sorting is done, the array will be:
9 14 19 24 27 29

Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case. We will also see
the space complexity of quicksort.
1. Time Complexity
o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the middle
element or close to the middle element. The best-case time complexity of quicksort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of quicksort
is O(n*logn).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is either the greatest or
the smallest element. For example, if the pivot element is always the last element of the array, the worst case
occurs when the given array is already sorted in ascending or descending order. The worst-case
time complexity of quicksort is O(n²).
2. Space Complexity
Space Complexity: O(log n) on average, for the recursion stack (O(n) in the worst case)
Stable: No
Binary Tree

A binary tree is a tree in which each node can have a maximum of two children. The name 'binary' itself suggests
'two'; therefore, each node can have either 0, 1 or 2 children.

o The above tree is a binary tree because each node contains at most two children. The logical
representation of the above tree is given below:

In the above tree, node 1 contains two pointers, i.e. a left and a right pointer pointing to the left and right node
respectively. Node 2 contains both nodes (a left and a right node); therefore, it also has two pointers (left and
right). Nodes 3, 5 and 6 are leaf nodes, so all of these nodes contain a NULL pointer in both their left and right
parts.

Properties of Binary Tree


o At each level i, the maximum number of nodes is 2^i.
o The height of the tree is defined as the longest path from the root node to a leaf node. The tree which
is shown above has a height equal to 3. Therefore, the maximum number of nodes in a tree of height 3 is equal
to (1 + 2 + 4 + 8) = 15. In general, the maximum number of nodes possible in a tree of height h is
(2^0 + 2^1 + 2^2 + … + 2^h) = 2^(h+1) - 1.
o The minimum number of nodes possible at height h is equal to h + 1.
o If the number of nodes is minimum, then the height of the tree is maximum. Conversely, if the
number of nodes is maximum, then the height of the tree is minimum.
o If there are n nodes in the binary tree, the minimum height can be computed as follows. As we know that
  n = 2^(h+1) - 1
  n + 1 = 2^(h+1)
  Taking log on both sides,
  log2(n+1) = h + 1
  h = log2(n+1) - 1
o The maximum height can be computed as follows. As we know that, in the minimal case,
  n = h + 1
  h = n - 1

The term 'tree traversal' means traversing or visiting each node of a tree. There is a single way to traverse a
linear data structure such as a linked list, queue, or stack. In contrast, there are multiple ways to traverse a tree,
listed as follows -
o Preorder traversal
o Inorder traversal
o Postorder traversal
So, in this article, we will discuss the above-listed techniques for traversing a tree. Now, let's start discussing the
ways of tree traversal.
Preorder traversal
This technique follows the 'root left right' policy. It means that the root node is visited first, after that the left
subtree is traversed recursively, and finally the right subtree is traversed recursively. As the root node is traversed
before (or pre) the left and right subtrees, it is called preorder traversal.
So, in a preorder traversal, each node is visited before both of its subtrees.
The applications of preorder traversal include -
o It is used to create a copy of the tree.
o It can also be used to get the prefix expression of an expression tree.
Algorithm
Until all nodes of the tree are visited

Step 1 - Visit the root node.
Step 2 - Traverse the left subtree recursively.
Step 3 - Traverse the right subtree recursively.

Postorder traversal
This technique follows the 'left right root' policy. It means that the left subtree of the root node is traversed first,
after that the right subtree is traversed recursively, and finally the root node is visited. As the root node is
traversed after (or post) the left and right subtrees, it is called postorder traversal.
So, in a postorder traversal, each node is visited after both of its subtrees.
The applications of postorder traversal include -
o It is used to delete the tree.
o It can also be used to get the postfix expression of an expression tree.
Algorithm
Until all nodes of the tree are visited

Step 1 - Traverse the left subtree recursively.
Step 2 - Traverse the right subtree recursively.
Step 3 - Visit the root node.
Inorder traversal
This technique follows the 'left root right' policy. It means that the left subtree is visited first, after that the root
node is traversed, and finally the right subtree is traversed. As the root node is traversed between the left and
right subtrees, it is named inorder traversal.
So, in an inorder traversal, each node is visited between its subtrees.
The applications of inorder traversal include -
o It is used to get the BST nodes in increasing order.
o It can also be used to get the infix expression of an expression tree.

Algorithm
Until all nodes of the tree are visited

Step 1 - Traverse the left subtree recursively.
Step 2 - Visit the root node.
Step 3 - Traverse the right subtree recursively.
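All three traversals can be sketched in Java as short recursive methods (the node class is the illustrative one used earlier):

class TreeTraversal {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Root - Left - Right
    static void preorder(Node n) {
        if (n == null) return;
        System.out.print(n.data + " ");
        preorder(n.left);
        preorder(n.right);
    }

    // Left - Root - Right
    static void inorder(Node n) {
        if (n == null) return;
        inorder(n.left);
        System.out.print(n.data + " ");
        inorder(n.right);
    }

    // Left - Right - Root
    static void postorder(Node n) {
        if (n == null) return;
        postorder(n.left);
        postorder(n.right);
        System.out.print(n.data + " ");
    }

    public static void main(String[] args) {
        Node root = new Node(1);           // a tiny tree: 1 with children 2 and 3
        root.left = new Node(2);
        root.right = new Node(3);
        preorder(root);                    // prints 1 2 3
        System.out.println();
        inorder(root);                     // prints 2 1 3
        System.out.println();
        postorder(root);                   // prints 2 3 1
    }
}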
Depth-first search (DFS)
Depth-first search is a recursive algorithm for searching all the vertices of a tree data structure or a graph. The
depth-first search (DFS) algorithm starts with the initial node of the graph G and goes deeper until we find the
goal node or a node with no children.
Because of the recursive nature, a stack data structure can be used to implement the DFS algorithm. The process
of implementing DFS is similar to the BFS algorithm.
The step-by-step process to implement the DFS traversal is given as follows -
1. First, create a stack with the total number of vertices in the graph.
2. Now, choose any vertex as the starting point of the traversal, and push that vertex onto the stack.
3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the
stack.
4. Now, repeat step 3 until no vertices are left to visit from the vertex on the stack's top.
5. If no vertex is left, go back and pop a vertex from the stack.
6. Repeat steps 3, 4, and 5 until the stack is empty.
Applications of DFS algorithm
The applications of using the DFS algorithm are given as follows -
o The DFS algorithm can be used to implement topological sorting.
o It can be used to find paths between two vertices.
o It can also be used to detect cycles in a graph.
o The DFS algorithm is also used for puzzles that have a single solution.
o DFS is used to determine whether a graph is bipartite or not.
Algorithm
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set their
STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
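A compact Java sketch of stack-based DFS over an adjacency-list graph is given below. It is a common variant that marks a vertex as visited when it is pushed; a boolean visited[] array plays the role of the three STATUS values above:

import java.util.*;

class DFS {
    static void dfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        visited[start] = true;                 // corresponds to the waiting state
        while (!stack.isEmpty()) {
            int n = stack.pop();               // process N (processed state)
            System.out.print(n + " ");
            for (int m : adj.get(n)) {
                if (!visited[m]) {             // neighbor still in the ready state
                    visited[m] = true;
                    stack.push(m);
                }
            }
        }
    }
}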
Complexity of Depth-first search algorithm
The time complexity of the DFS algorithm is O(V+E), where V is the number of vertices and E is the number of
edges in the graph.
The space complexity of the DFS algorithm is O(V).
Breadth-first search (BFS)
Breadth-first search is a graph traversal algorithm that starts traversing the graph from the root node and
explores all the neighboring nodes. Then, it selects the nearest node and explores all the unexplored nodes.
When using BFS for traversal, any node in the graph can be considered the root node.
There are many ways to traverse a graph, but among them, BFS is the most commonly used approach. It
searches all the vertices of a tree or graph data structure level by level. BFS puts every vertex of the graph
into two categories - visited and non-visited. It selects a single node in the graph and, after that, visits all the
nodes adjacent to the selected node.
Applications of BFS algorithm
The applications of breadth-first-algorithm are given as follows -
o BFS can be used to find the neighboring locations from a given source location.
o In a peer-to-peer network, the BFS algorithm can be used as a traversal method to find all the neighboring
nodes. Most torrent clients, such as BitTorrent, uTorrent, etc., employ this process to find "seeds" and
"peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main algorithms that can
be used to index web pages. It starts traversing from the source page and follows the links associated
with the page. Here, every web page is considered a node in the graph.
o BFS is used to determine the shortest path (in an unweighted graph) and a minimum spanning tree.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow network.
Algorithm
The steps involved in the BFS algorithm to explore a graph are given as follows -
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set
their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
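A queue-based Java sketch of BFS, mirroring the steps above with a boolean visited[] array in place of the STATUS values (the graph is assumed to be given as an adjacency list):

import java.util.*;

class BFS {
    static void bfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(start);                      // enqueue the starting node
        visited[start] = true;                 // waiting state
        while (!queue.isEmpty()) {
            int n = queue.poll();              // process N (processed state)
            System.out.print(n + " ");
            for (int m : adj.get(n)) {
                if (!visited[m]) {             // neighbor still in the ready state
                    visited[m] = true;
                    queue.add(m);
                }
            }
        }
    }
}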
Complexity of BFS algorithm
Time complexity of BFS depends upon the data structure used to represent the graph. The time complexity of
BFS algorithm is O(V+E), since in the worst case, BFS algorithm explores every node and edge. In a graph, the
number of vertices is O(V), whereas the number of edges is O(E).
The space complexity of BFS can be expressed as O(V), where V is the number of vertices.
