AD3271 - Data Structure Design Lab - Final
Year / Sem : I / II
LIST OF EXPERIMENTS
7. Implementation of searching and sorting algorithms
10. Implementation of Binary Search Trees
14. Implementation of minimum spanning tree algorithms
Ex. No:1 Implement simple ADTs as Python classes
Aim:
To implement simple ADTs such as Stack, Queue, and List as Python classes.
Theory:
An ADT is a programmer-defined data type that is more abstract than the data
types provided by the programming language. ADTs provide a representation for
data entities in the problem domain, e.g., Customer, Vehicle, Parts List, or Address
Book.
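Although the listings below demonstrate the operations with plain Python lists, the aim asks for ADTs as Python classes; the following is a minimal illustrative sketch (the class and method names are my own, not from the manual) of a Stack ADT wrapped in a class:

class Stack:
    """Minimal Stack ADT: the user sees only push/pop/is_empty, not the list inside."""
    def __init__(self):
        self._items = []              # hidden representation

    def push(self, item):
        self._items.append(item)      # insert at the top

    def pop(self):
        return self._items.pop()      # remove and return the top item

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push('a')
s.push('b')
print(s.pop())        # prints: b
print(s.is_empty())   # prints: False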
Algorithm:
Coding :
Stack:
stack = []
stack.append('a')
stack.append('b')
stack.append('c')
print('Initial stack')
print(stack)
print('\nElements popped from stack:')
print(stack.pop())
print(stack.pop())
print(stack.pop())
print('\nStack after elements are popped:')
print(stack)
Queue:
queue = []
queue.append('a')
queue.append('b')
queue.append('c')
print("Initial queue")
print(queue)
print("\nElements dequeued from queue")
print(queue.pop(0))
print(queue.pop(0))
print(queue.pop(0))
print("\nQueue after removing elements")
print(queue)
List:
List = [1,2,3,4]
print("Initial List: ")
print(List)
List.extend([8, 'Geeks', 'Always'])
print("\nList after performing Extend Operation: ")
print(List)
Output:
Stack:
Initial stack
['a', 'b', 'c']
Elements popped from stack:
c
b
a
Stack after elements are popped:
[]
Queue:
['a', 'b', 'c']
Elements dequeued from queue
a
b
c
Queue after removing elements: []
List:
Initial List:
[1, 2, 3, 4]
List after performing Extend Operation: [1, 2, 3, 4, 8, 'Geeks', 'Always']
Result:
Thus the Implementation of simple ADTs as Python classes was executed successfully.
Viva:
• What is ADT?
We can create data structures along with their operations; such data
structures that are not built-in are known as Abstract Data Types (ADTs).
Aim:
To implement recursive algorithms in Python using the Fibonacci series.
Theory:
Algorithm:
Step 1:Input the 'n' value until which the Fibonacci series has to be generated
Step 7:sum = a + b
Step 10:Else
Coding:
No = 10
num1, num2 = 0, 1
count = 0
if No <= 0:
print("Invalid Number")
elif No == 1:
print("Fibonacci sequence for limit of ",No,":")
print(num1)
else:
print("Fibonacci sequence:")
while count < No:
print(num1)
nth = num1 + num2
num1 = num2
num2 = nth
count += 1
Output:
Fibonacci sequence:
0
1
1
2
3
5
8
13
21
34
Result:
Thus the Implementation of recursive algorithms in Python using Fibonacci series was
executed successfully.
Real Time application:
• Recursive algorithms can be used for sorting and searching data structures such as
linked lists, binary trees, and graphs.
• They are also often used for string manipulation tasks such as checking if two strings
are anagrams or finding the longest common subsequence between two strings.
• Another common use case for recursion is traversing complex data structures such as
JSON objects or directory trees on file systems.
• Finally, they can also be used to generate fractals or create animation effects using
recursive rendering techniques.
Viva:
• What is recursion?
The process in which a function calls itself directly or indirectly is called recursion.
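The program above generates the series iteratively; for reference, a recursive version (an illustrative sketch, not part of the original manual) that prints the same ten values is shown below.

def fib(n):
    """Return the nth Fibonacci number (0-indexed) using recursion."""
    if n < 2:
        return n                      # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)    # recursive case

for i in range(10):
    print(fib(i))                     # prints 0 1 1 2 3 5 8 13 21 34, one per line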
Aim:
To Implement List ADT using Python arrays
CO2: Design, implement, and analyse linear data structures, such as lists,
queues, and stacks, according to the needs of different applications.
Theory:
Algorithm:
class node:
def __init__(self, data):
self.data=data
self.next=None
def add(data):
nn=node(0)
nn.data=data
nn.next=None
return nn
def printarray(a, n):
i=0
while(i<n):
print(a[i], end = " ")
i=i+1
def findlength(head):
cur=head
count=0
while(cur!=None):
count=count+1
cur=cur.next
return count
def convertarr(head):
len=findlength(head)
arr=[]
index=0
cur=head
while(cur!=None):
arr.append(cur.data)
cur=cur.next
printarray(arr, len)
head=node(0)
head=add(6)
head.next = add(4)
head.next.next = add(3)
head.next.next.next = add(4)
convertarr(head)
Output:
6 4 3 4
Result:
Thus the implementation of List in arrays was executed successfully.
Real Time application:
Arrays are used for storing data for mathematical computations. They also help in
image processing, record management, and ordering of boxes. Book pages are
another example of an application of arrays.
Viva:
• What is ADT?
We can create data structures along with their operations; such data
structures that are not built-in are known as Abstract Data Types
(ADTs).
Aim:
To create a list in Python and perform insert, remove, and extend operations on it.
CO2: Design, implement, and analyse linear data structures, such as lists, queues, and stacks,
according to the needs of different applications.
Theory:
• In the linked-list implementation, one pointer must be stored for every item in the list,
while the array stores only the items themselves.
• On the other hand, the space used for a linked list is always proportional to the number
of items in the list.
Algorithm:
Coding:
List = [1,2,3,4]
print("Initial List: ")
print(List)
List.extend([8, 'Geeks', 'Always'])
print("\nList after performing Extend Operation: ")
print(List)
List = []
print("Blank List: ")
print(List)
List = [10, 20, 14]
print("\nList of numbers: ")
print(List)
List = ["Geeks", "For", "Geeks"]
print("\nList Items: ")
print(List[0])
print(List[2])
Adding the elements:
List = [1, 2, 3, 4]
print("Initial List: ")
print(List)
List.insert(3, 12)
List.insert(0, 'Geeks')
print("\nList after performing Insert Operation: ")
print(List)
List = [1, 2, 3, 4, 5,
        6, 7, 8, 9, 10,
        11, 12]
print("Initial List: ")
print(List)
List.remove(5)
List.remove(6)
print("\nList after Removal of two elements: ")
print(List)
for i in range(1, 5):
List.remove(i)
print("\nList after Removing a range of elements: ")
print(List)
List = [['Geeks', 'For'] , ['Geeks']]
print("\nMulti-Dimensional List: ")
print(List)
Output:
Result:
Thus the list was created and the insert, remove, and extend operations were executed
successfully.
Viva:
Aim:
To implement Stack and Queue ADTs using Python lists.
CO2: Design, implement, and analyse linear data structures, such as lists, queues, and stacks,
according to the needs of different applications.
Theory:
The most fundamental operations in the stack ADT include: push(), pop(), peek(),
isFull(), isEmpty(). These are all built-in operations to carry out data manipulation and to
check the status of the stack. A stack uses a pointer that always points to the topmost element
within the stack, hence it is called the top pointer.
Algorithm:
Coding:
Stack:
stack = []
stack.append('a')
stack.append('b')
stack.append('c')
print('Initial stack')
print(stack)
print('\nElements popped from stack:')
print(stack.pop())
print(stack.pop())
print(stack.pop())
print('\nStack after elements are popped:')
print(stack)
Queue:
queue = []
queue.append('a')
queue.append('b')
queue.append('c')
print("Initial queue")
print(queue)
print("\nElements dequeued from queue")
print(queue.pop(0))
print(queue.pop(0))
print(queue.pop(0))
print("\nQueue after removing elements")
print(queue)
Output:
Initial stack
['a', 'b', 'c']
Elements popped from stack:
c
b
a
Stack after elements are popped:
[]
Result:
Stacks are used in various algorithms, data manipulation procedures and system
architecture - like process scheduling in operating systems. Real-world examples include the
'undo' function in software applications following the 'LIFO' principle and a web browser's
back button function using stack to track visited sites.
Viva:
Aim:
To implement polynomial addition using a singly linked list in Python.
CO2: Design, implement, and analyse linear data structures, such as lists, queues, and stacks,
according to the needs of different applications.
Theory:
Using a linked list, we can perform various operations on polynomials, such as adding,
subtracting, multiplying, and dividing them. Linked lists can also be used to perform arithmetic
operations on long integers.
Algorithm:
Coding:
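The coding for this experiment is missing from the source; the sketch below is a minimal reconstruction (the node layout, helper names, and the two sample polynomials are assumptions chosen so that it reproduces the output shown below).

class Node:
    def __init__(self, coeff):
        self.coeff = coeff      # coefficient of the current term
        self.next = None        # link to the next term

def build_poly(coeffs):
    """Build a linked list of terms from coefficients, lowest power first."""
    head = Node(coeffs[0])
    cur = head
    for c in coeffs[1:]:
        cur.next = Node(c)
        cur = cur.next
    return head

def add_poly(p1, p2):
    """Add two polynomials term by term and return the head of the result list."""
    dummy = Node(0)
    tail = dummy
    while p1 or p2:
        c = (p1.coeff if p1 else 0) + (p2.coeff if p2 else 0)
        tail.next = Node(c)
        tail = tail.next
        p1 = p1.next if p1 else None
        p2 = p2.next if p2 else None
    return dummy.next

def print_poly(head):
    """Print a polynomial as: c0 + c1x^1 + c2x^2 ..."""
    terms, power = [], 0
    while head:
        terms.append(str(head.coeff) if power == 0 else str(head.coeff) + "x^" + str(power))
        power += 1
        head = head.next
    print(" + ".join(terms))

first = build_poly([5, 0, 10, 6])     # 5 + 0x^1 + 10x^2 + 6x^3
second = build_poly([1, 2, 4])        # 1 + 2x^1 + 4x^2
print("First polynomial is")
print_poly(first)
print("Second polynomial is")
print_poly(second)
print("Sum polynomial is")
print_poly(add_poly(first, second))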
Output:
First polynomial is
5 + 0x^1 + 10x^2 + 6x^3
Second polynomial is
1 + 2x^1 + 4x^2
Sum polynomial is
6 + 2x^1 + 14x^2 + 6x^3
Result:
Thus the addition of two polynomials using a linked list was executed successfully.
Viva:
• What are the reasons for choosing linked list to solve polynomial addition?
Linked lists provide an elegant solution for efficiently handling polynomials due to
their dynamic memory allocation and straightforward implementation.
Aim:
To convert an infix expression to a postfix expression using a stack in Python.
CO2: Design, implement, and analyse linear data structures, such as lists, queues, and stacks,
according to the needs of different applications.
Theory:
Postfix notation, also known as Reverse Polish Notation (RPN), offers a solution to the
problems associated with infix notation. In postfix notation, the operators are placed after their
corresponding operands, eliminating the need for parentheses and reducing ambiguity.
Algorithm:
Coding:
class Conversion:
    def __init__(self, capacity):
        self.top = -1
        self.capacity = capacity
        self.array = []                 # stack of operators
        self.output = []                # postfix result
        self.precedence = {'+':1, '-':1, '*':2, '/':2, '^':3}

    def isEmpty(self):
        return True if self.top == -1 else False

    def peek(self):
        return self.array[-1]

    def pop(self):
        if not self.isEmpty():
            self.top -= 1
            return self.array.pop()
        else:
            return "$"

    def push(self, op):
        self.top += 1
        self.array.append(op)

    def isOperand(self, ch):
        return ch.isalpha()

    def notGreater(self, i):
        # True when the operator on top of the stack has precedence >= i
        try:
            a = self.precedence[i]
            b = self.precedence[self.peek()]
            return True if a <= b else False
        except KeyError:
            return False

    def infixToPostfix(self, exp):
        for i in exp:
            if self.isOperand(i):
                self.output.append(i)
            elif i == '(':
                self.push(i)
            elif i == ')':
                while (not self.isEmpty()) and self.peek() != '(':
                    self.output.append(self.pop())
                self.pop()
            else:
                while (not self.isEmpty()) and self.notGreater(i):
                    self.output.append(self.pop())
                self.push(i)
        while not self.isEmpty():
            self.output.append(self.pop())
        print("".join(self.output))

exp = "a+b*(c^d-e)^(f+g*h)-i"
obj = Conversion(len(exp))
obj.infixToPostfix(exp)
Output:
abcd^e-fgh*+^*+i-
Result:
Thus the conversion of the given infix expression to its postfix form was executed successfully.
Viva:
Aim:
To implement First Come First Serve (FCFS) CPU scheduling using Python.
CO2: Design, implement, and analyse linear data structures, such as lists, queues, and
stacks, according to the needs of different applications.
Theory:
The full form of FCFS Scheduling is First Come First Serve Scheduling. FCFS
Scheduling algorithm automatically executes the queued processes and requests in the order
of their arrival. It allocates the job that first arrived in the queue to the CPU, then allocates
the second one, and so on.
Algorithm:
Step 1:Input the number of processes required to be scheduled using FCFS, burst time for
each process and its arrival time.
Step 2:Calculate the Finish Time, Turn Around Time and Waiting Time for each process
which in turn help to calculate Average Waiting Time and Average Turn Around Time
required by CPU to schedule given set of process using FCFS.
For i = 0: Finish Time T0 = Arrival Time T0 + Burst Time T0.
Coding:
def findWaitingTime(processes, n, bt, wt):
    # the first process does not wait
    wt[0] = 0
    # each later process waits for the burst times of all earlier processes
    for i in range(1, n):
        wt[i] = bt[i - 1] + wt[i - 1]

def findTurnAroundTime(processes, n, bt, wt, tat):
    # turnaround time = burst time + waiting time
    for i in range(n):
        tat[i] = bt[i] + wt[i]

def findavgTime(processes, n, bt):
    wt = [0] * n
    tat = [0] * n
    total_wt = 0
    total_tat = 0
    findWaitingTime(processes, n, bt, wt)
    findTurnAroundTime(processes, n, bt, wt, tat)
    print("Processes  Burst time  Waiting time  Turn around time")
    for i in range(n):
        total_wt = total_wt + wt[i]
        total_tat = total_tat + tat[i]
        print(" " + str(i + 1) + "\t\t" + str(bt[i]) + "\t " + str(wt[i]) + "\t\t " + str(tat[i]))
    print("Average waiting time = " + str(total_wt / n))
    print("Average turn around time = " + str(total_tat / n))
if __name__ =="__main__":
processes = [ 1, 2, 3]
n = len(processes)
burst_time = [10, 5, 8]
findavgTime(processes, n, burst_time)
Output:
Result:
Viva:
Aim:
To implement searching using Linear Search and Binary Search algorithms in Python.
CO3: Design, implement, and analyse efficient sorting and searching to meet requirements
such as searching, indexing, and sorting
Theory:
Searching Algorithms are designed to check for an element or retrieve an element from
any data structure where it is stored. Linear Search sequentially checks each element in the
list until it finds a match or exhausts the list. Binary Search continuously divides the sorted
list, comparing the middle element with the target value.
Algorithm:
Linear Search:
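The linear search steps and program are missing from the source; a minimal sketch (the function name and sample data are assumptions) is:

def LinearSearch(arr, key):
    # scan every element in order until the key is found
    for i in range(len(arr)):
        if arr[i] == key:
            return i
    return -1                      # key not present

arr = [10, 20, 30, 40, 50]
key = 40
result = LinearSearch(arr, key)
if result != -1:
    print(key, "Found at index", result)
else:
    print(key, "not Found")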
Binary search:
def BinarySearch(arr, low, high, key):
    if high >= low:
        mid = (high + low) // 2
        if (arr[mid] == key):
            return mid
        elif (arr[mid] > key):
            return BinarySearch(arr, low, mid - 1, key)
        else:
            return BinarySearch(arr, mid + 1, high, key)
    else:
        return -1
arr = [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ]
key = 40
result = BinarySearch(arr, 0, len(arr)-1, key)
if result != -1:
print(key, "Found at index", str(result))
else:
print(key, "not Found")
Output of Binary Search:
40 Found at index 3
Output of Linear Search:
Result:
Thus the implementation of searching using Linear Search and Binary Search in Python was
executed successfully.
• In searching algorithm.
• Huffman coding is implemented with the help of BST.
• They are used in implementing dictionaries.
• They are used in spelling checking.
• BST is used for indexing in DBMS.
• They are also used in priority queues.
Viva:
Aim:
To implement sorting using Quick Sort and Insertion Sort algorithms in Python.
CO3: Design, implement, and analyse efficient sorting and searching to meet requirements
such as searching, indexing, and sorting
Theory:
Quicksort algorithm is efficient if the size of the input is very large. But, insertion sort
is more efficient than quick sort in case of small arrays as the number of comparisons and
swaps are less compared to quicksort.
Algorithm:
Quick Sort:
Step 1:Find a “pivot” item in the array. This item is the basis for comparison for a single
round.
Step 2:Start a pointer (the left pointer) at the first item in the array.
Step 3:Start a pointer (the right pointer) at the last item in the array.
Step 4:While the value at the left pointer in the array is less than the pivot value, move
the left pointer to the right (add 1).
Step 5:Continue until the value at the left pointer is greater than or equal to the pivot
value.
Step 6:While the value at the right pointer in the array is greater than the pivot value,
move the right pointer to the left (subtract 1).
Step 7:Continue until the value at the right pointer is less than or equal to the pivot
value.
Step 8:If the left pointer is less than or equal to the right pointer, then swap the values at
these locations in the array.
Step 9:Move the left pointer to the right by one and the right pointer to the left by one.
Insertion Sort:
Coding:
def partition(arr, low, high):
    i = (low - 1)
    pivot = arr[high]                 # last element chosen as pivot
    for j in range(low, high):
        if arr[j] <= pivot:           # move smaller elements to the left side
            i = i + 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place pivot in position
    return i + 1
def quickSort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quickSort(arr, low, pi - 1)
        quickSort(arr, pi + 1, high)
arr = [2,5,3,8,6,5,4,7]
n = len(arr)
quickSort(arr,0,n-1)
print ("Sorted array is:")
for i in range(n):
print (arr[i],end=" ")
def insertionSort(arr):
for i in range(1, len(arr)):
key = arr[i]
j = i-1
while j >=0 and key < arr[j] :
arr[j+1] = arr[j]
j -= 1
arr[j+1] = key
arr = ['t','u','t','o','r','i','a','l']
insertionSort(arr)
print ("The sorted array is:")
for i in range(len(arr)):
print (arr[i])
Output:
Result:
Thus the implementation of sorting using Quick Sort and Insertion Sort algorithms in Python was
executed successfully.
Viva:
Aim:
To implement hashing using a hash table in Python.
CO3: Design, implement, and analyse efficient sorting and searching to meet requirements
such as searching, indexing, and sorting
Theory:
A hash table, also known as a hash map, is a data structure that maps keys to values.
It is one part of a technique called hashing, the other of which is a hash function. A hash
function is an algorithm that produces an index of where a value can be found or stored in
the hash table.
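As a quick illustration of the hash-function idea (this snippet is mine, not part of the manual), the keys used in the program below map to table indices with key % capacity, where getPrime(10) in the coding section evaluates to 11:

capacity = 11                           # smallest prime >= 10, as computed by getPrime(10) below
for key in (123, 432, 213, 654):
    print(key, "->", key % capacity)    # 123 -> 2, 432 -> 3, 213 -> 4, 654 -> 5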
Algorithm:
Step 1:Create a structure, data (hash table item) with key and value as data.
Step 2:for loops to define the range within the set of elements.
Step 3:hashfunction(key) for the size of capacity.
Step 4:Using insert(),removal() data to be presented or
removed.
Step 5:Stop the program.
Coding:
hashTable = [[],] * 10
def checkPrime(n):
if n == 1 or n == 0:
return 0
for i in range(2, n//2):
if n % i == 0:
return 0
return 1
def getPrime(n):
if n % 2 == 0:
n=n+1
while not checkPrime(n):
n += 2
return n
def hashFunction(key):
capacity = getPrime(10)
return key % capacity
def insertData(key, data):
index = hashFunction(key)
hashTable[index] = [key, data]
def removeData(key):
index = hashFunction(key)
hashTable[index] = 0
insertData(123, "apple")
insertData(432, "mango")
insertData(213, "banana")
insertData(654, "guava")
print(hashTable)
removeData(123)
print(hashTable)
Output:
[[], [], [123, 'apple'], [432, 'mango'], [213, 'banana'], [654, 'guava'], [], [], [], []]
[[], [], 0, [432, 'mango'], [213, 'banana'], [654, 'guava'], [], [], [], []]
Result:
Some of the more popular examples are username-password databases, integrity checks, and
blockchain verification.
• Verifying the integrity of messages and files. An important application of secure
hashes is the verification of message integrity. ...
• Signature generation and verification. ...
• Password verification. ...
• Proof-of-work. ...
• File or data identifier.
Viva:
Aim:
To implement tree representation using binary trees in Python.
CO4: Design, implement, and analyse efficient tree structures to meet requirements such as
binary ADT, AVL and heaps.
Theory:
Trees are a fundamental concept in computer science and data structures used to solve
complex problems. Trees provide an efficient way of organizing, storing, and retrieving data.
They are versatile structures that can be used for sorting, searching, indexing, and traversal
applications.
Algorithm:
Coding:
class Node:
def __init__(self, data):
self.left = None
self.right = None
self.data = data
def insert(self, data):
if self.data:
if data < self.data:
if self.left is None:
self.left = Node(data)
else:
self.left.insert(data)
elif data > self.data:
if self.right is None:
self.right = Node(data)
else:
self.right.insert(data)
else:
self.data = data
def PrintTree(self):
if self.left:
self.left.PrintTree()
print(self.data)
if self.right:
self.right.PrintTree()
root = Node(12)
root.insert(6)
root.insert(14)
root.insert(3)
root.PrintTree()
Output:
3
6
12
14
Result:
• Artificial Intelligence: In AI, binary trees are used in decision tree algorithms for tasks
like classification and regression. They are also used in minimax algorithms for game
playing, such as in the construction of game trees for games like chess or tic-tac-toe.
• Data Compression: Huffman coding, a widely used algorithm for lossless data
compression, employs binary trees to generate variable-length codes for characters
based on their frequencies.
• Cryptography: Binary trees are used in cryptographic algorithms, such as Merkle trees
used in blockchain technology for efficient and secure verification of large datasets.
• Hierarchical Data Structures: Binary trees are used to represent hierarchical data
structures like organization charts, file systems, and XML/JSON parsing.
• Computer Graphics: Binary space partitioning trees (BSP trees) are used in computer
graphics for visibility determination and efficient rendering of scenes.
• Genetics and Evolutionary Biology: Binary trees are used in evolutionary biology for
representing phylogenetic trees, which depict evolutionary relationships between
species.
Viva:
Aim:
To implement tree traversal algorithms (in-order, pre-order, and post-order) using Python.
CO4: Design, implement, and analyse efficient tree structures to meet requirements such as
binary ADT, AVL and heaps.
Theory:
Tree traversal algorithms are used to visit and process all the nodes in a tree. There
are mainly three types of tree traversal algorithms:
• In-order Traversal:
In an in-order traversal, nodes are visited in the order: left child, root, right child.
For binary search trees (BSTs), in-order traversal visits the nodes in sorted order.
• Pre-order Traversal:
In a pre-order traversal, nodes are visited in the order: root, left child, right child.
This is often used to create a copy of the tree.
• Post-order Traversal:
In a post-order traversal, nodes are visited in the order: left child, right child, root.
This is often used in deleting the tree or evaluating expressions.
Algorithm:
Step 1:Inorder(tree)
• Traverse the left subtree, i.e., call Inorder(left-subtree)
• Visit the root.
• Traverse the right subtree, i.e., call Inorder(right-subtree)
Step 2:Preorder(tree)
• Visit the root.
• Traverse the left subtree, i.e., call Preorder(left-subtree)
• Traverse the right subtree, i.e., call Preorder(right-subtree)
Step 3:Postorder(tree)
• Traverse the left subtree, i.e., call Postorder(left-subtree)
• Traverse the right subtree, i.e., call Postorder(right-subtree)
Coding:
class Node:
def __init__(self,key):
self.left = None
self.right = None
self.val = key
def printInorder(root):
if root:
printInorder(root.left)
print(root.val),
printInorder(root.right)
def printPostorder(root):
if root:
printPostorder(root.left)
printPostorder(root.right)
print(root.val),
def printPreorder(root):
if root:
print(root.val),
printPreorder(root.left)
printPreorder(root.right)
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
print ("\nPreorder traversal of binary tree is")
printPreorder(root)
print ("\nInorder traversal of binary tree is")
printInorder(root)
print ("\nPostorder traversal of binary tree is")
printPostorder(root)
Output:
Result:
Tree traversal algorithms, such as in-order, pre-order, and post-order traversals, are essential
for processing tree nodes in a specific order. They are used in various applications, including
expression evaluation, syntax parsing, and data manipulation.
Viva:
Aim:
To implement Binary Search Trees using Python.
CO4: Design, implement, and analyse efficient tree structures to meet requirements such as
binary ADT, AVL and heaps.
Theory:
Binary Search Trees (BSTs) are a type of binary tree where each node has at most
two children, and the left child is less than the parent node, while the right child is greater.
This property makes searching, insertion, and deletion operations very efficient.
Algorithm:
Step 2 - Compare the search element with the value of root node in the tree.
Step 3 - If both are matched, then display "Given node is found!!!" and terminate the
function.
Step 4 - If both are not matched, then check whether search element is smaller or larger than
that node value.
Step 5 - If search element is smaller, then continue the search process in left subtree.
Step 6- If search element is larger, then continue the search process in right subtree.
Step 7 - Repeat the same until we find the exact element or until the search element is
compared with the leaf node.
Step 8 - If we reach to the node having the value equal to the search value then display
"Element is found" and terminate the function.
Coding:
class Node:
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data
    def insert(self, data):
        if self.data:
            if data < self.data:
                if self.left is None:
                    self.left = Node(data)
                else:
                    self.left.insert(data)
            elif data > self.data:
                if self.right is None:
                    self.right = Node(data)
                else:
                    self.right.insert(data)
        else:
            self.data = data
    def findval(self, lkpval):
        # search recursively: go left for smaller values, right for larger ones
        if lkpval < self.data:
            if self.left is None:
                return str(lkpval) + " Not Found"
            return self.left.findval(lkpval)
        elif lkpval > self.data:
            if self.right is None:
                return str(lkpval) + " Not Found"
            return self.right.findval(lkpval)
        else:
            return str(self.data) + " is found"
    def PrintTree(self):
        if self.left:
            self.left.PrintTree()
        print(self.data)
        if self.right:
            self.right.PrintTree()
root = Node(12)
root.insert(6)
root.insert(14)
root.insert(3)
print(root.findval(7))
print(root.findval(14))
Output:
7 Not Found
14 is found
Result:
Thus the Implementation of Binary Search Trees using python was executed successfully.
• In searching algorithm.
• Huffman coding is implemented with the help of BST.
• They are used in implementing dictionaries.
• They are used in spelling checking.
• BST is used for indexing in DBMS.
• They are also used in priority queues.
Viva:
Aim:
To implement heaps using Python.
CO4: Design, implement, and analyse efficient tree structures to meet requirements such as
binary ADT, AVL and heaps.
Theory:
Heaps are tree-based data structures constrained by a heap property. Heaps are used
in many famous algorithms such as Dijkstra's algorithm for finding the shortest path, the
heap sort sorting algorithm, implementing priority queues, and more.
Algorithm:
Coding:
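The coding for this experiment is not included in the source; the lines that follow look like the program's output and match the standard heapq example, so a minimal sketch using Python's heapq module is given here (the initial list is taken from the first output line; everything else is an assumption chosen so the printed results match).

import heapq

H = [1, 3, 5, 78, 21, 45]      # sample data
heapq.heapify(H)               # rearrange the list into a min-heap
print(H)                       # [1, 3, 5, 78, 21, 45]

heapq.heappush(H, 8)           # add a new element while keeping the heap property
print(H)                       # [1, 3, 5, 78, 21, 45, 8]

heapq.heappop(H)               # remove the smallest element (1)
print(H)                       # [3, 8, 5, 78, 21, 45]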
Output:
[1, 3, 5, 78, 21, 45]
[1, 3, 5, 78, 21, 45, 8]
[3, 8, 5, 78, 21, 45]
Result:
• Priority Queue: Heap is used in the construction of the priority queue efficiently.
You can easily insert, delete, and identify priority elements, or you can insert and
extract the element with priority in the time complexity of O(log n).
• Graph Algorithms: Heap Implemented priority queues are also used in graph
algorithms, such as Dijkstra's algorithm and Prim's algorithm.
• Order Statistics: We can easily use Heap data structures to find the kth largest/
smallest element in the array.
• Embedded System: We can use heap data structure effectively in systems concerned
with security and also in embedded systems such as Linux kernel.
Viva:
Aim:
To implement graph representation and display its vertices and edges using Python.
CO5: Model problems as graph problems and implement efficient graph algorithms to solve
them
Theory:
Graphs in data structures are used to represent the relationships between objects.
Every graph consists of a set of points known as vertices or nodes connected by lines known
as edges. The vertices in a network represent entities.
Algorithm:
Coding:
class graph:
    def __init__(self, gdict=None):
        if gdict is None:
            gdict = {}
        self.gdict = gdict
    def getVertices(self):
        return list(self.gdict.keys())
graph_elements = { "a" : ["b","c"],
"b" : ["a", "d"],
"c" : ["a", "d"],
"d" : ["e"],
"e" : ["d"]
}
g = graph(graph_elements)
print(g.getVertices())
class graph:
def __init__(self, gdict=None):
if gdict is None:
gdict = {}
self.gdict = gdict
def edges(self):
return self.findedges()
def findedges(self):
edgename = []
for vrtx in self.gdict:
for nxtvrtx in self.gdict[vrtx]:
if {nxtvrtx, vrtx} not in edgename:
edgename.append({vrtx, nxtvrtx})
return edgename
graph_elements = { "a" : ["b","c"],
"b" : ["a", "d"],
"c" : ["a", "d"],
"d" : ["e"],
"e" : ["d"]
}
g = graph(graph_elements)
print(g.edges())
Output:
DISPLAYING VERTICES
['a', 'b', 'c', 'd', 'e']
DISPLAYING EDGES
[{'a', 'b'}, {'a', 'c'}, {'d', 'b'}, {'c', 'd'}, {'d', 'e'}]
Result:
Some of the real-life applications of graph data structure include Social Graphs, Knowledge
Graphs, Path Optimization Algorithms, Recommendation Engines, and Scientific
Computations.
Viva:
Aim:
To implement graph traversal algorithms (DFS and BFS) using Python.
CO5: Model problems as graph problems and implement efficient graph algorithms to solve
them
Theory:
Algorithm:
DFS:
Step 2 - Select any vertex as starting point for traversal. Visit that vertex and push it on to the
Stack.
Step 3 - Visit any one of the non-visited adjacent vertices of a vertex which is at the top of
stack and push it on to the stack.
Step 4 - Repeat step 3 until there is no new vertex to be visited from the vertex which is at
the top of the stack.
Step 5 - When there is no new vertex to visit then use back tracking and pop one vertex from
the stack.
Step 7 - When stack becomes Empty, then produce final spanning tree by removing unused
edges from the graph
BFS:
Step 2 - Select any vertex as starting point for traversal. Visit that vertex and insert it into the
Queue.
Step 3 - Visit all the non-visited adjacent vertices of the vertex which is at front of the Queue
and insert them into the Queue.
Step 4 - When there is no new vertex to be visited from the vertex which is at front of the
Queue then delete that vertex.
Step 6 - When queue becomes empty, then produce final spanning tree by removing unused
edges from the graph
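The coding for this experiment is missing from the source, and the graph behind the output fragment shown below (FOUND: J, ['A', 'D', 'F', 'G', 'I']) cannot be recovered; the sketch below is therefore a generic BFS/DFS over an assumed adjacency-list graph, with function names and sample data of my own.

from collections import deque

graph = {                 # assumed sample adjacency list
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C', 'E'],
    'E': ['D'],
}

def bfs(graph, start):
    """Visit vertices level by level using a queue; return the visit order."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

def dfs(graph, start, visited=None):
    """Visit vertices by going as deep as possible first; return the visit order."""
    if visited is None:
        visited = []
    visited.append(start)
    for neighbour in graph[start]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited

print("BFS:", bfs(graph, 'A'))   # ['A', 'B', 'C', 'D', 'E']
print("DFS:", dfs(graph, 'A'))   # ['A', 'B', 'D', 'C', 'E']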
BFS Output:
DFS Output:
FOUND: J
['A', 'D', 'F', 'G', 'I']
Result:
Viva:
Aim:
To implement the single source shortest path algorithm using the Bellman-Ford algorithm.
CO5: Model problems as graph problems and implement efficient graph algorithms to solve
them
Theory:
The Single-Source Shortest Path (SSSP) problem consists of finding the shortest paths
between a given vertex v and all other vertices in the graph. Algorithms such as Breadth-
First-Search (BFS) for unweighted graphs or Dijkstra solve this problem.
Algorithm:
Step 1:This step initializes distances from source to all vertices as infinite and distance to
source itself as 0.
Step 2:Create an array dist[] of size |V| with all values as infinite except dist[src] where src is
source vertex.
Step 3:This step calculates shortest distances.
Step 4:Do following |V|-1 times where |V| is the number of vertices in given graph.
Step 5:Do following for each edge u-v
If dist[v] > dist[u] + weight of edge uv, then update dist[v]
dist[v] = dist[u] + weight of edge uv
Step 6:This step reports if there is a negative weight cycle in graph.
Step 7:Do following for each edge u-v
If dist[v] > dist[u] + weight of edge uv, then report "Graph contains negative weight cycle".
The idea of this check is that the relaxation steps above guarantee the shortest distances if the
graph doesn't contain a negative weight cycle. If we iterate through all edges one more time and
still get a shorter path for any vertex, then there is a negative weight cycle.
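No coding section survives for this experiment, and the output fragment below is too garbled to reconstruct; the following is a generic Bellman-Ford sketch that follows the steps above (the vertex count and edge list are assumptions, so its result does not correspond to the fragment below).

def bellman_ford(vertices, edges, src):
    """Single source shortest paths; edges is a list of (u, v, weight) tuples."""
    INF = float("inf")
    dist = [INF] * vertices
    dist[src] = 0
    # relax every edge |V| - 1 times
    for _ in range(vertices - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further improvement means a negative weight cycle
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            print("Graph contains negative weight cycle")
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]   # assumed sample graph
print(bellman_ford(4, edges, 0))                        # [0, 4, 1, 5]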
Output:
2
3 -2
Result:
Thus the Implementation of single source shortest path algorithm was successfully executed.
Aim:
To implement minimum spanning tree algorithms using Kruskal's algorithm in Python.
CO5: Model problems as graph problems and implement efficient graph algorithms to solve
them
Theory:
In Kruskal's algorithm, sort all edges of the given graph in increasing order. Then it
keeps on adding new edges and nodes in the MST if the newly added edge does not form a
cycle. It picks the minimum weighted edge at first and the maximum weighted edge at last.
Algorithm:
Coding:
class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    def find(self, parent, i):
        # find the root of the set that vertex i belongs to
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    def apply_union(self, parent, rank, x, y):
        xroot = self.find(parent, x)
        yroot = self.find(parent, y)
        if rank[xroot] < rank[yroot]:
            parent[xroot] = yroot
        elif rank[xroot] > rank[yroot]:
            parent[yroot] = xroot
        else:
            parent[yroot] = xroot
            rank[xroot] += 1

    def kruskal_algo(self):
        result = []
        i, e = 0, 0
        # sort the edges by weight
        self.graph = sorted(self.graph, key=lambda item: item[2])
        parent = []
        rank = []
        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        while e < self.V - 1:
            u, v, w = self.graph[i]
            i = i + 1
            x = self.find(parent, u)
            y = self.find(parent, v)
            if x != y:
                e = e + 1
                result.append([u, v, w])
                self.apply_union(parent, rank, x, y)
        for u, v, weight in result:
            print("%d - %d: %d" % (u, v, weight))

g = Graph(6)
g.add_edge(0, 1, 4)
g.add_edge(0, 2, 4)
g.add_edge(1, 2, 2)
g.add_edge(1, 0, 4)
g.add_edge(2, 0, 4)
g.add_edge(2, 1, 2)
g.add_edge(2, 3, 3)
g.add_edge(2, 5, 2)
g.add_edge(2, 4, 4)
g.add_edge(3, 2, 3)
g.add_edge(3, 4, 3)
g.add_edge(4, 2, 4)
g.add_edge(4, 3, 3)
g.add_edge(5, 2, 2)
g.add_edge(5, 4, 3)
g.kruskal_algo()
Output:
1 - 2: 2
2 - 5: 2
2 - 3: 3
3 - 4: 3
0 - 1: 4
Result:
• Landing cables.
• TV Network.
• Tour Operations.
• LAN Networks.
• A network of pipes for drinking water or natural gas.
• An electric grid.
• Single-link Cluster.
Viva: