Lab File

EXPERIMENT NO.

Aim / Title: Write a program for Iterative and Recursive Binary Search.

Problem Statement: Program for Iterative and Recursive Binary Search.

Objectives: To understand the implementation of Recursive Binary Search

Outcomes: Student should be able to search a value in logarithmic time, i.e. O(log N), which makes binary search ideal for finding a number in a huge list.

Pre-requisite: Basic knowledge of C/C++ programming language

Hardware requirements: PC i3 and above.

Software requirements: Software for C/C++ (any environment such as the Turbo/Borland C compiler, Dev-C++, Code::Blocks, etc.)

Theory:

Binary Search is a search algorithm used to find the position of an element (target value) in a sorted array. The array must be sorted before applying binary search.

Binary search is also known as logarithmic search, binary chop, or half-interval search.

Working
The binary search algorithm works by comparing the element to be searched with the middle element of the array and, based on this comparison, follows the required procedure.

Case 1 − element = middle: the element is found; return the index.

Case 2 − element > middle: search for the element in the sub-array from index middle+1 to n.

Case 3 − element < middle: search for the element in the sub-array from index 0 to middle-1.

ALGORITHM
Parameters: initial_value, end_value

Step 1 : Find the middle element of the array using
         middle = (initial_value + end_value) / 2 ;
Step 2 : If array[middle] = element, return 'element found' and the index.
Step 3 : If array[middle] > element, call the function with end_value = middle - 1.
Step 4 : If array[middle] < element, call the function with initial_value = middle + 1.
Step 5 : Exit.

The implementation of the binary search algorithm repeats this comparison on a smaller and smaller range. The repetition can be of two types −

• Iterative
• Recursive

An iterative call loops over the same block of code multiple times, while a recursive call is a function calling itself again and again.
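Since the program itself is to be written and submitted by the student, the following is only a minimal C++ sketch of both variants for reference; the sorted array and the key in main() are illustrative values.

#include <iostream>
#include <vector>

// Iterative binary search: returns the index of key in a sorted array, or -1.
int binarySearchIterative(const std::vector<int>& a, int key) {
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int middle = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (a[middle] == key) return middle;
        if (a[middle] < key) low = middle + 1; // search the right half
        else high = middle - 1;                // search the left half
    }
    return -1;
}

// Recursive binary search over the range [low, high].
int binarySearchRecursive(const std::vector<int>& a, int low, int high, int key) {
    if (low > high) return -1;
    int middle = low + (high - low) / 2;
    if (a[middle] == key) return middle;
    if (a[middle] < key) return binarySearchRecursive(a, middle + 1, high, key);
    return binarySearchRecursive(a, low, middle - 1, key);
}

int main() {
    std::vector<int> a = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};   // must be sorted
    int key = 23;
    std::cout << "Iterative: index " << binarySearchIterative(a, key) << '\n';
    std::cout << "Recursive: index "
              << binarySearchRecursive(a, 0, (int)a.size() - 1, key) << '\n';
    return 0;
}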

Instructions: None

Program: To be performed by student

Output: To be attached after performance

Conclusion: Hand written, to be submitted by student

Sample Viva Questions and Answers:


• What is the logic of the Fibonacci series? Ans- The Fibonacci series is a pattern of numbers where each number is the sum of the previous two consecutive numbers. The first two numbers are 0 and 1. The third number in the sequence is 0+1=1, the fourth number is the sum of the 2nd and 3rd numbers, i.e. 1+1=2, and so on.
The Fibonacci sequence is the series of numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

• What do you mean by an array? How do you create an array?

• Can you declare an array without assigning its size?
Ans- Yes, you can declare an array without explicitly assigning its size, for example when it is initialized at the time of declaration.

• Can we change the size of an array at run time?
Ans- No, we cannot change the size of an array at run time.

• Is there any difference between int[] a and int a[]?
Ans- No, there is no difference between int[] a and int a[]; both declare an array of integers.

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |
EXPERIMENT NO. 2

Aim / Title: Write a program for Merge Sort.

Problem Statement: Program for Merge Sort.

Objectives: To understand the implementation of Merge Sort

Outcomes: Student should be able to perform sorting with optimal complexity using merging
algorithm.

Pre-requisite: Basic knowledge of C/C++ programming language

Hardware requirements: PC i3 and above.

Software requirements: Software for C/C++ (any environment such as the Turbo/Borland C compiler, Dev-C++, Code::Blocks, etc.)

Theory:

Merge sort
Merge sort is an algorithm that follows the divide and conquer approach. Consider an array A of n elements. The algorithm processes the elements in three steps.
If A contains 0 or 1 elements, then it is already sorted; otherwise, divide A into two sub-arrays with an equal number of elements.
Conquer means sort the two sub-arrays recursively using merge sort.
Combine the sub-arrays to form a single, final sorted array, maintaining the ordering of the array.
The main idea behind merge sort is that a shorter list takes less time to sort.
Complexity

Complexity | Best Case | Average Case | Worst Case
Time Complexity | O(n log n) | O(n log n) | O(n log n)
Space Complexity | O(n) | O(n) | O(n)

Example :
Consider the following array of 7 elements. Sort the array using merge sort.
• A = {10, 5, 2, 23, 45, 21, 7}
Algorithm

MERGE (ARR, BEG, MID, END)
• Step 1: [INITIALIZE] SET I = BEG, J = MID + 1, INDEX = 0
• Step 2: Repeat while (I <= MID) AND (J <= END)
      IF ARR[I] < ARR[J]
          SET TEMP[INDEX] = ARR[I]
          SET I = I + 1
      ELSE
          SET TEMP[INDEX] = ARR[J]
          SET J = J + 1
      [END OF IF]
      SET INDEX = INDEX + 1
  [END OF LOOP]
• Step 3: [Copy the remaining elements of the right sub-array, if any]
      IF I > MID
          Repeat while J <= END
              SET TEMP[INDEX] = ARR[J]
              SET INDEX = INDEX + 1, SET J = J + 1
          [END OF LOOP]
      [Copy the remaining elements of the left sub-array, if any]
      ELSE
          Repeat while I <= MID
              SET TEMP[INDEX] = ARR[I]
              SET INDEX = INDEX + 1, SET I = I + 1
          [END OF LOOP]
      [END OF IF]
• Step 4: [Copy the contents of TEMP back to ARR] SET K = 0
• Step 5: Repeat while K < INDEX
      SET ARR[K] = TEMP[K]
      SET K = K + 1
  [END OF LOOP]
• Step 6: Exit

MERGE_SORT (ARR, BEG, END)
• Step 1: IF BEG < END
      SET MID = (BEG + END)/2
      CALL MERGE_SORT (ARR, BEG, MID)
      CALL MERGE_SORT (ARR, MID + 1, END)
      CALL MERGE (ARR, BEG, MID, END)
  [END OF IF]
• Step 2: END
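Since the submitted program is the student's own, the following is only a minimal C++ sketch of the MERGE and MERGE_SORT steps above for reference; the example array is the one from the theory section.

#include <iostream>
#include <vector>

// Merge the two sorted halves arr[beg..mid] and arr[mid+1..end] using a temp array.
void merge(std::vector<int>& arr, int beg, int mid, int end) {
    std::vector<int> temp;
    int i = beg, j = mid + 1;
    while (i <= mid && j <= end)
        temp.push_back(arr[i] <= arr[j] ? arr[i++] : arr[j++]);
    while (i <= mid) temp.push_back(arr[i++]);   // leftover left half
    while (j <= end) temp.push_back(arr[j++]);   // leftover right half
    for (int k = 0; k < (int)temp.size(); ++k)
        arr[beg + k] = temp[k];                  // copy TEMP back into ARR
}

// Divide: sort each half recursively, then combine with merge().
void mergeSort(std::vector<int>& arr, int beg, int end) {
    if (beg >= end) return;          // 0 or 1 element: already sorted
    int mid = (beg + end) / 2;
    mergeSort(arr, beg, mid);
    mergeSort(arr, mid + 1, end);
    merge(arr, beg, mid, end);
}

int main() {
    std::vector<int> a = {10, 5, 2, 23, 45, 21, 7};   // the example array above
    mergeSort(a, 0, (int)a.size() - 1);
    for (int x : a) std::cout << x << ' ';
    std::cout << '\n';                                // 2 5 7 10 21 23 45
    return 0;
}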

Instructions: None

Program: To be performed by student

Output: To be attached after performance

Conclusion: Hand written, to be submitted by student

Sample Viva Questions and Answers:


• Merge sort uses which of the following techniques to implement sorting?
Ans- Merge sort uses the divide and conquer technique to implement sorting.

• What is the average case time complexity of merge sort?
Ans- In sorting n objects, merge sort has an average case performance of O(n log n).

• Merge sort can be implemented using O(1) auxiliary space. True or false?
Ans- True (for example, when merge sort is applied to a linked list, the merging can be done in place without an auxiliary array).

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO. 3

Aim / Title: Write a program for Quick Sort.

Problem Statement: Implementing Quick Sort Algorithm.

Objectives: To make a program of sorting ‘n’ numbers using Quick Sort Algorithm.
Outcomes: Students should be able to understand and implement the Quick Sort Algorithm

Pre-requisite: Basic knowledge of C/C++ programming language

Hardware requirements: PC i3 and above.

Software requirements: Software for C/C++ (any environment such as the Turbo/Borland C compiler, Dev-C++, Code::Blocks, etc.)

Theory:

Quick Sort Algorithm


Quick Sort is a sorting technique based on the concept of divide and conquer, just like merge sort. But in quick sort all the heavy lifting (major work) is done while dividing the array into subarrays, while in merge sort all the real work happens while merging the subarrays. In quick sort, the combine step does absolutely nothing.
It is also called partition-exchange sort. This algorithm divides the list into three main parts:
• Elements less than the pivot element
• Pivot element (central element)
• Elements greater than the pivot element
The pivot can be any element of the array: the first element, the last element, or any random element. In this experiment, we take the rightmost (last) element as the pivot.
For example, in the array {52, 37, 63, 14, 17, 8, 6, 25}, we take 25 as the pivot. After the first pass, the list will be changed like this:
{6 8 17 14 25 63 37 52}

Hence after the first pass the pivot is placed at its final position, with all the elements smaller than it on its left and all the elements larger than it on its right. Now {6, 8, 17, 14} and {63, 37, 52} are considered as two separate subarrays, the same recursive logic is applied to them, and we keep doing this until the complete array is sorted.
How does Quick Sort work?
Following are the steps involved in the quick sort algorithm:
• After selecting an element as the pivot, which is the last index of the array in our case, we divide the array for the first time.
• In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2 subarrays; in partitioning, the array elements are positioned so that all the elements smaller than the pivot end up on the left side of the pivot and all the elements greater than the pivot end up on its right side.
• The pivot element will then be at its final sorted position.
• The elements to its left and right may not yet be sorted.
• Then we pick the subarrays (elements on the left of the pivot and elements on the right of the pivot) and perform partitioning on them by choosing a pivot in each subarray.
Let's consider an array with values {9, 7, 5, 11, 12, 2, 14, 3, 10, 6}.
The following walkthrough shows how quick sort will sort the given array.

In step 1, we select the last element as the pivot, which is 6 in this case, and call for partitioning, re-arranging the array in such a way that 6 is placed at its final position, with all the elements less than it to its left and all the elements greater than it to its right.

Then we pick the subarray on the left and the subarray on the right and select a pivot for each of them; in this example, we choose 3 as the pivot for the left subarray and 11 as the pivot for the right subarray.

And we again call for partitioning.
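Since the program is to be written by the student, the following is only a minimal C++ sketch of quick sort with the last element as pivot (Lomuto partition scheme); the example array is the one from the theory above.

#include <iostream>
#include <utility>
#include <vector>

// Partition step: place the last element (pivot) at its final position;
// everything smaller ends up on its left, everything larger on its right.
int partition(std::vector<int>& arr, int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; ++j)
        if (arr[j] < pivot) std::swap(arr[++i], arr[j]);
    std::swap(arr[i + 1], arr[high]);
    return i + 1;                     // final index of the pivot
}

void quickSort(std::vector<int>& arr, int low, int high) {
    if (low >= high) return;
    int p = partition(arr, low, high);
    quickSort(arr, low, p - 1);       // left subarray (elements < pivot)
    quickSort(arr, p + 1, high);      // right subarray (elements > pivot)
}

int main() {
    std::vector<int> a = {52, 37, 63, 14, 17, 8, 6, 25};   // example from the theory
    quickSort(a, 0, (int)a.size() - 1);
    for (int x : a) std::cout << x << ' ';
    std::cout << '\n';                                      // 6 8 14 17 25 37 52 63
    return 0;
}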

Instructions: None

Program: To be performed by student

Output: To be attached after performance

Conclusion: Hand written, to be submitted by student

Sample Viva Questions

• Quick sort follows Divide-and-Conquer strategy. True or False?


Ans-true

• Which method is the most effective for picking the pivot element?
Ans- Median-of-three partitioning is the best method for choosing an appropriate pivot element. Picking the first, last or a random element as the pivot is not as effective.
• Find the pivot element from the given input using the median-of-three partitioning method: 8, 1, 4, 9, 6, 3, 5, 2, 7, 0.
Ans- Left element = 8, right element = 0, centre element (at position (left + right)/2) = 6. The median of 8, 0 and 6 is 6, so the pivot is 6.

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO. 4

Aim / Title: Write a program for Strassen’s Matrix Multiplication.

Problem Statement: Let us consider two matrices X and Y. We want to calculate the resultant matrix Z by multiplying X and Y.

Objectives: Calculate the resultant matrix Z by multiplying X and Y by making a program.

Outcomes: Students should be able to understand and implement Strassen's Matrix Multiplication.

Pre-requisite: Basic knowledge of C/C++ programming language

Hardware requirements: PC i3 and above.

Software requirements: Software for C/C++ (any environment such as the Turbo/Borland C compiler, Dev-C++, Code::Blocks, etc.)

Theory:

Naïve Method
First, we will discuss the naïve method and its complexity. Here, we are calculating Z = X × Y.
Using the naïve method, two matrices (X and Y) can be multiplied if the orders of these matrices are p × q and q × r. Following is the algorithm.

Algorithm: Matrix-Multiplication (X, Y, Z)


for i = 1 to p do
for j = 1 to r do
Z[i,j] := 0
for k = 1 to q do
Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]

Complexity
Here, we assume that integer operations take O(1) time. There are three for loops in this algorithm, nested one inside another. Hence, the algorithm takes O(n³) time to execute (taking p = q = r = n).
Strassen's Matrix Multiplication Algorithm
In this context, using Strassen's matrix multiplication algorithm, the time consumption can be improved a little bit.

Strassen's matrix multiplication can be performed only on square matrices where n is a power of 2. The order of both of the matrices is n × n.

Divide X, Y and Z into four (n/2)×(n/2) matrices as represented below −

X = | A  B |      Y = | E  F |      Z = | I  J |
    | C  D |          | G  H |          | K  L |

Using Strassen's algorithm, compute the following seven products:

M1 := (A + C) × (E + F)
M2 := (B + D) × (G + H)
M3 := (A − D) × (E + H)
M4 := A × (F − H)
M5 := (C + D) × E
M6 := (A + B) × H
M7 := D × (G − E)

Then,

I := M2 + M3 − M6 − M7
J := M4 + M6
K := M5 + M7
L := M1 − M3 − M4 − M5

Analysis

Each call multiplies two n × n matrices using 7 multiplications of (n/2) × (n/2) matrices and a constant number of (n/2) × (n/2) additions and subtractions, giving the recurrence

T(n) = 7 T(n/2) + c n², where c is a constant.

Using this recurrence relation, we get T(n) = O(n^(log2 7)).

Hence, the complexity of Strassen's matrix multiplication algorithm is approximately O(n^2.81).
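For reference only, a minimal C++ sketch of Strassen's method using the seven products M1..M7 above is given below; it assumes n is a power of 2, and the small 2 × 2 input in main() is just an illustrative example.

#include <iostream>
#include <vector>
using Matrix = std::vector<std::vector<int>>;

// Add (sign = +1) or subtract (sign = -1) two n x n matrices.
Matrix addSub(const Matrix& A, const Matrix& B, int sign) {
    int n = A.size();
    Matrix C(n, std::vector<int>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            C[i][j] = A[i][j] + sign * B[i][j];
    return C;
}

// Strassen's multiplication for n x n matrices where n is a power of 2.
Matrix strassen(const Matrix& X, const Matrix& Y) {
    int n = X.size();
    if (n == 1) return {{X[0][0] * Y[0][0]}};
    int h = n / 2;
    // Extract the (n/2) x (n/2) block starting at row r, column c.
    auto block = [h](const Matrix& M, int r, int c) {
        Matrix B(h, std::vector<int>(h));
        for (int i = 0; i < h; ++i)
            for (int j = 0; j < h; ++j)
                B[i][j] = M[r + i][c + j];
        return B;
    };
    Matrix A = block(X, 0, 0), B = block(X, 0, h), C = block(X, h, 0), D = block(X, h, h);
    Matrix E = block(Y, 0, 0), F = block(Y, 0, h), G = block(Y, h, 0), H = block(Y, h, h);
    // The seven recursive products from the theory above.
    Matrix M1 = strassen(addSub(A, C, 1), addSub(E, F, 1));
    Matrix M2 = strassen(addSub(B, D, 1), addSub(G, H, 1));
    Matrix M3 = strassen(addSub(A, D, -1), addSub(E, H, 1));
    Matrix M4 = strassen(A, addSub(F, H, -1));
    Matrix M5 = strassen(addSub(C, D, 1), E);
    Matrix M6 = strassen(addSub(A, B, 1), H);
    Matrix M7 = strassen(D, addSub(G, E, -1));
    // Combine into the four quadrants of Z.
    Matrix I = addSub(addSub(M2, M3, 1), addSub(M6, M7, 1), -1);   // M2 + M3 - M6 - M7
    Matrix J = addSub(M4, M6, 1);                                  // M4 + M6
    Matrix K = addSub(M5, M7, 1);                                  // M5 + M7
    Matrix L = addSub(addSub(M1, M3, -1), addSub(M4, M5, 1), -1);  // M1 - M3 - M4 - M5
    Matrix Z(n, std::vector<int>(n));
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < h; ++j) {
            Z[i][j] = I[i][j];
            Z[i][j + h] = J[i][j];
            Z[i + h][j] = K[i][j];
            Z[i + h][j + h] = L[i][j];
        }
    return Z;
}

int main() {
    Matrix X = {{1, 2}, {3, 4}};
    Matrix Y = {{5, 6}, {7, 8}};
    Matrix Z = strassen(X, Y);        // expected {{19, 22}, {43, 50}}
    for (auto& row : Z) {
        for (int v : row) std::cout << v << ' ';
        std::cout << '\n';
    }
    return 0;
}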

Instructions: None

Program: To be performed by student


Output: To be attached after performance

Conclusion: Hand written, to be submitted by student

Sample Viva Questions and Answers:


• What is the running time of Strassen's algorithm for matrix multiplication?
Ans- Strassen's algorithm requires only 7 recursive multiplications of (n/2) × (n/2) matrices and Θ(n²) scalar additions and subtractions, yielding a running time of O(n^2.81).

• Strassen's matrix multiplication algorithm follows which technique?
Ans- Strassen's matrix multiplication algorithm follows the divide and conquer technique.

• The number of scalar additions and subtractions used in Strassen's matrix multiplication algorithm is?
Ans- Θ(n²).

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO. 5

Aim / Title: Write a program for Optimal Merge Pattern.

Problem Statement: Program for Optimal Merging.

Objectives: To understand the implementation of optimal merging

Outcomes: Student should be able to implement the Optimal Merge Pattern algorithm.

Pre-requisite: Basic knowledge of C/C++ programming language

Hardware requirements: PC i3 and above.

Software requirements: Software for C/C++ (any environment such as the Turbo/Borland C compiler, Dev-C++, Code::Blocks, etc.)

Theory:

Merge a set of sorted files of different lengths into a single sorted file. We need to find an optimal solution, where the resultant file will be generated in minimum time.
If a number of sorted files are given, there are many ways to merge them into a single sorted file. The merge can be performed pair-wise; hence this type of merging is called 2-way merge pattern.
As different pairings require different amounts of time, in this strategy we want to determine an optimal way of merging many files together. At each step, the two shortest sequences are merged.
Merging a p-record file and a q-record file requires p + q record moves, so the obvious choice is to merge the two smallest files together at each step.
Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n sorted files {f1, f2, f3, …, fn}. Initially, each element of this set is considered as a single-node binary tree. To find the optimal solution, the following algorithm is used.
Algorithm: TREE (n)
for i := 1 to n – 1 do
    declare new node
    node.leftchild := least (list)
    node.rightchild := least (list)
    node.weight := (node.leftchild).weight + (node.rightchild).weight
    insert (list, node);
return least (list);
At the end of this algorithm, the weight of the root node represents the optimal cost.
Example
Let us consider the given files, f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and 30 number of elements
respectively.
If merge operations are performed according to the provided sequence, then
M1 = merge f1 and f2 => 20 + 30 = 50
M2 = merge M1 and f3 => 50 + 10 = 60
M3 = merge M2 and f4 => 60 + 5 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Hence, the total number of operations is
50 + 60 + 65 + 95 = 270
Now, the question arises: is there any better solution? Sorting the files according to their number of elements in ascending order gives the sequence:

f4, f3, f1, f2, f5


Hence, merge operations can be performed on this sequence.
M1 = merge f4 and f3 => 5 + 10 = 15
M2 = merge M1 and f1 => 15 + 20 = 35
M3 = merge M2 and f2 => 35 + 30 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Therefore, the total number of operations is
15 + 35 + 65 + 95 = 210
Obviously, this is better than the previous one.
In this context, we are now going to solve the problem using the greedy algorithm given above, which always merges the two smallest remaining lists.
Initial set: f4 (5), f3 (10), f1 (20), f2 (30), f5 (30)

Step 1: merge f4 and f3 => 5 + 10 = 15
Step 2: merge 15 and f1 => 15 + 20 = 35
Step 3: merge f2 and f5 => 30 + 30 = 60
Step 4: merge 35 and 60 => 35 + 60 = 95
Hence, the optimal solution takes 15 + 35 + 60 + 95 = 205 operations.
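For reference only, a minimal C++ sketch of this greedy 2-way merge using a min-priority queue is given below; it reproduces the cost of 205 for the example files above.

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// Returns the minimum total number of record moves needed to merge
// all files into one, merging the two smallest files at each step.
long long optimalMerge(const std::vector<long long>& sizes) {
    std::priority_queue<long long, std::vector<long long>, std::greater<long long>>
        pq(sizes.begin(), sizes.end());
    long long total = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();
        long long b = pq.top(); pq.pop();
        total += a + b;      // cost of this merge
        pq.push(a + b);      // the merged file goes back into the pool
    }
    return total;
}

int main() {
    // Files f1..f5 from the example above.
    std::vector<long long> sizes = {20, 30, 10, 5, 30};
    std::cout << "Optimal merge cost = " << optimalMerge(sizes) << '\n';  // prints 205
    return 0;
}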

Instructions: None

Program: To be performed by student

Output: To be attached after performance


Conclusion: Hand written, to be submitted by student

Sample Viva Questions and Answers:


• The optimal merge pattern is based on which method?
Ans- The optimal merge pattern is based on the greedy method.

• The complexity of the greedy job scheduling with deadlines algorithm is defined as?
Ans-

• Job sequencing with deadlines is based on which method?
Ans-

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO. 6

Aim / Title: Write a program for Huffman coding.

Problem Statement: Program for Huffman coding.

Objectives: To understand the implementation of Huffman coding

Outcomes: Student should be able to generate Huffman coding

Pre-requisite: Basic knowledge of C/C++ programming language

Hardware requirements: PC i3 and above.

Software requirements: Software for C/C++ (any environment such as the Turbo/Borland C compiler, Dev-C++, Code::Blocks, etc.)

Theory:

Huffman Coding is a technique of compressing data to reduce its size without losing any of the
details. It was first developed by David Huffman.
Huffman Coding is generally useful to compress the data in which there are frequently occurring
characters.

How Huffman Coding works?


Suppose a string of 15 characters is to be sent over a network. Each character occupies 8 bits, so a total of 8 * 15 = 120 bits are required to send this string.
Using the Huffman Coding technique, we can compress the string to a smaller size.
Huffman coding first creates a tree using the frequencies of the characters and then generates a code for each character.
Once the data is encoded, it has to be decoded. Decoding is done using the same tree.
Huffman Coding prevents any ambiguity in the decoding process through the concept of prefix codes, i.e. the code associated with one character is never a prefix of the code of any other character. The tree created by the algorithm maintains this property.
Huffman coding is done with the help of the following steps.
• Calculate the frequency of each character in the string.
• Sort the characters in increasing order of frequency. These are stored in a priority queue Q.
• Make each unique character a leaf node.
• Create an empty node z. Assign the minimum frequency to the left child of z and assign
the second minimum frequency to the right child of z. Set the value of the z as the sum of
the above two minimum frequencies.



• Remove these two minimum frequencies from Q and add the sum into the list of frequencies (these sums form the internal nodes of the tree).
• Insert node z into the tree.
• Repeat steps 3 to 5 for all the characters.

• For each non-leaf node, assign 0 to the left edge and 1 to the right edge.



For sending the above string over a network, we have to send the tree as well as the above
compressed code. The total size is given by the table below.
 
Character | Frequency | Code | Size
A | 5 | 11 | 5*2 = 10
B | 1 | 100 | 1*3 = 3
C | 6 | 0 | 6*1 = 6
D | 3 | 101 | 3*3 = 9
4 * 8 = 32 bits | 15 bits | | 28 bits
Without encoding, the total size of the string was 120 bits. After encoding, the size is reduced to 32 + 15 + 28 = 75 bits.

Decoding the code


For decoding, we take the encoded bits and traverse the tree from the root to find the character. For example, if 101 is to be decoded, we traverse from the root: 1 takes us to the right child, 0 to the left, 1 to the right, and we reach the leaf D.

Huffman Coding Algorithm

create a priority queue Q consisting of each unique character
sort them in ascending order of their frequencies
for all the unique characters:
    create a newNode
    extract the minimum value from Q and assign it to leftChild of newNode
    extract the minimum value from Q and assign it to rightChild of newNode
    calculate the sum of these two minimum values and assign it to the value of newNode
    insert this newNode into the tree
return rootNode
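For reference only, a minimal C++ sketch of the above algorithm using a min-priority queue is given below; the frequencies are taken from the example table (A=5, B=1, C=6, D=3). The exact codes printed may vary with tie-breaking, but the encoded size of 28 bits does not.

#include <iostream>
#include <queue>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Node of the Huffman tree.
struct Node {
    char ch;            // character ('\0' for internal nodes)
    int freq;           // frequency (or sum of child frequencies)
    Node *left, *right;
    Node(char c, int f, Node* l = nullptr, Node* r = nullptr)
        : ch(c), freq(f), left(l), right(r) {}
};

// Comparator so the priority queue always yields the lowest frequency first.
struct Cmp {
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Walk the tree: left edge = '0', right edge = '1'.
void buildCodes(const Node* n, const std::string& code,
                std::unordered_map<char, std::string>& codes) {
    if (!n) return;
    if (!n->left && !n->right) { codes[n->ch] = code; return; }
    buildCodes(n->left, code + "0", codes);
    buildCodes(n->right, code + "1", codes);
}

int main() {
    // Frequencies from the example above: A=5, B=1, C=6, D=3.
    std::vector<std::pair<char, int>> freq = {{'A', 5}, {'B', 1}, {'C', 6}, {'D', 3}};

    std::priority_queue<Node*, std::vector<Node*>, Cmp> Q;
    for (auto& p : freq) Q.push(new Node(p.first, p.second));

    // Repeatedly merge the two least frequent nodes into a new internal node.
    while (Q.size() > 1) {
        Node* l = Q.top(); Q.pop();
        Node* r = Q.top(); Q.pop();
        Q.push(new Node('\0', l->freq + r->freq, l, r));
    }
    Node* root = Q.top();

    std::unordered_map<char, std::string> codes;
    buildCodes(root, "", codes);

    int bits = 0;
    for (auto& p : freq) {
        std::cout << p.first << " : " << codes[p.first] << '\n';
        bits += p.second * (int)codes[p.first].size();
    }
    std::cout << "Encoded size = " << bits << " bits\n";  // 28 bits for this example
    return 0;   // (tree nodes are not freed in this short sketch)
}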

Instructions: None
Program: To be performed by student

Output: To be attached after performance

Conclusion: Hand written, to be submitted by student

Sample Viva Questions and Answers:

1-2 lines answers – handwritten to be given by students


• The type of encoding where no character code is the prefix of another character code is
called?
• What is the running time of the Huffman encoding algorithm?
• What is the running time of the Huffman algorithm, if its implementation of the priority
queue is done using linked lists?

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO 7
Aim / Title: Greedy Paradigm: Minimum Spanning Tree using Kruskal’s Algorithm.

Problem Statement: Write a program for minimum spanning trees using Kruskal’s algorithm.
Objectives: To understand the algorithm to determine the minimum spanning tree using
Kruskal’s algorithm.

Outcomes: Students will be able to understand Kruskal’s Algorithm and greedy approach of
solving problems.

Pre-requisite: Greedy Strategy, Graphs, Spanning trees

Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more

Software requirements: 64 bit Windows Operating System

Theory: Program for Kruskal’s algorithm.


What is Minimum Spanning Tree? 
Given a connected and undirected graph, a spanning tree of that graph is a sub-graph that is a
tree and connects all the vertices together. A single graph can have many different spanning trees.
A minimum spanning tree (MST) or minimum weight spanning tree for a weighted, connected,
undirected graph is a spanning tree with a weight less than or equal to the weight of every other
spanning tree. The weight of a spanning tree is the sum of weights given to each edge of the
spanning tree. A minimum spanning tree has (V – 1) edges where V is the number of vertices in
the given graph. 

Kruskal's Algorithm:
This algorithm creates a spanning tree with minimum weight from a given weighted graph.
1. Begin
2. Create the edge list of the given graph, with their weights.
3. Sort the edge list according to weight in ascending order.
4. Draw all the nodes to create the skeleton for the spanning tree.
5. Pick the edge at the top of the edge list (i.e. the edge with minimum weight).
6. Remove this edge from the edge list.
7. Connect the vertices in the skeleton with the chosen edge. If connecting the vertices creates a cycle in the skeleton, discard this edge.
8. Repeat steps 5 to 7 until n-1 edges are added or the list of edges is exhausted.
9. Return

Time Complexity of Kruskal’s Algorithm:

Let us assume a graph with e edges and n vertices. Kruskal's algorithm starts with sorting of the edges.

Time complexity of sorting algorithm= O (e log e)

In Kruskal’s algorithm, we have to add an edge to the spanning tree, in each iteration. This
involves merging of two components.
Time complexity of merging of components = O (e log n)
Overall time complexity of the algorithm = O (e log e) + O (e log n)
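For reference only, a minimal C++ sketch of Kruskal's algorithm using a disjoint-set (union-find) structure for cycle detection is given below; the edge list in main() is an assumed example graph, and the program to be pasted above is the student's own.

#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

struct Edge { int u, v, w; };

// Disjoint-set (union-find) with path compression, used for cycle detection.
struct DSU {
    std::vector<int> parent;
    DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;   // would create a cycle
        parent[a] = b;
        return true;
    }
};

int main() {
    int n = 4;  // vertices 0..3 (a small assumed example graph)
    std::vector<Edge> edges = {
        {0, 1, 10}, {0, 2, 6}, {0, 3, 5}, {1, 3, 15}, {2, 3, 4}};

    // Sort the edge list by weight in ascending order.
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; });

    DSU dsu(n);
    int total = 0, used = 0;
    // Pick the smallest edge that does not form a cycle, until n-1 edges are chosen.
    for (const Edge& e : edges) {
        if (used == n - 1) break;
        if (dsu.unite(e.u, e.v)) {
            std::cout << e.u << " - " << e.v << " : " << e.w << '\n';
            total += e.w;
            ++used;
        }
    }
    std::cout << "MST weight = " << total << '\n';  // 19 for this example
    return 0;
}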

Instructions: None
Program: Paste code here

Output: Paste output here

Conclusion: Handwritten by students.

Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________
Sample Viva Questions and Answers:

• What is a spanning tree?


________________________________________________________________________
________________________________________________________________________
________________________________________________________________________

• How is Kruskal’s approach different from Prim’s algorithm?


________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
• State True or False. Kruskal's algorithm is better suited for dense graphs than Prim's algorithm.
________________________________________________________________________
• What is the time complexity of Kruskal’s algorithm?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO 8
Aim / Title: Greedy Paradigm: Minimum Spanning Tree Using Prim’s algorithm

Problem Statement: Write a program for minimum spanning trees using Prim’s algorithm.

Objectives: To understand the algorithm to determine the minimum spanning tree using Prim’s
algorithm.

Outcomes: Students will be able to understand Prim’s Algorithm and greedy approach of solving
problems.

Pre-requisite: Greedy Strategy, Graphs, Spanning trees

Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more

Software requirements: 64 bit Windows Operating System


Theory: Program for Prim’s algorithm.
What is Minimum Spanning Tree? 
Given a connected and undirected graph, a spanning tree of that graph is a sub-graph that is a
tree and connects all the vertices together. A single graph can have many different spanning trees.
A minimum spanning tree (MST) or minimum weight spanning tree for a weighted, connected,
undirected graph is a spanning tree with a weight less than or equal to the weight of every other
spanning tree. The weight of a spanning tree is the sum of weights given to each edge of the
spanning tree. A minimum spanning tree has (V – 1) edges where V is the number of vertices in
the given graph. 

Prim's Algorithm:
This algorithm creates a spanning tree with minimum weight from a given weighted graph.

1. Begin
2. Create the edge list of the given graph, with their weights.
3. Draw all nodes to create the skeleton for the spanning tree.
4. Select the edge with the lowest weight, add it to the skeleton and delete it from the edge list.
5. Add other edges. While adding an edge, take care that one end of the edge is already in the skeleton tree and that its cost is minimum.
6. Repeat step 5 until n-1 edges are added.
7. Return.

Time Complexity of Prim’s Algorithm:

Prim's algorithm contains two nested loops. Each of these loops has a complexity of O(n). Thus, the complexity of Prim's algorithm for a graph having n vertices is O(n²).
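For reference only, a minimal O(n²) C++ sketch of Prim's algorithm on an adjacency matrix is given below; the matrix in main() is an assumed example graph, and the program to be pasted above is the student's own.

#include <climits>
#include <iostream>
#include <vector>

// Prim's algorithm on an adjacency matrix; 0 means "no edge".
int primMST(const std::vector<std::vector<int>>& g) {
    int n = g.size();
    std::vector<int> key(n, INT_MAX);   // cheapest edge connecting each vertex to the tree
    std::vector<int> parent(n, -1);     // the other endpoint of that cheapest edge
    std::vector<bool> inMST(n, false);
    key[0] = 0;                         // start growing the tree from vertex 0

    for (int count = 0; count < n; ++count) {
        // Pick the vertex not yet in the tree with the smallest key.
        int u = -1;
        for (int v = 0; v < n; ++v)
            if (!inMST[v] && (u == -1 || key[v] < key[u])) u = v;
        inMST[u] = true;

        // Update keys of the neighbours of u that are still outside the tree.
        for (int v = 0; v < n; ++v)
            if (g[u][v] && !inMST[v] && g[u][v] < key[v]) {
                key[v] = g[u][v];
                parent[v] = u;
            }
    }

    int total = 0;
    for (int v = 1; v < n; ++v) {
        std::cout << parent[v] << " - " << v << " : " << g[parent[v]][v] << '\n';
        total += g[parent[v]][v];
    }
    return total;
}

int main() {
    // A small assumed example graph (adjacency matrix, 0 = no edge).
    std::vector<std::vector<int>> g = {
        {0, 2, 0, 6, 0},
        {2, 0, 3, 8, 5},
        {0, 3, 0, 0, 7},
        {6, 8, 0, 0, 9},
        {0, 5, 7, 9, 0}};
    std::cout << "MST weight = " << primMST(g) << '\n';  // 16 for this example
    return 0;
}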

Instructions: None
Program: Paste code here

Output: Paste output here

Conclusion: Handwritten by students.

Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________

Sample Viva Questions and Answers:

• What is a spanning tree?


________________________________________________________________________
________________________________________________________________________
________________________________________________________________________

• How is Kruskal’s approach different from Prim’s algorithm?


________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
• State True or False. Prim's algorithm is better suited for dense graphs than Kruskal's algorithm.
________________________________________________________________________
• What is the time complexity of Prim’s algorithm?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO 9
Aim / Title: Greedy Paradigm: Single Sources Shortest Path Algorithm

Problem Statement: Write a program for single source shortest path algorithm/ Dijkstra’s
Algorithm.

Objectives: To understand the algorithm to determine the single source shortest path with
Dijkstra’s technique.

Outcomes: Students will be able to understand Dijkstra’s Algorithm and greedy approach of
solving problems.

Pre-requisite: Greedy Strategy, Graphs

Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more

Software requirements: 64 bit Windows Operating System

Theory: Program for single sources shortest path algorithm or Dijkstra’s algorithm.
Let us consider a number of cities connected by roads, and a traveler who wants to travel from his home city A to the destination B with minimum cost. The traveler will be interested to know the following:
• Is there a path from city A to city B?
• If there is more than one path from A to B, which is the shortest or least-cost path?
Let us consider the graph G = (V, E), a weighting function w(e) for the edges in E and a source node v0. The problem is to determine the shortest path from v0 to all the remaining nodes of G. The solution to this problem was suggested by E. W. Dijkstra and the algorithm is popularly known as Dijkstra's algorithm. This algorithm finds the shortest paths one by one; if we have already constructed i shortest paths, then the next path to be constructed should be the next shortest path.
Let S be the set of vertices to which the shortest paths have already been generated. For z not in S, let dist[z] be the length of the shortest path starting from v0, going through only those vertices that are in S, and ending at z. Let u be the vertex most recently added to S. If dist[z] > dist[u] + w(u,z), then dist[z] is updated to dist[u] + w(u,z) and the predecessor of z is set to u. Dijkstra's algorithm is presented below.

Dijkstra’s Algorithm

1. Create cost matrix C[ ][ ] from adjacency matrix adj[ ][ ]. C[i][j] is the cost of going from
vertex i to vertex j. If there is no edge between vertices i and j then C[i][j] is infinity.
 2. Array visited[ ] is initialized to zero.
               for(i=0;i<n;i++)
                              visited[i]=0;
3. If the vertex 0 is the source vertex then visited[0] is marked as 1.
4. Create the distance matrix, by storing the cost of vertices from vertex no. 0 to n-1 from the
source vertex 0.
               for(i=1;i<n;i++)
                              distance[i]=cost[0][i];
Initially, distance of source vertex is taken as 0. i.e. distance[0]=0;
5. for(i=1;i<n;i++)
– Choose a vertex w, such that distance[w] is minimum and visited[w] is 0. Mark visited[w] as 1.
– Recalculate the shortest distance of remaining vertices from the source.
– Only, the vertices not marked as 1 in array visited[ ] should be considered for recalculation of
distance. i.e. for each vertex v
               if(visited[v]==0)
                              distance[v]=min(distance[v],
                              distance[w]+cost[w][v])
Time Complexity of Dijkstra’s Algorithm:

The program contains two nested loops, each of which has a complexity of O(n), where n is the number of vertices. So the complexity of the algorithm is O(n²).
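For reference only, a minimal O(n²) C++ sketch following the steps above is given below; the cost matrix in main() is an assumed example, with INF standing for "no edge", and the program to be pasted below is the student's own.

#include <algorithm>
#include <iostream>
#include <vector>

const int INF = 1e9;   // stands in for "infinity" (no edge / unknown distance)

// Dijkstra's single source shortest path on a cost matrix, O(n^2).
std::vector<int> dijkstra(const std::vector<std::vector<int>>& cost, int src) {
    int n = cost.size();
    std::vector<int> distance(n, INF);
    std::vector<bool> visited(n, false);
    distance[src] = 0;

    for (int i = 0; i < n; ++i) {
        // Choose the unvisited vertex w with the minimum distance.
        int w = -1;
        for (int v = 0; v < n; ++v)
            if (!visited[v] && (w == -1 || distance[v] < distance[w])) w = v;
        if (distance[w] == INF) break;   // remaining vertices are unreachable
        visited[w] = true;

        // Recalculate the distances of the remaining (unvisited) vertices via w.
        for (int v = 0; v < n; ++v)
            if (!visited[v] && cost[w][v] < INF)
                distance[v] = std::min(distance[v], distance[w] + cost[w][v]);
    }
    return distance;
}

int main() {
    // Small assumed example: cost[i][j] = INF means there is no edge i -> j.
    std::vector<std::vector<int>> cost = {
        {0, 4, 1, INF},
        {INF, 0, INF, 1},
        {INF, 2, 0, 5},
        {INF, INF, INF, 0}};
    std::vector<int> d = dijkstra(cost, 0);
    for (int v = 0; v < (int)d.size(); ++v)
        std::cout << "distance[" << v << "] = " << d[v] << '\n';
    return 0;
}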

Instructions: Dijkstra's algorithm does not work for graphs with negative weight cycles; it may give correct results for a graph with some negative edges, but this is not guaranteed.
Program: Paste code here

Output: Paste output here

Conclusion: Handwritten by students.

Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________

Sample Viva Questions and Answers:


• The shortest path might not pass through all the vertices. Also, there can be more than one
shortest path between two nodes. Comment.
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________

• The algorithm calculates shortest distance, but does it also calculate the path information?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________
• What is the space complexity of Single source shortest path algorithm?
________________________________________________________________________
• What is the time complexity of Single source shortest path algorithm?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO 10
Aim / Title: Dynamic Programming Paradigm: All Pairs Shortest Path Algorithm

Problem Statement: Write a program for the Floyd-Warshall (all pairs shortest path) algorithm.

Objectives: To understand the algorithm to determine the shortest paths between all pairs of vertices with the Floyd-Warshall technique.

Outcomes: Students will be able to understand the Floyd-Warshall algorithm and the dynamic programming approach of solving problems.

Pre-requisite: Dynamic Programming Strategy, Graphs

Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more

Software requirements: 64 bit Windows Operating System

Theory:
Warshall's algorithm uses the adjacency matrix to find the transitive closure of a directed graph.
Program for the Floyd-Warshall algorithm:

The Floyd-Warshall algorithm is a graph analysis algorithm for finding shortest paths in a weighted, directed graph. A single execution of the algorithm will find the shortest paths between all pairs of vertices.

This algorithm compares all possible paths through the graph between each pair of vertices. It is able to do this with only Θ(|V|³) comparisons. This is remarkable considering that there may be up to |V|² edges in the graph, and every combination of edges is tested. It does so by incrementally improving an estimate of the shortest path between two vertices, until the estimate is known to be optimal.

The Input and Output Format

We assume that the graph is represented by an n by n matrix with the weights of the edges. We
also assume the n vertices are numbered 1, ..., n.

Here is the input:

• wij = 0, if i = j
• wij = w(i, j), if i ≠ j and (i, j) belongs to E
• wij = ∞, if i ≠ j and (i, j) does not belong to E

Output format: an n by n distance matrix D = [dij] where dij is the distance from vertex i to j.

Step 1: The Floyd-Warshall Decomposition

Definition: The vertices v2, v3, ..., vl-1 are called the intermediate vertices of the path p = (v1, v2, ..., vl).

• Let dij(k) be the length of the shortest path from i to j such that all intermediate vertices on the path (if any) are in the set {1, 2, ..., k}.
• dij(0) is set to wij, i.e., no intermediate vertex.
• Let D(k) be the n by n matrix [dij(k)].
• Claim: dij(n) is the shortest distance from i to j, with the intermediate vertices drawn from the set {1, 2, ..., n}. So our aim is to compute D(n).
• Subproblems: compute D(k) for k = 0, 1, ..., n.

Step 2: Structure of Shortest Paths

Observation 1: A shortest path does not contain the same vertex twice.

Proof: A path containing the same vertex twice contains a cycle. Removing cycle gives a shorter
path.

Observation 2: For a shortest path from i to j such that any intermediate vertices on the path are
chosen from the set {1, 2, ..., k}, there are two possibilities:

• If k is not a vertex on the path, the shortest such path has length dij(k-1).
• If k is a vertex on the path, the shortest such path has length dik(k-1) + dkj(k-1).

Consider a shortest path from i to j containing the vertex k. It consists of a subpath from i to k and a subpath from k to j. Each subpath can only contain intermediate vertices in {1, ..., k-1}, and must be as short as possible, namely they have lengths dik(k-1) and dkj(k-1). Hence the path has length dik(k-1) + dkj(k-1).

Combining the above two cases we get:


dij (k) = min {dij (k-1), dik (k-1) + dkj (k-1)}.

Step 3: the Bottom-up Computation

• Bottom: D (0) = [wij], the weight matrix.


• Compute D (k) from D (k-1) using dij (k) = min {dij (k-1), dik (k-1) + dkj (k-1)} for  k = 1, ..., n.

Floyd-Warshall Algorithm

Time complexity of the algorithm: There are three nested loops, each of which runs n times, so the time complexity of the Floyd-Warshall algorithm is O(n³).
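For reference only, a minimal C++ sketch of the three nested loops described above is given below; the weight matrix in main() is an assumed example, with INF standing for "no edge".

#include <algorithm>
#include <iostream>
#include <vector>

const int INF = 1e9;   // represents "no edge" / infinite distance

// Floyd-Warshall: after the call, D[i][j] is the shortest distance from i to j.
void floydWarshall(std::vector<std::vector<int>>& D) {
    int n = D.size();
    for (int k = 0; k < n; ++k)                // allow vertex k as an intermediate vertex
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (D[i][k] < INF && D[k][j] < INF)
                    D[i][j] = std::min(D[i][j], D[i][k] + D[k][j]);
}

int main() {
    // Weight matrix W = D(0): 0 on the diagonal, INF where there is no edge.
    std::vector<std::vector<int>> D = {
        {0, 3, INF, 7},
        {8, 0, 2, INF},
        {5, INF, 0, 1},
        {2, INF, INF, 0}};
    floydWarshall(D);
    for (auto& row : D) {
        for (int d : row) std::cout << d << ' ';
        std::cout << '\n';
    }
    return 0;
}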

Instructions:  
Program: Paste code here

Output: Paste output here

Conclusion: Handwritten by students.

Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________

Sample Viva Questions and Answers:


• State the difference between Dijkstra's algorithm and the Floyd-Warshall algorithm.
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________

• What is the space complexity of All pairs shortest path algorithm?

________________________________________________________________________
• What happens when the value of k is 0 in the Floyd Warshall Algorithm?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________
Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO 11
Aim / Title: Branch and Bound Strategy: Travelling Salesperson Algorithm

Problem Statement: Write a program for traveling salesman problem.

Objectives: To understand the branch and bound technique to solve the traveling salesman
problem.

Outcomes: Students will be able to understand traveling salesman problem along with branch
and bound approach of solving problems.

Pre-requisite: Branch and Bound Strategy, State space search tree

Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more

Software requirements: 64 bit Windows Operating System

Theory:
In the branch and bound solution of the travelling salesperson problem, we first need to find the cost at the nodes. The cost is found by using cost matrix reduction, in accordance with two accompanying steps: row reduction and column reduction.
In general, to get the optimal cost (a lower bound in this problem) starting from a node, we reduce each row and column in such a way that there is at least one 0 in each row and column. For doing this, we just need to subtract the minimum value from each row and column.
We compute a bound on the best possible solution that we can get if we go down a node. If the bound on the best possible solution itself is worse than the current best (the best computed so far), then we ignore the subtree rooted at that node.
Note that the cost through a node includes two costs:

1) Cost of reaching the node from the root (when we reach a node, we have this cost computed)
2) Cost of reaching an answer from the current node to a leaf (we compute a bound on this cost to decide whether to ignore the subtree with this node or not).
In branch and bound, the challenging part is figuring out a way to compute a bound on the best possible solution. Below is an idea used to compute bounds for the travelling salesman problem.
Cost of any tour can be written as below:

Cost(T) = (1/2) * Σ over all u in V of (sum of the costs of the two edges adjacent to u and in the tour T)

For every vertex u, if we consider the two edges through it in T and sum their costs, the overall sum over all vertices is twice the cost of tour T (we have considered every edge twice).

(Sum of the two tour edges adjacent to u) >= (sum of the two minimum weight edges adjacent to u)

Therefore,

Cost of any tour >= (1/2) * Σ over all u in V of (sum of the two minimum weight edges adjacent to u)

This quantity gives the lower bound used at the root node.
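For reference only, a minimal C++ sketch that computes this initial lower bound (half the sum, over all vertices, of the two minimum weight edges adjacent to each vertex) is given below; the cost matrix is an assumed symmetric example, and a full branch and bound search would refine this bound at every node of the state space tree.

#include <algorithm>
#include <iostream>
#include <vector>

// Initial lower bound for TSP: half the sum, over all vertices,
// of the two smallest edge weights adjacent to each vertex.
double initialBound(const std::vector<std::vector<int>>& cost) {
    int n = cost.size();
    double bound = 0;
    for (int u = 0; u < n; ++u) {
        std::vector<int> adj;
        for (int v = 0; v < n; ++v)
            if (v != u) adj.push_back(cost[u][v]);
        std::sort(adj.begin(), adj.end());
        bound += adj[0] + adj[1];     // two minimum weight edges adjacent to u
    }
    return bound / 2.0;
}

int main() {
    // Assumed symmetric cost matrix for 4 cities.
    std::vector<std::vector<int>> cost = {
        {0, 10, 15, 20},
        {10, 0, 35, 25},
        {15, 35, 0, 30},
        {20, 25, 30, 0}};
    std::cout << "Initial lower bound = " << initialBound(cost) << '\n';  // 75
    return 0;
}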

Time Complexity of the Travelling Salesperson Problem:

The worst case complexity of branch and bound remains the same as that of brute force, because in the worst case we may never get a chance to prune a node. In practice, however, it performs better depending on the particular instance of the TSP. The complexity also depends on the choice of the bounding function, as it decides how many nodes get pruned.
Suppose we have N cities; then we need to generate all the permutations of the (N-1) cities, excluding the root city. Hence the worst-case time complexity for generating the permutations is O((N-1)!). (For comparison, the dynamic programming solution of TSP runs in O(N² · 2^N) time.)

Instructions:  
Program: Paste code here

Output: Paste output here

Conclusion: Handwritten by students.

Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________

Sample Viva Questions and Answers:


• Compare the dynamic programming solution of TSP with branch and bound solution.
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________

• What is meant by a reduced matrix?


___________________________________________________________________________
___________________________________________________________________________
__________________________________________________________________
• What is the space complexity of Travelling salesperson algorithm?
________________________________________________________________________
Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |

EXPERIMENT NO 12
Aim / Title: Backtracking Approach: Hamiltonian Cycle Problem

Problem Statement: Write a program for Hamiltonian cycle problem.

Objectives: To understand the backtracking technique to solve the Hamiltonian cycle problem.

Outcomes: Students will be able to understand Hamiltonian cycle problem along with
backtracking approach of solving problems.

Pre-requisite: Backtracking Strategy, State space search tree, Graphs

Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more

Software requirements: 64 bit Windows Operating System

Theory:
A Hamiltonian path in an undirected graph is a path that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian path such that there is an edge (in the graph) from the last vertex to the first vertex of the path. Determine whether a given graph contains a Hamiltonian cycle or not; if it does, print the cycle. Following are the input and output of the required function.
Input:
A 2D array graph[V][V] where V is the number of vertices in the graph and graph[V][V] is the adjacency matrix representation of the graph. A value graph[i][j] is 1 if there is a direct edge from i to j, otherwise graph[i][j] is 0.
Output:
An array path[V] that should contain the Hamiltonian path. path[i] should represent the ith vertex in the Hamiltonian path. The code should also return false if there is no Hamiltonian cycle in the graph.
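For reference only, a minimal C++ backtracking sketch matching the input/output description above is given below; the adjacency matrix in main() is an assumed example graph containing the cycle 0-1-2-4-3-0, and the program to be pasted below is the student's own.

#include <iostream>
#include <vector>

// Can vertex v be placed at position pos of the path?
bool isSafe(int v, const std::vector<std::vector<int>>& graph,
            const std::vector<int>& path, int pos) {
    if (graph[path[pos - 1]][v] == 0) return false;   // no edge from the previous vertex
    for (int i = 0; i < pos; ++i)                     // v must not already be in the path
        if (path[i] == v) return false;
    return true;
}

// Backtracking: try every vertex at position pos, undo the choice on failure.
bool hamCycleUtil(const std::vector<std::vector<int>>& graph,
                  std::vector<int>& path, int pos) {
    int V = graph.size();
    if (pos == V)                                      // all vertices placed:
        return graph[path[pos - 1]][path[0]] == 1;     // need an edge back to the start
    for (int v = 1; v < V; ++v) {
        if (isSafe(v, graph, path, pos)) {
            path[pos] = v;
            if (hamCycleUtil(graph, path, pos + 1)) return true;
            path[pos] = -1;                            // backtrack
        }
    }
    return false;
}

bool hamCycle(const std::vector<std::vector<int>>& graph) {
    int V = graph.size();
    std::vector<int> path(V, -1);
    path[0] = 0;                                       // fix vertex 0 as the start
    if (!hamCycleUtil(graph, path, 1)) {
        std::cout << "No Hamiltonian cycle exists\n";
        return false;
    }
    for (int v : path) std::cout << v << ' ';
    std::cout << path[0] << '\n';                      // close the cycle
    return true;
}

int main() {
    // Assumed example graph (adjacency matrix) with Hamiltonian cycle 0-1-2-4-3-0.
    std::vector<std::vector<int>> graph = {
        {0, 1, 0, 1, 0},
        {1, 0, 1, 1, 1},
        {0, 1, 0, 0, 1},
        {1, 1, 0, 0, 1},
        {0, 1, 1, 1, 0}};
    hamCycle(graph);
    return 0;
}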

Instructions: The problem of checking whether a graph (directed or undirected) contains a Hamiltonian path is NP-complete; so is the problem of finding all Hamiltonian paths in a graph.
Program: Paste code here

Output: Paste output here

Conclusion: Handwritten by students.


Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________

Sample Viva Questions and Answers:


• Which of the prevalent algorithm approaches can be used to solve the Hamiltonian path
problem efficiently, other than backtracking?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________

• What is the difference between a Hamiltonian path and a circuit?


___________________________________________________________________________
___________________________________________________________________________
__________________________________________________________________
• What is an NP-Complete problem?
________________________________________________________________________

Roll No. | Name of Student | Date of Performance | Date of Evaluation | Grade | Sign of Student | Sign of Faculty
0818it191024 | Harshita Meena | | | | |
