Lab File
Aim / Title: Write a program for Iterative and Recursive Binary Search.
Outcomes: Student should be able to search for a value in logarithmic time, i.e. O(log N), which
makes binary search ideal for finding a number in a huge list.
Software requirements: software for C/C++ (any software like Turbo/Borland C compiler, Dev-C++, Code::Blocks etc.)
Theory:
Binary Search is a search algorithm that is used to find the position of an element (target value)
in a sorted array. The array must be sorted before binary search is applied.
Binary search is also known as logarithmic search, binary chop, and half-interval search.
Working
The binary search algorithm works by comparing the element to be searched with the middle
element of the array and, based on this comparison, follows the required procedure.
Case 1 − element = middle: the search is over; return the middle index.
Case 2 − element > middle: search for the element in the sub-array
starting from the middle+1 index to n.
Case 3 − element < middle: search for the element in the sub-array
starting from index 0 to middle−1.
ALGORITHM
Parameters: initial_value, end_value
Binary search can be implemented in two ways:
• Iterative
• Recursive
An iterative implementation loops over the same block of code until the search interval is
empty, while a recursive implementation calls itself on the appropriate half of the array.
Instructions: None
Outcomes: Student should be able to perform sorting with optimal complexity using merging
algorithm.
Software requirements: software for C/C++ (any software like Turbo/Borland C compiler, Dev-C++, Code::Blocks etc.)
Theory:
Merge sort
Merge sort is an algorithm which follows the divide-and-conquer approach. Consider an array A of n
elements. The algorithm processes the elements in 3 steps.
If A Contains 0 or 1 elements then it is already sorted, otherwise, Divide A into two sub-array of
equal number of elements.
Conquer means sort the two sub-arrays recursively using the merge sort.
Combine the sub-arrays to form a single final sorted array maintaining the ordering of the array.
The main idea behind merge sort is that a shorter list takes less time to be sorted.
Complexity: merge sort runs in O(n log n) time in the best, average and worst cases; the standard implementation uses O(n) auxiliary space.
Example :
Consider the following array of 7 elements. Sort the array by using merge sort.
• A = {10, 5, 2, 23, 45, 21, 7}
Algorithm
MERGE (ARR, BEG, MID, END)
• Step 1: [INITIALIZE] SET I = BEG, J = MID + 1, INDEX = 0
• Step 2: Repeat while (I <= MID) AND (J<=END)
IF ARR[I] < ARR[J]
SET TEMP[INDEX] = ARR[I]
SET I = I + 1
ELSE
SET TEMP[INDEX] = ARR[J]
SET J = J + 1
[END OF IF]
SET INDEX = INDEX + 1
[END OF LOOP]
• Step 3: [Copy the remaining elements of the right sub-array, if any]
IF I > MID
Repeat while J <= END
SET TEMP[INDEX] = ARR[J]
SET INDEX = INDEX + 1, SET J = J + 1
[END OF LOOP]
[Copy the remaining elements of the left sub-array, if any]
ELSE
Repeat while I <= MID
SET TEMP[INDEX] = ARR[I]
SET INDEX = INDEX + 1, SET I = I + 1
[END OF LOOP]
[END OF IF]
• Step 4: [Copy the contents of TEMP back to ARR] SET K = 0
• Step 5: Repeat while K < INDEX
SET ARR[BEG + K] = TEMP[K]
SET K = K + 1
[END OF LOOP]
• Step 6: Exit
MERGE_SORT(ARR, BEG, END)
• Step 1: IF BEG < END
SET MID = (BEG + END)/2
CALL MERGE_SORT (ARR, BEG, MID)
CALL MERGE_SORT (ARR, MID + 1, END)
MERGE (ARR, BEG, MID, END)
[END OF IF]
• Step 2: END
Instructions: None
• Merge sort can be implemented using O(1) auxiliary space. True or false?
Ans − True; in-place merging variants exist, although the standard implementation uses O(n) auxiliary space.
EXPERIMENT NO. 3
Objectives: To make a program of sorting ‘n’ numbers using Quick Sort Algorithm.
Outcomes: Students should be able to understand and implement the Quick Sort Algorithm
Software requirements: software for C/C++ (any software like Turbo/Borland C compiler, Dev-C++, Code::Blocks etc.)
Theory:
Hence after the first pass, the pivot will be set at its position, with all the elements smaller than it on its
left and all the elements larger than it on its right. Now 6 8 17 14 and 63 37 52 are considered as
two separate subarrays, the same recursive logic is applied to them, and we keep
doing this until the complete array is sorted.
How Quick Sorting Works?
Following are the steps involved in quick sort algorithm:
• After selecting an element as pivot, which is the last index of the array in our case, we
divide the array for the first time.
• In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2
subarrays; rather, during partitioning the array elements are positioned so that all the
elements smaller than the pivot are on the left side of the pivot and all the elements
greater than the pivot are on the right side of it.
• The pivot element will then be at its final sorted position.
• The elements to its left and right may not yet be sorted.
• Then we pick the subarrays (elements on the left of the pivot and elements on the right of
the pivot) and perform partitioning on them by choosing a pivot within each subarray.
Let's consider an array with values {9, 7, 5, 11, 12, 2, 14, 3, 10, 6}
Below, we have a pictorial representation of how quick sort will sort the given array.
In step 1, we select the last element as the pivot, which is 6 in this case, and call for partitioning,
hence re-arranging the array in such a way that 6 will be placed in its final position and to its left
will be all the elements less than it and to its right, we will have all the elements greater than it.
Then we pick the subarray on the left and the subarray on the right and select a pivot for them, in
the above diagram, we chose 3 as pivot for the left subarray and 11 as pivot for the right
subarray.
Instructions: None
• Which method is the most effective for picking the pivot element?
Ans − Median-of-three partitioning is the best method for choosing an appropriate pivot
element. Picking the first, last or a random element as pivot is less effective.
• Find the pivot element from the given input using the median-of-three partitioning method: 8,
1, 4, 9, 6, 3, 5, 2, 7, 0.
Ans − Left element = 8, right element = 0,
centre element = element at position (left + right)/2 = (0 + 9)/2 = 4, i.e. 6.
The median of {8, 0, 6} is 6, so the pivot is 6.
EXPERIMENT NO. 4
Problem Statement: Let us consider two matrices X and Y. We want to calculate the resultant
matrix Z by multiplying X and Y.
Software requirements: software for C/C++ (any software like Turbo/Borland C compiler, Dev-C++, Code::Blocks etc.)
Theory:
Naïve Method
First, we will discuss naïve method and its complexity. Here, we are calculating Z = X × Y.
Using Naïve method, two matrices (X and Y) can be multiplied if the order of these matrices are
p × q and q × r. Following is the algorithm.
Complexity
Here, we assume that integer operations take O(1) time. There are three for loops in this
algorithm, nested within one another. Hence, the algorithm takes O(n³) time to execute for
n × n matrices (O(pqr) in general).
Strassen’s Matrix Multiplication Algorithm
In this context, using Strassen’s Matrix multiplication algorithm, the time consumption can be
improved a little bit.
Strassen’s Matrix multiplication can be performed only on square matrices where n is a power of
2. Order of both of the matrices are n × n.
M1 := (A+C)×(E+F)
M2 := (B+D)×(G+H)
M3 := (A−D)×(E+H)
M4 := A×(F−H)
M5 := (C+D)×E
M6 := (A+B)×H
M7 := D×(G−E)
Then,
I := M2 + M3 − M6 − M7
J := M4 + M6
K := M5 + M7
L := M1 − M3 − M4 − M5
Analysis
The running time satisfies the recurrence T(n) = 7T(n/2) + O(n²), since each step performs
7 multiplications of half-size matrices plus a constant number of matrix additions. Using this
recurrence relation, we get T(n) = O(n^log₂7) ≈ O(n^2.81).
Instructions: None
• The number of scalar additions and subtractions used in Strassen’s matrix multiplication
algorithm is?
Ans-theta(n^2)
EXPERIMENT NO. 5
Software requirements: software for C/C++ (any software like Turbo/Borland C compiler, Dev-C++, Code::Blocks etc.)
Theory:
Merge a set of sorted files of different length into a single sorted file. We need to find an optimal
solution, where the resultant file will be generated in minimum time.
If the number of sorted files are given, there are many ways to merge them into a single sorted
file. This merge can be performed pair wise. Hence, this type of merging is called as 2-way
merge patterns.
As, different pairings require different amounts of time, in this strategy we want to determine an
optimal way of merging many files together. At each step, two shortest sequences are merged.
To merge a p-record file and a q-record file requires possibly p + q record moves, the obvious
choice being, merge the two smallest files together at each step.
Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n
sorted files {f1, f2, f3, …, fn}. Initially, each element of this is considered as a single node binary
tree. To find this optimal solution, the following algorithm is used.
Algorithm: TREE (n)
for i := 1 to n − 1 do
    declare new node
    node.leftchild := least (list)
    node.rightchild := least (list)
    node.weight := (node.leftchild).weight + (node.rightchild).weight
    insert (list, node)
return least (list)
At the end of this algorithm, the weight of the root node represents the optimal cost.
Example
Let us consider the given files, f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and 30 number of elements
respectively.
If merge operations are performed according to the provided sequence, then
M1 = merge f1 and f2 => 20 + 30 = 50
M2 = merge M1 and f3 => 50 + 10 = 60
M3 = merge M2 and f4 => 60 + 5 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Hence, the total number of operations is
50 + 60 + 65 + 95 = 270
Now, the question arises: is there any better solution? Yes − sort the files by size and always
merge the two smallest ones first.
Step-1: merge f4 and f3 => 5 + 10 = 15
Step-2: merge the result and f1 => 15 + 20 = 35
Step-3: merge f2 and f5 => 30 + 30 = 60
Step-4: merge the two results => 35 + 60 = 95
Hence, the optimal total number of operations is 15 + 35 + 60 + 95 = 205.
Instructions: None
EXPERIMENT NO. 6
Software requirements: software for C/C++ (any software like Turbo/Borland C compiler, Dev-C++, Code::Blocks etc.)
Theory:
Huffman Coding is a technique of compressing data to reduce its size without losing any of the
details. It was first developed by David Huffman.
Huffman Coding is generally useful to compress the data in which there are frequently occurring
characters.
Frequency of string
• Calculate the frequency of each character in the string.
• Sort the characters in increasing order of frequency. These are stored in a priority
queue Q.
• Repeatedly extract the two nodes of minimum frequency from Q, create a new internal node
whose frequency is their sum, and insert it back into Q.
• When only one node (the root) remains, assign 0 to each left edge and 1 to each right edge;
the code of a character is the sequence of edge labels on the path from the root to its leaf.
Instructions: None
Program: To be performed by student
EXPERIMENT NO 7
Aim / Title: Greedy Paradigm: Minimum Spanning Tree using Kruskal’s Algorithm.
Problem Statement: Write a program for minimum spanning trees using Kruskal’s algorithm.
Objectives: To understand the algorithm to determine the minimum spanning tree using
Kruskal’s algorithm.
Outcomes: Students will be able to understand Kruskal’s Algorithm and greedy approach of
solving problems.
Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more
Kruskal's Algorithm:
This algorithm will create spanning tree with minimum weight, from a given weighted graph.
1. Begin
2. Create the edge list of the given graph, with their weights.
3. Sort the edge list according to their weights in ascending order.
4. Draw all the nodes to create a skeleton for the spanning tree.
5. Pick the edge at the top of the edge list (i.e. the edge with minimum weight).
6. Remove this edge from the edge list.
7. Connect the vertices in the skeleton with the given edge. If connecting the vertices
creates a cycle in the skeleton, then discard this edge.
8. Repeat steps 5 to 7 until n−1 edges are added or the edge list is exhausted.
9. Return
Let us assume a graph with e number of edges and n number of vertices. Kruskal’s algorithm
starts with sorting of edges.
In Kruskal’s algorithm, we have to add an edge to the spanning tree, in each iteration. This
involves merging of two components.
Time complexity of merging of components = O (e log n)
Overall time complexity of the algorithm = O (e log e) + O (e log n)
Instructions: None
Program: Paste code here
Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________
Sample Viva Questions and Answers:
EXPERIMENT NO 8
Aim / Title: Greedy Paradigm: Minimum Spanning Tree Using Prim’s algorithm
Problem Statement: Write a program for minimum spanning trees using Prim’s algorithm.
Objectives: To understand the algorithm to determine the minimum spanning tree using Prim’s
algorithm.
Outcomes: Students will be able to understand Prim’s Algorithm and greedy approach of solving
problems.
Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more
Prim's Algorithm:
This algorithm creates spanning tree with minimum weight from a given weighted graph.
1. Begin
2. Create the edge list of the given graph, with their weights.
3. Draw all nodes to create a skeleton for the spanning tree.
4. Select the edge with the lowest weight, add it to the skeleton and delete it from the edge list.
5. Add other edges. While adding an edge, take care that one end of the edge should
always be in the skeleton tree and its cost should be minimum.
6. Repeat step 5 until n−1 edges are added.
7. Return.
Prim’s algorithm contains two nested loops. Each of these loops has a complexity of O(n). Thus,
the complexity of Prim’s algorithm for a graph having n vertices is O(n²).
Instructions: None
Program: Paste code here
Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________
EXPERIMENT NO 9
Aim / Title: Greedy Paradigm: Single Source Shortest Path Algorithm
Problem Statement: Write a program for single source shortest path algorithm/ Dijkstra’s
Algorithm.
Objectives: To understand the algorithm to determine the single source shortest path with
Dijkstra’s technique.
Outcomes: Students will be able to understand Dijkstra’s Algorithm and greedy approach of
solving problems.
Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more
Theory: Program for single source shortest path algorithm or Dijkstra’s algorithm.
Let us consider a number of cities connected with roads, and a traveller who wants to travel from
his home city A to the destination B with minimum cost. The traveller will be interested to know
the following:
• Is there a path from city A to city B?
• If there is more than one path from A to B, which is the shortest or least-cost path?
Let us consider the graph G = (V, E), a weighting function w(e) for the edges in E and a source
node v0. The problem is to determine the shortest path from v0 to all the remaining nodes of G.
The solution to this problem is suggested by E.W. Dijkstra and the algorithm is popularly known
as Dijkstra’s algorithm. This algorithm finds the shortest paths one by one. If we have already
constructed i shortest paths, then the next path to be constructed should be the next shortest path.
Let S be the set of vertices to which the shortest paths have already been generated. For z not in
S, let dist[z] be the length of the shortest path starting from v0, going through only those vertices
that are in S and ending at z. Let u be the vertex most recently added to S. If dist[z] > dist[u] +
w(u,z), then dist[z] is updated to dist[u] + w(u,z) and the predecessor of z is set to u.
Dijkstra’s algorithm is presented below.
Dijkstra’s Algorithm
1. Create cost matrix C[ ][ ] from adjacency matrix adj[ ][ ]. C[i][j] is the cost of going from
vertex i to vertex j. If there is no edge between vertices i and j then C[i][j] is infinity.
2. Array visited[ ] is initialized to zero.
for(i=0;i<n;i++)
visited[i]=0;
3. If the vertex 0 is the source vertex then visited[0] is marked as 1.
4. Create the distance matrix, by storing the cost of vertices from vertex no. 0 to n-1 from the
source vertex 0.
for(i=1;i<n;i++)
distance[i]=cost[0][i];
Initially, distance of source vertex is taken as 0. i.e. distance[0]=0;
5. for(i=1;i<n;i++)
– Choose a vertex w, such that distance[w] is minimum and visited[w] is 0. Mark visited[w] as 1.
– Recalculate the shortest distance of remaining vertices from the source.
– Only, the vertices not marked as 1 in array visited[ ] should be considered for recalculation of
distance. i.e. for each vertex v
if(visited[v]==0)
distance[v]=min(distance[v],
distance[w]+cost[w][v])
Time Complexity of Dijkstra’s Algorithm:
The program contains two nested loops, each of which has a complexity of O(n), where n is the
number of vertices. So the complexity of the algorithm is O(n²).
Instructions: Dijkstra’s algorithm does not work for graphs with negative weight cycles; it may
happen to give correct results for a graph with some negative edges, but correctness is only
guaranteed when all edge weights are non-negative.
Program: Paste code here
Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________
• The algorithm calculates shortest distance, but does it also calculate the path information?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________
• What is the space complexity of Single source shortest path algorithm?
________________________________________________________________________
• What is the time complexity of Single source shortest path algorithm?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________
EXPERIMENT NO 10
Aim / Title: Dynamic Programming Paradigm: All Pairs Shortest Path Algorithm
Problem Statement: Write a program for the Floyd-Warshall (all pairs shortest path) algorithm.
Objectives: To understand the algorithm to determine the shortest paths with the Floyd-Warshall
technique.
Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more
Theory:
Warshall's algorithm uses the adjacency matrix to find the transitive closure of a directed graph.
Program for the Floyd-Warshall algorithm:
Floyd-Warshall's algorithm is a graph analysis algorithm for finding shortest paths in a weighted,
directed graph. A single execution of the algorithm will find the shortest paths between all pairs
of vertices.
This algorithm compares all possible paths through the graph between each pair of vertices. It is
able to do this with only Θ(|V|³) comparisons. This is remarkable considering that there may be up
to |V|² edges in the graph, and every combination of edges is tested. It does so by incrementally
improving an estimate of the shortest path between two vertices, until the estimate is known to
be optimal.
We assume that the graph is represented by an n by n matrix with the weights of the edges. We
also assume the n vertices are numbered 1, ..., n.
Here is the input:
• wij = 0, if i = j
• wij = w(i, j), if i ≠ j and (i, j) ∈ E
• wij = ∞, if i ≠ j and (i, j) ∉ E
Observation 1: A shortest path does not contain the same vertex twice.
Proof: A path containing the same vertex twice contains a cycle. Removing the cycle gives a
shorter path.
Observation 2: For a shortest path from i to j such that any intermediate vertices on the path are
chosen from the set {1, 2, ..., k}, there are two possibilities:
• k is not a vertex on the path; the shortest such path has length d_ij^(k−1).
• k is a vertex on the path; the shortest such path has length d_ik^(k−1) + d_kj^(k−1).
Floyd-Warshall Algorithm
Time Complexity of the algorithm: There are three nested loops, each running n times.
So, the time complexity of the Floyd-Warshall algorithm is O(n³).
Instructions:
Program: Paste code here
Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________
________________________________________________________________________
• What happens when the value of k is 0 in the Floyd Warshall Algorithm?
______________________________________________________________________________
______________________________________________________________________________
____________________________________________________________
EXPERIMENT NO 11
Aim / Title: Branch and Bound Strategy: Travelling Salesperson Algorithm
Objectives: To understand the branch and bound technique to solve the traveling salesman
problem.
Outcomes: Students will be able to understand traveling salesman problem along with branch
and bound approach of solving problems.
Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more
Theory:
In the branch and bound solution of travelling salesperson problem, we need to find the cost at
the nodes at first. The cost is found by using cost matrix reduction, in accordance with two
accompanying steps row reduction & column reduction.
In general, to get the optimal cost (a lower bound in this problem) starting from a node, we
reduce each row and column in such a way that there is at least one 0 in each row and each
column. For doing this, we just need to subtract the minimum value of each row and column.
We compute a bound on the best possible solution that we can get if we go down through this node.
If the bound on the best possible solution is itself worse than the current best (the best computed
so far), then we ignore the subtree rooted at this node.
Note that the cost through a node includes two costs.
1) Cost of reaching the node from the root (When we reach a node, we have this cost computed)
2) Cost of reaching an answer from current node to a leaf (We compute a bound on this cost to
decide whether to ignore subtree with this node or not).
In branch and bound, the challenging part is figuring out a way to compute a bound on the best
possible solution. Below is an idea used to compute bounds for the travelling salesman problem.
The cost of any tour can be written as below:
Cost of a tour T = (1/2) × Σ over all v ∈ V of (sum of the costs of the two tour edges adjacent to v)
The worst-case complexity of branch and bound remains the same as that of brute force,
because in the worst case we may never get a chance to prune a node. In practice, however, it
performs better depending on the particular instance of the TSP. The complexity also depends on
the choice of the bounding function, as it decides how many nodes get pruned.
Suppose we have N cities; then we need to generate all the permutations of the (N−1) cities,
excluding the root city. Hence the time complexity for generating the permutations is O((N−1)!),
which grows even faster than O(2^(N−1)), so the worst-case running time of the algorithm is
exponential.
Instructions:
Program: Paste code here
Conclusion:___________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
___________
EXPERIMENT NO 12
Aim / Title: Backtracking Approach: Hamiltonian Cycle Problem
Objectives: To understand the backtracking technique to solve the Hamiltonian cycle problem.
Outcomes: Students will be able to understand Hamiltonian cycle problem along with
backtracking approach of solving problems.
Hardware requirements: Any CPU with Pentium Processor or similar, 256 MB RAM or more,
1 GB Hard Disk or more
Theory:
A Hamiltonian path in an undirected graph is a path that visits each vertex exactly once. A
Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian path such that there is an edge (in
the graph) from the last vertex to the first vertex of the path. The task is to determine whether a
given graph contains a Hamiltonian cycle or not; if it does, print the path. Following are
the input and output of the required function.
Input:
A 2D array graph[V][V] where V is the number of vertices in graph and graph[V][V] is
adjacency matrix representation of the graph. A value graph[i][j] is 1 if there is a direct edge
from i to j, otherwise graph[i][j] is 0.
Output:
An array path[V] that should contain the Hamiltonian Path. path[i] should represent the ith
vertex in the Hamiltonian Path. The code should also return false if there is no Hamiltonian
Cycle in the graph.