AEC Mod Lab
EXPERIMENT-1
AIM:
Develop a program and measure the Running Time for Binary Search with Divide and Conquer.
THEORY:
Explanation
1. First take three variables in which the first integer index, the last integer index, and the
middle integer index of the array/sub-array are stored, and then use another variable to store
the given integer to be found.
int low = 0, high = 9, mid, x = 3;
2. Consider a while-loop that will run as long as the low variable is less than or equal to the
high variable.
3. Now, within the while-loop, assign the mid variable the middle integer index:
mid = (low + high) / 2;
4. Now, check whether x is greater than, less than, or equal to a[mid]:
• If x > a[mid], x is on the right side of the middle index, so we will change
low = mid + 1.
• If x < a[mid], x is on the left side of the middle index, so we will change
high = mid - 1.
• Otherwise, x == a[mid] means x is found; return its index.
Example array: 1, 3, 5, 7, 9, 11, 13, 15, 17, 21
SOURCE CODE:
#include <stdio.h>

int binary_search(int A[], int key, int len) {
    int low = 0;
    int high = len - 1;
    while (low <= high) {
        int mid = low + ((high - low) / 2);
        if (A[mid] == key) {
            return mid;
        }
        if (key < A[mid]) {
            high = mid - 1;
        } else {
            low = mid + 1;
        }
    }
    return -1;
}

int main()
{
    int a[10] = { 1, 3, 5, 7, 9, 11, 13, 15, 17, 21 };
    int key = 3;
    int position = binary_search(a, key, 10);
    if (position == -1) {
        printf("Not found");
        return 0;
    }
    printf("Found it at %d", position);
    return 0;
}
OUTPUT:
Found it at 1.
RECURSION METHOD:
SOURCE CODE:
#include <stdio.h>

// A recursive binary search function. It returns the location of x in
// the given array arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;
        // If the element is present at the middle itself
        if (arr[mid] == x)
            return mid;
        // If the element is smaller than mid, then it can only be present
        // in the left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);
        // Else the element can only be present in the right subarray
        return binarySearch(arr, mid + 1, r, x);
    }
    // We reach here when the element is not present in the array
    return -1;
}

int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 10;
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1) ? printf("Element is not present in array")
                   : printf("Element is present at index %d", result);
    return 0;
}
OUTPUT:
Element is present at index 3
RECURRENCE RELATION:
T(n)= {
1 if n=1
T(n/2) +1 if n>1
}
Consider T(n) = T(n/2) + 1 -------(1)
Now substitute n/2 for n in equation (1); then we get
T(n) = (T(n/4) + 1) + 1
= T(n/4) + 2 ------(2)
= T(n/8) + 3 ------(3)
= T(n/16) + 4 ------(4)
.
.
.
= T(n/2^k) + k
Put n/2^k = 1, then n = 2^k
k = log n
T(n) = 1 + log n
Hence
Time complexity is O(log n)
Result:
EXPERIMENT-2
AIM:
Develop a program and measure the Running Time for Merge Sort with Divide and Conquer
THEORY:
Merge sort is a sorting technique based on the divide and conquer technique. With worst-case time
complexity being O(n log n), it is one of the most respected algorithms.
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
How Merge Sort Works?
To understand merge sort, we take an unsorted array such as {14, 33, 27, 10, 35, 19, 42, 44}.
Merge sort first divides the whole array iteratively into equal halves until atomic values
(single elements) are reached. Here the array of 8 items is divided into two arrays of size 4:
{14, 33, 27, 10} and {35, 19, 42, 44}.
This does not change the sequence of appearance of items in the original. Now we divide these
two arrays into halves, {14, 33}, {27, 10}, {35, 19} and {42, 44}, and then once more, reaching
atomic values which can no longer be divided.
Now, we combine them in exactly the same manner as they were broken down.
We compare the element at the front of each list and combine the two lists into another list in
sorted order. We see that 14 and 33 are already in sorted positions. We compare 27 and 10 and in
the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35
whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values and merge
them into lists of four data values, {10, 14, 27, 33} and {19, 35, 42, 44}, placing all in
sorted order.
After the final merging, the list becomes {10, 14, 19, 27, 33, 35, 42, 44}.
Source Code:
RECURSION METHOD:
#include <stdio.h>
#define max 10

int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[11]; /* must hold 11 elements: indices low..high go up to max = 10 */

void merging(int low, int mid, int high) {
    int l1, l2, i;
    for (l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
        if (a[l1] <= a[l2])
            b[i] = a[l1++];
        else
            b[i] = a[l2++];
    }
    while (l1 <= mid)
        b[i++] = a[l1++];
    while (l2 <= high)
        b[i++] = a[l2++];
    for (i = low; i <= high; i++)
        a[i] = b[i];
}

void sort(int low, int high) {
    int mid;
    if (low < high) {
        mid = (low + high) / 2;
        sort(low, mid);
        sort(mid + 1, high);
        merging(low, mid, high);
    }
}

int main() {
    int i;
    printf("List before sorting\n");
    for (i = 0; i <= max; i++)
        printf("%d ", a[i]);
    sort(0, max);
    printf("\nList after sorting\n");
    for (i = 0; i <= max; i++)
        printf("%d ", a[i]);
    return 0;
}
OUTPUT:
List before sorting
10 14 19 26 27 31 33 35 42 44 0
List after sorting
0 10 14 19 26 27 31 33 35 42 44
Recurrence relation:
T(n)= {
1 if n=1
2T(n/2) +n if n>1
}
T(n)= 2T(n/2) +n------------------(1)
=4T(n/4) +2n------------------(2)
=8T(n/8) +3n------------------(3)
.
.
.
=2^k T(n/2^k) + kn-------------(4)
Assume n/2^k = 1 ➔ n = 2^k
➔ k = log n
Then
T(n)=n+ n*log n
O(n*log n)
Merge Sort Complexity:
Time Complexity
Best O(n*log n)
Worst O(n*log n)
Average O(n*log n)
Stability Yes
Space Complexity
The space complexity of merge sort is O(n).
Result:
EXPERIMENT-3
AIM:
Develop a program and Measure the Running Time for Quick Sort with Divide and Conquer.
THEORY:
Quicksort Algorithm
Quicksort is a sorting algorithm based on the divide and conquer approach where
1. An array is divided into subarrays by selecting a pivot element (element selected from
the array).
While dividing the array, the pivot element should be positioned in such a way that elements
less than pivot are kept on the left side and elements greater than pivot are on the right side of
the pivot.
2. The left and right subarrays are also divided using the same approach. This process
continues until each subarray contains a single element.
3. At this point, elements are already sorted. Finally, elements are combined to form a
sorted array.
Working of Quicksort Algorithm
1. Select the Pivot Element
There are different variations of quicksort where the pivot element is selected from different
positions. Here, we will be selecting the rightmost element of the array as the pivot element.
2. Rearrange the Array
• A pointer is fixed at the pivot element, and the pivot element is compared with the
elements beginning from the first index.
• If an element greater than the pivot element is reached, a second pointer is set for that
element.
• Now, the pivot is compared with the other elements. If an element smaller than the pivot
element is reached, the smaller element is swapped with the greater element found earlier.
• Again, the process is repeated to set the next greater element as the second pointer, and
swap it with the next smaller element.
• The process goes on until the second-last element is reached.
• Finally, the pivot element is swapped with the second pointer.
3. Divide Subarrays
Pivot elements are again chosen for the left and the right sub-parts separately. And, step 2 is
repeated.
Select pivot element of
in each half and put at correct place using recursion
The subarrays are divided until each subarray is formed of a single element. At this point, the
array is already sorted.
Quick Sort Algorithm
You can understand the working of quicksort algorithm with the help of the
illustrations below.
SOURCE CODE:
#include <stdio.h>

int n;

void quick_sort(int arr[], int low, int high);

int main()
{
    int arr[30], l, r, i;
    printf("\nInput number of elements: ");
    scanf(" %d", &n);
    printf("\nInput array values one by one: ");
    for (i = 0; i < n; i++)
        scanf(" %d", &arr[i]);
    l = 0;
    r = n - 1;
    quick_sort(arr, l, r);
    printf("\nThe quick sorted array is: ");
    for (i = 0; i < n; i++)
        printf(" %d", arr[i]);
    printf("\n");
    return 0;
}

void quick_sort(int arr[], int low, int high)
{
    int temp, left, right, x;
    if (low >= high)
        return;
    x = arr[low];       /* first element taken as pivot */
    right = low + 1;
    left = high;
    while (right <= left) {
        /* test the bounds before reading arr[] so we never index
           past the ends of the sub-array */
        while (right <= high && arr[right] < x)
            right++;
        while (left > low && arr[left] > x)
            left--;
        if (right < left) {
            temp = arr[right];
            arr[right] = arr[left];
            arr[left] = temp;
            right++;
            left--;
        } else {
            break;      /* pointers met: partition is done */
        }
    }
    /* put the pivot into its final position */
    arr[low] = arr[left];
    arr[left] = x;
    quick_sort(arr, low, left - 1);
    quick_sort(arr, left + 1, high);
}
OUTPUT:
Input number of elements: 10
Input array values one by one: 25
78
65
14
20
152
1
26
96
45
The quick sorted array is: 1 14 20 25 26 45 65 78 96 152
RECURSION METHOD:
#include <stdio.h>
// function to swap elements
void swap(int *a, int *b) {
int t = *a;
*a = *b;
*b = t;
}
// function to find the partition position
int partition(int array[], int low, int high) {
// select the rightmost element as pivot
int pivot = array[high];
// pointer for greater element
int i = (low - 1);
// traverse each element of the array
// compare them with the pivot
for (int j = low; j < high; j++) {
if (array[j] <= pivot) {
// if element smaller than pivot is found
// swap it with the greater element pointed by i
i++;
// swap element at i with element at j
swap(&array[i], &array[j]);
}
}
// swap the pivot element with the greater element at i
swap(&array[i + 1], &array[high]);
// return the partition point
return (i + 1);
}
void quickSort(int array[], int low, int high) {
if (low < high) {
// find the pivot element such that
// elements smaller than pivot are on left of pivot
// elements greater than pivot are on right of pivot
int pi = partition(array, low, high);
// recursive call on the left of pivot
quickSort(array, low, pi - 1);
// recursive call on the right of pivot
quickSort(array, pi + 1, high);
}
}
// function to print array elements
void printArray(int array[], int size) {
for (int i = 0; i < size; ++i) {
printf("%d ", array[i]);
}
printf("\n");
}
// main function
int main() {
    int data[] = { 8, 7, 2, 1, 0, 9, 6 };
    int n = sizeof(data) / sizeof(data[0]);
    printf("Unsorted Array\n");
    printArray(data, n);
    // perform quicksort on data
    quickSort(data, 0, n - 1);
    printf("Sorted array in ascending order: \n");
    printArray(data, n);
    return 0;
}
OUTPUT:
Unsorted Array
8 7 2 1 0 9 6
Sorted array in ascending order:
0 1 2 6 7 8 9
Quick Sort Complexity:
Time Complexity
Best O(n*log n)
Worst O(n^2)
Average O(n*log n)
Stability No
1. Time Complexities
• Worst Case Complexity [Big-O]: O(n^2)
It occurs when the pivot element picked is either the greatest or the smallest element.
This condition leads to the case in which the pivot element lies in an extreme end of the sorted
array. One sub-array is always empty and another sub-array contains n - 1 elements. Thus,
quicksort is called only on this sub-array.
However, the quicksort algorithm has better performance for scattered pivots.
• Best Case Complexity [Big-omega]: O(n*log n)
It occurs when the pivot element is always the middle element or near to the middle element.
• Average Case Complexity [Big-theta]: O(n*log n)
It occurs when the above conditions do not occur.
2. Space Complexity
The space complexity for quicksort is O(log n).
Result:
EXPERIMENT-4
AIM:
Develop a program and measure the Running Time for estimating minimum-
cost spanning trees with greedy method.
Theory:
In Kruskal’s algorithm, sort all edges of the given graph in increasing order. Then it keeps
on adding new edges and nodes in the MST if the newly added edge does not form a cycle.
It picks the minimum weighted edge first and the maximum weighted edge last. Thus we
can say that it makes a locally optimal choice in each step in order to find the optimal solution.
Hence this is a Greedy Algorithm.
How to find MST using Kruskal’s algorithm?
Below are the steps for finding MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so
far. If the cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
Step 2 uses the Union-Find algorithm to detect cycles.
So we recommend reading the following post as a prerequisite.
• Union-Find Algorithm | Set 1 (Detect Cycle in a Graph)
• Union-Find Algorithm | Set 2 (Union By Rank and Path Compression)
Kruskal’s algorithm to find the minimum cost spanning tree uses the greedy approach. The
Greedy Choice is to pick the smallest weight edge that does not cause a cycle in the MST
constructed so far. Let us understand it with an example:
Illustration:
The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will be
having (9 – 1) = 8 edges.
After sorting:
Weight Source Destination
1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5
Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6. No cycle is formed, include it.
Step 6: Pick edge 8-6. Since including this edge results in a cycle, discard it. Pick edge 2-3:
no cycle is formed, include it.
Step 8: Pick edge 1-2. Since including this edge results in a cycle, discard it. Pick edge 3-4:
no cycle is formed, include it.
Note: Since the number of edges included in the MST equals (V – 1), the algorithm
stops here.
Source Code:
#include <stdio.h>
#include <stdlib.h>

// Find the representative of a component, with path compression
int findParent(int parent[], int component)
{
    if (parent[component] == component)
        return component;
    return parent[component]
        = findParent(parent, parent[component]);
}

// Union of two components, by rank
void unionSet(int u, int v, int parent[], int rank[])
{
    u = findParent(parent, u);
    v = findParent(parent, v);
    if (rank[u] < rank[v]) {
        parent[u] = v;
    } else if (rank[u] > rank[v]) {
        parent[v] = u;
    } else {
        parent[v] = u;
        rank[u]++;
    }
}

// Compare edges by weight for qsort
int comparator(const void* p1, const void* p2)
{
    const int(*x)[3] = p1;
    const int(*y)[3] = p2;
    return (*x)[2] - (*y)[2];
}

void kruskalAlgo(int n, int edge[n][3])
{
    // Sort all edges in non-decreasing order of weight
    qsort(edge, n, sizeof(edge[0]), comparator);
    int parent[n];
    int rank[n];
    for (int i = 0; i < n; i++) {
        parent[i] = i; // each vertex starts in its own component
        rank[i] = 0;
    }
    int minCost = 0;
    printf(
        "Following are the edges in the constructed MST\n");
    for (int i = 0; i < n; i++) {
        int v1 = findParent(parent, edge[i][0]);
        int v2 = findParent(parent, edge[i][1]);
        int wt = edge[i][2];
        if (v1 != v2) { // the edge does not form a cycle: include it
            unionSet(v1, v2, parent, rank);
            minCost += wt;
            printf("%d -- %d == %d\n", edge[i][0], edge[i][1], wt);
        }
    }
    printf("Minimum Cost Spanning Tree: %d\n", minCost);
}

// Driver code
int main()
{
    int edge[5][3] = { { 0, 1, 10 },
                       { 0, 2, 6 },
                       { 0, 3, 5 },
                       { 1, 3, 15 },
                       { 2, 3, 4 } };
    kruskalAlgo(5, edge);
    return 0;
}
Output
Following are the edges in the constructed MST
2 -- 3 == 4
0 -- 3 == 5
0 -- 1 == 10
Minimum Cost Spanning Tree: 19
Time Complexity:
O(E * logE) or O(E * logV)
• Sorting of edges takes O(E * logE) time.
• After sorting, we iterate through all edges and apply the find-union algorithm.
The find and union operations can take at most O(logV) time.
• So overall complexity is O(E * logE + E * logV) time.
• The value of E can be at most O(V^2), so O(logV) and O(logE) are the same.
Therefore, the overall time complexity is O(E * logE) or O(E*logV)
Auxiliary Space: O(V + E), where V is the number of vertices and E is the number of
edges in the graph.
Prim's Algorithm
This algorithm always starts with a single node and moves through several adjacent nodes,
in order to explore all of the connected edges along the way.
The algorithm starts with an empty spanning tree. The idea is to maintain two sets of vertices.
The first set contains the vertices already included in the MST, and the other set contains the
vertices not yet included. At every step, it considers all the edges that connect the two sets
and picks the minimum weight edge from these edges. After picking the edge, it moves the
other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called cut in graph theory. So,
at every step of Prim’s algorithm, find a cut, pick the minimum weight edge from the cut,
and include this vertex in MST Set (the set that contains already included vertices).
The working of Prim’s algorithm can be described by using the following steps:
Step 1: Determine an arbitrary vertex as the starting vertex of the MST.
Step 2: Follow steps 3 to 5 till there are vertices that are not included in the MST (known as
fringe vertex).
Step 3: Find edges connecting any tree vertex with the fringe vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any cycle.
Step 6: Return the MST and exit
Note: For determining a cycle, we can divide the vertices into two sets [one set contains the
vertices included in MST and the other contains the fringe vertices.]
Illustration of Prim’s Algorithm:
Consider the following graph as an example for which we need to find the Minimum Spanning
Tree (MST).
Step 1: Firstly, we select an arbitrary vertex that acts as the starting vertex of the Minimum
Spanning Tree. Here we have selected vertex 0 as the starting vertex.
Step 2: All the edges connecting the incomplete MST and other vertices are the edges {0, 1}
and {0, 7}. Between these two the edge with minimum weight is {0, 1}. So include the edge
and vertex 1 in the MST.
Step 4: The edges that connect the incomplete MST with the fringe vertices are {1, 2}, {7,
6} and {7, 8}. Add the edge {7, 6} and the vertex 6 in the MST as it has the least weight
(i.e., 1).
Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include edge {6, 5}
and vertex 5 in the MST as the edge has the minimum weight (i.e., 2) among them.
Step 7: The connecting edges between the incomplete MST and the other edges are {2, 8},
{2, 3}, {5, 3} and {5, 4}. The edge with minimum weight is edge {2, 8} which has weight 2.
So include this edge and the vertex 8 in the MST.
Step 8: See here that the edges {7, 8} and {2, 3} both have same weight which are
minimum. But 7 is already part of MST. So we will consider the edge {2, 3} and include
that edge and vertex 3 in the MST.
The final structure of the MST is as follows and the weight of the edges of the MST is (4 + 8
+ 1 + 2 + 4 + 2 + 7 + 9) = 37.
Note: If we had selected the edge {1, 2} in the third step then the MST would look like the
following.
Structure of the alternate MST if we had selected edge {1, 2} in the MST
How to implement Prim’s Algorithm?
Follow the given steps to utilize the Prim’s Algorithm mentioned above for finding MST of
a graph:
• Create a set mstSet that keeps track of vertices already included in MST.
• Assign a key value to all vertices in the input graph. Initialize all key values as
INFINITE. Assign the key value as 0 for the first vertex so that it is picked first.
• While mstSet doesn’t include all vertices
• Pick a vertex u that is not there in mstSet and has a minimum key
value.
• Include u in the mstSet.
• Update the key value of all adjacent vertices of u. To update the key
values, iterate through all adjacent vertices.
• For every adjacent vertex v, if the weight of edge u-v is less than the
previous key value of v, update the key value as the weight of u-v.
The idea of using key values is to pick the minimum weight edge from the cut. The key values
are used only for vertices that are not yet included in MST, the key value for these vertices
indicates the minimum weight edges connecting them to the set of vertices included in MST.
Source code:
// A C program for Prim's Minimum Spanning Tree (MST) algorithm. The program is
// for adjacency matrix representation of the graph
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph
#define V 5

// A utility function to find the vertex with minimum key value, from the set of vertices
// not yet included in MST
int minKey(int key[], bool mstSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;
    return min_index;
}

// A utility function to print the constructed MST stored in parent[]
void printMST(int parent[], int graph[V][V])
{
    printf("Edge \tWeight\n");
    for (int i = 1; i < V; i++)
        printf("%d-%d \t%d\n", parent[i], i, graph[i][parent[i]]);
}

// Function to construct and print MST for a graph represented using adjacency matrix
// representation.
void primMST(int graph[V][V])
{
    // Array to store constructed MST
    int parent[V];
    // Key values used to pick minimum weight edge in cut
    int key[V];
    // To represent set of vertices included in MST
    bool mstSet[V];

    // Initialize all keys as INFINITE and mstSet[] as false
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    // Always include the first vertex in MST: make its key 0 so that it is picked first
    key[0] = 0;
    parent[0] = -1; // first node is always the root of the MST

    // The MST will have V vertices
    for (int count = 0; count < V - 1; count++) {
        // Pick the minimum key vertex from the set of vertices not yet included in MST.
        int u = minKey(key, mstSet);
        mstSet[u] = true;

        // Update key value and parent index of the adjacent vertices of the picked vertex.
        // Consider only those vertices which are not yet included in MST
        for (int v = 0; v < V; v++)
            // graph[u][v] is non zero only for adjacent vertices of u; mstSet[v] is false
            // for vertices not yet included in MST. Update the key only if graph[u][v] is
            // smaller than key[v]
            if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }
    printMST(parent, graph);
}

// Driver's code
int main()
{
    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };
    primMST(graph);
    return 0;
}
Output
Edge Weight
0-1 2
1-2 3
0-3 6
1-4 5
Result:
EXPERIMENT-5
AIM:
Develop a program and measure the Running Time for Estimating Single Source Shortest
Paths with Greedy Method.
Theory:
Find Shortest Paths from Source to all Vertices using Dijkstra’s Algorithm
Given a graph and a source vertex in the graph, find the shortest paths from the source to
all vertices in the given graph.
Examples:
Input: src = 0, the graph is shown below.
Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8
Step 1:
• The set sptSet is initially empty and the distances assigned to the vertices are {0, INF,
INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinity.
• Now pick the vertex with a minimum distance value. The vertex 0 is picked, include
it in sptSet. So sptSet becomes {0}. After including 0 to sptSet, update distance
values of its adjacent vertices.
• Adjacent vertices of 0 are 1 and 7. The distance values of 1 and 7 are updated as
4 and 8.
The following subgraph shows vertices and their distance values, only the vertices with finite
distance values are shown. The vertices included in SPT are shown in green colour.
Step 2:
• Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). The vertex 1 is picked and added to sptSet.
• So sptSet now becomes {0, 1}. Update the distance values of adjacent vertices of 1.
• The distance value of vertex 2 becomes 12.
Step 3:
• Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}.
• Update the distance values of adjacent vertices of 7. The distance value of vertex
6 and 8 becomes finite (15 and 9 respectively).
Step 4:
• Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}.
• Update the distance values of adjacent vertices of 6. The distance value of vertex
5 and 8 are updated.
We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we
get the following Shortest Path Tree (SPT).
Source Code:
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the example graph
#define V 9

// A utility function to find the vertex with minimum distance value, from the set of
// vertices not yet included in the shortest path tree
int minDistance(int dist[], bool sptSet[])
{
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (sptSet[v] == false && dist[v] <= min)
            min = dist[v], min_index = v;
    return min_index;
}

// A utility function to print the constructed distance array
void printSolution(int dist[])
{
    printf("Vertex \t Distance from Source\n");
    for (int i = 0; i < V; i++)
        printf("%d \t\t %d\n", i, dist[i]);
}

// Function that implements Dijkstra's single source shortest path algorithm for a graph
// represented using adjacency matrix representation
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // The output array. dist[i] will hold the shortest
                    // distance from src to i
    bool sptSet[V]; // sptSet[i] is true if vertex i is included in the
                    // shortest path tree

    // Initialize all distances as INFINITE and sptSet[] as false
    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX, sptSet[i] = false;

    // Distance of the source vertex from itself is always 0
    dist[src] = 0;

    // Find the shortest path for all vertices
    for (int count = 0; count < V - 1; count++) {
        // Pick the minimum distance vertex not yet processed
        int u = minDistance(dist, sptSet);
        sptSet[u] = true;

        // Update dist[] of the adjacent vertices of the picked vertex
        for (int v = 0; v < V; v++)
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    printSolution(dist);
}
// driver's code
int main()
{
/* Let us create the example graph discussed above */
int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
// Function call
dijkstra(graph, 0);
return 0;
}
Output:
Vertex Distance from Source
0 0
1 4
2 12
3 19
4 21
5 11
6 9
7 8
8 14
Result:
EXPERIMENT-6
AIM:
Develop a program and Measure the Running Time for Optimal Binary Search Trees with
Dynamic Programming.
Theory:
Optimal Binary Search Tree
An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search Tree,
is a binary search tree that minimizes the expected search cost. In a binary search tree, the
search cost is the number of comparisons required to search for a given key.
In an OBST, each node is assigned a weight that represents the probability of the key being
searched for. The sum of all the weights in the tree is 1.0. The expected search cost of a node
is the sum of the product of its depth and weight, and the expected search cost of its children.
To construct an OBST, we start with a sorted list of keys and their probabilities. We then
build a table that contains the expected search cost for all possible sub-trees of the original
list. We can use dynamic programming to fill in this table efficiently. Finally, we use this
table to construct the OBST.
The time complexity of constructing an OBST is O(n^3), where n is the number of keys.
However, with some optimizations, we can reduce the time complexity to O(n^2). Once the
OBST is constructed, the time complexity of searching for a key is O(log n), the same as for
a regular binary search tree.
The OBST is a useful data structure in applications where the keys have different
probabilities of being searched for. It can be used to improve the efficiency of searching and
retrieval operations in databases, compilers, and other computer programs.
Given a sorted array key [0.. n-1] of search keys and an array freq[0.. n-1] of frequency
counts, where freq[i] is the number of searches for keys[i]. Construct a binary search tree of
all keys such that the total cost of all the searches is as small as possible.
Let us first define the cost of a BST. The cost of a BST node is the level of that node
multiplied by its frequency. The level of the root is 1.
Examples:
Input: keys[] = {10, 12}, freq[] = {34, 50}
There can be the following two possible BSTs:
I: root 10 with right child 12
II: root 12 with left child 10
Frequency of searches of 10 and 12 are 34 and 50 respectively.
The cost of tree I is 34*1 + 50*2 = 134
The cost of tree II is 50*1 + 34*2 = 118
Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
There can be the following possible BSTs:
I: 10 with right child 12, which has right child 20 (right-skewed chain)
II: root 12 with left child 10 and right child 20
III: 20 with left child 12, which has left child 10 (left-skewed chain)
IV: root 10 with right child 20, which has left child 12
V: root 20 with left child 10, which has right child 12
Among all possible BSTs, the cost of the fifth BST is minimum.
Cost of the fifth BST is 1*50 + 2*34 + 3*8 = 142
1) Optimal Substructure:
The optimal cost for freq[i..j] can be recursively calculated using the following formula:
optCost(i, j) = Σ freq[k] for k = i..j + min over r in [i..j] of [ optCost(i, r-1) + optCost(r+1, j) ]
Source Code:
// The main function that calculates minimum cost of a Binary Search Tree. It mainly uses
// optCost() to find the optimal cost.
Output
Cost of Optimal BST is 142
Result:
EXPERIMENT-7
AIM:
Develop a program and Measure the Running Time for Identifying Solution for Travelling
Salesperson Problem with Dynamic Programming.
Theory:
Travelling Salesman Problem using Dynamic Programming
Given a set of cities and the distance between every pair of cities, the problem is to find the
shortest possible route that visits every city exactly once and returns to the starting point.
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is
to find if there exists a tour that visits every city exactly once. Here we know that Hamiltonian
Tour exists (because the graph is complete) and in fact, many such tours exist, the problem
is to find a minimum weight Hamiltonian Cycle.
For example, consider the graph shown in the figure on the right side. A TSP tour in the graph
is 1-2-4-3-1. The cost of the tour is 10+25+30+15 which is 80. The problem is a famous NP-
hard problem. There is no polynomial-time know solution for this problem. The following
are different solutions for the traveling salesman problem.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! Permutations of cities.
3) Calculate the cost of every permutation and keep track of the minimum cost
permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)
Dynamic Programming:
Let the given set of vertices be {1, 2, 3, 4, …, n}. Let us consider 1 as the starting and ending
point of the output. For every other vertex i (other than 1), we find the minimum cost path with
1 as the starting point, i as the ending point, and all vertices appearing exactly once. Let the
cost of this path be cost(i); the cost of the corresponding cycle would then be cost(i) + dist(i, 1),
where dist(i, 1) is the distance from i to 1. Finally, we return the minimum of all [cost(i) +
dist(i, 1)] values. This looks simple so far.
Now the question is how to get cost(i)? To calculate the cost(i) using Dynamic Programming,
we need to have some recursive relation in terms of sub-problems.
Let us define a term C(S, i) be the cost of the minimum cost path visiting each vertex in set S
exactly once, starting at 1 and ending at i. We start with all subsets of size 2 and calculate
C(S, i) for all subsets where S is the subset, then we calculate C(S, i) for all subsets S of size
3 and so on. Note that 1 must be present in every subset.
If size of S is 2, then S must be {1, i},
C(S, i) = dist(1, i)
Else if size of S is greater than 2.
C(S, i) = min { C(S-{i}, j) + dis(j, i)} where j belongs to S, j != i and j != 1.
Below is the dynamic programming solution for the problem using the top-down recursive +
memoized approach.
For maintaining the subsets we can use bitmasks to represent the remaining nodes in our
subset. Since bits are fast to operate on and there are only a few nodes in the graph, bitmasks
are well suited.
For example: –
10100 represents node 2 and node 4 are left in set to be processed
010010 represents node 1 and 4 are left in subset.
NOTE:- ignore the 0th bit since our graph is 1-based.
Source Code:
#include <stdio.h>

int tsp_g[10][10] = {
    {12, 30, 33, 10, 45},
    {56, 22, 9, 15, 18},
    {29, 13, 8, 5, 12},
    {33, 28, 16, 10, 3},
    {1, 4, 30, 24, 20}
};
int visited[10], n, cost = 0;

/* Visit the nearest unvisited city from city c. This function was
   missing from the listing and is reconstructed here; note that it
   implements the greedy nearest-neighbour heuristic rather than the
   dynamic programming formulation described above */
void travellingsalesman(int c) {
    int k, adj_vertex = 999;
    int min = 999;
    visited[c] = 1;
    printf("%d ", c + 1);
    for (k = 0; k < n; k++) {
        if ((tsp_g[c][k] != 0) && (visited[k] == 0)) {
            if (tsp_g[c][k] < min) {
                min = tsp_g[c][k];
                adj_vertex = k;
            }
        }
    }
    if (min != 999) {
        cost = cost + min;
    }
    if (adj_vertex == 999) {
        /* no unvisited city left: return to the starting city */
        adj_vertex = 0;
        printf("%d", adj_vertex + 1);
        cost = cost + tsp_g[c][adj_vertex];
        return;
    }
    travellingsalesman(adj_vertex);
}

/* main function */
int main() {
    int i;
    n = 5;
    for (i = 0; i < n; i++) {
        visited[i] = 0;
    }
    printf("\n\nShortest Path:\t");
    travellingsalesman(0);
    printf("\n\nMinimum Cost: \t");
    printf("%d\n", cost);
    return 0;
}
Output:
Shortest Path: 1 4 5 2 3 1
Minimum Cost: 55
Time Complexity: O(n^2 * 2^n) for the dynamic programming solution, where O(n * 2^n) is the
maximum number of unique subproblems/states and O(n) is for the transition (through the for
loop, as in the code) in every state.
Result:
EXPERIMENT-8
AIM:
Develop a program and Measure the Running Time for Identifying Solution for 8-Queens
Problem with Backtracking.
Theory:
The eight queens problem is the problem of placing eight queens on an 8×8 chessboard such
that none of them attack one another (no two are in the same row, column, or diagonal). More
generally, the n queens problem places n queens on an n×n chessboard. There are different
solutions for the problem. Backtracking | Set 3 (N Queen Problem) Branch and Bound | Set
5 (N Queen Problem). You can find detailed solutions
at http://en.literateprograms.org/Eight_queens_puzzle_(C)
Explanation:
• This pseudocode uses a backtracking algorithm to find a solution to the 8
Queen problem, which consists of placing 8 queens on a chessboard in such a way
that no two queens threaten each other.
• The algorithm starts by placing a queen on the first column, then it proceeds to
the next column and places a queen in the first safe row of that column.
• If the algorithm reaches the 8th column and all queens are placed in a safe
position, it prints the board and returns true.
• If the algorithm is unable to place a queen in a safe position in a certain column,
it backtracks to the previous column and tries a different row.
• The “isSafe” function checks if it is safe to place a queen on a certain row and
column by checking if there are any queens in the same row, diagonal or anti-
diagonal.
• It's worth noticing that this is just high-level pseudocode and it might need to
be adapted depending on the specific implementation and language you are using.
Source Code:
/* C program to solve N Queen Problem using backtracking */
#define N 4
#include <stdbool.h>
#include <stdio.h>
/* A utility function to print solution */
void printSolution(int board[N][N])
{
for (int i = 0; i < N; i++) {
for (int j = 0; j < N; j++)
printf(" %d ", board[i][j]);
printf("\n");
}
}
/* A utility function to check if a queen can be placed on board[row][col]. Note that this
function is called when "col" queens are already placed in columns from 0 to col -1. So we
need to check only left side for attacking queens */
bool isSafe(int board[N][N], int row, int col)
{
int i, j;
/* Check this row on left side */
for (i = 0; i < col; i++)
if (board[row][i])
return false;
/* Check upper diagonal on left side */
for (i = row, j = col; i >= 0 && j >= 0; i--, j--)
if (board[i][j])
return false;
/* Check lower diagonal on left side */
for (i = row, j = col; j >= 0 && i < N; i++, j--)
if (board[i][j])
return false;
return true;
}
/* A recursive utility function to solve N Queen problem */
bool solveNQUtil(int board[N][N], int col)
{
/* base case: If all queens are placed then return true */
if (col >= N)
return true;
/* Consider this column and try placing this queen in all rows one by one */
for (int i = 0; i < N; i++) {
/* Check if the queen can be placed on board[i][col] */
if (isSafe(board, i, col)) {
/* Place this queen in board[i][col] */
board[i][col] = 1;
/* recur to place rest of the queens */
if (solveNQUtil(board, col + 1))
return true;
/* If placing queen in board[i][col] doesn't lead to a solution, then remove queen from
board[i][col] */
board[i][col] = 0; // BACKTRACK
}
}
/* If the queen cannot be placed in any row in this column col then return false */
return false;
}
/* This function solves the N Queen problem using Backtracking. It mainly uses
solveNQUtil() to solve the problem. It returns false if queens cannot be placed, otherwise,
return true and prints placement of queens in the form of 1s. Please note that there may be
more than one solutions, this function prints one of the feasible solutions.*/
bool solveNQ()
{
int board[N][N] = { { 0, 0, 0, 0 },
{ 0, 0, 0, 0 },
{ 0, 0, 0, 0 },
{ 0, 0, 0, 0 } };
if (solveNQUtil(board, 0) == false) {
printf("Solution does not exist");
return false;
}
printSolution(board);
return true;
}
// driver program to test above function
int main()
{
solveNQ();
return 0;
}
Output
 0  0  1  0
 1  0  0  0
 0  0  0  1
 0  1  0  0
Result:
EXPERIMENT-9
AIM:
Develop a program and Measure the Running Time for Graph Colouring with Backtracking.
Theory:
m Coloring Problem
Given an undirected graph and a number m, determine if the graph can be colored
with at most m colors such that no two adjacent vertices of the graph are colored with
the same color
Note: Here coloring of a graph means the assignment of colors to all vertices
Following is an example of a graph that can be colored with 3 different colors:
Examples:
Input: graph = {0, 1, 1, 1},
{1, 0, 1, 0},
{1, 1, 0, 1},
{1, 0, 1, 0}
Output: Solution Exists: Following are the assigned colors: 1 2 3 2
Explanation: By coloring the vertices with the following colors, no two adjacent vertices
have the same color
Input: graph = {1, 1, 1, 1},
{1, 1, 1, 1},
{1, 1, 1, 1},
{1, 1, 1, 1}
Output: Solution does not exist
Explanation: No solution exists
Source Code:
#include <stdbool.h>
#include <stdio.h>
#define V 4
void printSolution(int color[]);
/* A utility function to check if the current color assignment is safe:
no edge may join two vertices of the same color */
bool isSafe(bool graph[V][V], int color[])
{
for (int i = 0; i < V; i++)
for (int j = i + 1; j < V; j++)
if (graph[i][j] && color[j] == color[i])
return false;
return true;
}
/* This function solves the m Coloring problem using recursion. It returns false if the
m colours cannot be assigned, otherwise, return true and prints assignments of colours
to all vertices. Please note that there may be more than one solutions, this function
prints one of the feasible solutions.*/
bool graphColoring(bool graph[V][V], int m, int i,
int color[V])
{
if (i == V) {
// if coloring is safe
if (isSafe(graph, color)) {
printSolution(color);
return true;
}
return false;
}
// Assign each color from 1 to m to vertex i and recur for the rest
for (int j = 1; j <= m; j++) {
color[i] = j;
if (graphColoring(graph, m, i + 1, color))
return true;
color[i] = 0;
}
return false;
}
/* A utility function to print solution */
void printSolution(int color[])
{
printf("Solution Exists:"
" Following are the assigned colors\n");
for (int i = 0; i < V; i++)
printf(" %d", color[i]);
printf("\n");
}
// Driver code
int main()
{
/* Create the following graph and test whether it is 3 colorable
   (3)---(2)
    |   / |
    |  /  |
    | /   |
   (0)---(1)
*/
bool graph[V][V] = {
{ 0, 1, 1, 1 },
{ 1, 0, 1, 0 },
{ 1, 1, 0, 1 },
{ 1, 0, 1, 0 },
};
int m = 3; // Number of colors
int color[V];
for (int i = 0; i < V; i++)
color[i] = 0;
// Function call
if (!graphColoring(graph, m, 0, color))
printf("Solution does not exist\n");
return 0;
}
Output
Solution Exists: Following are the assigned colors
1 2 3 2
Time Complexity: O(m^V). In the worst case the recursion enumerates every assignment
of m colors to V vertices.
Auxiliary Space: O(V). The recursive stack of the graphColoring(...) function requires
O(V) space.
Result:
EXPERIMENT-10
AIM: Develop a program and measure the running time to generate solution of Hamiltonian
Cycle problem with Backtracking.
Theory:
Hamiltonian Cycle using Backtracking
A Hamiltonian cycle of an undirected graph G = (V, E) is a cycle containing each vertex
in V exactly once. If a graph contains a Hamiltonian cycle, it is called a Hamiltonian
graph; otherwise it is non-Hamiltonian.
Finding a Hamiltonian cycle in a graph is a well-known problem with many real-world
applications, such as in network routing and scheduling.
Hamiltonian Path in an undirected graph is a path that visits each vertex exactly once.
A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian Path such that there is an edge
(in the graph) from the last vertex to the first vertex of the Hamiltonian Path. Determine
whether a given graph contains Hamiltonian Cycle or not. If it contains, then prints the path.
Create an empty path array and add vertex 0 to it. Add the other vertices, starting from
vertex 1. Before adding a vertex, check whether it is adjacent to the previously added
vertex and not already added. If such a vertex is found, add it as part of the solution;
if no such vertex is found, return false.
Source code:
/* C program for solution of Hamiltonian Cycle problem using backtracking */
#include <stdio.h>
#include <stdbool.h>
// Number of vertices in the graph
#define V 5
void printSolution(int path[]);
/* A utility function to check if the vertex v can be added at index 'pos' in the Hamiltonian
Cycle constructed so far (stored in 'path[]') */
bool isSafe(int v, bool graph[V][V], int path[], int pos)
{
/* Check if this vertex is an adjacent vertex of the previously added vertex. */
if (graph [ path[pos-1] ][ v ] == 0)
return false;
/* Check if the vertex has already been included. This step can be optimized by creating an
array of size V */
for (int i = 0; i < pos; i++)
if (path[i] == v)
return false;
return true;
}
/* A recursive utility function to solve hamiltonian cycle problem */
bool hamCycleUtil(bool graph[V][V], int path[], int pos)
{
/* base case: If all vertices are included in Hamiltonian Cycle */
if (pos == V)
{
// And if there is an edge from the last included vertex to the first vertex
if ( graph[ path[pos-1] ][ path[0] ] == 1 )
return true;
else
return false;
}
// Try different vertices as a next candidate in Hamiltonian Cycle.
// We don't try for 0 as we included 0 as starting point in hamCycle()
for (int v = 1; v < V; v++)
{
/* Check if this vertex can be added to Hamiltonian Cycle */
if (isSafe(v, graph, path, pos))
{
path[pos] = v;
/* recur to construct rest of the path */
if (hamCycleUtil (graph, path, pos+1) == true)
return true;
/* If adding vertex v doesn't lead to a solution, then remove it */
path[pos] = -1;
}
}
/* If no vertex can be added to Hamiltonian Cycle constructed so far,
then return false */
return false;
}
/* This function solves the Hamiltonian Cycle problem using Backtracking. It mainly uses
hamCycleUtil() to solve the problem. It returns false if there is no Hamiltonian Cycle possible,
otherwise return true and prints the path. Please note that there may be more than one solutions,
this function prints one of the feasible solutions. */
bool hamCycle(bool graph[V][V])
{
int path[V];
for (int i = 0; i < V; i++)
path[i] = -1;
/* Let us put vertex 0 as the first vertex in the path. If there is a Hamiltonian Cycle, then the
path can be started from any point of the cycle as the graph is undirected */
path[0] = 0;
if ( hamCycleUtil(graph, path, 1) == false )
{
printf("\nSolution does not exist");
return false;
}
printSolution(path);
return true;
}
/* A utility function to print solution */
void printSolution(int path[])
{
printf ("Solution Exists:"
" Following is one Hamiltonian Cycle \n");
for (int i = 0; i < V; i++)
printf(" %d ", path[i]);
// Let us print the first vertex again to show the complete cycle
printf(" %d ", path[0]);
printf("\n");
}
// driver program to test above function
int main()
{
/* Let us create the following graph
   (0)--(1)--(2)
    |   / \   |
    |  /   \  |
    | /     \ |
   (3)-------(4) */
bool graph1[V][V] = {{0, 1, 0, 1, 0},
{1, 0, 1, 1, 1},
{0, 1, 0, 0, 1},
{1, 1, 0, 0, 1},
{0, 1, 1, 1, 0},
};
// Print the solution
hamCycle(graph1);
/* Let us create the following graph
   (0)--(1)--(2)
    |   / \   |
    |  /   \  |
    | /     \ |
   (3)       (4) */
bool graph2[V][V] = {{0, 1, 0, 1, 0},
{1, 0, 1, 1, 1},
{0, 1, 0, 0, 1},
{1, 1, 0, 0, 0},
{0, 1, 1, 0, 0},
};
// Print the solution
hamCycle(graph2);
return 0;
}
Output:
Solution Exists: Following is one Hamiltonian Cycle
 0  1  2  4  3  0
Solution does not exist
Result:
EXPERIMENT-11
AIM:
Develop a program and Measure the Running Time to Generate Solution of Knapsack with
Backtracking.
Theory:
0/1 Knapsack Problem
We are given N items where each item has some weight and profit associated with it.
We are also given a bag with capacity W, [i.e., the bag can hold at most W weight in it]. The
target is to put the items into the bag such that the sum of profits associated with them is the
maximum possible.
Note: The constraint here is we can either put an item completely into the bag or cannot put
it at all [It is not possible to put a part of an item into the bag].
Examples:
Input: N = 3, W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}
Output: 3
Explanation: There are two items which have weight less than or equal to 4. If we select
the item with weight 4, the possible profit is 1. And if we select the item with weight 1, the
possible profit is 3. So the maximum possible profit is 3. Note that we cannot put both the
items with weight 4 and 1 together as the capacity of the bag is 4.
Input: N = 3, W = 3, profit[] = {1, 2, 3}, weight[] = {4, 5, 6}
Output: 0
Source code:
/* A Naive recursive implementation of 0-1 Knapsack problem */
#include <stdio.h>
// A utility function that returns maximum of two integers
int max(int a, int b) { return (a > b) ? a : b; }
// Returns the maximum value that can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
// Base Case
if (n == 0 || W == 0)
return 0;
// If weight of the nth item is more than Knapsack capacity W, then this item cannot be
// included in the optimal solution
if (wt[n - 1] > W)
return knapSack(W, wt, val, n - 1);
// Return the maximum of two cases: // (1) nth item included // (2) not included
else
return max(
val[n - 1]
+ knapSack(W - wt[n - 1], wt, val, n - 1),
knapSack(W, wt, val, n - 1));
}
// Driver code
int main()
{
int profit[] = { 60, 100, 120 };
int weight[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(profit) / sizeof(profit[0]);
printf("%d", knapSack(W, weight, profit, n));
return 0;
}
Output
220
Time Complexity: O(2^N)
Auxiliary Space: O(N), stack space required for recursion
Result: