
AEC Mod Lab

The document outlines experiments on divide and conquer algorithms: Binary Search, Merge Sort, and Quick Sort. Each experiment includes the aim, theory, source code, output, and an analysis of time complexity. The results demonstrate the effectiveness of these algorithms, with respective complexities of O(log n) for Binary Search and O(n log n) for both Merge Sort and Quick Sort.

Uploaded by

Sunny Midde


EXPERIMENT-1

AIM:
Develop a program and measure the Running Time for Binary Search with Divide and Conquer.
THEORY:
Explanation
1. First take 3 variables in which the first index, last index, and middle index of the
array/sub-array are stored, and use another variable to store the integer to be found.
int low = 0, high = 9, mid, x = 3;
2. Run a while-loop that continues as long as the low variable is less than or equal to the high variable.
3. Within the while-loop, assign the mid variable the middle index:
mid = (low + high) / 2
4. Now, check whether x is greater than, less than, or equal to a[mid]:
• If x > a[mid], x is on the right side of the middle index, so we set low = mid + 1.
• If x < a[mid], x is on the left side of the middle index, so we set high = mid - 1.
• Otherwise x == a[mid], which means x is found; return its index.
1,3,5,7,9,11,13,15,17,21
SOURCE CODE:
#include <stdio.h>

int binary_search(int A[], int key, int len) {
    int low = 0;
    int high = len - 1;
    while (low <= high) {
        int mid = low + ((high - low) / 2);
        if (A[mid] == key) {
            return mid;
        }
        if (key < A[mid]) {
            high = mid - 1;
        }
        else {
            low = mid + 1;
        }
    }
    return -1;
}

int main()
{
    int a[10] = {1, 3, 5, 7, 9, 11, 13, 15, 17, 21};
    int key = 3;
    int position = binary_search(a, key, 10);
    if (position == -1)
    {
        printf("Not found");
        return 0;
    }
    printf("Found it at %d", position);
    return 0;
}

OUTPUT:
Found it at 1
RECURSION METHOD:
SOURCE CODE:
#include <stdio.h>

// A recursive binary search function. It returns the location of x
// if x is present in arr[l..r], otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l)
    {
        int mid = l + (r - l) / 2;

        // If the element is present at the middle itself
        if (arr[mid] == x) return mid;

        // If the element is smaller than the middle element,
        // it can only be present in the left subarray
        if (arr[mid] > x) return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present in the right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // We reach here when the element is not present in the array
    return -1;
}

int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 10;
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1) ? printf("Element is not present in array")
                   : printf("Element is present at index %d", result);
    return 0;
}

OUTPUT:
Element is present at index 3
RECURRENCE RELATION:
T(n) = {
1 if n = 1
T(n/2) + 1 if n > 1
}
Consider T(n) = T(n/2) + 1 -------(1)
Now substitute n/2 for n in equation (1).
Then we get
T(n) = (T(n/4) + 1) + 1
= T(n/4) + 2 ------(2)
= T(n/8) + 3 ------(3)
= T(n/16) + 4 ------(4)
.
.
.
= T(n/2^k) + k
Put n/2^k = 1, then n = 2^k
k = log n
T(n) = 1 + log n
Hence the time complexity is O(log n)

Result:
EXPERIMENT-2
AIM:
Develop a program and measure the Running Time for Merge Sort with Divide and Conquer
THEORY:
Merge sort is a sorting technique based on divide and conquer technique. With worst-case time
complexity being Ο(n log n), it is one of the most respected algorithms.
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
How Does Merge Sort Work?
To understand merge sort, we take an unsorted array as the following –

We know that merge sort first divides the whole array iteratively into equal halves until
atomic values are reached. We see here that an array of 8 items is divided into two arrays of
size 4.

This does not change the sequence of appearance of items in the original. Now we divide these
two arrays into halves.

We further divide these arrays until we reach atomic values, which can no longer be divided.

Now, we combine them in exactly the same manner as they were broken down. Please note the
color codes given to these lists.
We first compare the elements of each list and then combine them into another list in sorted
order. We see that 14 and 33 are in sorted positions. We compare 27 and 10, and in the target
list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35, whereas 42
and 44 are placed sequentially.

In the next iteration of the combining phase, we compare lists of two data values and merge
them into a list of four data values, placing all in sorted order.
After the final merging, the list should look like this −

Now we should learn some programming aspects of merge sorting.


Algorithm
Merge sort keeps on dividing the list into equal halves until it can no longer be divided. By
definition, if there is only one element in the list, it is sorted. Then, merge sort combines the
smaller sorted lists, keeping the new list sorted too.
Step 1 − If there is only one element in the list, it is already sorted; return.
Step 2 − Divide the list recursively into two halves until it can no longer be divided.
Step 3 − Merge the smaller lists into a new list in sorted order.
Pseudocode
We shall now see the pseudocode for the merge sort functions. As our algorithm points out, there are two
main functions: divide and merge.
Merge sort works with recursion, and we shall see our implementation in the same way.

procedure mergesort( var a as array )
    if ( n == 1 ) return a

    var l1 as array = a[0] ... a[n/2]
    var l2 as array = a[n/2+1] ... a[n]

    l1 = mergesort( l1 )
    l2 = mergesort( l2 )

    return merge( l1, l2 )
end procedure

procedure merge( var a as array, var b as array )
    var c as array

    while ( a and b have elements )
        if ( a[0] > b[0] )
            add b[0] to the end of c
            remove b[0] from b
        else
            add a[0] to the end of c
            remove a[0] from a
        end if
    end while

    while ( a has elements )
        add a[0] to the end of c
        remove a[0] from a
    end while

    while ( b has elements )
        add b[0] to the end of c
        remove b[0] from b
    end while

    return c
end procedure

Source Code:
#include <stdio.h>

// Merge two subarrays L and M into arr
void merge(int arr[], int p, int q, int r) {

    // Create L ← A[p..q] and M ← A[q+1..r]
    int n1 = q - p + 1;
    int n2 = r - q;
    int L[n1], M[n2];

    for (int i = 0; i < n1; i++)
        L[i] = arr[p + i];
    for (int j = 0; j < n2; j++)
        M[j] = arr[q + 1 + j];

    // Maintain current index of sub-arrays and main array
    int i, j, k;
    i = 0;
    j = 0;
    k = p;

    // Until we reach the end of either L or M, pick the smaller among the
    // elements of L and M and place it in the correct position at A[p..r]
    while (i < n1 && j < n2) {
        if (L[i] <= M[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = M[j];
            j++;
        }
        k++;
    }

    // When we run out of elements in either L or M,
    // pick up the remaining elements and put them in A[p..r]
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = M[j];
        j++;
        k++;
    }
}

// Divide the array into two subarrays, sort them and merge them
void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        // m is the point where the array is divided into two subarrays
        int m = l + (r - l) / 2;

        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        // Merge the sorted subarrays
        merge(arr, l, m, r);
    }
}

// Print the array
void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

// Driver program
int main() {
    int arr[] = {6, 5, 12, 10, 9, 1};
    int size = sizeof(arr) / sizeof(arr[0]);
    mergeSort(arr, 0, size - 1);
    printf("Sorted array: \n");
    printArray(arr, size);
    return 0;
}
OUTPUT:
Sorted array:
1 5 6 9 10 12

RECURSION METHOD:
#include <stdio.h>
#define max 10

int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[11]; // b must hold indices 0..10, the same range as a

void merging(int low, int mid, int high) {
    int l1, l2, i;
    for (l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
        if (a[l1] <= a[l2])
            b[i] = a[l1++];
        else
            b[i] = a[l2++];
    }
    while (l1 <= mid)
        b[i++] = a[l1++];
    while (l2 <= high)
        b[i++] = a[l2++];
    for (i = low; i <= high; i++)
        a[i] = b[i];
}

void sort(int low, int high) {
    int mid;
    if (low < high) {
        mid = (low + high) / 2;
        sort(low, mid);
        sort(mid + 1, high);
        merging(low, mid, high);
    }
}

int main() {
    int i;
    printf("List before sorting\n");
    for (i = 0; i <= max; i++)
        printf("%d ", a[i]);
    sort(0, max);
    printf("\nList after sorting\n");
    for (i = 0; i <= max; i++)
        printf("%d ", a[i]);
    return 0;
}

OUTPUT:
List before sorting
10 14 19 26 27 31 33 35 42 44 0
List after sorting
0 10 14 19 26 27 31 33 35 42 44
Recurrence relation:
T(n) = {
1 if n = 1
2T(n/2) + n if n > 1
}
T(n) = 2T(n/2) + n ------------------(1)
= 4T(n/4) + 2n ------------------(2)
= 8T(n/8) + 3n ------------------(3)
.
.
.
= 2^k T(n/2^k) + kn -------------(4)
Assume n/2^k = 1, then n = 2^k
k = log n
Then
T(n) = n + n log n
Hence the time complexity is O(n log n)
Merge Sort Complexity:

Time Complexity
    Best       O(n log n)
    Worst      O(n log n)
    Average    O(n log n)
Space Complexity   O(n)
Stability          Yes


Result:
EXPERIMENT-3
AIM:
Develop a program and Measure the Running Time for Quick Sort with Divide and Conquer.
THEORY:
Quicksort Algorithm
Quicksort is a sorting algorithm based on the divide and conquer approach where
1. An array is divided into subarrays by selecting a pivot element (element selected from
the array).

While dividing the array, the pivot element should be positioned in such a way that elements
less than pivot are kept on the left side and elements greater than pivot are on the right side of
the pivot.
2. The left and right subarrays are also divided using the same approach. This process
continues until each subarray contains a single element.
3. At this point, elements are already sorted. Finally, elements are combined to form a
sorted array.
Working of Quicksort Algorithm
1. Select the Pivot Element
There are different variations of quicksort in which the pivot element is selected from different
positions. Here, we will select the rightmost element of the array as the pivot element.

2. Rearrange the Array
Now the elements of the array are rearranged so that elements that are smaller than the pivot
are put on the left and the elements greater than the pivot are put on the right.

Here's how we rearrange the array:

1. A pointer is fixed at the pivot element. The pivot element is compared with the
elements beginning from the first index.

2. If an element greater than the pivot element is found, a second pointer is set for that
element.

3. Now, the pivot is compared with the other elements. If an element smaller than the pivot
element is reached, the smaller element is swapped with the greater element found
earlier.

4. Again, the process is repeated to set the next greater element as the second pointer,
and swap it with another smaller element.

5. The process goes on until the second-to-last element is reached.

6. Finally, the pivot element is swapped with the second pointer.

3. Divide Subarrays
Pivot elements are again chosen for the left and the right sub-parts separately, and step 2 is
repeated. A pivot is selected in each half and put at its correct place using recursion.

The subarrays are divided until each subarray consists of a single element. At this point, the
array is already sorted.
Quick Sort Algorithm

quickSort(array, leftmostIndex, rightmostIndex)
    if (leftmostIndex < rightmostIndex)
        pivotIndex <- partition(array, leftmostIndex, rightmostIndex)
        quickSort(array, leftmostIndex, pivotIndex - 1)
        quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
    set rightmostIndex as pivotIndex
    storeIndex <- leftmostIndex - 1
    for i <- leftmostIndex to rightmostIndex - 1
        if element[i] < pivotElement
            storeIndex++
            swap element[i] and element[storeIndex]
    swap pivotElement and element[storeIndex + 1]
    return storeIndex + 1
Visual Illustration of Quicksort Algorithm

You can understand the working of the quicksort algorithm with the help of the
illustrations below.

Sorting the elements on the left of pivot using recursion

Sorting the elements on the right of pivot using recursion

SOURCE CODE:
#include <stdio.h>

int n;

void quick_sort(int arr[], int low, int high);

int main()
{
    int arr[30], l, r, i;
    printf("\nInput number of elements: ");
    scanf(" %d", &n);
    printf("\nInput array values one by one: ");
    for (i = 0; i < n; i++)
        scanf(" %d", &arr[i]);
    l = 0;
    r = n - 1;
    quick_sort(arr, l, r);
    printf("\nThe quick sorted array is: ");
    for (i = 0; i < n; i++)
        printf(" %d", arr[i]);
    printf("\n");
    return 0;
}

void quick_sort(int arr[], int low, int high)
{
    int temp, left, right, x;
    if (low >= high)
        return;
    x = arr[low];           /* pivot: first element of the subarray */
    right = low + 1;
    left = high;
    while (right <= left)
    {
        /* bounds are checked before arr[] is read, to avoid
           accessing elements outside the subarray */
        while (right <= high && arr[right] < x)
            right++;
        while (left > low && arr[left] > x)
            left--;
        if (right < left)
        {
            temp = arr[right];
            arr[right] = arr[left];
            arr[left] = temp;
            right++;
            left--;
        }
    }
    arr[low] = arr[left];
    arr[left] = x;
    quick_sort(arr, low, left - 1);
    quick_sort(arr, left + 1, high);
}

OUTPUT:
Input number of elements: 10
Input array values one by one:25
78
65
14
20
152
1
26
96
45
The quick sorted array is: 1 14 20 25 26 45 65 78 96 152
RECURSION METHOD:
#include <stdio.h>

// function to swap elements
void swap(int *a, int *b) {
    int t = *a;
    *a = *b;
    *b = t;
}

// function to find the partition position
int partition(int array[], int low, int high) {

    // select the rightmost element as pivot
    int pivot = array[high];

    // pointer for the greater element
    int i = (low - 1);

    // traverse each element of the array
    // and compare it with the pivot
    for (int j = low; j < high; j++) {
        if (array[j] <= pivot) {

            // if an element smaller than the pivot is found,
            // swap it with the greater element pointed to by i
            i++;

            // swap element at i with element at j
            swap(&array[i], &array[j]);
        }
    }

    // swap the pivot element with the greater element at i + 1
    swap(&array[i + 1], &array[high]);

    // return the partition point
    return (i + 1);
}

void quickSort(int array[], int low, int high) {
    if (low < high) {

        // find the pivot element such that
        // elements smaller than the pivot are on its left and
        // elements greater than the pivot are on its right
        int pi = partition(array, low, high);

        // recursive call on the left of pivot
        quickSort(array, low, pi - 1);

        // recursive call on the right of pivot
        quickSort(array, pi + 1, high);
    }
}

// function to print array elements
void printArray(int array[], int size) {
    for (int i = 0; i < size; ++i) {
        printf("%d ", array[i]);
    }
    printf("\n");
}

// main function
int main() {
    int data[] = {8, 7, 2, 1, 0, 9, 6};
    int n = sizeof(data) / sizeof(data[0]);

    printf("Unsorted Array\n");
    printArray(data, n);

    // perform quicksort on data
    quickSort(data, 0, n - 1);

    printf("Sorted array in ascending order: \n");
    printArray(data, n);
    return 0;
}
OUTPUT:
Unsorted Array
8 7 2 1 0 9 6
Sorted array in ascending order:
0 1 2 6 7 8 9
Quicksort Complexity

Time Complexity
    Best       O(n log n)
    Worst      O(n^2)
    Average    O(n log n)
Space Complexity   O(log n)
Stability          No

1. Time Complexities
• Worst Case Complexity [Big-O]: O(n^2)
It occurs when the pivot element picked is either the greatest or the smallest element.
This condition leads to the case in which the pivot element lies at an extreme end of the sorted
array: one subarray is always empty and the other subarray contains n - 1 elements, so
quicksort is called only on that subarray.
However, the quicksort algorithm has better performance for scattered pivots.
• Best Case Complexity [Big-omega]: O(n*log n)
It occurs when the pivot element is always the middle element or near to the middle element.
• Average Case Complexity [Big-theta]: O(n*log n)
It occurs when the above conditions do not occur.
2. Space Complexity
The space complexity for quicksort is O(log n).

Result:
EXPERIMENT-4
AIM:
Develop a program and measure the Running Time for estimating minimum-
cost spanning trees with greedy method.

Theory:

Kruskal’s Minimum Spanning Tree (MST) Algorithm

In Kruskal’s algorithm, sort all edges of the given graph in increasing order. Then keep
adding edges and nodes to the MST as long as the newly added edge does not form a cycle.
It picks the minimum weighted edge first and the maximum weighted edge last. Thus we
can say that it makes a locally optimal choice in each step in order to find the optimal solution.
Hence this is a Greedy Algorithm.
How to find MST using Kruskal’s algorithm?
Below are the steps for finding MST using Kruskal’s algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so
far. If the cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
Step 2 uses the Union-Find algorithm to detect cycles.
So we recommend reading the following post as a prerequisite.
• Union-Find Algorithm | Set 1 (Detect Cycle in a Graph)
• Union-Find Algorithm | Set 2 (Union By Rank and Path Compression)
Kruskal’s algorithm to find the minimum cost spanning tree uses the greedy approach. The
Greedy Choice is to pick the smallest weight edge that does not cause a cycle in the MST
constructed so far. Let us understand it with an example:

Illustration:

Below is the illustration of the above approach:


Input Graph:

The graph contains 9 vertices and 14 edges. So, the minimum spanning tree formed will be
having (9 – 1) = 8 edges.
After sorting:

Weight   Source   Destination
1        7        6
2        8        2
2        6        5
4        0        1
4        2        5
6        8        6
7        2        3
7        7        8
8        0        7
8        1        2
9        3        4
10       5        4
11       1        7
14       3        5

Now pick all edges one by one from the sorted list of edges
Step 1: Pick edge 7-6. No cycle is formed, include it.

Add edge 7-6 in the MST

Step 2: Pick edge 8-2. No cycle is formed, include it.

Add edge 8-2 in the MST

Step 3: Pick edge 6-5. No cycle is formed, include it.

Add edge 6-5 in the MST


Step 4: Pick edge 0-1. No cycle is formed, include it.

Add edge 0-1 in the MST

Step 5: Pick edge 2-5. No cycle is formed, include it.

Add edge 2-5 in the MST

Step 6: Pick edge 8-6. Since including this edge results in a cycle, discard it. Pick edge 2-3.
No cycle is formed, include it.

Add edge 2-3 in the MST


Step 7: Pick edge 7-8. Since including this edge results in a cycle, discard it. Pick edge 0-7.
No cycle is formed, include it.

Add edge 0-7 in MST

Step 8: Pick edge 1-2. Since including this edge results in a cycle, discard it. Pick edge 3-4.
No cycle is formed, include it.

Add edge 3-4 in the MST

Note: Since the number of edges included in the MST equals (V - 1), the algorithm
stops here.

Source Code:

// C code to implement Kruskal's algorithm

#include <stdio.h>
#include <stdlib.h>

// Comparator function to use in sorting
int comparator(const void* p1, const void* p2)
{
    const int(*x)[3] = p1;
    const int(*y)[3] = p2;

    return (*x)[2] - (*y)[2];
}

// Initialization of parent[] and rank[] arrays
void makeSet(int parent[], int rank[], int n)
{
    for (int i = 0; i < n; i++) {
        parent[i] = i;
        rank[i] = 0;
    }
}

// Function to find the parent of a node (with path compression)
int findParent(int parent[], int component)
{
    if (parent[component] == component)
        return component;

    return parent[component]
        = findParent(parent, parent[component]);
}

// Function to unite two sets (union by rank)
void unionSet(int u, int v, int parent[], int rank[], int n)
{
    // Finding the parents
    u = findParent(parent, u);
    v = findParent(parent, v);

    if (rank[u] < rank[v]) {
        parent[u] = v;
    }
    else if (rank[u] > rank[v]) {
        parent[v] = u;
    }
    else {
        parent[v] = u;

        // The rank increases only if the
        // ranks of the two sets are the same
        rank[u]++;
    }
}

// Function to find the MST
void kruskalAlgo(int n, int edge[n][3])
{
    // First we sort the edge array in ascending order
    // so that we can access minimum distances/cost
    qsort(edge, n, sizeof(edge[0]), comparator);

    int parent[n];
    int rank[n];

    // Initialize parent[] and rank[]
    makeSet(parent, rank, n);

    // To store the minimum cost
    int minCost = 0;

    printf(
        "Following are the edges in the constructed MST\n");
    for (int i = 0; i < n; i++) {
        int v1 = findParent(parent, edge[i][0]);
        int v2 = findParent(parent, edge[i][1]);
        int wt = edge[i][2];

        // If the parents are different, the endpoints
        // are in different sets, so union them
        if (v1 != v2) {
            unionSet(v1, v2, parent, rank, n);
            minCost += wt;
            printf("%d -- %d == %d\n", edge[i][0],
                   edge[i][1], wt);
        }
    }

    printf("Minimum Cost Spanning Tree: %d\n", minCost);
}

// Driver code
int main()
{
    int edge[5][3] = { { 0, 1, 10 },
                       { 0, 2, 6 },
                       { 0, 3, 5 },
                       { 1, 3, 15 },
                       { 2, 3, 4 } };

    kruskalAlgo(5, edge);

    return 0;
}
Output
Following are the edges in the constructed MST
2 -- 3 == 4
0 -- 3 == 5
0 -- 1 == 10
Minimum Cost Spanning Tree: 19

Time Complexity: O(E log E) or O(E log V)
• Sorting the edges takes O(E log E) time.
• After sorting, we iterate through all edges and apply the find-union algorithm.
The find and union operations can take at most O(log V) time.
• So the overall complexity is O(E log E + E log V) time.
• The value of E can be at most O(V^2), so O(log V) and O(log E) are the same.
Therefore, the overall time complexity is O(E log E) or O(E log V).
Auxiliary Space: O(V + E), where V is the number of vertices and E is the number of
edges in the graph.

Prim’s Algorithm for Minimum Spanning Tree (MST)

This algorithm always starts with a single node and moves through several adjacent nodes,
in order to explore all of the connected edges along the way.
The algorithm starts with an empty spanning tree. The idea is to maintain two sets of vertices.
The first set contains the vertices already included in the MST, and the other set contains the
vertices not yet included. At every step, it considers all the edges that connect the two sets
and picks the minimum weight edge from these edges. After picking the edge, it moves the
other endpoint of the edge to the set containing MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph theory. So,
at every step of Prim’s algorithm, find a cut, pick the minimum weight edge from the cut,
and include that edge's other endpoint in the MST set (the set that contains the already included vertices).

How does Prim’s Algorithm Work?

The working of Prim’s algorithm can be described by using the following steps:
Step 1: Determine an arbitrary vertex as the starting vertex of the MST.
Step 2: Follow steps 3 to 5 while there are vertices that are not included in the MST (known as
fringe vertices).
Step 3: Find edges connecting any tree vertex with the fringe vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any cycle.
Step 6: Return the MST and exit
Note: For determining a cycle, we can divide the vertices into two sets [one set contains the
vertices included in MST and the other contains the fringe vertices.]
Illustration of Prim’s Algorithm:
Consider the following graph as an example for which we need to find the Minimum Spanning
Tree (MST).

Step 1: Firstly, we select an arbitrary vertex that acts as the starting vertex of the Minimum
Spanning Tree. Here we have selected vertex 0 as the starting vertex.

0 is selected as starting vertex

Step 2: All the edges connecting the incomplete MST and other vertices are the edges {0, 1}
and {0, 7}. Between these two the edge with minimum weight is {0, 1}. So include the edge
and vertex 1 in the MST.

1 is added to the MST


Step 3: The edges connecting the incomplete MST to other vertices are {0, 7}, {1, 7} and
{1, 2}. Among these edges the minimum weight is 8 which is of the edges {0, 7} and {1, 2}.
Let us here include the edge {0, 7} and the vertex 7 in the MST. [We could have also
included edge {1, 2} and vertex 2 in the MST].

7 is added in the MST

Step 4: The edges that connect the incomplete MST with the fringe vertices are {1, 2}, {7,
6} and {7, 8}. Add the edge {7, 6} and the vertex 6 in the MST as it has the least weight
(i.e., 1).

6 is added in the MST

Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include edge {6, 5}
and vertex 5 in the MST as the edge has the minimum weight (i.e., 2) among them.

Include vertex 5 in the MST


Step 6: Among the current connecting edges, the edge {5, 2} has the minimum weight. So
include that edge and the vertex 2 in the MST.

Include vertex 2 in the MST

Step 7: The connecting edges between the incomplete MST and the other edges are {2, 8},
{2, 3}, {5, 3} and {5, 4}. The edge with minimum weight is edge {2, 8} which has weight 2.
So include this edge and the vertex 8 in the MST.

Add vertex 8 in the MST

Step 8: See here that the edges {7, 8} and {2, 3} both have the same minimum weight. But
vertices 7 and 8 are both already part of the MST, so edge {7, 8} would form a cycle. So we
will consider the edge {2, 3} and include that edge and vertex 3 in the MST.

Include vertex 3 in MST


Step 9: Only the vertex 4 remains to be included. The minimum weighted edge from the
incomplete MST to 4 is {3, 4}.

Include vertex 4 in the MST

The final structure of the MST is as follows and the weight of the edges of the MST is (4 + 8
+ 1 + 2 + 4 + 2 + 7 + 9) = 37.

The structure of the MST formed using the above method

Note: If we had selected the edge {1, 2} in the third step then the MST would look like the
following.

Structure of the alternate MST if we had selected edge {1, 2} in the MST
How to implement Prim’s Algorithm?

Follow the given steps to utilize the Prim’s Algorithm mentioned above for finding MST of
a graph:
• Create a set mstSet that keeps track of vertices already included in MST.
• Assign a key value to all vertices in the input graph. Initialize all key values as
INFINITE. Assign the key value as 0 for the first vertex so that it is picked first.
• While mstSet doesn’t include all vertices
• Pick a vertex u that is not there in mstSet and has a minimum key
value.
• Include u in the mstSet.
• Update the key value of all adjacent vertices of u. To update the key
values, iterate through all adjacent vertices.
• For every adjacent vertex v, if the weight of edge u-v is less than the
previous key value of v, update the key value as the weight of u-v.
The idea of using key values is to pick the minimum weight edge from the cut. The key values
are used only for vertices that are not yet included in the MST; the key value for these vertices
indicates the minimum weight edge connecting them to the set of vertices already included in the MST.

Source code:

// A C program for Prim's Minimum Spanning Tree (MST) algorithm.
// The program is for the adjacency matrix representation of the graph.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph
#define V 5

// A utility function to find the vertex with the minimum key value,
// from the set of vertices not yet included in the MST
int minKey(int key[], bool mstSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;

    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;

    return min_index;
}

// A utility function to print the constructed MST stored in parent[]
void printMST(int parent[], int graph[V][V])
{
    printf("Edge \tWeight\n");
    for (int i = 1; i < V; i++)
        printf("%d - %d \t%d \n", parent[i], i,
               graph[i][parent[i]]);
}

// Function to construct and print the MST for a graph represented
// using the adjacency matrix representation
void primMST(int graph[V][V])
{
    // Array to store the constructed MST
    int parent[V];
    // Key values used to pick the minimum weight edge in the cut
    int key[V];
    // To represent the set of vertices included in the MST
    bool mstSet[V];

    // Initialize all keys as INFINITE
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    // Always include the first vertex in the MST.
    // Make its key 0 so that it is picked first.
    key[0] = 0;

    // The first node is always the root of the MST
    parent[0] = -1;

    // The MST will have V vertices
    for (int count = 0; count < V - 1; count++) {

        // Pick the minimum key vertex from the set of
        // vertices not yet included in the MST
        int u = minKey(key, mstSet);

        // Add the picked vertex to the MST set
        mstSet[u] = true;

        // Update the key value and parent index of the adjacent
        // vertices of the picked vertex. Consider only those
        // vertices which are not yet included in the MST.
        for (int v = 0; v < V; v++)

            // graph[u][v] is non-zero only for vertices adjacent to u.
            // mstSet[v] is false for vertices not yet included in the MST.
            // Update the key only if graph[u][v] is smaller than key[v].
            if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }

    // Print the constructed MST
    printMST(parent, graph);
}

// Driver's code
int main()
{
    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };

    // Print the solution
    primMST(graph);

    return 0;
}

Output
Edge Weight
0-1 2
1-2 3
0-3 6
1-4 5

Time Complexity: O(V^2)

If the input graph is represented using an adjacency list, then the time complexity of Prim’s
algorithm can be reduced to O(E log V) with the help of a binary heap. In this
implementation, we always consider the spanning tree to start from the root of the
graph.
Auxiliary Space: O(V)

Result:
EXPERIMENT-5
AIM:

Develop a program and measure the Running Time for Estimating Single Source Shortest
Paths with Greedy Method.

Theory:

Find Shortest Paths from Source to all Vertices using Dijkstra’s Algorithm

Given a graph and a source vertex in the graph, find the shortest paths from the source to
all vertices in the given graph.
Examples:
Input: src = 0, the graph is shown below.

Output: 0 4 12 19 21 11 9 8 14
Explanation: The distance from 0 to 1 = 4.
The minimum distance from 0 to 2 = 12. 0->1->2
The minimum distance from 0 to 3 = 19. 0->1->2->3
The minimum distance from 0 to 4 = 21. 0->7->6->5->4
The minimum distance from 0 to 5 = 11. 0->7->6->5
The minimum distance from 0 to 6 = 9. 0->7->6
The minimum distance from 0 to 7 = 8. 0->7
The minimum distance from 0 to 8 = 14. 0->1->2->8

Dijkstra’s shortest path algorithm using Prim’s approach in O(V^2):


Dijkstra’s algorithm is very similar to Prim’s algorithm for the minimum spanning tree.
Like Prim’s MST, we generate an SPT (shortest path tree) with a given source as its root. We
maintain two sets: one set contains vertices included in the shortest-path tree, and the other
set includes vertices not yet included in it. At every step of the algorithm, we find a vertex
that is in the other set (not yet included) and has a minimum distance from the source.

Follow the steps below to solve the problem:


• Create a set sptSet (shortest path tree set) that keeps track of vertices included in
the shortest path tree, i.e., whose minimum distance from the source is calculated
and finalized. Initially, this set is empty.
• Assign a distance value to all vertices in the input graph. Initialize all distance
values as INFINITE. Assign the distance value as 0 for the source vertex so that
it is picked first.
• While sptSet doesn’t include all vertices
• Pick a vertex u that is not there in sptSet and has a minimum distance value.
• Include u to sptSet.
• Then update the distance value of all adjacent vertices of u.
• To update the distance values, iterate through all adjacent vertices.
• For every adjacent vertex v, if the sum of the distance value of u (from the source)
and the weight of edge u-v is less than the distance value of v, then update the
distance value of v.
Note: We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a
value sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is used
to store the shortest distance values of all vertices.
Below is the illustration of the above approach:
Illustration:
To understand the Dijkstra’s Algorithm lets take a graph and find the shortest path from
source to all nodes.
Consider below graph and src = 0

Step 1:
• The set sptSet is initially empty and the distances assigned to vertices are {0, INF,
INF, INF, INF, INF, INF, INF, INF}, where INF indicates infinite.
• Now pick the vertex with a minimum distance value. The vertex 0 is picked, include
it in sptSet. So sptSet becomes {0}. After including 0 to sptSet, update distance
values of its adjacent vertices.
• Adjacent vertices of 0 are 1 and 7. The distance values of 1 and 7 are updated as
4 and 8.
The following subgraph shows vertices and their distance values, only the vertices with finite
distance values are shown. The vertices included in SPT are shown in green colour.

Step 2:
• Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). The vertex 1 is picked and added to sptSet.
• So sptSet now becomes {0, 1}. Update the distance values of adjacent vertices of 1.
• The distance value of vertex 2 becomes 12.
Step 3:
• Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}.
• Update the distance values of adjacent vertices of 7. The distance value of vertex
6 and 8 becomes finite (15 and 9 respectively).

Step 4:
• Pick the vertex with minimum distance value and not already included in SPT
(not in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}.
• Update the distance values of adjacent vertices of 6. The distance value of vertex
5 and 8 are updated.

We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we
get the following Shortest Path Tree (SPT).

Source Code:

// C program for Dijkstra's single source shortest path


// algorithm. The program is for adjacency matrix
// representation of the graph

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph


#define V 9

// A utility function to find the vertex with minimum


// distance value, from the set of vertices not yet included
// in shortest path tree
int minDistance(int dist[], bool sptSet[])
{
// Initialize min value
int min = INT_MAX, min_index;

for (int v = 0; v < V; v++)


if (sptSet[v] == false && dist[v] <= min)
min = dist[v], min_index = v;

return min_index;
}

// A utility function to print the constructed distance array


void printSolution(int dist[])
{
printf("Vertex \t\t Distance from Source\n");
for (int i = 0; i < V; i++)
printf("%d \t\t\t\t %d\n", i, dist[i]);
}

// Function that implements Dijkstra's single source shortest path algorithm for a graph
// represented using adjacency matrix representation
void dijkstra(int graph[V][V], int src)
{
int dist[V]; // The output array. dist[i] will hold the shortest
// distance from src to i

bool sptSet[V]; // sptSet[i] will be true if vertex i is


// included in shortest path tree or shortest distance from src to i is finalized

// Initialize all distances as INFINITE and sptSet[] as false


for (int i = 0; i < V; i++)
dist[i] = INT_MAX, sptSet[i] = false;

// Distance of source vertex from itself is always 0


dist[src] = 0;

// Find shortest path for all vertices


for (int count = 0; count < V - 1; count++) {
// Pick the minimum distance vertex from the set of
// vertices not yet processed. u is always equal to
// src in the first iteration.
int u = minDistance(dist, sptSet);

// Mark the picked vertex as processed


sptSet[u] = true;

// Update dist value of the adjacent vertices of the picked vertex.


for (int v = 0; v < V; v++)
// Update dist[v] only if is not in sptSet, there is an edge from u to v, and total
// weight of path from src to v through u is smaller than current value of dist[v]
if (!sptSet[v] && graph[u][v]
&& dist[u] != INT_MAX
&& dist[u] + graph[u][v] < dist[v])
dist[v] = dist[u] + graph[u][v];
}

// print the constructed distance array


printSolution(dist);
}

// driver's code
int main()
{
/* Let us create the example graph discussed above */
int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };

// Function call
dijkstra(graph, 0);

return 0;
}

Output:

Vertex		Distance from Source
0			0
1			4
2			12
3			19
4			21
5			11
6			9
7			8
8			14

Time Complexity: O(V^2)


Auxiliary Space: O(V)

Result:
EXPERIMENT-6
AIM:
Develop a program and Measure the Running Time for Optimal Binary Search Trees with
Dynamic Programming.

Theory:
Optimal Binary Search Tree
An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search Tree,
is a binary search tree that minimizes the expected search cost. In a binary search tree, the
search cost is the number of comparisons required to search for a given key.
In an OBST, each node is assigned a weight that represents the probability of the key being
searched for. The sum of all the weights in the tree is 1.0. The expected search cost of a node
is the sum of the product of its depth and weight, and the expected search cost of its children.
To construct an OBST, we start with a sorted list of keys and their probabilities. We then
build a table that contains the expected search cost for all possible sub-trees of the original
list. We can use dynamic programming to fill in this table efficiently. Finally, we use this
table to construct the OBST.
The time complexity of constructing an OBST is O(n^3), where n is the number of keys.
However, with some optimizations, we can reduce the time complexity to O(n^2). Once the
OBST is constructed, the time complexity of searching for a key is O(log n), the same as for
a regular binary search tree.
The OBST is a useful data structure in applications where the keys have different
probabilities of being searched for. It can be used to improve the efficiency of searching and
retrieval operations in databases, compilers, and other computer programs.
Given a sorted array key [0.. n-1] of search keys and an array freq[0.. n-1] of frequency
counts, where freq[i] is the number of searches for keys[i]. Construct a binary search tree of
all keys such that the total cost of all the searches is as small as possible.
Let us first define the cost of a BST. The cost of a BST node is the level of that node
multiplied by its frequency. The level of the root is 1.
Examples:
Input: keys[] = {10, 12}, freq[] = {34, 50}
There can be following two possible BSTs
     10                 12
       \               /
        12           10
        I             II
Frequency of searches of 10 and 12 are 34 and 50 respectively.
The cost of tree I is 34*1 + 50*2 = 134
The cost of tree II is 50*1 + 34*2 = 118

Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
There can be following possible BSTs
   10        12         20         10          20
     \      /  \       /             \        /
     12   10    20   12               20    10
       \            /                /        \
        20        10               12          12
    I       II        III         IV          V
Among all possible BSTs, cost of the fifth BST is minimum.
Cost of the fifth BST is 1*50 + 2*34 + 3*8 = 142

1) Optimal Substructure:
The optimal cost for freq[i..j] can be recursively calculated using the following formula:

optCost(i, j) = sum(freq[i..j]) + min over r in [i..j] of { optCost(i, r-1) + optCost(r+1, j) }

We need to calculate optCost(0, n-1) to find the result.


The idea of the above formula is simple: we try each node as root one by one (r varies from i to j
in the second term). When we make the r-th node the root, we recursively calculate the optimal
cost from i to r-1 and from r+1 to j.
We add the sum of frequencies from i to j (see the first term in the above formula).
The reason for adding the sum of frequencies from i to j:
This can be divided into 2 parts: one is freq[r], and the other is the sum of the frequencies of all
elements from i to j except r. The term freq[r] is added because r is going to be the root, which
means level 1, so freq[r]*1 = freq[r]. We then add the frequencies of the remaining elements
because, when we take r as root, all the other elements go one level deeper than the level assumed
in the subproblem. To put it more clearly: for calculating optCost(i, j) we assume that r is taken
as root and compute the minimum of optCost(i, r-1) + optCost(r+1, j) for all i <= r <= j. For
every subproblem we choose one node as a root, but in reality the level of the subproblem's root
and all its descendant nodes is 1 greater than the level of the parent problem's root. Therefore the
frequency of every node except r must be added, which accounts for the descent in level compared
with the level assumed in the subproblem.

Source Code:

// A naive recursive implementation of optimal binary


// search tree problem
#include <stdio.h>
#include <limits.h>

// A utility function to get sum of array elements


// freq[i] to freq[j]
int sum(int freq[], int i, int j);

// A recursive function to calculate cost of optimal


// binary search tree
int optCost(int freq[], int i, int j)
{
// Base cases
if (j < i) // no elements in this subarray
return 0;
if (j == i) // one element in this subarray
return freq[i];

// Get sum of freq[i], freq[i+1], ... freq[j]


int fsum = sum(freq, i, j);

// Initialize minimum value


int min = INT_MAX;

// One by one consider all elements as root and


// recursively find cost of the BST, compare the
// cost with min and update min if needed
for (int r = i; r <= j; ++r)
{
int cost = optCost(freq, i, r-1) +
optCost(freq, r+1, j);
if (cost < min)
min = cost;
}

// Return minimum value


return min + fsum;
}

// The main function that calculates minimum cost of a Binary Search Tree. It mainly uses
// optCost() to find the optimal cost.

int optimalSearchTree(int keys[], int freq[], int n)


{
// Here array keys[] is assumed to be sorted in
// increasing order. If keys[] is not sorted, then
// add code to sort keys, and rearrange freq[]
// accordingly.
return optCost(freq, 0, n-1);
}

// A utility function to get sum of array elements freq[i] to freq[j]


int sum(int freq[], int i, int j)
{
int s = 0;
for (int k = i; k <=j; k++)
s += freq[k];
return s;
}

// Driver program to test above functions


int main()
{
int keys[] = {10, 12, 20};
int freq[] = {34, 8, 50};
int n = sizeof(keys)/sizeof(keys[0]);
printf("Cost of Optimal BST is %d ",
optimalSearchTree(keys, freq, n));
return 0;
}

Output
Cost of Optimal BST is 142

Result:
EXPERIMENT-7
AIM:
Develop a program and Measure the Running Time for Identifying Solution for Travelling
Salesperson Problem with Dynamic Programming.

Theory:
Travelling Salesman Problem using Dynamic Programming
Given a set of cities and the distance between every pair of cities, the problem is to find the
shortest possible route that visits every city exactly once and returns to the starting point.
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is
to find if there exists a tour that visits every city exactly once. Here we know that Hamiltonian
Tour exists (because the graph is complete) and in fact, many such tours exist, the problem
is to find a minimum weight Hamiltonian Cycle.

For example, consider the graph shown in the figure on the right side. A TSP tour in the graph
is 1-2-4-3-1. The cost of the tour is 10+25+30+15, which is 80. The problem is a famous NP-
hard problem. There is no known polynomial-time solution for this problem. The following
are different solutions for the travelling salesman problem.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! Permutations of cities.
3) Calculate the cost of every permutation and keep track of the minimum cost
permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)
Dynamic Programming:
Let the given set of vertices be {1, 2, 3, 4, …, n}. Let us consider 1 as the starting and ending
point of the output. For every other vertex i (other than 1), we find the minimum cost path with
1 as the starting point, i as the ending point, and all vertices appearing exactly once. Let the
cost of this path be cost(i); the cost of the corresponding cycle would then be cost(i) + dist(i, 1),
where dist(i, 1) is the distance from i to 1. Finally, we return the minimum of all [cost(i) +
dist(i, 1)] values. This looks simple so far.
Now the question is how to get cost(i)? To calculate the cost(i) using Dynamic Programming,
we need to have some recursive relation in terms of sub-problems.
Let us define a term C(S, i) be the cost of the minimum cost path visiting each vertex in set S
exactly once, starting at 1 and ending at i. We start with all subsets of size 2 and calculate
C(S, i) for all subsets where S is the subset, then we calculate C(S, i) for all subsets S of size
3 and so on. Note that 1 must be present in every subset.
If size of S is 2, then S must be {1, i},
C(S, i) = dist(1, i)
Else if size of S is greater than 2.
C(S, i) = min { C(S-{i}, j) + dist(j, i) }, where j belongs to S, j != i and j != 1.
Below is the dynamic programming solution for the problem using a top-down recursive +
memoized approach:
For maintaining the subsets we can use bitmasks to represent the remaining nodes in our
subset. Since bit operations are fast and there are only a few nodes in the graph, bitmasks are
a good fit.

For example:
10100 represents that nodes 2 and 4 are left in the set to be processed
010010 represents that nodes 1 and 4 are left in the subset
NOTE: ignore the 0th bit since our graph is 1-based.

Source Code:
#include <stdio.h>
int tsp_g[10][10] = {
{12, 30, 33, 10, 45},
{56, 22, 9, 15, 18},
{29, 13, 8, 5, 12},
{33, 28, 16, 10, 3},
{1, 4, 30, 24, 20}
};
int visited[10], n, cost = 0;

/* nearest-neighbour heuristic: from the current city, always move to the
   cheapest unvisited city (an approximation, not an exact solution) */
void travellingsalesman(int c){
int k, adj_vertex = 999;
int min = 999;

/* marking the vertices visited in an assigned array */


visited[c] = 1;

/* displaying the shortest path */


printf("%d ", c + 1);

/* checking the minimum cost edge in the graph */


for(k = 0; k < n; k++) {
if((tsp_g[c][k] != 0) && (visited[k] == 0)) {
if(tsp_g[c][k] < min) {
min = tsp_g[c][k];
adj_vertex = k; /* remember the nearest unvisited city */
}
}
}
if(min != 999) {
cost = cost + min;
}
if(adj_vertex == 999) {
adj_vertex = 0;
printf("%d", adj_vertex + 1);
cost = cost + tsp_g[c][adj_vertex];
return;
}
travellingsalesman(adj_vertex);
}

/* main function */
int main(){
int i;
n = 5;
for(i = 0; i < n; i++) {
visited[i] = 0;
}
printf("\n\nShortest Path:\t");
travellingsalesman(0);
printf("\n\nMinimum Cost: \t");
printf("%d\n", cost);
return 0;
}

Output:
Shortest Path: 1 4 5 2 3 1

Minimum Cost: 55

Time Complexity: O(n^2 * 2^n) for the bitmask DP described above, where O(n * 2^n) is the
maximum number of unique subproblems/states and O(n) is the transition cost per state (the
nearest-neighbour listing above runs in O(n^2)).

Auxiliary Space: O(n * 2^n), where n is the number of nodes/cities.

Result:
EXPERIMENT-8
AIM:
Develop a program and Measure the Running Time for Identifying Solution for 8-Queens
Problem with Backtracking.
Theory:
The eight queens problem is the problem of placing eight queens on an 8×8 chessboard such
that none of them attack one another (no two are in the same row, column, or diagonal). More
generally, the n queens problem places n queens on an n×n chessboard. There are different
solutions for the problem, such as backtracking and branch and bound. You can find detailed
solutions at http://en.literateprograms.org/Eight_queens_puzzle_(C)
Explanation:
• This pseudocode uses a backtracking algorithm to find a solution to the 8
Queen problem, which consists of placing 8 queens on a chessboard in such a way
that no two queens threaten each other.
• The algorithm starts by placing a queen on the first column, then it proceeds to
the next column and places a queen in the first safe row of that column.
• If the algorithm reaches the 8th column and all queens are placed in a safe
position, it prints the board and returns true.
• If the algorithm is unable to place a queen in a safe position in a certain column,
it backtracks to the previous column and tries a different row.
• The “isSafe” function checks if it is safe to place a queen on a certain row and
column by checking if there are any queens in the same row, diagonal or anti-
diagonal.
• It's worth noticing that this is just high-level pseudocode and it might need to
be adapted depending on the specific implementation and language you are using.

Source Code:
/* C program to solve N Queen Problem using backtracking */
#define N 4
#include <stdbool.h>
#include <stdio.h>
/* A utility function to print solution */
void printSolution(int board[N][N])
{
for (int i = 0; i < N; i++) {
for (int j = 0; j < N; j++)
printf(" %d ", board[i][j]);
printf("\n");
}
}
/* A utility function to check if a queen can be placed on board[row][col]. Note that this
function is called when "col" queens are already placed in columns from 0 to col -1. So we
need to check only left side for attacking queens */
bool isSafe(int board[N][N], int row, int col)
{
int i, j;
/* Check this row on left side */
for (i = 0; i < col; i++)
if (board[row][i])
return false;
/* Check upper diagonal on left side */
for (i = row, j = col; i >= 0 && j >= 0; i--, j--)
if (board[i][j])
return false;
/* Check lower diagonal on left side */
for (i = row, j = col; j >= 0 && i < N; i++, j--)
if (board[i][j])
return false;
return true;
}
/* A recursive utility function to solve N Queen problem */
bool solveNQUtil(int board[N][N], int col)
{
/* base case: If all queens are placed then return true */
if (col >= N)
return true;
/* Consider this column and try placing this queen in all rows one by one */
for (int i = 0; i < N; i++) {
/* Check if the queen can be placed on board[i][col] */
if (isSafe(board, i, col)) {
/* Place this queen in board[i][col] */
board[i][col] = 1;
/* recur to place rest of the queens */
if (solveNQUtil(board, col + 1))
return true;
/* If placing queen in board[i][col] doesn't lead to a solution, then remove queen from
board[i][col] */
board[i][col] = 0; // BACKTRACK
}
}
/* If the queen cannot be placed in any row in this column col then return false */
return false;
}
/* This function solves the N Queen problem using Backtracking. It mainly uses
solveNQUtil() to solve the problem. It returns false if queens cannot be placed, otherwise,
return true and prints placement of queens in the form of 1s. Please note that there may be
more than one solutions, this function prints one of the feasible solutions.*/
bool solveNQ()
{
int board[N][N] = { { 0, 0, 0, 0 },
{ 0, 0, 0, 0 },
{ 0, 0, 0, 0 },
{ 0, 0, 0, 0 } };
if (solveNQUtil(board, 0) == false) {
printf("Solution does not exist");
return false;
}
printSolution(board);
return true;
}
// driver program to test above function
int main()
{
solveNQ();
return 0;
}

Output
 0 0 1 0
 1 0 0 0
 0 0 0 1
 0 1 0 0

Time Complexity: O(N!)


Auxiliary Space: O(N^2)

Result:
EXPERIMENT-9
AIM:
Develop a program and Measure the Running Time for Graph Colouring with Backtracking.
Theory:
m Coloring Problem
Given an undirected graph and a number m, determine if the graph can be colored
with at most m colors such that no two adjacent vertices of the graph are colored with
the same color.
Note: Here coloring of a graph means the assignment of colors to all vertices.
Following is an example of a graph that can be colored with 3 different colors:

Examples:
Input: graph = {0, 1, 1, 1},
{1, 0, 1, 0},
{1, 1, 0, 1},
{1, 0, 1, 0}
Output: Solution Exists: Following are the assigned colors: 1 2 3 2
Explanation: By coloring the vertices with the following colors, adjacent vertices do
not have the same colors.
Input: graph = {1, 1, 1, 1},
{1, 1, 1, 1},
{1, 1, 1, 1},
{1, 1, 1, 1}
Output: Solution does not exist
Explanation: No solution exists

Naive Approach: To solve the problem follow the below idea:


Generate all possible configurations of colors. Since each node can be colored using
any of the m available colors, the total number of color configurations possible is m^V.
After generating a configuration of colors, check whether any adjacent vertices have the
same color or not. If the conditions are met, print the combination and break the loop.
Follow the given steps to solve the problem:
• Create a recursive function that takes the current index, number of vertices
and output color array.
• If the current index is equal to the number of vertices, check if the output color
configuration is safe, i.e. check that adjacent vertices do not have the same
color. If the conditions are met, print the configuration and break.
• Assign a color to a vertex (1 to m).
• For every assigned color recursively call the function with next index and
number of vertices.
• If any recursive function returns true break the loop and returns true.

Source Code:

// C program for the above approach
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph
#define V 4

void printSolution(int color[]);

// check if the colored graph is safe or not
bool isSafe(bool graph[V][V], int color[])
{
    // check for every edge
    for (int i = 0; i < V; i++)
        for (int j = i + 1; j < V; j++)
            if (graph[i][j] && color[j] == color[i])
                return false;
    return true;
}

/* This function solves the m Coloring problem using recursion. It returns false if the
m colours cannot be assigned, otherwise, return true and prints assignments of colours
to all vertices. Please note that there may be more than one solutions, this function
prints one of the feasible solutions.*/
bool graphColoring(bool graph[V][V], int m, int i, int color[V])
{
    // if current index reached end
    if (i == V) {
        // if coloring is safe
        if (isSafe(graph, color)) {
            // Print the solution
            printSolution(color);
            return true;
        }
        return false;
    }

    // Assign each color from 1 to m
    for (int j = 1; j <= m; j++) {
        color[i] = j;

        // Recur for the rest of the vertices
        if (graphColoring(graph, m, i + 1, color))
            return true;

        color[i] = 0;
    }
    return false;
}

/* A utility function to print solution */
void printSolution(int color[])
{
    printf("Solution Exists:"
           " Following are the assigned colors \n");
    for (int i = 0; i < V; i++)
        printf(" %d ", color[i]);
    printf("\n");
}

// Driver code
int main()
{
    /* Create the following graph and test whether it is 3-colorable
       (3)---(2)
        |   / |
        |  /  |
        | /   |
       (0)---(1)
    */
    bool graph[V][V] = {
        { 0, 1, 1, 1 },
        { 1, 0, 1, 0 },
        { 1, 1, 0, 1 },
        { 1, 0, 1, 0 },
    };
    int m = 3; // Number of colors

    // Initialize all color values as 0.
    // This initialization is needed for
    // correct functioning of isSafe()
    int color[V];
    for (int i = 0; i < V; i++)
        color[i] = 0;

    // Function call
    if (!graphColoring(graph, m, 0, color))
        printf("Solution does not exist");

    return 0;
}
Output
Solution Exists: Following are the assigned colors
1 2 3 2

Time Complexity: O(m^V). There are O(m^V) possible color configurations in total.

Auxiliary Space: O(V). The recursion stack of the graphColoring() function requires
O(V) space.

Result:
EXPERIMENT-10

AIM: Develop a program and measure the running time to generate solution of Hamiltonian
Cycle problem with Backtracking.
Theory:
Hamiltonian Cycle using Backtracking
The Hamiltonian cycle of an undirected graph G = (V, E) is a cycle containing each vertex
in V. If a graph contains a Hamiltonian cycle, it is called a Hamiltonian graph; otherwise it is
non-Hamiltonian.
Finding a Hamiltonian cycle in a graph is a well-known problem with many real-world
applications, such as in network routing and scheduling.
Hamiltonian Path in an undirected graph is a path that visits each vertex exactly once.
A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian Path such that there is an edge
(in the graph) from the last vertex to the first vertex of the Hamiltonian Path. Determine
whether a given graph contains Hamiltonian Cycle or not. If it contains, then prints the path.

Following are the input and output of the required function.


Input:
A 2D array graph[V][V] where V is the number of vertices in graph and graph[V][V] is
adjacency matrix representation of the graph. A value graph[i][j] is 1 if there is a direct
edge from i to j, otherwise graph[i][j] is 0.
Output:
An array path[V] that should contain the Hamiltonian Path. path[i] should represent the ith
vertex in the Hamiltonian Path. The code should also return false if there is no Hamiltonian
Cycle in the graph.
For example, a Hamiltonian Cycle in the following graph is {0, 1, 2, 4, 3, 0}.
(0)--(1)--(2)
 |   / \   |
 |  /   \  |
 | /     \ |
(3)-------(4)
And the following graph doesn’t contain any Hamiltonian Cycle.
(0)--(1)--(2)
 |   / \   |
 |  /   \  |
 | /     \ |
(3)       (4)
Naive Algorithm
Generate all possible configurations of vertices and print a configuration that satisfies the
given constraints. There will be n! (n factorial) configurations.
while there are untried configuration
{
generate the next configuration
if ( there are edges between two consecutive vertices of this
configuration and there is an edge from the last vertex to
the first ).
{
print this configuration;
break;
}
}
Backtracking Algorithm

Create an empty path array and add vertex 0 to it. Add other vertices, starting from the vertex
1. Before adding a vertex, check for whether it is adjacent to the previously added vertex and
not already added. If we find such a vertex, we add the vertex as part of the solution. If we
do not find a vertex then we return false.
Source code:
/* C program for solution of Hamiltonian Cycle problem using backtracking */
#include <stdio.h>
#include <stdbool.h>
// Number of vertices in the graph
#define V 5
void printSolution(int path[]);
/* A utility function to check if the vertex v can be added at index 'pos' in the Hamiltonian
Cycle constructed so far (stored in 'path[]') */
bool isSafe(int v, bool graph[V][V], int path[], int pos)
{
/* Check if this vertex is an adjacent vertex of the previously added vertex. */
if (graph [ path[pos-1] ][ v ] == 0)
return false;
/* Check if the vertex has already been included. This step can be optimized by creating an
array of size V */
for (int i = 0; i < pos; i++)
if (path[i] == v)
return false;
return true;
}
/* A recursive utility function to solve hamiltonian cycle problem */
bool hamCycleUtil(bool graph[V][V], int path[], int pos)
{
/* base case: If all vertices are included in Hamiltonian Cycle */
if (pos == V)
{
// And if there is an edge from the last included vertex to the first vertex
if ( graph[ path[pos-1] ][ path[0] ] == 1 )
return true;
else
return false;
}
// Try different vertices as a next candidate in Hamiltonian Cycle.
// We don't try for 0 as we included 0 as starting point in hamCycle()
for (int v = 1; v < V; v++)
{
/* Check if this vertex can be added to Hamiltonian Cycle */
if (isSafe(v, graph, path, pos))
{
path[pos] = v;
/* recur to construct rest of the path */
if (hamCycleUtil (graph, path, pos+1) == true)
return true;
/* If adding vertex v doesn't lead to a solution, then remove it */
path[pos] = -1;
}
}
/* If no vertex can be added to Hamiltonian Cycle constructed so far,
then return false */
return false;
}

/* This function solves the Hamiltonian Cycle problem using Backtracking. It mainly uses
hamCycleUtil() to solve the problem. It returns false if there is no Hamiltonian Cycle possible,
otherwise return true and prints the path. Please note that there may be more than one solutions,
this function prints one of the feasible solutions. */
bool hamCycle(bool graph[V][V])
{
int path[V];
for (int i = 0; i < V; i++)
path[i] = -1;

/* Let us put vertex 0 as the first vertex in the path. If there is a Hamiltonian Cycle, then the
path can be started from any point of the cycle as the graph is undirected */
path[0] = 0;
if ( hamCycleUtil(graph, path, 1) == false )
{
printf("\nSolution does not exist");
return false;
}
printSolution(path);
return true;
}
/* A utility function to print solution */
void printSolution(int path[])
{
printf ("Solution Exists:"
" Following is one Hamiltonian Cycle \n");
for (int i = 0; i < V; i++)
printf(" %d ", path[i]);
// Let us print the first vertex again to show the complete cycle
printf(" %d ", path[0]);
printf("\n");
}
// driver program to test above function
int main()
{
/* Let us create the following graph
(0)--(1)--(2)
 |   / \   |
 |  /   \  |
 | /     \ |
(3)-------(4) */
bool graph1[V][V] = {{0, 1, 0, 1, 0},
{1, 0, 1, 1, 1},
{0, 1, 0, 0, 1},
{1, 1, 0, 0, 1},
{0, 1, 1, 1, 0},
};
// Print the solution
hamCycle(graph1);
/* Let us create the following graph
(0)--(1)--(2)
 |   / \   |
 |  /   \  |
 | /     \ |
(3)       (4) */
bool graph2[V][V] = {{0, 1, 0, 1, 0},
{1, 0, 1, 1, 1},
{0, 1, 0, 0, 1},
{1, 1, 0, 0, 0},
{0, 1, 1, 0, 0},
};
// Print the solution
hamCycle(graph2);
return 0;
}
Output:
Solution Exists: Following is one Hamiltonian Cycle
0 1 2 4 3 0

Solution does not exist

Time Complexity : O(N!), where N is number of vertices.


Auxiliary Space: O(V), for the path[] array and the recursion stack.

Result:
EXPERIMENT-11
AIM:
Develop a program and Measure the Running Time to Generate Solution of Knapsack with
Backtracking.
Theory:
0/1 Knapsack Problem
We are given N items where each item has some weight and profit associated with it.
We are also given a bag with capacity W, [i.e., the bag can hold at most W weight in it]. The
target is to put the items into the bag such that the sum of profits associated with them is the
maximum possible.
Note: The constraint here is we can either put an item completely into the bag or cannot put
it at all [It is not possible to put a part of an item into the bag].
Examples:
Input: N = 3, W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}
Output: 3
Explanation: There are two items which have weight less than or equal to 4. If we select
the item with weight 4, the possible profit is 1. And if we select the item with weight 1, the
possible profit is 3. So the maximum possible profit is 3. Note that we cannot put both the
items with weight 4 and 1 together as the capacity of the bag is 4.
Input: N = 3, W = 3, profit[] = {1, 2, 3}, weight[] = {4, 5, 6}
Output: 0

0/1 Knapsack Problem using recursion:


To solve the problem follow the below idea:
A simple solution is to consider all subsets of items and calculate the total weight and profit
of each subset. Consider only the subsets whose total weight is smaller than or equal to W.
From all such subsets, pick the subset with maximum profit.
Optimal Substructure: To consider all subsets of items, there can be two cases for every
item.
• Case 1: The item is included in the optimal subset.
• Case 2: The item is not included in the optimal set.
Follow the below steps to solve the problem:
The maximum value obtained from ‘N’ items is the max of the following two values.
• Maximum value obtained by N-1 items and W weight (excluding n th item)
• Value of nth item plus maximum value obtained by N-1 items and (W – weight of
the Nth item) [including Nth item].
• If the weight of the ‘Nth‘ item is greater than ‘W’, then the Nth item cannot be
included and Case 1 is the only possibility.

Source code:
/* A Naive recursive implementation of 0-1 Knapsack problem */
#include <stdio.h>
// A utility function that returns maximum of two integers
int max(int a, int b) { return (a > b) ? a : b; }
// Returns the maximum value that can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
// Base Case
if (n == 0 || W == 0)
return 0;
// If weight of the nth item is more than Knapsack capacity W, then this item cannot be
// included in the optimal solution
if (wt[n - 1] > W)
return knapSack(W, wt, val, n - 1);
// Return the maximum of two cases: // (1) nth item included // (2) not included
else
return max(
val[n - 1]
+ knapSack(W - wt[n - 1], wt, val, n - 1),
knapSack(W, wt, val, n - 1));
}
// Driver code
int main()
{
int profit[] = { 60, 100, 120 };
int weight[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(profit) / sizeof(profit[0]);
printf("%d", knapSack(W, weight, profit, n));
return 0;
}
Output
220
Time Complexity: O(2^N)
Auxiliary Space: O(N), Stack space required for recursion
Result:
