
Mod 3

The document outlines the Greedy Paradigm and Dynamic Programming Paradigm, detailing their strategies, characteristics, and applications in various algorithms such as the Coin Change Problem, Minimum Spanning Trees (Kruskal’s and Prim’s algorithms), and the Knapsack Problem. It also discusses the Union-Find algorithm and its applications, as well as the relationship between Dijkstra’s and Prim’s algorithms. Example C code is provided for each algorithm to illustrate their implementation.

Uploaded by

mishraroxx7979

Greedy Paradigm

1. The Basic Greedy Strategy


Definition: The greedy strategy builds a solution through a sequence of choices, each of
which looks best at the moment, in the hope of reaching a global optimum.
Characteristics:
 Locally optimal choice: The algorithm makes the optimal choice at each step.
 Irrevocability: Once a choice is made, it cannot be changed.
 Feasibility: The choice made must be feasible.
Example Problem: Coin Change Problem
 Goal: To make change for a certain amount using the fewest number of coins.
 Approach: At each step, select the largest coin denomination that is not greater than
the remaining amount. (This greedy choice is optimal for canonical coin systems such as
{25, 10, 5, 1}, but it can fail for arbitrary denominations.)
Example Code (C):
#include <stdio.h>

void coinChange(int coins[], int n, int amount) {
printf("Coins used: ");
for (int i = 0; i < n; i++) {
while (amount >= coins[i]) {
amount -= coins[i];
printf("%d ", coins[i]);
}
}
printf("\n");
}

int main() {
int coins[] = {25, 10, 5, 1};
int amount = 63;
coinChange(coins, 4, amount);
return 0;
}
2. Computing Minimum Spanning Trees
A Minimum Spanning Tree (MST) of a graph is a subset of edges that connects all vertices
with the minimum possible total edge weight.
Algorithms:
1. Kruskal’s Algorithm
2. Prim’s Algorithm
Kruskal’s Algorithm:
 Sort all the edges in non-decreasing order of their weight.
 Pick the smallest edge. Check if it forms a cycle with the spanning-tree formed so far.
If a cycle is not formed, include this edge. Else, discard it.
 Repeat until there are (V-1) edges in the spanning tree.
Example Code (C):
#include <stdio.h>
#include <stdlib.h>

#define MAX 100

typedef struct {
int u, v, w;
} Edge;

Edge edges[MAX];
int parent[MAX], rank[MAX];
int n, e;

int find(int i) {
if (parent[i] != i)
parent[i] = find(parent[i]);
return parent[i];
}

void union_set(int u, int v) {
int rootU = find(u);
int rootV = find(v);
if (rank[rootU] > rank[rootV])
parent[rootV] = rootU;
else if (rank[rootU] < rank[rootV])
parent[rootU] = rootV;
else {
parent[rootV] = rootU;
rank[rootU]++;
}
}

// Comparator for qsort: order edges by non-decreasing weight
int compareEdges(const void *a, const void *b) {
    return ((const Edge*)a)->w - ((const Edge*)b)->w;
}

void kruskal() {
    int mst_wt = 0;
    // Sort edges by weight before the greedy scan
    qsort(edges, e, sizeof(Edge), compareEdges);
    for (int i = 0; i < e; i++) {
        int u = edges[i].u;
        int v = edges[i].v;
        int w = edges[i].w;
        if (find(u) != find(v)) {
            printf("%d - %d\n", u, v);
            mst_wt += w;
            union_set(u, v);
        }
    }
    printf("Minimum Spanning Tree weight: %d\n", mst_wt);
}

int main() {
    printf("Enter the number of vertices and edges: ");
    scanf("%d %d", &n, &e);
    for (int i = 0; i < e; i++) {
        printf("Enter edge %d (u, v, w): ", i+1);
        scanf("%d %d %d", &edges[i].u, &edges[i].v, &edges[i].w);
    }

    for (int i = 0; i < n; i++) {
        parent[i] = i;
        rank[i] = 0;
    }

    kruskal();
    return 0;
}
Prim’s Algorithm:
 Initialize a tree with a single vertex, chosen arbitrarily from the graph.
 Grow the tree by one edge: of the edges that connect the tree to vertices not yet in the
tree, find the minimum-weight edge, and transfer it to the tree.
 Repeat until all vertices are in the tree.
Example Code (C):
#include <stdio.h>
#include <limits.h>

#define V 5

int minKey(int key[], int mstSet[]) {
int min = INT_MAX, min_index = -1;
for (int v = 0; v < V; v++)
if (mstSet[v] == 0 && key[v] < min)
min = key[v], min_index = v;
return min_index;
}
void printMST(int parent[], int graph[V][V]) {
printf("Edge \tWeight\n");
for (int i = 1; i < V; i++)
printf("%d - %d \t%d \n", parent[i], i, graph[i][parent[i]]);
}

void primMST(int graph[V][V]) {
int parent[V];
int key[V];
int mstSet[V];

for (int i = 0; i < V; i++)
key[i] = INT_MAX, mstSet[i] = 0;

key[0] = 0;
parent[0] = -1;

for (int count = 0; count < V-1; count++) {
int u = minKey(key, mstSet);
mstSet[u] = 1;

for (int v = 0; v < V; v++)
if (graph[u][v] && mstSet[v] == 0 && graph[u][v] < key[v])
parent[v] = u, key[v] = graph[u][v];
}

printMST(parent, graph);
}

int main() {
int graph[V][V] = { {0, 2, 0, 6, 0},
{2, 0, 3, 8, 5},
{0, 3, 0, 0, 7},
{6, 8, 0, 0, 9},
{0, 5, 7, 9, 0} };

primMST(graph);
return 0;
}
3. Union-Find Algorithm & Its Applications
The Union-Find algorithm, also known as the Disjoint-Set Union (DSU) algorithm, is a data
structure that keeps track of a partition of a set into disjoint subsets.
Operations:
1. Find: Determine which subset a particular element is in.
2. Union: Join two subsets into a single subset.
Applications:
 Finding cycles in a graph.
 Network connectivity.
 Kruskal’s algorithm for finding the MST.
Example Code (C):
#include <stdio.h>

#define MAX 100

int parent[MAX], rank[MAX];

void make_set(int n) {
for (int i = 0; i < n; i++) {
parent[i] = i;
rank[i] = 0;
}
}

int find(int i) {
if (parent[i] != i)
parent[i] = find(parent[i]);
return parent[i];
}
void union_set(int u, int v) {
int rootU = find(u);
int rootV = find(v);
if (rank[rootU] > rank[rootV])
parent[rootV] = rootU;
else if (rank[rootU] < rank[rootV])
parent[rootU] = rootV;
else {
parent[rootV] = rootU;
rank[rootU]++;
}
}

int main() {
int n = 5; // Number of elements
make_set(n);

union_set(0, 2);
union_set(4, 2);
union_set(3, 1);

if (find(4) == find(0))
printf("4 and 0 are in the same set\n");
else
printf("4 and 0 are in different sets\n");

if (find(1) == find(0))
printf("1 and 0 are in the same set\n");
else
printf("1 and 0 are in different sets\n");

return 0;
}
4. The Relationship between Dijkstra’s and Prim’s Algorithms
 Dijkstra’s Algorithm: Used for finding the shortest path from a single source to all
other vertices in a graph.
 Prim’s Algorithm: Used for finding the Minimum Spanning Tree of a graph.
Similarity: Both algorithms use a priority queue to select the next vertex to be processed.
Difference:
 Dijkstra’s algorithm maintains a set of vertices with known shortest path distances,
while Prim’s algorithm maintains a set of vertices that are included in the MST.
 In Dijkstra’s, the priority queue is based on shortest path estimates, while in Prim’s, it
is based on edge weights.
5. Use of Greedy Strategy in Algorithms for the Knapsack Problem and
Huffman Trees
Knapsack Problem (Fractional):
 Items can be divided.
 Sort items by value-to-weight ratio.
 Add items to the knapsack starting with the highest ratio until the capacity is reached.
Example Code (C):
#include <stdio.h>

typedef struct {
int weight, value;
} Item;

void fractionalKnapsack(Item items[], int n, int capacity) {
// Assumes items are already sorted in decreasing value-to-weight ratio
float totalValue = 0;
int i;
for (i = 0; i < n; i++) {
if (items[i].weight <= capacity) {
capacity -= items[i].weight;
totalValue += items[i].value;
} else {
totalValue += items[i].value * ((float)capacity / items[i].weight);
break;
}
}
printf("Total value: %f\n", totalValue);
}

int main() {
Item items[] = {{10, 60}, {20, 100}, {30, 120}};
int capacity = 50;
int n = sizeof(items) / sizeof(items[0]);

fractionalKnapsack(items, n, capacity);
return 0;
}
Huffman Trees:
 Used for data compression.
 Build a frequency table of characters.
 Create a priority queue of nodes.
 While there is more than one node in the queue:
o Extract two nodes with the lowest frequency.
o Create a new internal node with these two nodes as children and the sum of
their frequencies as the new frequency.
o Insert the new node into the priority queue.
 The remaining node is the root of the Huffman tree.
Example Code (C):
#include <stdio.h>
#include <stdlib.h>

#define MAX_TREE_HT 100

typedef struct MinHeapNode {
char data;
unsigned freq;
struct MinHeapNode *left, *right;
} MinHeapNode;

typedef struct {
unsigned size;
unsigned capacity;
MinHeapNode** array;
} MinHeap;

MinHeapNode* newNode(char data, unsigned freq) {
MinHeapNode* temp = (MinHeapNode*)malloc(sizeof(MinHeapNode));
temp->data = data;
temp->freq = freq;
temp->left = temp->right = NULL;
return temp;
}

MinHeap* createMinHeap(unsigned capacity) {
MinHeap* minHeap = (MinHeap*)malloc(sizeof(MinHeap));
minHeap->size = 0;
minHeap->capacity = capacity;
minHeap->array = (MinHeapNode**)malloc(minHeap->capacity *
sizeof(MinHeapNode*));
return minHeap;
}

void swapMinHeapNode(MinHeapNode** a, MinHeapNode** b) {
MinHeapNode* t = *a;
*a = *b;
*b = t;
}

void minHeapify(MinHeap* minHeap, int idx) {
int smallest = idx;
int left = 2 * idx + 1;
int right = 2 * idx + 2;

if (left < minHeap->size && minHeap->array[left]->freq < minHeap->array[smallest]->freq)
smallest = left;

if (right < minHeap->size && minHeap->array[right]->freq < minHeap->array[smallest]->freq)
smallest = right;

if (smallest != idx) {
swapMinHeapNode(&minHeap->array[smallest], &minHeap->array[idx]);
minHeapify(minHeap, smallest);
}
}

int isSizeOne(MinHeap* minHeap) {
return (minHeap->size == 1);
}

MinHeapNode* extractMin(MinHeap* minHeap) {
MinHeapNode* temp = minHeap->array[0];
minHeap->array[0] = minHeap->array[minHeap->size - 1];
--minHeap->size;
minHeapify(minHeap, 0);
return temp;
}

void insertMinHeap(MinHeap* minHeap, MinHeapNode* minHeapNode) {
++minHeap->size;
int i = minHeap->size - 1;

while (i && minHeapNode->freq < minHeap->array[(i - 1) / 2]->freq) {
minHeap->array[i] = minHeap->array[(i - 1) / 2];
i = (i - 1) / 2;
}
minHeap->array[i] = minHeapNode;
}

void buildMinHeap(MinHeap* minHeap) {
int n = minHeap->size - 1;
int i;
for (i = (n - 1) / 2; i >= 0; --i)
minHeapify(minHeap, i);
}

void printArr(int arr[], int n) {
for (int i = 0; i < n; ++i)
printf("%d", arr[i]);
printf("\n");
}

MinHeap* createAndBuildMinHeap(char data[], int freq[], int size) {
MinHeap* minHeap = createMinHeap(size);

for (int i = 0; i < size; ++i)
minHeap->array[i] = newNode(data[i], freq[i]);

minHeap->size = size;
buildMinHeap(minHeap);
return minHeap;
}

MinHeapNode* buildHuffmanTree(char data[], int freq[], int size) {
MinHeapNode *left, *right, *top;
MinHeap* minHeap = createAndBuildMinHeap(data, freq, size);

while (!isSizeOne(minHeap)) {
left = extractMin(minHeap);
right = extractMin(minHeap);
top = newNode('$', left->freq + right->freq);
top->left = left;
top->right = right;
insertMinHeap(minHeap, top);
}
return extractMin(minHeap);
}

void printCodes(MinHeapNode* root, int arr[], int top) {
if (root->left) {
arr[top] = 0;
printCodes(root->left, arr, top + 1);
}
if (root->right) {
arr[top] = 1;
printCodes(root->right, arr, top + 1);
}
if (!(root->left) && !(root->right)) {
printf("%c: ", root->data);
printArr(arr, top);
}
}

void HuffmanCodes(char data[], int freq[], int size) {
MinHeapNode* root = buildHuffmanTree(data, freq, size);
int arr[MAX_TREE_HT], top = 0;
printCodes(root, arr, top);
}

int main() {
char arr[] = {'a', 'b', 'c', 'd', 'e', 'f'};
int freq[] = {5, 9, 12, 13, 16, 45};
int size = sizeof(arr) / sizeof(arr[0]);

HuffmanCodes(arr, freq, size);


return 0;
}
Dynamic Programming Paradigm
1. Basic Dynamic Programming Paradigm
Definition: Dynamic Programming (DP) is an algorithmic technique for solving optimization
problems by breaking them down into simpler subproblems and storing the results of
subproblems to avoid redundant computations.
Characteristics:
 Overlapping subproblems: The problem can be broken down into subproblems which
are reused several times.
 Optimal substructure: The optimal solution to the problem can be constructed from
the optimal solutions of its subproblems.
2. Dynamic Programming Solution to the Optimal Matrix Chain
Multiplication
Problem: Given a sequence of matrices, find the most efficient way to multiply these
matrices together.
Approach:
 Use a table to store the results of subproblems.
 Use the table to construct the solution to the original problem.
Example Code (C):
#include <stdio.h>
#include <limits.h>

int MatrixChainOrder(int p[], int n) {
int m[n][n];
int i, j, k, L, q;

for (i = 1; i < n; i++)
m[i][i] = 0;

for (L = 2; L < n; L++) {
for (i = 1; i < n - L + 1; i++) {
j = i + L - 1;
m[i][j] = INT_MAX;
for (k = i; k <= j - 1; k++) {
q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
if (q < m[i][j])
m[i][j] = q;
}
}
}
return m[1][n - 1];
}

int main() {
int arr[] = {1, 2, 3, 4};
int size = sizeof(arr) / sizeof(arr[0]);

printf("Minimum number of multiplications is %d\n",
MatrixChainOrder(arr, size));
return 0;
}
3. Dynamic Programming Solution to the Longest Common Subsequence Problem
Problem: Given two sequences, find the length of their longest common subsequence (LCS).
Approach:
 Create a table to store the lengths of longest common subsequences of substrings.
 Fill the table in a bottom-up manner.
Example Code (C):
#include <stdio.h>
#include <string.h>

int max(int a, int b) {
return (a > b) ? a : b;
}

int lcs(char *X, char *Y, int m, int n) {
int L[m + 1][n + 1];
for (int i = 0; i <= m; i++) {
for (int j = 0; j <= n; j++) {
if (i == 0 || j == 0)
L[i][j] = 0;
else if (X[i - 1] == Y[j - 1])
L[i][j] = L[i - 1][j - 1] + 1;
else
L[i][j] = max(L[i - 1][j], L[i][j - 1]);
}
}
return L[m][n];
}

int main() {
char X[] = "AGGTAB";
char Y[] = "GXTXAYB";
int m = strlen(X);
int n = strlen(Y);

printf("Length of LCS is %d\n", lcs(X, Y, m, n));


return 0;
}
These notes cover the basics and essential examples of the Greedy Paradigm and Dynamic
Programming, providing a strong foundation for your semester exam preparation.
Greedy Paradigm

Basic Greedy Strategy

The greedy paradigm involves making locally optimal choices at each stage with the hope of
finding a global optimum solution.

Example: Activity Selection Problem

 Given a set of activities with start and finish times, select the maximum number of
activities that don’t overlap.
 Greedy Choice: Select the activity that finishes first.

#include <stdio.h>
#include <stdlib.h>

// Structure for storing activity details
struct Activity {
int start, finish;
};

// Function to compare activities based on finish time
int compare(const void *a, const void *b) {
return (((struct Activity*)a)->finish - ((struct Activity*)b)->finish);
}

// Function to print selected activities
void printMaxActivities(struct Activity activities[], int n) {
// Sort activities based on finish time
qsort(activities, n, sizeof(activities[0]), compare);

// Select the first activity
printf("Selected activities:\n");
int i = 0;
printf("(%d, %d) ", activities[i].start, activities[i].finish);

// Consider rest of the activities
for (int j = 1; j < n; j++) {
if (activities[j].start >= activities[i].finish) {
printf("(%d, %d) ", activities[j].start, activities[j].finish);
i = j;
}
}
}

// Example usage
int main() {
struct Activity activities[] = {{1, 2}, {3, 4}, {0, 6}, {5, 7}, {8, 9},
{5, 9}};
int n = sizeof(activities) / sizeof(activities[0]);
printMaxActivities(activities, n);
return 0;
}

Output:

Selected activities:
(1, 2) (3, 4) (5, 7) (8, 9)

Computing Minimum Spanning Trees (MST)


Kruskal's Algorithm:

 Sort all edges in increasing order of their weight.
 Add the smallest edge to the MST if it doesn't form a cycle.

Example:

 Consider a graph with edges and weights, find MST using Kruskal's algorithm.

// Example of Kruskal's algorithm


// Assume graph representation and Union-Find implementation

Prim's Algorithm:

 Start with an arbitrary vertex, grow the MST by adding the cheapest edge that
connects the MST to a vertex not yet in the MST.

// Example of Prim's algorithm


// Assume graph representation and priority queue implementation

Union-Find Algorithm (Disjoint Set)

Union-Find Data Structure

Union-Find (Disjoint Set) is used to efficiently manage a partition of a set into disjoint
subsets.

Operations:

 Union: Merge two subsets.
 Find: Determine which subset a particular element is in.

Example:

 Implementing Union-Find to detect cycle in a graph.

// Example of Union-Find implementation

Relationship between Dijkstra's and Prim's Algorithms

Both algorithms use a greedy approach but differ in their application:

 Dijkstra's Algorithm: Finds the shortest path from a single source vertex to all other
vertices.
 Prim's Algorithm: Constructs a minimum spanning tree from a weighted undirected
graph.

Use of Greedy Strategy in Algorithms

Knapsack Problem

Greedy algorithms can be used for fractional knapsack where items can be divided:

// Example of fractional knapsack

Huffman Trees
Constructing Huffman trees uses a greedy algorithm to build an optimal prefix tree for data
compression.

// Example of Huffman coding

Dynamic Programming Paradigm

Basic Dynamic Programming Strategy

Dynamic programming solves problems by breaking them down into overlapping
subproblems and storing results to avoid redundant calculations.

Optimal Matrix Chain Multiplication

Example:

 Find the optimal way to multiply a chain of matrices to minimize multiplications.

// Example of optimal matrix chain multiplication

Longest Common Subsequence (LCS)

Example:

 Find the longest subsequence common to two given sequences.

// Example of longest common subsequence

These examples and explanations cover the essentials of the greedy paradigm and dynamic
programming for exam preparation.

Greedy Paradigm

Basic Greedy Strategy

 Definition: The greedy algorithm always makes the locally optimal choice at each
step with the hope of finding a global optimum.
 Example: Greedy algorithm for the Minimum Spanning Tree (MST).

Computing Minimum Spanning Trees (MST)

 Definition: MST of a connected, undirected graph is a subset of the edges that
connects all the vertices together without any cycles and with the minimum possible
total edge weight.
 Algorithms: Prim's and Kruskal's algorithms.
o Prim's Algorithm:

// Example code for Prim's algorithm
// Assume graph is represented as adjacency matrix `graph[][]`,
// with the minKey() and printMST() helpers as defined earlier
void primMST(int graph[V][V]) {
    int parent[V]; // Array to store constructed MST
    int key[V];    // Key values used to pick minimum weight edge in cut
    int mstSet[V]; // To represent set of vertices included in MST

    // Initialize all keys as INFINITE
    for (int i = 0; i < V; i++) {
        key[i] = INT_MAX, mstSet[i] = 0;
    }

    // Always include the first vertex in MST
    key[0] = 0;     // Make key 0 so that this vertex is picked first
    parent[0] = -1; // First node is always root of MST

    // The MST will have V vertices
    for (int count = 0; count < V-1; count++) {
        // Pick the minimum key vertex from the set of vertices
        // not yet included in MST
        int u = minKey(key, mstSet);

        // Add the picked vertex to the MST set
        mstSet[u] = 1;

        // Update key value and parent index of the adjacent vertices
        // of the picked vertex; consider only vertices not yet in MST
        for (int v = 0; v < V; v++) {
            // graph[u][v] is non-zero only for vertices adjacent to u;
            // update the key only if graph[u][v] is smaller than key[v]
            if (graph[u][v] && mstSet[v] == 0 && graph[u][v] < key[v]) {
                parent[v] = u, key[v] = graph[u][v];
            }
        }
    }
    // Print the constructed MST
    printMST(parent, graph);
}

o Kruskal's Algorithm: Uses a disjoint set data structure to detect cycles


efficiently.

Union-Find Algorithm (Disjoint Set)

 Definition: Data structure to efficiently support the union and find operations.
 Applications: Used in Kruskal’s algorithm to detect cycles.

Relationship between Dijkstra’s and Prim’s Algorithms

 Both algorithms use a priority queue (or heap) to greedily select the next vertex to add
to the MST or shortest path.

Use of Greedy Strategy in Algorithms for Knapsack Problem and Huffman Trees

 Knapsack Problem: The greedy method can be used for fractional knapsack where
items can be broken down into fractions.
 Huffman Trees: Greedy algorithm constructs an optimal prefix code (variable-length
codes) given a set of frequencies.

Dynamic Programming

Basic Dynamic Programming Paradigm

 Definition: Method for solving complex problems by breaking them down into
simpler subproblems.
 Properties: Overlapping subproblems and optimal substructure.

Dynamic Programming Solution to Optimal Matrix Chain Multiplication

 Problem: Given a sequence of matrices, find the most efficient way to multiply these
matrices together.
 Approach: Dynamic programming approach computes the minimum number of
scalar multiplications needed to compute the product of matrices.

Dynamic Programming Solution to Longest Common Subsequence (LCS) Problem

 Problem: Given two sequences, find the length of the longest subsequence present in
both of them.
 Approach: Dynamic programming approach computes the length of LCS and
optionally constructs the LCS itself.

For both paradigms, understanding the underlying principles and applying them through
practice with examples and coding exercises will reinforce your understanding. Make sure to
practice writing and understanding the algorithms thoroughly to excel in your exam.

Greedy Paradigm

Basic Greedy Strategy

The greedy strategy involves making locally optimal choices at each stage with the hope of
finding a global optimum. It does not always guarantee an optimal solution but is often
efficient and easy to implement.

Example: Activity Selection Problem

 Problem: Given a set of activities with start and finish times, select the maximum
number of activities that can be performed by a single person, assuming that a person
can only work on one activity at a time.

def select_activities(start, finish):
    # Assumes activities are sorted by non-decreasing finish time
    n = len(start)
activities = []
i = 0
activities.append(i)

    for j in range(1, n):
if start[j] >= finish[i]:
activities.append(j)
i = j

return activities

# Example usage:
start_times = [1, 3, 0, 5, 8, 5]
finish_times = [2, 4, 6, 7, 9, 9]
selected_activities = select_activities(start_times, finish_times)
print("Selected activities:", selected_activities) # Output: [0, 1, 3, 4]

Computing Minimum Spanning Trees (MST)

Minimum Spanning Tree is a subset of edges of a connected, edge-weighted graph that
connects all vertices together, without any cycles and with the minimum possible total edge
weight.

Kruskal's Algorithm
 Sort all the edges in non-decreasing order of their weight.
 Pick the smallest edge that does not form a cycle with the spanning tree formed so far.
 Repeat until there are V − 1 edges in the spanning tree.

# Example of Kruskal's algorithm using Union-Find (Disjoint Set)


class UnionFind:
def __init__(self, n):
self.parent = list(range(n))
self.rank = [1] * n

def find(self, u):


if self.parent[u] != u:
self.parent[u] = self.find(self.parent[u])
return self.parent[u]

def union(self, u, v):


root_u = self.find(u)
root_v = self.find(v)

if root_u != root_v:
if self.rank[root_u] > self.rank[root_v]:
self.parent[root_v] = root_u
elif self.rank[root_u] < self.rank[root_v]:
self.parent[root_u] = root_v
else:
self.parent[root_v] = root_u
self.rank[root_u] += 1

def kruskal_mst(graph):
edges = []
for u in range(len(graph)):
for v in range(u + 1, len(graph)):
if graph[u][v] != float('inf'):
edges.append((graph[u][v], u, v))

edges.sort()
uf = UnionFind(len(graph))
mst = []

for weight, u, v in edges:


if uf.find(u) != uf.find(v):
uf.union(u, v)
mst.append((u, v, weight))
if len(mst) == len(graph) - 1:
break

return mst

# Example usage:
graph = [
[float('inf'), 2, float('inf'), 6, float('inf')],
[2, float('inf'), 3, 8, 5],
[float('inf'), 3, float('inf'), float('inf'), 7],
[6, 8, float('inf'), float('inf'), 9],
[float('inf'), 5, 7, 9, float('inf')]
]

mst = kruskal_mst(graph)
print("Minimum Spanning Tree (Kruskal's Algorithm):", mst)

Prim's Algorithm

 Start with an arbitrary vertex.


 Grow the tree by adding the cheapest edge that connects the tree to a vertex not yet in
the tree.
 Repeat until all vertices are included in the tree.

# Example of Prim's algorithm using a priority queue


import heapq

def prim_mst(graph):
n = len(graph)
visited = [False] * n
mst = []
heap = []

    heapq.heappush(heap, (0, 0, None))  # (weight, current_vertex, parent_vertex)

while heap:
weight, u, parent = heapq.heappop(heap)

if visited[u]:
continue

visited[u] = True
if parent is not None:
mst.append((parent, u, weight))

for v in range(n):
if not visited[v] and graph[u][v] != float('inf'):
heapq.heappush(heap, (graph[u][v], v, u))

return mst

# Example usage:
graph = [
[float('inf'), 2, float('inf'), 6, float('inf')],
[2, float('inf'), 3, 8, 5],
[float('inf'), 3, float('inf'), float('inf'), 7],
[6, 8, float('inf'), float('inf'), 9],
[float('inf'), 5, 7, 9, float('inf')]
]

mst = prim_mst(graph)
print("Minimum Spanning Tree (Prim's Algorithm):", mst)

Union-Find Algorithm (Disjoint Set)

Union-Find is a data structure that keeps track of a set of elements partitioned into a number
of disjoint subsets.

# Implementation of Union-Find (Disjoint Set)


class UnionFind:
def __init__(self, n):
self.parent = list(range(n))
self.rank = [1] * n

def find(self, u):


if self.parent[u] != u:
self.parent[u] = self.find(self.parent[u])
return self.parent[u]

def union(self, u, v):


root_u = self.find(u)
root_v = self.find(v)

if root_u != root_v:
if self.rank[root_u] > self.rank[root_v]:
self.parent[root_v] = root_u
elif self.rank[root_u] < self.rank[root_v]:
self.parent[root_u] = root_v
else:
self.parent[root_v] = root_u
self.rank[root_u] += 1

# Example usage:
uf = UnionFind(5)
uf.union(0, 1)
uf.union(2, 3)
uf.union(0, 2)
print("Parent array after union operations:", uf.parent)

Dynamic Programming

Basic Dynamic Programming Paradigm

Dynamic Programming (DP) is a method for solving complex problems by breaking them
down into simpler subproblems and storing the results of subproblems to avoid redundant
computations.

Dynamic Programming Solution to Optimal Matrix Chain Multiplication

The problem involves finding the most efficient way to multiply a given sequence of
matrices.

def matrix_chain_order(p):
n = len(p) - 1 # number of matrices
m = [[0] * n for _ in range(n)]
s = [[0] * n for _ in range(n)]

for length in range(2, n + 1):


for i in range(n - length + 1):
j = i + length - 1
m[i][j] = float('inf')
for k in range(i, j):
q = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
if q < m[i][j]:
m[i][j] = q
s[i][j] = k + 1

return m, s

def print_optimal_parens(s, i, j):


if i == j:
print(f"A{i}", end="")
else:
print("(", end="")
print_optimal_parens(s, i, s[i][j] - 1)
print_optimal_parens(s, s[i][j], j)
print(")", end="")

# Example usage:
matrix_dimensions = [30, 35, 15, 5, 10, 20, 25]
m, s = matrix_chain_order(matrix_dimensions)
print("Minimum number of multiplications:", m[0][len(matrix_dimensions) - 2])
print("Optimal parenthesization:", end=" ")
print_optimal_parens(s, 0, len(matrix_dimensions) - 2)

Dynamic Programming Solution to Longest Common Subsequence (LCS)

The LCS problem involves finding the longest subsequence that appears in both given
sequences.

def lcs_length(X, Y):


m = len(X)
n = len(Y)
c = [[0] * (n + 1) for _ in range(m + 1)]
b = [[None] * (n + 1) for _ in range(m + 1)]

for i in range(1, m + 1):


for j in range(1, n + 1):
if X[i - 1] == Y[j - 1]:
c[i][j] = c[i - 1][j - 1] + 1
b[i][j] = '↖'
elif c[i - 1][j] >= c[i][j - 1]:
c[i][j] = c[i - 1][j]
b[i][j] = '↑'
else:
c[i][j] = c[i][j - 1]
b[i][j] = '←'

return c, b

def print_lcs(b, X, i, j):


if i == 0 or j == 0:
return ""
if b[i][j] == '↖':
return print_lcs(b, X, i - 1, j - 1) + X[i - 1]
elif b[i][j] == '↑':
return print_lcs(b, X, i - 1, j)
else:
return print_lcs(b, X, i, j - 1)

# Example usage:
X = "ABCBDAB"
Y = "BDCABA"
c, b = lcs_length(X, Y)
print("Length of Longest Common Subsequence:", c[len(X)][len(Y)])
print("Longest Common Subsequence:", print_lcs(b, X, len(X), len(Y)))

These examples cover the basic concepts and implementations for each topic.
