
Design & Analysis of Algorithms

question bank solutions


by your friendly classmate Deven Gurjar

#VeryShorts

UNIT #1veryshort
a) What is an algorithm?
A step-by-step procedure to solve a problem or perform a task.

b) Define space complexity.


The amount of memory an algorithm uses as a function of the input
size.

c) Define time complexity.


The amount of time an algorithm takes to run as a function of the input
size.

d) What do you mean by the term asymptotic notation?


A way to describe the running time or space requirements of an
algorithm as the input size grows large.

e) Why do we need to analyse the algorithm?


To evaluate its efficiency in terms of time and space usage.

f) Why do we need to design an algorithm?


To create a systematic solution to a problem that is efficient and
feasible.

g) What is order of growth?


It refers to how the time or space complexity of an algorithm increases
with input size.

h) What is greedy algorithm?


An algorithm that makes the locally optimal choice at each step, aiming
for a global solution.

i) Why is correctness of algorithm important?


It ensures that the algorithm produces the correct output for all valid
inputs.
j) What do you mean by correctness of algorithm?
It means that an algorithm accurately solves the problem it was
designed for under all possible valid inputs.

UNIT #2veryshort
a) What is randomized algorithm?
An algorithm that uses random numbers to influence its behavior and
decision-making.

b) What are applications of divide and conquer algorithm?


Merge Sort, Quick Sort, Binary Search, Matrix Multiplication.

c) What are applications of greedy algorithm?


Huffman Encoding, Kruskal's MST, Dijkstra's Algorithm, Activity Selection.

d) What is the difference between greedy method and dynamic programming?

Greedy makes local optimal choices, while dynamic programming solves
subproblems and builds up to the solution.

e) List the features of dynamic programming?


Optimal substructure, overlapping subproblems, memoization.

f) What are the advantages of greedy algorithm?


Simple to implement, faster, often provides a good solution for
optimization problems.

g) What are applications of dynamic programming?


Fibonacci series, Knapsack problem, Bellman-Ford Algorithm, Longest
Common Subsequence.

h) What is an algorithm design technique?


A method to create algorithms, such as divide and conquer, dynamic
programming, or greedy algorithms.

i) Give two real time problems that could be solved using greedy
algorithm?
Job Scheduling, Minimum Spanning Tree.

j) Give two real time problems that could be solved using divide and
conquer algorithm?
Sorting large datasets (Merge Sort), Searching in large databases
(Binary Search).
UNIT #3veryshort
a) What is sorting?
Arranging data in a specific order (ascending or descending).

b) What are advantages of sorting?


Easier searching, efficient data management, and better organization.

c) What is bubble sort?


A sorting algorithm that repeatedly swaps adjacent elements if they are
in the wrong order.

d) List the various sorting techniques?


Bubble Sort, Quick Sort, Merge Sort, Insertion Sort, Selection Sort, Heap
Sort.

e) What is searching?
Finding a specific element in a collection of data.

f) What are the various types of searching techniques?


Linear Search, Binary Search.

g) Define quick sort?


A divide-and-conquer algorithm that selects a pivot and partitions the
array around it.

h) Define merge sort?


A divide-and-conquer algorithm that splits the array, sorts each part,
and merges them.

i) What is the time complexity of quick sort?


Best and average case: O(n log n), Worst case: O(n²).

j) What is the time complexity of heap sort?


O(n log n).
UNIT #4veryshort
a) Define amortized analysis?
A method to analyze the average time complexity of operations over a
sequence, smoothing out the cost of expensive operations.

b) What are red-black trees?


A type of self-balancing binary search tree where each node has an
extra bit for color (red or black) to ensure balance.

c) What are balanced trees?


Trees that maintain a specific height or balance condition to ensure
efficient operations like insertion, deletion, and search.

d) Name two advanced analysis techniques?


Master Theorem, Recursion Tree Method.

e) Who gave the concept of red-black tree?


Rudolf Bayer introduced the concept in 1972.

f) What is the time complexity of red-black tree?


O(log n) for insertion, deletion, and search.

g) What is the time complexity of decision tree?


Following a single root-to-leaf path costs time proportional to the
tree's height, which is O(log n) for a balanced decision tree with n
nodes.

h) What is the full form of B-tree?


The "B" has no official expansion; it is most commonly read as Balanced
Tree (it is also attributed to Bayer, the tree's co-inventor).

i) Write the applications of B-tree?


Database indexing, file systems, and systems that manage large blocks
of data.

j) Write the applications of red-black tree?


Implementing associative arrays, priority queues, and maintaining
sorted lists.
Unit #5veryshort
a) List the name of two graph algorithms.
Dijkstra's Algorithm, Kruskal's Algorithm.

b) What is the full form of BFS?


Breadth-First Search.

c) What is the full form of DFS?


Depth-First Search.

d) What is minimum spanning tree?


A subgraph that connects all vertices with the minimum total edge
weight and no cycles.

e) What is the other name of string matching algorithm?


Substring search algorithm.

f) List the name of any two string matching algorithms.


Knuth-Morris-Pratt (KMP) Algorithm, Rabin-Karp Algorithm.

g) What is the full form of KMP?


Knuth-Morris-Pratt.

h) Who introduced the KMP algorithm?


Donald Knuth, Vaughan Pratt, and James H. Morris.

i) How many spanning trees can a graph have?


A graph can have multiple spanning trees, with the number depending
on the graph's structure.

j) What is graph?
A collection of vertices (nodes) connected by edges (lines), used to
represent relationships.
#ShortAnswers

UNIT #1short
a) What are the characteristics of an algorithm?
An algorithm must be well-defined with a clear start and end. It should
be finite, having a limited number of steps. Each step should be
unambiguous, meaning it’s clear and understandable. It should be
deterministic, giving the same output for a given input every time, and
must be efficient, aiming for the least time and space usage. Lastly, an
algorithm should be input-output specified, taking specific inputs and
producing desired outputs.

b) What are the advantages and disadvantages of an algorithm?


Advantages of algorithms include clear step-by-step procedures,
making it easier to debug, understand, and implement. They ensure
task consistency and are foundational to software development.
However, they can be time-consuming to write, especially for complex
tasks. Also, an algorithm may not always be the most efficient solution
for all problems, and may need to be adjusted for specific programming
environments.

c) Difference between worst case, best case, and average case efficiency?
Worst-case efficiency considers the longest time an algorithm takes for
an input of size n, often in the most complex scenarios. Best-case
efficiency represents the shortest possible time, usually in optimal
conditions. Average-case efficiency calculates the time complexity
across all possible inputs, reflecting a more practical estimate. Together,
these cases give a comprehensive view of an algorithm’s performance
under different conditions.

d) Write an algorithm for adding n natural numbers.

1. Start
2. Input: The number n
3. Initialize sum = 0
4. For i = 1 to n:
Add i to sum
5. Output: The value of sum
6. End
This algorithm iterates from 1 to n, adding each value to sum, resulting
in the sum of the first n natural numbers.
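The steps above can be sketched in Python (an illustrative sketch; the function name is made up):

```python
def sum_of_naturals(n):
    """Return the sum of the first n natural numbers by looping 1..n."""
    total = 0                      # step 3: initialize sum = 0
    for i in range(1, n + 1):      # step 4: for i = 1 to n
        total += i                 # add i to sum
    return total                   # step 5: output sum

print(sum_of_naturals(5))  # 1 + 2 + 3 + 4 + 5 = 15
```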

e) What are different types of algorithms?


Algorithms can be categorized as brute force (exhaustively tries all
possibilities), divide and conquer (splits the problem into sub-problems),
greedy algorithms (makes local optimal choices), dynamic programming
(uses overlapping subproblems and memoization), and backtracking
(builds solutions incrementally and abandons unsuitable solutions).
These types help in approaching problems differently based on their
nature.

f) Why analysis of algorithm is important?


Algorithm analysis helps determine its efficiency in terms of time and
space, which is critical for choosing the best solution, especially for
large datasets. It allows developers to predict how an algorithm scales
and choose the most suitable one for real-time applications. Analyzing
algorithms also aids in comparing their efficiency, making the code
more effective and resource-efficient.

g) What is dynamic programming?


Dynamic programming is an algorithmic technique used to solve
problems by breaking them into overlapping subproblems, storing
solutions of these subproblems to avoid redundancy, which optimizes
time complexity. It is commonly used for optimization problems like
shortest path, longest common subsequence, and knapsack, where a
problem has optimal substructure and overlapping subproblems.

h) What is the need of an algorithm?


An algorithm provides a clear, systematic way to solve a problem,
making processes replicable and efficient. It helps in breaking down
complex problems into simpler steps, improves computational
efficiency, and enables consistent and accurate solutions. Algorithms
are essential in software development, data processing, and automated
decision-making.

i) What is the difference between algorithm and flowchart?


An algorithm is a written set of steps to solve a problem, while a
flowchart is a visual representation of these steps using symbols like
arrows, ovals, and rectangles. Algorithms are textual and detailed,
whereas flowcharts offer a graphical view, making it easier to
understand the flow of operations at a glance. Both are used in
planning and debugging processes.
j) Write the time complexity of for loop?
The time complexity of a basic for loop that iterates from 1 to n is O(n),
as the loop executes n times. For nested loops, the complexity multiplies
with each level of nesting, such as O(n²) for two nested loops, O(n³) for
three, and so on. The complexity depends on the number of iterations
and nested structure.
UNIT #2short
a) Write a short note on space-time trade-off.
The space-time trade-off involves balancing memory usage with
execution time. Using more memory (space) can often decrease the time
an algorithm needs to run, and vice versa. For example, storing
precomputed values in arrays (memoization) can speed up a program
but increases memory usage. This trade-off is essential in optimizing
performance, especially in systems with limited memory or when faster
execution is required.

b) Explain the general principle of greedy algorithm.


The greedy algorithm follows a problem-solving approach where it
makes a series of locally optimal choices, hoping to find a global
optimum. It selects the best option at each step without backtracking or
reconsidering choices. Greedy algorithms are often used for
optimization problems and are efficient for problems like minimum
spanning trees and Huffman encoding. However, they don’t always
guarantee a globally optimal solution for all problem types.

c) Give the recurrence relation of divide and conquer algorithm.


In divide-and-conquer algorithms, a problem of size n is divided into
smaller subproblems. If each subproblem is reduced to size n/b and
there are a subproblems, the recurrence relation is:
T(n) = a • T(n/b) + f(n)
where f(n) represents the time taken to divide and combine the
subproblems, often linear or quadratic in nature.

d) Give the recurrence relation of greedy algorithm.


Greedy algorithms typically don’t have a recurrence relation because
they don’t involve recursive steps or subproblem overlap like divide and
conquer or dynamic programming. Instead, greedy algorithms solve
problems by making iterative choices without revisiting previous steps,
leading to linear or polynomial time complexities.

e) Solve the recurrence T(n) = 2T(n/2) + n².

Using the Master Theorem for recurrences of the form
T(n) = a • T(n/b) + f(n):

Here a = 2, b = 2, and f(n) = n², so n^(log_b a) = n^(log_2 2) = n.
Since f(n) = n² grows polynomially faster than n (and the regularity
condition a • f(n/b) = 2 • (n/2)² = n²/2 ≤ c • n² holds for c = 1/2 < 1),
this fits case 3 of the Master Theorem, making T(n) = Θ(n²).
f) What are steps involved in the divide and conquer algorithm?
1. Divide: Split the problem into smaller subproblems.
2. Conquer: Recursively solve each subproblem.
3. Combine: Merge the solutions of the subproblems to form the final
solution.

g) What are steps involved in greedy algorithm?


1. Initialize: Start with an initial solution or empty set.
2. Select: Pick the most optimal choice based on a greedy criterion.
3. Check Feasibility: Ensure the choice is valid for the overall solution.
4. Add: Incorporate the choice into the current solution.
5. Repeat: Continue until a complete solution is found.
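These steps can be illustrated with greedy Activity Selection in Python (a minimal sketch; the activity data is made up):

```python
def activity_selection(activities):
    """Greedy: repeatedly pick the activity that finishes earliest
    among those that don't overlap the ones already chosen."""
    chosen = []                       # 1. initialize: empty solution
    last_finish = float('-inf')
    # 2. select: consider activities in order of earliest finish time
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # 3. check feasibility
            chosen.append((start, finish))  # 4. add to the solution
            last_finish = finish      # 5. repeat until done
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(activity_selection(acts))  # [(1, 4), (5, 7), (8, 11)]
```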

h) What are steps involved in dynamic programming?


1. Define Subproblems: Break the problem into overlapping
subproblems.
2. Recursive Solution: Solve each subproblem recursively.
3. Memoize: Store solutions to subproblems to avoid recomputation.
4. Build Solution: Use the stored solutions to construct the final solution.
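These steps can be illustrated with a memoized Fibonacci in Python (a minimal sketch):

```python
def fib(n, memo=None):
    """nth Fibonacci number via memoized recursion (dynamic programming)."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n                  # base subproblems: fib(0)=0, fib(1)=1
    if n not in memo:             # solve each subproblem only once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                # reuse the stored solution

print(fib(10))  # 55
```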

i) Write a short note on algorithm design techniques.


Algorithm design techniques are methods used to systematically
develop algorithms, such as divide and conquer, greedy, dynamic
programming, and backtracking. Each technique addresses specific
problem characteristics. For instance, dynamic programming is ideal for
overlapping subproblems, while greedy algorithms excel in making
optimal local choices. Choosing the right design technique is essential
for creating efficient solutions.

j) Write a short note on iterative techniques.


Iterative techniques repeatedly execute a set of instructions until a
condition is met. Commonly used in loops, iterative methods are
fundamental for handling repetitive tasks in algorithms, particularly in
cases where recursion is inefficient or can cause memory overflow.
They’re widely used in sorting, searching, and data manipulation tasks
due to their simplicity and reduced memory requirements.
Unit #3short
a) Write two elementary sorting techniques.
Two elementary sorting techniques are Bubble Sort and Selection Sort.
Bubble Sort repeatedly swaps adjacent elements if they are out of order,
while Selection Sort repeatedly finds the minimum element and places it
in the correct position.
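Minimal Python sketches of both techniques (illustrative only, not optimized):

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent elements that are out of order."""
    a = arr[:]
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def selection_sort(arr):
    """Repeatedly find the minimum of the unsorted part and place it next."""
    a = arr[:]
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a

print(bubble_sort([5, 1, 4, 2]))     # [1, 2, 4, 5]
print(selection_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
```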

b) Write the name of two elementary sorting techniques.


Bubble Sort and Insertion Sort are two elementary sorting techniques
commonly used for small datasets and simple applications.

c) Write the name of two linear time sorting techniques.


Radix Sort and Counting Sort are two linear time sorting techniques
that can sort data in O(n) time, given certain constraints on the input
data.

d) How do we analyze the time complexity?


To analyze the time complexity, we examine the number of basic
operations the algorithm performs relative to the input size n. This
analysis is expressed in Big O notation (e.g.,O(n), O(n²)), categorizing the
algorithm's growth rate. By considering best, worst, and average cases,
we predict how the algorithm will perform for various inputs and
optimize as needed.

e) Distinguish between radix sort and count sort.


Radix Sort processes each digit of the elements sequentially, sorting
from the least significant to the most significant digit. Count Sort,
however, uses a counting array to track the frequency of each element
and directly sorts data based on frequency. Radix Sort is suitable for
larger numbers but relies on stable sorting, while Count Sort is best for
smaller ranges or integer data.

f) Distinguish between quick sort and merge sort.


Quick Sort uses a pivot element to partition the array and sorts
in-place, leading to an average-case time complexity of O(nlogn) but a
worst-case of O(n²). Merge Sort divides the array into halves, sorts them
recursively, and merges, achieving O(n log n) consistently. Merge Sort
requires extra space, whereas Quick Sort is more space-efficient.

g) What is linear search? In the given array list below, element 15 has to
be searched in it using Linear Search Algorithm.
Linear search is a sequential search method that checks each element
of an array one by one until it finds the target element or reaches the
end.
For the array |92|87|53|10|15|23|67|, element 15 is found at index 4
after scanning elements from index 0 to 4.
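A minimal Python sketch of the scan described above:

```python
def linear_search(arr, target):
    """Check each element in turn; return its index, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([92, 87, 53, 10, 15, 23, 67], 15))  # 4
```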

h) What is binary search?


Binary search is a search algorithm for sorted arrays that repeatedly
divides the array in half, comparing the middle element to the target. If
the middle element is the target, it stops; otherwise, it discards half of
the array and continues, achieving a time complexity of O(log n).

i) Compare the complexity of heap sort and quick sort.


Heap Sort has a time complexity of O(n log n) consistently for all cases,
while Quick Sort averages O(n log n) but can degrade to O(n²) in the
worst case. Heap Sort is reliable for worst-case scenarios, but Quick
Sort is often faster in practice due to its in-place sorting and cache
efficiency.

j) Distinguish between linear search and binary search.


Linear search checks each element sequentially, with a time complexity
of O(n), and works for unsorted arrays. Binary search, however, operates
on sorted arrays, has a time complexity of O(log n), and divides the
array in half at each step, making it significantly faster but limited to
sorted data.
UNIT #4short
a) List two advantages of red black trees.
1. Balanced structure: Red-black trees maintain balance, ensuring
O(log n) time complexity for insertions, deletions, and searches.
2. Efficient performance: They provide faster operations than
unbalanced binary trees, especially for large datasets.

b) List two advantages of decision trees.


1. Interpretability: Decision trees are easy to understand and interpret,
making them suitable for non-technical users.
2. No need for feature scaling: Decision trees do not require
normalization or standardization of data, simplifying the preprocessing
phase.

c) What is lower bound theory?


Lower bound theory defines the minimum time required by any
algorithm to solve a problem, based on the problem’s inherent
complexity. It establishes that no algorithm can solve a given problem
faster than this bound, allowing us to compare algorithm efficiency and
know when we’ve reached the most optimal solution.

d) What is the time complexity of inserting in a red black tree?


The time complexity of inserting a node in a red-black tree is O(log n),
due to its balanced nature, which ensures that the height of the tree
remains logarithmic in relation to the number of nodes.

e) What is the time complexity of deleting in a red black tree?


The time complexity for deleting a node in a red-black tree is also O(log n), as
the tree maintains a balanced structure after deletion through color
adjustments and rotations.

f) What are the uses of red black tree?


Red-black trees are used in databases and file systems to maintain
sorted data efficiently. They’re also used in memory management and
as foundational structures for other data structures like associative
arrays and priority queues, due to their balanced structure and
efficient insertion, deletion, and search operations.

g) What are the uses of decision tree?


Decision trees are widely used in classification and regression tasks
within machine learning. They’re valuable for customer segmentation,
medical diagnoses, and credit scoring because of their interpretability.
Decision trees are also used in decision-making processes due to their
simple, rule-based structure.

h) What are the searching operations in red black tree?


Searching operations in a red-black tree include:
1. Search by key: Finds a specific node by comparing keys, with a time
complexity of O(log n).
2. Successor and predecessor searches: Locate the next or previous
element in sorted order, used in interval and range queries.

i) What are the different types of balanced trees?


Types of balanced trees include:
1. AVL Trees: Ensure balance by maintaining a height difference of at
most one between child subtrees.
2. Red-Black Trees: Balance through coloring rules and rotations,
allowing for more flexible height differences than AVL.

j) How can we decide the color of a node in a red black tree?


In a red-black tree, the root node is always colored black. Newly inserted
nodes are typically colored red initially. Recoloring and rotations are
then applied as necessary to maintain the red-black properties, such as
no two consecutive red nodes and ensuring all paths from the root to
leaves have the same black node count.
UNIT #5short
a) What are the various applications of minimum spanning tree?
Minimum spanning trees (MST) are used in network design (e.g.,
designing efficient road, electricity, or telecommunications networks),
approximation algorithms for problems like traveling salesman, and
clustering algorithms in machine learning to connect data points with
minimal cost.

b) What is Depth First Search?


Depth First Search (DFS) is a graph traversal method that explores as
far down a branch as possible before backtracking. It uses a stack
(either implicitly with recursion or explicitly) and is often applied to
pathfinding, detecting cycles, and exploring connected components.

c) What is Breadth First Search?


Breadth First Search (BFS) is a graph traversal method that explores all
neighbors at the present depth before moving on to nodes at the next
depth level. It uses a queue and is commonly used for finding the
shortest path in unweighted graphs and exploring all nodes reachable
from a starting node.

d) Distinguish between BFS and DFS.


1. Traversal Order: BFS explores nodes level by level (breadth-wise), while
DFS goes as deep as possible along each branch before backtracking.
2. Data Structure: BFS uses a queue, whereas DFS uses a stack (or
recursion).
3. Applications: BFS is ideal for shortest-path problems, while DFS is
suited for detecting cycles and exploring connected components.
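Both traversals can be sketched in Python on a small made-up graph (adjacency lists; illustrative only):

```python
from collections import deque

def bfs(graph, start):
    """Level-by-level traversal using a queue."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start, visited=None):
    """Depth-first traversal using recursion (an implicit stack)."""
    if visited is None:
        visited = []
    visited.append(start)
    for nbr in graph.get(start, []):
        if nbr not in visited:
            dfs(graph, nbr, visited)
    return visited

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']
print(dfs(g, 'A'))  # ['A', 'B', 'D', 'C']
```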

e) What are applications of BFS?


BFS is used in shortest-path finding in unweighted graphs, level-order
traversal in trees, social network analysis for discovering relationships,
and in Web crawling to explore URLs at different depth levels from a
source page.

f) What are applications of DFS?


DFS is used in pathfinding and cycle detection in graphs, topological
sorting in Directed Acyclic Graphs (DAGs), solving maze problems, and
finding connected components in undirected graphs.

g) What are applications of minimum spanning trees?


Minimum spanning trees are used in network design (to create efficient,
cost-effective infrastructure), approximating the traveling salesman
problem, and clustering data in machine learning. MSTs are also useful
in image segmentation and constructing efficient pipelines or
pathways.

h) What are the components of the KMP algorithm?


The Knuth-Morris-Pratt (KMP) algorithm consists of:
1. Pattern Matching: Uses an efficient comparison of characters to
match patterns within a string.
2. LPS Array (Longest Prefix Suffix): Helps skip unnecessary comparisons
by storing the longest prefix that is also a suffix, optimizing pattern
searching.
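Building the LPS array can be sketched in Python (a minimal sketch; the pattern below is just an example):

```python
def compute_lps(pattern):
    """lps[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it."""
    lps = [0] * len(pattern)
    length = 0            # length of the previous longest prefix-suffix
    i = 1
    while i < len(pattern):
        if pattern[i] == pattern[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length > 0:
            length = lps[length - 1]   # fall back without advancing i
        else:
            lps[i] = 0
            i += 1
    return lps

print(compute_lps("ABABCABAB"))  # [0, 0, 1, 2, 0, 1, 2, 3, 4]
```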

i) List the properties of minimum spanning trees.


1. Connected and Acyclic: An MST connects all vertices with the
minimum possible total edge weight without any cycles.
2. Unique or Multiple Solutions: If all edges have unique weights, the
MST is unique; otherwise, there may be multiple MSTs.

j) What are the components of a graph?


1. Vertices (Nodes): The entities or points in a graph.
2. Edges (Links): Connections between vertices, which can be directed or
undirected, weighted or unweighted.
#LongAnswers

UNIT #1long
a) Explain the various techniques of designing an algorithm.
There are several techniques for designing efficient algorithms, each
with its own strengths and ideal use cases:
1. Divide and Conquer: This technique involves dividing the problem into
smaller subproblems, solving each recursively, and combining their
solutions. Examples include Merge Sort and Quick Sort.
2. Dynamic Programming: Used when the problem has overlapping
subproblems and optimal substructure properties, this approach solves
each subproblem only once, storing the results for future use. Examples
are Fibonacci series and Knapsack problem.
3. Greedy Algorithm: Greedy algorithms make locally optimal choices in
each step, aiming for a global optimum. Examples include Prim’s and
Kruskal’s algorithms for finding minimum spanning trees.
4. Backtracking: In this approach, partial solutions are incrementally
built and abandoned if they do not lead to a viable solution. Used in
N-Queens and Sudoku-solving problems.
5. Branch and Bound: This technique is used for optimization problems,
where branches of the solution space tree are evaluated, and bounds
are used to prune non-promising branches. Examples include Traveling
Salesman Problem.
6. Randomized Algorithms: These algorithms use randomness as part of
their logic, often for approximation. Examples are Quick Sort
(randomized pivot) and Randomized Primality Testing.

These techniques provide a structured approach for tackling various
computational problems efficiently.

b) Write an algorithm to find the factorial of a number and find its time
complexity.
Algorithm to find the factorial of a number:
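In Python, the recursive algorithm can be sketched as:

```python
def factorial(n):
    """Recursive factorial: n! = n * (n-1)!, with 0! = 1! = 1."""
    if n <= 1:
        return 1                   # base case: 0! and 1! are both 1
    return n * factorial(n - 1)    # recursive case: n * (n-1)!

print(factorial(5))  # 120
```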

→ In this recursive algorithm:


1. We check if n is 0 or 1, returning 1 since 0! and 1! are both 1.
2. For other values, we recursively call the function by multiplying n with
(n-1)! .
→ Time Complexity:
The time complexity is O(n) because the algorithm makes a single
recursive call for each decrement from n down to 1.

c) What are the general rules followed for writing an algorithm?


When writing an algorithm, several guidelines help ensure clarity,
efficiency, and correctness:

1. Define the Problem Clearly: State the problem, inputs, and expected
outputs.
2. Modular Structure: Break down the solution into logical steps or
modules, making it easier to read and debug.
3. Specify Constraints: Clearly identify constraints to ensure the
algorithm handles edge cases.
4. Correctness and Efficiency: Ensure the algorithm is correct, handling
all inputs, and is efficient in terms of time and space complexity.
5. Use Clear and Simple Language: Avoid unnecessary complexity; use a
consistent naming convention for variables and steps.
6. Choose the Appropriate Data Structures: Use data structures that
support the required operations efficiently.
7. Commenting and Documentation: Add comments to describe the logic
of critical sections, which helps in future modifications or debugging.

Following these rules ensures the algorithm is efficient, understandable,


and easy to maintain.

d) How to find correctness of algorithm? Explain with an example.


To determine an algorithm's correctness, we use two main approaches:

1. Proof by Induction: This approach proves that the algorithm works for
the base case and assumes it works for n to prove for n+1.
2. Loop Invariant: It involves defining a condition that holds true before
and after every iteration of a loop, ensuring that the algorithm
maintains correctness throughout.

Example: Checking correctness of an algorithm using Loop Invariant:

For an array arr of length n, if the loop iterates from 0 to n-1, a loop
invariant could be that max_so_far always holds the maximum element
of arr[0]...arr[i].

→ Proof:
● Initialization: max_so_far is initialized to arr[0], the first element, so
the invariant holds.
● Maintenance: On each iteration, max_so_far is updated to the
maximum of max_so_far and arr[i], maintaining the invariant.
● Termination: At the end of the loop, max_so_far will hold the
maximum of all elements.

This example illustrates how loop invariants help ensure correctness by
consistently maintaining the algorithm's intended properties.
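The max_so_far example can be sketched in Python, with the invariant noted at each phase:

```python
def find_max(arr):
    """Invariant: before each iteration, max_so_far is the maximum
    of arr[0..i-1]."""
    max_so_far = arr[0]                       # initialization: max of arr[0..0]
    for i in range(1, len(arr)):
        max_so_far = max(max_so_far, arr[i])  # maintenance: invariant preserved
    return max_so_far                         # termination: max of all elements

print(find_max([3, 7, 2, 9, 4]))  # 9
```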

e) Write an algorithm to find the minimum and maximum number in an
array and find its time complexity.
Algorithm:
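A minimal Python sketch of the single-pass algorithm:

```python
def find_min_max(arr):
    """Single traversal, tracking a running minimum and maximum."""
    min_value = max_value = arr[0]     # initialize both to the first element
    for x in arr[1:]:
        if x < min_value:
            min_value = x              # smaller element found
        elif x > max_value:
            max_value = x              # larger element found
    return min_value, max_value

print(find_min_max([4, 9, 1, 7, 3]))  # (1, 9)
```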

→ In this algorithm:

1. Initialize min_value and max_value as the first element.
2. Traverse the array and update min_value if a smaller element is found,
and max_value if a larger element is found.

→ Time Complexity:
The time complexity is O(n) as the algorithm only requires a single
traversal of the array.

f) Explain backtracking algorithm along with an example.


Backtracking is a problem-solving technique that incrementally builds a
solution and abandons (“backtracks”) once it determines the solution
cannot be completed successfully. It’s used in scenarios like constraint
satisfaction and combinatorial problems.

Example: Solving the N-Queens problem.

1. The problem requires placing N queens on an N × N chessboard such
that no two queens threaten each other.
2. Start placing queens row by row. For each row, try placing a queen in
each column and check for safety.
3. If placing a queen leads to no valid position in the next row, remove
(backtrack) the queen and try the next column.
Backtracking allows us to explore only the feasible parts of the solution
space, improving efficiency.
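A compact Python sketch of N-Queens backtracking (storing one queen's column per row; illustrative, not optimized):

```python
def solve_n_queens(n):
    """Return one valid placement as a list of column indices per row,
    or None if no solution exists."""
    cols = []                       # cols[r] = column of the queen in row r

    def safe(row, col):
        # No queen shares this column or a diagonal with (row, col).
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            return True             # all queens placed
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()          # backtrack: undo and try the next column
        return False

    return cols if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]
```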

g) Explain randomized algorithm along with an example.


A randomized algorithm uses random numbers at least once during
computation, which can help in reducing complexity or providing
approximate solutions.

Example: Randomized Quick Sort.

1. Unlike the standard Quick Sort, which selects the last or first
element as the pivot, randomized Quick Sort selects a random
element as the pivot.
2. Randomly selecting a pivot reduces the chance of encountering
worst-case complexity on sorted or nearly sorted data.

→ Randomized algorithms are helpful in large datasets and offer good
average-case complexity even when deterministic algorithms might fail.
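Random pivot selection can be sketched in Python (out-of-place for clarity; the standard version partitions in place):

```python
import random

def randomized_quicksort(arr):
    """Quick Sort with a randomly chosen pivot, which avoids the
    worst case on already-sorted inputs."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                     # random pivot
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([9, 3, 7, 1, 3, 8]))  # [1, 3, 3, 7, 8, 9]
```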

h) What is the difference between an algorithm and pseudocode?

1. Definition: An algorithm is a well-defined step-by-step procedure
to solve a problem, while pseudocode is a high-level description of
an algorithm written in an easy-to-understand format without
strict syntax.
2. Detail Level: Algorithms focus on the logic, whereas pseudocode
may include implementation details to communicate ideas.
3. Purpose: Algorithms are theoretical and conceptual, while
pseudocode bridges the gap between algorithm and actual code,
providing a blueprint.
4. Example Use: Algorithms are used to define solutions in a
formalized way, while pseudocode is used in the development
phase to outline the program structure.

Algorithms focus on logic, while pseudocode translates this logic into a


structured outline for coding.

i) Write an algorithm for binary search along with its time complexity.
Algorithm:
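A minimal iterative Python sketch:

```python
def binary_search(arr, target):
    """Binary search on a sorted array; returns the index or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid              # found the target
        elif arr[mid] < target:
            low = mid + 1           # discard the left half
        else:
            high = mid - 1          # discard the right half
    return -1

print(binary_search([10, 15, 23, 53, 67, 87, 92], 23))  # 2
```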
→ Time Complexity:
Binary search operates in O(log n) time since it repeatedly divides the
search interval in half.

j) Explain branch and bound method with example.


Branch and Bound is an algorithm design paradigm used for solving
optimization problems. It involves systematically dividing the solution
space and using bounds to prune suboptimal solutions.

Example: Solving the Knapsack problem.


1. Each item can either be included or excluded, creating a branch in the
decision tree.
2. A bound (estimate of the maximum solution) is calculated at each
branch.
3. Branches that exceed this bound are pruned.

Branch and Bound is efficient for optimization problems and is
commonly used in scenarios where brute-force solutions would be
computationally expensive.
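The pruning idea can be sketched in Python for the 0/1 knapsack, using a fractional-relaxation bound; the item values and weights below are illustrative, not from the question bank:

```python
def knapsack_bnb(values, weights, capacity):
    # Sort items by value-to-weight ratio so the fractional bound is easy to compute.
    items = sorted(zip(values, weights), key=lambda it: it[0] / it[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        b = value
        for v, w in items[i:]:
            if w <= room:
                room -= w
                b += v
            else:
                b += v * room / w
                break
        return b

    def dfs(i, value, room):
        nonlocal best
        if i == len(items):
            best = max(best, value)
            return
        if bound(i, value, room) <= best:
            return                            # prune: cannot beat the best found so far
        v, w = items[i]
        if w <= room:
            dfs(i + 1, value + v, room - w)   # branch: include item i
        dfs(i + 1, value, room)               # branch: exclude item i

    dfs(0, 0, capacity)
    return best
```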
UNIT #2long
a) Explain Divide and Conquer Method along with its algorithm.
The Divide and Conquer (D&C) method solves problems by breaking
them down into smaller subproblems, solving these subproblems
recursively, and then combining their solutions. It’s used in sorting,
searching, and other complex problems.

→ Steps in Divide and Conquer:

1. Divide: Split the problem into smaller subproblems.


2. Conquer: Solve each subproblem recursively.
3. Combine: Merge the solutions of the subproblems to form the solution
for the original problem.

Example: Merge Sort Algorithm:

In this example, the array is split until each subarray has one element,
then the elements are merged in sorted order.

→ Time Complexity:
Merge Sort has a time complexity of O(n log n) due to the repeated
division of the array (logarithmic) and merging of elements.
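The three D&C steps can be sketched for Merge Sort in Python:

```python
def merge_sort(arr):
    # Base case: a single element (or empty list) is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # divide + conquer the left half
    right = merge_sort(arr[mid:])      # divide + conquer the right half
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```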

b) What are the iterative design techniques?


Iterative design techniques aim to develop algorithms that solve
problems through repeated cycles (iterations) rather than recursive
calls. Key iterative techniques include:

1. Looping Structures: Use loops to repeat actions, such as in array
traversals or cumulative calculations.
2. Dynamic Programming: Stores results of overlapping subproblems
iteratively, avoiding recursion.
3. Greedy Iteration: Makes local optimal choices at each step, typically in
sorting and selection.
4. Successive Approximation: Refines a solution gradually by iterating
until the desired precision is achieved.
Iterative techniques are often preferred for their efficiency, especially in
memory-limited environments where recursion depth could lead to stack
overflow.

c) Explain the greedy algorithm for the fractional knapsack problem with
its time complexity.
The fractional knapsack problem allows fractions of items to be taken to
maximize profit. A greedy approach is used to take items with the
highest profit-to-weight ratio until the knapsack is full.

→ Algorithm:

1. Calculate the profit-to-weight ratio for each item.


2. Sort items by this ratio in descending order.
3. Iteratively add the item or fraction of the item with the highest ratio
until the knapsack capacity is reached.

Example: Given items with weights [2, 3, 5] and profits [30, 50, 70], and a
knapsack capacity of 5:
1. Calculate ratios: [15, 16.67, 14].
2. Sort by ratio: item with weight 3 (16.67), then 2, then 5.
3. Take items fully or partially until the capacity is filled.

→ Time Complexity:
Sorting items by profit-to-weight ratio takes O(n log n), and iterating
over items takes O(n). Therefore, the total time complexity is O(n log n).
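Applying these steps to the example above in Python (the total of 80 comes from taking the weight-3 and weight-2 items fully):

```python
def fractional_knapsack(weights, profits, capacity):
    # Pair items and sort by profit-to-weight ratio, highest first.
    items = sorted(zip(weights, profits), key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for w, p in items:
        if capacity >= w:              # the whole item fits
            total += p
            capacity -= w
        else:                          # take only the fraction that fits
            total += p * capacity / w
            break
    return total
```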

d) Explain the greedy knapsack problem.


The greedy knapsack problem aims to maximize the profit by selecting
items with the highest profit-to-weight ratio without breaking items into
fractions (0/1 knapsack).

→ Greedy Algorithm Steps:

1. Calculate the profit-to-weight ratio for each item.


2. Sort items by their ratios in descending order.
3. Start adding items from the sorted list to the knapsack until the
capacity is full or the next item does not fit.

→ Limitation:
Unlike the fractional knapsack problem, the greedy algorithm does not
guarantee an optimal solution for the 0/1 knapsack problem because
some combinations of smaller items may yield a higher profit than
taking the next highest ratio item.
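This limitation is easy to demonstrate with a standard illustrative instance (weights and profits below are not from the notes): greedy by ratio collects 160, while the optimal choice of the weight-20 and weight-30 items gives 220.

```python
def greedy_01_knapsack(weights, profits, capacity):
    # Sort by profit-to-weight ratio, highest first.
    items = sorted(zip(weights, profits), key=lambda it: it[1] / it[0], reverse=True)
    total = 0
    for w, p in items:
        if w <= capacity:              # whole items only: no fractions in 0/1 knapsack
            total += p
            capacity -= w
    return total
```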
e) Explain iterative algorithm design issues.
Iterative algorithm design presents several challenges:

1. Efficiency vs. Complexity: Balancing the need for efficient, fast
algorithms with the complexity of implementation.
2. Termination Condition: Ensuring loops have correct termination
conditions to avoid infinite loops.
3. Memory Management: Managing memory usage efficiently, especially
when storing intermediate results.
4. Accuracy and Precision: With iterative approximations, it’s crucial to
define when a solution is “close enough” to the correct answer.
5. Error Propagation: Errors may accumulate in iterative algorithms,
affecting accuracy, especially in floating-point calculations.

Careful planning and testing are required to address these issues for
reliable and efficient iterative algorithms.

f) Compare the efficiency of divide and conquer algorithm and greedy algorithm.

1. Problem Scope: Divide and Conquer (D&C) is suitable for a wide range
of problems, including sorting and searching, while Greedy algorithms
are ideal for optimization problems.
2. Time Complexity: D&C often achieves O(n log n) (e.g., Merge Sort), while
Greedy algorithms can vary (e.g., O(n log n) for fractional knapsack,
dominated by sorting).
3. Solution Quality: D&C computes exact solutions for the problems it applies
to, but Greedy algorithms may provide only approximate solutions in problems
like the 0/1 Knapsack problem.
4. Resource Use: Greedy algorithms tend to use fewer resources
(memory and time) since they do not recurse or break problems down
repeatedly, unlike D&C.
5. Implementation Complexity: Greedy algorithms are generally simpler
to implement and understand, while D&C requires managing recursive
calls and combining results.

In summary, Greedy algorithms are often faster and more straightforward but
may not always produce optimal solutions, whereas D&C is reliable for
correctness and optimal solutions but may consume more resources.

g) Explain in detail any one application of divide and conquer algorithm.
One popular application of the divide and conquer method is Merge
Sort.
Merge Sort: Merge Sort is a sorting algorithm that follows the divide and
conquer approach.

1. Divide: Split the array into two halves.


2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves to get the final sorted array.

→ Steps:

1. Split the array until each subarray contains only one element.
2. Merge each pair of sorted subarrays until the entire array is
sorted.

→ Advantages:
Merge Sort is highly efficient with a time complexity of O(n log n), making
it ideal for sorting large datasets. It’s also stable, meaning it maintains
the order of equal elements, and can handle data that doesn’t fit in
memory by working in smaller chunks.

h) Explain in detail any one application of greedy algorithm.


An important application of the greedy algorithm is in finding a
Minimum Spanning Tree (MST) using Prim’s or Kruskal’s algorithm.

Minimum Spanning Tree: A spanning tree of a graph connects all
vertices with the minimum possible total edge weight without forming
cycles.

→ Steps in Kruskal’s Algorithm:

1. Sort all edges of the graph by weight.


2. Start adding the smallest edge, ensuring no cycles are formed, until
all vertices are connected.

→ Advantages:
The greedy approach used in Kruskal’s algorithm provides an optimal
solution for MST. Its time complexity is O(E log E), which is efficient for
sparse graphs. MST applications include designing efficient network
connections for utilities, telecommunications, and transportation.

i) Given a weighted graph and a source vertex in the graph, find the
shortest paths from the source to all the other vertices in the given
graph using the Dijkstra’s Algorithm.
To find the shortest paths from a source vertex to all other vertices in
this weighted graph using Dijkstra's Algorithm, let's go through the
algorithm step by step.

Assuming you choose vertex 0 as the source, here's how
Dijkstra’s Algorithm works for this graph:

→ Dijkstra's Algorithm Steps

1. Initialize distances: Set the distance to the source vertex (0) as 0 and
all other vertices as infinity.
2. Visit the closest unvisited vertex: In each iteration, pick the vertex with
the smallest distance that hasn't been processed yet.
3. Update distances of adjacent vertices: For the selected vertex, update
the distances to its neighboring vertices if a shorter path is found
through the selected vertex.
4. Repeat until all vertices are visited.

→ Step-by-Step Solution

Here is a simple demonstration of the steps:


1. Start with Vertex 0 (Distance 0):
Distance from 0 to 1 = 4
Distance from 0 to 7 = 8

Current shortest path estimates:


0: 0, 1: 4, 7: 8

2. Visit Vertex 1 (Distance 4):


Distance from 1 to 2 = 4 + 8 = 12
Distance from 1 to 7 = 4 + 11 = 15 (no update, since the current distance 8 is shorter)

Updated estimates:
0: 0, 1: 4, 2: 12, 7: 8

3. Visit Vertex 7 (Distance 8):


Distance from 7 to 8 = 8 + 7 = 15
Distance from 7 to 6 = 8 + 1 = 9
Updated estimates:
0: 0, 1: 4, 2: 12, 6: 9, 7: 8, 8: 15

4. Visit Vertex 6 (Distance 9):


Distance from 6 to 5 = 9 + 2 = 11
Distance from 6 to 8 = 9 + 6 = 15 (no change as already 15)

Updated estimates:
0: 0, 1: 4, 2: 12, 5: 11, 6: 9, 7: 8, 8: 15

5. Visit Vertex 5 (Distance 11):


Distance from 5 to 3 = 11 + 14 = 25
Distance from 5 to 4 = 11 + 10 = 21

Updated estimates:
0: 0, 1: 4, 2: 12, 3: 25, 4: 21, 5: 11, 6: 9, 7: 8, 8: 15

6. Visit Vertex 2 (Distance 12):


Distance from 2 to 3 = 12 + 7 = 19
Distance from 2 to 8 = 12 + 2 = 14 (update as shorter than 15)

Updated estimates:
0: 0, 1: 4, 2: 12, 3: 19, 4: 21, 5: 11, 6: 9, 7: 8, 8: 14

7. Visit Vertex 8 (Distance 14):


Distance from 8 to 3 = 14 + 4 = 18 (update as shorter than 19)

Updated estimates:
0: 0, 1: 4, 2: 12, 3: 18, 4: 21, 5: 11, 6: 9, 7: 8, 8: 14

8. Visit Vertex 3 (Distance 18) - No further updates.

9. Visit Vertex 4 (Distance 21) - No further updates.

→ Final Shortest Path Distances from Vertex 0


The shortest path distances from the source vertex 0 to all vertices are:

0 -> 0 = 0
0 -> 1 = 4
0 -> 2 = 12
0 -> 3 = 18
0 -> 4 = 21
0 -> 5 = 11
0 -> 6 = 9
0 -> 7 = 8
0 -> 8 = 14

These are the shortest paths from vertex 0 to all other vertices.
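The trace above can be reproduced in Python. The edge list below is reconstructed from the distance updates in the steps, so treat it as an assumption about the original figure:

```python
import heapq

# Undirected weighted edges (u, v, w), reconstructed from the trace above.
EDGES = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (7, 8, 7), (7, 6, 1),
         (6, 5, 2), (6, 8, 6), (5, 3, 14), (5, 4, 10), (2, 3, 7), (2, 8, 2),
         (8, 3, 4)]

def dijkstra(n, edges, src):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float('inf')] * n
    dist[src] = 0
    pq = [(0, src)]                       # min-heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale entry: u was already settled
        for v, w in adj[u]:
            if d + w < dist[v]:           # relax edge u-v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```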

j) What does dynamic programming have in common with divide and conquer?

Dynamic programming (DP) and divide and conquer (D&C) are both
problem-solving techniques that involve breaking down a problem into
smaller subproblems and then combining the solutions to address the
original problem. They share the following characteristics:

1. Recursive Problem Breakdown: Both approaches recursively divide a
larger problem into smaller subproblems, aiming to simplify and solve
them individually.
2. Optimal Substructure: DP and D&C are suitable for problems with an
optimal substructure property, where an optimal solution to the main
problem can be constructed from optimal solutions to its subproblems.
3. Combination of Results: After solving subproblems, both techniques
combine their solutions to produce the final result for the main problem.

However, a key difference is that DP stores the solutions to overlapping
subproblems to avoid redundant computations, whereas D&C does not
and may solve the same subproblem multiple times.
UNIT #3long
a) Merge Sort Algorithm with Worst Case, Best Case, and Average Case
Analysis

→ Merge Sort Algorithm:

1. If the array has one element, return it (base case).


2. Split the array into two halves.
3. Recursively apply merge sort on each half.
4. Merge the two sorted halves to create a single sorted array.

Algorithm:

Time Complexity:

Worst Case: O(n log n), as the array is always split into halves.
Best Case: O(n log n), even if the array is already sorted.
Average Case: O(n log n), consistently dividing and merging.

b) Quick Sort Algorithm to Sort Numbers 28, 56, 12, 67, 34, 2, 40, 23

1. Select a pivot (e.g., last element).


2. Partition the array into elements less than and greater than the pivot.
3. Recursively apply quick sort on each partition.

→ Steps:

1. Choose pivot = 23.


2. Rearrange around pivot → {12, 2, 23, 67, 34, 28, 40, 56}.
3. Recursively apply to subarrays {12, 2} and {67, 34, 28, 40, 56}.
4. Continue until array is sorted → {2, 12, 23, 28, 34, 40, 56, 67}.
c) Quick Sort Algorithm and Explanation

→ Quick Sort Algorithm:

→ Explanation:

1. Partitioning: The array is divided so elements less than the pivot are
on one side and greater elements on the other.
2. Recursive Sorting: Quick sort is recursively applied to each partition.
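A Lomuto-partition sketch in Python (pivot = last element, matching the trace in part b):

```python
def partition(arr, lo, hi):
    pivot = arr[hi]                   # last element as pivot (Lomuto scheme)
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] < pivot:            # move smaller elements to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
    return i + 1                      # pivot's final sorted position

def quick_sort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)    # recurse on elements left of the pivot
        quick_sort(arr, p + 1, hi)    # recurse on elements right of the pivot
    return arr
```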

d) Insertion Sort Algorithm and Time Complexity

→ Insertion Sort Algorithm:

→ Time Complexity:

Worst Case: O(n²) (when array is sorted in reverse).


Best Case: O(n) (when array is already sorted).
Average Case: O(n²).
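A short Python sketch of insertion sort:

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]                  # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]       # shift larger elements one slot right
            j -= 1
        arr[j + 1] = key
    return arr
```

The best case O(n) corresponds to the inner while loop never shifting anything; the reverse-sorted worst case shifts every prefix element, giving O(n²).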

e) Sort the Sequence 67, 89, 35, 89, 12, 32, 78

Using any sorting algorithm (e.g., merge sort):

Sorted sequence: 12, 32, 35, 67, 78, 89, 89


°•° work it out yourself

f) Bucket Sort Algorithm and Time Complexity

→ Bucket Sort Algorithm:


1. Create empty buckets.
2. Distribute elements into buckets based on value ranges.
3. Sort each bucket.
4. Concatenate all buckets.

→ Time Complexity:

Average Case: O(n + k) where k is the number of buckets.


Worst Case: O(n²) if all elements fall into the same bucket.
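The four steps can be sketched in Python (the bucket count of 5 is an arbitrary choice for illustration):

```python
def bucket_sort(arr, num_buckets=5):
    if not arr:
        return arr
    lo, hi = min(arr), max(arr)
    width = (hi - lo) / num_buckets or 1          # value range covered by each bucket
    buckets = [[] for _ in range(num_buckets)]    # 1. create empty buckets
    for x in arr:                                 # 2. distribute by value range
        idx = min(int((x - lo) / width), num_buckets - 1)
        buckets[idx].append(x)
    result = []
    for b in buckets:                             # 3. sort each bucket, 4. concatenate
        result.extend(sorted(b))
    return result
```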

g) Count Sort Algorithm and Time Complexity

→ Count Sort Algorithm:

1. Create a count array to store the frequency of each element.


2. Modify the count array to store cumulative frequencies.
3. Place elements in their sorted positions in the output array.

→ Time Complexity:

O(n + k), where k is the range of input values.
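A Python sketch of the three steps (assumes non-negative integer keys):

```python
def counting_sort(arr):
    if not arr:
        return arr
    k = max(arr)
    count = [0] * (k + 1)
    for x in arr:                     # 1. frequency of each value
        count[x] += 1
    for i in range(1, k + 1):         # 2. cumulative frequencies
        count[i] += count[i - 1]
    out = [0] * len(arr)
    for x in reversed(arr):           # 3. place elements at their sorted positions
        count[x] -= 1                 # (reversed scan keeps the sort stable)
        out[count[x]] = x
    return out
```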

h) Methods for Finding the Pivot Element in Quick Sort

1. First Element: Use the first element as pivot.


2. Last Element: Use the last element as pivot (common).
3. Middle Element: Use the middle element as pivot.
4. Median-of-Three: Use the median of the first, middle, and last
elements.

i) Five Real-Time Applications of Merge Sort

1. Sorting Large Data Files: Merge sort is efficient for large datasets.
2. Inversion Counting: Useful in counting inversions in arrays.
3. External Sorting: Ideal for data stored on external storage.
4. Parallel Computing: Easily parallelizable due to its recursive nature.
5. Data Organization: Useful in organizing data for fast access.

j) Trace Heap Sort Algorithm for Data {2, 9, 3, 12, 15, 8, 11}

1. Build a max heap.


2. Swap the root (max element) with the last element and reduce the
heap size.
3. Heapify to restore the heap property, repeat until sorted.
→ Heap Sort Steps:

1. Initial Max Heap: {15, 12, 11, 2, 9, 8, 3}


2. Swap root and last element, heapify.
3. Continue until sorted: {2, 3, 8, 9, 11, 12, 15}
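The trace can be checked with a short Python sketch of heap sort:

```python
def heapify(arr, n, i):
    # Sift arr[i] down so the subtree rooted at i satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):    # build the max heap bottom-up
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):        # swap root with last, shrink, re-heapify
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
    return arr
```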
UNIT #4long
a) Properties of Red-Black Trees

Red-black trees are a type of self-balancing binary search tree with
specific properties that ensure the tree remains approximately
balanced:

1. Node Color: Each node is either red or black.


2. Root Property: The root of the tree is always black.
3. Leaf Property: All leaves (NIL nodes) are black.
4. Red Property: Red nodes cannot have red children, i.e., no two
consecutive red nodes.
5. Black-Height Property: Every path from a node to its descendant NIL
nodes must contain the same number of black nodes.

These properties ensure that the longest path from the root to a leaf is
no more than twice the length of the shortest path, maintaining a
balanced structure.

b) Steps for Inserting 15, 17, 90, 56, 23, 12 in a Red-Black Tree

1. Insert 15: Insert as the root node and color it black.


2. Insert 17: Insert as a red node to the right of 15.
3. Insert 90: Insert as a red node to the right of 17. This creates a red-red violation (17 and 90), so a left rotation and recoloring make 17 the black root with red children 15 and 90.
4. Insert 56: Insert as a red node to the left of 90.
● Recolor or Rotate as needed to maintain properties.
5. Insert 23: Insert as a red node under 56.
6. Insert 12: Insert as a red node to the left of 15.
● Adjustments: Rotate or recolor to maintain the properties.

Continue adjusting to ensure no two consecutive red nodes exist, and
the black-height property is maintained.

c) Rules for Creating Red-Black Trees

1. The root is always black.


2. All leaves are black.
3. Red nodes cannot have red children (no consecutive red nodes).
4. Every path from a node to its descendants has the same number of
black nodes.
5. New nodes are inserted as red, and rotations/recoloring ensure tree
properties.
d) Algorithm for Searching in a Red-Black Tree

Algorithm

1. Start at the root.


2. Recursively move left or right based on the target value.
3. Return the node if found, or NIL if not found.

e) Algorithm for Searching in a Decision Tree

Algorithm

1. Start from the root and check the decision criteria.


2. Based on feature values, recursively move to the left or right child.
3. Return the predicted class at a leaf node.

f) Different Methods for Amortized Analysis

1. Aggregate Analysis: Total cost over a sequence of operations is
divided by the number of operations to find the average cost.
2. Accounting Method: Assigns “amortized” costs to each operation,
potentially storing or borrowing credits for future operations.
3. Potential Method: Uses a potential function that estimates the “stored
energy” of a data structure, balancing costs over multiple operations.

g) Comparing Efficiency of Red-Black Trees and Decision Trees


● Red-Black Trees: Balanced search trees with time complexity of
O(log n) for search, insert, and delete operations. They are
efficient for sorted data operations and ensure balanced height.

● Decision Trees: Primarily used in classification problems; their
efficiency depends on the depth of the tree, and complexity varies
with the data structure. Red-black trees are generally faster for
traditional search/insert operations.

h) Implementing a Binary Search Tree and Its Time Complexity

→ Binary Search Tree (BST) Insertion Algorithm:

1. Start at the root.


2. If the tree is empty, the new node becomes the root.
3. Otherwise, compare the value with the root:
● If smaller, go left; if larger, go right.
● Repeat until reaching a leaf position, then insert the node.

→ Time Complexity:

Average Case: O(log n) if the tree is balanced.


Worst Case: O(n) if the tree is unbalanced (e.g., linked list-like structure).
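A minimal BST sketch in Python (insert, search, and an in-order traversal to verify sorted order), using the keys from part (b) as sample data:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:                   # empty spot found: new node goes here
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    if root is None or root.key == key:
        return root
    if key < root.key:
        return search(root.left, key)
    return search(root.right, key)

def inorder(root):
    # In-order traversal of a BST yields keys in sorted order.
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []
```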

i) Decision Tree Algorithm with Example

Decision trees are used in classification problems where decisions are
made based on feature values:

1. Start with all data points.


2. Select a feature and a threshold that best splits the data (using
metrics like Gini or entropy).
3. Create branches based on this split, assigning data to branches.
4. Repeat recursively for each branch until all data in a node belongs to
the same class or maximum depth is reached.

Example: Classifying if someone should play sports based on weather
and temperature:
● Split based on whether it's sunny or not, then split based on
temperature.
● Each leaf node represents the decision to “play” or “not play.”

j) Lower Bounding Techniques


1. Comparison Trees: Used to show that certain problems (like
comparison-based sorting) have a minimum time complexity of Ω(n log
n).
2. Decision Trees: Analyzes the minimum number of decisions required
to reach an outcome, setting bounds for problem-solving complexity.
3. Reduction: Transforms one problem into another known problem with
a lower bound, showing that a solution cannot be faster than the
reduced problem’s complexity.
4. Information-Theoretic Bounds: Based on information theory, these
bounds estimate the minimum information required to distinguish
between outputs, setting theoretical limits on efficiency.
UNIT #5long
a) Advantages, Disadvantages, and Applications of Graphs

→ Advantages:

1. Graphs represent complex networks effectively, such as social
media connections, computer networks, and transport routes.
2. Graphs allow easy modeling of relationships and dependencies
between entities, useful in algorithmic problems like shortest path
and connectivity.

→ Disadvantages:

1. Graphs can become complex and challenging to handle as the
number of nodes and edges increases.
2. Memory consumption is high, especially for dense graphs, as they
require large adjacency matrices or lists.

→ Applications:

1. Social Networks: Representing relationships between individuals
(e.g., friends on Facebook).
2. Network Routing: Shortest path and optimal route finding in
communication and transportation networks.
3. Web Page Ranking: Google’s PageRank uses graphs to rank pages
based on link structure.

b) Explain Kruskal’s minimum cost spanning tree algorithm with the given graph.

To explain Kruskal's Minimum Cost Spanning Tree (MST) algorithm using
this graph, let’s go over the steps of Kruskal’s algorithm and apply them
to the graph provided.

→ Kruskal’s Algorithm Steps

1. Sort all edges in the graph by their weights in ascending order.


2. Initialize an empty MST set, which will store the edges of the MST.
3. Add edges one by one: For each edge in the sorted list, add it to the
MST if it doesn’t form a cycle with the edges already in the MST.
4. Stop when there are V - 1 edges in the MST (where V is the number of
vertices in the graph).
5. Result: The edges in the MST set will form the minimum spanning tree.

→ Applying Kruskal’s Algorithm to the Given Graph

Step 1: List and Sort the Edges

The edges in the graph are:

● Edge (0-1) with weight 1


● Edge (0-3) with weight 4
● Edge (0-2) with weight 5
● Edge (1-3) with weight 3
● Edge (1-4) with weight 7
● Edge (2-4) with weight 6

Sorted by weight, the edges are:

1. (0-1) = 1
2. (1-3) = 3
3. (0-3) = 4
4. (0-2) = 5
5. (2-4) = 6
6. (1-4) = 7

Step 2: Initialize MST Set

The MST set is initially empty.

Step 3: Add Edges One by One

● Add (0-1): No cycle is formed, so add it to the MST.


● Add (1-3): No cycle is formed, so add it to the MST.
● Add (0-3): Adding this edge would form a cycle (0-1-3-0), so skip it.
● Add (0-2): No cycle is formed, so add it to the MST.
● Add (2-4): No cycle is formed, so add it to the MST.

Now, we have V - 1 = 4 edges in the MST, so we can stop.

Step 4: Result
The MST consists of the following edges:

● (0-1) with weight 1


● (1-3) with weight 3
● (0-2) with weight 5
● (2-4) with weight 6

Final Minimum Spanning Tree Cost

The total weight of the MST is:


1 + 3 + 5 + 6 = 15

So, the MST for this graph using Kruskal's algorithm includes edges (0-1),
(1-3), (0-2), and (2-4) with a total cost of 15.

c) Algorithm for Generating Minimum Cost Spanning Tree (Kruskal’s Algorithm)

1. Sort all edges in increasing order of weight.


2. Initialize an empty MST and a union-find data structure to detect
cycles.
3. For each edge in sorted order:
● If it doesn’t form a cycle, add it to the MST.
4. Repeat until the MST has n - 1 edges.

→ Time Complexity: O(E log E), where E is the number of edges (due to
sorting and union-find operations).
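The steps above can be sketched in Python with a union-find structure for cycle detection, tested on the example graph from part (b):

```python
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # ascending by weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: adding u-v forms no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
        if len(mst) == n - 1:          # MST complete
            break
    return mst, total
```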

d) Depth-First Search (DFS) with Example

DFS is a graph traversal technique that explores as far as possible
along each branch before backtracking.

1. Initialize: Mark the start node as visited and push it onto a stack.
2. Traverse: Pop the top node, visit unvisited adjacent nodes, and push
them onto the stack.
3. Backtrack: If no adjacent unvisited nodes remain, pop from the stack
until finding a node with unvisited neighbors.

Example: For a graph with nodes 1-2-3-4 connected in a line, DFS from
node 1 will visit nodes in the order 1 -> 2 -> 3 -> 4.

Time Complexity: O(V + E), where V is vertices, E is edges.


e) Breadth-First Search (BFS) with Example

BFS is a graph traversal technique that explores all neighbors of a node
before moving to the next level.

1. Initialize: Mark the starting node as visited and enqueue it.


2. Traverse: Dequeue the front node, visit all unvisited adjacent nodes,
and enqueue them.
3. Repeat until the queue is empty.

Example: For a graph with nodes 1 connected to 2, 2 connected to 3, and
3 connected to 4, BFS from node 1 will visit nodes in the order 1 -> 2 -> 3
-> 4.

→ Time Complexity: O(V+E).

f) Spanning Trees with Example

A spanning tree of a graph is a subset of edges that connects all
vertices without any cycles and includes exactly V - 1 edges for V
vertices.

Example: For a triangle graph with vertices A, B, and C, and edges AB,
BC, and CA:
● A spanning tree could include edges AB and BC, connecting all
vertices without forming a cycle.

g) Remove All Occurrences of Substring Using KMP Algorithm

→ To remove all occurrences of t in s using KMP:

1. Use the KMP algorithm to search for occurrences of t in s.


2. For each match, remove t from s by adjusting the pointers or using
string slicing.
3. Continue searching until all occurrences are removed.

Example:

● Input: s = "abcdefgabcabcabdefghabc", t = "abc"


● Output: "defgabdefgh"

→ Complexity: O(n + m) overall — O(m) to build the prefix table for t and
O(n) to scan s, where n is the length of s and m is the length of t.
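A Python sketch of this approach: one KMP pass removes every non-overlapping occurrence, and the outer loop repeats in case removals create new occurrences.

```python
def prefix_table(t):
    # pi[i] = length of the longest proper prefix of t[:i+1] that is also its suffix
    pi = [0] * len(t)
    k = 0
    for i in range(1, len(t)):
        while k > 0 and t[i] != t[k]:
            k = pi[k - 1]
        if t[i] == t[k]:
            k += 1
        pi[i] = k
    return pi

def remove_all(s, t):
    pi = prefix_table(t)
    while True:
        out, k, removed = [], 0, False
        for c in s:                    # standard KMP scan over s
            while k > 0 and c != t[k]:
                k = pi[k - 1]
            if c == t[k]:
                k += 1
            out.append(c)
            if k == len(t):            # full match: drop t from the output
                del out[-len(t):]
                k, removed = 0, True
        s = ''.join(out)
        if not removed:                # no match in this pass: done
            return s
```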

h) Algorithm for BFS and Its Time Complexity


Algorithm

→ Time Complexity: O(V + E), where V is vertices and E is edges, as it
visits each node and edge once.
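A minimal queue-based BFS sketch in Python (the graph is given as an adjacency dict, an assumed representation for illustration):

```python
from collections import deque

def bfs(adj, start):
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()            # dequeue the front node
        order.append(u)
        for v in adj.get(u, []):
            if v not in visited:       # enqueue each unvisited neighbor
                visited.add(v)
                queue.append(v)
    return order
```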

i) Algorithm for DFS and Its Time Complexity

Algorithm

→ Time Complexity: O(V + E), where V is vertices and E is edges. DFS
explores each node and edge once.
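A minimal stack-based DFS sketch in Python (adjacency-dict representation assumed, as in the BFS sketch):

```python
def dfs(adj, start):
    visited, order = set(), []
    stack = [start]
    while stack:
        u = stack.pop()                # take the most recently pushed node
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        # Push neighbors in reverse so the first-listed neighbor is explored first.
        for v in reversed(adj.get(u, [])):
            if v not in visited:
                stack.append(v)
    return order
```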

j) Explain Prim’s minimum cost spanning tree algorithm, and construct
the minimum spanning tree (MST) for the given graph using Prim’s
Algorithm.

Prim’s algorithm is a greedy approach used to find the Minimum Spanning Tree
(MST) of a weighted, connected, and undirected graph. The MST of a graph is a
subset of edges that connects all vertices without cycles and with the minimum
possible total edge weight.

→ Steps of Prim's Algorithm

1. Start from any vertex (let’s assume vertex 1 for this example).
2. Add the minimum weight edge that connects a vertex in the MST to a vertex
outside the MST.
3. Repeat step 2 until all vertices are included in the MST.
4. The MST is complete when there are V - 1 edges in the MST (where V is the
number of vertices).

→ Applying Prim's Algorithm to the Given Graph

Step 1: Start from Vertex 1

1. Choose vertex 1 as the starting vertex.


2. Look at all edges from vertex 1:
● (1-6) = 10
● (1-2) = 28
3. Choose edge (1-6) with the minimum weight of 10.

MST edges: (1-6)

Step 2: Expand the MST


Now the MST includes vertices {1, 6}.

1. Look at all edges connecting vertices in the MST to those outside:


● (6-5) = 25
● (1-2) = 28
2. Choose edge (6-5) with the minimum weight of 25.

MST edges: (1-6), (6-5)

Step 3: Continue Expanding


Now the MST includes vertices {1, 6, 5}.

1. Look at all edges connecting vertices in the MST to those outside:


● (5-7) = 24
● (1-2) = 28
2. Choose edge (5-7) with the minimum weight of 24.

MST edges: (1-6), (6-5), (5-7)

Step 4: Continue Expanding


Now the MST includes vertices {1, 6, 5, 7}.

1. Look at all edges connecting vertices in the MST to those outside:


● (7-2) = 14
● (7-4) = 18
2. Choose edge (7-2) with the minimum weight of 14.

MST edges: (1-6), (6-5), (5-7), (7-2)


Step 5: Continue Expanding
Now the MST includes vertices {1, 6, 5, 7, 2}.

1. Look at all edges connecting vertices in the MST to those outside:


● (2-3) = 16
● (7-4) = 18
2. Choose edge (2-3) with the minimum weight of 16.

MST edges: (1-6), (6-5), (5-7), (7-2), (2-3)

Step 6: Add Final Vertex


Now the MST includes vertices {1, 6, 5, 7, 2, 3}.

1. The only remaining vertex is 4.


2. Look at all edges connecting the MST to vertex 4:
● (7-4) = 18
● (3-4) = 12
3. Choose edge (3-4) with the minimum weight of 12.

MST edges: (1-6), (6-5), (5-7), (7-2), (2-3), (3-4)

Final MST and Total Cost

The edges in the MST are:


● (1-6) with weight 10
● (6-5) with weight 25
● (5-7) with weight 24
● (7-2) with weight 14
● (2-3) with weight 16
● (3-4) with weight 12

The total cost of the MST is:


10 + 25 + 24 + 14 + 16 + 12 = 101

→ Conclusion

The MST for the given graph, using Prim’s algorithm starting from vertex 1,
includes the edges: (1-6), (6-5), (5-7), (7-2), (2-3), and (3-4), with a total minimum
cost of 101.
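The same procedure can be sketched in Python with a min-heap of frontier edges. Since the traced graph's full edge list isn't reproduced here, the test uses the small graph from part (b) of this unit, where Prim's algorithm yields the same MST of cost 15 as Kruskal's:

```python
import heapq

def prim(adj, start):
    # adj: {vertex: [(neighbor, weight), ...]} for an undirected graph
    visited = {start}
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)      # cheapest edge leaving the current MST
        if v in visited:
            continue                       # both endpoints inside: would form a cycle
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in adj[v]:               # new frontier edges from the added vertex
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst, total
```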

°•° most welcome
