SEM5 - ADA - RMSE - Questions Solution1
SEM: 5
Subject Name: Analysis and Design Of Algorithm (ADA)
Subject Code: 3150703
1. What is an algorithm? Explain with an example. What are the characteristics of an algorithm?
An algorithm is any well-defined computational procedure that takes some value, or set of values, as
input and produces some value, or set of values, as output. An algorithm is thus a sequence of
computational steps that transform the input into the output.
We can also view an algorithm as a tool for solving a well-specified computational problem. The
statement of the problem specifies in general terms the desired input/output relationship. The
algorithm describes a specific computational procedure for achieving that input/output relationship.
For example, we might need to sort a sequence of numbers into non-decreasing order. This problem
arises frequently in practice and provides fertile ground for introducing many standard design
techniques and analysis tools.
For example, given the input sequence (31, 41, 59, 26, 41, 58), a sorting algorithm returns as output
the sequence (26, 31, 41, 41, 58, 59).
Such an input sequence is called an instance of the sorting problem. In general, an instance of a
problem consists of the input (satisfying whatever constraints are imposed in the problem statement)
needed to compute a solution to the problem.
CHARACTERISTICS OF AN ALGORITHM:
Not all procedures can be called an algorithm. An algorithm should have the following characteristics
−
Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their
inputs/outputs should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, and should match the desired
output.
Finiteness − Algorithms must terminate after a finite number of steps.
Feasibility − Should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions, which should be independent of
any programming code.
2. Explain the following terms with example:
1. Set
2. Relation
3. Function
1. Set: A set is a collection of distinct or well-defined members or elements. In mathematics,
members of a set are written within curly braces or brackets {}. Members of a set can be anything
such as numbers, people, or alphabetical letters, etc. For example,
{a, b, c, …, x, y, z} is a set of alphabet letters
{…, −4, −2, 0, 2, 4, …} is a set of even numbers.
{2, 3, 5, 7, 11, 13, 17, …} is a set of prime numbers
2. Relation: The relation shows the relationship between INPUT and OUTPUT. A relation in
mathematics defines the relationship between two different sets of information. If two sets are
considered, the relation between them will be established if there is a connection between the
elements of two or more non-empty sets.
In the morning assembly at schools, students are supposed to stand in a queue in ascending order of their heights; this pairing of each student with a height is a relation between the set of students and the set of heights.
3. Function: A function is a relation in which each element of the input set (domain) is related to exactly one element of the output set (codomain). For example, f(x) = x + 2 assigns to every number x the single number x + 2.
6. Explain why the Heap sort method is called an efficient sorting algorithm.
The Heap sort algorithm is very efficient. While some sorting algorithms grow quadratically slower
as the number of items to sort increases, the time required to perform Heap sort grows only as n log n.
This suggests that Heap sort is particularly suitable for sorting a huge list of items. Furthermore, the
performance of Heap sort is asymptotically optimal: no comparison-based sorting algorithm can perform
better than O(n log n) in the worst case.
The Heap sort algorithm can be implemented as an in-place sorting algorithm. This means that its
memory usage is minimal because apart from what is necessary to hold the initial list of items to be
sorted, it needs no additional memory space to work. In contrast, the Merge sort algorithm requires more
memory space. Similarly, the Quick sort algorithm requires more stack space due to its recursive nature.
The Heap sort algorithm is simpler to understand than other equally efficient sorting algorithms. Because
it does not use advanced computer science concepts such as recursion, it is also easier for programmers to
implement correctly.
The Heap sort algorithm exhibits consistent performance. This means it performs equally well in the best,
average and worst cases. Because of its guaranteed performance, it is particularly suitable to use in
systems with critical response time.
1. Sort the following data using Heap sort method and Selection Sort Method.
20, 50, 30, 75, 90, 60, 80, 25, 10, 40.
A heap is a complete binary tree, whose entries satisfy the heap ordering property.
The heap ordering property states that the parent always precedes the children. There is no precedence
required between the children. The precedence must be an order relationship. That is, we must be able to
determine the precedence between any two objects that can be placed in the heap, and this precedence must
be transitive. Consequently the root node will precede all other nodes in the heap as long as the heap
ordering property is maintained.
A heap is a complete binary tree. This means that visually nodes are added to the binary tree from top-to-
bottom, and left-to-right. More formally this means that we can arrange the nodes in a contiguous array
(indexed from 0) using the following formulas for determining the parent-child relationships: the parent of node i is node floor((i-1)/2), and the children of node i are nodes 2i+1 and 2i+2.
For example, if we have a heap containing 6 nodes, the last node would be node 5. Its parent would be node 2, since floor((5-1)/2) = 2.
Key Property
Since the heap is a complete binary tree, we can count the nodes in each level.
Note that if the binary tree is complete and there is a node in level L, then each preceding level must contain
all the possible nodes for that level. Consequently, a complete tree whose levels run from 0 to L will have N nodes where
2^0 + 2^1 + . . . + 2^(L-1) + 1 = 2^L <= N <= 2^0 + 2^1 + . . . + 2^L = 2^(L+1) - 1
This means that the longest branch from the root to any leaf contains O(log(N)) nodes. In turn this means
that the complexity of any algorithm which only accesses nodes on a single branch will be O(log(N)).
Growing a heap
1. Adding a node with the new value at the next possible position that maintains the tree as a complete
binary tree,
2. Comparing the new node with its parent node and exchanging the nodes whenever the new node
precedes the parent, and
3. Repeating this last step until the new node reaches a position where its parent precedes it or when it
becomes the root.
At the end of the preceding steps the heap has grown with the addition of one node and the heap property has
been maintained. This process is sometimes described as the bubble up phase.
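The bubble-up steps above can be sketched as follows. This is a minimal sketch assuming a max-heap, i.e. "precedes" means "is greater than or equal to"; the function name heap_insert is illustrative:

```python
def heap_insert(heap, value):
    """Add value to a max-heap stored in a list and bubble it up."""
    heap.append(value)                      # step 1: next free position
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] > heap[parent]:          # step 2: new node precedes its parent
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:                               # step 3: parent precedes it, stop
            break

heap = []
for x in [20, 50, 30, 75, 90]:
    heap_insert(heap, x)
print(heap[0])   # 90, the root precedes all other nodes
```

Each insertion touches only one branch from a leaf to the root, so by the key property above it costs O(log N).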
Shrinking a heap
The root always contains a node that precedes all the other nodes in the heap. We want to remove that root
node and repair the collection so that it satisfies the definition and properties of a heap.
1. Remove the root node and replace it with the last node. The heap is now a complete binary tree with
N-1 nodes. Refer to the root node as the parent node.
2. Compare the parent node with its children. If necessary, exchange the parent with the child that will
maintain the heap ordering property.
3. Repeat this last step until the parent node reaches a position where it precedes all of its children or
where it becomes a leaf.
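The removal steps, repeated until the heap is empty, give Heap sort itself. A minimal sketch, again assuming a max-heap stored in a Python list (the names sift_down and extract_root are illustrative), applied to the data from the question:

```python
def sift_down(heap, i, n):
    """Move heap[i] down until it precedes both children (max-heap)."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:              # parent precedes children, or it is a leaf
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def extract_root(heap):
    """Remove and return the root, then repair the heap."""
    root = heap[0]
    heap[0] = heap[-1]                # step 1: replace root with the last node
    heap.pop()
    sift_down(heap, 0, len(heap))     # steps 2-3: sift the new root down
    return root

data = [20, 50, 30, 75, 90, 60, 80, 25, 10, 40]
heap = data[:]
for i in range(len(heap) // 2 - 1, -1, -1):   # build the max-heap bottom-up
    sift_down(heap, i, len(heap))
result = [extract_root(heap) for _ in range(len(data))]
print(result[::-1])   # [10, 20, 25, 30, 40, 50, 60, 75, 80, 90]
```

Each extraction walks a single branch, so N extractions cost O(N log N) in total, in place.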
7. Give best case, worst case and average case complexity with example for following
a. Insertion sort
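No worked answer for insertion sort appears at this point in the text. As a sketch: insertion sort takes O(n) time in the best case (already sorted input, n-1 comparisons) and O(n^2) time in the average and worst cases (reverse-sorted input forces n(n-1)/2 comparisons). A small instrumented version illustrates this:

```python
def insertion_sort(a):
    """Sort list a in place; return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift larger elements one place right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

print(insertion_sort([1, 2, 3, 4, 5]))   # best case (sorted): n-1 = 4 comparisons
print(insertion_sort([5, 4, 3, 2, 1]))   # worst case (reversed): n(n-1)/2 = 10
```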
8. Find an optimal Huffman code for the following set of frequencies: a: 50, b: 20, c: 15, d: 30.
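A worked answer for this question does not appear in the text. The following sketch (using Python's heapq; the helper name huffman_codes is illustrative) builds one optimal code: a = 0, d = 10, c = 110, b = 111. The exact bit assignment may differ by tie-breaking, but the code lengths 1, 2, 3, 3 and total cost 50*1 + 30*2 + 15*3 + 20*3 = 215 are optimal:

```python
import heapq

def huffman_codes(freq):
    """Build Huffman codes from a {symbol: frequency} dict."""
    # each heap entry: (frequency, unique tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two lowest-frequency trees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # left subtree gets 0
        merged.update({s: "1" + c for s, c in c2.items()})  # right gets 1
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes({"a": 50, "b": 20, "c": 15, "d": 30})
print(codes)
```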
Worst Case (Quick sort): The worst case occurs when the partition process always picks the greatest or
smallest element as the pivot. If we consider the partition strategy where the last element is always
picked as the pivot, the worst case occurs when the array is already sorted in increasing or decreasing
order. The following is the recurrence for the worst case.
T(n) = T(0) + T(n-1) + Θ(n)
which is equivalent to
T(n) = T(n-1) + n
     <= c(n-1)^2 + n          (inductive hypothesis; assume c >= 1 wlog)
     = c n^2 - 2cn + c + n
     = c n^2 - (2c - 1)n + c
     <= c n^2                 (for n >= 1)
so T(n) = O(n^2) in the worst case.
Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the
whole array. If the value of the search key is less than the item in the middle of the interval, narrow the
interval to the lower half. Otherwise, narrow it to the upper half. Repeatedly check until the value is found
or the interval is empty.
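The halving procedure described above can be sketched as:

```python
def binary_search(a, key):
    """Return an index of key in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:                 # the interval [lo, hi] is non-empty
        mid = (lo + hi) // 2
        if key < a[mid]:
            hi = mid - 1            # narrow to the lower half
        elif key > a[mid]:
            lo = mid + 1            # narrow to the upper half
        else:
            return mid              # value found
    return -1                       # interval became empty

print(binary_search([10, 20, 25, 30, 40, 50], 40))   # 4
```

Each iteration halves the interval, so the search takes O(log n) comparisons.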
Divide and Conquer:
1. It involves three steps at each level of recursion:
   - Divide the problem into a number of subproblems.
   - Conquer the subproblems by solving them recursively.
   - Combine the solutions to the subproblems into the solution for the original problem.
3. It does more work on subproblems and hence has more time consumption.
6. For example: Merge Sort, Binary Search, etc.

Dynamic Programming:
1. It involves a sequence of four steps:
   - Characterize the structure of optimal solutions.
   - Recursively define the values of optimal solutions.
   - Compute the value of optimal solutions in a bottom-up manner.
   - Construct an optimal solution from computed information.
3. It solves each subproblem only once and then stores the result in a table.
6. For example: Matrix Chain Multiplication.
Optimal Solution - An optimal solution is a feasible solution where the objective function reaches its
maximum (or minimum) value – for example, the most profit or the least cost.
Principle of Optimality - A problem is said to satisfy the Principle of Optimality if the subsolutions
of an optimal solution of the problem are themselves optimal solutions for their subproblems.
13. Solve the following Knapsack Problem using Dynamic Method. Write the equation for solving the above problem.
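The specific problem data is not reproduced here. The dynamic-programming equation for the 0/1 knapsack is V[i][w] = max(V[i-1][w], V[i-1][w - w_i] + v_i), taking the better of leaving item i out or taking it (when it fits). A sketch on a hypothetical instance:

```python
def knapsack(weights, values, W):
    """0/1 knapsack DP: V[i][w] = max(V[i-1][w], V[i-1][w - w_i] + v_i)."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]                   # item i left out
            if weights[i - 1] <= w:                 # item i taken, if it fits
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + values[i - 1])
    return V[n][W]

# hypothetical instance, since the original problem data is not reproduced here
print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # 7 (take items of weight 2 and 3)
```

The table has (n+1) x (W+1) entries, each filled in O(1), so the running time is O(nW).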
The Floyd–Warshall algorithm (also known as Floyd's algorithm) is an algorithm for finding shortest
paths in a weighted graph with positive or negative edge weights (but with no negative cycles).
It computes the distance matrix of a weighted graph with n vertices through a series of n x n matrices.
In each matrix D^(k), the shortest distance d_ij between vertices v_i and v_j is computed using only the
first k vertices as intermediate vertices; the series starts with D^(0), the weight matrix of the graph,
and ends with D^(n), the distance matrix.
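A minimal sketch of the triple loop over intermediate vertices, on a small hypothetical weight matrix:

```python
INF = float("inf")

def floyd_warshall(D0):
    """D0 is the n x n weight matrix; returns the distance matrix D^(n)."""
    n = len(D0)
    D = [row[:] for row in D0]
    for k in range(n):              # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:   # relax path i -> k -> j
                    D[i][j] = D[i][k] + D[k][j]
    return D

# hypothetical 4-vertex weight matrix (INF = no direct edge)
W = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
D = floyd_warshall(W)
print(D[0][3])   # 6, via the path 0 -> 1 -> 2 -> 3
```

The three nested loops give the O(n^3) running time of the algorithm.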
Graph vertices represent cities and weighted edges represent distances between two cities.
People are often interested in moving from one city to another as quickly as possible.
Algorithms: Dijkstra's algorithm, Floyd-Warshall algorithm.
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that
offers the most obvious and immediate benefit. So the problems where choosing locally optimal also leads to
global solution are best fit for Greedy.
A greedy method follows the problem-solving heuristic of making the locally optimal choice at each stage.
Dynamic programming is an algorithmic technique which is usually based on a recurrent formula that uses
some previously calculated states.
19. Explain Kruskal's algorithm and Prim's method.
a. Generate minimum spanning tree of fig, A using Kruskal’s algorithm.
b. Generate minimum spanning tree of fig, A using Prim’s algorithm
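Since fig. A is not reproduced in this document, the following is a sketch of Kruskal's algorithm on a hypothetical graph; the edge list and the union-find helper are illustrative:

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v); returns MST edges using union-find."""
    parent = list(range(n))
    def find(x):                      # set representative, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # endpoints in different trees: no cycle
            parent[ru] = rv           # union the two trees
            mst.append((w, u, v))
    return mst

# hypothetical graph, since fig. A is not reproduced here
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst = kruskal(4, edges)
print(sum(w for w, _, _ in mst))   # 6, the total MST weight
```

Sorting dominates, so the running time is O(E log E); Prim's method instead grows one tree from a start vertex, always adding the cheapest edge leaving it.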
Here the column shows the profit as per the index of the element given in the array.
21. Following are the details of various jobs to be scheduled on multiple processors such that no two
processes execute at the same time on the same processor.
Show schedule of these jobs on minimum number of processors using greedy approach.
Derive an algorithm for the same. What is the time complexity of this algorithm?
Acyclic Directed Graph - A directed acyclic graph is a directed graph with no directed cycles. That is, it
consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such
that following those directions will never form a closed loop.
Articulation Point - A vertex in an undirected connected graph is an articulation point (or cut vertex) if
removing it (and edges through it) disconnects the graph. Articulation points represent vulnerabilities in a
connected network – single points whose failure would split the network into 2 or more components. They
are useful for designing reliable networks.
Dense Graph: A dense graph is a graph in which the number of edges is close to the maximal possible number of edges, i.e., E is close to V^2.
Breadth First Search Traversal - Breadth-first search is an algorithm for searching a tree data structure for
a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior
to moving on to the nodes at the next depth level.
Depth First Search Traversal. - Depth First Search (DFS) algorithm traverses a graph in a depthward
motion and uses a stack to remember to get the next vertex to start a search, when a dead end occurs in any
iteration.
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a stack.
Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all the
vertices from the stack, which do not have adjacent vertices.)
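The two rules can be sketched with an explicit stack; the adjacency dict g is a hypothetical example graph:

```python
def dfs(graph, start):
    """Iterative DFS using an explicit stack; graph is an adjacency dict."""
    visited, order, stack = {start}, [start], [start]
    while stack:
        v = stack[-1]
        # Rule 1: visit an adjacent unvisited vertex, mark it, push it
        unvisited = [u for u in graph[v] if u not in visited]
        if unvisited:
            u = unvisited[0]
            visited.add(u)
            order.append(u)
            stack.append(u)
        else:
            stack.pop()   # Rule 2: dead end, pop a vertex from the stack
    return order

g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(dfs(g, "A"))   # ['A', 'B', 'D', 'C']
```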
23.a. Explain Breadth First Traversal Method for Graph with algorithm.
Breadth-first search is an algorithm for searching a tree data structure for a node that satisfies a given
property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the
nodes at the next depth level.
BFS is a traversing algorithm where we start traversing from a selected source node layerwise by
exploring the neighboring nodes.
The data structure used in BFS is a queue and a graph. The algorithm makes sure that every node is
visited not more than once.
BFS follows the following 4 steps:
1. Begin the search algorithm, by knowing the key which is to be searched. Once the
key/element to be searched is decided the searching begins with the root (source) first.
2. Visit the contiguous unvisited vertex. Mark it as visited. Display it (if needed). If this is the
required key, stop. Else, add it in a queue.
3. If no adjacent unvisited vertex is found, remove the first vertex from the queue.
4. Repeat step 2 and 3 until the queue is empty.
The above algorithm is a search algorithm that identifies whether a node exists in the graph. We can
convert the algorithm to traversal algorithm to find all the reachable nodes from a given node.
For a directed graph, the sum of the sizes of the adjacency lists of all the nodes is E. So, the time
complexity in this case is O(V) + O(E) = O(V + E).
Complexity Analysis:
Time complexity: O(V + E), where V is the number of vertices and E is the number of edges in
the graph.
Space Complexity :O(V).
Since an extra visited array is needed of size V.
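The layer-wise, queue-based traversal described above can be sketched as:

```python
from collections import deque

def bfs(graph, source):
    """Layer-wise traversal from source; returns the visit order."""
    visited = {source}
    order = []
    queue = deque([source])
    while queue:
        v = queue.popleft()        # remove the first vertex from the queue
        order.append(v)
        for u in graph[v]:         # explore all neighbours of v
            if u not in visited:   # each node is visited at most once
                visited.add(u)
                queue.append(u)
    return order

g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(bfs(g, "A"))   # ['A', 'B', 'C', 'D']
```

Every vertex enters the queue once and every adjacency list is scanned once, giving the O(V + E) time and O(V) space stated above.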
24. Write an algorithm to find out the articulation points of an undirected graph.
Find out articulation points for the following graph. Consider vertex A as the starting point.
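The graph from the question is not reproduced here. As a sketch, articulation points can be found with a single DFS that tracks discovery times and "low" values (the standard DFS-based approach), shown on a hypothetical graph:

```python
def articulation_points(graph):
    """Find cut vertices of an undirected graph via DFS discovery/low values."""
    disc, low, points, timer = {}, {}, set(), [0]

    def dfs(v, parent):
        disc[v] = low[v] = timer[0]
        timer[0] += 1
        children = 0
        for u in graph[v]:
            if u not in disc:
                children += 1
                dfs(u, v)
                low[v] = min(low[v], low[u])
                # a non-root v is a cut vertex if no back edge from u's
                # subtree climbs above v
                if parent is not None and low[u] >= disc[v]:
                    points.add(v)
            elif u != parent:
                low[v] = min(low[v], disc[u])   # back edge to an ancestor
        # the root is a cut vertex iff it has two or more DFS children
        if parent is None and children > 1:
            points.add(v)

    for v in graph:
        if v not in disc:
            dfs(v, None)
    return points

# hypothetical graph, since the figure from the question is not reproduced
g = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"],
     "D": ["B", "C", "E"], "E": ["D"]}
print(articulation_points(g))   # {'B', 'D'}
```

One DFS visits every vertex and edge once, so the algorithm runs in O(V + E) time.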
Travelling Salesman Problem (TSP): Given a set of cities and distance between every pair of cities, the
problem is to find the shortest possible route that visits every city exactly once and returns to the starting
point.
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is to find whether
there exists a tour that visits every city exactly once. Here we know that a Hamiltonian tour exists (because
the graph is complete) and in fact many such tours exist, the problem is to find a minimum weight
Hamiltonian Cycle.
For example, consider the graph shown in the figure on the right side. A TSP tour in the graph is 1-2-4-3-1. The
cost of the tour is 10+25+30+15 which is 80.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of cities.
3) Calculate cost of every permutation and keep track of minimum cost permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)
Dynamic Programming:
Let the given set of vertices be {1, 2, 3, 4,….n}. Let us consider 1 as starting and ending point of output.
For every other vertex i (other than 1), we find the minimum cost path with 1 as the starting point, i as the
ending point and all vertices appearing exactly once. Let the cost of this path be cost(i), the cost of
corresponding Cycle would be cost(i) + dist(i, 1) where dist(i, 1) is the distance from i to 1. Finally, we
return the minimum of all [cost(i) + dist(i, 1)] values. This looks simple so far. Now the question is how to
get cost(i)?
To calculate cost(i) using Dynamic Programming, we need to have some recursive relation in terms of
sub-problems. Let us define a term C(S, i) be the cost of the minimum cost path visiting each vertex in set
S exactly once, starting at 1 and ending at i.
We start with all subsets of size 2 and calculate C(S, i) for all subsets where S is the subset, then we
calculate C(S, i) for all subsets S of size 3 and so on. Note that 1 must be present in every subset.
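The subset recurrence above (Held-Karp dynamic programming) can be sketched as follows, using the 4-city example from the text; vertices are renumbered 0-3, so city 1 is vertex 0:

```python
from itertools import combinations

def tsp(dist):
    """Held-Karp DP: C(S, i) = min cost of a path 1 -> i visiting set S."""
    n = len(dist)
    # C[(S, i)]: S is a frozenset of visited vertices (always containing 0)
    C = {(frozenset([0, i]), i): dist[0][i] for i in range(1, n)}
    for size in range(3, n + 1):                   # grow subsets by one vertex
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}            # 0 is in every subset
            for i in subset:
                C[(S, i)] = min(C[(S - {i}, j)] + dist[j][i]
                                for j in subset if j != i)
    full = frozenset(range(n))
    # close each path back to the start: min over cost(i) + dist(i, 1)
    return min(C[(full, i)] + dist[i][0] for i in range(1, n))

# the example from the text: tour 1-2-4-3-1 costs 10+25+30+15 = 80
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp(dist))   # 80
```

This brings the cost down from Θ(n!) for the naive solution to O(n^2 2^n) time (and O(n 2^n) space).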
if j == M:   # all M characters of the pattern matched at shift i
    print("Pattern found at index", i)
The number of comparisons in the worst case is O(m*(n-m+1))
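The fragment above appears to come from the naive string-matching algorithm; a complete sketch, with the hypothetical helper name naive_search:

```python
def naive_search(text, pattern):
    """Slide the pattern over the text one shift at a time; return match indices."""
    N, M = len(text), len(pattern)
    matches = []
    for i in range(N - M + 1):           # N - M + 1 candidate shifts
        j = 0
        while j < M and text[i + j] == pattern[j]:
            j += 1
        if j == M:                       # all M characters matched at shift i
            matches.append(i)
    return matches

print(naive_search("AABAACAADAABAABA", "AABA"))   # [0, 9, 12]
```

Each of the N - M + 1 shifts can cost up to M comparisons, giving the O(M*(N-M+1)) worst case stated above.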
29. What is finite automata? Explain with example how a finite automaton is used for string matching?
A finite automaton is a collection of:
1. Finite set of states Q.
2. Start state q0 Є Q.
3. Final state qf Є Q.
4. Finite set of input ∑.
5. A mapping function or transition function δ from Q x ∑ to Q.
Finite automaton is used for string matching:
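A worked answer for this part does not appear in the text. As a sketch: the matcher precomputes a transition table delta[state][character], where the state counts how many pattern characters are currently matched, and then scans the text with exactly one state transition per character; the helper names below are illustrative:

```python
def build_transition(pattern, alphabet):
    """delta[q][ch] = length of the longest prefix of pattern that is a
    suffix of the matched-so-far text pattern[:q] + ch."""
    M = len(pattern)
    delta = [dict() for _ in range(M + 1)]
    for q in range(M + 1):
        for ch in alphabet:
            k = min(M, q + 1)
            while k > 0 and not (pattern[:q] + ch).endswith(pattern[:k]):
                k -= 1
            delta[q][ch] = k
    return delta

def fa_search(text, pattern, alphabet):
    """Run the automaton over the text; state M means the pattern matched."""
    delta = build_transition(pattern, alphabet)
    M, q, matches = len(pattern), 0, []
    for i, ch in enumerate(text):
        q = delta[q][ch]              # one transition per text character
        if q == M:                    # accepting (final) state reached
            matches.append(i - M + 1)
    return matches

print(fa_search("AABAACAADAABAABA", "AABA", "ABCD"))   # [0, 9, 12]
```

After the table is built, the scan itself is O(n): one table lookup per text character, with no backtracking.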
Reduction - Let L1 and L2 be two decision problems. Suppose algorithm A2 solves L2. That is, if y is an input for
L2 then algorithm A2 will answer Yes or No depending upon whether y belongs to L2 or not.
The idea is to find a transformation from L1 to L2 so that algorithm A2 can be part of an algorithm A1 to
solve L1.
Learning reduction, in general, is very important. For example, if we have library functions to solve
certain problems and if we can reduce a new problem to one of the solved problems, we save a lot of time.
Consider the example of a problem where we have to find the minimum product path in a given directed
graph where the product of path is the multiplication of weights of edges along the path. If we have code
for Dijkstra’s algorithm to find the shortest path, we can take the log of all weights and use Dijkstra’s
algorithm to find the minimum product path rather than writing a fresh code for this new problem.
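The reduction described above can be sketched as follows, assuming all edge weights are at least 1 so that the logarithms are non-negative and Dijkstra's algorithm applies:

```python
import heapq, math

def min_product_path(graph, src, dst):
    """Reduce min-product to min-sum: take logs of weights, run Dijkstra.
    Assumes all edge weights are >= 1, so all logs are non-negative."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            return math.exp(d)            # undo the log transform
        if d > dist.get(v, math.inf):
            continue                      # stale queue entry
        for u, w in graph[v]:
            nd = d + math.log(w)          # log turns products into sums
            if nd < dist.get(u, math.inf):
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return math.inf

# hypothetical graph: s-a-t has product 2*3 = 6, s-b-t has product 10*1 = 10
g = {"s": [("a", 2), ("b", 10)], "a": [("t", 3)], "b": [("t", 1)], "t": []}
print(min_product_path(g, "s", "t"))   # ~6.0, the path s-a-t
```

Because log(x*y) = log(x) + log(y) and log is increasing, the path minimizing the sum of logs is exactly the path minimizing the product of weights.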