
L. J. Institutes of Engineering and Technology

Remedial MSE List of Questions

SEM: 5
Subject Name: Analysis and Design Of Algorithm (ADA)
Subject Code: 3150703

1. What is an algorithm? Explain with an example. What are the characteristics of an algorithm?
 An algorithm is any well-defined computational procedure that takes some value, or set of values, as
input and produces some value, or set of values, as output. An algorithm is thus a sequence of
computational steps that transform the input into the output.
 We can also view an algorithm as a tool for solving a well-specified computational problem. The
statement of the problem specifies in general terms the desired input/output relationship. The
algorithm describes a specific computational procedure for achieving that input/output relationship.
 For example, we might need to sort a sequence of numbers into non-decreasing order. This problem
arises frequently in practice and provides fertile ground for introducing many standard design
techniques and analysis tools.
 For example, given the input sequence (31, 41, 59, 26, 41, 58), a sorting algorithm returns as output
the sequence (26, 31, 41, 41, 58, 59).
 Such an input sequence is called an instance of the sorting problem. In general, an instance of a
problem consists of the input (satisfying whatever constraints are imposed in the problem statement)
needed to compute a solution to the problem.
CHARACTERISTICS OF AN ALGORITHM:
 Not all procedures can be called an algorithm. An algorithm should have the following characteristics

 Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their
inputs/outputs should be clear and must lead to only one meaning.
 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs, and should match the desired
output.
 Finiteness − Algorithms must terminate after a finite number of steps.
 Feasibility − Should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions, which should be independent of
any programming code.
2. Explain the following terms with examples:
1. Set
2. Relation
3. Function
1. Set: A set is a collection of distinct, well-defined members or elements. In mathematics,
members of a set are written within curly braces {}. Members of a set can be anything,
such as numbers, people, or letters of the alphabet. For example,
 {a, b, c, …, x, y, z} is a set of alphabet letters
 {…, −4, −2, 0, 2, 4, …} is a set of even numbers.
 {2, 3, 5, 7, 11, 13, 17, …} is a set of prime numbers
2. Relation: The relation shows the relationship between INPUT and OUTPUT. A relation in
mathematics defines the relationship between two different sets of information. If two sets are
considered, the relation between them will be established if there is a connection between the
elements of two or more non-empty sets.
 In the morning assembly at schools, students are supposed to stand in a queue in ascending
order of the heights of all the students. This defines an ordered relation between the students
and their heights.

Analysis and Design Of Algorithm 2021 Page 1
 Therefore, we can say, ‘A set of ordered pairs is defined as a relation.’
3. Function: A function is a relation which describes that there should be only one output for each
input (or) we can say that a special kind of relation (a set of ordered pairs), which follows a rule i.e.,
every X-value should be associated with only one y-value is called a function.
 The notation f : X →Y means that f is a function from X to Y. X is called the domain of f and
Y is called the co-domain of f.
 Given an element x ∈ X, there is a unique element y in Y that is related to x. The unique
element y to which f relates x is denoted by f (x) and is called f of x, or the value of f at x, or
the image of x under f. The set of all values of f(x) taken together is called the range of f or
image of X under f. Symbolically. range of f = { y ∈ Y | y = f (x), for some x in X}
3. Explain why analysis of algorithms is important. Explain worst-case, best-case and average-case
complexity.
Analysis Of Algorithm is important because:
 To predict the behavior of an algorithm without implementing it on a specific computer.
 It is much more convenient to have simple measures for the efficiency of an algorithm than to
implement the algorithm and test the efficiency every time a certain parameter in the underlying
computer system changes.
 It is impossible to predict the exact behavior of an algorithm. There are too many influencing factors.
 The analysis is thus only an approximation; it is not perfect.
 More importantly, by analyzing different algorithms, we can compare them to determine the best one
for our purpose.
 Running time depends not only on an input size
 But also on the specifics of a particular input
 Consider Example: Sequential search
Worst Case Analysis:
 In the worst-case analysis, we calculate the upper bound on the execution time of an algorithm. We
must know the case that causes the execution of the maximum number of operations.
For linear search, the worst case occurs when the element to search for is not present in the array.
When x is not present, the search() function compares it with all the elements of arr[] one by one.
Therefore, the worst-case time complexity of linear search is Θ(n).
Average Case Analysis:
 The average case efficiency provides an algorithm’s behavior on a “typical” or “random” input.
 Assumptions
 The probability of a successful search is p (0 ≤ p ≤ 1)
 The probability of the first match occurring in the ith position of the list is the same for every i.
 In the case of a successful search
 The probability of match occurring in the ith position of the list is p/n for every i
 In the case of an unsuccessful search
 The probability of unsuccessful search is (1− p).
 In the average case analysis, we take all possible inputs and calculate the computation time for all
inputs. Add up all the calculated values and divide the sum by the total number of entries.
 We need to predict the distribution of cases. For the linear search problem, assume that all cases are
uniformly distributed. So we add all the cases and divide the sum by (n + 1).



Best Case Analysis:
 In the best case analysis, we calculate the lower bound of the execution time of an algorithm. It is
necessary to know the case which causes the execution of the minimum number of operations. In the
linear search problem, the best case occurs when x is present at the first location.
 The number of operations in the best case is constant. The best-case time complexity would therefore
be Θ (1)
 Most of the time, we perform worst-case analysis to analyze algorithms. In the worst analysis, we
guarantee an upper bound on the execution time of an algorithm which is good information.
 Best Case: Cbest(n) = 1 =O (1)
When key element is found at first location of the list
 Worst Case: Cworst(n) = n = O(n)
When key element is not found in the list
 Average Case: Cavg(n) = p(n+1)/2 + n(1-p) = O(n)
When key element is found at random location
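The three cases above can be checked with a small sequential-search sketch (the helper name and the sample list are illustrative, not part of the original question):

```python
def sequential_search(arr, key):
    """Return (index, comparisons); index is -1 if key is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == key:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
# Best case: key at the first position -> 1 comparison, C_best(n) = 1.
assert sequential_search(arr, 7) == (0, 1)
# Worst case: key absent -> n comparisons, C_worst(n) = n.
assert sequential_search(arr, 4) == (-1, 5)
```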
4. Find the time complexity of the following pseudo-code using O-notation.
for(i = 0; i < n; i++)
{
for(j = n ; j > 0 ; j--)
{
if( i < j )
c = c + 1;
}
}
 Execution time for the above function:
 Here we are nesting two loops.
 If our array has n items, our outer loop runs n times and our inner loop runs n times for each iteration of the outer loop, giving us n^2 total iterations.
 Thus this function runs in O(n^2) time (or "quadratic time").
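The count can be confirmed by mirroring the loops in Python (a sketch; the counter name is illustrative):

```python
def count_iterations(n):
    """Mirror the nested loops from question 4 and count inner-body executions."""
    iterations = 0
    c = 0
    for i in range(n):             # outer loop: n passes
        for j in range(n, 0, -1):  # inner loop: n passes for each i
            iterations += 1
            if i < j:
                c = c + 1
    return iterations

assert count_iterations(10) == 100   # n * n = n^2 iterations
```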
5. What is a recurrence? Solve the recurrence equation
1. T(n) = T(n-1) + n using the forward substitution and backward substitution methods.
Solution:
a) Forward Substitution Method:
Consider T(0) =0,
In given formula
 Let n=1,
T(1) = T(1-1) +1
=T(0)+1
=0+1 =1
 Let n=2,
T(2) = T(2-1) +2
=T(1)+2
=1+2
 Let n=3,
T(3) = T(3-1) +3
=T(2)+3
=1+2+3
T(1) =1
T(2) =1+2
T(3) =1+2+3
i.e. T(n) = 1+2+3+…+n

T(n) = n(n+1)/2
T(n) = (n^2/2) + (n/2)
T(n) = O(n^2)
b) Backward Substitution Method:
T(n) = T(n-1) + n, consider T(0) = 0.
In given formula
 Let n=n-1,
T(n-1) = T(n-1-1) +n-1
T(n-1)=T(n-2)+(n-1)
Put this result in given formula,
T(n) = T(n-2)+(n-1)+ n ---(i)
 Let n=n-2 in given formula T(n) = T(n-1)+n,
T(n-2) = T(n-2-1) +n-2
T(n-2) =T(n-3)+(n-2)
Put this result in formula (i),
T(n) = T(n-3) + (n-2) + (n-1) + n ---(ii)
 Let n=n-3 in given formula T(n) = T(n-1)+n,
T(n-3) = T(n-3-1) +n-3
T(n-3)=T(n-4)+(n-3)
Put this result in formula (ii),
T(n) = T(n-4)+(n-3) + (n-2) + (n-1) + n ---(iii)
 T(n) = T(n-2)+(n-1)+ n --(i)



 T(n) = T(n-3) + (n-2) + (n-1) + n --(ii)
 T(n) = T(n-4)+(n-3) + (n-2) + (n-1) + n --(iii)
 T(n) = T(n-k) + (n-k+1) +(n-k+2) + … +n
 Let k=n , therefore n-k=0
T(n) = T(0) + 1+2+..+n
Since T(0) =0
T(n) = 1+2+3+..+n
• T(n) = n(n+1)/2
• T(n) = (n^2/2) + (n/2)
• T(n) = O(n^2)
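The closed form can be checked against the recurrence directly (a small sketch):

```python
def T(n):
    """The recurrence T(n) = T(n-1) + n with T(0) = 0."""
    return 0 if n == 0 else T(n - 1) + n

# Closed form derived above: T(n) = n(n+1)/2, which is O(n^2).
for n in range(20):
    assert T(n) == n * (n + 1) // 2
```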

2. Using the master method: T(n) = T(2n/3) + 1

Thus, for this example, a = 1, b = 3/2, and d = 0,
since f(n) = 1 = n^0.
As per the master method, a = b^d here, i.e. 1 = (3/2)^0.

Therefore, according to rule 2,
T(n) ∈ Θ(n^d log n) = Θ(n^0 log n) = Θ(log n)

6. Explain why the Heap sort method is called an efficient sorting algorithm.

The Heap sort algorithm is very efficient. While simpler sorting algorithms may slow down quadratically
as the number of items to sort increases, the time required to perform Heap sort grows only as n log n.
This suggests that Heap sort is particularly suitable for sorting a huge list of items. Furthermore, the
performance of Heap sort is asymptotically optimal: no comparison-based sorting algorithm can perform
better.
The Heap sort algorithm can be implemented as an in-place sorting algorithm. This means that its
memory usage is minimal because apart from what is necessary to hold the initial list of items to be
sorted, it needs no additional memory space to work. In contrast, the Merge sort algorithm requires more
memory space. Similarly, the Quick sort algorithm requires more stack space due to its recursive nature.
The Heap sort algorithm is simpler to understand than other equally efficient sorting algorithms. Because
it does not use advanced computer science concepts such as recursion, it is also easier for programmers to
implement correctly.
The Heap sort algorithm exhibits consistent performance. This means it performs equally well in the best,
average and worst cases. Because of its guaranteed performance, it is particularly suitable to use in
systems with critical response time.
1. Sort the following data using the Heap sort method and the Selection sort method:
20, 50, 30, 75, 90, 60, 80, 25, 10, 40.

Selection Sort:

2. Give the properties of Heap Tree.

A heap is a complete binary tree, whose entries satisfy the heap ordering property.

The heap ordering property states that the parent always precedes the children. There is no precedence
required between the children. The precedence must be an order relationship: we must be able to
determine the precedence between any two objects that can be placed in the heap, and this precedence must
be transitive. Consequently the root node will precede all other nodes in the heap as long as the heap
ordering property is maintained.

A heap is a complete binary tree. This means that visually nodes are added to the binary tree from
top-to-bottom and left-to-right. More formally, this means that we can arrange the nodes in a contiguous
array (indexed from 0) using the following formulas for determining the parent-child relationships.

 left child = 2 * parent + 1


 right child = 2 * parent + 2
 parent = floor( (child-1) / 2)

For example, if we have a heap containing 6 nodes, the last node would be node 5. Its parent would be node
2, and it would be the left child of its parent. Furthermore, its parent, node 2, would be the right child of the
root node, node 0.

Key Property

Since the heap is a complete binary tree, we can count the nodes in each level.

 Level 0 contains the root node.


 Level 1 contains the children of the root node.
 Level L contains the children of the nodes in Level L-1.

Note that if the binary tree is complete and there is a node in level L, then each preceding level must contain
all the possible nodes for that level. Consequently, a tree with L levels will have N nodes where

2^0 + 2^1 + . . . + 2^(L-1) + 1 = 2^L <= N <= 2^0 + 2^1 + . . . + 2^L = 2^(L+1) - 1

Turning this around we can conclude that

L <= log(N) < L+1

where log means the log base 2.

This means that the longest branch from the root to any leaf contains at most log(N) nodes. In turn this means
that the complexity of any algorithm which only accesses nodes on a single branch will be O(log(N)).

Growing a heap

A heap can be grown by

1. Adding a node with the new value at the next possible position that maintains the tree as a complete
binary tree,
2. Comparing the new node with its parent node and exchanging the nodes whenever the new node
precedes the parent, and
3. Repeating this last step until the new node reaches a position where its parent precedes it or when it
becomes the root.

At the end of the preceding steps the heap has grown with the addition of one node and the heap property has
been maintained. This process is sometimes described as the bubble up phase.

Removing items from a heap in order

The root always contains a node that precedes all the other nodes in the heap. We want to remove that root
node and repair the collection so that it satisfies the definition and properties of a heap.

1. Remove the root node and replace it with the last node. The heap is now a complete binary tree with
N-1 nodes. Refer to the root node as the parent node.
2. Compare the parent node with its children. If necessary, exchange the parent with the child that will
maintain the heap ordering property.
3. Repeat this last step until the parent node reaches a position where it precedes all of its children or
where it becomes a leaf.



At the end of the preceding steps the node which precedes all the other nodes in the heap has been removed
and the heap property has been maintained. This process is sometimes described as the sifting down phase.
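The bubble-up and sift-down phases described above can be sketched as a min-heap (here "precedes" means "is smaller than"), using the parent/child index formulas given earlier; the function names are illustrative:

```python
def heap_push(heap, value):
    """Grow phase: append at the next free slot, then bubble up."""
    heap.append(value)
    child = len(heap) - 1
    while child > 0:
        parent = (child - 1) // 2
        if heap[parent] <= heap[child]:   # parent already precedes the child
            break
        heap[parent], heap[child] = heap[child], heap[parent]
        child = parent

def heap_pop(heap):
    """Remove phase: replace the root with the last node, then sift down."""
    root = heap[0]
    last = heap.pop()
    if heap:
        heap[0] = last
        parent = 0
        while True:
            left, right = 2 * parent + 1, 2 * parent + 2
            smallest = parent
            if left < len(heap) and heap[left] < heap[smallest]:
                smallest = left
            if right < len(heap) and heap[right] < heap[smallest]:
                smallest = right
            if smallest == parent:        # parent precedes both children
                break
            heap[parent], heap[smallest] = heap[smallest], heap[parent]
            parent = smallest
    return root

heap = []
for v in [20, 50, 30, 75, 90, 60, 80, 25, 10, 40]:
    heap_push(heap, v)
# Popping repeatedly removes the minimum each time, i.e. sorts ascending.
assert [heap_pop(heap) for _ in range(10)] == [10, 20, 25, 30, 40, 50, 60, 75, 80, 90]
```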

7. Give best-case, worst-case and average-case complexity, with an example, for the following:
a. Insertion sort

b. Selection sort
Here, the analysis remains the same for the best, worst and average cases. It is given below:



c. Bubble sort
For bubble sort, the analysis remains the same for the best, worst and average cases. It is given below:
n = Input size = numbers or elements in a list
Basic operation is comparison followed by swap()
C(n)= outer-loop X inner-loop X Basic operation



d. Heap Sort
Here, the analysis remains the same for the best, worst and average cases. It is given below:
• Time complexity of heapify is O(log n)
• Time complexity of createAndBuildHeap is O(n)
• Overall, Heap Sort is O(n log n)

8. Find an optimal Huffman code for the following set of frequencies: a: 50, b: 20, c: 15, d: 30.
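The Huffman construction repeatedly merges the two least-frequent subtrees. For these frequencies that yields code lengths a: 1, d: 2, b: 3, c: 3 (for example a → 0, d → 10, c → 110, b → 111), with total weighted length 50·1 + 30·2 + 20·3 + 15·3 = 215. A priority-queue sketch (helper names are illustrative):

```python
import heapq

def huffman_code_lengths(freqs):
    """Build a Huffman tree; return {symbol: code length} for each symbol."""
    # Each heap entry: (frequency, tie-break id, {symbol: depth in subtree}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = {'a': 50, 'b': 20, 'c': 15, 'd': 30}
lengths = huffman_code_lengths(freqs)
assert lengths == {'a': 1, 'b': 3, 'c': 3, 'd': 2}
assert sum(freqs[s] * lengths[s] for s in freqs) == 215  # total weighted length
```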



9. a. Write the Quick sort algorithm and derive the worst-case time complexity of the Quick sort algorithm.

quickSort(arr[], low, high)
{
    if (low < high)
    {
        /* pi is the partitioning index; arr[pi] is now at its right place */
        pi = partition(arr, low, high);

        quickSort(arr, low, pi - 1);  // before pi
        quickSort(arr, pi + 1, high); // after pi
    }
}

partition(arr[], low, high)
{
    pivot = arr[high];  // element to be placed at its right position

    i = low - 1;        // index of smaller element; indicates the right
                        // position of the pivot found so far

    for (j = low; j <= high - 1; j++)
    {
        // If the current element is smaller than the pivot
        if (arr[j] < pivot)
        {
            i++;        // increment index of smaller element
            swap arr[i] and arr[j]
        }
    }
    swap arr[i + 1] and arr[high]
    return i + 1
}

Worst Case: The worst case occurs when the partition process always picks the greatest or smallest element as
pivot. If we consider the above partition strategy, where the last element is always picked as pivot, the worst
case occurs when the array is already sorted in increasing or decreasing order. The recurrence for the worst
case is:

T(n) = T(0) + T(n-1) + Θ(n)

which is equivalent to

T(n) = T(n-1) + Θ(n)

By substitution, guessing T(n) ≤ cn² and assuming c > 1 w.l.o.g.:

T(n) = T(n-1) + n
< c(n-1)^2 + n
= c n^2 - 2cn + c + n
= c n^2 - (2c - 1)n + c
< c n^2

The solution of the above recurrence is Θ(n^2).



b. Sort the following list using Merge Sort Algorithm :
<25,15,23,16,5,1,34,11,22,12,23>.
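A minimal merge sort sketch applied to the given list:

```python
def merge_sort(arr):
    """Divide-and-conquer merge sort; returns a new sorted list."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append leftovers

data = [25, 15, 23, 16, 5, 1, 34, 11, 22, 12, 23]
assert merge_sort(data) == [1, 5, 11, 12, 15, 16, 22, 23, 23, 25, 34]
```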

10. Explain the use of the Divide and Conquer technique for the Binary Search method. What is the
complexity of the Binary Search method? Explain it with an example.

Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the
whole array. If the value of the search key is less than the item in the middle of the interval, narrow the
interval to the lower half. Otherwise, narrow it to the upper half. Repeatedly check until the value is found
or the interval is empty.

 One comparison KEY with A[mid]


 After comparison list with n elements is reduced to list of n/2 elements
 Therefore, T(n) = T(n/2) + 1
 If there is only one element in list, then only 1 comparison is made. I.e. T(1) =1

T(n) = T(n/2) + 1 and T(1) = 1


T(n) = T(n/2) + 1
= [T(n/4) + 1] + 1
= T(n/4) + 2
= [T(n/8) + 1] + 2
= T(n/8) + 3
= T(n/2^3) + 3
…
= T(n/2^k) + k
= T(1) + log2 n      (taking 2^k = n, i.e. k = log2 n)
= 1 + log2 n
= O(log2 n)
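The halving step described above can be sketched recursively (the sample list is illustrative):

```python
def binary_search(arr, key, low=0, high=None):
    """Divide-and-conquer binary search on a sorted list; -1 if absent."""
    if high is None:
        high = len(arr) - 1
    if low > high:                # empty interval: key is not present
        return -1
    mid = (low + high) // 2       # one comparison of key with arr[mid]
    if arr[mid] == key:
        return mid
    if key < arr[mid]:            # narrow to the lower half
        return binary_search(arr, key, low, mid - 1)
    return binary_search(arr, key, mid + 1, high)   # upper half

arr = [5, 12, 17, 23, 38, 44, 77, 84, 90]
assert binary_search(arr, 23) == 3
assert binary_search(arr, 50) == -1
```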



11.a. Show how the divide and conquer technique is used to compute the product of two n-digit numbers, with an example.


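One way to sketch the divide-and-conquer product is Karatsuba's method (assuming that is the intended technique): split each number into high and low halves, and note that ad + bc = (a+b)(c+d) − ac − bd, so only three recursive products are needed instead of four, giving T(n) = 3T(n/2) + O(n) = O(n^1.585):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with three recursive products."""
    if x < 10 or y < 10:                 # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a, b = divmod(x, 10 ** half)         # x = a*10^half + b
    c, d = divmod(y, 10 ** half)         # y = c*10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba(a + b, c + d) - ac - bd   # (a+b)(c+d) - ac - bd
    return ac * 10 ** (2 * half) + ad_bc * 10 ** half + bd

assert karatsuba(1234, 5678) == 1234 * 5678
```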

b. Explain Strassen’s algorithm for matrix multiplication with a suitable example.

12.a. Explain the difference between the divide and conquer method and dynamic programming.

Divide and Conquer Method:
1. It involves three steps at each level of recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively; combine the solutions to the subproblems into the solution for the original problem.
2. It is recursive.
3. It does more work on subproblems (overlapping subproblems are solved repeatedly) and hence has more time consumption.
4. It is a top-down approach.
5. In this method the subproblems are independent of each other.
6. Examples: Merge Sort, Binary Search, etc.

Dynamic Programming:
1. It involves a sequence of four steps: characterize the structure of optimal solutions; recursively define the values of optimal solutions; compute the values of optimal solutions bottom-up; construct an optimal solution from computed information.
2. It is non-recursive (typically iterative).
3. It solves each subproblem only once and then stores the result in a table.
4. It is a bottom-up approach.
5. In this method the subproblems are interdependent.
6. Example: Matrix Chain Multiplication.

b. Define: Optimal Solution, Feasible Solution, Principle of Optimality.


Feasible solution – A solution (a set of values for the decision variables) for which all of the constraints
of the problem are satisfied is called a feasible solution. In some problems a feasible solution is already
known; in others, finding a feasible solution may be the hardest part of the problem.

Optimal Solution – An optimal solution is a feasible solution where the objective function reaches its
maximum (or minimum) value – for example, the most profit or the least cost.

Principle of Optimality – A problem is said to satisfy the Principle of Optimality if the subsolutions
of an optimal solution of the problem are themselves optimal solutions for their subproblems.
13. Solve the following Knapsack Problem using the dynamic programming method. Write the equation for
solving the above problem.

The total maximum profit we can put in the bag is 156.



14. Explain how to find the Longest Common Subsequence of two strings using the dynamic programming
method. Find the Longest Common Subsequence using the dynamic programming technique, with
illustration, for X = {A, B, C, B, D, A, B} and Y = {B, D, C, A, B, A}.

The longest common subsequence (LCS) is defined as the longest subsequence that is common to all the
given sequences, provided that the elements of the subsequence are not required to occupy consecutive
positions within the original sequences.
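The LCS table-filling rule can be sketched directly; for the given X and Y the LCS length is 4 (for example, BCBA):

```python
def lcs(X, Y):
    """Bottom-up DP: c[i][j] = length of an LCS of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:              # matching characters
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                  # skip one character
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

# X = <A,B,C,B,D,A,B>, Y = <B,D,C,A,B,A>
assert lcs("ABCBDAB", "BDCABA") == 4
```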
15. Generate the equation for Matrix Chain Multiplication using dynamic programming.
Find the minimum number of multiplications required for multiplying: A[1 × 5], B[5 × 4], C[4 × 3],
D[3 × 2], and E[2 × 1]. Also give the optimal parenthesization of the matrices.
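A bottom-up sketch of the matrix-chain recurrence m[i][j] = min over k of (m[i][k] + m[k+1][j] + p[i−1]·p[k]·p[j]); for the given chain it reports 40 scalar multiplications, achieved by the parenthesization ((((AB)C)D)E):

```python
def matrix_chain(p):
    """m[i][j] = min scalar multiplications for A_i..A_j; p is the dimension vector."""
    n = len(p) - 1                        # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):        # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# A[1x5], B[5x4], C[4x3], D[3x2], E[2x1] -> dimension vector p:
assert matrix_chain([1, 5, 4, 3, 2, 1]) == 40
```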

16. Solve the Making Change problem using Dynamic Programming (denominations:
d1 = 1, d2 = 4, d3 = 6). Give your answer for making change of Rs. 9.
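A bottom-up sketch of the making-change recurrence c[a] = 1 + min over denominations d ≤ a of c[a − d]; for Rs. 9 with denominations 1, 4 and 6 it reports 3 coins (4 + 4 + 1):

```python
def min_coins(denoms, amount):
    """c[a] = fewest coins needed to make amount a (bottom-up DP)."""
    INF = float('inf')
    c = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for d in denoms:
            if d <= a and c[a - d] + 1 < c[a]:
                c[a] = c[a - d] + 1
    return c[amount]

assert min_coins([1, 4, 6], 9) == 3   # 4 + 4 + 1
```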



17.a. Write the equation for finding shortest paths using Floyd’s algorithm. Use Floyd’s method to find
shortest paths for all the pairs mentioned below.

 The Floyd–Warshall algorithm (also known as Floyd's algorithm) is an algorithm for finding shortest

paths in a weighted graph with positive or negative edge weights (but with no negative cycles).

 It computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices.

 In each matrix Dk, the shortest distance dij between vertices vi and vj is computed, allowing only the
first k vertices as intermediates. The series starts with D0, which uses no intermediate vertices.

 The recurrence relation to generate Dk is:

Dk[i,j] = min{ Dk-1[i,j], Dk-1[i,k] + Dk-1[k,j] }

Using Floyd Warshall Algorithm, write the following 4 matrices
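Since the figure's weight matrix is not reproduced here, the recurrence can be illustrated on a small assumed digraph (the matrix below is hypothetical):

```python
def floyd_warshall(D):
    """In-place all-pairs shortest paths; D[i][j] is the weight matrix."""
    n = len(D)
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float('inf')
# An assumed 4-vertex weighted digraph (not the one from the figure).
D = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
floyd_warshall(D)
assert D[0][2] == 5   # 0 -> 1 -> 2 costs 3 + 2
assert D[1][3] == 3   # 1 -> 2 -> 3 costs 2 + 1
```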

b. Explain Dijkstra’s shortest path algorithm with an example. If we want to display intermediate nodes,
what change should we make in the algorithm?
 To find shortest paths from a source vertex v to all other vertices in the graph.

 The graph must be weighted; it may be directed or undirected.

 The problem can be viewed as follows: graph vertices represent cities and weighted edges represent
distances between two cities. Everybody is often interested in moving from one city to another as
quickly as possible.

 Algorithm: Dijkstra’s.

 It assumes the distances/edge weights are non-negative; it won’t work for negative distances.
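A sketch of Dijkstra's algorithm on a small assumed city graph (the names and weights are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph[u] = list of (v, weight)."""
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                  # stale queue entry: skip it
            continue
        for v, w in graph[u]:            # relax each outgoing edge
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'A': [('B', 4), ('C', 1)],
         'B': [('D', 1)],
         'C': [('B', 2), ('D', 5)],
         'D': []}
assert dijkstra(graph, 'A') == {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

To display the intermediate nodes on a shortest path, additionally record a predecessor prev[v] = u whenever the edge (u, v) improves dist[v], then walk the prev chain back from the destination to the source.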



18. Explain in brief the characteristics of greedy algorithms. Compare the Greedy Method with the Dynamic
Programming Method.

Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that
offers the most obvious and immediate benefit. So the problems where choosing locally optimal also leads to
global solution are best fit for Greedy.

Characteristics of the Greedy approach

1. There is an ordered list of resources (profit, cost, value, etc.).
2. The maximum of all the resources (max profit, max value, etc.) is taken.
3. For example, in the fractional knapsack problem, the maximum value/weight is taken first,
according to the available capacity.

Feature-by-feature comparison of the Greedy method and Dynamic Programming:

Feasibility: In a greedy algorithm, we make whatever choice seems best at the moment, in the hope that it
will lead to a globally optimal solution. In Dynamic Programming, we make a decision at each step
considering the current problem and the solutions to previously solved subproblems to calculate the
optimal solution.

Optimality: In the Greedy method there is sometimes no guarantee of getting an optimal solution. It is
guaranteed that Dynamic Programming will generate an optimal solution, as it generally considers all
possible cases and then chooses the best.

Recursion: A greedy method follows the problem-solving heuristic of making the locally optimal choice at
each stage. Dynamic Programming is an algorithmic technique which is usually based on a recurrent
formula that uses some previously calculated states.

Memoization: The greedy method is more efficient in terms of memory, as it never looks back or revises
previous choices. Dynamic Programming requires a DP table for memoization, which increases its
memory complexity.

Time complexity: Greedy methods are generally faster; for example, Dijkstra’s shortest path algorithm
takes O(E log V + V log V) time. Dynamic Programming is generally slower; for example, the
Bellman-Ford algorithm takes O(VE) time.

Fashion: The greedy method computes its solution by making its choices in a serial forward fashion, never
looking back or revising previous choices. Dynamic Programming computes its solution bottom-up or
top-down by synthesizing it from smaller optimal subsolutions.

Example: Fractional knapsack (Greedy) vs. 0/1 knapsack (Dynamic Programming).

19. Explain Kruskal’s algorithm and Prim’s algorithm.
a. Generate the minimum spanning tree of fig. A using Kruskal’s algorithm.
b. Generate the minimum spanning tree of fig. A using Prim’s algorithm.

20. Using a greedy algorithm, find an optimal solution for the knapsack instance n = 7, M = 15,
(P1, P2, P3, P4, P5, P6, P7) = (10, 5, 15, 7, 6, 18, 3) and (w1, w2, w3, w4, w5, w6, w7) = (2, 3, 5, 7, 1, 4, 1).

Compute the profit/weight ratio of each item: (5, 5/3, 3, 1, 6, 4.5, 3). The greedy method takes items in
decreasing order of this ratio, taking each whole item while it fits: items 5, 1, 6, 3 and 7 (total weight 13,
total profit 52), and then the fitting fraction 2/3 of item 2 (weight 2, profit 10/3).

The total maximum profit achievable by the greedy (fractional) method is 52 + 10/3 ≈ 55.33.
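The greedy selection can be sketched as follows (fractions of items are allowed):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy by profit/weight ratio; fractions of items are allowed."""
    items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1],
                   reverse=True)
    total = 0.0
    for p, w in items:
        if capacity >= w:                # take the whole item
            total += p
            capacity -= w
        else:                            # take the fitting fraction and stop
            total += p * capacity / w
            break
    return total

P = [10, 5, 15, 7, 6, 18, 3]
W = [2, 3, 5, 7, 1, 4, 1]
assert abs(fractional_knapsack(P, W, 15) - (52 + 10 / 3)) < 1e-9
```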

21. Following are the details of various jobs to be scheduled on multiple processors such that no two jobs
execute at the same time on the same processor.

Show schedule of these jobs on minimum number of processors using greedy approach.
Derive an algorithm for the same. What is the time complexity of this algorithm?

 Time Complexity:
 When the activities are already sorted by their finish time: O(N).
 When the activities are not sorted by their finish time: O(N log N), due to the complexity of sorting.
22. Explain: Directed Acyclic Graph, Articulation Point, Dense Graph, Breadth First Search Traversal, Depth
First Search Traversal.

Acyclic Directed Graph - A directed acyclic graph is a directed graph with no directed cycles. That is, it
consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such
that following those directions will never form a closed loop.

Articulation Point - A vertex in an undirected connected graph is an articulation point (or cut vertex) if
removing it (and the edges through it) disconnects the graph. Articulation points represent vulnerabilities in a
connected network – single points whose failure would split the network into 2 or more components. They
are useful for designing reliable networks.

Dense Graph - A dense graph is a graph in which the number of edges is close to the maximum possible
number of edges (|E| close to |V|²); by contrast, a graph with relatively few edges is called sparse.

Breadth First Search Traversal - Breadth-first search is an algorithm for searching a tree data structure for
a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior
to moving on to the nodes at the next depth level.

Depth First Search Traversal - The Depth First Search (DFS) algorithm traverses a graph in a depthward
motion and uses a stack to remember the next vertex from which to continue the search when a dead end
occurs in any iteration.
 Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a stack.
 Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all the
vertices from the stack, which do not have adjacent vertices.)



 Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.

23.a. Explain the Breadth First Traversal method for a graph, with its algorithm.
Breadth-first search is an algorithm for searching a tree data structure for a node that satisfies a given
property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the
nodes at the next depth level.

 BFS is a traversing algorithm where we start traversing from a selected source node layerwise by
exploring the neighboring nodes.
 The data structure used in BFS is a queue and a graph. The algorithm makes sure that every node is
visited not more than once.
 BFS follows the following 4 steps:
1. Begin the search algorithm, by knowing the key which is to be searched. Once the
key/element to be searched is decided the searching begins with the root (source) first.
2. Visit the adjacent unvisited vertex. Mark it as visited. Display it (if needed). If this is the
required key, stop. Else, add it to a queue.
3. If no adjacent unvisited vertex is found, remove (dequeue) the first vertex from the queue.
4. Repeat step 2 and 3 until the queue is empty.
 The above algorithm is a search algorithm that identifies whether a node exists in the graph. We can
convert the algorithm to traversal algorithm to find all the reachable nodes from a given node.

 For a directed graph, the sum of the sizes of the adjacency lists of all the nodes is E. So, the time
complexity in this case is O(V) + O(E) = O(V + E).
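The four steps above can be sketched as a short Python function; the adjacency-list representation (a dict mapping each node to its neighbours) and the name bfs are assumptions for illustration:

```python
from collections import deque

def bfs(graph, source):
    """Traverse graph layer by layer from source; return the visit order."""
    visited = {source}          # every node is visited at most once
    queue = deque([source])
    order = []
    while queue:
        node = queue.popleft()  # remove the first vertex from the queue
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order
```

For example, on the graph {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []} the visit order starting from 'A' is A, B, C, D. Each vertex and each edge is processed once, matching the O(V + E) bound above.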

b. Explain Depth First Traversal Method for Graph with algorithm.


Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures.
The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a
graph) and explores as far as possible along each branch before backtracking.



 Approach: Depth-first search is an algorithm for traversing or searching tree or graph data
structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the
case of a graph) and explores as far as possible along each branch before backtracking. So the basic
idea is to start from the root or any arbitrary node and mark the node and move to the adjacent
unmarked node and continue this loop until there is no unmarked adjacent node. Then backtrack and
check for other unmarked nodes and traverse them. Finally, print the nodes in the path.
 Algorithm:
1. Create a recursive function that takes the index of the node and a visited array.
2. Mark the current node as visited and print the node.
3. Traverse all the adjacent and unmarked nodes and call the recursive function with the index
of the adjacent node.

Complexity Analysis:
 Time complexity: O(V + E), where V is the number of vertices and E is the number of edges in
the graph.
 Space Complexity :O(V).
Since an extra visited array is needed of size V.
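The three algorithm steps above can be sketched in Python; the adjacency-dict representation and the function name dfs are illustrative assumptions, and a set plays the role of the visited array:

```python
def dfs(graph, node, visited=None, order=None):
    """Recursive depth-first traversal; returns nodes in visit order."""
    if visited is None:
        visited, order = set(), []
    visited.add(node)           # step 2: mark the current node as visited
    order.append(node)          # step 2: "print" the node
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)   # step 3: recurse on adjacent unmarked nodes
    return order
```

Each vertex and edge is examined once (O(V + E) time), and the visited set is the O(V) extra space noted above.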

24. Write an algorithm to find out the articulation points of an undirected graph.
Find out articulation points for the following graph. Consider vertex A as the starting point.
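Since the worked figure for this question is not reproduced here, the following is a hedged Python sketch of the standard DFS-based (low-link) algorithm for articulation points; the function names and the adjacency-dict format are assumptions for illustration. A vertex is an articulation point if removing it disconnects the graph.

```python
def articulation_points(graph):
    """Articulation points of an undirected graph via discovery/low values."""
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:
                # Back edge: u can reach an ancestor discovered at disc[v].
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # Non-root u is an articulation point if no back edge from
                # v's subtree climbs above u.
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)
        # The root is an articulation point iff it has two or more DFS children.
        if parent is None and children > 1:
            points.add(u)

    for u in graph:
        if u not in disc:
            dfs(u, None)
    return points
```

On a small illustrative graph such as a triangle A-B-C with an extra pendant edge C-D, the only articulation point is C, since removing C separates D from the rest.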

25. Explain Backtracking Method. What is N-Queens Problem? Give solution using Backtracking Method.
a. 4-Queens Problem
b. 8-Queens Problem
 Backtracking is an algorithmic technique that solves a problem recursively, building a candidate
solution step by step and abandoning (pruning) any partial solution that cannot satisfy the
constraints of the problem.
 The backtracking technique is applied to some specific types of problems:
 Decision problems, where we search for any feasible solution.
 Optimisation problems, where we search for the best solution.
 Enumeration problems, where we find the set of all feasible solutions.
 In a backtracking algorithm, the search follows a path towards a solution through a series of
checkpoints; whenever no feasible solution is reachable from the current point, the algorithm
backtracks to the previous checkpoint and tries another path.
 Example,



 Here,
 Green is the start point, blue are intermediate points, red are points with no feasible solution, and
dark green is the end solution.
 When the algorithm reaches an end point, it checks whether that point is a solution; if it is, the
solution is returned, otherwise the algorithm backtracks one step and follows the track to the next
candidate point.



Similarly 8 queens problem can be solved:
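In code, the backtracking scheme above solves the N-Queens problem for any board size, covering both the 4-queens and 8-queens cases; this minimal Python sketch (names assumed for illustration) represents a partial solution as a list of column positions, one per row:

```python
def solve_n_queens(n):
    """Return all solutions; each is a list of column positions per row."""
    solutions = []

    def safe(cols, col):
        # A new queen conflicts if it shares a column or a diagonal.
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(cols[:])
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)    # try this placement
                place(cols)         # recurse on the next row
                cols.pop()          # backtrack

    place([])
    return solutions
```

The 4-queens problem has 2 solutions and the 8-queens problem has 92; this sketch enumerates them all by pruning every placement that violates the constraints.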



26. Demonstrate Binary Search method to search Key = 14, from the array
A = <2, 4, 7, 8, 10, 13, 14, 60>
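The demonstration follows the standard halving scheme, sketched below in Python (the function name is an assumption). For Key = 14 the interval shrinks as: low=0, high=7, mid=3, A[3]=8 < 14, so low=4; then mid=5, A[5]=13 < 14, so low=6; then mid=6, A[6]=14, found at index 6 (0-based):

```python
def binary_search(a, key):
    """Classic iterative binary search on a sorted list; returns index or -1."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == key:
            return mid          # key found
        elif a[mid] < key:
            low = mid + 1       # discard the left half
        else:
            high = mid - 1      # discard the right half
    return -1                   # key absent
```

So binary_search([2, 4, 7, 8, 10, 13, 14, 60], 14) returns 6, after only three comparisons against an array of eight elements (O(log n)).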

27. Explain Travelling salesman problem with example.

Travelling Salesman Problem (TSP): Given a set of cities and distance between every pair of cities, the
problem is to find the shortest possible route that visits every city exactly once and returns to the starting
point.
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is to find
whether there exists a tour that visits every city exactly once. Here we know that a Hamiltonian tour exists
(because the graph is complete), and in fact many such tours exist; the problem is to find a minimum-weight
Hamiltonian cycle.

For example, consider the graph shown in the figure. A TSP tour in the graph is 1-2-4-3-1. The
cost of the tour is 10+25+30+15, which is 80.

Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! Permutations of cities.
3) Calculate cost of every permutation and keep track of minimum cost permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)

Dynamic Programming:
Let the given set of vertices be {1, 2, 3, 4, …, n}. Let us consider 1 as the starting and ending point of the output.
For every other vertex i (other than 1), we find the minimum cost path with 1 as the starting point, i as the
ending point and all vertices appearing exactly once. Let the cost of this path be cost(i), the cost of
corresponding Cycle would be cost(i) + dist(i, 1) where dist(i, 1) is the distance from i to 1. Finally, we
return the minimum of all [cost(i) + dist(i, 1)] values. This looks simple so far. Now the question is how to
get cost(i)?
To calculate cost(i) using Dynamic Programming, we need to have some recursive relation in terms of
sub-problems. Let us define a term C(S, i) be the cost of the minimum cost path visiting each vertex in set
S exactly once, starting at 1 and ending at i.
We start with all subsets of size 2 and calculate C(S, i) for all subsets where S is the subset, then we
calculate C(S, i) for all subsets S of size 3 and so on. Note that 1 must be present in every subset.

If the size of S is 2, then S must be {1, i}, and
    C(S, i) = dist(1, i)
Else, if the size of S is greater than 2,
    C(S, i) = min { C(S − {i}, j) + dist(j, i) }, where j belongs to S, j ≠ i and j ≠ 1.
For a set of size n, we consider n−2 subsets, each of size n−1, such that no subset contains the nth vertex.
Using the above recurrence relation, we can write a dynamic-programming-based solution. There are at most
O(n·2^n) subproblems, and each one takes linear time to solve. The total running time is therefore
O(n²·2^n). The time complexity is much less than O(n!), but still exponential. The space required is also
exponential, so this approach is infeasible even for a slightly larger number of vertices.
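The C(S, i) recurrence above can be implemented as the Held-Karp dynamic program. This Python sketch is illustrative: it numbers cities from 0 and keeps the fixed start city 0 out of S (the start city is implicit rather than stored in every subset):

```python
from itertools import combinations

def tsp(dist):
    """Held-Karp DP: dist is an n x n matrix; returns the minimum tour cost."""
    n = len(dist)
    # C[(S, i)]: min cost of a path starting at city 0, visiting every city in
    # frozenset S exactly once, and ending at i (0 itself is not stored in S).
    C = {}
    for i in range(1, n):                       # base case: |S| = 1
        C[(frozenset([i]), i)] = dist[0][i]
    for size in range(2, n):                    # build up larger subsets
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in S:
                C[(S, i)] = min(C[(S - {i}, j)] + dist[j][i]
                                for j in S if j != i)
    full = frozenset(range(1, n))
    # Close the cycle: best path ending at i, plus the edge back to 0.
    return min(C[(full, i)] + dist[i][0] for i in range(1, n))
```

On the 4-city example above (tour 1-2-4-3-1 with cost 10+25+30+15 = 80), the sketch returns 80, and it runs in the O(n²·2^n) time stated above.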
28.a. Explain naïve string matching algorithm
Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints
all occurrences of pat[] in txt[]. You may assume that n > m.
Examples:
Input: txt[] = "THIS IS A TEST TEXT"
pat[] = "TEST"
Output: Pattern found at index 10

Input: txt[] = "AABAACAADAABAABA"


pat[] = "AABA"
Output: Pattern found at index 0
Pattern found at index 9
Pattern found at index 12

def search(pat, txt):
    M = len(pat)
    N = len(txt)

    # A loop to slide pat[] over txt[] one position at a time
    for i in range(N - M + 1):
        j = 0

        # For the current shift i, check for a pattern match
        while j < M:
            if txt[i + j] != pat[j]:
                break
            j += 1

        if j == M:
            print("Pattern found at index", i)
The number of comparisons in the worst case is O(m*(n-m+1))

b. Discuss Rabin-Karp algorithm for string matching


Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char
txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m.

Input: txt[] = "THIS IS A TEST TEXT"


pat[] = "TEST"
Output: Pattern found at index 10

Input: txt[] = "AABAACAADAABAABA"


pat[] = "AABA"
Output: Pattern found at index 0
Pattern found at index 9
Pattern found at index 12
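Rabin-Karp compares a hash of the pattern with a rolling hash of each length-m window of the text, and compares characters only when the hashes match (spurious hash hits are re-checked). A minimal Python sketch that returns the match indices; the base d = 256 and prime q = 101 are conventional illustrative choices:

```python
def rabin_karp(txt, pat, q=101, d=256):
    """Return all indices where pat occurs in txt, using rolling hashes mod q."""
    n, m = len(txt), len(pat)
    h = pow(d, m - 1, q)            # weight of the leading character
    p = t = 0
    for i in range(m):              # hash of the pattern and of the first window
        p = (d * p + ord(pat[i])) % q
        t = (d * t + ord(txt[i])) % q
    result = []
    for i in range(n - m + 1):
        # On a hash match, verify character by character (hits may be spurious).
        if p == t and txt[i:i + m] == pat:
            result.append(i)
        if i < n - m:               # roll the hash to the next window
            t = (d * (t - ord(txt[i]) * h) + ord(txt[i + m])) % q
    return result
```

The average and best case is O(n + m); the worst case degrades to O(nm) when many windows collide with the pattern hash, but the explicit verification keeps the output correct in all cases.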

29. What is finite automata? Explain with example how finite automaton is used for string matching?
A finite automaton is a collection of
1. A finite set of states Q.
2. A start state q0 Є Q.
3. A final state qf Є Q.
4. A finite input alphabet Σ.
5. A mapping (transition) function δ from Q × Σ to Q.
Finite automaton is used for string matching:

And so on, the transition table will be:
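Since the transition-table figures are not reproduced here, the construction can be sketched in Python: the state q counts how many pattern characters are matched so far, and δ(q, c) is the length of the longest prefix of the pattern that is a suffix of pat[:q] + c. The names below are illustrative assumptions:

```python
def build_transition_table(pat, alphabet):
    """delta[q][c]: longest prefix of pat that is a suffix of pat[:q] + c."""
    m = len(pat)
    delta = [{} for _ in range(m + 1)]
    for q in range(m + 1):
        for c in alphabet:
            k = min(m, q + 1)
            # Shrink k until pat[:k] is a suffix of the matched text plus c.
            while k > 0 and not (pat[:q] + c).endswith(pat[:k]):
                k -= 1
            delta[q][c] = k
    return delta

def fa_search(txt, pat):
    """Return all indices where pat occurs in txt, via the automaton."""
    alphabet = set(txt) | set(pat)
    delta = build_transition_table(pat, alphabet)
    m, q, hits = len(pat), 0, []
    for i, c in enumerate(txt):
        q = delta[q][c]            # one table lookup per text character
        if q == m:                 # reaching state m means a full match
            hits.append(i - m + 1)
    return hits
```

Once the table is built (O(m³·|Σ|) with this naive construction), scanning the text costs one table lookup per character, i.e. O(n).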

30.a. Write a brief note on NP-completeness and the classes-P, NP and NPC.
Can all computational problems be solved by a computer? There are computational problems
that cannot be solved by algorithms even with unlimited time, for example the Turing Halting
problem (given a program and an input, decide whether the program will eventually halt when run
with that input, or will run forever). Alan Turing proved that a general algorithm to solve the halting
problem for all possible program-input pairs cannot exist. A key part of the proof is that the Turing
machine was used as a mathematical definition of a computer and program (Source: Halting
Problem).
The status of NP-complete problems is another story: NP-complete problems are problems
whose status is unknown. No polynomial-time algorithm has yet been discovered for any NP-complete
problem, nor has anybody yet been able to prove that no polynomial-time algorithm
exists for any of them. The interesting part is that if any one of the NP-complete problems can be
solved in polynomial time, then all of them can be solved.
What are NP, P, NP-complete and NP-Hard problems?
P is the set of problems that can be solved by a deterministic Turing machine in polynomial time.
NP is the set of decision problems that can be solved by a non-deterministic Turing machine in polynomial
time. P is a subset of NP (any problem that can be solved by a deterministic machine in polynomial time can
also be solved by a non-deterministic machine in polynomial time).
Informally, NP is the set of decision problems that can be solved in polynomial time via a “Lucky
Algorithm”, a magical algorithm that always makes the right guess among the given set of choices
(Source Ref 1).
NP-complete problems are the hardest problems in the NP set. A decision problem L is NP-complete if:
1) L is in NP (Any given solution for NP-complete problems can be verified quickly, but there is no
efficient known solution).
2) Every problem in NP is reducible to L in polynomial time (Reduction is defined below).
A problem is NP-Hard if it follows property 2 mentioned above, doesn’t need to follow property 1.
Therefore, the NP-Complete set is also a subset of the NP-Hard set.

Decision vs Optimization Problems


NP-completeness applies to the realm of decision problems. It was set up this way because it’s easier to
compare the difficulty of decision problems than that of optimization problems. In reality, though, being
able to solve a decision problem in polynomial time will often permit us to solve the corresponding
optimization problem in polynomial time (using a polynomial number of calls to the decision problem).
So, discussing the difficulty of decision problems is often really equivalent to discussing the difficulty of
optimization problems.
b. Explain polynomial reduction.

Let L1 and L2 be two decision problems. Suppose algorithm A2 solves L2. That is, if y is an input for
L2, then algorithm A2 will answer Yes or No depending upon whether y belongs to L2 or not.
The idea is to find a transformation from L1 to L2 so that algorithm A2 can be part of an algorithm A1 to
solve L1.

Learning reduction, in general, is very important. For example, if we have library functions to solve
certain problems and if we can reduce a new problem to one of the solved problems, we save a lot of time.
Consider the example of a problem where we have to find the minimum product path in a given directed
graph where the product of path is the multiplication of weights of edges along the path. If we have code
for Dijkstra’s algorithm to find the shortest path, we can take the log of all weights and use Dijkstra’s
algorithm to find the minimum product path rather than writing a fresh code for this new problem.
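The minimum-product-path reduction described above can be sketched in Python: taking logarithms turns products into sums, so an unchanged Dijkstra routine finds the answer. This assumes all edge weights are positive (the log of a non-positive weight is undefined); the adjacency format and names are illustrative:

```python
import heapq
import math

def min_product_path(graph, src, dst):
    """Reduce min-product path to shortest path by taking logs of weights."""
    # dist holds sums of log-weights, i.e. the log of each path product.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return math.exp(d)          # convert the log-sum back to a product
        if d > dist.get(u, float('inf')):
            continue                    # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + math.log(w)        # log turns products into sums
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None                         # dst unreachable from src
```

For example, with directed edges a→b of weight 2, b→c of weight 2 and a→c of weight 5, the minimum product path a→b→c has product 4, beating the direct edge of weight 5.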
