
APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY

B.Tech Degree S6 (R, S) / S6 (PT) (R) Examination June 2023 (2019 Scheme)

Course Code: CST306

Course Name: ALGORITHM ANALYSIS AND DESIGN

PART A

1. Show that for any real constants a and b, where b > 0, (n + a)^b = O(n^b)
Ans:
We need to find constants c and n0 such that (n + a)^b ≤ c·n^b for all n ≥ n0.

(n + a) ≤ (n + |a|)
(n + a)^b ≤ (n + |a|)^b
(n + |a|)^b = n^b (1 + |a|/n)^b
For n ≥ |a| we have |a|/n ≤ 1, so (1 + |a|/n)^b ≤ 2^b.

Choose c = 2^b and n0 = max(1, |a|).

Now the following relation is true:
(n + a)^b ≤ 2^b · n^b, for all n ≥ max(1, |a|)

Therefore, (n + a)^b = O(n^b)


2. Solve the following recurrence equations using Master theorem
a. T(n) = 3T(n/2) + n^2
b. T(n) = 2T(n/2) + n log n
Ans:
a) T(n) = 3T(n/2) + n^2
a = 3, b = 2, f(n) = n^2 = Ɵ(n^2 log^0 n), so k = 2, p = 0
b^k = 2^2 = 4
Here a < b^k and p ≥ 0
T(n) = Ɵ(n^k log^p n)
     = Ɵ(n^2 log^0 n)
     = Ɵ(n^2)
b) T(n) = 2T(n/2) + n log n
a = 2, b = 2, f(n) = n log n = Ɵ(n^1 log^1 n), so k = 1, p = 1
b^k = 2^1 = 2
Here a = b^k and p > -1
T(n) = Ɵ(n^(log_b a) log^(p+1) n)
     = Ɵ(n^(log_2 2) log^2 n)
     = Ɵ(n log^2 n)
3. Define AVL tree. Explain the rotations performed for insertion in AVL tree.
Ans:
 AVL Tree can be defined as a height-balanced binary search tree in which each node
is associated with a balance factor.

 Balance Factor
 Balance Factor of a node = height of left subtree – height of right subtree
 In an AVL tree balance factor of every node is -1,0 or +1
 Otherwise the tree will be unbalanced and need to be balanced.

 An AVL tree becomes imbalanced due to some insertion or deletion operations


 We use rotation operation to make the tree balanced.
 There are 4 types of rotations

 Single Left Rotation(LL Rotation)


 In LL rotation every node moves one position to left from the current position

 Single Right Rotation(RR Rotation)


 In RR rotation every node moves one position to right from the current
position
 Left-Right Rotation(LR Rotation)
 The LR rotation is the combination of single left rotation followed by single
right rotation.

 Right-Left Rotation(RL Rotation)


 The RL rotation is the combination of single right rotation followed by single
left rotation.
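The single rotations described above can be sketched in code. The following is a minimal Python sketch (illustrative, not part of the original answer); the names Node, rotate_left and rotate_right are assumptions introduced here, and rebalancing with balance factors would be built on top of these primitives.

    # A minimal sketch of single rotations on an AVL-style node.
    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def rotate_left(root):
        """Single left rotation: the right child becomes the new subtree root."""
        new_root = root.right
        root.right = new_root.left   # re-attach the new root's left subtree
        new_root.left = root
        return new_root

    def rotate_right(root):
        """Single right rotation: the left child becomes the new subtree root."""
        new_root = root.left
        root.left = new_root.right   # re-attach the new root's right subtree
        new_root.right = root
        return new_root

    # Example: inserting 10, 20, 30 in order gives a right-skewed chain,
    # which a single left rotation at the root rebalances.
    root = Node(10)
    root.right = Node(20)
    root.right.right = Node(30)
    root = rotate_left(root)
    print(root.key, root.left.key, root.right.key)  # 20 10 30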

4. Find the different topological ordering of the given graph

Ans:
A, B, C, D, E
A, B, D, C, E
5. Write the control abstraction of divide and conquer strategy
Ans:

Algorithm DAndC(P)
{
if Small(P) then
return S(P)
else
{
Divide P into smaller instances P1, P2, . . . . Pk, k≥1;
apply DAndC to each of these sub-problems;
return Combine(DAndC(P1), DAndC(P2), . . . . , DAndC(Pk));
}
}
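As an illustration of this control abstraction, the following Python sketch (an assumed example, not part of the answer) finds the maximum of an array by divide and conquer; the roles of Small(P), S(P), Divide and Combine are marked in the comments.

    # A sketch instantiating the DAndC control abstraction for finding the
    # maximum element of a list; the function name is illustrative.
    def d_and_c_max(a, low, high):
        if low == high:                    # Small(P): a single element
            return a[low]                  # S(P): that element is the answer
        mid = (low + high) // 2            # Divide P into two smaller instances
        left_max = d_and_c_max(a, low, mid)        # apply DAndC to sub-problem 1
        right_max = d_and_c_max(a, mid + 1, high)  # apply DAndC to sub-problem 2
        return max(left_max, right_max)    # Combine the sub-solutions

    data = [12, 5, 42, 7, 19]
    print(d_and_c_max(data, 0, len(data) - 1))  # 42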

6. Compare Strassen’s matrix multiplication with ordinary matrix multiplication


Ans:
 Ordinary Matrix Multiplication
 For multiplying two matrices of size n x n, we perform n^3 scalar multiplications.
 So the time complexity = O(n^3)
 Strassen’s matrix multiplication
 For multiplying two matrices of size n x n, we recursively make 7 multiplications of (n/2 x n/2) submatrices and a constant number of additions and subtractions of submatrices.
 Addition/Subtraction of two matrices takes O(n^2) time.
 Time complexity: T(n) = 7 T(n/2) + O(n^2) = O(n^(log_2 7)) = O(n^2.81)
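The seven Strassen products for one 2 x 2 step can be written out directly. Below is a minimal Python sketch (illustrative, not part of the answer) for plain 2 x 2 matrices using the standard products M1..M7; in the full algorithm each entry would itself be an (n/2 x n/2) submatrix and the seven multiplications would be recursive.

    # One Strassen step on 2 x 2 matrices using the seven products M1..M7.
    def strassen_2x2(A, B):
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B

        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)

        c11 = m1 + m4 - m5 + m7
        c12 = m3 + m5
        c21 = m2 + m4
        c22 = m1 - m2 + m3 + m6
        return [[c11, c12], [c21, c22]]

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(strassen_2x2(A, B))   # [[19, 22], [43, 50]], same as ordinary multiplication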

7. Differentiate backtracking technique from branch and bound technique


Ans:
Backtracking:
o Backtracking is a problem-solving technique used for decision problems.
o It uses depth-first search.
o All possible solutions are tried; if a solution does not satisfy the constraint, we backtrack and look for another solution.
o Applications: n-Queens problem, sum of subsets.
o Backtracking is generally more efficient than branch and bound.

Branch and Bound:
o Branch and bound is a problem-solving technique used for optimization problems.
o It uses depth-first search / D-search / least-cost search.
o Based on the search, bounding values are calculated; according to the bounding values, we either stop at a node or extend it.
o Applications: knapsack problem, travelling salesman problem, etc.
o Branch and bound is generally less efficient than backtracking.
8. What is Principle of Optimality?
Ans:
 Definition: The principle of optimality states that an optimal sequence of decisions
has the property that whatever the initial state and decision are, the remaining
decisions must constitute an optimal decision sequence with regard to the state
resulting from the first decision.
 A problem is said to satisfy the Principle of Optimality if the subsolutions of an
optimal solution of the problem are themselves optimal solutions for their
subproblems.
 Examples:
 The shortest path problem satisfies the Principle of Optimality.
 This is because if a, x1, x2, ..., xn, b is a shortest path from node a to node b in a
graph, then the portion from xi to xj on that path is a shortest path from xi to xj.

9. Differentiate P and NP problems. Give one example to each


Ans:
o Class P
 Class P consists of those problems that are solvable in polynomial time.
 P problems can be solved in time O(n^k), where n is the size of the input and k is some
constant.
 Example:
 PATH Problem: Given directed graph G, determine whether a directed path
exists from s to t.
 Complexity of this algorithm = O(n).
 This is a polynomial time algorithm.
o Class NP
 Some problems can, as far as we know, only be solved in exponential or factorial time;
no polynomial time algorithm is known for them, but a proposed solution can be
verified in polynomial time. These are called NP problems.
 NP is the class of problems that have a polynomial time verifier; such problems may or
may not have a known polynomial time algorithm.
 Example:
 Hamiltonian path(HAMPATH) Problem
o A Hamiltonian path in a directed graph G is a directed path that goes
through each node exactly once.

o The Hamiltonian path of the above graph is as follows


o No polynomial time algorithm is known for finding a Hamiltonian path from s to t
in a given graph, but a given path can be verified in polynomial time.
So the HAMPATH problem is an NP problem.

10. Define graph coloring problem


Ans:
Graph coloring is the assignment of a color to each vertex of a graph G such
that no two adjacent vertices get the same color. The objective is to minimize the number of
colors used while coloring the graph. The smallest number of colors required to color a graph is
called the chromatic number of that graph. The graph coloring problem is NP-Complete.

3 Colorable graph

PART B

11.
a) Define Big Oh, Big Omega and Theta notations and illustrate them graphically.
Ans:
 Asymptotic Notations
 These are mathematical notations used to represent the frequency count (growth rate) of an algorithm.
 Big Oh (O)
 The function f(n) = O(g(n)) iff there exist two positive constants c and
n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0
 It is the measure of the longest amount of time taken by an
algorithm (worst case).
 It gives an asymptotic upper bound.
 Omega (Ω)
 The function f(n) = Ω(g(n)) iff there exist two positive constants c and n0
such that f(n) ≥ c g(n) ≥ 0 for all n ≥ n0
 It is the measure of the smallest amount of time taken by an
algorithm (best case).
 It gives an asymptotic lower bound.

 Theta (Ɵ)
 The function f(n) = Ɵ(g(n)) iff there exist three positive constants c1, c2
and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0
 It is the measure of the average amount of time taken by an
algorithm (average case).

b) Find the time complexity of following code segment

i) for (int i = 1; i <= n; i *= c) {


// some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
// some O(1) expressions
}
(ii) for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
// some O(1) expressions
}
Ans:
i)
Time complexity of the first for loop = O(log_c n)
Time complexity of the second for loop = O(log_c n)
Altogether the time complexity = O(log_c n)

ii)
Frequency count of the first for loop = n/c
Frequency count of the second for loop = n/c
Total frequency count = 2n/c
Time complexity = O(n)

12.
a) Find the best case, worst case and average case time complexity of binary search
Ans:

Algorithm BinarySearch(A, low, high, search_data)


{
flag=0
while low<=high do
{
mid= (low+high)/2
if A[mid]=search_data then
{
flag = 1
break
}
else if A[mid] >search_data then
high=mid-1
else
low=mid+1
}
if flag=0 then
Print "Search data not found"
else
Print "Search data found at index "mid
}
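A runnable Python version of the same iterative procedure is sketched below (illustrative; it assumes the input list is sorted in ascending order and returns the index instead of printing).

    # A sketch of the iterative binary search above, assuming a sorted list.
    def binary_search(a, search_data):
        low, high = 0, len(a) - 1
        while low <= high:
            mid = (low + high) // 2
            if a[mid] == search_data:
                return mid              # search data found at index mid
            elif a[mid] > search_data:
                high = mid - 1          # continue in the left half
            else:
                low = mid + 1           # continue in the right half
        return -1                       # search data not found

    print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4
    print(binary_search([2, 5, 8, 12, 16, 23, 38], 7))   # -1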

■ Best Case Time Complexity of Binary Search


 The search data is at the middle index.
 So total number of iterations required is 1
 Therefore, Time complexity = O(1)

■ Worst Case Time Complexity of Binary Search


 Assume that length of the array is n
 At each iteration, the array is divided by half.
 At Iteration 1, Length of array = n
 At Iteration 2, Length of array = n/2
 At Iteration 3, Length of array = (n/2)/2 = n/2^2
 At Iteration k, Length of array = n/2^(k-1)
 After k divisions, the length of the array becomes 1
n/2^(k-1) = 1
n = 2^(k-1)
 Applying the log function on both sides:
log2(n) = log2(2^(k-1))
log2(n) = (k-1) log2(2)
k = log2(n) + 1
 Hence, the time complexity = O(log2 n)
■ Average Case Time Complexity of Binary Search
 Total number of iterations required ≈ k/2 = log2(n)/2
 Hence, the time complexity = O(log2 n)

b) Find the time complexity of the following functions using the recursion tree method
(i) T(n) = 2 T(n/2) + n^2
(ii) T(n) = T(n/3) + T(2n/3) + n
Ans:

(i) T(n) = 2 T(n/2) + n^2

Assume n/2^k = 1, so 2^k = n and k = log2 n

T(n) = n^2 + (n^2/2) + (n^2/2^2) + . . . + (n^2/2^k)
     = n^2 [1 + (1/2) + (1/2)^2 + . . . + (1/2)^k]
     = n^2 [ (1 - (1/2)^(k+1)) / (1 - (1/2)) ]
     = 2n^2 [ 1 - (1/2)(1/2)^k ]
     = 2n^2 [ 1 - (1/2)(1/2^k) ]
     = 2n^2 [ 1 - (1/2)(1/n) ]
     = 2n^2 - n
     = O(n^2)

(ii) T(n) = T(n/3) + T(2n/3) + n

Assume that (2^k n)/3^k = 1, so (3/2)^k = n and k = log_(3/2) n

T(n) = (k+1) n
     = (log_(3/2) n + 1) n
     = n log_(3/2) n + n
     = O(n log_(3/2) n)
13.

a) Construct AVL tree by inserting following elements appeared in the order.


21, 26, 30, 9, 4, 14, 28, 18,15
Ans:
b) Explain union and find algorithms in disjoint datasets
Ans:
o Find Operation
 Determine which subset a particular element is in.
 This will return the representative(root) of the set that the element belongs.
 This can be used for determining if two elements are in the same subset.

Find(3) will return 1, which is the root of the tree that 3 belongs to
Find(6) will return 6, which is the root of the tree that 6 belongs to

 Find Algorithm
Algorithm Find(n)
1. while n.parent != NULL do
1.1 n = n.parent
2. return n

Worst case Time Complexity = O(d), where d is the depth of the tree

o Union Operation
 Join two subsets into a single subset.
 Here we first check whether the two elements already belong to the same set;
if they do, no union is performed.
 Example parent array P (the value -1 marks a root):
i :  1   2   3   4   5   6   7
P : -1   1   1   1   6   1   6
 Union Algorithm
Algorithm Union(a, b)
1. X = Find(a)
2. Y = Find(b)
3. If X != Y then
3.1 Y.parent = X

Worst case Time Complexity = O(d), where d is the depth of the tree
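A minimal Python sketch of these two operations on a parent array (illustrative; it uses the value -1 to mark a root, as in the table above, and omits union by rank and path compression):

    # Find and Union on a parent array; -1 marks the root of a tree.
    def find(parent, n):
        while parent[n] != -1:          # walk up until the root is reached
            n = parent[n]
        return n

    def union(parent, a, b):
        x = find(parent, a)
        y = find(parent, b)
        if x != y:                      # only merge if the roots differ
            parent[y] = x               # make one root the parent of the other

    # Elements 1..7; index 0 is unused so indices match element names.
    parent = [-1] * 8
    union(parent, 1, 2)
    union(parent, 1, 3)
    union(parent, 6, 5)
    print(find(parent, 3), find(parent, 5))   # 1 6  (different sets)
    union(parent, 1, 6)
    print(find(parent, 5))                    # 1   (now in the same set)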

14.

a) Write DFS algorithm for graph traversal. Also derive its time complexity.
Ans:
Algorithm DFS(G, u)
1. Mark vertex u as visited
2. For each adjacent vertex v of u
2.1 if v is not visited
2.1.1 DFS(G, v)

Algorithm main(G, u)
1. Mark all nodes as unvisited.
2. DFS(G, u)
3. For any node x which is not yet visited
3.1 DFS(G, x)

 Complexity
 If the graph is represented as an adjacency list
 Each vertex is visited at most once. So the time spent is O(V)
 Each adjacency list is scanned at most once. So the time spent is
O(E)
 Time complexity of DFS = O(V + E).
 If the graph is represented as an adjacency matrix
 There are V^2 entries in the adjacency matrix. Each entry is checked
once.
 Time complexity of DFS = O(V^2)
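The same traversal can be sketched in Python over an adjacency list (an illustrative representation as a dict of lists; the graph g below is an assumed example, not the exam's figure):

    # Recursive DFS over an adjacency-list graph (dict of lists).
    def dfs(graph, u, visited):
        visited.add(u)                     # mark vertex u as visited
        print(u, end=" ")
        for v in graph.get(u, []):         # for each adjacent vertex v of u
            if v not in visited:
                dfs(graph, v, visited)

    def dfs_all(graph):
        visited = set()                    # mark all nodes as unvisited
        for x in graph:                    # cover disconnected components too
            if x not in visited:
                dfs(graph, x, visited)

    g = {0: [1, 2], 1: [3], 2: [3], 3: [], 4: [5], 5: []}
    dfs_all(g)   # prints 0 1 3 2 4 5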
b) Find the strongly connected components of the given directed graph.

Ans:
PASS-1
Perform a depth first search on the whole graph. If a vertex has no unvisited neighbor,
then push this vertex to the stack.
Final stack will look like:

PASS-2
Now reverse the original graph.

Mark all nodes as unvisited.

Pop an item from the stack. If it is unvisited, perform DFS starting from it on the reversed graph.
It will generate the first strongly connected component.

Again pop the next item from the stack and, if it is unvisited, perform DFS. It will
generate the next strongly connected component. Repeat until the stack is empty.

Thus the strongly connected components are:

0-3-2-1
4-6-5
7
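The two passes described above (DFS with a finish-order stack, then DFS on the reversed graph) are Kosaraju's algorithm. A compact Python sketch follows (illustrative; the graph g is an assumed example with the same component structure as the answer, not the exam's figure).

    # A sketch of Kosaraju's algorithm for strongly connected components.
    def kosaraju_scc(graph):
        visited, stack = set(), []

        def dfs1(u):                        # Pass 1: push vertices on finish
            visited.add(u)
            for v in graph.get(u, []):
                if v not in visited:
                    dfs1(v)
            stack.append(u)                 # pushed only after all neighbours

        for u in graph:
            if u not in visited:
                dfs1(u)

        rev = {u: [] for u in graph}        # reverse every edge
        for u in graph:
            for v in graph[u]:
                rev.setdefault(v, []).append(u)

        visited.clear()
        components = []

        def dfs2(u, comp):                  # Pass 2: DFS on the reversed graph
            visited.add(u)
            comp.append(u)
            for v in rev.get(u, []):
                if v not in visited:
                    dfs2(v, comp)

        while stack:
            u = stack.pop()
            if u not in visited:
                comp = []
                dfs2(u, comp)
                components.append(comp)
        return components

    g = {0: [1], 1: [2], 2: [3], 3: [0, 4], 4: [5], 5: [6], 6: [4, 7], 7: []}
    print(kosaraju_scc(g))   # [[0, 3, 2, 1], [4, 6, 5], [7]]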
15.

a) Explain 2- way merge sort algorithm with an example and derive its time complexity
Ans:

Algorithm MergeSort(low, high)
{
if (low < high) then
{
mid = (low + high)/2;
MergeSort(low, mid);
MergeSort(mid+1, high);
Merge(low, mid, high);
}
}
Algorithm Merge(low, mid, high)
{
i= low; x= low; y= mid + 1;
while((x ≤ mid) and (y ≤ high)) do
{
if ( a[x] ≤ a[y] ) then
{
b[i] = a[x];
x = x+1;
}
else
{
b[i] = a[y];
y = y+1;
}
i=i+1;
}
if( x ≤ mid) then
{
for k=x to mid do
{
b[i] = a[k];
i =i+1;
}
}
else
{
for k=y to high do
{
b[i] = a[k];
i =i+1;
}
}
for k= low to high do
a[k] = b[k];
}

 Complexity

T(n) = a                  if n = 1
T(n) = 2 T(n/2) + cn      otherwise

a is the time to sort an array of size 1
cn is the time to merge two sub-arrays
2 T(n/2) is the complexity of the two recursive calls

T(n) = 2 T(n/2) + cn
     = 2(2 T(n/4) + c(n/2)) + cn
     = 2^2 T(n/2^2) + 2cn
     = 2^3 T(n/2^3) + 3cn
     ..............
     = 2^k T(n/2^k) + kcn      [Assume that 2^k = n, so k = log n]
     = n T(1) + cn log n
     = an + cn log n
     = O(n log n)
Best Case, Average Case and Worst Case Complexity of Merge Sort = O(n log n)

b) Find the optimal solution for the following Fractional Knapsack problem. Given the
number of items(n) = 7, capacity of sack(m) = 15,
W={1, 3, 5, 4, 1, 3, 2} and P = {10, 15, 7, 8, 9, 4}
Ans:
The data is inconsistent: there are 7 weights but only 6 profits, so the problem cannot be solved as stated.
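For reference, the standard greedy strategy for the fractional knapsack (sort by profit/weight ratio, take fractions of the last item) is sketched below in Python. The item data used here is a small hypothetical instance, since the data in the question does not match up.

    # Greedy fractional knapsack; the item data below is hypothetical.
    def fractional_knapsack(weights, profits, capacity):
        # consider items in decreasing order of profit/weight ratio
        items = sorted(zip(weights, profits), key=lambda wp: wp[1] / wp[0], reverse=True)
        total = 0.0
        for w, p in items:
            if capacity >= w:              # take the whole item
                capacity -= w
                total += p
            else:                          # take a fraction of the item and stop
                total += p * (capacity / w)
                break
        return total

    print(fractional_knapsack([2, 3, 5, 7], [10, 5, 15, 7], 10))   # 30.0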
16.

a) Apply Kruskal’s algorithm for finding minimum cost spanning tree

Ans:
This is the minimum cost spanning tree and its cost = 21

b) Apply Dijikstra’s algorithm for finding the shortest path from vertex A to all other
vertices.
Ans:

A B C D E F G
A 0 ∞ ∞ ∞ ∞ ∞ ∞
B 5 ∞ ∞ ∞ ∞ ∞
C 4 12 13 ∞ ∞
D 6 6 13 10 ∞
F 10 10 18
E 10 13
G 13


PATH     SHORTEST PATH       SHORTEST DISTANCE
A→B      A→B                 5
A→C      A→C                 6
A→D      A→C→D               8
A→E      A→C→D→E             10
A→F      A→C→F               10
A→G      A→C→D→E→G           13
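A Python sketch of Dijkstra's algorithm with a priority queue follows (illustrative; the graph g is an assumed adjacency list constructed to be consistent with the distances above, since the exam's figure is not reproduced here).

    import heapq

    # Dijkstra's single-source shortest paths using a min-heap.
    def dijkstra(graph, source):
        dist = {v: float("inf") for v in graph}
        dist[source] = 0
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:                    # stale heap entry, skip
                continue
            for v, w in graph[u]:              # relax every edge (u, v)
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    g = {
        "A": [("B", 5), ("C", 6)],
        "B": [("D", 4)],
        "C": [("D", 2), ("F", 4)],
        "D": [("E", 2)],
        "E": [("G", 3)],
        "F": [],
        "G": [],
    }
    print(dijkstra(g, "A"))
    # {'A': 0, 'B': 5, 'C': 6, 'D': 8, 'E': 10, 'F': 10, 'G': 13}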

17.

a) Find the optimal parenthesis of matrix chain product whose sequence of dimensions
is 5 x 4, 4 x 6, 6 x 2, 2 x 7
Ans:

Minimum number of scalar multiplications = 158
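A short dynamic-programming sketch (illustrative, not part of the original answer) reproduces the value 158 for the dimension sequence 5 x 4, 4 x 6, 6 x 2, 2 x 7.

    # Matrix-chain DP: p holds the dimension sequence, so matrix i is p[i-1] x p[i].
    def matrix_chain_order(p):
        n = len(p) - 1                          # number of matrices
        m = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):          # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = min(
                    m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    for k in range(i, j)
                )
        return m[1][n]

    print(matrix_chain_order([5, 4, 6, 2, 7]))   # 158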


b) Explain 4 queen problem. Draw the state space tree for 4 queen problem.
Ans:
 4-Queens Problem
 4 queens are to be placed on a 4 x 4 chessboard so that no two attack. That is, no
two queens are on the same row, column, or diagonal.

 State Space Tree of 4 Queens Problem
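The state space tree is explored by backtracking. A compact Python sketch (illustrative, not part of the original answer) that enumerates the 4-queens solutions in this way:

    # Backtracking for N-queens; board[r] is the column of the queen in row r.
    def solve_n_queens(n):
        solutions, board = [], []

        def safe(row, col):
            for r, c in enumerate(board):
                if c == col or abs(c - col) == abs(r - row):   # same column or diagonal
                    return False
            return True

        def place(row):
            if row == n:
                solutions.append(board[:])        # all queens placed
                return
            for col in range(n):
                if safe(row, col):
                    board.append(col)             # place a queen and go deeper
                    place(row + 1)
                    board.pop()                   # backtrack
        place(0)
        return solutions

    print(solve_n_queens(4))   # [[1, 3, 0, 2], [2, 0, 3, 1]]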

18.

a) Define TSP problem. Apply branch and bound algorithm for solving TSP.

Ans:
The Travelling Salesman Problem (TSP): given a set of cities and the cost of travel between every pair of cities, find a minimum cost tour that visits each city exactly once and returns to the starting city.

Adjacency Matrix:
 ∞  10  50  45
10   ∞  25  25
50  25   ∞  40
45  25  40   ∞
b) Write Floyd Warshall’s algorithm for finding all pairs shortest path algorithm.
Ans:
Algorithm FloydWarshall(cost[][], n)
{
for i = 1 to n do
for j = 1 to n do
D[i, j] = cost[i, j]
for k = 1 to n do
for i = 1 to n do
for j = 1 to n do
D[i, j] = min{D[i, j], D[i, k] + D[k, j]}
return D
}
 Time Complexity
o The Floyd Warshall algorithm consists of three nested loops over all the nodes, and
the body of the innermost loop takes constant time.
o Hence, the time complexity of the Floyd Warshall algorithm = O(n^3), where n is
the number of nodes in the given graph.

19.

a) Explain the first fit-decreasing strategy of bin packing algorithm.


Ans:
 First Fit Decreasing Algorithm
o Sort the items in the descending order of their size
o Apply First fit algorithm
o Time Complexity
 Best case Time Complexity = θ(n log n)
 Average case Time Complexity = θ(n2)
 Worst case Time Complexity = θ(n2)
 Example: bin capacity = 10, sizes of the items are {5, 7, 5, 2, 4, 2, 5, 1, 6}.
o Arrange the items in decreasing order of size: {7, 6, 5, 5, 5, 4, 2, 2, 1}
o Apply first fit: Bin 1 = {7, 2, 1}, Bin 2 = {6, 4}, Bin 3 = {5, 5}, Bin 4 = {5, 2}
Number of bins required = 4
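A Python sketch of First Fit Decreasing (illustrative), run on the item sizes from the example:

    # First Fit Decreasing heuristic for bin packing.
    def first_fit_decreasing(sizes, capacity):
        bins = []                               # remaining capacity of each open bin
        packing = []                            # items placed in each bin
        for item in sorted(sizes, reverse=True):
            for i, free in enumerate(bins):
                if item <= free:                # first open bin where the item fits
                    bins[i] -= item
                    packing[i].append(item)
                    break
            else:                               # no open bin fits: open a new one
                bins.append(capacity - item)
                packing.append([item])
        return packing

    print(first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1, 6], 10))
    # [[7, 2, 1], [6, 4], [5, 5], [5, 2]]  -> 4 bins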

b) Prove that Clique Decision problem is NP-complete


Ans:
 CLIQUE problem is NP Complete: Proof
 Step 1: Write a polynomial time verification algorithm to prove that the given
problem is NP
o Algorithm: Let G= (V,E), we use the set V’ ⊆ V of k vertices in the clique
as a certificate of G
1. Test whether V’ is a set of k vertices in the graph G
2. Check whether for each pair (u,v) ∈ V’, the edge (u,v) belongs to E.
3. If both steps pass, then accept. Otherwise reject.
o This algorithm will execute in polynomial time. Therefore CLIQUE
problem is a NP problem.
 Step 2: Write a polynomial time reduction algorithm from 3-CNF-SAT problem
to CLIQUE problem(3-CNF-SAT ≤p CLIQUE)
o Algorithm
 Let Ф = C1 ˄ C2 . . . . . ˄ Ck be a Boolean formula in 3CNF with k
clauses
 Each clause Cr has exactly three distinct literals lr1, lr2, lr3.
 Construct a graph G such that Ф is satisfiable iff G has a clique of size k.
 The graph G is constructed as follows
 For each clause Cr = (lr1 V lr2 V lr3) in Ф, we place a triple of
vertices Vr1, Vr2 and Vr3 into V.
 Put an edge between Vri and Vsj if the following two conditions hold
o Vri and Vsj are in different triples (that is, r != s)
o lri is not the negation of lsj

 Example: Ф = (x1 V ¬x2 V ¬x3) ˄ (¬x1 V x2 V x3) ˄ (x1 V x2 V x3)


o The graph G equivalent to Ф is as follows

o If G has a clique of size k, then Ф has a satisfying assignment. Here k=3.


o G can easily be constructed from Ф in polynomial time.
o So CLIQUE problem is NP Hard.
 Conclusion
o CLIQUE problem is NP and NP Hard. So it is NP-Complete

20.

a) Differentiate Las Vegas and Monte Carlo algorithms


Ans:
 Randomized Las Vegas Algorithms
 Output is always correct and optimal.
 Running time is a random number
 Running time is not bounded
 Example: Randomized Quick Sort
 Randomized Monte Carlo Algorithms:
 May produce correct output with some probability
 A Monte Carlo algorithm runs for a fixed number of steps. That is the running
time is deterministic
 Example: Finding an ‘a’ in an array of n elements
 Input: An array of n≥2 elements, in which half are ‘a’s and the other half are
‘b’s
 Output: Find an ‘a’ in the array

 Las Vegas algorithm

Algorithm findingA_LV(A, n)
{
repeat
{
Randomly choose one element out of n elements
}until(‘a’ is found)
}
 The expected number of trials before success is 2.
 Therefore the expected time complexity = O(1)

 Monte Carlo algorithm

Algorithm findingA_MC(A, n, k )
{
i=0;
repeat
{
Randomly select one element out of n elements
i=i+1;
}until(i=k or ‘a’ is found);
}
 This algorithm does not guarantee success, but the run time is bounded.
The number of iterations is always less than or equal to k.
 Therefore the time complexity = O(k)

b) Explain randomized quick sort with the help of suitable examples


Ans:

Algorithm randQuickSort(A[], low, high)

1. If low >= high, then EXIT


2. While pivot 'x' is not a Central Pivot.
2.1. Choose uniformly at random an element from A[low..high]. Let the randomly picked
element be x.
2.2. Count elements in A[low..high] that are smaller than x. Let this count be sc.
2.3. Count elements in A[low..high] that are greater than x. Let this count be gc.
2.4. Let n = (high-low+1). If sc >= n/4 and gc >= n/4, then x is a central pivot.
3. Partition A[low..high] into two subarrays. The first subarray has all the elements of A
that are less than x and the second subarray has all those that are greater than x. Now the
index of x be pos.
4. randQuickSort(A, low, pos-1)
5. randQuickSort(A, pos+1, high)

Example:
Consider an unsorted array : [3, 6, 8, 10, 1, 2, 4]
Randomly choose one element as pivot element with the condition that n/4 elements
are greater than that pivot element and n/4 elements are less than that pivot element.
Suppose we select element 4 as the pivot element. Here 3 elements are greater than 4
and 3 elements are less than 4.
n/4 = 7/4 ≈ 1
So the condition is satisfied and element 4 is chosen as the pivot.
Swap the pivot with the 1st element of the array.
Now the array is : [4, 6, 8, 10, 1, 2, 3]
Partition the array

4 6 8 10 1 2 3    (low points to 6, high points to 3)
4 3 8 10 1 2 6    (swap 6 and 3)
4 3 8 10 1 2 6    (low points to 8, high points to 2)
4 3 2 10 1 8 6    (swap 8 and 2)
4 3 2 10 1 8 6    (low points to 10, high points to 1)
4 3 2 1 10 8 6    (swap 10 and 1)
4 3 2 1 10 8 6    (high has crossed low)
1 3 2 4 10 8 6    (swap the pivot 4 with the element at high, i.e. 1)
1 3 2 4 10 8 6
Now the location of 4 is fixed. It partitions the array into 2 subarrays
Subarray1: [1, 3, 2]
Subarray2: [10, 8, 6]

Then recursively call the two subarrays and perform the above operations.
First consider the subarray1: [1, 3, 2]
Suppose 2 is the randomly selected pivot element. Swap it with the 1st element in that
array. Now the array becomes [2, 3, 1].
Partition this array

2 3 1    (low points to 3, high points to 1)
2 1 3    (swap 3 and 1)
2 1 3    (high has crossed low)
1 2 3    (swap the pivot 2 with the element at high, i.e. 1)
1 2 3
Now 2 is placed in its actual location. It partitions the array into 2 subarrays. These
subarrays contain only one element. So no further sorting is needed.

Now consider the subarray2: [10, 8, 6]


Suppose 8 is the randomly selected pivot element. Swap it with the 1st element in that
array. Now the array becomes [8, 10, 6].

Partition this array

8 10 6    (low points to 10, high points to 6)
8 6 10    (swap 10 and 6)
8 6 10    (high has crossed low)
6 8 10    (swap the pivot 8 with the element at high, i.e. 6)
6 8 10
Now 8 is placed in its actual location. It partitions the array into 2 subarrays. These
subarrays contain only one element. So no further sorting is needed.

The resultant sorted array is :

1 2 3 6 8 10
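A compact Python sketch of randomized quicksort is shown below (illustrative; it uses a plain uniformly random pivot with a Lomuto-style partition rather than the central-pivot retry loop described above).

    import random

    # Randomized quicksort with a uniformly random pivot, sorting in place.
    def rand_quicksort(a, low=0, high=None):
        if high is None:
            high = len(a) - 1
        if low >= high:
            return
        pivot_index = random.randint(low, high)      # choose the pivot at random
        a[low], a[pivot_index] = a[pivot_index], a[low]
        pivot = a[low]
        i = low
        for j in range(low + 1, high + 1):           # elements < pivot go to the left part
            if a[j] < pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[low], a[i] = a[i], a[low]                  # put the pivot in its final place
        rand_quicksort(a, low, i - 1)
        rand_quicksort(a, i + 1, high)

    data = [3, 6, 8, 10, 1, 2, 4]
    rand_quicksort(data)
    print(data)   # [1, 2, 3, 4, 6, 8, 10]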
