DAA Exam Preparation Questions

Uploaded by saiv3328

UNIT-1

1) Apply the divide-and-conquer paradigm to explain the Merge Sort algorithm with an example.
2) Trace the Quick Sort algorithm on the following list of elements:
65, 70, 75, 80, 85, 60, 55, 50, 45
3) Define an algorithm. What are the different criteria an algorithm must satisfy? (algorithm specifications)
4) Explain how an algorithm's performance is analysed. Describe asymptotic notations.
5) What is meant by time complexity? Define the different time complexity notations and give an example of each.
6) What is Strassen's matrix multiplication? Explain its time complexity.
7) Develop the algorithms for the following:
i) UNION
ii) FIND
iii) WEIGHTED UNION
8) a) Derive the time complexity of Quick Sort.
b) Draw the tree of calls of merge sort for the following set:
(35, 25, 15, 10, 45, 75, 85, 65, 55, 5, 20, 18)

UNIT-2
1. (a) Present a greedy algorithm for sequencing unit-time jobs with deadlines and profits.
(b) What is the greedy method? Explain with an example.
2. (a) Explain how Dijkstra's algorithm applies the greedy method.

(b) What is the solution generated by the function JS when n = 4,

(p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1)?
3. Explain spanning trees, write Kruskal's algorithm, and illustrate it with an example graph.
4. Explain how to find a minimum cost spanning tree using Prim's algorithm.

5. What is a minimum cost spanning tree? Identify an efficient data structure for the implementation
of Kruskal's algorithm on a connected weighted graph.
UNIT-3

DYNAMIC PROGRAMMING

1. Find the shortest paths from node 1 to every other node in the graph given below using the
all-pairs shortest path algorithm.

2. What is an optimal binary search tree? Construct an optimal binary search tree for the given
instance n = 4:

(a1, a2, a3, a4) = (do, if, int, while)

p(1..4) = (3, 3, 1, 1), q(0..4) = (2, 3, 1, 1, 1)

3. What is the travelling salesperson problem? Solve the travelling salesperson problem for the
directed graph with the cost matrix:

0  10  15  20
5   0   9  10
6  13   0  12
8   8   9   0

4. Solve the 0/1 knapsack problem for the following data using the sets method.
5. Design a system with maximum reliability consisting of three types of devices
D1, D2, D3 whose costs are $30, $15 and $20 respectively. The cost of the system must be no more than
$105, and the reliability of each device type is 0.9, 0.8 and 0.5 respectively.

6. Solve the travelling salesman problem for the following graph

UNIT-4

BACKTRACKING , BRANCH AND BOUND

Problems that ask for a set of solutions, or for an optimal solution satisfying some constraints,
can be solved using the backtracking formulation.

1. What are explicit constraints and implicit constraints? Specify the explicit and implicit
constraints for the 8-queens problem and the sum of subsets problem.

2. Explain the state space tree in detail and write the recursive and iterative
backtracking algorithms.

3. Write the backtracking algorithm to solve n-queens problem.

4. Explain the backtracking algorithm to solve sum of subsets problem along with state space
tree.
5. Explain the backtracking algorithm to solve graph coloring problem with an example.

6. Explain the backtracking algorithm to find a Hamiltonian cycle.

7. Explain briefly how backtracking can be used to solve 0/1 Knapsack problem.

8. What are FIFO, LIFO and Least cost branch and bound paradigms?

** 9. Explain the control abstraction for least cost search (pg. 407 in text)

10. How does branch and bound differ from backtracking?

11. Consider the knapsack instance n = 4, (p1, p2, p3, p4) = (10, 10, 12, 18), (w1, w2, w3, w4) = (2, 4, 6, 9)
and m = 15. Solve this 0/1 knapsack problem using least cost branch and bound and FIFO branch
and bound (textbook example, pp. 414 to 418).

12. Solve the following travelling salesperson problem using least cost branch and bound.

UNIT-5

ALGEBRAIC PROBLEMS, LOWER BOUND THEORY and NP-HARD AND NP-COMPLETE PROBLEMS

1. (a) Write short notes on

i) the class of NP-hard problems
ii) the class of NP-complete problems
(b) Prove that if NP ≠ co-NP, then P ≠ NP.
2. Explain P, NP-hard and NP-complete problems and their relationship using a diagram. Also write
the nondeterministic polynomial time algorithms for the clique, 0/1 knapsack and satisfiability
problems.

3. Explain with an example the steps involved in integer multiplication using mod p
transformations.

4. Explain briefly the steps involved in polynomial multiplication using algebraic transformations.

5. Distinguish between deterministic and non-deterministic algorithms

Common questions


An optimal binary search tree minimizes the expected search cost for a given set of keys, each with a probability of being searched (p for successful searches and q for unsuccessful ones). For the instance n = 4 with keys (a1, a2, a3, a4) = (do, if, int, while) and weights p = (3, 3, 1, 1), q = (2, 3, 1, 1, 1), the optimal tree is found by dynamic programming: compute the weighted cost of every subtree of consecutive keys, building up from single keys to the full key set, and record the root choice that minimizes each subproblem's cost. Placing frequently searched keys near the root keeps the average search path short; the cost of the whole tree is then read off the dynamic programming table.
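As a sketch of how that table is filled, the following Python follows one standard formulation (Horowitz–Sahni), where c(i, j) = w(i, j) + min over k of {c(i, k−1) + c(k, j)}; the function name is ours:

```python
def obst_cost(p, q):
    """Minimum weighted cost c(0, n) of an optimal BST.

    p[1..n] are key weights, q[0..n] are dummy (failure-node) weights."""
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                      # c[i][i] = 0 for empty subtrees
    for length in range(1, n + 1):          # subtree sizes, smallest first
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            c[i][j] = w[i][j] + min(c[i][k - 1] + c[k][j]
                                    for k in range(i + 1, j + 1))
    return c[0][n]

# Instance above: keys (do, if, int, while); leading 0 pads p to 1-based indexing
print(obst_cost([0, 3, 3, 1, 1], [2, 3, 1, 1, 1]))  # 32
```

Recording the minimizing k for each (i, j) pair would also recover the tree itself, not just its cost.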

The FIFO (first-in-first-out) branch and bound strategy processes nodes of the state space tree as a queue, exploring them in the order they were generated without regard to their estimated cost. The strategy is systematic and is guaranteed to cover every node needed to find the optimal solution. However, because it does not prioritize promising nodes, it may explore many suboptimal paths before reaching the optimum, especially when the search space is large and the optimal nodes are not among the early generations, increasing running time compared with least cost (LC) branch and bound.
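A minimal Python sketch of FIFO branch and bound on the 0/1 knapsack, using the Unit-4 instance; the names are ours, and the greedy fractional bound assumes items are already sorted by decreasing profit/weight ratio (true for this instance):

```python
from collections import deque

def knapsack_fifo_bb(profits, weights, capacity):
    """FIFO branch and bound for 0/1 knapsack; nodes are (level, profit, weight)."""
    n = len(profits)
    best = 0

    def bound(level, profit, weight):
        # Fractional-knapsack upper bound on any completion of this node.
        for i in range(level, n):
            if weight + weights[i] <= capacity:
                weight += weights[i]
                profit += profits[i]
            else:
                return profit + profits[i] * (capacity - weight) / weights[i]
        return profit

    queue = deque([(0, 0, 0)])
    while queue:
        level, profit, weight = queue.popleft()   # FIFO: oldest live node first
        if level == n or bound(level, profit, weight) <= best:
            continue                              # leaf, or bound cannot beat best
        if weight + weights[level] <= capacity:   # branch: include item `level`
            new_profit = profit + profits[level]
            best = max(best, new_profit)
            queue.append((level + 1, new_profit, weight + weights[level]))
        queue.append((level + 1, profit, weight)) # branch: exclude item `level`
    return best

# Unit-4 Q.11 instance: optimal profit is 38 (items 1, 2 and 4)
print(knapsack_fifo_bb([10, 10, 12, 18], [2, 4, 6, 9], 15))  # 38
```

Replacing the deque with a max-priority queue keyed on the bound turns this into least cost branch and bound.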

Kruskal’s algorithm builds a minimum spanning tree by sorting all edges of the graph in increasing order of weight. It then repeatedly takes the smallest edge that does not form a cycle with the tree built so far, stopping when the tree has exactly V − 1 edges, where V is the number of vertices. The algorithm is efficient for sparse graphs and runs in O(E log E) time, where E is the number of edges, with the edge sort dominating. A disjoint-set (union-find) data structure tracks the connected components as the tree grows, and union by rank with path compression makes the cycle test nearly constant time per edge.
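A compact Python sketch (function names ours; path-halving find with a naive union, kept short since the weighted-union variant is discussed separately below in these notes):

```python
def kruskal(n, edges):
    """Minimum spanning tree of a graph with vertices 0..n-1.

    edges is a list of (weight, u, v); returns (total_weight, mst_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving keeps trees shallow
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # O(E log E): the sort dominates
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:            # spanning tree complete
                break
    return total, mst

# Small example: 4 vertices; edge (0,2,weight 3) is rejected as a cycle
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])[0])  # 7
```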

Branch and bound explores the solution space systematically, using a bounding (cost) function to decide which live node to expand next; the selection rule may be FIFO, LIFO, or least cost (LC) search. Backtracking, by contrast, explores the state space tree depth-first and is typically used for constraint satisfaction problems such as n-queens and sum of subsets, where feasibility checks alone prune the tree. Because branch and bound compares cost bounds to prune suboptimal subtrees, it is better suited to optimization problems.

The weighted union technique improves the efficiency of the disjoint set operations UNION and FIND by always attaching the smaller tree under the root of the larger tree. This keeps the trees shallow: the height of any tree stays O(log n), which bounds the cost of each FIND. When combined with path compression, which flattens the tree whenever FIND is called, the amortized cost per operation becomes nearly O(1) (more precisely, it grows with the inverse Ackermann function). This makes implementations of, for example, Kruskal's minimum spanning tree algorithm considerably faster.
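A sketch of the combined technique in Python (the class name and field names are ours):

```python
class DisjointSet:
    """Union-find with weighted union (by size) and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path compression: after locating the root, point every node
        # on the path directly at it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        # Weighted union: attach the smaller tree under the larger root.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                     # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2), ds.find(3) == ds.find(0))  # True False
```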

NP-complete problems are a class of computational problems for which no polynomial-time algorithms are known, yet any proposed solution can be verified in polynomial time. They are the hardest problems in NP: solving any one NP-complete problem in polynomial time would imply P = NP, a major open question in computer science. A standard example is the decision version of the travelling salesperson problem, which asks whether there is a route of at most a given length that visits each city exactly once and returns to the origin city. NP-complete problems serve as benchmarks for computational hardness and shape how algorithms are designed for real-world problem-solving.

Quick Sort uses a divide-and-conquer approach, and its time complexity depends mainly on how evenly the pivot partitions the array. In the best case, where the pivot splits the array into two equal halves each time, the running time is O(n log n); the average case is also O(n log n). In the worst case, where the pivot is always the smallest or largest element (as with an already sorted array and a fixed first or last pivot), the running time degrades to O(n^2). Random pivot selection makes the worst case unlikely in practice and preserves the expected O(n log n) behaviour. Quick Sort also sorts in place, which keeps its additional memory overhead small.
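An in-place sketch with a random pivot (Lomuto partition; names ours), run on the Unit-1 Q.2 list:

```python
import random

def quicksort(a, lo=0, hi=None):
    """Sort list a in place; expected O(n log n) thanks to random pivots."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    r = random.randint(lo, hi)      # random pivot avoids the sorted-input worst case
    a[r], a[hi] = a[hi], a[r]
    pivot, i = a[hi], lo
    for j in range(lo, hi):         # Lomuto partition around the pivot
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]       # pivot lands in its final position i
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a

print(quicksort([65, 70, 75, 80, 85, 60, 55, 50, 45]))
# [45, 50, 55, 60, 65, 70, 75, 80, 85]
```

Only O(log n) extra stack space is used on average, which is what makes Quick Sort's memory overhead so small compared with Merge Sort.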

The dynamic programming approach to the travelling salesperson problem (Held–Karp) breaks the problem into simpler subproblems and stores their solutions. It uses a state space representation where a state is a pair consisting of the set of visited vertices and the current vertex, and solutions are built by extending them one edge at a time. Memoization ensures that each subproblem is computed only once, eliminating redundancy, and the minimal cost tour is obtained recursively. The limitation is the exponential time complexity O(n^2 · 2^n), which grows quickly with the number of vertices: far better than naively enumerating all (n − 1)! tours, but still impractical for very large graphs.
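The state recursion can be sketched as follows (function names ours; the cost matrix is the Unit-3 example, whose optimal tour 1 → 2 → 4 → 3 → 1 costs 35):

```python
from functools import lru_cache

def tsp(cost):
    """Held-Karp: minimum tour cost starting and ending at vertex 0.

    The memoized state is (bitmask of visited vertices, current vertex)."""
    n = len(cost)

    @lru_cache(maxsize=None)
    def g(mask, i):
        # Cheapest way to visit every vertex not in mask from i, then return to 0.
        if mask == (1 << n) - 1:
            return cost[i][0]
        return min(cost[i][j] + g(mask | (1 << j), j)
                   for j in range(n) if not mask & (1 << j))

    return g(1, 0)   # start at vertex 0, which is already visited

cost = ((0, 10, 15, 20),
        (5, 0, 9, 10),
        (6, 13, 0, 12),
        (8, 8, 9, 0))
print(tsp(cost))  # 35
```

There are O(n · 2^n) states and each takes O(n) work to evaluate, which is where the O(n^2 · 2^n) bound comes from.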

Asymptotic notations describe the limiting behaviour of an algorithm's time or space requirements as the input size approaches infinity, allowing algorithms to be evaluated and compared independently of language or machine. Big O gives an upper bound on growth (commonly used for the worst case); Big Omega (Ω) gives a lower bound; and Big Theta (Θ) gives a tight bound when the upper and lower bounds match. These notations let developers ignore constant factors and lower-order terms and focus on the dominant growth rate when selecting and optimizing algorithms.
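Formally, the three notations can be stated as follows, with a small worked example:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 > 0 :\ 0 \le f(n) \le c\,g(n) \ \ \forall n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 :\ 0 \le c\,g(n) \le f(n) \ \ \forall n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))

% Worked example: 3n^2 + 10n = \Theta(n^2), since
% 3n^2 \le 3n^2 + 10n \le 4n^2 for all n \ge 10 (take c_1 = 3, c_2 = 4, n_0 = 10).
```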

Strassen’s matrix multiplication algorithm improves on the conventional O(n^3) method. It divides each n × n matrix into four submatrices and, at each level of recursion, computes seven submatrix products and 18 additions/subtractions instead of the usual eight products. Solving the recurrence T(n) = 7T(n/2) + O(n^2) gives a running time of O(n^log2 7) ≈ O(n^2.81), a significant improvement for large matrices. The saving is most pronounced on large inputs, at the cost of more bookkeeping in combining the submatrix results.
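The seven products for the 2 × 2 base case can be sketched as follows (variable names ours; the recursive version applies the same formulas to submatrices):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with Strassen's seven products (vs. eight naive)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the seven products into the four entries of C = A * B.
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p5 + p1 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Counting the combining additions as well gives the 18 addition/subtraction operations per level mentioned above.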
