DAA Important Questions for Exams
Quicksort is a divide-and-conquer algorithm. It selects a pivot and partitions the array so that elements less than the pivot precede those greater. Recursively sorting the partitions yields an overall sorted array. Its average- and best-case complexity is O(n log n), while the worst case is O(n^2), often mitigated by randomizing the pivot selection. For example, sorting [6, 3, 8, 5, 2] with 5 as the pivot splits the array into [3, 2] and [6, 8]; recursively sorting these and concatenating them around the pivot gives [2, 3, 5, 6, 8] (unlike merge sort, no merging step is needed). Quicksort is efficient on average, more so with tweaks like median-of-three pivot selection.
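The partition-and-recurse idea above can be sketched as follows (a minimal out-of-place version with a random pivot; production implementations partition in place):

```python
import random

def quicksort(a):
    # Returns a new sorted list; a random pivot mitigates the O(n^2) worst case.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]       # elements preceding the pivot
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]    # elements following the pivot
    return quicksort(less) + equal + quicksort(greater)
```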
Heap Sort builds a max-heap from the input data, then repeatedly extracts the maximum element and restores heap order. For A = {6, 14, 3, 25, 2, 10, 20, 7, 6}, start by building a max-heap, adjusting [6, 14, 3, 25, 2, 10, 20, 7, 6] to [25, 14, 20, 7, 2, 10, 3, 6, 6]. Extract 25 (the root), replace it with the last element, and re-heapify; repeat this until the array is in order. The heap shrinks by one after each extraction, yielding the sorted array [2, 3, 6, 6, 7, 10, 14, 20, 25]. Time complexity is O(n log n), where n is the number of elements.
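The build-heap and repeated-extraction steps can be sketched as an in-place array algorithm:

```python
def heap_sort(a):
    n = len(a)

    def sift_down(i, size):
        # Push a[i] down until the max-heap property holds within a[:size].
        while True:
            largest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < size and a[l] > a[largest]:
                largest = l
            if r < size and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):   # build the max-heap bottom-up
        sift_down(i, n)
    for end in range(n - 1, 0, -1):       # extract the max, shrink heap by 1
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```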
The recursion tree method visualizes the recurrence T(n) = T(αn) + T((1-α)n) + cn by expanding the recursive calls. Each call branches into two subproblems of sizes αn and (1-α)n, and every complete level of the tree contributes a total of cn work, since the subproblem sizes across a level sum to n. Leaf nodes handle base cases in O(1) time. The shallowest leaves occur at depth log_{1/α} n and the deepest at depth log_{1/(1-α)} n, both of which are Θ(log n) for any constant α in (0, 1). Summing cn over Θ(log n) levels gives T(n) = O(n log n), assuming c is positive.
Recursive backtracking explores all possibilities for subsets, adding elements and checking whether their sum matches the target. If it overshoots, it backtracks and explores without the last element. For the set {5, 7, 10, 12, 15, 18, 20} and target 35, recursively check subsets starting at each element, maintaining the current sum and path. It finds subsets such as {15, 20} and {5, 10, 20} that sum to 35 by adding elements to the solution path recursively and backtracking when the current sum exceeds the target. This method's complexity is exponential in the number of items, since in the worst case all 2^n subsets are examined.
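The include/exclude recursion with backtracking can be sketched as:

```python
def subset_sums(nums, target):
    # Collect every subset of nums (all positive) whose elements sum to target.
    results = []

    def backtrack(i, path, total):
        if total == target:
            results.append(path[:])   # record a copy of the current path
            return
        if total > target or i == len(nums):
            return                    # overshoot or exhausted: backtrack
        path.append(nums[i])          # choose nums[i]
        backtrack(i + 1, path, total + nums[i])
        path.pop()                    # un-choose nums[i] and try without it
        backtrack(i + 1, path, total)

    backtrack(0, [], 0)
    return results

found = subset_sums([5, 7, 10, 12, 15, 18, 20], 35)
```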
Prim's algorithm grows the MST from an initial vertex, repeatedly adding the smallest edge that extends the MST to a new vertex. It uses a priority queue to track the minimum edge crossing the cut between the MST and the unvisited vertices. Starting from any node, it picks the least-weight edge to a non-MST node and repeats until all nodes are included. For example, starting at vertex A with edges AB(1) and AC(3), Prim's selects AB, then examines B's edges, continually adding the minimum crossing edge until all nodes are spanned. It guarantees an MST with complexity O(V^2) for adjacency matrices or O(E log V) using priority queues with adjacency lists.
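A sketch using Python's heapq module as the priority queue; the third edge BC(2) below is an assumed addition to complete the example graph from the text:

```python
import heapq

def prim_mst(graph, start):
    # graph: {vertex: [(weight, neighbor), ...]}; returns (total_weight, edges).
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)   # lightest edge crossing the cut
        if v in visited:
            continue                    # stale entry: both ends already in MST
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for w2, nxt in graph[v]:        # new crossing edges from v
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return total, mst

# Example graph: AB(1), AC(3), plus an assumed edge BC(2).
g = {'A': [(1, 'B'), (3, 'C')],
     'B': [(1, 'A'), (2, 'C')],
     'C': [(3, 'A'), (2, 'B')]}
```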
Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems and storing their solutions to avoid redundant work. In the 0/1 Knapsack problem, we create a table where rows represent items and columns represent weight capacities up to the knapsack size. By filling the table with the maximum achievable value without exceeding each capacity, we deduce the optimal composition. For weights {5, 10, 15, 20} and values {50, 60, 120, 100} with knapsack capacity 25, we evaluate inclusion and exclusion of each item. The maximum value is 180, achieved by selecting the items of weights 10 and 15 (values 60 and 120), which exactly fill the capacity. Complexity is O(nW), n being the number of items and W the knapsack capacity.
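The table-filling idea can be sketched with a one-dimensional rolling table (a space-saving variant of the full n-by-W table described above):

```python
def knapsack(weights, values, W):
    # dp[w] = best value achievable with capacity w using items seen so far.
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # Iterate capacities in reverse so each item is used at most once (0/1).
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]
```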
The Naïve-String Matching algorithm checks for pattern occurrences by systematically aligning the pattern with each substring of the text of the same length, shifting by one character when a mismatch arises. For pattern 'aab' and text 'acaabc': at index 0, 'aca' does not match, so shift; at index 1, 'caa' does not match, so shift; at index 2, 'aab' matches; at index 3, 'abc' does not match. Thus exactly one occurrence is found, at index 2 of 'acaabc'. Its complexity is O((n-m+1)m) for text length n and pattern length m.
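The shift-and-compare loop can be sketched in a few lines:

```python
def naive_match(text, pattern):
    # Try every alignment of the pattern against the text; O((n-m+1)·m) time.
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]
```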
A binomial heap is a collection of binomial trees that satisfies the heap property: each tree's root is the smallest (min-heap) or largest (max-heap) element in that tree, with at most one tree of each degree. The union operation merges two binomial heaps by linking trees of the same degree, analogous to binary addition: linking two trees of degree k produces a tree of degree k+1 that propagates as a 'carry', while the heap property is maintained at each link. Its complexity is O(log n) because a heap of n elements contains at most O(log n) trees.
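A compact sketch of a min-heap union under these assumptions, representing a heap as a list of roots sorted by degree; the degree-bucket loop below is one way to realize the 'carry' process:

```python
class BNode:
    """Node of a binomial tree (min-heap variant)."""
    def __init__(self, key):
        self.key = key
        self.degree = 0
        self.child = None    # leftmost child
        self.sibling = None  # next sibling in a child list

def link(a, b):
    # Make the root with the larger key a child of the smaller one.
    if a.key > b.key:
        a, b = b, a
    b.sibling = a.child
    a.child = b
    a.degree += 1
    return a

def union(h1, h2):
    # Equal-degree trees are linked; the result 'carries' into the next degree,
    # exactly like adding two binary numbers bit by bit.
    buckets = {}
    for t in h1 + h2:
        d = t.degree
        while d in buckets:
            t = link(buckets.pop(d), t)
            d = t.degree
        buckets[d] = t
    return [buckets[d] for d in sorted(buckets)]
```

Insertion is then just a union with a one-node heap; after n inserts the heap holds one tree per set bit in the binary representation of n.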
Approximation algorithms find near-optimal solutions to NP-hard problems with provable bounds on the deviation from the best solution. For the set cover problem (finding a minimum collection of subsets whose union covers a set), a common heuristic iteratively selects the subset with the most uncovered elements. This greedy approach yields a solution within a logarithmic factor of the optimal size (an ln n factor for a universe of n elements). For example, given a universal set U and a collection of subsets, iteratively select the subset covering the maximum number of uncovered elements until U is covered. Despite not guaranteeing optimality, it runs efficiently and offers predictable performance bounds in practice.
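The greedy heuristic can be sketched as follows; the universe and subsets in the example are illustrative, not taken from the original text:

```python
def greedy_set_cover(universe, subsets):
    # Repeatedly pick the subset covering the most still-uncovered elements.
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            return None          # the universe cannot be covered
        cover.append(best)
        uncovered -= best
    return cover

# Illustrative instance: U = {1..5} with four candidate subsets.
U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover(U, S)
```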
To determine this, we compare the two recurrences using the Master Theorem. For T(n) = 7T(n/2) + n^2, the critical exponent is log₂7 ≈ 2.81 > 2, so case 1 applies and T(n) = Θ(n^log₂7). For T'(n) = aT'(n/4) + n^2, if log₄a > 2 the same case gives T'(n) = Θ(n^log₄a). For T'(n) to be asymptotically faster than T(n), we need log₄a < log₂7, which implies a < 4^log₂7 = (2^log₂7)^2 = 7^2 = 49. Hence, the largest integer value for 'a' is 48; indeed, log₄48 ≈ 2.79 < log₂7 ≈ 2.81, so T'(n) = Θ(n^log₄48) is asymptotically faster.
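As a sanity check on the arithmetic, the two critical exponents can be compared numerically (a small verification script, not part of the original question):

```python
import math

# Critical exponent of T(n) = 7T(n/2) + n^2 is log2(7).
exp_T = math.log2(7)          # ≈ 2.807

# Critical exponent of T'(n) = a*T'(n/4) + n^2 is log4(a).
exp_48 = math.log(48, 4)      # ≈ 2.792, strictly smaller: a = 48 works
exp_49 = math.log(49, 4)      # log4(49) = log4(7^2) = log2(7): a = 49 ties
```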