This is a substantial request.
I'll create questions covering topics from the provided PDFs and vary the difficulty levels. Keep in mind that generating a perfect and comprehensive question set is an iterative process, and I may need some feedback to fine-tune the questions to your specific exam requirements.
Here's the first set of questions. I will provide the rest in the follow-up replies.
Data Structures Fundamentals
1. Which keyword is used to define a structure in C++?
a) class
b) union
c) struct
d) typedef
Answer: c) struct
Explanation: The struct keyword is specifically designed for defining structures, which
are user-defined data types that group together variables of different data types.
2. What is a characteristic of structures?
Answer: They can group members of different data types.
Explanation: Structures can contain elements of various data types, including pointers; they are built from primitive data types (and other structures) as components.
3. How do you access members of a structure variable?
Answer: Using the dot operator (.).
Explanation: The dot operator (.) is used to access members of a structure variable directly, while the arrow operator (->) is used with pointer variables pointing to a structure.
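To make the two access operators concrete, here is a minimal sketch (not taken from the PDFs; the `Student` structure and its fields are invented for illustration):

```cpp
#include <iostream>
#include <string>

// A simple user-defined structure grouping variables of different types.
struct Student {
    std::string name;
    int age;
    double gpa;
};

int main() {
    Student s{"Alice", 20, 3.7};
    Student* p = &s;                 // pointer to the same structure variable

    std::cout << s.name << '\n';     // dot operator: access through the variable
    std::cout << p->age << '\n';     // arrow operator: access through a pointer
    std::cout << (*p).gpa << '\n';   // equivalent to p->gpa
    return 0;
}
```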
4. Which is true about singly linked lists?
a) Each node contains pointers to both the next and previous nodes.
b) They allow random access to any element in the list.
c) They require knowing the size of the list in advance.
d) Each node contains a pointer to the next node.
Answer: d) Each node contains a pointer to the next node.
Explanation: Singly linked lists have nodes that point only to the subsequent node, while doubly linked lists have pointers to both the next and previous nodes.
5. What is a major advantage of linked lists over arrays?
Answer: Dynamic size.
Explanation: Linked lists dynamically allocate memory as needed, avoiding the need to predefine the array size.
6. In a singly linked list, what does the 'next' pointer of the last node typically contain?
Answer: c) NULL.
7. Linked lists are an example of which type of data structure?
a) Static
b) Dynamic
c) Compile-time
d) Fixed-size
Answer: b) Dynamic
Explanation: Linked lists are dynamic data structures because their size can change
during runtime as nodes are added or removed.
Algorithm Analysis
1. What does Big-O notation describe about an algorithm?
Explanation: Big-O notation describes the upper bound of an algorithm's growth rate, representing the worst-case scenario.
2. An algorithm has a time complexity of O(n log n). What does this imply?
a) The algorithm's execution time grows linearly with the input size.
b) The algorithm's execution time grows quadratically with the input size.
c) The algorithm's execution time grows in proportion to n multiplied by the logarithm of
n.
d) The algorithm's execution time is constant regardless of the input size.
Answer: c) The algorithm's execution time grows in proportion to n multiplied by the logarithm of n.
Explanation: O(n log n) means the running time scales with n times log n, which is typical of efficient comparison-based sorting algorithms such as Merge Sort.
a) O(n)
b) O(log n)
c) O(1)
d) O(n log n)
Answer: c) O(1)
4. Which of the following is NOT a characteristic of a well-defined algorithm?
a) Finiteness
b) Ambiguity
c) Definiteness
d) Effectiveness
Answer: b) Ambiguity
Explanation: An algorithm's steps must be unambiguous; finiteness, definiteness, and effectiveness are all required characteristics of an algorithm.
5. Which asymptotic notation is most commonly used to express an algorithm's worst-case (upper-bound) growth rate?
a) Small-o
b) Big-Omega
c) Theta
d) Big-O
Answer: d) Big-O
Sorting and Searching
1. What is the worst-case time complexity of Linear Search?
a) O(1)
b) O(log n)
c) O(n)
d) O(n²)
Answer: c) O(n)
Explanation: Linear search has to potentially check every element in the list in the worst
case.
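For reference, a linear search sketch in C++ (illustrative only); in the worst case the loop runs over every element, hence O(n):

```cpp
#include <vector>

// Returns the index of 'target' in 'a', or -1 if it is not present.
int linearSearch(const std::vector<int>& a, int target) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] == target) return static_cast<int>(i);  // best case: found immediately
    }
    return -1;  // worst case: every element was examined
}
```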
2. Under what condition is Binary Search applicable?
Answer: The data must already be sorted.
Explanation: Binary Search repeatedly halves the search interval, which only works when the elements are in sorted order.
3. What is the worst-case time complexity of Binary Search on a sorted array of n elements?
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
Answer: b) O(log n)
Explanation: Binary search halves the search space in each step, resulting in a
logarithmic time complexity.
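A minimal iterative binary search sketch (illustrative; it assumes the vector is already sorted in ascending order):

```cpp
#include <vector>

// Returns the index of 'target' in the sorted vector 'a', or -1 if absent.
int binarySearch(const std::vector<int>& a, int target) {
    int low = 0, high = static_cast<int>(a.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;      // avoids overflow of (low + high)
        if (a[mid] == target) return mid;
        if (a[mid] < target) low = mid + 1;    // discard the left half
        else                 high = mid - 1;   // discard the right half
    }
    return -1;  // not found
}
```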
4. Which of the following sorting algorithms has a time complexity of O(n²) in the worst
case?
a) Merge Sort
b) Quick Sort
c) Insertion Sort
d) Heap Sort
Answer: c) Insertion Sort
Explanation: Insertion Sort performs O(n²) comparisons in the worst case (reverse-sorted input), while Merge Sort and Heap Sort remain O(n log n). Note that Quick Sort's worst case is also O(n²), although its average case is O(n log n).
5. Which of these sorting algorithms is the most efficient choice for sorting large datasets?
a) Bubble Sort
b) Insertion Sort
c) Selection Sort
d) Merge Sort
Answer: d) Merge Sort
Explanation: Merge Sort has a time complexity of O(n log n) in all cases and is generally faster than the quadratic algorithms listed above.
6. Which of the following sorting algorithms is an in-place algorithm?
a) Merge Sort
b) Bubble Sort
c) Counting Sort
d) Bucket Sort
Answer: b) Bubble Sort
Explanation: Bubble Sort rearranges elements within the original array using only O(1) extra space; Merge Sort, Counting Sort, and Bucket Sort all need auxiliary storage.
7. Which sorting algorithm performs particularly well on data that is already nearly sorted?
a) Quick Sort
b) Bubble Sort
c) Insertion Sort
d) Selection Sort
Answer: c) Insertion Sort
Explanation: Insertion Sort has a time complexity of O(n) for nearly sorted data, because each element needs to move only a short distance.
8. Which of the following sorting algorithms is a stable sorting algorithm?
a) Quick Sort
b) Heap Sort
c) Selection Sort
d) Merge Sort
Answer: d) Merge Sort
Explanation: A stable sorting algorithm preserves the relative order of equal elements in the sorted output; Merge Sort is stable, whereas standard Quick Sort, Heap Sort, and Selection Sort are not.
9. Which of the following is NOT a comparison-based sorting algorithm?
a) Quick Sort
b) Merge Sort
c) Counting Sort
d) Heap Sort
Answer: c) Counting Sort
Explanation: Counting Sort orders elements by counting occurrences of key values rather than by comparing elements against each other.
a) O(n)
b) O(n^2)
c) O(n log n)
d) O(log n)
I will provide 75 more questions in the follow-up replies. Please let me know if you want
questions on specific topics first.
yeah continue
Okay, here is the next batch of questions, covering various aspects of the topics.
1. What is the purpose of a self-referential structure?
Answer: c) To define a structure that contains a pointer to another instance of the same structure type.
Explanation: Self-referential structures are crucial for building dynamic data structures
like linked lists and trees, where each element (node) needs to refer to other elements of
the same type.
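As an illustration (a sketch, not code quoted from the PDFs), a self-referential structure for a singly linked list node could be declared like this:

```cpp
// A self-referential structure: 'next' points to another Node of the same type.
struct Node {
    int data;     // the payload stored in this node
    Node* next;   // pointer to the next node, or NULL/nullptr at the end of the list
};
```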
2. Which operation is not typically associated with Abstract Data Types (ADTs)?
a) insert()
b) delete()
c) display()
d) implementation()
Answer: d) implementation()
Explanation: ADTs define the what (operations and data), not the how (implementation).
Implementation is a separate step.
3. Which of the following is the best definition of abstraction in the context of data
structures?
a) Hiding the internal details of a data structure from the user.
b) Creating a general-purpose data structure that can be used in any situation.
c) Implementing a data structure in a specific programming language.
d) Optimizing a data structure for performance.
Answer: a) Hiding the internal details of a data structure from the user.
Explanation: Abstraction focuses on what a data structure does rather than how it does
it, simplifying the user's interaction.
4. What is the primary difference between a singly linked list and a doubly linked list?
a) Doubly linked lists are more memory-efficient.
b) Singly linked lists can only store primitive data types.
c) Doubly linked lists allow traversal in both directions.
d) Singly linked lists are dynamically allocated, while doubly linked lists are statically allocated.
Answer: c) Doubly linked lists allow traversal in both directions.
Explanation: Doubly linked lists maintain pointers to both the next and previous nodes, enabling bidirectional traversal.
1. What is a disadvantage of linked lists compared to arrays?
Answer: c) Accessing an element in a linked list requires traversing the list from the beginning.
beginning.
Explanation: Arrays allow direct access to elements via their index, whereas linked lists
require sequential traversal.
2. What is the term for inserting a node at the beginning of a linked list?
a) Appending
b) Prepending
c) Postpending
d) Inserting
Answer: b) Prepending
Explanation: Prepending means adding something to the beginning. Appending is
adding at the end.
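A minimal prepend (insert-at-head) sketch, assuming the `Node` structure shown earlier (illustrative):

```cpp
// Inserts a new node holding 'value' at the front of the list and
// returns the new head pointer. Runs in O(1) time regardless of list length.
Node* prepend(Node* head, int value) {
    Node* n = new Node;
    n->data = value;
    n->next = head;   // the old head becomes the second node
    return n;         // the new node is now the head
}
```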
3. What scenario best suits the use of a doubly linked list over a singly linked list?
Answer: When the list needs to be traversed in both directions.
Explanation: Doubly linked lists allow efficient bidirectional traversal, which is not possible with singly linked lists.
1. What is the advantage of using a linked list for a queue implementation compared to an
array-based queue?
a) Fixed size
b) Dynamic size
c) Faster access to elements
d) Efficient searching
Answer: b) Dynamic size
Explanation: A linked-list queue can grow and shrink as elements are added and removed, whereas an array-based queue is restricted to a predetermined maximum size.
a) O(log n)
b) O(n log n)
c) O(n)
d) O(n^2)
Answer: c) O(n)
What is the time complexity of a code fragment containing two nested loops, where the outer loop runs n times and the inner loop runs m times? (A hypothetical reconstruction of the fragment is shown after the answer.)
a) O(n)
b) O(m)
c) O(n + m)
d) O(n * m)
Answer: d) O(n * m)
Explanation: The code has nested loops, one iterating n times and the other iterating m times, so the inner statement sum++ executes n * m times.
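The code fragment this question refers to is not reproduced above; a hypothetical fragment matching its description (two nested loops over n and m, with `countPairs` being an invented name) would be:

```cpp
// Hypothetical reconstruction of the fragment the question describes.
int countPairs(int n, int m) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {        // outer loop: n iterations
        for (int j = 0; j < m; ++j) {    // inner loop: m iterations per outer iteration
            sum++;                       // executes n * m times in total -> O(n * m)
        }
    }
    return sum;
}
```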
Answer: d) O(log n)
Explanation: Logarithmic growth is slower than linear, n log n, and quadratic growth.
1. What is the space complexity of an algorithm that uses a fixed amount of memory,
regardless of the input size?
a) O(n)
b) O(log n)
c) O(1)
d) O(n²)
Answer: c) O(1)
Explanation: O(1) represents constant space complexity, meaning the memory usage does not depend on the input size.
1. If algorithm A takes 1000n^2 operations and algorithm B takes 2^n operations, for small
values of n, algorithm A might be faster. However, what happens as n grows very large?
a) Algorithm A will always be faster.
b) Algorithm B will always be faster.
c) Algorithm B will eventually become much slower than Algorithm A.
d) The performance will remain approximately the same.
Answer: c) Algorithm B will eventually become much slower than Algorithm A.
Explanation: Exponential algorithms (like 2^n) grow much faster than polynomial algorithms (like n^2) as the input size increases.
If an algorithm's running time is a sum of terms whose fastest-growing term is n^3, what is its overall Big-O complexity?
a) O(1)
b) O(n)
c) O(n^2)
d) O(n^3)
Answer: d) O(n^3)
Explanation: In Big-O notation, we only consider the term with the highest growth rate,
which is n^3 in this case.
More Sorting and Searching:
1. Which searching algorithm requires the data to be sorted before it can be used?
a) Linear Search
b) Breadth-First Search
c) Binary Search
d) Depth-First Search
Answer: c) Binary Search
Explanation: Binary Search requires the data to be sorted for efficient operation.
2. Which sorting algorithm has the best average-case time complexity?
a) Bubble Sort
b) Insertion Sort
c) Selection Sort
d) Quick Sort
Answer: d) Quick Sort
Explanation: Quick Sort generally has the best average-case time complexity, O(n log n), compared to the O(n^2) of Bubble Sort, Insertion Sort, and Selection Sort.
3. What is a stable sorting algorithm?
Answer: One that preserves the relative order of equal elements.
Explanation: Stability describes how an algorithm handles elements with equal keys: a stable sort keeps them in their original relative order.
a) Quick Sort
b) Insertion Sort
c) Bubble Sort
d) Merge Sort
2. For which searching algorithm does the best case occur when the target element is at the first position of the list?
a) Linear Search
b) Binary Search
c) Interpolation Search
d) Exponential Search
Answer: a) Linear Search
Explanation: If the element being searched for is at the first position in the list, linear search finds it immediately, in O(1) time.
3. What is the primary advantage of using binary search over linear search?
Answer: It is much faster on large datasets, provided the data is sorted.
Explanation: Binary search has a time complexity of O(log n), which is significantly faster than the O(n) of linear search, especially for large datasets.
4. The selection sort algorithm divides the input array into two parts: the sorted part and the
unsorted part. Which of the following statements best describes how Selection Sort
works?
a) It repeatedly selects the largest element from the unsorted part and swaps it with the
last element of the sorted part.
b) It repeatedly selects the smallest element from the unsorted part and swaps it with the
first element of the sorted part.
c) It repeatedly selects the smallest element from the unsorted part and swaps it with the
first element of the unsorted part.
d) It repeatedly selects the largest element from the sorted part and swaps it with the first
element of the unsorted part.
Answer: c) It repeatedly selects the smallest element from the unsorted part and swaps it
with the first element of the unsorted part.
Explanation: Selection sort finds the minimum element and places it at the beginning of
the array.
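A selection sort sketch matching that description (illustrative only):

```cpp
#include <utility>
#include <vector>

// Repeatedly selects the smallest element of the unsorted part a[i..n-1]
// and swaps it with the first element of that unsorted part.
void selectionSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        std::size_t minIdx = i;
        for (std::size_t j = i + 1; j < a.size(); ++j) {
            if (a[j] < a[minIdx]) minIdx = j;  // track the smallest remaining element
        }
        std::swap(a[i], a[minIdx]);            // the sorted prefix grows by one
    }
}
```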
5. What is the best-case time complexity of Insertion Sort?
a) O(n log n)
b) O(1)
c) O(n)
d) O(n²)
Answer: c) O(n)
Explanation: When the array is already sorted, Insertion Sort only performs n-1
comparisons, resulting in O(n) time complexity.
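An insertion sort sketch (illustrative); when the array is already sorted, the inner while loop never executes, so only n-1 comparisons are performed, which is the O(n) best case:

```cpp
#include <vector>

// Inserts each element into its correct position within the sorted prefix a[0..i-1].
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {  // shift larger elements one slot to the right
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;  // on already-sorted input the key never moves
    }
}
```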
6. In which scenario would Bubble Sort be a reasonable choice?
a) Sorting a large dataset with millions of elements.
b) Sorting a dataset that is already mostly sorted.
c) Sorting a dataset where memory usage is a primary concern.
d) Sorting a dataset where performance is critical.
Answer: b) Sorting a dataset that is already mostly sorted.
Explanation: An optimized Bubble Sort (with an early-exit flag) performs well on nearly sorted data, achieving O(n) time complexity.
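A sketch of the optimized Bubble Sort with an early-exit flag (illustrative); if a full pass makes no swaps, the array is already sorted and the algorithm stops, which is what gives O(n) behavior on nearly sorted input:

```cpp
#include <utility>
#include <vector>

void bubbleSort(std::vector<int>& a) {
    if (a.empty()) return;
    bool swapped = true;
    for (std::size_t pass = 0; pass + 1 < a.size() && swapped; ++pass) {
        swapped = false;
        for (std::size_t j = 0; j + 1 < a.size() - pass; ++j) {
            if (a[j] > a[j + 1]) {
                std::swap(a[j], a[j + 1]);  // bubble the larger element upward
                swapped = true;
            }
        }
        // If no swaps occurred in this pass, the remaining elements are already in order.
    }
}
```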
7. Is Quick Sort a stable sorting algorithm?
a) Yes, Quick Sort always maintains the relative order of equal elements.
b) Yes, but only when the input array is small.
c) No, Quick Sort does not guarantee the preservation of the relative order of equal
elements.
d) No, Quick Sort cannot sort equal elements.
Answer: c) No, Quick Sort does not guarantee the preservation of the relative order of equal elements.
Explanation: The partitioning step can move equal elements past one another, so standard Quick Sort is not stable.
I'll give you 50 more questions next. Please tell me if there are any specific areas where you'd
like more practice questions.
do not worry i need from all chapters and pdfs
Okay, I understand. I'll continue providing questions covering the data structures and algorithms
topics discussed in the PDFs, aiming for a broad and comprehensive coverage. Here is the next
batch.
1. If f(n) = Θ(g(n)), which of the following must be true?
a) f(n) = O(g(n))
b) f(n) = Ω(g(n))
c) f(n) = O(g(n)) and f(n) = Ω(g(n))
d) f(n) = o(g(n))
Answer: c) f(n) = O(g(n)) and f(n) = Ω(g(n))
Explanation: Θ (Theta) notation provides a tight bound, indicating that f(n) grows at the same rate as g(n): it is bounded both above and below by constant multiples of g(n).
2. Which of the following is the correct ordering of growth rates from slowest to fastest?
a) O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n)
b) O(1) < O(n) < O(log n) < O(n log n) < O(n^2) < O(2^n)
c) O(1) < O(log n) < O(n) < O(n^2) < O(n log n) < O(2^n)
d) O(log n) < O(1) < O(n) < O(n log n) < O(n^2) < O(2^n)
Answer: a) O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n)
3. What does little-o notation, f(n) = o(g(n)), signify?
Explanation: Little-o notation specifies a strict (non-tight) upper bound: f(n) becomes insignificant relative to g(n) as n approaches infinity.
4. If f(n) = O(g(n)), then g(n) is:
Answer: Ω(f(n)).
Explanation: Big-O and Big-Omega are duals: if f(n) is bounded above by a constant multiple of g(n), then g(n) is bounded below by a constant multiple of f(n).
5. What is the main purpose of the Master Theorem?
Answer: b) To provide a simple way to get tight asymptotic bounds for recurrence relations.
Explanation: The Master Theorem is a tool for analyzing the time complexity of
recursive algorithms.
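As a quick worked example (this is the standard statement of the theorem, not text quoted from the PDFs), for recurrences of the form T(n) = a·T(n/b) + f(n):

```latex
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\; b > 1.
% Example (Merge Sort): a = 2, b = 2, f(n) = \Theta(n).
% Since n^{\log_b a} = n^{\log_2 2} = n and f(n) = \Theta(n^{\log_b a}),
% the balanced case of the theorem applies, giving:
T(n) = \Theta\!\left(n^{\log_b a}\,\log n\right) = \Theta(n \log n).
```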
6. Which of the following statements is correct regarding the relationship between O(n) and
Ω(n)?
a) O(n) represents the best-case complexity, while Ω(n) represents the worst-case complexity.
b) O(n) represents the upper bound on the growth rate, while Ω(n) represents the lower bound on
the growth rate.
c) O(n) and Ω(n) are always equal for any given algorithm.
d) O(n) represents the average-case complexity, while Ω(n) represents the worst-case
complexity.
Answer: b) O(n) represents the upper bound on the growth rate, while Ω(n) represents the lower bound on the growth rate.
Explanation: Big-O (O) provides an asymptotic upper bound, while Big-Omega (Ω) provides an asymptotic lower bound.
Trees (Introduction)
a) Child node
b) Root node
c) Leaf node
d) Parent node
a) Root node
b) Internal node
c) Leaf node
d) Branch node
3. Which of the following best describes the height of a tree?
a) The number of nodes in the longest path from the root to a leaf node.
b) The number of leaf nodes in the tree.
c) The number of levels in the tree.
d) The number of edges in the tree.
Answer: a) The number of nodes in the longest path from the root to a leaf node.
Explanation: The height of a tree is the length of the longest path from the root to a leaf (counted here in nodes; some texts count edges instead).
Heaps
1. What is a heap?
Answer: b) A complete tree-based data structure that satisfies the heap property
Explanation: Heaps are typically implemented using complete binary trees and maintain
either a min-heap or max-heap property.
2. What is the heap property in a min-heap?
a) The value of each node is greater than or equal to the value of its children.
b) The value of each node is less than or equal to the value of its children.
c) The value of each node is equal to the value of its children.
d) The value of the root node is always the largest element in the heap.
Answer: b) The value of each node is less than or equal to the value of its children.
Explanation: In a min-heap, the smallest element is always at the root, and the parent is
always smaller than its children.
3. What is the time complexity of inserting an element into a heap?
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
Answer: b) O(log n)
Explanation: Insertion into a heap involves adding the element at the bottom and then
"bubbling up" to maintain the heap property.
4. Which data structure is commonly used to implement a priority queue?
a) Linked list
b) Array
c) Stack
d) Heap
Answer: d) Heap
Explanation: Heaps are efficient for priority queues because they provide O(log n) time
complexity for insertion and deletion of the highest-priority element.
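To make the "bubbling up" step from question 3 concrete, here is a sketch of insertion into an array-based min-heap (illustrative only):

```cpp
#include <utility>
#include <vector>

// Min-heap stored in a vector: the parent of index i is (i - 1) / 2.
// Insertion appends the value and bubbles it up past larger parents,
// performing at most O(log n) swaps.
void heapInsert(std::vector<int>& heap, int value) {
    heap.push_back(value);                   // place the new element at the bottom
    std::size_t i = heap.size() - 1;
    while (i > 0) {
        std::size_t parent = (i - 1) / 2;
        if (heap[parent] <= heap[i]) break;  // heap property restored
        std::swap(heap[parent], heap[i]);    // bubble up one level
        i = parent;
    }
}
```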
Graphs (Introduction)
3. What is an adjacency matrix?
Explanation: An adjacency matrix is a common way to represent graphs, with rows and columns representing nodes and entries indicating the presence or absence of an edge.
4. What is the main difference between a depth-first search (DFS) and a breadth-first search
(BFS) algorithm for traversing a graph?
a) DFS explores vertices level by level, while BFS explores vertices along a branch as far as
possible.
b) DFS explores vertices along a branch as far as possible, while BFS explores vertices level by
level.
c) DFS is guaranteed to find the shortest path, while BFS is not.
d) DFS requires more memory than BFS.
Answer: b) DFS explores vertices along a branch as far as possible, while BFS explores vertices level by level.
Explanation: DFS uses a stack (implicitly through recursion) and explores deeply, while BFS uses a queue and explores breadth-wise.
1. What is the time complexity of Breadth-First Search (BFS) and Depth-First Search (DFS)
on a graph represented by an adjacency list?
a) O(V + E)
b) O(V^2)
c) O(E^2)
d) O(V * E)
Answer: a) O(V + E)
Explanation: Both BFS and DFS visit each vertex (V) and edge (E) once, resulting in a
time complexity of O(V + E).
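A BFS sketch over an adjacency list (illustrative); each vertex is enqueued at most once and each edge is examined once, which is where the O(V + E) bound comes from:

```cpp
#include <queue>
#include <vector>

// Breadth-first search from 'source' over an adjacency-list graph.
// Returns the vertices in the order they were visited.
std::vector<int> bfs(const std::vector<std::vector<int>>& adj, int source) {
    std::vector<bool> visited(adj.size(), false);
    std::vector<int> order;
    std::queue<int> q;
    visited[source] = true;
    q.push(source);
    while (!q.empty()) {
        int u = q.front();
        q.pop();
        order.push_back(u);        // visit u
        for (int v : adj[u]) {     // each edge is examined once
            if (!visited[v]) {
                visited[v] = true;
                q.push(v);         // neighbors are explored level by level
            }
        }
    }
    return order;
}
```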
Hash Tables
1. What is a hash table?
Answer: c) A data structure that maps keys to values using a hash function
Explanation: Hash tables use a hash function to compute the index (position) for each
key.
2. What is a hash function?
Answer: A function that maps a key to an index in the table.
Explanation: A hash function takes a key as input and produces an index within the hash table's storage.
3. What is a collision in a hash table?
Answer: c) When two different keys map to the same index in the hash table
Explanation: Collisions are inevitable when the number of possible keys is greater than
the number of available slots in the hash table.
4. Which of the following is a common technique for resolving collisions in a hash table?
a) Linear probing
b) Binary search
c) Merge sort
d) Breadth-first search
Answer: a) Linear probing
Explanation: Linear probing is an open-addressing strategy: when a collision occurs, the table is scanned forward from the hashed index until a free slot is found.
5. What is the average-case time complexity of searching for a key in a hash table?
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
Answer: a) O(1)
Explanation: With a good hash function that distributes keys evenly, the average search
time is constant.
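A minimal open-addressing sketch with linear probing (illustrative; a fixed-size table of non-negative ints, with -1 marking an empty slot, and no support for deletion):

```cpp
#include <vector>

// Tiny illustrative hash table for non-negative int keys.
// Empty slots hold -1; collisions are resolved by linear probing.
struct IntHashTable {
    std::vector<int> slots;
    explicit IntHashTable(std::size_t capacity) : slots(capacity, -1) {}

    std::size_t hash(int key) const {
        return static_cast<std::size_t>(key) % slots.size();  // maps a key to an index
    }

    void insert(int key) {                        // assumes the table is not full
        std::size_t i = hash(key);
        while (slots[i] != -1 && slots[i] != key)
            i = (i + 1) % slots.size();           // probe the next slot on collision
        slots[i] = key;
    }

    bool contains(int key) const {
        std::size_t i = hash(key);
        for (std::size_t probes = 0; probes < slots.size(); ++probes) {
            if (slots[i] == -1) return false;     // reached an empty slot: key absent
            if (slots[i] == key) return true;
            i = (i + 1) % slots.size();
        }
        return false;
    }
};
```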
That brings us to 75 questions. I'll provide the last 25 questions in the next response. Let me
know if you'd like me to prioritize any specific concept for this final set.
where is on chapter two and three
You are right! I apologize. I was primarily focusing on the generic Data Structure and
Algorithms fundamentals but neglected to prioritize questions specifically derived from Chapters
2 and 3 of your original documents.
For this final set of 25 questions, I will concentrate on creating problems directly inspired by the
material presented in those chapters (Sorting & Searching, Structures, Linked Lists).
Here's the final set of questions, weighted towards content from Chapters 2 & 3.
1. Which searching algorithm examines the elements of a list one at a time, in order, until the target is found or the list ends?
a) Sequential Search
b) Linear Search
c) Binary Search
d) Random Search
Answer: a) Sequential Search (also known as Linear Search)
Explanation: Sequential search checks each element in turn, so its worst-case time complexity is O(n): it may need to examine every element before the target is found or the entire list has been searched.
1. Chapter 2 mentions simple sorting algorithms for small-sized lists. Which of these is
NOT one of the algorithms mentioned?
a) Insertion Sort
b) Selection Sort
c) Bubble Sort
d) Merge Sort
Answer: d) Merge Sort
Explanation: The chapter specifically highlights Insertion Sort, Selection Sort, and Bubble Sort as simpler algorithms suitable for smaller datasets; Merge Sort is not one of them.
2. According to Chapter 2, what is the best-case time complexity of Insertion Sort?
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
Answer: c) O(n)
Explanation: Insertion sort can be O(n) in the best case, where the array is already sorted.
Explanation: Insertion Sort is empirically faster than the other simple quadratic sorts because it typically performs fewer comparisons.
1. Chapter 3 introduces the concept of structures in C++. What is a structure primarily used
for?
a) To define functions.
b) To group together related data items of different types.
c) To implement object-oriented programming principles.
d) To manage memory allocation.
Answer: b) To group together related data items of different types.
Explanation: Structures are used to create composite data types that group variables of different types under a single name.
2. According to Chapter 3, what is the purpose of the "arrow operator" (->) when working
with structures?
a) To access members of a structure variable directly.
b) To access members of a structure through a pointer to that structure.
c) To define a new structure.
d) To perform arithmetic operations on structure members.
Answer: b) To access members of a structure through a pointer to that structure.
Explanation: The arrow operator is used when you have a pointer to a structure and want to access its members.
1. Why might a structure in C++ contain a pointer to its own type?
Explanation: Such self-reference is needed to create linked lists and other dynamic data structures.
2. According to Chapter 3, in C++, a structure can:
Answer: d) Contain a mix of primitive data types, other structures, and pointers (including pointers to its own type) as members
Explanation: A structure's members may be of any type: primitive types, other structures, and pointers, including pointers to the structure's own type.
1. Chapter 3 introduces singly linked lists. What is the primary advantage of using a linked
list over an array?
Answer: Dynamic size: memory is allocated as nodes are needed.
Explanation: Linked lists dynamically allocate memory as needed, avoiding the need to predefine the array size, which is particularly beneficial when the size of the data is not known in advance.
2. According to Chapter 3, what is a common way to mark the end of a singly linked list?
a) Setting the last node's `next` pointer to point to the first node.
b) Setting the last node's `next` pointer to point to itself.
c) Setting the last node's `next` pointer to `NULL`.
d) Setting the last node's `next` pointer to a special "end-of-list" marker value.
Answer: c) Setting the last node's `next` pointer to `NULL`.
Explanation: A NULL pointer in the `next` field is the standard convention for indicating the end of a singly linked list.
1. If 'start_ptr' points to the beginning of a linked list and you want to traverse the list, what should you do if 'start_ptr' is equal to NULL?
Answer: Treat the list as empty; there is nothing to traverse and the pointer must not be dereferenced.
2. What must be done before a new node can be linked into the list?
Answer: Memory for the node must be allocated with `new`.
Explanation: Before you can manipulate a node, you need to allocate memory for it using `new`.
3. What is the purpose of the 'temp2' pointer when adding a node to the end of the list if
there are already nodes in the list?
a) To delete nodes
b) To point to the new node
c) To point to the first node
d) To step through the list until it finds the last node
Answer: d) To step through the list until it finds the last node
Explanation: The 'temp2' pointer helps in traversing the list until the last node is found.
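A sketch of appending at the end of a singly linked list, using a temporary pointer (named temp2 here to match the question) to step to the last node; illustrative only, and it assumes the `Node` structure sketched earlier:

```cpp
// Appends a new node holding 'value' at the end of the list.
// 'head' is passed by reference so an empty list can be handled as well.
void appendNode(Node*& head, int value) {
    Node* n = new Node;
    n->data = value;
    n->next = nullptr;
    if (head == nullptr) {              // empty list: the new node becomes the head
        head = n;
        return;
    }
    Node* temp2 = head;
    while (temp2->next != nullptr)      // step through the list until the last node
        temp2 = temp2->next;
    temp2->next = n;                    // link the new node after the old last node
}
```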
4. How does a doubly linked list differ from a singly linked list?
Answer: Each node in a doubly linked list also stores a pointer to the previous node.
Explanation: The addition of a "previous" pointer in each node is what enables bidirectional traversal.
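A doubly linked list node sketch (illustrative):

```cpp
// Each node stores pointers in both directions, which enables bidirectional traversal.
struct DNode {
    int data;
    DNode* next;   // NULL/nullptr at the last node
    DNode* prev;   // NULL/nullptr at the first node
};
```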
1. In a doubly linked list, what do the next pointer of the last node and the prev pointer of
the first node typically point to?
a) Themselves.
b) Each other.
c) `NULL`.
d) Undefined memory locations.
Answer: c) `NULL`.
Explanation: In a standard (non-circular) doubly linked list, the last node's `next` pointer and the first node's `prev` pointer are both set to `NULL`.
1. What is one drawback of a doubly linked list compared to a singly linked list?
Answer: Each node requires extra memory for the additional `prev` pointer.
Explanation: The bidirectional links that make forward (redo) and backward (undo) traversal convenient come at the cost of extra storage and extra pointer updates on every insertion and deletion.
General Review
1. Why must a recursive function have a base case?
Explanation: The base case is necessary for the recursion to stop at some point.
3. Which of the following algorithm design techniques involves breaking down a problem
into smaller subproblems, solving them independently, and then combining the solutions
to solve the original problem?
a) Greedy algorithm
b) Dynamic programming
c) Divide and conquer
d) Backtracking
Answer: c) Divide and conquer
Explanation: Breaking the problem into subproblems, solving each independently, and combining the results is the core principle of the divide and conquer approach.
1. Which data structure provides the fastest average-case lookup of a value given its key?
Answer: A hash table.
Explanation: Hash tables provide the best average performance for looking up values by key.
2. Which of the following best describes a greedy algorithm?
a) An algorithm that explores all possible solutions to find the optimal one.
b) An algorithm that makes the locally optimal choice at each step, hoping to find the global
optimum.
c) An algorithm that breaks down a problem into smaller subproblems and solves them
recursively.
d) An algorithm that uses a table to store previously computed results to avoid recomputation.
Answer: b) An algorithm that makes the locally optimal choice at each step, hoping to find the global optimum.
Explanation: Greedy algorithms make the best available choice at each step and never reconsider earlier decisions.
This concludes the 100 questions. Let me know if you need any clarifications or modifications.
and from the fourth pdf
You're absolutely right to ensure comprehensive coverage. Let's integrate questions from the
fourth PDF, "General Sorting Algorithm Questions." I'll replace some of the more generic
questions from the last set with ones derived directly from this material. I will provide 10 extra
questions.
Here are the extra 10 questions, bringing the total to 110. These will primarily focus on the
content from the "General Sorting Algorithm Questions" PDF.
1. According to the "General Sorting Algorithm Questions" PDF, what is the primary
difference between comparison-based and non-comparison-based sorting algorithms?
a) Comparison-based algorithms are always faster.
b) Non-comparison-based algorithms can only be used on integer data.
c) Comparison-based algorithms determine order based on element comparisons, while non-
comparison-based algorithms use element properties.
d) Non-comparison-based algorithms require more memory.
Answer: c) Comparison-based algorithms determine order based on element comparisons, while non-comparison-based algorithms use element properties.
Explanation: Comparison sorts order elements only by comparing pairs of keys, whereas non-comparison sorts such as Counting Sort exploit properties of the keys themselves (for example, their numeric range).
1. According to the document, which of the following best exemplifies a scenario where a
non-comparison-based sorting algorithm (like Counting Sort) would be preferred?
a) Sorting a large array of floating-point numbers.
b) Sorting a list of strings in alphabetical order.
c) Sorting an array of integers where the range of values is relatively small compared to the
number of elements.
d) Sorting an array of objects based on multiple criteria.
Answer: c) Sorting an array of integers where the range of values is relatively small compared to the number of elements.
Explanation: Counting Sort excels when the range of key values is limited, making it efficient to count frequencies.
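A counting sort sketch for non-negative integers with a known, small maximum value (illustrative only):

```cpp
#include <vector>

// Sorts non-negative integers no larger than 'maxValue' in O(n + maxValue) time.
void countingSort(std::vector<int>& a, int maxValue) {
    std::vector<int> count(static_cast<std::size_t>(maxValue) + 1, 0);
    for (int x : a) ++count[x];               // count how often each value occurs
    std::size_t pos = 0;
    for (int v = 0; v <= maxValue; ++v)       // rewrite the array in sorted order
        for (int c = 0; c < count[v]; ++c)
            a[pos++] = v;
}
```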
1. The "General Sorting Algorithm Questions" document mentions that sorting enables
faster searching. What searching algorithm primarily benefits from sorted data?
a) Linear Search
b) Binary Search
c) Depth-First Search
d) Breadth-First Search
Answer: b) Binary Search
Explanation: Binary search's logarithmic time complexity is only achievable on sorted data.
1. According to the PDF, for small datasets, which type of sorting algorithms might be
sufficient?
a) O(n log n) algorithms
b) Simple algorithms like Insertion Sort or Selection Sort
c) Non-comparison-based algorithms
d) All sorting algorithms are equally efficient on small datasets
Answer: b) Simple algorithms like Insertion Sort or Selection Sort
Explanation: For small datasets, the simplicity and low overhead of these algorithms can outweigh the asymptotic advantage of more complex O(n log n) algorithms.
1. What does it mean for a sorting algorithm to be "stable," as discussed in the "General
Sorting Algorithm Questions" document?
a) It always sorts data in ascending order.
b) It maintains the relative order of equal elements.
c) It is efficient for all types of data.
d) It requires minimal memory space.
Answer: b) It maintains the relative order of equal elements.
1. Based on the PDF, for what kind of data is Counting Sort the better choice?
Answer: Integer (or integer-like) keys drawn from a small, known range of values.
Explanation: Counting Sort builds a frequency table indexed by key value, so it is only practical when the range of possible values is small relative to the number of elements.
3. The document mentions in-place and external sorting. What is external sorting?
a) Sorting data within a single memory location.
b) Sorting data using auxiliary data structures.
c) Sorting data that resides primarily on disk due to its size.
d) Sorting data using only comparison operations.
Answer: c) Sorting data that resides primarily on disk due to its size.
Explanation: External sorting handles datasets that are too large to fit in main memory.
1. In the context of the "General Sorting Algorithm Questions" PDF, which of these sorting methods can be used when stable sorting is required?
a) Bubble Sort
a) Bubble Sort
b) Insertion Sort
c) Merge Sort
d) All of the mentioned
Answer: d) All of the mentioned
Explanation: Bubble Sort, Insertion Sort, and Merge Sort can all be implemented as stable sorts, preserving the relative order of equal elements.