TrinadhreddySeelam_AOA_Assignment2

The document outlines algorithms for heap deletion, merging k sorted lists, and finding the k smallest elements, detailing their implementations and time complexities. It includes a running time analysis for each algorithm, demonstrating their efficiency and correctness. Additionally, it discusses the performance of the randomized quicksort algorithm under specific conditions and presents a modified partitioning algorithm.

CS624 - Analysis of Algorithms

Homework2

Trinadhreddy Seelam,

02126243.

1. Implementation of HeapDelete:

Algorithm: HEAP-DELETE(A, i)

o HEAP-INCREASE-KEY(A, i, ∞)      // Setting the key to ∞
o A[1] = A[A.heap-size]           // Replacing the root with the last element
o A.heap-size = A.heap-size - 1   // Decreasing the size of the heap
o MAX-HEAPIFY(A, 1)               // Restoring the max-heap property

Steps:

1. Set the key of the node to be deleted to infinity using HEAP-INCREASE-KEY. This moves the node to the root of the heap.

2. Replace the value of the root (which is the node to be deleted) with the value of
the last element in the heap. This step is safe since infinity is larger than any other
key.

3. Decrease the size of the heap by 1, effectively removing the last slot (whose value now also sits at the root) from consideration.

4. Finally, call MAX-HEAPIFY(A, 1) to restore the max-heap property. This operation has a time complexity of O(log n) and ensures that the heap remains a valid max-heap.

Running time analysis:

Firstly, the operation HEAP-INCREASE-KEY(A, i, ∞), which adjusts the key of a node in a max-heap, takes O(log n) time.

Secondly, replacing the root with the last element of the heap and updating the
heap size takes constant time 𝑂(1).

Finally, the MAX-HEAPIFY operation is called, which also takes O(log n) time. The number of recursive calls is at most the height of the heap, which is bounded by log n. Therefore, the overall time complexity of HEAP-DELETE is O(log n), meeting the specified requirement.
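As a concrete illustration, the steps above can be sketched in Python for a max-heap stored in a 0-indexed list (the pseudocode is 1-indexed; the helper names here are my own, not part of the assignment):

```python
import math

def max_heapify(A, i):
    """Iteratively restore the max-heap property below index i."""
    n = len(A)
    while True:
        l, r = 2 * i + 1, 2 * i + 2
        largest = i
        if l < n and A[l] > A[largest]:
            largest = l
        if r < n and A[r] > A[largest]:
            largest = r
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest

def heap_delete(A, i):
    A[i] = math.inf                          # HEAP-INCREASE-KEY(A, i, ∞)
    while i > 0 and A[(i - 1) // 2] < A[i]:  # the ∞ key floats to the root
        A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
        i = (i - 1) // 2
    A[0] = A[-1]                             # replace root with last element
    A.pop()                                  # decrease the heap size
    if A:
        max_heapify(A, 0)                    # restore the max-heap property

A = [9, 7, 8, 3, 5, 6]                       # a valid max-heap
heap_delete(A, 3)                            # delete the key 3
```

Each phase touches at most one root-to-leaf path, so the whole deletion stays within O(log n), matching the analysis above.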
2. Algorithm to merge k sorted lists into one sorted list:

Algorithm: MERGE-K-SORTED-LISTS(lists)
• Initialize an empty min-heap H.
• for i = 1 to k do
o Insert the first element from list i into H, along with the index i.
• Initialize an empty result list.
• while H is not empty do
o Extract the minimum element (value, list index) from H.
o Add the value to the result list.
o If there is another element in the same list, insert it into H with the list index.
• Return the result list.

The MERGE-K-SORTED-LISTS algorithm uses the INSERT function to add elements to the min-heap H.

Each element in the min-heap is a tuple that contains the element value and the
index of the list that the element belongs to.

The main loop runs until the min-heap is empty, and in each iteration, it extracts the
minimum element and adds it to the result list.

The INSERT function maintains the min-heap property throughout the process.

Running Time Analysis:

To merge k sorted lists into one sorted list, we initialize the min-heap by inserting the
first element from each of the k lists into the heap. This takes 𝑂(𝑘 𝑙𝑜𝑔(𝑘)) time.

Each extraction and insertion operation that follows takes 𝑂(𝑙𝑜𝑔(𝑘)) time.

Since we process each of the n total elements across all lists once, the overall time
complexity is 𝑂(𝑛 𝑙𝑜𝑔(𝑘)).

Therefore, the algorithm runs in 𝑂(𝑛 𝑙𝑜𝑔(𝑘)) time to merge k sorted lists into one
sorted list.
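For reference, the same idea can be sketched in Python with the standard heapq module; each heap entry also carries the list index and element position so that ties compare cleanly (the function name is my own):

```python
import heapq

def merge_k_sorted_lists(lists):
    # Seed the min-heap with the first element of each non-empty list.
    h = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(h)                              # build: O(k)
    result = []
    while h:
        value, i, j = heapq.heappop(h)            # extract-min: O(log k)
        result.append(value)
        if j + 1 < len(lists[i]):                 # insert the successor
            heapq.heappush(h, (lists[i][j + 1], i, j + 1))
    return result

print(merge_k_sorted_lists([[1, 4, 7], [2, 5], [3, 6, 8]]))
# → [1, 2, 3, 4, 5, 6, 7, 8]
```

The heap never holds more than k entries, so each of the n pops and pushes costs O(log k), giving the O(n log k) bound above.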
3. Algorithm that produces the k smallest elements

1) Algorithm: K-SMALLEST-ELEMENTS(arr, k)

• Build-Min-Heap(arr)
• result = []
• for i from 1 to k:
o min_element = Heap-Extract-Min(arr)
o result.append(min_element)
• return result
Steps:

To build a min-heap from an unsorted set of n elements, you can use the bottom-up approach or any other linear-time heap construction algorithm.

This can be done in 𝑂(𝑛) time. Once you have the min-heap, you can extract the
minimum element (root) from it k times, which takes 𝑂(𝑙𝑜𝑔 𝑛) time per
extraction.

After each extraction, add the extracted element to a result list.

Finally, return the result list containing the k smallest elements.

Proof of Correctness:

• To build a min-heap from an unsorted array, it can be done in O(n) time.


• Extracting the minimum element from the min-heap guarantees that we
get the smallest remaining element. This process can be repeated k
times, ensuring that we obtain the k smallest elements.

2) Running time Analysis:

The algorithm for building a min-heap takes 𝑂(𝑛) time.

In the loop, the Heap-Extract-Min operation is executed k times; each extraction takes O(log n), for O(k log n) in total.

Therefore, the overall time complexity of this algorithm is 𝑂(𝑛 + 𝑘 𝑙𝑜𝑔 𝑛), which
satisfies the required time complexity.
This algorithm is efficient when k is much smaller than n, since O(n + k log n) is then much cheaper than the O(n log n) cost of fully sorting the array.

The min-heap ensures that we can always efficiently access the smallest remaining element.
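A minimal Python sketch of this algorithm, using heapq's linear-time bottom-up heapify (the function name is my own):

```python
import heapq

def k_smallest_elements(arr, k):
    h = list(arr)                                 # work on a copy
    heapq.heapify(h)                              # bottom-up build: O(n)
    return [heapq.heappop(h) for _ in range(k)]   # k extractions: O(k log n)

print(k_smallest_elements([7, 2, 9, 4, 1, 8], 3))
# → [1, 2, 4]
```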

4.
a) All element values are equal:

If all the values in the array are the same, then the randomized quicksort
algorithm's running time will be the same as that of the regular quicksort
algorithm.
This is because the initial random choice of the pivot index and subsequent
swaps do not change anything when all elements are equal.

The partitioning step will always result in one partition being empty, which causes the worst-case partitioning scenario to occur, and thus the runtime becomes Θ(n²).

b) Modified PARTITION' algorithm:

Algorithm: PARTITION'(A, p, r)

x = A[r]
swap A[r] with A[p]
i = p - 1
k = p
j = p + 1
while j <= r - 1 do
    if A[j] < x then
        i = i + 1
        k = i + 2
        swap A[i] with A[j]
        swap A[k] with A[j]
    end if
    if A[j] = x then
        k = k + 1
        swap A[k] with A[j]
    end if
    j = j + 1
end while
swap A[i + 1] with A[r]
return (i + 1, k + 1)

c) Partition algorithm on the input array [1, 6, 5, 8, 5, 4, 5], with p = 1 and r = 7.

Steps:

Input Array: [1, 6, 5, 8, 5, 4, 5]

o Initialize the variables: x = A[7] = 5, i = p - 1 = 0, k = p = 1, j = p + 1 = 2.
o Swap A[r] (5 at index 7) with A[p] (1 at index 1):
  • Array: [5, 6, 5, 8, 5, 4, 1]
o Begin the while loop:
  - Iteration 1 (j = 2): Array: [5, 6, 5, 8, 5, 4, 1], i = 1, k = 2
  - Iteration 2 (j = 3): Array: [5, 6, 5, 8, 5, 4, 1], k = 3
  - Iteration 3 (j = 4): Array: [5, 6, 5, 8, 5, 4, 1]
  - Iteration 4 (j = 5): Array: [5, 6, 5, 8, 5, 4, 1], k = 4
  - Iteration 5 (j = 6): Array: [5, 4, 5, 8, 5, 6, 1], i = 2, k = 5
  - Iteration 6 (j = 7): Array: [5, 4, 5, 8, 5, 6, 1], k = 6
  - End of while loop.
o Swap A[i + 1] with A[r]:
  • Array: [5, 4, 5, 8, 5, 6, 1], swap A[3] with A[7].
o Return the indices (i + 1, k + 1): (3, 7)

Final Array: [1, 4, 5, 5, 5, 6, 8]

Therefore, the partitioning is complete, and the final array is partitioned into
three segments: elements less than the pivot (1, 4), elements equal to the pivot
(5, 5, 5), and elements greater than the pivot (6, 8). The returned indices are (3,
7).
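For comparison, the same three-way split can be produced by a standard "Dutch national flag" partition; the following Python sketch (0-indexed, my own names, not the exact swap sequence of PARTITION' above) returns the boundaries of the region equal to the pivot:

```python
def partition_three_way(A, p, r):
    """Partition A[p..r] around x = A[r]; return 0-indexed (q, t) such that
    A[p..q-1] < x, A[q..t] == x, and A[t+1..r] > x."""
    x = A[r]
    lt, i, gt = p, p, r + 1
    while i < gt:
        if A[i] < x:
            A[lt], A[i] = A[i], A[lt]
            lt += 1
            i += 1
        elif A[i] > x:
            gt -= 1
            A[gt], A[i] = A[i], A[gt]
        else:
            i += 1
    return lt, gt - 1

A = [1, 6, 5, 8, 5, 4, 5]
q, t = partition_three_way(A, 0, len(A) - 1)   # equal block at A[q..t]
```

On this input the equal block [5, 5, 5] ends up in the middle, with the smaller elements before it and the larger ones after; in 1-indexed terms its boundaries are (3, 5).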

d) Loop Invariant:

At the beginning of each iteration of the loop, with j pointing to the current element A[j], the following conditions hold:

• All elements in A[p : i] are less than the pivot.
• All elements in A[i+1 : k] are equal to the pivot.
• All elements in A[k+1 : j-1] are greater than the pivot.
• Elements in A[j : r-1] have not yet been examined and can be less than, equal to, or greater than the pivot.
• No element has been swapped more than twice.

This loop invariant is strong enough to prove the correctness of the algorithm
because it ensures that elements are correctly partitioned around the pivot.
5. a) Proof of Correctness:

Let’s demonstrate the correctness of TAIL-RECURSIVE-QUICKSORT using strong induction.

Base Case: For an array A with only one element (n = 1), p and r are equal (p = r), so the while loop does not execute and the single element is trivially sorted.

Inductive case:

Induction Hypothesis: Assume TAIL-RECURSIVE-QUICKSORT correctly sorts arrays of size k (1 <= k <= n - 1).

For an array A with size 𝑛, consider the following:

Partition and Sort Left Subarray:


• PARTITION divides A[p : r] into subarrays, placing elements less than the pivot before it and those greater than or equal to it after.
• The left subarray A[p : q-1] excludes the pivot, so its size is strictly less than n.
• By the induction hypothesis, TAIL-RECURSIVE-QUICKSORT correctly sorts this smaller subarray.

Process Right Subarray:


• After sorting the left subarray, p is updated to q + 1, effectively excluding the pivot from the remaining range.
• The remaining right subarray A[q+1 : r] also excludes the pivot, so its size is strictly less than n and the induction hypothesis applies again, guaranteeing its correct sorting.

Therefore, by induction, TAIL-RECURSIVE-QUICKSORT correctly sorts the entire array A of size n.
b) Scenario for Θ(n) Stack Depth:

• As long as the loop condition remains 𝑝 < 𝑟, the stack depth continues to
increase. The worst-case scenario occurs when the partition process
consistently generates highly unequal subarrays.
• For instance, in the case of an already sorted array, PARTITION always places
only one element in the left subarray, while the rest are placed in the right
subarray.
• Each recursive call sorts a smaller subarray and creates a new stack frame. This leads to a stack depth of n − 1, which is Θ(n), thereby exceeding the desired logarithmic bound.
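This behavior can be checked with a small instrumented sketch (the Lomuto PARTITION is standard; the depth bookkeeping and names are my own): on an already-sorted array the maximum recursion depth grows linearly with n.

```python
def partition(A, p, r):
    # Standard Lomuto partition with pivot x = A[r].
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def tail_recursive_quicksort(A, p, r, depth=1):
    # Sorts A[p..r] and returns the maximum stack depth reached.
    max_depth = depth
    while p < r:
        q = partition(A, p, r)
        max_depth = max(max_depth,
                        tail_recursive_quicksort(A, p, q - 1, depth + 1))
        p = q + 1
    return max_depth

n = 16
print(tail_recursive_quicksort(list(range(n)), 0, n - 1))
# → 16: on sorted input the depth grows as Θ(n)
```

On sorted input the pivot is always the maximum, so every recursive call receives a left subarray only one element smaller, producing the linear stack depth described above.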

c) Modification for Θ(log n) Stack Depth:

The modified algorithm is:

MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, r)

    while p < r do
        q = PARTITION(A, p, r)
        if q - p < r - q then
            MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, p, q - 1)
            p = q + 1
        else
            MODIFIED-TAIL-RECURSIVE-QUICKSORT(A, q + 1, r)
            r = q - 1
        end if
    end while

In this modified version, we recursively sort the smaller subarray first. The condition q − p < r − q checks whether the left subarray A[p : q−1] is smaller than the right subarray A[q+1 : r]; the recursive call always goes to the smaller side, so each level of recursion at least halves the remaining problem size. This guarantees a worst-case stack depth of Θ(log n).
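The modified pseudocode translates directly to Python (a sketch with the standard Lomuto PARTITION; the function names are my own):

```python
def partition(A, p, r):
    # Standard Lomuto partition with pivot x = A[r].
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def modified_tail_recursive_quicksort(A, p, r):
    while p < r:
        q = partition(A, p, r)
        if q - p < r - q:
            # Left side is smaller: recurse on it, iterate on the right.
            modified_tail_recursive_quicksort(A, p, q - 1)
            p = q + 1
        else:
            # Right side is smaller (or equal): recurse on it, iterate left.
            modified_tail_recursive_quicksort(A, q + 1, r)
            r = q - 1

A = [5, 3, 8, 1, 9, 2, 7]
modified_tail_recursive_quicksort(A, 0, len(A) - 1)
print(A)
# → [1, 2, 3, 5, 7, 8, 9]
```

Because each recursive call handles at most half of the current range, the larger side is always processed iteratively and the stack never grows beyond O(log n) frames.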
6.
a) Randomized Algorithm for Fuzzy-Sorting Intervals:

The randomized algorithm for Fuzzy-Sorting Intervals is:

Algorithm: FUZZY-PARTITION(A, p, r)

x = A[r]
swap A[r] with A[p]
i = p - 1
k = p
for j = p + 1 to r - 1 do
    if A[j].b < x.a then
        i = i + 1
        k = i + 2
        swap A[i] with A[j]
        swap A[k] with A[j]
    end if
    if A[j].b >= x.a and A[j].a <= x.b then
        x.a = max(A[j].a, x.a)
        x.b = min(A[j].b, x.b)
        k = k + 1
        swap A[k] with A[j]
    end if
end for
swap A[i + 1] with A[r]
return (i + 1, k + 1)

Here each interval A[j] has endpoints A[j].a and A[j].b, and the pivot interval x shrinks to the intersection of all intervals that overlap it.
In this algorithm, overlapping intervals are treated like elements equal to the pivot, which reduces the sorting time.

b) Running time analysis:

When distinct intervals are present, the algorithm functions similarly to regular
quicksort, thereby resulting in an expected runtime of 𝛩(𝑛 𝑙𝑜𝑔 𝑛) in general.

However, when all intervals overlap, the overlap condition in the second if statement is satisfied in every iteration of the for loop. In that case the returned boundaries span the whole range, so the recursive calls are made on empty subarrays.

Since FUZZY-PARTITION is called only once and its runtime remains 𝛩(𝑛), the
total expected runtime becomes 𝛩(𝑛).
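As an illustration, a randomized fuzzy sort along these lines can be sketched in Python, with each interval represented as an (a, b) pair; this is an assumed implementation using a three-way overlap partition with a shrinking pivot interval, not the exact pseudocode above:

```python
import random

def fuzzy_sort(A, p=0, r=None):
    """Rearrange intervals (a, b) so that representatives c_i with
    c_i in A[i] can be chosen in nondecreasing order."""
    if r is None:
        r = len(A) - 1
    if p >= r:
        return A
    # Pick a random pivot interval and move it to the end.
    idx = random.randint(p, r)
    A[idx], A[r] = A[r], A[idx]
    xa, xb = A[r]
    lt, i, gt = p, p, r                 # A[r] holds the pivot itself
    while i < gt:
        a, b = A[i]
        if b < xa:                      # entirely left of the pivot interval
            A[lt], A[i] = A[i], A[lt]
            lt += 1
            i += 1
        elif a > xb:                    # entirely right of the pivot interval
            gt -= 1
            A[gt], A[i] = A[i], A[gt]
        else:                           # overlaps: shrink pivot to the intersection
            xa, xb = max(xa, a), min(xb, b)
            i += 1
    A[gt], A[r] = A[r], A[gt]           # fold the pivot into the middle block
    fuzzy_sort(A, p, lt - 1)            # recurse on the strictly-left block
    fuzzy_sort(A, gt + 1, r)            # recurse on the strictly-right block
    return A
```

All intervals in the middle block share a common point (the final shrunken pivot interval), so they need no further ordering; when every interval overlaps, the middle block covers the whole range, the recursive calls receive empty subarrays, and the run is linear, matching the Θ(n) case above.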
