Advanced DSA

The document provides a detailed explanation of various sorting algorithms, including Insertion Sort, Selection Sort, Bubble Sort, and Merge Sort, along with their time complexities and practical examples. It also discusses the concepts of Big O, Big Omega, and Big Theta notations in the context of algorithm performance, and illustrates the Divide and Conquer paradigm with a recurrence relation. Additionally, it applies the Master Theorem to analyze the running times of specific algorithms.


a.) Using a language of your choice, write pseudocode for insertion sort on an array A with n items, and apply it to the array A = [90, 4.65, 100, 5]. [4 marks]
FUNCTION InsertionSort(A, n)
    FOR i FROM 1 TO n - 1 DO
        key = A[i]
        j = i - 1
        // Move elements of A[0..i-1] that are greater than key
        // one position ahead of their current position
        WHILE j >= 0 AND A[j] > key DO
            A[j + 1] = A[j]
            j = j - 1
        END WHILE
        A[j + 1] = key
    END FOR
END FUNCTION

// Main Program
A = [90, 4.65, 100, 5]
n = LENGTH(A) // n should be 4 in this case
InsertionSort(A, n)
// Output the sorted array
PRINT "Sorted Array: ", A
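The pseudocode above translates directly into runnable Python; this is a sketch for the array given in the question:

```python
def insertion_sort(a):
    """In-place insertion sort, mirroring the pseudocode above."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements greater than key one position to the right
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

A = [90, 4.65, 100, 5]
print("Sorted Array:", insertion_sort(A))  # [4.65, 5, 90, 100]
```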

b.) Define the following symbols as they are used in the order of growth of algorithms, and use an algorithm to illustrate them. [6 marks]
O (big oh)
Ω (big omega)
Θ (big theta)

1. O (Big O)
- Big O notation gives an upper bound on the growth of an algorithm's running time. It is typically used to describe the worst-case scenario, indicating the maximum time an algorithm can take as the input size grows.
2. Ω (Big Omega)
- Big Omega notation provides a lower bound on the growth of an algorithm's running time. It is typically used to describe the best-case scenario, indicating the minimum time an algorithm will take as the input size grows.
3. Θ (Big Theta)
- Big Theta notation gives a tight bound on the running time: it is both an upper and a lower bound, indicating that the running time grows at a rate asymptotically equal to a given function. Note that Θ describes the growth rate itself rather than any particular input case.
For example, consider the linear search algorithm:
FUNCTION LinearSearch(A, n, target)
    FOR i FROM 0 TO n - 1 DO
        IF A[i] == target THEN
            RETURN i
        END IF
    END FOR
    RETURN -1 // target not found
END FUNCTION

Time Complexity: The time complexity of Linear Search can be analyzed as follows:
- Best Case: If the target element is found at the first position, the algorithm makes 1 comparison. This is represented as Ω(1).
- Worst Case: If the target element is not present, or is at the last position, the algorithm makes n comparisons. This is represented as O(n).
- Average Case: On average, the target is found after about n/2 comparisons, which is still linear in n. This is represented as Θ(n).
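To make the three cases concrete, here is a small Python sketch of linear search instrumented to count comparisons (the counter is added purely for illustration):

```python
def linear_search(a, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

a = [7, 3, 9, 1, 5]
print(linear_search(a, 7))  # best case: (0, 1) -- found immediately
print(linear_search(a, 2))  # worst case: (-1, 5) -- all n elements checked
```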

c.)
i.) Consider a set of 5 unordered integers. Assuming the set of
integers is taken as input to a sorting algorithm.
State this problem as a computation problem [3 marks]
The problem of sorting a set of unordered integers can be stated as a computation problem using a formal structure that specifies the input, the output, and the goal of the computation:
1. Input:
- A set of 5 unordered integers represented as an array or list A.
- Example: A = [34, 12, 5, 67, 23]
2. Output:
- A sorted array or list of the integers in non-decreasing order.
- Example: Sorted A = [5, 12, 23, 34, 67]
3. Goal:
- The goal is to rearrange the elements of the input set A such that each
element is in its correct position according to the specified order (non-
decreasing). The sorting algorithm should ensure that for every pair of
indices i and j where i < j, the condition A[i] ≤ A[j] holds true.
Formal Representation can be as follows:
- Problem Definition: Given an array A containing n = 5 unordered
integers, sort the array in non-decreasing order.
- Mathematical Notation:
- Input: A ∈ ℤⁿ, where n = 5
- Output: A' such that A' is sorted and A'[i] ≤ A'[j] for all 0 ≤ i < j < n
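The output condition A'[i] ≤ A'[j] for all i < j can be checked mechanically. A minimal Python sketch (the helper name is illustrative):

```python
def is_sorted(a):
    """True iff a[i] <= a[j] for every pair i < j; it suffices to
    check each adjacent pair for non-decreasing order."""
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

A = [34, 12, 5, 67, 23]
print(is_sorted(A))          # False: the input is unordered
print(is_sorted(sorted(A)))  # True: [5, 12, 23, 34, 67] satisfies the goal
```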
ii.) Describe the worst case and best case scenarios for such an
algorithm in relation to the problem [4 marks]
When analyzing a sorting algorithm for a set of 5 unordered integers, it
is essential to consider the worst-case and best-case scenarios. These
scenarios help us understand the efficiency and performance of the
algorithm under different conditions.
Best Case Scenario
- The best-case scenario refers to the situation where the algorithm
performs the minimum number of comparisons and operations necessary
to complete the sorting process.

- Example: For a sorting algorithm like Insertion Sort, the best case occurs when the input array is already sorted in non-decreasing order.
- Input:
- A = [5, 12, 23, 34, 67] (already sorted)
- Performance:
- In this case, each element is compared only once with its predecessor, resulting in n - 1 comparisons for n = 5.
- Time Complexity: O(n) (linear), because the algorithm simply iterates through the list.

Worst Case Scenario
- The worst-case scenario describes the situation where the algorithm performs the maximum number of comparisons and operations necessary to complete the sorting process.
- Example: For Insertion Sort, the worst case occurs when the input array is sorted in reverse order.
- Input:
- A = [67, 34, 23, 12, 5] (sorted in descending order)
- Performance:
- In this case, for each element, the algorithm must compare it with all
previously sorted elements and shift them to the right to insert the
current element in its correct position.
- The number of comparisons for each element is as follows:
- 1st element: 0 comparisons (it’s the first element)
- 2nd element: 1 comparison
- 3rd element: 2 comparisons
- 4th element: 3 comparisons
- 5th element: 4 comparisons
- Total comparisons: 0 + 1 + 2 + 3 + 4 = 10 comparisons.
- Time Complexity: O(n²) (quadratic) because the performance
degrades significantly as the number of elements increases.
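The comparison counts above can be verified with a Python sketch of insertion sort instrumented to count key comparisons (the instrumentation is added for illustration):

```python
def insertion_sort_count(a):
    """Insertion sort that also counts comparisons of a[j] with key."""
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1  # one comparison of a[j] against key
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

print(insertion_sort_count([67, 34, 23, 12, 5]))  # worst case: 10 comparisons
print(insertion_sort_count([5, 12, 23, 34, 67]))  # best case: 4 comparisons
```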

d.) Consider the following:
T(n) = 8T(n/2) + Θ(n²)
i.) Describe the features that make the above fit the Divide and Conquer paradigm for the design of algorithms. [3 Marks]
Features of Divide and Conquer
1. Problem Division:
- The problem is divided into subproblems of smaller size. In this case, the original problem of size n is divided into 8 subproblems of size n/2. Breaking a problem down into smaller parts is fundamental to the Divide and Conquer approach.
2. Recursive Solution:
- Each of the smaller subproblems is solved independently and recursively. The function T(n) calls itself for each of the 8 subproblems, which illustrates the recursive nature of Divide and Conquer algorithms. This allows the algorithm to tackle each subproblem with the same logic as the original problem.
3. Combining Solutions:
- After solving the subproblems, their results must be combined to form a solution to the original problem. In this recurrence, the term Θ(n²) represents the time required to divide the problem and combine the results of the subproblems (e.g., merging, aggregating, or processing the results). The cost of combining results is an essential component of the Divide and Conquer methodology.

ii.) Use the Master method to compute the tight bounds for the
running time of the algorithm [2 Marks]

- a = 8 (the number of subproblems)
- b = 2 (the factor by which the problem size is reduced)
- f(n) = Θ(n²) (the cost of the work done outside the recursive calls)

Compare f(n) against n^log_b(a), where log_b(a) is the logarithm of a to base b.

Computing n^log_b(a):
- log_b(a) = log_2(8) = 3
- Thus, n^log_b(a) = n³.

Comparing f(n) and n^log_b(a):
- f(n) = Θ(n²) < n^log_b(a) = n³.
- Since f(n) grows polynomially slower than n^log_b(a), we can apply Case 1 of the Master Theorem.

According to the Master Theorem (Case 1):
- If f(n) is polynomially smaller than n^log_b(a), specifically if there exists a constant ε > 0 such that f(n) = O(n^(log_b(a) - ε)), then T(n) = Θ(n^log_b(a)).
Since Θ(n²) is polynomially smaller than Θ(n³) (specifically, n² = O(n^(3 - ε)) for ε = 1), we can conclude:
- T(n) = Θ(n³).
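The arithmetic behind this comparison can be sanity-checked numerically in Python (a quick check, not a proof):

```python
import math

a, b, f_exp = 8, 2, 2            # T(n) = 8T(n/2) + Theta(n^2)
crit = math.log(a, b)            # critical exponent log_b(a) = log_2(8) = 3
print(round(crit, 6))            # 3.0
# The exponent of f(n) is 2, strictly below the critical exponent 3,
# so Case 1 applies and T(n) = Theta(n^3).
print(f_exp < crit)              # True
```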

e.) Use the Master Theorem to compute the running times of the algorithms represented by the functions given below. [4 Marks]
T(n) = 4T(2n/3) + n²   (i.e., a = 4, b = 3/2)
T(n) = 9T(n/3) + n³
1. Analyzing T(n) = 4T(2n/3) + n²
In this recurrence:
- a = 4 (the number of subproblems)
- b = 3/2 (the factor by which the problem size is reduced)
- f(n) = n² (the cost of the work done outside the recursive calls)
Calculate n^log_b(a):
1. Calculate log_b(a) using the change of base formula:
- log_b(a) = log(a) / log(b) = log 4 / log(3/2) ≈ 3.42
2. Therefore, n^log_b(a) ≈ n^3.42.
Comparing f(n) and n^log_b(a):
- f(n) = n² < n^3.42, so f(n) is polynomially smaller and we apply Case 1 of the Master Theorem.

Result for T(n) = 4T(2n/3) + n²:
- T(n) = Θ(n^(log_{3/2} 4)) ≈ Θ(n^3.42).

2. Analyzing T(n) = 9T(n/3) + n³
- a = 9 (the number of subproblems)
- b = 3 (the factor by which the problem size is reduced)
- f(n) = n³ (the cost of the work done outside the recursive calls)
Calculate n^log_b(a):
1. Calculate log_b(a):
- log_b(a) = log_3(9) = 2
2. Therefore, n^log_b(a) = n².
Comparing f(n) and n^log_b(a):
- f(n) = n³
- n^log_b(a) = n²
Since f(n) = n³ is polynomially larger than n² (specifically, n³ = Ω(n^(2 + ε)) for ε = 1), and the regularity condition holds (9(n/3)³ = n³/3 ≤ c·n³ for c = 1/3 < 1), we apply Case 3 of the Master Theorem.
Result for T(n) = 9T(n/3) + n³:
- T(n) = Θ(n³).

Summary:
- For T(n) = 4T(2n/3) + n²: T(n) ≈ Θ(n^3.42).
- For T(n) = 9T(n/3) + n³: T(n) = Θ(n³).
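Both critical exponents can be sanity-checked numerically in Python:

```python
import math

# Recurrence 1: a = 4, b = 3/2, f(n) = n^2
crit1 = math.log(4, 1.5)
print(round(crit1, 2))  # 3.42 (approximately)
print(2 < crit1)        # True -> Case 1: T(n) = Theta(n^(log_{3/2} 4))

# Recurrence 2: a = 9, b = 3, f(n) = n^3
crit2 = math.log(9, 3)
print(round(crit2, 6))  # 2.0
print(3 > crit2)        # True -> Case 3: T(n) = Theta(n^3)
```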

f.) Using a practical example for each case, briefly explain how each of the following sorting algorithms works. [4 marks]
Insertion Sort
Selection Sort
Bubble Sort
Merge Sort

Here's a brief explanation of how each sorting algorithm works, along with a practical example for each:

1. Insertion Sort
- Insertion Sort builds a sorted array one element at a time.
- It iteratively takes an element from the unsorted portion and inserts it
into its correct position in the sorted portion.
Example:
- Consider the array: [5, 2, 9, 1, 5, 6]
- Steps:
- Start with the second element (2). Compare it with 5, and since 2 is
smaller, place it before 5: [2, 5, 9, 1, 5, 6]
- Next, take 9. It is already in the correct position: [2, 5, 9, 1, 5, 6]
- Take 1. Compare with 9, 5, and 2. Insert it at the front: [1, 2, 5, 9, 5,
6]
- Continue with 5 and 6, resulting in the sorted array: [1, 2, 5, 5, 6, 9]

2. Selection Sort
- Selection Sort divides the input array into two parts: sorted and
unsorted.
- It repeatedly selects the smallest (or largest) element from the unsorted
portion and swaps it with the first unsorted element.
Example:
- Consider the array: [64, 25, 12, 22, 11]
- Steps:
- Find the smallest element (11) and swap it with the first element: [11,
25, 12, 22, 64]
- Now find the smallest in the remaining array (12) and swap it: [11,
12, 25, 22, 64]
- Repeat for 22 and 25, resulting in: [11, 12, 22, 25, 64]
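A minimal Python sketch of selection sort for the example above:

```python
def selection_sort(a):
    """Repeatedly select the minimum of the unsorted suffix and
    swap it into the first unsorted position."""
    for i in range(len(a) - 1):
        m = i
        for j in range(i + 1, len(a)):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```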

3. Bubble Sort
- Bubble Sort repeatedly steps through the list, compares adjacent pairs,
and swaps them if they are in the wrong order.
- This process is repeated until no swaps are needed, indicating that the
array is sorted.
Example:
- Consider the array: [5, 1, 4, 2, 8]
- Steps:
- Compare 5 and 1, swap: [1, 5, 4, 2, 8]
- Compare 5 and 4, swap: [1, 4, 5, 2, 8]
- Compare 5 and 2, swap: [1, 4, 2, 5, 8]
- Compare 5 and 8, no swap.
- Repeat the process for the next passes until sorted: [1, 2, 4, 5, 8]
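A minimal Python sketch of bubble sort, including the early exit once a full pass makes no swaps (the point at which the array is known to be sorted):

```python
def bubble_sort(a):
    """Swap adjacent out-of-order pairs; stop early once a full
    pass makes no swaps."""
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```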

4. Merge Sort
- Merge Sort is a Divide and Conquer algorithm that divides the array
into halves, sorts each half, and then merges the sorted halves back
together.
- It continues this process recursively until the base case of a single
element is reached.
Example:
- Consider the array: [38, 27, 43, 3, 9, 82, 10]
- Steps:
- Split the array into halves: [38, 27, 43] and [3, 9, 82, 10]
- Keep splitting until single elements: [38], [27], [43], [3], [9], [82],
[10]
- Merge back together in sorted order:
- Merge [38] and [27] → [27, 38]
- Merge [27, 38] and [43] → [27, 38, 43]
- Similarly, merge the other array: [3, 9, 10, 82]
- Finally merge both sorted halves: [3, 9, 10, 27, 38, 43, 82]
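A minimal Python sketch of merge sort following the split-and-merge steps above:

```python
def merge_sort(a):
    """Divide and Conquer: split, sort each half recursively, merge."""
    if len(a) <= 1:
        return a  # base case: a single element is already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```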
