Assignment 1 AoA

Assignment Topics

1. Algorithms (Binary Search, Quick Sort, Merge Sort).
2. Notations.
3. Recurrence Relations.
4. Substitution Method.
5. Examples of Recurrence Relations.
6. Divide and Conquer Algorithm.

Algorithms
1. Binary Search -> Binary search is a classic algorithm which, given a
target value and a sorted list or array, determines whether the
value appears in the list and, if so, at which position. It does this by
repeatedly dividing the search interval in half. If the value of the
search key is less than the item in the middle of the interval, the
interval is narrowed to the lower half; otherwise, the interval is
narrowed to the upper half. This continues until the desired value
has been found or the interval becomes empty.
Binary Search Algorithm (Recursive Approach):
1. Base Case:
o If left exceeds right, return -1 (indicating the target is not
found).
2. Calculate Middle Index:
o Compute mid = left + (right - left) / 2 (this form avoids
integer overflow in fixed-width languages).
3. Check Middle Element:
o If the target value equals arr[mid], return mid.
o If the target value is less than arr[mid], recursively search the
left subarray: binarySearch(arr, left, mid - 1, target).
o If the target value is greater than arr[mid], recursively search
the right subarray: binarySearch(arr, mid + 1, right, target).
4. Return:
o The result of the recursive call.
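
A minimal Python sketch of the recursive steps above (function and
parameter names follow the pseudocode; the array is assumed to be sorted
in ascending order):

    def binary_search(arr, left, right, target):
        # Base case: an empty interval means the target is not present.
        if left > right:
            return -1
        # Midpoint computed this way to avoid overflow in fixed-width languages.
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return mid
        elif target < arr[mid]:
            # Search the lower half.
            return binary_search(arr, left, mid - 1, target)
        else:
            # Search the upper half.
            return binary_search(arr, mid + 1, right, target)

    # Example usage:
    data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
    print(binary_search(data, 0, len(data) - 1, 23))  # prints 5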

Complexity of Binary Search ->
• Time Complexity:
o Best Case: O(1)
o Average Case: O(log n)
o Worst Case: O(log n)

2. Quick Sort -> Quick sort is a highly efficient, comparison-based
sorting algorithm that employs the divide-and-conquer strategy. It
works by selecting a "pivot" element from the array and partitioning
the remaining elements into two sub-arrays according to whether they
are less than or greater than the pivot. The sub-arrays are then
sorted recursively.
Quick Sort Algorithm ->

1. Choose a Pivot:
• Select a pivot element from the array (commonly the last element,
first element, or a random element).
2. Partition the Array:
• Rearrange the array such that:
o Elements less than the pivot are on the left.
o Elements greater than the pivot are on the right.
• The pivot element will be in its final sorted position after the
partitioning process.
3. Recursively Apply Quick Sort:
• Apply the quick sort algorithm recursively to the sub-arrays
formed by the partition step:
o The sub-array of elements less than the pivot.
o The sub-array of elements greater than the pivot.
4. Base Case:
• If the sub-array has zero or one element, it is already sorted, and
no further action is needed.
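
A short Python sketch of these steps, using the last element as the pivot
(the Lomuto partition scheme; names are illustrative):

    def quick_sort(arr, low, high):
        # Base case: sub-arrays of size 0 or 1 are already sorted.
        if low < high:
            p = partition(arr, low, high)
            quick_sort(arr, low, p - 1)   # sort elements less than the pivot
            quick_sort(arr, p + 1, high)  # sort elements greater than the pivot

    def partition(arr, low, high):
        pivot = arr[high]  # last element chosen as pivot
        i = low - 1        # boundary of the "less than pivot" region
        for j in range(low, high):
            if arr[j] < pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        # Place the pivot in its final sorted position.
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1

    # Example usage:
    data = [10, 7, 8, 9, 1, 5]
    quick_sort(data, 0, len(data) - 1)
    print(data)  # [1, 5, 7, 8, 9, 10]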

Complexity of Quick Sort ->
1. Average Time Complexity: O(n log n)
2. Worst-Case Time Complexity: O(n^2) (occurs when the smallest
or largest element is always chosen as the pivot)
3. Best-Case Time Complexity: O(n log n)

3. Merge Sort -> Merge sort is a classic, efficient, and stable sorting
algorithm that employs the divide-and-conquer strategy. It works by
dividing the array into smaller sub-arrays until each sub-array has only
one element (which is inherently sorted), and then merging those sub-
arrays to produce a sorted array.
Merge Sort Algorithm ->

1. Divide:
• Divide the unsorted array into two approximately equal halves.
2. Conquer:
• Recursively apply merge sort to the two halves until each sub-
array contains only one element.
3. Merge:
• Merge the two halves back together in sorted order.
4. Base Case:
• If the array has zero or one element, it is already sorted, and no
further action is needed.
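
A compact Python sketch of the divide, conquer, and merge steps (this
version returns a new sorted list rather than sorting in place):

    def merge_sort(arr):
        # Base case: zero or one element is already sorted.
        if len(arr) <= 1:
            return arr
        # Divide: split the array into two approximately equal halves.
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        # Merge: combine the two sorted halves.
        return merge(left, right)

    def merge(left, right):
        result = []
        i = j = 0
        # Take the smaller front element from either half; <= keeps the sort stable.
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        # One half is exhausted; append the remainder of the other.
        result.extend(left[i:])
        result.extend(right[j:])
        return result

    # Example usage:
    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]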

Complexity of Merge Sort ->
1. Average Time Complexity: O(n log n)
2. Worst-Case Time Complexity: O(n log n)
3. Best-Case Time Complexity: O(n log n)

2. Notations.
Notations in algorithm analysis are mathematical tools used to
describe and analyze the performance of algorithms, particularly in
terms of time and space complexity. These notations help in
understanding how an algorithm's resource usage grows with the
size of the input.
Common Notations:
1. Big O Notation (O)
2. Big Omega Notation (Ω)
3. Big Theta Notation (Θ)
4. Little o Notation (o)
5. Little omega Notation (ω)

1. Big O Notation (O):


Purpose: Describes the upper bound or worst-case scenario for the
time or space complexity of an algorithm.
Usage: It provides an asymptotic analysis, indicating how the runtime
or space requirements grow as the input size increases.
Example: If an algorithm has a time complexity of O(n^2), it means that
in the worst case, the runtime increases quadratically with the size of the
input.
2. Big Omega Notation (Ω):
Purpose: Describes the lower bound or best-case scenario for the time
or space complexity of an algorithm.
Usage: It shows the minimum amount of time or space an algorithm
will require as the input size increases.
Example: If an algorithm has a time complexity of Ω(n), it
means that the runtime will grow at least linearly with the size of the
input, even in the best case.
3. Big Theta Notation (Θ):
Purpose: Describes a tight bound, where the algorithm's time or space
complexity is bounded both above and below by the same growth rate.
Usage: It provides a more precise analysis, showing that the runtime or
space requirements grow at a specific rate in both the best and worst
cases.
Example: If an algorithm has a time complexity of Θ(n log n), it means
that the runtime grows at a rate proportional to n log n in all cases.
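
For reference, the standard formal definitions behind these three
notations (stated in the usual textbook form):

f(n) = O(g(n))  if there exist constants c > 0 and n0 > 0 such that
                f(n) <= c * g(n) for all n >= n0.
f(n) = Ω(g(n))  if there exist constants c > 0 and n0 > 0 such that
                f(n) >= c * g(n) for all n >= n0.
f(n) = Θ(g(n))  if f(n) = O(g(n)) and f(n) = Ω(g(n)).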

3. Recurrence Relations->
A recurrence relation is an equation that recursively defines a
sequence of values. Each term in the sequence is defined as a function
of one or more previous terms. Recurrence relations are commonly
used in computer science and mathematics to describe the runtime of
recursive algorithms or to model dynamic systems.

Types of Recurrence Relations:


1. Linear Recurrence Relations:
A recurrence relation where each term is a linear combination of
previous terms.
Example: The Fibonacci sequence is defined by the recurrence relation:
F(n)=F(n−1)+F(n−2)
with base cases F(0)=0 and F(1)=1.
2. Homogeneous vs. Non-Homogeneous:
Homogeneous: No additional terms are added to the recurrence
relation.
Example: T(n)=2T(n−1)
Non-Homogeneous: Includes additional terms.
Example: T(n)=2T(n−1)+n.

3. Linear vs. Non-Linear:


• Linear: The recurrence relation involves linear combinations of previous
terms.
Example: T(n)=3T(n−1)+2T(n−2)

• Non-Linear: Involves non-linear combinations, such as products of terms.


Example: T(n)=T(n−1)⋅T(n−2)

Solving Recurrence Relations:


There are several methods to solve recurrence relations, depending on
the form of the relation:
1. Substitution (Iteration):
o Substitute the recurrence relation into itself repeatedly to
express T(n) in terms of the base case.
o Example: For T(n)=2T(n−1)+1, repeatedly substitute T(n−1)
until you reach the base case.
2. Master Theorem (see the statement below).
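
Since the Master Theorem is only named above, here is its standard
statement for divide-and-conquer recurrences:

For recurrences of the form T(n) = a·T(n/b) + f(n), where a >= 1 and b > 1:
• Case 1: If f(n) = O(n^(log_b(a) − ε)) for some ε > 0, then
T(n) = Θ(n^(log_b(a))).
• Case 2: If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) · log n).
• Case 3: If f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and
a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n,
then T(n) = Θ(f(n)).

For example, merge sort's recurrence T(n) = 2T(n/2) + n falls under
Case 2 (a = 2, b = 2, log_b(a) = 1), giving T(n) = Θ(n log n).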

4. Substitution Method (Iteration):

Recurrence Relation for Binary Search


Binary search is an efficient algorithm for finding an element in a sorted
array by repeatedly dividing the search interval in half. The time
complexity of binary search can be analyzed using a recurrence relation.
Binary Search Recurrence Relation
Let T(n) represent the time complexity of binary search on an array of size
n. The recurrence relation can be written as:
T(n) = T(n/2) + O(1)
Where:
• T(n/2) is the time complexity of searching in one half of the array.
• O(1) is the constant time required to perform the comparison and
calculate the middle index.
Solution of the Recurrence Relation
Given the recurrence relation for binary search:
T(n) = T(n/2) + O(1)
We want to find the time complexity T(n).

Step-by-Step Solution
1. Understand the Recurrence:
o T(n) represents the time it takes to perform binary search on
an array of size n.
o The algorithm divides the array into two halves, so the
problem size reduces from n to n/2.
o The O(1) term represents the constant-time operations (like
checking the middle element).
2. Unroll the Recurrence (also known as the substitution method):
• Start by expanding the recurrence relation by substituting T(n) with
its definition: T(n) = T(n/2) + O(1)
• Now, substitute T(n/2) with its recurrence: T(n) = [T(n/4) + O(1)] + O(1)
• Substitute again: T(n) = [[T(n/8) + O(1)] + O(1)] + O(1)
• After k steps, the recurrence relation becomes:
T(n) = T(n/2^k) + k·O(1)
3. Determine the Base Case: The base case occurs when the size of the
problem is reduced to 1, i.e., n/2^k = 1.
Solve for k: n/2^k = 1 ⇒ n = 2^k ⇒ k = log2(n)
4. Substitute the Base Case:
• At the base case, k = log2(n); substituting back into the recurrence:
T(n) = T(1) + log2(n)·O(1)
• T(1) is a constant, say O(1), because checking an array of size 1 takes
constant time.

5. Final Time Complexity:
• The time complexity now simplifies to: T(n) = O(1) + log2(n)·O(1)
• Which simplifies further to: T(n) = O(log n)

Conclusion
The time complexity of binary search, as derived from the recurrence
relation T(n) = T(n/2) + O(1), is O(log n). This reflects the logarithmic
nature of binary search, where the problem size is halved with each
recursive step.
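
As a quick sanity check of this result, a small Python sketch that counts
the halving steps and compares the count with log2(n) (purely
illustrative):

    import math

    def halving_steps(n):
        # Count how many times n can be halved before reaching 1,
        # mirroring the recurrence T(n) = T(n/2) + O(1).
        steps = 0
        while n > 1:
            n //= 2
            steps += 1
        return steps

    for n in [16, 1024, 1_000_000]:
        print(n, halving_steps(n), round(math.log2(n), 2))
    # The step count tracks log2(n), matching the O(log n) bound.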

Recurrence Relation for Quick Sort ->

Worst case: T(n) = T(n-1) + n        Average case: O(n log n)

T(n) = T(n-1) + n ---------------------1
T(n-1) = T(n-2) + (n-1) -------------2
T(n-2) = T(n-3) + (n-2) -------------3

Substitute equation 2 for T(n-1) in equation 1:

T(n) = T(n-2) + (n-1) + n ---------4

Substitute equation 3 for T(n-2) in equation 4:
T(n) = T(n-3) + (n-2) + (n-1) + n ...

Continuing down to the base case:
T(n) = T(1) + 2 + 3 + 4 + ... + (n-1) + n
T(n) = n(n+1)/2 = n^2/2 + n/2 = O(n^2)
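
A tiny Python sketch that evaluates this worst-case recurrence
numerically and compares it with the closed form n(n+1)/2 (illustrative
only; T(1) is taken as 1):

    def t_worst(n):
        # Worst-case quick sort recurrence: T(n) = T(n-1) + n, with T(1) = 1.
        total = 1
        for k in range(2, n + 1):
            total += k
        return total

    for n in [10, 100, 1000]:
        print(n, t_worst(n), n * (n + 1) // 2)
    # Both columns agree, confirming the O(n^2) closed form.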

Recurrence Relation for Merge Sort ->

Time Complexity
• Best Case: O(n log n)
• Average Case: O(n log n)
• Worst Case: O(n log n)

Merge sort divides the array into two halves and merges them in linear
time, so its recurrence relation is:

T(n) = 2T(n/2) + n

Unrolling the recurrence by substitution:

T(n) = 2T(n/2) + n
T(n) = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
T(n) = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n

After k steps:

T(n) = 2^k · T(n/2^k) + k·n

The base case is reached when n/2^k = 1, i.e., k = log2(n). Substituting:

T(n) = n·T(1) + n·log2(n)

Since T(1) is a constant, this simplifies to T(n) = O(n log n).

So the worst-case time complexity of merge sort is O(n log n).
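
As with the quick sort recurrence, a small Python check of this solution
against n·log2(n) (illustrative only; T(1) is taken as 1 and n is
restricted to powers of two so the halves divide evenly):

    import math

    def t_merge(n):
        # Merge sort recurrence: T(n) = 2T(n/2) + n, with T(1) = 1.
        if n == 1:
            return 1
        return 2 * t_merge(n // 2) + n

    for n in [16, 256, 4096]:
        print(n, t_merge(n), round(n * math.log2(n)))
    # T(n) stays within a constant factor of n·log2(n), i.e., O(n log n).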

6. Divide and Conquer Algorithm.


The Divide and Conquer algorithm is a powerful problem-solving
paradigm that involves breaking a problem down into smaller, more
manageable subproblems, solving each subproblem individually, and then
combining their solutions to solve the original problem. This approach is
often used to design efficient algorithms, particularly in sorting, searching,
and various computational geometry problems.
Steps in Divide and Conquer
1. Divide:
o Break the original problem into smaller, independent
subproblems. These subproblems should ideally be smaller
instances of the same problem.
2. Conquer:
o Solve each subproblem recursively. If the subproblems are
small enough, they can be solved directly (base case).
3. Combine:
o Merge the solutions of the subproblems to obtain the solution
for the original problem.
Advantages
• Efficiency: Divide and conquer algorithms are often more efficient,
especially for large inputs, because reducing the problem size by a
constant factor keeps the recursion depth logarithmic.
• Parallelism: The independent subproblems can often be solved in
parallel, leading to potential performance improvements in multi-
core or distributed computing environments.
Disadvantages
• Overhead: The recursive nature of these algorithms can lead to
overhead due to function calls and stack usage.
• Space Complexity: Some divide and conquer algorithms (like merge
sort) require additional memory for merging, leading to higher
space complexity.

General Divide and Conquer Algorithm

function DivideAndConquer(problem):
    // Base case: if the problem is small enough or simple enough,
    // solve it directly
    if problem is small enough:
        return the solution to the base case

    // Divide: split the problem into smaller subproblems
    subproblem1, subproblem2, ..., subproblemN = divide(problem)

    // Conquer: recursively solve each subproblem
    solution1 = DivideAndConquer(subproblem1)
    solution2 = DivideAndConquer(subproblem2)
    ...
    solutionN = DivideAndConquer(subproblemN)

    // Combine: merge the subproblem solutions into the solution
    // to the original problem
    final_solution = combine(solution1, solution2, ..., solutionN)

    return final_solution
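
To make the template concrete, a minimal Python instantiation that finds
the maximum of a list by divide and conquer (a hypothetical example
chosen for brevity; merge sort and quick sort above follow the same
shape):

    def max_divide_and_conquer(arr, low, high):
        # Base case: a single element is its own maximum.
        if low == high:
            return arr[low]
        # Divide: split the range into two halves.
        mid = (low + high) // 2
        # Conquer: recursively find the maximum of each half.
        left_max = max_divide_and_conquer(arr, low, mid)
        right_max = max_divide_and_conquer(arr, mid + 1, high)
        # Combine: the overall maximum is the larger of the two.
        return left_max if left_max >= right_max else right_max

    # Example usage:
    data = [3, 41, 7, 19, 28, 5]
    print(max_divide_and_conquer(data, 0, len(data) - 1))  # 41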
