Lecture - 09

The document provides an overview of the Master Theorem, which is used to analyze the running time of divide and conquer algorithms. It explains the theorem's application through various cases, including examples like Merge Sort and Quick Sort, detailing how to determine the time complexity. Additionally, it discusses the advantages and disadvantages of the divide and conquer approach, emphasizing its scalability, efficiency, and modularity, while also noting potential overhead and complexity issues.


Advanced Algorithms Analysis

Prof. Dr. Ehsan Ullah Munir


The Master Theorem
• Given: a divide-and-conquer algorithm
• An algorithm that divides a problem of size n into a subproblems, each of size n/b
• Let the cost of each stage (i.e., the work to divide the problem and combine the solved subproblems) be described by the function f(n)
• Then the Master Theorem gives us a cookbook for the algorithm's running time:

The Master Theorem
• If T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1, then:

T(n) = Θ(n^(log_b a))        if f(n) = O(n^(log_b a − ε)) for some ε > 0
T(n) = Θ(n^(log_b a) lg n)   if f(n) = Θ(n^(log_b a))
T(n) = Θ(f(n))               if f(n) = Ω(n^(log_b a + ε)) for some ε > 0,
                             and a f(n/b) ≤ c f(n) for some c < 1 and all sufficiently large n
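The three cases above can be checked mechanically when the driving function is a simple polynomial. As a sketch (this helper is illustrative, not part of the lecture), the following classifies T(n) = aT(n/b) + Θ(n^k):

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) with the Master Theorem.

    Illustrative helper: it only handles polynomial driving functions
    f(n) = n^k, for which case 3's regularity condition holds automatically.
    """
    crit = math.log(a, b)                 # critical exponent log_b(a)
    if math.isclose(k, crit):             # case 2: f matches n^(log_b a)
        return f"Theta(n^{crit:g} lg n)"
    if k < crit:                          # case 1: f grows polynomially slower
        return f"Theta(n^{crit:g})"
    return f"Theta(n^{k:g})"              # case 3: f dominates

print(master_case(8, 2, 2))   # T(n) = 8T(n/2) + n^2  ->  Theta(n^3)
print(master_case(2, 2, 1))   # merge sort            ->  Theta(n^1 lg n)
```

The float comparison uses `math.isclose` because `log_b(a)` is computed in floating point.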
Using The Master Method: Case 1
• Compare f(n) with n^(log_b a) (<, =, >)

• T(n) = 8T(n/2) + n²
• a = 8, b = 2, f(n) = n²
• n^(log_b a) = n^(log₂ 8) = n³
• Since f(n) = O(n^(log₂ 8 − ε)) = O(n^(3 − 0.5)) = O(n^2.5), where ε = 0.5
• Case 1 applies: T(n) = Θ(n^(log_b a)) when f(n) = O(n^(log_b a − ε))
• Thus the solution is T(n) = Θ(n³)
Using The Master Method: Case 1
• T(n) = 9T(n/3) + n
• a = 9, b = 3, f(n) = n
• n^(log_b a) = n^(log₃ 9) = n²
• Since f(n) = O(n^(log₃ 9 − ε)) = O(n^(2 − 0.5)) = O(n^1.5), where ε = 0.5
• Case 1 applies: T(n) = Θ(n^(log_b a)) when f(n) = O(n^(log_b a − ε))
• Thus the solution is T(n) = Θ(n²)
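The Θ(n²) claim can be sanity-checked by evaluating the recurrence exactly for powers of 3 (the base case T(1) = 1 below is an assumption for illustration):

```python
def T(n):
    """Evaluate T(n) = 9*T(n/3) + n exactly for n a power of 3.

    The base case T(1) = 1 is assumed for illustration.
    """
    if n == 1:
        return 1
    return 9 * T(n // 3) + n

# If T(n) = Theta(n^2), the ratio T(n)/n^2 should settle to a constant.
for n in (3**4, 3**6, 3**8):
    print(n, T(n) / n**2)
```

The printed ratios approach 1.5, consistent with T(n) = n² + n(n − 1)/2 = Θ(n²).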
Using The Master Method: Case 2
• T(n) = T(2n/3) + 1
• a = 1, b = 3/2, f(n) = 1
• n^(log_b a) = n^(log_(3/2) 1) = n⁰ = 1
• Since f(n) = Θ(n^(log_b a)) = Θ(1)
• Case 2 applies: T(n) = Θ(n^(log_b a) lg n) when f(n) = Θ(n^(log_b a))
• Thus the solution is T(n) = Θ(lg n)
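Evaluating the recurrence numerically shows it tracking a logarithm (the base case T(1) = 1 is an assumption for illustration):

```python
import math

def T(n):
    """Evaluate T(n) = T(2n/3) + 1, assuming the base case T(1) = 1."""
    if n <= 1:
        return 1
    return T(2 * n // 3) + 1

# T(n) should track log_{3/2}(n), i.e. it is Theta(lg n).
for n in (10**2, 10**4, 10**6):
    print(n, T(n), round(math.log(n, 1.5), 1))
```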
How to Change the Base of a Log

log_b n = log₁₀ n / log₁₀ b    e.g. log₂ 3163 = log₁₀ 3163 / log₁₀ 2

Log: the exponent or power to which a base must be raised to yield a given number. Expressed mathematically, x is the logarithm of n to the base b if b^x = n, in which case one writes x = log_b n.
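The change-of-base identity is easy to confirm with the slide's own example, log₂ 3163:

```python
import math

n, b = 3163, 2
# Change of base: log_b(n) = log10(n) / log10(b)
via_base_10 = math.log10(n) / math.log10(b)
direct = math.log2(n)
print(round(via_base_10, 3), round(direct, 3))   # both 11.627
```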
Using The Master Method: Case 3
• T(n) = 3T(n/4) + n lg n
• a = 3, b = 4, f(n) = n lg n
• n^(log_b a) = n^(log₄ 3) = n^0.793 ≈ n^0.8
• Since f(n) = Ω(n^(log₄ 3 + ε)) = Ω(n^(0.8 + 0.2)) = Ω(n), where ε ≈ 0.2
• and for sufficiently large n, a f(n/b) = 3(n/4) lg(n/4) ≤ (3/4) n lg n = c f(n) for c = 3/4
• Case 3 applies: T(n) = Θ(f(n)) when f(n) = Ω(n^(log_b a + ε)) and the regularity condition holds
• Thus the solution is T(n) = Θ(n lg n)
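The regularity condition a f(n/b) ≤ c f(n) can be spot-checked numerically for this example:

```python
import math

# Regularity check for case 3 of T(n) = 3T(n/4) + n lg n:
# a*f(n/b) <= c*f(n) must hold for some c < 1 once n is large enough.
def f(n):
    return n * math.log2(n)

a, b, c = 3, 4, 3 / 4
for n in (10**2, 10**4, 10**6):
    print(n, a * f(n / b) <= c * f(n))   # True
```

Algebraically, 3(n/4) lg(n/4) = (3/4) n (lg n − 2) ≤ (3/4) n lg n, so the check holds for all n > 1.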
How to Change the Base of a Log

log_b n = log₁₀ n / log₁₀ b    e.g. log₄ 3 = log₁₀ 3 / log₁₀ 4
Using The Master Method: Example
• T(n) = T(n-1) + 1
• Here the Master Theorem does not apply: the problem size shrinks by subtraction rather than division, so there is no valid b > 1. Unrolling the recurrence gives T(n) = T(0) + n = Θ(n).
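Unrolling this recurrence is simple enough to do in a loop (the base case T(0) = 1 below is an assumption for illustration):

```python
def T(n):
    """Unroll T(n) = T(n-1) + 1 iteratively, assuming T(0) = 1."""
    total = 1              # T(0)
    for _ in range(n):
        total += 1         # one unit of work per recursive call
    return total

print(T(1000))   # 1001, i.e. T(n) = n + 1 = Theta(n)
```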
Divide and Conquer
• The problem is divided into smaller sub-problems, and each sub-problem is then solved independently.
• When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no further division is possible.
• Those smallest sub-problems (the base cases) are solved directly, because they take less time to compute.
• The solutions of all sub-problems are finally merged to obtain the solution to the original problem.
Divide and Conquer
• Broadly, the divide-and-conquer approach can be understood as a three-step process.
• Divide/Break
• This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach, dividing the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in size but still represent some part of the actual problem.

• Conquer/Solve
• This step receives many smaller sub-problems to be solved. Generally, at this level, the problems are small enough to be considered 'solved' on their own.

• Merge/Combine
• When the smaller sub-problems are solved, this stage recursively combines them until they form the solution to the original problem. The approach works recursively, and the conquer and merge steps work so closely together that they appear as one.
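The three steps above can be sketched as a generic skeleton. All parameter names here are illustrative, not from the lecture:

```python
def divide_and_conquer(problem, is_atomic, solve_directly, divide, combine):
    """Generic divide-and-conquer skeleton mirroring the three steps above."""
    if is_atomic(problem):                     # Divide stops at atomic sub-problems
        return solve_directly(problem)         # Conquer: solve the base case directly
    subproblems = divide(problem)
    solutions = [divide_and_conquer(p, is_atomic, solve_directly, divide, combine)
                 for p in subproblems]
    return combine(solutions)                  # Merge: combine the sub-solutions

# Example: summing a list by repeatedly halving it.
total = divide_and_conquer(
    [3, 1, 4, 1, 5, 9, 2, 6],
    is_atomic=lambda xs: len(xs) <= 1,
    solve_directly=lambda xs: xs[0] if xs else 0,
    divide=lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
    combine=sum,
)
print(total)   # 31
```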
Divide and Conquer - Examples
• Some well-known divide-and-conquer algorithms:
• Binary Search
• Merge Sort
• Quick Sort
• Strassen's Matrix Multiplication
• Closest pair (points)
• Karatsuba (for fast multiplication)
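Binary Search, the first algorithm in the list, is perhaps the simplest case: each step divides the search range in half and conquers only one of the halves. A minimal sketch:

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns an index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # divide: pick the middle element
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1           # conquer: continue in the right half
        else:
            hi = mid - 1           # ... or in the left half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))   # 4
```

Its recurrence is T(n) = T(n/2) + 1, which the Master Theorem (case 2) solves to Θ(lg n).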
Divide and Conquer - Pros
• Scalability: Since the divide and conquer strategy is very scalable, it may be used
in situations of all sizes. The technique may handle larger issues with little
alteration to the original method by splitting the problem into smaller sub-problems.
• Efficiency: For issues where the subproblems are independent or only have minor
dependencies, the divide-and-conquer strategy can be quite efficient. The method
can tackle the sub-problems simultaneously, which cuts down on the total amount
of time needed to solve the problem.
• Modularity: The divide and conquer strategy encourages modularity, which allows
for the development of a larger solution by merging the solutions into smaller
subproblems. As a result, it is simpler to comprehend and alter the solution.
• Parallelism: The divide and conquer strategy frequently supports parallelism,
allowing various processors or threads to work on the subproblems simultaneously.
In multi-core or distributed computing environments, this can result in faster
problem-solving and increased efficiency.
Divide and Conquer – Cons
• Overhead: The overhead of dividing the problem into smaller subproblems is one
aspect of the divide and conquer strategy, and it can contribute to the compute
storage, and communication overhead.
• Complexity: The recursive structure of the algorithm makes the divide-and-
conquer strategy susceptible to adding more complexity. Debugging the algorithm
and understanding it can be difficult.
• Dependencies: For situations with significant dependency among the
subproblems, the divide and conquer strategy is ineffective. High interdependence
between the subproblems can result in unsatisfactory solutions when they are
solved separately.
• Solution Integration: Combining the solutions found through resolving separate
subproblems might not necessarily result in the best solution to the main problem.
The approach's total complexity may increase because of the integration of the
solutions, which can be challenging and may call for additional processing to
provide a globally optimal solution.
Merge Sort
• The Merge Sort algorithm breaks the array down into smaller and smaller
pieces.
• The array becomes sorted when the sub-arrays are merged back.

Merge Sort - Example

Merge Sort – MERGE Function

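The MERGE function slide's figure did not survive extraction. As a sketch (a list-based Python version, not the lecture's exact pseudocode), MERGE combines two already-sorted halves, and Merge Sort applies it recursively:

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list (the MERGE step)."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])    # at most one of these has leftovers
    result.extend(right[j:])
    return result

def merge_sort(arr):
    """Split in half, sort each half recursively, then merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))
```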
Merge Sort – Time Complexity
• At each level of the recursion tree, a total of 𝑛 comparisons are needed across all merges.

Merge Sort – Time Complexity
• The recurrence for merge sort is:

T(n) = c             if n = 1
T(n) = 2T(n/2) + cn  if n > 1
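Evaluating this recurrence exactly for powers of 2 confirms the Θ(n lg n) bound (taking c = 1 for illustration):

```python
import math

def T(n, c=1):
    """Evaluate the merge-sort recurrence exactly for n a power of 2."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# Closed form: T(n) = c*n*lg(n) + c*n, i.e. Theta(n lg n).
for n in (2**4, 2**10):
    print(n, T(n), n * int(math.log2(n)) + n)
```

The Master Theorem gives the same answer: a = 2, b = 2, f(n) = cn = Θ(n^(log₂ 2)), so case 2 applies and T(n) = Θ(n lg n).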
Quick Sort
• The Quicksort algorithm chooses
a value as the 'pivot' element, and
moves the other values so that
higher values are on the right of
the pivot element, and lower
values are on the left of the pivot
element.
• The Quicksort algorithm then
continues to sort the sub-arrays
on the left and right side of the
pivot element recursively until the
array is sorted.

Quick Sort – Working

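The working slide's figure did not survive extraction. As a sketch of the idea (a simple list-based version with the last element as pivot, not the lecture's exact in-place pseudocode):

```python
def quicksort(arr):
    """Quicksort: pick a pivot, partition into lower/higher, recurse."""
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]                                # last element as pivot
    lower = [x for x in arr[:-1] if x <= pivot]    # values left of the pivot
    higher = [x for x in arr[:-1] if x > pivot]    # values right of the pivot
    return quicksort(lower) + [pivot] + quicksort(higher)

print(quicksort([10, 80, 30, 90, 40, 50, 70]))
```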
Quick Sort – Time Complexity (Worst-case)
• The worst-case behavior for quicksort occurs when the partitioning produces
one subproblem with 𝑛 − 1 elements and one with 0 elements.
• Let us assume that this unbalanced partitioning arises in each recursive call.
• The partitioning costs O(𝑛) time.
• Since the recursive call on an array of size 0 just returns without doing anything, T(0) = O(1), and the recurrence for the running time is

T(n) = T(n − 1) + T(0) + O(n)
     = T(n − 1) + O(n)
     = O(n²)
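The O(n²) total comes from summing the partitioning costs n, n − 1, …, 1, which is the familiar arithmetic series:

```python
def worst_case_cost(n):
    """Total partition work when every split is (n-1, 0): sum of 1..n."""
    return sum(range(1, n + 1))

# Matches the closed form n(n+1)/2, hence O(n^2).
print(worst_case_cost(1000), 1000 * 1001 // 2)   # 500500 500500
```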
Quick Sort – Time Complexity (Average-case)
• Quicksort is fast on average
because the array is split
approximately in half each
time Quicksort runs
recursively.
• The recurrence for the running time is

T(n) = c             if n = 1
T(n) = 2T(n/2) + cn  if n > 1

which solves to O(n lg n).
