Topic 2: Divide and Conquer Method

Design and Analysis of Algorithms

Divide and Conquer Method


Divide-and-Conquer Approach
• The divide-and-conquer strategy suggests splitting the input into k distinct subsets, 1 < k ≤ n, yielding k subproblems
• These subproblems must be solved
• Then a method must be found to combine the sub-solutions into a solution of the whole problem
• If the subproblems are still relatively large, the divide-and-conquer strategy can be reapplied
• Often the subproblems resulting from a divide-and-conquer design are of the same type as the original problem
• In those cases, reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm
Divide-and-Conquer Approach
• Divide-and-conquer algorithms are often recursive in structure
• They involve three steps at each level of the recursion:
• Divide the problem into a number of subproblems that are smaller instances of the same problem
• Conquer the subproblems by solving them recursively; if the subproblem sizes are small enough, just solve them in a straightforward manner
• Combine, if necessary, the solutions to the subproblems into the solution for the original problem
• The base cases for the recursion in divide-and-conquer algorithms are subproblems of constant size
• The analysis can be done using recurrence equations
Divide-and-Conquer Example: Binary Search
1. Algorithm: BinSearch(A, n, key)
2. Input: A sorted array A[1, …, n], the key element key
3. Output: The index j if key is present, 0 (failure) if key is absent
4. Begin
   1. low ← 1; high ← n
   2. While low ≤ high Do
      1. mid ← Floor((low + high)/2)
      2. If key < A[mid] Do high ← mid – 1
      3. Else If key > A[mid] Do low ← mid + 1
      4. Else Return mid
   3. End While
   4. Return 0 /* 0 indicates failure, as valid indices are 1, …, n */
5. End
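For concreteness, here is a direct Python sketch of the pseudocode above (our own translation: 0-based indexing, so failure is signalled with -1 rather than 0):

def bin_search(a, key):
    """Binary search in sorted list a; returns an index of key, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2          # Floor((low + high)/2)
        if key < a[mid]:
            high = mid - 1               # key is in the left half, if present
        elif key > a[mid]:
            low = mid + 1                # key is in the right half, if present
        else:
            return mid                   # found
    return -1                            # unsuccessful search

# Example: bin_search([6, 13, 14, 25, 33, 43, 51], 33) returns 4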
Binary Search Demo
Binary search. Given a value and a sorted array a[], find index i such that a[i] = value, or report that no such index exists.

Invariant. The algorithm maintains a[lo] ≤ value ≤ a[hi].

Ex. Binary search for 33 in:

 6 13 14 25 33 43 51 53 64 72 84 93 95 96 97   (values a[0..14])
 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14   (indices)

Starting from lo = 0, hi = 14, each step probes mid = ⌊(lo + hi)/2⌋ and halves the range:
mid = 7: a[7] = 53 > 33, so hi = 6
mid = 3: a[3] = 25 < 33, so lo = 4
mid = 5: a[5] = 43 > 33, so hi = 4
mid = 4: a[4] = 33, found; return 4
Decision Tree for Binary Search (n = 13)
Binary Search: Time Complexity Analysis
• Theorem: If n is in the range [2^(k–1), 2^k), then BinSearch makes at most k element comparisons for a successful search and either k – 1 or k comparisons for an unsuccessful search
• Proof hints:
• Consider the binary decision tree for the scenario
• Note that all successful searches end at a circular (internal) node
• Also note that all unsuccessful searches end at a square (external) node
• If 2^(k–1) ≤ n < 2^k, then
• All circular nodes are at levels 1, 2, …, k
• All square nodes are at levels k and k + 1
• Corollary: The worst-case time for a successful search is O(log n), and the worst-case time for an unsuccessful search is Θ(log n)
Binary Search: Average-Case Time Complexity
• Let us define the following terms for a binary decision tree:
Internal path length (I): Sum of the distances of all internal nodes from the root
External path length (E): Sum of the distances of all external nodes from the root
• Then, using mathematical induction, we can show that
E = I + 2n ∙ ∙ ∙ (1)
• Let us also define:
As(n): Average number of comparisons in a successful search
Au(n): Average number of comparisons in an unsuccessful search
Binary Search: Average-Case Time Complexity
• Note that the number of comparisons needed to find an element is one more than the distance of its node from the root of the decision tree, so
As(n) = 1 + I/n ∙ ∙ ∙ (2)
• Also note that a binary decision tree with n internal nodes has n + 1 external nodes, which yields
Au(n) = E/(n + 1) ∙ ∙ ∙ (3)
• Combining (1), (2) and (3):
As(n) = (1 + 1/n) Au(n) – 1 ∙ ∙ ∙ (4)
• As all external nodes are at levels k and k + 1, we derive from (3):
Au(n) = Θ(log n)
• Then, we conclude further from (4): As(n) = Θ(log n)
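For completeness, the short algebra that turns (1), (2) and (3) into (4):

\[
A_s(n) = 1 + \frac{I}{n} = 1 + \frac{E - 2n}{n} = \frac{E}{n} - 1
       = \frac{n+1}{n}\cdot\frac{E}{n+1} - 1 = \left(1 + \frac{1}{n}\right)A_u(n) - 1
\]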
Analysis of Divide-and-Conquer Algorithms
• Let T(n) be the running time on a problem of size n
• If the problem size is sufficiently small, say n < c for a constant c, then T(n) = Θ(1)
• Suppose our target D&C algorithm divides the whole problem into a subproblems, each of size 1/b of the original
• Each subproblem then takes T(n/b) time
• Also, let D(n) be the time to divide the problem and C(n) the time to combine the solutions of the subproblems
• Then, for sufficiently large n, we have T(n) = aT(n/b) + D(n) + C(n)
Merge Sort: The Approach
• To sort an array A[p..r]:
• Divide
• Divide the n-element sequence to be sorted into two subsequences of n/2 elements each
• Conquer
• Sort the subsequences recursively using merge sort
• When the size of a sequence reaches 1, there is nothing more to do
• Combine
• Merge the two sorted subsequences
How to Merge

Here are two lists to be merged:

First: (12, 16, 17, 20, 21, 27)
Second: (9, 10, 11, 12, 19)

Compare 12 and 9
First: (12, 16, 17, 20, 21, 27)
Second: (10, 11, 12, 19)
New: (9)

Compare 12 and 10
First: (12, 16, 17, 20, 21, 27)
Second: (11, 12, 19)
New: (9, 10)

Compare 12 and 11
First: (12, 16, 17, 20, 21, 27)
Second: (12, 19)
New: (9, 10, 11)

Compare 12 and 12
First: (16, 17, 20, 21, 27)
Second: (12, 19)
New: (9, 10, 11, 12)

Compare 16 and 12
First: (16, 17, 20, 21, 27)
Second: (19)
New: (9, 10, 11, 12, 12)

Compare 16 and 19
First: (17, 20, 21, 27)
Second: (19)
New: (9, 10, 11, 12, 12, 16)

Compare 17 and 19
First: (20, 21, 27)
Second: (19)
New: (9, 10, 11, 12, 12, 16, 17)

Compare 20 and 19
First: (20, 21, 27)
Second: ( )
New: (9, 10, 11, 12, 12, 16, 17, 19)

Second is now empty, so append the rest of First:
First: ( )
Second: ( )
New: (9, 10, 11, 12, 12, 16, 17, 19, 20, 21, 27)
Merge Sort Tree
• An execution of merge-sort is depicted by a binary tree
– each node represents a recursive call of merge-sort and stores
  the unsorted sequence before the execution and its partition
  the sorted sequence at the end of the execution
– the root is the initial call
– the leaves are calls on subsequences of size 0 or 1

Example (each node shows its input → its sorted output):

7 2 9 4 → 2 4 7 9
  7 2 → 2 7        9 4 → 4 9
    7 → 7   2 → 2    9 → 9   4 → 4
Execution Example
• Merge-sort on the sequence 7 2 9 4 3 8 6 1 proceeds depth-first: partition 7 2 9 4 3 8 6 1 into 7 2 9 4 and 3 8 6 1; recursively partition 7 2 9 4 into 7 2 and 9 4; reach the base cases 7 and 2 and merge them to 2 7; likewise merge 9 4 to 4 9; merge those results to 2 4 7 9; process the right half the same way (3 8 → 3 8, 6 1 → 1 6, merged to 1 3 6 8); finally merge the two halves
• The completed recursion tree (each node shows its input → its sorted output):

7 2 9 4 3 8 6 1 → 1 2 3 4 6 7 8 9
  7 2 9 4 → 2 4 7 9              3 8 6 1 → 1 3 6 8
    7 2 → 2 7    9 4 → 4 9         3 8 → 3 8    6 1 → 1 6
      7 → 7  2 → 2  9 → 9  4 → 4     3 → 3  8 → 8  6 → 6  1 → 1
Merge - Pseudocode
Alg.: MERGE(A, p, q, r)
1. Compute n1 = q – p + 1 and n2 = r – q
2. Copy the first n1 elements into L[1 . . n1 + 1] and the next n2 elements into R[1 . . n2 + 1]
3. L[n1 + 1] ← ∞; R[n2 + 1] ← ∞   /* sentinels */
4. i ← 1; j ← 1
5. for k ← p to r
6.    do if L[i] ≤ R[j]
7.         then A[k] ← L[i]
8.              i ← i + 1
9.         else A[k] ← R[j]
10.             j ← j + 1

Example: A[p . . r] = 2 4 5 7 | 1 2 3 6 (two sorted runs split at q), so L = 2 4 5 7 ∞ and R = 1 2 3 6 ∞
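A Python sketch of MERGE (our own rendering of the pseudocode, with float('inf') playing the role of the ∞ sentinels and 0-based, inclusive bounds):

def merge(a, p, q, r):
    """Merge the sorted runs a[p..q] and a[q+1..r] in place."""
    left = a[p:q + 1] + [float('inf')]       # L with sentinel
    right = a[q + 1:r + 1] + [float('inf')]  # R with sentinel
    i = j = 0
    for k in range(p, r + 1):                # fill A[p..r], smaller head first
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

# Example: a = [2, 4, 5, 7, 1, 2, 3, 6]; merge(a, 0, 3, 7)
# leaves a = [1, 2, 2, 3, 4, 5, 6, 7]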
Mergesort - Pseudocode
Alg.: MERGE-SORT(A, p, r)          /* example array: 5 2 4 7 1 3 2 6 */
if p < r                           /* check for base case */
  then q ← ⌊(p + r)/2⌋             /* divide */
       MERGE-SORT(A, p, q)         /* conquer */
       MERGE-SORT(A, q + 1, r)     /* conquer */
       MERGE(A, p, q, r)           /* combine */

• Initial call: MERGE-SORT(A, 1, n)
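A compact top-down sketch in Python (illustrative, not the in-place version above: it returns a new list, and the standard-library heapq.merge stands in for MERGE):

import heapq

def merge_sort(a):
    """Top-down merge sort; returns a new sorted list."""
    if len(a) <= 1:                           # base case: size 0 or 1
        return list(a)
    mid = len(a) // 2                         # divide
    left = merge_sort(a[:mid])                # conquer
    right = merge_sort(a[mid:])               # conquer
    return list(heapq.merge(left, right))     # combine

# Example: merge_sort([5, 2, 4, 7, 1, 3, 2, 6]) returns [1, 2, 2, 3, 4, 5, 6, 7]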
Time Complexity: Merge
• Initialization (copying into temporary arrays):
• Θ(n1 + n2) = Θ(n)
• Adding the elements to the final array:
• n iterations, each taking constant time ⇒ Θ(n)
• Total time for Merge:
• Θ(n)
Time Complexity: Mergesort
• Divide:
• compute q as the average of p and r: D(n) = Θ(1)
• Conquer:
• recursively solve 2 subproblems, each of size n/2 ⇒ 2T(n/2)
• Combine:
• MERGE on an n-element subarray takes Θ(n) time ⇒ C(n) = Θ(n)

         Θ(1)              if n = 1
T(n) =
         2T(n/2) + Θ(n)    if n > 1
Solve the Recurrence

         c               if n = 1
T(n) =
         2T(n/2) + cn    if n > 1

Use the Master Theorem:
• Compare n^(log_2 2) = n with f(n) = cn
• Case 2 of the Master Theorem applies: T(n) = Θ(n lg n)
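Spelled out, a quick check of the Master Theorem parameters for this recurrence:

\[
a = 2,\quad b = 2,\quad n^{\log_b a} = n^{\log_2 2} = n,\qquad f(n) = cn = \Theta\!\big(n^{\log_b a}\big)
\]

so Case 2 applies and T(n) = Θ(n lg n).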
Merge Sort - Discussion
• Running time is insensitive to the input:
• Merge sort pays no attention to the original order of the list
• It keeps dividing the list in half until it reaches sub-lists of length 1, then starts merging
• Therefore, the average-case time complexity of merge sort is the same as its worst-case time complexity

• Advantage:
• Guaranteed to run in Θ(n lg n)

• Disadvantage:
• Requires extra space: Θ(n)
Quicksort
• The problem: same as merge sort

• Compared with merge sort:
• Quicksort also uses the divide-and-conquer strategy
• Quicksort eliminates the merging operation
Quicksort
• Sort an array A[p…r]

• Divide
• Partition the array A into 2 subarrays A[p..q] and A[q+1..r], such that each element of A[p..q] is smaller than or equal to each element in A[q+1..r] (i.e., A[p..q] ≤ A[q+1..r])
• Need to find index q to partition the array
Quicksort
• Conquer
• Recursively sort A[p..q] and A[q+1..r] using Quicksort
• Combine
• Trivial: the arrays are sorted in place
• No additional work is required to combine them
• The entire array is now sorted
Quicksort: Pseudocode
Alg.: QUICKSORT(A, p, r)        /* initially: p = 1, r = n */
if p < r
  then q ← PARTITION(A, p, r)
       QUICKSORT(A, p, q)
       QUICKSORT(A, q + 1, r)

Recurrence: T(n) = T(q) + T(n – q) + f(n), where f(n) is the cost of PARTITION()
Quicksort: Partitioning the Array
• Choosing PARTITION()
• There are different ways to do this
• Each has its own advantages/disadvantages

• Hoare partition
• Select a pivot element x around which to partition
• Grow two regions from the two ends, A[p…i] ≤ x and x ≤ A[j…r], where i advances from the left and j from the right
Quicksort: Partitioning the Array
• A key step in the Quicksort algorithm is partitioning the array
• We choose some (any) number p in the array to use as a pivot
• We partition the array into three parts: the numbers less than p, then p itself, then the numbers greater than or equal to p
Quicksort: Partitioning the Array
• Choose an array value (say, the first or the last) to use as the pivot
• Starting from the left end, find the first element that is greater than
or equal to the pivot
• Searching backward from the right end, find the first element that is
less than the pivot
• Interchange (swap) these two elements
• Repeat, searching from where we left off, until done
Quicksort: Partitioning the Array (Algorithm)
• To partition a[left...right]:
1. Set pivot = a[left], l = left + 1, r = right;
2. Repeat
   2.1. while l < right and a[l] < pivot, set l = l + 1
   2.2. while r > left and a[r] >= pivot, set r = r - 1
   2.3. if l < r, swap a[l] and a[r]
   until l ≥ r
3. Set a[left] = a[r], a[r] = pivot
4. Terminate
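A Python sketch of this partition scheme and a quicksort driver built on it (our own rendering; note that because this variant places the pivot at its final index q, the recursive calls exclude position q, unlike the generic QUICKSORT(A, p, q) form shown earlier):

def partition(a, left, right):
    """Partition a[left..right] around pivot = a[left]; return the pivot's final index."""
    pivot = a[left]
    l, r = left + 1, right
    while True:
        while l < right and a[l] < pivot:    # scan right for first element >= pivot
            l += 1
        while r > left and a[r] >= pivot:    # scan left for first element < pivot
            r -= 1
        if l < r:
            a[l], a[r] = a[r], a[l]          # put the pair on the correct sides
        else:
            break                            # pointers crossed
    a[left] = a[r]
    a[r] = pivot                             # pivot lands in its final slot
    return r

def quicksort(a, p, r):
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)               # pivot at q is already in place
        quicksort(a, q + 1, r)

# Example: a = [4, 3, 6, 9, 2, 4, 3, 1, 2, 1, 8, 9, 3, 5, 6]
# quicksort(a, 0, len(a) - 1) sorts a in place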
Quicksort: Partitioning the Array (Example)
• To sort the array: 4 3 6 9 2 4 3 1 2 1 8 9 3 5 6
• choose pivot (first element): 4
• search: 4 3 [6] 9 2 4 3 1 2 1 8 9 [3] 5 6
• swap: 4 3 3 9 2 4 3 1 2 1 8 9 6 5 6
• search: 4 3 3 [9] 2 4 3 1 2 [1] 8 9 6 5 6
• swap: 4 3 3 1 2 4 3 1 2 9 8 9 6 5 6
• search: 4 3 3 1 2 [4] 3 1 [2] 9 8 9 6 5 6
• swap: 4 3 3 1 2 2 3 1 4 9 8 9 6 5 6
• search: 4 3 3 1 2 2 3 [1] [4] 9 8 9 6 5 6 (crossover: l has passed r)
• swap with pivot: 1 3 3 1 2 2 3 4 4 9 8 9 6 5 6
Quicksort: Example
• Each element is visited once during partitioning!
• Running time of PARTITION: Θ(n), where n = right – left + 1
Quicksort: Recurrence Relation
Alg.: QUICKSORT(A, p, r)        /* initially: p = 1, r = n */
if p < r
  then q ← PARTITION(A, p, r)
       QUICKSORT(A, p, q)
       QUICKSORT(A, q + 1, r)

Recurrence: T(n) = T(q) + T(n – q) + n
Worst-Case Partitioning
• Worst-case partitioning
• One region has one element and the other has n – 1 elements
• Maximally unbalanced
• Recurrence (q = 1):
T(n) = T(1) + T(n – 1) + n,  T(1) = Θ(1)
T(n) = T(n – 1) + n = Σ_{k=1}^{n} Θ(k) = Θ(n²)
(The recursion tree is a path; its levels cost n, n – 1, n – 2, …, 2, 1, which sums to Θ(n²).)
• When does the worst case happen? With a fixed pivot such as A[p], e.g., on an already sorted (or reverse-sorted) input: every split is then 1 vs. n – 1
Best-Case Partitioning
• Best-case partitioning
• Partitioning produces two regions of size n/2
• Recurrence (q = n/2):
T(n) = 2T(n/2) + Θ(n)
T(n) = Θ(n lg n) (Master Theorem)
Case Between Worst and Best
• 9-to-1 proportional split:
T(n) = T(9n/10) + T(n/10) + n
• Even this lopsided split still gives T(n) = Θ(n lg n); only the constant in the recursion depth grows
How Does Partition Affect Quicksort Performance?
Worst-Case Analysis of Quicksort: Formal Proof
• T(n) = worst-case running time
T(n) = max_{1 ≤ q ≤ n–1} (T(q) + T(n – q)) + Θ(n)
• Use the substitution method to show that the running time of Quicksort is O(n²)
• Guess: T(n) = O(n²)
• Induction goal: T(n) ≤ cn²
• Induction hypothesis: T(k) ≤ ck² for any k < n
Worst-Case Analysis of Quicksort: Formal Proof
• Proof of the induction goal:
T(n) ≤ max_{1 ≤ q ≤ n–1} (cq² + c(n – q)²) + Θ(n)
     = c · max_{1 ≤ q ≤ n–1} (q² + (n – q)²) + Θ(n)
• The expression q² + (n – q)² achieves its maximum over the range 1 ≤ q ≤ n – 1 at one of the endpoints:
max_{1 ≤ q ≤ n–1} (q² + (n – q)²) = 1² + (n – 1)² = n² – 2(n – 1)
• Therefore:
T(n) ≤ cn² – 2c(n – 1) + Θ(n) ≤ cn²
(choosing c large enough that the 2c(n – 1) term dominates the Θ(n) term)
Quicksort: Choice of Pivot
1. Choose the first (or last) element of the array as the pivot
• But this can be very bad
• Why? On an already sorted (or reverse-sorted) array, every split is maximally unbalanced, giving O(n²)
• Remedy?? Options 2 and 3 below

2. Choose the pivot randomly
• Needs a random number generator
• Will be discussed in detail shortly

3. The Median3 method to choose the pivot
• Median3 takes the median of the leftmost, middle, and rightmost elements
• Often a good choice for pivot
• Illustrated later with an example
Randomized Quicksort
• To sort a given set of numbers
• In the traditional quicksort algorithm, we pick a particular index element as the pivot for splitting
• Worst case: O(n²)
• Average case: O(n log n)
• A good pivot can be selected with a median-finding algorithm, but the overhead of computing medians at every step makes this unattractive in practice
• So, what if we pick a random element as the pivot and do the partition?!
• We shall show that this takes expected O(n log n) time
Randomized Quicksort
• Let us assume that all elements are distinct
• We pick a random element x as the pivot and partition the input set S into two sets L and R such that:
• L = numbers less than x
• R = numbers greater than x
• Recursively sort L and R
• Return L, then x, then R (concatenated: LxR)
Randomized Partition
Alg.: RANDOMIZED-PARTITION(A, p, r)

i ← RANDOM(p, r)

exchange A[p] ↔ A[i]

return PARTITION(A, p, r)
Randomized Quicksort
Alg. : RANDOMIZED-QUICKSORT(A, p, r)

if p < r

then q ← RANDOMIZED-PARTITION(A, p, r)

RANDOMIZED-QUICKSORT(A, p, q)

RANDOMIZED-QUICKSORT(A, q + 1, r)

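A sketch in Python (building on the partition() sketch above; random.randint is our stand-in for RANDOM(p, r)):

import random

def randomized_partition(a, p, r):
    i = random.randint(p, r)          # RANDOM(p, r): uniform index in [p, r]
    a[p], a[i] = a[i], a[p]           # exchange A[p] <-> A[i]
    return partition(a, p, r)         # partition() as sketched earlier, pivot = a[p]

def randomized_quicksort(a, p, r):
    if p < r:
        q = randomized_partition(a, p, r)
        randomized_quicksort(a, p, q - 1)   # q excluded: that partition places the pivot
        randomized_quicksort(a, q + 1, r)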
Example: Median3 Method for Choosing Pivot
Choose the pivot as the median of three:

Index: 0 1 2 3 4 5 6 7 8 9
Array: 8 1 4 9 0 3 5 2 7 6

The median of a[0] = 8, a[4] = 0, a[9] = 6 is 6, so the pivot is 6.
Place the smallest of the three at the left and the largest at the right, then swap the pivot with the next-to-last element:

0 1 4 9 7 3 5 2 6 8
Example: Median3 Method for Choosing Pivot (cont.)
Partition with i starting at the left end and j at the next-to-last position (the pivot 6 sits at index 8):

0 1 4 9 7 3 5 2 6 8
• Move i to the right until A[i] is larger than the pivot: i stops at 9
• Move j to the left until A[j] is smaller than the pivot: j stops at 2
• Swap: 0 1 4 2 7 3 5 9 6 8
Example: Median3 Method for Choosing Pivot (cont.)
0 1 4 2 7 3 5 9 6 8
• Move i right to the next element larger than the pivot (7); move j left to the next element smaller than the pivot (5)
• Swap: 0 1 4 2 5 3 7 9 6 8
• Move i right (stops at 7); move j left (stops at 3): cross-over, i > j
• Swap the pivot with A[i]: 0 1 4 2 5 3 6 9 7 8
• Result: S1 < pivot | pivot | S2 > pivot
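A small Python sketch of the Median3 selection step itself (illustrative: it orders a[left], a[mid], a[right] and parks the pivot at position right - 1, reproducing the arrangement used in the example above):

def median3(a, left, right):
    """Order a[left], a[mid], a[right]; stash the median (the pivot) at right - 1."""
    mid = (left + right) // 2
    if a[mid] < a[left]:
        a[left], a[mid] = a[mid], a[left]
    if a[right] < a[left]:
        a[left], a[right] = a[right], a[left]
    if a[right] < a[mid]:
        a[mid], a[right] = a[right], a[mid]
    # now a[left] <= a[mid] <= a[right]; move the pivot next to the right end
    a[mid], a[right - 1] = a[right - 1], a[mid]
    return a[right - 1]

# Example: a = [8, 1, 4, 9, 0, 3, 5, 2, 7, 6]; median3(a, 0, 9) returns 6
# and leaves a = [0, 1, 4, 9, 7, 3, 5, 2, 6, 8], as in the example above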
Folklore and Truth about Quicksort
• Quicksort uses very few comparisons on average
• Quicksort does have good performance in the memory hierarchy:
• In-place sort: recursive calls of Quicksort can be done in place, requiring only small additional amounts of memory to perform the sorting
• Small footprint: the in-place version of Quicksort has a space complexity of O(log n), even in the worst case
• Good locality of reference: often, after a few iterations, Quicksort works with blocks that fit completely into the cache, and this substantially increases performance
• Efficient implementations of Quicksort are not stable, meaning that the relative order of equal sort items is not preserved
• There is no iterative version (without using a stack)
• Pure Quicksort is not good for small arrays
Mergesort vs Quicksort
• Both run in O(n lg n)
• Mergesort: always
• Quicksort: on average
• Compared with Quicksort, Mergesort performs fewer comparisons but moves elements more often
• In Java, an element comparison is expensive but moving elements is cheap. Therefore, Java's library sort for objects is merge-based
• In C++, copying objects can be expensive while comparing objects is often relatively cheap. Therefore, the C++ library sort (std::sort) is quicksort-based
Integer Multiplication
• The problem: multiply two large integers (n digits each)

• Traditional method: use two nested “for” loops

• Time complexity: Θ(n²) digit operations
Integer Multiplication: Divide-and-Conquer Approach
Suppose x and y are two large n-digit integers; x is divided into two parts a and b, and y is divided into two parts c and d:

x: a b  ⇒  x = a · 10^(n/2) + b
y: c d  ⇒  y = c · 10^(n/2) + d

x · y = (a · 10^(n/2) + b)(c · 10^(n/2) + d)
      = ac · 10^n + bd + 10^(n/2) · (ad + bc)

So we transform the problem of multiplying two n-digit integers into four subproblems of multiplying two (n/2)-digit integers.

The worst-case time complexity is:

T(n) = 4T(n/2) + bn = O(n^(log₂ 4)) = O(n²)

However, this is the same as the traditional method. Therefore, we improve the equation as follows, using ad + bc = (a + b)(c + d) – ac – bd:

x · y = ac · 10^n + bd + 10^(n/2) · (ad + bc)
      = ac · 10^n + bd + 10^(n/2) · ((a + b)(c + d) – ac – bd)

Only three (n/2)-digit multiplications remain, so the worst case is:

T(n) = 3T(n/2) + bn = O(n^(log₂ 3)) ≈ O(n^1.58)
Integer Multiplication: Divide-and-Conquer Version
Algorithm: MULTIPLICATION(x, y)
BEGIN
  n = MAX(# of digits in x, # of digits in y);
  IF x = 0 OR y = 0 DO return 0;
  ELSE IF n = 1 DO return x * y in the usual way;
  ELSE
    m = ⌊n/2⌋;
    a = x divide 10^m;  b = x rem 10^m;
    c = y divide 10^m;  d = y rem 10^m;
    p1 = MULTIPLICATION(a, c);
    p2 = MULTIPLICATION(b, d);
    p3 = MULTIPLICATION(a + b, c + d);
    return p1 · 10^(2m) + p2 + 10^m · (p3 – p1 – p2);
END;
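A direct Python transcription of this algorithm (a sketch for illustration only, since Python integers are already arbitrary-precision):

def multiplication(x, y):
    """Divide-and-conquer product of non-negative integers, per the pseudocode."""
    n = max(len(str(x)), len(str(y)))
    if x == 0 or y == 0:
        return 0
    if n == 1:
        return x * y                      # single-digit base case
    m = n // 2
    a, b = divmod(x, 10 ** m)             # x = a * 10^m + b
    c, d = divmod(y, 10 ** m)             # y = c * 10^m + d
    p1 = multiplication(a, c)
    p2 = multiplication(b, d)
    p3 = multiplication(a + b, c + d)
    return p1 * 10 ** (2 * m) + p2 + 10 ** m * (p3 - p1 - p2)

# Example: multiplication(143, 256) returns 36608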
Integer Multiplication: Divide-and-Conquer Version (Example)
p1 = MULTIPLICATION(a, c);  p2 = MULTIPLICATION(b, d);  p3 = MULTIPLICATION(a+b, c+d)

Top call P: x = 143, y = 256, n = 3, m = 1
  a = 14, b = 3, c = 25, d = 6
  p1 = P′ (recursive call below), p2 = 3 · 6 = 18, p3 = P″ (recursive call below)
  P = 350 · 10² + 18 + 10 · (527 – 350 – 18) = 36608

Recursive call P′: x = 14, y = 25, n = 2, m = 1
  a = 1, b = 4, c = 2, d = 5
  p1 = 2, p2 = 20, p3 = 35
  P′ = 2 · 10² + 20 + 10 · (35 – 2 – 20) = 350

Recursive call P″: x = 17, y = 31, n = 2, m = 1
  a = 1, b = 7, c = 3, d = 1
  p1 = 3, p2 = 7, p3 = 32
  P″ = 3 · 10² + 7 + 10 · (32 – 3 – 7) = 527
Max-Min Problem: Divide and Conquer Approach
• Broad idea:
• The array is divided into two halves
• The maximum and the minimum of each half are found recursively
• Return the maximum of the two maxima and the minimum of the two minima
Max-Min Problem: Divide and Conquer Algorithm
• Max−Min(x, y) will return the maximum and minimum values of an array numbers[x...y], as sketched below
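A Python sketch of Max−Min (our own rendering of the recursive idea; it returns a (max, min) pair):

def max_min(numbers, x, y):
    """Return (maximum, minimum) of numbers[x..y] (inclusive) by divide and conquer."""
    if x == y:                                    # one element
        return numbers[x], numbers[x]
    if y == x + 1:                                # two elements: one comparison
        if numbers[x] > numbers[y]:
            return numbers[x], numbers[y]
        return numbers[y], numbers[x]
    mid = (x + y) // 2                            # divide
    max1, min1 = max_min(numbers, x, mid)         # conquer left half
    max2, min2 = max_min(numbers, mid + 1, y)     # conquer right half
    return max(max1, max2), min(min1, min2)       # combine

# Example: max_min([8, 1, 4, 9, 0, 3], 0, 5) returns (9, 0)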
A Simple Example
• Finding the maximum of a set S of n numbers
Time Complexity
n Time complexity:
T(n): # of comparisons
 2T(n/2)+1 , n>2
T(n)= 
1 , n2
n Calculation of T(n):
Assume n = 2k,
T(n) = 2T(n/2)+1
= 2(2T(n/4)+1)+1
= 4T(n/4)+2+1
:
=2k-1T(2)+2k-2+…+4+2+1
=2k-1+2k-2+…+4+2+1
=2k-1 = n-1 4 -78
