
CSE 317: Design and Analysis of Algorithms

Shahid Hussain
Week 4: September 9, 11: Fall 2024

Divide and Conquer Algorithms
Master Theorem

Theorem
Let L(n) be a function depending on natural n. Let c be a natural number, c ≥ 2, and let a, b, γ be real constants such that a ≥ 1, b > 0, γ ≥ 0, and for any n = c^k, where k is an arbitrary natural number, the following inequality holds:

  L(n) ≤ a·L(n/c) + b·n^γ.

Suppose for any natural k and for any n ∈ {c^k + 1, c^k + 2, . . . , c^{k+1}} the inequality L(n) ≤ L(c^{k+1}) holds. Then:

  L(n) = O(n^γ)           if γ > log_c a,
  L(n) = O(n^{log_c a})   if γ < log_c a,
  L(n) = O(n^γ log n)     if γ = log_c a.
Proof of Master Theorem

• Let n = c^k. Unfolding the recurrence repeatedly, we obtain:

  L(n) ≤ a·L(n/c) + b·n^γ
       ≤ a( a·L(n/c^2) + b·(n/c)^γ ) + b·n^γ
       = b·n^γ + (a/c^γ)·b·n^γ + a^2·L(n/c^2)
       ≤ b·n^γ + (a/c^γ)·b·n^γ + a^2( a·L(n/c^3) + b·(n/c^2)^γ )
       = b·n^γ + (a/c^γ)·b·n^γ + (a/c^γ)^2·b·n^γ + a^3·L(n/c^3) ≤ · · ·
       ≤ b·n^γ( 1 + a/c^γ + · · · + (a/c^γ)^{k−1} ) + a^k·L(n/c^k)

Proof of Master Theorem (cont.)

• Let d = max{b, L(1)}. Since n/c^k = 1, and a^k = n^γ·(a/c^γ)^k (because n^γ = c^{kγ}), we have:

  L(n) ≤ d·n^γ( 1 + a/c^γ + (a/c^γ)^2 + · · · + (a/c^γ)^{k−1} ) + d·a^k
       = d·n^γ( 1 + a/c^γ + (a/c^γ)^2 + · · · + (a/c^γ)^k ).

• We can now use the following fact about geometric series.

  Let α be a real number with 0 ≤ α < 1. Then, for any n,

  Σ_{j=0}^{n} α^j = 1 + α + α^2 + · · · + α^n = (1 − α^{n+1})/(1 − α) < 1/(1 − α).
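The closed form and the constant upper bound can be checked numerically (a small Python sketch; the function name is ours):

```python
def geometric_sum(alpha, n):
    """Closed form of 1 + alpha + ... + alpha^n for alpha != 1."""
    return (1 - alpha ** (n + 1)) / (1 - alpha)

# For 0 <= alpha < 1 every partial sum stays below 1/(1 - alpha).
alpha = 0.5
for n in range(10):
    direct = sum(alpha ** j for j in range(n + 1))
    assert abs(geometric_sum(alpha, n) - direct) < 1e-12
    assert geometric_sum(alpha, n) < 1 / (1 - alpha)
```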

Proof of Master Theorem (cont.)

• Let us consider three cases:

  (1) γ > log_c a   (2) γ < log_c a   (3) γ = log_c a

• If γ > log_c a, then a/c^γ < 1. By the geometric-series fact, the sum is bounded by a constant, so L(n) ≤ d·n^γ · const_1 = p_1·n^γ for some positive constant p_1.
• If γ < log_c a, then a/c^γ > 1, and factoring out the largest term gives

  L(n) ≤ d·n^γ·(a/c^γ)^k·( 1 + c^γ/a + (c^γ/a)^2 + · · · + (c^γ/a)^k ).

  Since n = c^k, we have n^γ·(a/c^γ)^k = a^k, so L(n) ≤ d·a^k · const_2 = p_2·a^k. Therefore,

  L(n) ≤ p_2·a^k = p_2·a^{log_c n} = p_2·n^{log_c a}.

• If γ = log_c a, then a/c^γ = 1 and each of the k + 1 terms of the sum equals 1, so

  L(n) ≤ d·n^γ·(k + 1) = d·n^γ·(1 + log_c n) ≤ 2d·n^γ·log_c n   for n ≥ c.
Proof of Master Theorem (cont.)

• For an arbitrary n ∈ N with n > c, there exists k ∈ N such that c^k < n ≤ c^{k+1}.
• By assumption, L(n) ≤ L(c^{k+1}) holds, so we again consider the three cases.
• If γ > log_c a, then

  L(n) ≤ L(c^{k+1}) ≤ p_1·(c^{k+1})^γ = p_1·c^γ·(c^k)^γ ≤ p_1·c^γ·n^γ.

  Thus L(n) = O(n^γ).
• If γ < log_c a, then

  L(n) ≤ L(c^{k+1}) ≤ p_2·(c^{k+1})^{log_c a} = p_2·c^{log_c a}·(c^k)^{log_c a} ≤ p_2·a·n^{log_c a}.

  Thus, L(n) = O(n^{log_c a}).
• If γ = log_c a, then

  L(n) ≤ L(c^{k+1}) ≤ p_3·c^{(k+1)γ}·log_c c^{k+1}
       = p_3·c^γ·(c^k)^γ·(k + 1) ≤ p_3·c^γ·n^γ·(1 + log_c n) ≤ 2p_3·c^γ·n^γ·log_c n.

  Thus, L(n) = O(n^γ log n).
Divide and Conquer Recurrences

If in the above theorem the inequality L(n) ≤ a·L(n/c) + b·n^γ is replaced with L(n) ≤ a·L(n/c) + O(n^γ), then the statement of the theorem is still true.

• A(n) ≤ 2A(n/2) + n for any n = 2^k, k = 1, 2, 3, . . .. So a = c = 2, b = 1 and γ = 1. We have γ = log_c a. We assume A(n) is a nondecreasing function, so A(n) = O(n log n)
• B(n) ≤ 3B(n/2) + 1 for any n = 2^k, k = 1, 2, 3, . . .. So a = 3, c = 2, and γ = 0. This means γ < log_2 3. We assume B(n) is a nondecreasing function, so B(n) = O(n^{log_2 3}) = O(n^{1.585})
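As a sanity check, we can unfold the recurrence B(n) = 3B(n/2) + 1 for powers of two and compare it with n^{log_2 3} (a Python sketch; the function name is ours):

```python
import math

def B(n):
    """Worst-case unfolding of B(n) <= 3*B(n/2) + 1 for n a power of two."""
    if n <= 1:
        return 1
    return 3 * B(n // 2) + 1

# Closed form: B(2^k) = (3^(k+1) - 1) / 2, i.e. Theta(3^k) = Theta(n^{log2 3}).
for k in range(1, 12):
    n = 2 ** k
    ratio = B(n) / n ** math.log2(3)
    assert 1.0 < ratio < 1.5  # ratio stays bounded, consistent with O(n^{log2 3})
```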

Merge Sort

• Let us consider mergesort


• mergesort is a recursive algorithm
• Let ⟨a1 , . . . , an ⟩ be the input sequence to be sorted
• Merge sort divides the sequence into two (almost) equal parts ⟨a1 , . . . , a⌊n/2⌋ ⟩ and ⟨a⌊n/2⌋+1 , . . . , an ⟩
• It then uses mergesort recursively to sort these two subproblems
• Let α and β be the two sorted sequences we receive after the recursive calls
• We combine (merge) these lists to form a new list
• We compare the first element of α with the first element of β, transfer the smaller of the two to the new sequence, and advance in the sequence from which the element was taken. If at any point one of the sequences α or β becomes empty, we concatenate the remaining sequence to the new sequence
Merge Sort

Algorithm: mergesort
Input: A = ⟨a1 , . . . , an ⟩: a sequence of n numbers
Output: A sorted permutation of A

1. if n > 1 then
2. α = mergesort(⟨a1 , a2 , . . . , a⌊n/2⌋ ⟩)
3. β = mergesort(⟨a⌊n/2⌋+1 , a⌊n/2⌋+2 , . . . , an ⟩)
4. return merge(α, β)
5. else return A

Merge Sort. Merging Two Sorted Lists

Algorithm: merge
Input: Two sorted lists A = ⟨a1 , . . . , ak ⟩ and B = ⟨b1 , . . . , bl ⟩
Output: Merged sorted list of A and B

1. if k = 0 then return ⟨b1 , . . . , bl ⟩
2. if l = 0 then return ⟨a1 , . . . , ak ⟩
3. if a1 ≤ b1 then
4.     return ⟨a1 ⟩ ◦ merge(⟨a2 , . . . , ak ⟩, ⟨b1 , . . . , bl ⟩)
5. else
6.     return ⟨b1 ⟩ ◦ merge(⟨a1 , . . . , ak ⟩, ⟨b2 , . . . , bl ⟩)

• Here ◦ denotes concatenation
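The pseudocode above translates directly into Python (an illustrative sketch; an iterative merge is used instead of the recursive one to avoid deep recursion on long lists):

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list in O(k + l) time."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:              # take the smaller front element
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])                  # one of these is empty;
    out.extend(b[j:])                  # concatenate the remainder
    return out

def mergesort(a):
    """Sort by splitting into halves, sorting each, then merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```

For example, `mergesort([7, 0, 3, 2, 1, 5])` returns `[0, 1, 2, 3, 5, 7]`.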

Analysis of Merge Sort

• The running time of merging is O(k + l), i.e., it is linear in the sizes of both lists. Therefore, the overall running time T(n) of mergesort is (using the Master Theorem):

  T(n) = 2T(n/2) + O(n) = O(n log n)

Example: Sorting the sequence ⟨7, 0, 3, 2, 1, 5⟩

split: ⟨7, 0, 3, 2, 1, 5⟩
split: ⟨7, 0, 3⟩ ⟨2, 1, 5⟩
split: ⟨7⟩ ⟨0, 3⟩ ⟨2⟩ ⟨1, 5⟩
split: ⟨7⟩ ⟨0⟩ ⟨3⟩ ⟨2⟩ ⟨1⟩ ⟨5⟩
merge: ⟨7⟩ ⟨0, 3⟩ ⟨2⟩ ⟨1, 5⟩
merge: ⟨0, 3, 7⟩ ⟨1, 2, 5⟩
merge: ⟨0, 1, 2, 3, 5, 7⟩
Finding Maximum

• We can apply divide and conquer design technique to solve


a variety of problems including some trivial ones
• Suppose we need to find the maximum element from a
sequence (array) ⟨a1 , a2 , . . . , an ⟩ of n unordered elements.
Clearly the lower bound is Ω(n) as we need to check each
and every element of the array
• Following is a divide and conquer algorithm that finds the maximum element of the sequence (array) ⟨a1 , a2 , . . . , an ⟩ of n unordered elements

Finding Maximum

Algorithm: dc-max
Input: A sequence ⟨a1 , a2 , . . . , an ⟩ of n unordered elements
Output: ak such that ∀i, ai ≤ ak , or −∞ if n = 0

1. if n = 1 then return a1
2. else if n < 1 then return −∞
3. else
4. m1 = dc-max(⟨a1 , . . . , a⌊n/2⌋ ⟩)
5. m2 = dc-max(⟨a⌊n/2⌋+1 , . . . , an ⟩)
6. return max{m1 , m2 }
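A direct Python rendering of dc-max (an illustrative sketch; `-math.inf` plays the role of −∞ for the empty sequence):

```python
import math

def dc_max(a):
    """Divide-and-conquer maximum; returns -inf for an empty sequence."""
    n = len(a)
    if n == 0:
        return -math.inf
    if n == 1:
        return a[0]
    mid = n // 2
    m1 = dc_max(a[:mid])    # maximum of the left half
    m2 = dc_max(a[mid:])    # maximum of the right half
    return max(m1, m2)
```

The recurrence is T(n) = 2T(n/2) + O(1), so the algorithm still performs n − 1 comparisons, matching the Ω(n) lower bound.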

Matrix Multiplication

• Let A and B be two matrices of size 2 × 2 each


  A = [ a11  a12 ]    B = [ b11  b12 ]
      [ a21  a22 ]        [ b21  b22 ]

• Let C = A × B, then

  [ c11  c12 ]   [ a11·b11 + a12·b21    a11·b12 + a12·b22 ]
  [ c21  c22 ] = [ a21·b11 + a22·b21    a21·b12 + a22·b22 ]

• To multiply two matrices of size 2 × 2 we need to perform 8


multiplications and 4 additions
• Strassen proposed an algorithm to multiply two matrices of
size 2 × 2 using only 7 multiplications

15
Matrix Multiplication

• Let us define following:


m1 = (a11 + a22 ) · (b11 + b22 )
m2 = (a21 + a22 ) · b11
m3 = a11 · (b12 − b22 )
m4 = a22 · (b21 − b11 )
m5 = (a11 + a12 ) · b22
m6 = (a21 − a11 ) · (b11 + b12 )
m7 = (a12 − a22 ) · (b21 + b22 )
• Now we can calculate C = A × B as follows:

  [ c11  c12 ]   [ m1 + m4 − m5 + m7    m3 + m5           ]
  [ c21  c22 ] = [ m2 + m4              m1 − m2 + m3 + m6 ]
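The seven products can be verified directly in Python (an illustrative sketch; matrices are nested lists and the function name is ours):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

For example, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` agrees with the standard 8-multiplication formula, giving `[[19, 22], [43, 50]]`.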

Matrix Multiplication: Strassen’s Algorithm

• Let A and B be two n × n matrices each (for n = 2k )


• The product C = A × B can be calculated as follows:
• Divide A and B into four n/2 × n/2 submatrices each, as follows:

  A = [ A11  A12 ]    B = [ B11  B12 ]
      [ A21  A22 ]        [ B21  B22 ]

• Here Aij and Bij are n/2 × n/2 matrices


• Calculate the 7 matrix products M1 , M2 , . . . , M7 as in the 2 × 2 case (with the blocks Aij , Bij in place of the scalars) and then calculate the Cij
• The running time T(n) of Strassen’s algorithm is:

  T(n) = 7T(n/2) + O(n^2) = O(n^{log_2 7}) ≈ O(n^{2.8074})

Integer Multiplication

• Let us consider the problem of multiplying two integers x and y of n bits each
• Computing the product z = xy directly requires O(n^2) bit-multiplications
• Karatsuba proposed an algorithm to multiply two integers of n bits each using only O(n^{log_2 3}) bit-multiplications
• Let x = ⟨x0 , . . . , xn−1 ⟩2 and y = ⟨y0 , . . . , yn−1 ⟩2
• We can split x = xL xR and y = yL yR , where xL = ⟨x0 , . . . , xn/2−1 ⟩2 , xR = ⟨xn/2 , . . . , xn−1 ⟩2 , yL = ⟨y0 , . . . , yn/2−1 ⟩2 , and yR = ⟨yn/2 , . . . , yn−1 ⟩2
• We can write x = xL · 2^{n/2} + xR and y = yL · 2^{n/2} + yR
• The product z = xy can be calculated as follows:

  z = x · y = (xL · 2^{n/2} + xR) · (yL · 2^{n/2} + yR)
            = xL yL · 2^n + (xL yR + xR yL) · 2^{n/2} + xR yR
Integer Multiplication

• The product z = xy = xL yL · 2^n + (xL yR + xR yL) · 2^{n/2} + xR yR requires 4 multiplications of n/2-bit numbers
• We can reduce the number of multiplications to 3, since the middle term can be obtained from one extra product:

  (xL + xR) · (yL + yR) − xL yL − xR yR
      = xL yL + xL yR + xR yL + xR yR − xL yL − xR yR
      = xL yR + xR yL

• Now:

  z = xL yL · 2^n + ((xL + xR) · (yL + yR) − xL yL − xR yR) · 2^{n/2} + xR yR

• The running time T(n) of Karatsuba’s algorithm is:

  T(n) = 3T(n/2) + O(n) = O(n^{log_2 3}) ≈ O(n^{1.585})
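The three-multiplication scheme can be sketched in Python using bit operations (an illustrative sketch; here the split takes the low `half` bits as xR, mirroring x = xL · 2^{n/2} + xR, and the base-case threshold of 16 is an arbitrary choice of ours):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using 3 recursive multiplications."""
    if x < 16 or y < 16:                        # small base case: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    xL, xR = x >> half, x & ((1 << half) - 1)   # x = xL * 2^half + xR
    yL, yR = y >> half, y & ((1 << half) - 1)
    p1 = karatsuba(xL, yL)
    p2 = karatsuba(xR, yR)
    p3 = karatsuba(xL + xR, yL + yR)            # (xL + xR)(yL + yR)
    mid = p3 - p1 - p2                          # = xL*yR + xR*yL
    return (p1 << (2 * half)) + (mid << half) + p2
```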
