
1 Introduction

The document provides an introduction to algorithms, defining them as finite sequences of unambiguous instructions for problem-solving. It discusses the goals of studying algorithms, types of algorithms, and the importance of analyzing their efficiency in terms of time and space complexity. Additionally, it covers various algorithm analysis methods and asymptotic notations such as Big-O, Big-Omega, and Big-Theta.

Uploaded by

Surya Kamal
Copyright
© All Rights Reserved

Design and Analysis of Algorithms - Introduction

1
Introduction to Algorithm

2
Goals of Algorithm Study
• To develop a framework for instructing computers to perform tasks
• To introduce the notion of an algorithm as a means of specifying how to solve a problem
• To introduce and appreciate approaches for defining and solving very complex tasks in terms of simpler tasks

3
Algorithm - Definition
• An algorithm for solving a problem:
“a finite sequence of unambiguous, executable steps or instructions, which, if followed, would ultimately terminate and give the solution of the problem”.

• Note the keywords:


• Finite set of steps;
• Unambiguous;
• Executable;
• Terminates;

4
Algorithm/Pseudocode – find the max in an array
Algorithm
1. Set the temporary maximum equal to the first integer in the sequence.
2. Compare the next integer in the sequence to the temporary maximum, and set
the larger one to be temporary maximum.
3. Repeat the previous step if there are more integers in the sequence.
4. Stop when there are no integers left in the sequence. The temporary maximum
at this point is the maximum in the sequence.

Algorithm 1. Finding the Maximum Element in a Finite Sequence

// Input: n integers a1, a2, ..., an
// Output: max (the maximum of a1, a2, ..., an)
max := a1
for i := 2 to n
    if max < ai then max := ai
return max

(Instead of using a particular computer language, we use a form of pseudocode.)
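The pseudocode above can be sketched as a runnable C function (the array contents and function name are illustrative, not from the slides):

```c
#include <assert.h>
#include <stddef.h>

/* Returns the maximum of a non-empty array, following the four steps above. */
int array_max(const int *a, size_t n) {
    int max = a[0];              /* step 1: temporary maximum = first element */
    for (size_t i = 1; i < n; i++) {
        if (max < a[i])          /* steps 2-3: keep the larger value */
            max = a[i];
    }
    return max;                  /* step 4: no elements left */
}
```

Note that the function assumes n ≥ 1, just as the pseudocode assumes a non-empty sequence.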
Algorithm - Example
• Problem: What is the largest integer?
INPUT: All the integers { … -2, -1, 0, 1, 2, … }
OUTPUT: The largest integer
Algorithm:
• Arrange all the integers in a list in decreasing order;
• MAX = first number in the list;
• Print out MAX;
• WHY is the above NOT an Algorithm?
• (Hint: How many integers are there?)

6
Example - ArrayMax

7
Characteristics of an Algorithm

8
Fundamentals of Algorithmic Problem Solving
Understand the problem

Design an algorithm and proper data structures

Analyze the algorithm

Code the algorithm

9
Important Problem Types
 Sorting
 Searching
 String processing (e.g. string matching)
 Graph problems (e.g. graph coloring problem)
 Combinatorial problems (e.g. maximizing a cost function)
 Geometric problems (e.g. convex hull problem)
 Numerical problems (e.g. solving equations )

10
Some Algorithm Types
• Algorithm types include:
• Simple recursive algorithms
• Brute force algorithms
• Divide and conquer algorithms
• Greedy algorithms
• Dynamic programming algorithms
• Backtracking algorithms
• Branch and bound algorithms
• Randomized algorithms

11
Types of Algorithm Analysis
• Time complexity: the time required to solve a problem of a
specified size
• The time complexity is expressed in terms of the number of operations used
by the algorithm.
• Space complexity: the computer memory required to solve a
problem of a specified size.
• Analysis Types
• Worst case analysis: the largest number of operations needed to solve the
given problem using this algorithm.
• Average case analysis: the average number of operations used to solve the
problem over all inputs.
• Best case analysis: the minimum number of operations used to solve the
problem over all inputs.
12
Fundamentals of the Analysis of Algorithms
• Implementation and Empirical Analysis
• Challenges in empirical analysis:
 Develop a correct and complete implementation.
 Determine the nature of the input data and other factors influencing the experiment. Typically, there are three choices: actual data, random data, or perverse data.
 Compare implementations independently of programmers, machines, compilers, or other related systems.
 Consider performance characteristics of algorithms, especially for those whose running time is big.
• Can we analyze algorithms that haven’t run yet?
13
Empirical Algorithm Analysis
• Results of an
experimental study on
the running time of an
algorithm.
• A dot with coordinates (n,
t) indicates that on an
input of size n, the
running time of the
algorithm is t
milliseconds (ms).

14
Random Access Machine Model
(RAM)
• Experimental analysis is valuable, but it has its limitations
• To analyze a particular algorithm without performing experiments on its running
time, we can use RAM model
• A set of high-level primitive operations that are largely independent from the
programming language used can be identified and analyzed
• Primitive operations include the following:
• Assigning a value to a variable
• Calling a method
• Performing an arithmetic operation (for example, adding two numbers)
• Comparing two numbers
• Indexing into an array
• Following an object reference
• Returning from a method.

15
Example - ArrayMax

Charging unit cost to each primitive operation:
1 (init) + 1 (array access)
+ [2 (loop condition) + 1 (if-check) + 1 (array access) + 1 (array access) + 1 (assign) + 2 (increment + assign)] × (n−1)
+ 1 (final false condition) + 1 (return)

T(n) = 8(n−1) + 5 = 8n − 3  ⟹ worst case
T(n) = 6n                   ⟹ best case

These are the running-time equations for the worst and best cases.
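The counting above can be instrumented directly. The charging scheme in this sketch (one op per assignment/comparison, four per loop iteration overhead) is a modelling choice, so its constants differ from the slide's 8n − 3 and 6n, but the shape is the same: linear in n, with the worst case (ascending input, a new maximum every step) doing one extra assignment per iteration:

```c
#include <assert.h>
#include <stddef.h>

static long ops;  /* primitive-operation counter, reset on each call */

int array_max_counted(const int *a, size_t n) {
    ops = 1;                      /* initial assignment max := a[0] */
    int max = a[0];
    for (size_t i = 1; i < n; i++) {
        ops += 4;                 /* loop test, increment, compare, index */
        if (max < a[i]) {
            max = a[i];
            ops += 1;             /* extra assignment only on a new maximum */
        }
    }
    ops += 1;                     /* return */
    return max;
}
```

Under this scheme the worst case is 5n − 3 operations and the best case 4n − 2; any reasonable charging gives the same Θ(n) growth.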
Best case and Worst case

17
Analyzing Recursive Algorithms

Charging unit costs per line: 1; 1+1; 1+1+1+1+1+1

T(n) = 3            , n = 1
     = T(n−1) + 7   , elsewhere

This is called a recurrence relation.
Try Yourself!!
• Write a Pseudocode to find the sum of first N natural
numbers. Using the RAM model, find the recurrence
equation
• Write a Pseudocode/Algorithm to find the factorial of a
given number.
• For the above examples, try the recursive procedure too..

• Try to find the above examples for varying ‘n’s values and
plot it using python plot function

19
Example – factorial of a number
Iterative version:

Fact(n)
Begin
    if n == 0 or n == 1 then
        return 1
    else
        f = 1
        for i in 2 to n do
            f = f * i
        end for
        return f
    end if
End

Iterative version:  Best Case = 4,  Worst Case = 5n − 2
Recursive version:  T(n) = 4            , if n = 0 or n = 1
                         = T(n−1) + 7   , otherwise
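Both versions can be written as runnable C (the iterative pseudocode above omits the final return of f, which is included here):

```c
#include <assert.h>

/* Iterative factorial: for n == 0 or 1 the loop body never runs. */
long fact_iter(int n) {
    long f = 1;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

/* Recursive factorial, matching the recurrence T(n) = T(n-1) + c. */
long fact_rec(int n) {
    if (n <= 1) return 1;
    return n * fact_rec(n - 1);
}
```

Both overflow a long for large n; they are meant only to illustrate the iterative/recursive cost structure.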
Few examples
T(n) = 6n + 4   (Best & Worst Case)

T(n) = 2           , n ≤ 0
     = T(n−1) + 6  , otherwise

21
Few examples

T(n) = 5mn + 7n + 4

22
Generic Notations - Examples

T(n) = a            ; n < 1
     = 2T(n−1) + b  ; n > 1
23
Generic Notations - Examples

T(n) = a                   ; n < 1
     = T(n−1) + T(n−2) + b ; else
24
Generic Notations - Examples

T(n) = a                ; n < 0
     = 2nT(m) + bn + c  ; else

25
Algorithm Efficiency
• Linear Loops

for (i = 0; i < n; i++)
    application code

• T(n) = n

for (i = 0; i < n; i += 2)
    application code

• T(n) = n / 2
• If you were to plot either of these loop examples, you would get a straight line. For that reason they are known as linear loops.
26
Algorithm Efficiency
• Logarithmic Loops

Multiply Loops
for (i = 1; i < n; i *= 2)
    application code

Divide Loops
for (i = n; i > 0; i /= 2)
    application code

T(n) = log n
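The logarithmic behaviour is easy to verify by counting iterations; for example, for n = 1024 the multiply loop runs 10 = log₂ 1024 times:

```c
#include <assert.h>

/* Iterations of the multiply loop: i = 1, 2, 4, ..., up to (not including) n. */
int multiply_loop_iters(int n) {
    int count = 0;
    for (int i = 1; i < n; i *= 2)
        count++;
    return count;
}

/* Iterations of the divide loop: i = n, n/2, ..., 1. */
int divide_loop_iters(int n) {
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        count++;
    return count;
}
```

Doubling n adds only one iteration to either loop, which is exactly what T(n) = log n expresses.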
Algorithm Efficiency
• Linear Logarithmic

A linear loop whose body is itself a logarithmic loop; for n = 10 that is 10 log 10 inner iterations.

T(n) = n log n

28
Algorithm Efficiency
• Quadratic

T(n) = n²

• Dependent Quadratic
1 + 2 + 3 + … + 9 + 10 = 55

T(n) = n(n + 1)/2

29
Algorithm Efficiency – Overall comparison (assuming n = 10000)

30
Asymptotic Notations - The “Big-Oh”
Notation
• Let f(n) and g(n) be functions mapping nonnegative integers to real numbers. We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for every integer n ≥ n0.

• This definition is often pronounced as “f(n) is big-Oh of g(n)” or “f(n) is order g(n).”

• Example: 7n − 2 is O(n).

31
Examples
Example 1: 7n − 2 is O(n)

• We need a real constant c > 0 and an integer constant n0 ≥ 1 such that 7n − 2 ≤ cn for every integer n ≥ n0. It is easy to see that a possible choice is c = 7 and n0 = 1, but there are other possibilities as well.

Example 2: 20n³ + 10n log n + 5 is O(n³)

Example 3: 3 log n + log log n is O(log n)

• 3 log n + log log n ≤ 4 log n, for n ≥ 2. Note that log log n is not even defined for n = 1, but log log n < log n for n ≥ 2. That is why we use n ≥ 2.

Example 4: 2^100 is O(1)

Example 5: 5n log n + 2n is O(n log n)

32
Big – O - theorem
• Let d(n), e(n), f(n), and g(n) be functions mapping nonnegative integers to nonnegative reals.
1. If d(n) is O(f(n)), then a·d(n) is O(f(n)), for any constant a > 0.
2. If d(n) is O(f(n)) and e(n) is O(g(n)), then d(n)+e(n) is O(f(n)+g(n)).
3. If d(n) is O(f(n)) and e(n) is O(g(n)), then d(n)e(n) is O(f(n)g(n)).
4. If d(n) is O(f(n)) and f(n) is O(g(n)), then d(n) is O(g(n)).
5. If f(n) is a polynomial of degree d (that is, f(n) = a₀ + a₁n + ··· + a_d·n^d), then f(n) is O(n^d).
6. n^x is O(a^n) for any fixed x > 0 and a > 1.
7. log n^x is O(log n) for any fixed x > 0.
8. log^x n is O(n^y) for any fixed constants x > 0 and y > 0.

33
Example
• Prove by the theorem: 2n³ + 4n² log n is O(n³).

log^x n is O(n^y), so log n is O(n)
d(n)·e(n) is O(f(n)·g(n)), so 4n²·log n is O(n²·n) = O(n³)
a·d(n) is O(f(n)), so 2n³ is O(n³)
d(n)+e(n) is O(f(n)+g(n)), so 2n³ + 4n² log n is O(n³ + n³) = O(n³)
34
Big-Omega and Big-Theta
• Let f(n) and g(n) be functions mapping integers to real numbers.

• We say that f(n) is Ω(g(n)) (pronounced “f(n) is big-Omega of g(n)”) if g(n) is O(f(n)); that is, there is a real constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n), for n ≥ n0. This definition allows us to say asymptotically that one function is greater than or equal to another, up to a constant factor.

• Likewise, we say that f(n) is Θ(g(n)) (pronounced “f(n) is big-Theta of g(n)”) if f(n) is O(g(n)) and f(n) is Ω(g(n)); that is, there are real constants c1 > 0 and c2 > 0, and an integer constant n0 ≥ 1 such that c1·g(n) ≤ f(n) ≤ c2·g(n), for n ≥ n0.

35
Examples
Example 1: 3 log n + log log n is Ω(log n)
• Proof: 3 log n + log log n ≥ 3 log n, for n ≥ 2.

Example 2: 3 log n + log log n is Θ(log n)
• Proof: 3 log n + log log n ≤ 4 log n, for n ≥ 2, and 3 log n + log log n ≥ 3 log n, for n ≥ 2.

Note that log log n is not even defined for n = 1, but log log n < log n for n ≥ 2. That is why we use n ≥ 2.

36
Examples

37
Little-Oh and Little-Omega

• Let f(n) and g(n) be functions mapping integers to real numbers. We say that f(n) is o(g(n)) (pronounced “f(n) is little-oh of g(n)”) if, for any constant c > 0, there is a constant n0 ≥ 1 such that f(n) < c·g(n) for n ≥ n0. E.g.: 7n + 8 ∈ o(n²)

• Likewise, we say that f(n) is ω(g(n)) (pronounced “f(n) is little-omega of g(n)”) if g(n) is o(f(n)); that is, if, for any constant c > 0, there is a constant n0 ≥ 1 such that f(n) > c·g(n) for n ≥ n0. E.g.: 4n + 6 ∈ ω(1)

Note: Little-o notation describes an upper bound that cannot be tight; little-ω denotes a lower bound that is not tight.
Examples
• x² ∈ O(x²)
• x² ∉ o(x²)
• x² ∈ o(x³)

The following are true for Big-O, but would not be true if you used little-o:
• x² ∈ O(x²)
• x² ∈ O(x² + x)
• x² ∈ O(200·x²)

The following are true for little-o:
• x² ∈ o(x³)
• x² ∈ o(x!)
• ln(x) ∈ o(x)

For example, the function f(n) = 3n is:
 in O(n²), o(n²), and O(n)
 not in O(lg n), o(lg n), or o(n)

39
Limit Theorem

40
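The body of this slide did not survive extraction; a standard statement of the limit test applied in the following examples is:

```latex
\lim_{n\to\infty}\frac{f(n)}{g(n)} =
\begin{cases}
0 & \Rightarrow\ f(n)\in o(g(n)) \subseteq O(g(n)) \\
c,\ \ 0 < c < \infty & \Rightarrow\ f(n)\in \Theta(g(n)) \\
\infty & \Rightarrow\ f(n)\in \omega(g(n)) \subseteq \Omega(g(n))
\end{cases}
```

For instance, lim (3x² + 2x)/x² = 3, a finite nonzero constant, so 3x² + 2x is Θ(x²) and hence O(x²).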
Prove using the limit theorem (convention: a limit of the form 1/0 is read as ∞)

• Example 1: 3x2+2x is O(x2)

41
Prove using limit theorem
• Example 2: 3n2+2n is o(n2)
• Example 3: 3n2+2n is o(n3)

42
Prove using limit theorem
• Example 4: 3n2+2n is Ω(n)
• Example 5: 3n2+2n is ω(n)

43
Asymptotically tight notation

• Theta, commonly written as Θ, is an Asymptotic Notation to


denote the asymptotically tight bound on the growth rate of
runtime of an algorithm.
• For example, if f(n) is O(g(n)) and f(n) is O(h(n)), and g(n) is O(h(n)), then g(n) is the asymptotically tighter bound for f(n).
44
Asymptotically tight notation
• So for log(n) we could say:
log(n) is O(n^n) since log(n) grows asymptotically no faster than n^n
log(n) is O(n) since log(n) grows asymptotically no faster than n
log(n) is O(log(n)) since log(n) grows asymptotically no faster than log(n)
We went from loose upper bounds to a tight upper bound

• log(n) is Ω(1) since log(n) grows asymptotically no slower than 1
log(n) is Ω(log(log(n))) since log(n) grows asymptotically no slower than log(log(n))
log(n) is Ω(log(n)) since log(n) grows asymptotically no slower than log(n)
We went from loose lower bounds to a tight lower bound

Since we have log(n) is O(log(n)) and Ω(log(n)), we can say that log(n) is Θ(log(n)).
Ordering functions by growth rate

49
Ordering functions by growth rate

• Note the point at which the function sqrt(n) dominates log2 n.

50
Logarithmic and Exponent
properties
• Let a, b, and c be positive real numbers. We have

51
Try yourself!!
• Order the following growth rates in ascending order

• Order these asymptotic notations by increasing growth rate.


, ,

52
Some other observations we can make are:
f(n) = Θ(g(n)) ⇔ g(n) = Θ(f(n))
f(n) = O(g(n)) ⇔ g(n) = Ω(f(n))
f(n) = o(g(n)) ⇔ g(n) = ω(f(n))

Big-Θ as an Equivalence Relation

If we look at the first relationship, we notice that f(n) = Θ(g(n)) seems to describe an equivalence relation:
1. f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
2. f(n) = Θ(f(n))
3. If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), it follows that f(n) = Θ(h(n))

Consequently, we can group all functions into equivalence classes, where all functions within one class are big-Theta (Θ) of each other.

Examples:
O(n) + O(n²) + O(n⁴) = O(n + n² + n⁴) = O(n⁴)
O(n) + Θ(n²) = Θ(n²)
O(n²) + Θ(n) = O(n²)
O(n²) + Θ(n²) = Θ(n²)

54
Solving Recurrences
• An equation or inequality that describes a function in terms of its value
on smaller inputs.
T(n) = T(n-1) + n
• Recurrences arise when an algorithm contains recursive calls to itself

• What is the actual running time of the algorithm?


• Need to solve the recurrence
• Find an explicit formula of the expression
• Bound the recurrence by an expression that involves n

55
Example Recurrences
• T(n) = T(n-1) + n Θ(n2)
• Recursive algorithm that loops through the input to eliminate one
item
• T(n) = T(n/2) + c Θ(lgn)
• Recursive algorithm that halves the input in one step
• T(n) = T(n/2) + n Θ(n)
• Recursive algorithm that halves the input but must examine every
item in the input
• T(n) = 2T(n/2) + 1 Θ(n)
• Recursive algorithm that splits the input into 2 halves and does a
constant amount of other work
56
Recurrent Algorithms - BINARY-SEARCH
• For an ordered array A, finds if x is in the array A[lo…hi]

Alg.: BINARY-SEARCH (A, lo, hi, x)
    if (lo > hi)
        return FALSE
    mid ← ⌊(lo+hi)/2⌋
    if x = A[mid]
        return TRUE
    if (x < A[mid])
        return BINARY-SEARCH (A, lo, mid−1, x)
    if (x > A[mid])
        return BINARY-SEARCH (A, mid+1, hi, x)

Example array (indices 1..8): A = {2, 3, 5, 7, 9, 10, 11, 12}
57
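An equivalent iterative C version of BINARY-SEARCH (0-based indices here, unlike the 1-based slides):

```c
#include <assert.h>

/* Returns 1 if x occurs in the sorted slice a[lo..hi] (inclusive), else 0. */
int binary_search(const int *a, int lo, int hi, int x) {
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* same midpoint, written to avoid
                                           overflow of lo + hi */
        if (a[mid] == x) return 1;
        if (x < a[mid]) hi = mid - 1;   /* recurse on the left half */
        else            lo = mid + 1;   /* recurse on the right half */
    }
    return 0;                           /* lo > hi: not found */
}
```

Each pass of the while loop does the same constant work as one recursive call, so the T(n) = c + T(n/2) analysis on the next slides applies unchanged.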
Example
• A[8] = {1, 2, 3, 4, 5, 7, 9, 11}
• lo = 1, hi = 8, x = 7

Step 1: mid = 4, A[4] = 4 < 7 → lo = 5, hi = 8
Step 2: mid = 6, A[6] = 7 = x → Found!

58
Another Example
• A[8] = {1, 2, 3, 4, 5, 7, 9, 11}
• lo = 1, hi = 8, x = 6

Step 1: mid = 4, A[4] = 4 < 6 → lo = 5, hi = 8
Step 2: mid = 6, A[6] = 7 > 6 → lo = 5, hi = 5
Step 3: mid = 5, A[5] = 5 < 6 → lo = 6, hi = 5
Step 4: lo > hi → NOT FOUND!

59
Analysis of BINARY-SEARCH
Alg.: BINARY-SEARCH (A, lo, hi, x)
    if (lo > hi)                          constant time: c1
        return FALSE
    mid ← ⌊(lo+hi)/2⌋                     constant time: c2
    if x = A[mid]                         constant time: c3
        return TRUE
    if (x < A[mid])
        BINARY-SEARCH (A, lo, mid−1, x)   same problem of size n/2
    if (x > A[mid])
        BINARY-SEARCH (A, mid+1, hi, x)   same problem of size n/2

• T(n) = c + T(n/2)
• T(n) – running time for an array of size n
60
Methods for Solving Recurrences

• Iteration method (or) Iterative substitution method

• Substitution method (or) Guess and test method

• Recursion tree method

• Master method

61
The Iteration Method

• Convert the recurrence into a summation and try to bound it


using known series
• Iterate the recurrence until the initial condition is reached.
• Use back-substitution to express the recurrence in terms of n and
the initial (boundary) condition.

62
The Iteration Method – Example 1
Example: T(n) = c + T(n/2)
T(n) = c + T(n/2)
= c + c + T(n/4) T(n/2) = c + T(n/4)
= c + c + c + T(n/8) T(n/4) = c + T(n/8)
Assume n = 2^k
T(n) = c + c + … + c + T(1)    (k times)
     = c·lg n + T(1)
     = Θ(lg n)
63
Iteration Method – Example 2
Example: T(n) = n + 2T(n/2)          Assume: n = 2^k
T(n) = n + 2T(n/2)                   T(n/2) = n/2 + 2T(n/4)
     = n + 2(n/2 + 2T(n/4))
     = n + n + 4T(n/4)
     = n + n + 4(n/4 + 2T(n/8))
     = n + n + n + 8T(n/8)
     …
     = i·n + 2^i·T(n/2^i)
     = k·n + 2^k·T(1)
     = n·lg n + n·T(1) = Θ(n·lg n)
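The closed form above (with T(1) = 1, T(2^k) = k·2^k + 2^k) can be checked numerically against the recurrence itself:

```c
#include <assert.h>

/* Directly evaluates the recurrence T(n) = n + 2T(n/2), T(1) = 1,
   for n a power of two. */
long T(long n) {
    if (n == 1) return 1;
    return n + 2 * T(n / 2);
}
```

For n = 2^k the result equals n·lg n + n, matching the derivation.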
64
Iteration Method – Example 3
T(n) = 8T(n/2) + n²

65
Iteration Method – Example 4
T(n) = 2T(n/2) + b

66
Iteration Method – Example 5
T(n) = 2T(n-1) + b (Tower of Hanoi)
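The slide's worked solution did not survive extraction; unrolling the recurrence (with T(1) taken as a constant) gives:

```latex
\begin{aligned}
T(n) &= 2T(n-1) + b\\
     &= 2\bigl(2T(n-2) + b\bigr) + b = 4T(n-2) + 3b\\
     &\;\;\vdots\\
     &= 2^{i}\,T(n-i) + (2^{i}-1)\,b\\
     &= 2^{n-1}\,T(1) + (2^{n-1}-1)\,b \qquad (i = n-1)\\
     &= \Theta(2^{n}).
\end{aligned}
```

This exponential growth is why moving n disks requires on the order of 2^n moves.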

67
Iteration Method – Example 6

68
Iteration Method – Example 7

69
Decreasing series

Iteration Method – Example 8


Increasing series

70
Iteration Method – Example 9

71
The substitution method

1. Guess a solution

2. Use induction to prove that the


solution works

72
Substitution method

• Guess a solution
• T(n) = O(g(n))
• Induction goal: apply the definition of the asymptotic notation
(strong induction)
• T(n) ≤ d g(n), for some d > 0 and n ≥ n0

• Induction hypothesis: T(k) ≤ d g(k) for all k < n

• Prove the induction goal


• Use the induction hypothesis to find some values of the constants d and n0 for
which the induction goal holds
73
Example: Binary Search
T(n) = c + T(n/2)
• Guess: T(n) = O(lgn)
• Induction goal: T(n) ≤ d lgn, for some d and n ≥ n0
• Induction hypothesis: T(n/2) ≤ d lg(n/2)

• Proof of induction goal:


T(n) = T(n/2) + c ≤ d lg(n/2) + c
= d lgn – d + c ≤ d lgn
if: – d + c ≤ 0, d ≥ c
74
• Base case?
Example 2
T(n) = T(n−1) + n
• Guess: T(n) = O(n²)
• Induction goal: T(n) ≤ c·n², for some c and n ≥ n0
• Induction hypothesis: T(k) ≤ c·k² for all k < n; in particular T(n−1) ≤ c(n−1)²

• Proof of induction goal:
T(n) = T(n−1) + n ≤ c(n−1)² + n
     = cn² − (2cn − c − n) ≤ cn²
if: 2cn − c − n ≥ 0  ⟺  c ≥ n/(2n−1)  ⟺  c ≥ 1/(2 − 1/n)
• For n ≥ 1, 2 − 1/n ≥ 1, so any c ≥ 1 will work
75
Example 3
T(n) = 2T(n/2) + n
• Guess: T(n) = O(nlgn)
• Induction goal: T(n) ≤ cn lgn, for some c and n ≥ n0
• Induction hypothesis: T(n/2) ≤ c·(n/2)·lg(n/2)

• Proof of induction goal:


T(n) = 2T(n/2) + n ≤ 2c (n/2)lg(n/2) + n
= cn lgn – cn + n ≤ cn lgn
if: - cn + n ≤ 0  c ≥ 1
76
• Base case?
Example 4
• T(n) = 2T(n/2) + bn·lg n
First guess: T(n) ≤ cn·lg n

But there is no way to make the resulting expression less than or equal to cn·lg n for n ≥ 2, so this first guess is not sufficient.

Better guess: T(n) ≤ cn·lg²n, which works provided c ≥ b.
The recursion-tree method

Convert the recurrence into a tree:


• Each node represents the cost incurred at
various levels of recursion
• Sum up the costs of all levels

Used to “guess” a solution for the recurrence

78
Example 1
W(n) = 2W(n/2) + n²

• Subproblem size at level i is: n/2^i
• Subproblem size hits 1 when 1 = n/2^i ⟹ i = lg n
• Cost of a node at level i = (n/2^i)²; number of nodes at level i = 2^i, so level i costs 2^i·(n/2^i)² = n²/2^i
• Total cost:

W(n) ≤ Σ_{i=0}^{lg n − 1} n²/2^i + 2^{lg n}·W(1)
     = n²·Σ_{i=0}^{lg n − 1} (1/2)^i + n
     ≤ n²·Σ_{i=0}^{∞} (1/2)^i + O(n)
     = 2n² + O(n)

⟹ W(n) = O(n²)
Example 2
E.g.: T(n) = 3T(n/4) + cn²

 Subproblem size at level i is: n/4^i
 Subproblem size hits 1 when 1 = n/4^i ⟹ i = log₄n
 Cost of a node at level i = c·(n/4^i)²
 Number of nodes at level i = 3^i ⟹ the last level has 3^{log₄n} = n^{log₄3} nodes
 Total cost:

T(n) = Σ_{i=0}^{log₄n − 1} (3/16)^i·cn² + Θ(n^{log₄3})
     ≤ Σ_{i=0}^{∞} (3/16)^i·cn² + Θ(n^{log₄3})
     = cn²/(1 − 3/16) + Θ(n^{log₄3})
     = (16/13)·cn² + Θ(n^{log₄3}) = O(n²)

⟹ T(n) = O(n²)
Example 2 - Substitution
T(n) = 3T(n/4) + cn²

• Guess: T(n) = O(n²)
• Induction goal: T(n) ≤ dn², for some d and n ≥ n0
• Induction hypothesis: T(n/4) ≤ d·(n/4)²

• Proof of induction goal:
T(n) = 3T(n/4) + cn²
     ≤ 3d·(n/4)² + cn²
     = (3/16)·dn² + cn²
     ≤ dn²    if: d ≥ (16/13)c

• Therefore: T(n) = O(n²)


81
Example 3 (simpler proof)
W(n) = W(n/3) + W(2n/3) + n

 The longest path from the root to a leaf is: n → (2/3)n → (2/3)²n → … → 1
 Subproblem size hits 1 when 1 = (2/3)^i·n ⟹ i = log_{3/2}n
 Cost of the problem at each level ≤ n
 Total cost:

W(n) ≤ n + n + … ≤ n·log_{3/2}n = n·(lg n / lg(3/2)) = O(n·lg n)

⟹ W(n) = O(n·lg n)
Example 3
W(n) = W(n/3) + W(2n/3) + n

 The longest path from the root to a leaf is: n → (2/3)n → (2/3)²n → … → 1
 Subproblem size hits 1 when 1 = (2/3)^i·n ⟹ i = log_{3/2}n
 Cost of the problem at level i = n
 Total cost:

W(n) ≤ n + n + … = Σ_{i=0}^{log_{3/2}n − 1} n + 2^{log_{3/2}n}·W(1)
     ≤ n·log_{3/2}n + n^{log_{3/2}2}·O(1)
     = n·(lg n / lg(3/2)) + O(n^{log_{3/2}2})
     = O(n·lg n)

⟹ W(n) = O(n·lg n)
Example 3 - Substitution
W(n) = W(n/3) + W(2n/3) + O(n)
• Guess: W(n) = O(n·lg n)
• Induction goal: W(n) ≤ d·n·lg n, for some d and n ≥ n0
• Induction hypothesis: W(k) ≤ d·k·lg k for any k < n (applied to k = n/3 and k = 2n/3)
• Proof of induction goal: try it out as an exercise!!
• W(n) = O(n·lg n)

84
Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n²:

Expanding the tree level by level (cost per node shown, with the level total at the right):

Level 0:  n²                                          → n²
Level 1:  (n/4)²  (n/2)²                              → (5/16)·n²
Level 2:  (n/16)² (n/8)² (n/8)² (n/4)²                → (25/256)·n² = (5/16)²·n²
…
Leaves:   Θ(1) each

Total = n²·(1 + 5/16 + (5/16)² + (5/16)³ + …)
      ≤ n²·1/(1 − 5/16)
      = Θ(n²)      (geometric series)
Appendix: geometric series

n 1
1  x
1  x  x2   xn  for x ¹ 1

1 x

2 1 for |x| < 1


1 x  x  
1 x

94
Master’s method
• “Cookbook” for solving recurrences of the form:

T(n) = a·T(n/b) + f(n)

where a ≥ 1, b > 1, and f(n) > 0

Idea: compare f(n) with n^{log_b a}

• f(n) is asymptotically smaller or larger than n^{log_b a} by a polynomial factor n^ε
• f(n) is asymptotically equal to n^{log_b a}


95
Master’s method
• “Cookbook” for solving recurrences of the form:

T(n) = a·T(n/b) + f(n)

where a ≥ 1, b > 1, and f(n) > 0

Case 1: if f(n) = O(n^{log_b a − ε}) for some ε > 0, then: T(n) = Θ(n^{log_b a})

Case 2: if f(n) = Θ(n^{log_b a}), then: T(n) = Θ(n^{log_b a}·lg n)

Case 3: if f(n) = Ω(n^{log_b a + ε}) for some ε > 0, and if a·f(n/b) ≤ c·f(n) for some c < 1 and all sufficiently large n (the regularity condition), then: T(n) = Θ(f(n))
96
Examples

T(n) = 2T(n/2) + n

a = 2, b = 2, log₂2 = 1

Compare n^{log₂2} = n with f(n) = n

⟹ f(n) = Θ(n) ⟹ Case 2

⟹ T(n) = Θ(n·lg n)
97
Examples
T(n) = 2T(n/2) + n²
a = 2, b = 2, log₂2 = 1

Compare n with f(n) = n²

⟹ f(n) = Ω(n^{1+ε}) ⟹ Case 3 ⟹ verify regularity condition:
a·f(n/b) ≤ c·f(n)
⟹ 2·n²/4 ≤ c·n² ⟹ c = ½ is a solution (c < 1)
⟹ T(n) = Θ(n²)
98
Examples (cont.)

T(n) = 2T(n/2) + √n

a = 2, b = 2, log₂2 = 1

Compare n with f(n) = n^{1/2}

⟹ f(n) = O(n^{1−ε}) ⟹ Case 1

⟹ T(n) = Θ(n)
99
Examples
Ex. T(n) = 4T(n/2) + n
a = 4, b = 2 ⟹ n^{log_b a} = n²; f(n) = n.
CASE 1: f(n) = O(n^{2−ε}) for ε = 1.
⟹ T(n) = Θ(n²).

Ex. T(n) = 4T(n/2) + n²
a = 4, b = 2 ⟹ n^{log_b a} = n²; f(n) = n².
CASE 2: f(n) = Θ(n²·lg⁰n), that is, k = 0.
⟹ T(n) = Θ(n²·lg n).

Ex. T(n) = 4T(n/2) + n³
a = 4, b = 2 ⟹ n^{log_b a} = n²; f(n) = n³.
CASE 3: f(n) = Ω(n^{2+ε}) for ε = 1, and 4·(n/2)³ ≤ c·n³ (reg. cond.) for c = 1/2.
⟹ T(n) = Θ(n³).

Ex. T(n) = 4T(n/2) + n²/lg n
a = 4, b = 2 ⟹ n^{log_b a} = n²; f(n) = n²/lg n.
Master method does not apply. In particular, for every constant ε > 0, we have n^ε = ω(lg n).
101
Examples
T(n) = 3T(n/4) + n·lg n

a = 3, b = 4, log₄3 ≈ 0.793

Compare n^{0.793} with f(n) = n·lg n

f(n) = Ω(n^{log₄3 + ε}) ⟹ Case 3

Check regularity condition:
3·(n/4)·lg(n/4) ≤ (3/4)·n·lg n = c·f(n), c = 3/4

T(n) = Θ(n·lg n)


Examples
T(n) = 2T(n/2) + n·lg n

a = 2, b = 2, log₂2 = 1

• Compare n with f(n) = n·lg n

• It seems like Case 3 should apply

• But f(n) must be polynomially larger, by a factor of n^ε

• In this case it is only larger by a factor of lg n

• So which rule? None of the three: the master method in this basic form does not apply to this recurrence.
103
Changing variables
T(n) = 2T(√n) + lg n
• Rename: m = lg n ⟹ n = 2^m
T(2^m) = 2T(2^{m/2}) + m
• Rename: S(m) = T(2^m)
S(m) = 2S(m/2) + m ⟹ S(m) = O(m·lg m) (demonstrated before)
T(n) = T(2^m) = S(m) = O(m·lg m) = O(lg n · lg lg n)

Idea: transform the recurrence to one that you have seen before
104
