BKS Unit 1 - Growth of Functions
Unit I: Syllabus
• Introduction:
– Algorithm definition
– Algorithm Specification
• Performance Analysis-
– Space complexity
– Time complexity
Learn DAA: From B K Sharma
Unit I: Syllabus
• Randomized Algorithms.
• Divide and conquer- General method
– Applications:
• Binary search
• Merge sort
• Quick sort
• Strassen’s Matrix Multiplication.
How fast will our program run?
T(n) ∝ n        Linear
T(n) ∝ n log n  Linearithmic
T(n) ∝ n²       Quadratic
T(n) ∝ n³       Cubic
T(n) ∝ n^k      Polynomial
T(n) ∝ 2^n      Exponential
T(n) ∝ n!       Factorial
[Figure: Running time T(n) curves for the growth rates above]
Asymptote
Asymptote (noun): a straight line that continually approaches a given curve but does not meet it at any finite distance.
Asymptotic (adjective): relating to an asymptote.
Asymptotic Analysis
To estimate the complexity function T(n) for a reasonably large input size n.
Here f(n) and g(n) are functions of the input size n: f(n) is the running-time function of an algorithm (e.g., for a searching or sorting problem), and g(n) is a simpler function used to bound it when comparing algorithms.
Approximation: total cost ≈ cost_of_elephant; the dominant term swamps everything else.
n⁴ + 100n² + 10n + 50
Approximation: n⁴
Highest-order term determines rate of growth!
Example
Suppose you are designing a website to process user data (e.g., financial records).
Suppose program A takes fA(n) = 30n + 8 microseconds to process any n records, while program B takes fB(n) = n² + 1 microseconds to process the n records.
[Figure: value of function vs. increasing n, plotting fA(n) = 30n + 8 and fB(n) = n² + 1. On a graph, as you go to the right, the faster-growing function eventually becomes larger.]
Counting steps for summing an N x N matrix (here N = 2, arr = {{2, 5}, {3, 4}}):

sum = 0;                        cost c1, executes once
for (i = 0; i < N; i++)         cost c2, test executes N+1 times
    for (j = 0; j < N; j++)     cost c2, test executes N x (N+1) times
        sum += arr[i][j];       cost c3, executes N x N times

Trace: i=0: sum += arr[0][0], sum += arr[0][1], then j=2 is false;
i=1: sum += arr[1][0], sum += arr[1][1], then j=2 is false;
i=2 is false, so the outer loop ends.

Total cost: c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N x N
Asymptotic Notations
Big-O (O); Big-Omega (Ω); Big-Theta (Θ); little-o (o); little-omega (ω)
Remarks
1. Best-case complexity is denoted using Ω notation, i.e., if an algorithm has time complexity Ω(n), then every input to the algorithm incurs at least c·n comparisons.

Asymptotic Notations
[Figure: f(n) and c·g(n) plotted against n; for all n ≥ n0, f(n) stays below c·g(n).]
f(n) = O(g(n))
f(n) is at most g(n), up to constant factor c
Asymptotic Notations
[Figure: f(n) and c·g(n) plotted against n; for all n ≥ n0, f(n) stays above c·g(n).]
f(n) = Ω(g(n))
f(n) is at least g(n), up to constant factor c
Asymptotic Notations
[Figure: running time vs. n; f(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0.]
c1·g(n) ≤ f(n) ≤ c2·g(n)
f(n) = Θ(g(n))
f(n) is at most c2·g(n) and at least c1·g(n) for some constants c1 and c2
Asymptotic Notations: No Uniqueness
Is 5n² = Θ(n²)?
This means we have to find constants c1, c2 and n0 such that
c1·n² ≤ 5n² ≤ c2·n² for all n ≥ n0.
Putting c1 = 1, c2 = 5 and n0 = 1 satisfies
c1·g(n) ≤ f(n) ≤ c2·g(n), so 5n² = Θ(n²).
The constants are not unique: c1 = 2 and c2 = 10 work just as well.
Example Ө
Is n² = Θ(n log n)?
[Figure: the Θ picture again, with f(n) to be sandwiched between c1·g(n) and c2·g(n) beyond n0.]
We would need constants c1, c2 and n0 with
c1·n·log n ≤ n² ≤ c2·n·log n for all n ≥ n0.
The upper bound requires c2 ≥ n/log n for all n ≥ n0, which is impossible: n/log n grows without bound. Hence n² ≠ Θ(n log n).
Examples of Θ notation
• Example 8: Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).
• We must find positive constants c1, c2 and n0 such that
c1(f(n) + g(n)) ≤ max(f(n), g(n)) ≤ c2(f(n) + g(n)) for all n ≥ n0.
• Upper bound: since f(n) and g(n) are nonnegative, max(f(n), g(n)) ≤ f(n) + g(n), so selecting c2 = 1 works.
• Lower bound: the maximum of two numbers is at least their average, so max(f(n), g(n)) ≥ (f(n) + g(n))/2, and selecting c1 = 1/2 works.
• Thus, max(f(n), g(n)) = Θ(f(n) + g(n)).
Intuition for Asymptotic Notation
• Big-Oh
◼ f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
• big-Omega
◼ f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
• big-Theta
◼ f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
Little o-, Little ω-notations
• So far,
– Θ(g) is the set of functions that go to infinity essentially at the same speed as g,
– O(g) goes to infinity no faster than g, and
– Ω(g) goes to infinity at least as fast as g.
• Sometimes, we want to say that a function grows strictly faster or slower than g.
• That's what the lower-case letters are for.
o-notation: upper bound but not asymptotically tight
⚫ The bound provided by O-notation may or may not be asymptotically tight.
⚫ The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not.
⚫ The o-notation denotes an upper bound that is not asymptotically tight. Formally, define o(g(n)) as the set
⚫ o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0}.
⚫ For example, 2n = o(n²), but 2n² ≠ o(n²).
⚫ The definitions of O-notation and o-notation are similar. The main difference: in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ c·g(n) holds for some constant c > 0, but in f(n) = o(g(n)), it holds for every constant c > 0.
In o-notation: lim_{n→∞} f(n)/g(n) = 0.
In ω-notation: lim_{n→∞} f(n)/g(n) = ∞, if the limit exists.
What is the algorithm’s efficiency?
• The algorithm’s efficiency is a function of the number of elements to be processed. The general format is
f(n) = efficiency
• Typical efficiency functions:
f(n) = log n
f(n) = n
f(n) = n log n
f(n) = n²
f(n) = n(n + 1)/2
What is the algorithm’s
efficiency?
• When comparing two different
algorithms that solve the same problem,
we often find that one algorithm is an
order of magnitude more efficient than
the other.
The basic concept
• If the algorithm contains no loops or recursion, its efficiency function is constant: the run time depends only on the speed of the computer, not on the number of elements to be processed.
• If the algorithm contains loops or recursion (any recursion can always be converted to a loop), its efficiency function depends strongly on the number of elements to be processed, and how the loops are structured determines the order of that dependence.
Linear Loops
• The efficiency depends on how many times the body of the loop is repeated. In a linear loop, the loop update either adds to or subtracts from the controlling variable.
• For example:
for (i = 0; i < 1000; i++)
    the loop body
• The body executes once per value of i, so
f(n) = n
Linear Logarithmic Nested Loop
for (i = 1; i <= 10; i++)
    for (j = 1; j <= 10; j *= 2)
        the loop body
The outer loop is linear and the inner loop is logarithmic (j doubles each pass), so
f(n) = n log n
Dependent Quadratic Nested Loop
for (i = 1; i < 10; i++)
    for (j = 1; j < i; j++)
        the loop body
The number of inner iterations depends on i, so the body runs 0 + 1 + … + (n − 2) = (n − 1)(n − 2)/2 times, which is on the order of n².
References:
1. Cormen, Leiserson, Rivest, Stein, "Introduction to Algorithms", PHI.
2. Baase, "Computer Algorithms: Introduction to Design & Analysis", Addison-Wesley.
3. Horowitz, Sahni, and Rajasekaran, "Fundamentals of Computer Algorithms", Universities Press.