BKS Unit 1-Growth of Functions

The document provides an overview of analyzing algorithms and their running time. It discusses analyzing algorithms based on their time complexity as a function of input size n. Common time complexities include constant, logarithmic, linear, quadratic, and exponential. The document emphasizes that asymptotic analysis allows algorithms to be compared by their rate of growth for large input sizes n, ignoring lower order terms. It also introduces asymptotic notations like Big-O, Big-Omega, and Big-Theta for describing algorithm time complexities.


Learn DAA: From B K Sharma

BCST-503: Design and Analysis of Algorithms
Growth of Functions

Unit I: Syllabus
• Introduction:
– Algorithm definition
– Algorithm Specification
• Performance Analysis-
– Space complexity
– Time complexity

Unit I: Syllabus
• Randomized Algorithms.
• Divide and conquer- General method
– Applications:
• Binary search
• Merge sort
• Quick sort
• Strassen’s Matrix Multiplication.
How fast will our program run?

The running time of our program will depend upon:

• The algorithm
• The input
• Our implementation of the algorithm in a programming language
• The translator (assembler / interpreter / compiler) we use
• The OS on our computer
• Our computer hardware

Our motivation: analyze the running time of an algorithm as a function of a single, simple parameter: the size of the input.

Running Time T(n)

If the running time T(n) is proportional to ...   the complexity is:

T(n) ∝ log n        Logarithmic
T(n) ∝ n            Linear
T(n) ∝ n log n      Linearithmic
T(n) ∝ n²           Quadratic
T(n) ∝ n³           Cubic
T(n) ∝ nᵏ           Polynomial
T(n) ∝ 2ⁿ           Exponential
T(n) ∝ kⁿ, k > 1    Exponential
T(n) ∝ n!           Factorial

[Figure: Running Time T(n) growth curves for the functions above.]
Asymptote

Asymptote (noun): a straight line that continually approaches a given curve but does not meet it at any finite distance.

Asymptotic (adjective): relating to an asymptote.
Asymptotic Analysis

Goal: to estimate the complexity function T(n) for reasonably large input size n.

What are f(n) and g(n)?

For a given problem, f(n) denotes the running time of Algorithm 1 and g(n) denotes the running time of Algorithm 2. For example:

Sorting:    Insertion Sort  f(n) = c1·n²      Merge Sort     g(n) = c2·n log n
Searching:  Linear Search   f(n) = c3·n       Binary Search  g(n) = c4·log₂ n

The task is to compare the functions f(n) and g(n) for large values of n.

Comparing Algorithms

Given two algorithms with running times f(n) and g(n), how do we decide which one is faster?

Compare the "rates of growth" of f(n) and g(n).

Understanding Rate of Growth

Consider the example of buying an elephant and a fish:

Total cost: cost_of_elephant + cost_of_fish

Approximation: cost ≈ cost_of_elephant
Understanding Rate of Growth

The low-order terms of a function are relatively insignificant for large n:

n⁴ + 100n² + 10n + 50

Approximation: n⁴

The highest-order term determines the rate of growth!
Example

Suppose you are designing a website to process user data (e.g., financial records).

Suppose program A takes fA(n) = 30n + 8 microseconds to process any n records, while program B takes fB(n) = n² + 1 microseconds to process the same n records.

Which program would you choose, knowing you will want to support millions of users?

Compare rates of growth: 30n + 8 ~ n and n² + 1 ~ n².
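As a small illustration (added here, not part of the original slides), the following C sketch tabulates the two running-time functions above for a few sample input sizes of my own choosing, so the crossover between the linear and the quadratic program is visible:

#include <stdio.h>

/* Illustrative sketch: compare fA(n) = 30n + 8 with fB(n) = n^2 + 1. */
int main(void) {
    double sizes[] = {1, 10, 31, 100, 1000, 1000000};   /* sample values of n */
    int count = sizeof(sizes) / sizeof(sizes[0]);
    for (int i = 0; i < count; i++) {
        double n  = sizes[i];
        double fA = 30.0 * n + 8.0;   /* grows linearly      */
        double fB = n * n + 1.0;      /* grows quadratically */
        printf("n = %10.0f   fA(n) = %12.0f   fB(n) = %16.0f\n", n, fA, fB);
    }
    return 0;
}

For n up to about 30, program B is actually faster, but from n = 31 onward fB(n) exceeds fA(n) and the gap widens rapidly, which is why the asymptotically smaller function wins for large inputs.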
Visualizing Orders of Growth

[Figure: value of function (y-axis) vs. increasing n (x-axis) for fA(n) = 30n + 8 and fB(n) = n² + 1.]

On a graph, as you go to the right, a faster-growing function eventually becomes larger: fB(n) = n² + 1 eventually overtakes fA(n) = 30n + 8.
Rate of Growth ≡ Asymptotic Analysis

Using rate of growth as a measure to compare different functions implies comparing them asymptotically (i.e., as n → ∞).

If f(x) grows faster than g(x), then f(x) eventually becomes larger than g(x) in the limit (i.e., for large enough values of x).
How do we find f(n) and g(n)?

(1) Associate a "cost" with each statement.
(2) Find the total number of times each statement is executed.
(3) Add up the costs.

Algorithm 1                    Cost
arr[0] = 0;                    c1
arr[1] = 0;                    c1
arr[2] = 0;                    c1
...
arr[N-1] = 0;                  c1
                               -----------
                               c1 + c1 + ... + c1 = c1 x N

Algorithm 2                    Cost
for(i=0; i<N; i++)             c2   (executed N+1 times)
    arr[i] = 0;                c1   (executed N times)
                               -------------
                               (N+1) x c2 + N x c1 = (c2 + c1) x N + c2
How do we find f(n) and g(n)?

                               Cost
sum = 0;                       c1
for(i=0; i<N; i++)             c2
    for(j=0; j<N; j++)         c2
        sum += arr[i][j];      c3

Trace for a 2 x 2 array (N = 2):
  i = 0:  j = 0: sum += arr[0][0];   j = 1: sum += arr[0][1];   j = 2: false
  i = 1:  j = 0: sum += arr[1][0];   j = 1: sum += arr[1][1];   j = 2: false
  i = 2:  false, the outer loop terminates

Total cost:
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N x N
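For clarity (this simplification is not spelled out on the original slides), collecting terms shows that the total cost is a quadratic function of N:

\[
T(N) = c_1 + c_2(N+1) + c_2 N(N+1) + c_3 N^2
     = (c_2 + c_3)\,N^2 + 2c_2\,N + (c_1 + c_2)
\]

so for large N the N² term dominates, and the running time of the doubly nested loop grows quadratically in N.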
Asymptotic Notations
Big-O (O); Big-Omega (Ω); Big-Theta (Θ); Little-o (o); Little-omega (ω)

Asymptotic notations are standard means for describing families of functions that share similar asymptotic behavior. They allow us to ignore small input sizes, constant factors, lower-order terms in polynomials, and so forth.

For example, T1(n) = 15n³ + n² + 4 and T2(n) = 5n³ + 4n + 5 both belong to the same class of functions, namely "cubic functions of n".

The low-order terms in a function are relatively insignificant for large n; the highest-order term determines the rate of growth. That is, we say that n⁴ + 100n² + 10n + 50 ~ n⁴: the two have the same rate of growth.
Asymptotic Notations
Big-O (O); Big-Omega (Ω); Big-Theta (Θ)

• If f(n) ≤ c·g(n) for some c > 0 and all n ≥ n0, then f(n) = O(g(n)).
  O is used to denote an upper bound: the worst-case complexity of an algorithm.

• If f(n) ≥ c·g(n) for some c > 0 and all n ≥ n0, then f(n) = Ω(g(n)).
  Ω is used to denote a lower bound: the best-case complexity of an algorithm.

• If c1·g(n) ≤ f(n) ≤ c2·g(n) for some c1, c2 > 0 and all n ≥ n0, then f(n) = Θ(g(n)).
  Θ is used to denote a tight bound (both upper and lower): the best and worst cases have the same order.
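For reference, the same three notations in their standard set form (standard textbook definitions, added here for completeness):

\[
O(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0 \,\}
\]
\[
\Omega(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0 \,\}
\]
\[
\Theta(g(n)) = \{\, f(n) : \exists\, c_1, c_2 > 0,\ n_0 > 0 \text{ such that } 0 \le c_1 g(n) \le f(n) \le c_2 g(n) \text{ for all } n \ge n_0 \,\}
\]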
Worst-Case vs. Average-Case Analysis

Worst-case analysis gives an upper bound for the running time of a single execution of an algorithm with a worst-case input and worst-case random choices.

Average-case analysis gives an upper bound for the expected running time of a single execution of a deterministic algorithm with a random input selected according to some distribution,
OR
an upper bound for the expected running time of a single execution of a randomized algorithm with a worst-case input.
Worst-Case Vs Average Case Analysis

Given a distribution of running times:


worst-case analysis gives an upper bound for the
maximum, while

average case analysis gives an upper bound for


the expected value.
Worst-Case Vs Average Case Analysis

Remarks

In statistics, there is no single parameter that always


captures the relevant properties of a distribution.

Similarly, there is no single way of analyzing an


algorithm that always gives reasonable bounds for its
performance in practice.

One has to understand the nature of a particular algorithm to know which method of analysis accurately describes its performance.
Remarks

1. Best-case complexity is denoted using Ω notation. That is, if an algorithm has time complexity Ω(n), then every input to the algorithm incurs at least c·n comparisons.

2. Worst-case complexity is denoted using O notation. That is, if an algorithm has time complexity O(n²), then every input to the algorithm incurs at most c·n² comparisons.

3. We use Theta (Θ) notation to analyze the run-time of a specific input with respect to an algorithm.
Remarks

For example, an already sorted input is a best-case input to insertion sort, whose run-time for this input is denoted as Θ(n).

Similarly, with respect to quick sort, the same input acts as a worst-case input, and its run-time is Θ(n²).

Remarks

Note that the worst-case and best-case analyses of an algorithm can sometimes yield the same asymptotic bounds (as for merge sort) or different bounds (as for insertion sort).

For algorithms whose best-case and worst-case asymptotic complexities differ, it is natural to look at an average-case analysis of the algorithm.

Further, for such algorithms, it is interesting to investigate whether the average case is close to the best-case bound or to the worst-case bound.
Asymptotic Notations

Big-O: f(n) ≤ c·g(n) for all n ≥ n0.

[Figure: running time vs. n; beyond n0, the curve f(n) stays below c·g(n).]

f(n) = O(g(n)): f(n) is at most g(n), up to a constant factor c.
Asymptotic Notations

Note: when we write f(n) = O(g(n)), the "=" does not mean "equal to"; it means "belongs to". That is, f(n) belongs to the Big-O family of functions O(g(n)).

Two functions are compared asymptotically, for large n, and not near the origin.
Asymptotic Notations

Big-Omega: f(n) ≥ c·g(n) for all n ≥ n0.

[Figure: running time vs. n; beyond n0, the curve f(n) stays above c·g(n).]

f(n) = Ω(g(n)): f(n) is at least g(n), up to a constant factor c.
Asymptotic Notations

Big-Theta: c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

[Figure: running time vs. n; beyond n0, the curve f(n) stays between c1·g(n) and c2·g(n).]

f(n) = Θ(g(n)): f(n) is at most c2·g(n) and at least c1·g(n) for some constants c1 and c2.
Asymptotic Notations

Proving the Asymptotic Bounds

We must find SOME constants c and n0 that satisfy the asymptotic-notation relation.

No uniqueness: there is no unique set of values for n0 and c in proving the asymptotic bounds.
Asymptotic Notations

No Uniqueness

Prove that 100n + 5 = O(n²):

100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5,
so n0 = 5 and c = 101 is a solution.

100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1,
so n0 = 1 and c = 105 is also a solution.
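A throwaway sketch (not from the slides) that spot-checks a candidate Big-O witness (c, n0) numerically; the helper names and the sample range are my own choices, and a finite check is only evidence, not a proof:

#include <stdio.h>

/* Hypothetical helper: verify f(n) <= c * g(n) for sample values n >= n0. */
static double f(double n) { return 100.0 * n + 5.0; }   /* f(n) = 100n + 5 */
static double g(double n) { return n * n; }              /* g(n) = n^2      */

int main(void) {
    const double c  = 101.0;   /* candidate constant  */
    const double n0 = 5.0;     /* candidate threshold */
    for (double n = n0; n <= 1000000.0; n *= 10.0) {
        printf("n = %8.0f   f(n) = %12.0f   c*g(n) = %16.0f   holds = %d\n",
               n, f(n), c * g(n), f(n) <= c * g(n));
    }
    return 0;
}

Trying c = 105, n0 = 1 instead prints the same verdict, which matches the point of the slide: the witnesses are not unique.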
Asymptotic Notations (O)

Example 1 (first method): prove that the running time T(n) = n³ + 20n + 1 is O(n³).

Proof:
Here f(n) = n³ + 20n + 1 and g(n) = n³.
We have to show that, for some c and n0, n³ + 20n + 1 ≤ c·n³ for all n ≥ n0.

Put n = n0 = 1: 1 + 20 + 1 = 22 ≤ 22·1³ = 22. Yes.
Put n = 3 (any value greater than 1): 3³ + 20·3 + 1 = 27 + 60 + 1 = 88 ≤ 22·3³ = 22·27 = 594. Yes.

Thus, for all n ≥ n0 = 1 and c ≥ 22, f(n) ≤ c·g(n). Hence n³ + 20n + 1 is O(n³).
Asymptotic Notations (O)

Example (second method): prove that the running time T(n) = n³ + 20n + 1 is O(n³).

Proof:
Here f(n) = n³ + 20n + 1 and g(n) = n³.
We have to show that, for some c and n0, n³ + 20n + 1 ≤ c·n³ for all n ≥ n0.

Divide both sides by n³:
1 + 20/n² + 1/n³ ≤ c

The left-hand side is largest at n = 1 and decreases as n grows, so for all n ≥ n0 = 1:
1 + 20 + 1 = 22 ≤ c, i.e., c ≥ 22 works.

Thus, for all n ≥ n0 = 1 and c ≥ 22:
n³ + 20n + 1 ≤ 22·n³, i.e., f(n) ≤ 22·g(n) ≤ c·g(n).

Hence n³ + 20n + 1 is O(n³).
Asymptotic Notations (O)

Example 2 (third method): is 5n³ + 2n² + n + 10⁶ = O(n³)?

Proof:
Here f(n) = 5n³ + 2n² + n + 10⁶ and g(n) = n³.

By the Big-O definition, f(n) is O(n³) if f(n) ≤ c·g(n) = c·n³ for some c > 0 and all n ≥ n0.

Let us check this condition: if 5n³ + 2n² + n + 10⁶ ≤ c·n³, then
5 + 2/n + 1/n² + 10⁶/n³ ≤ c.

For all n ≥ n0 = 100, the left-hand side is at most 5 + 0.02 + 0.0001 + 1 < 6.05, so c = 6.05 works:
f(n) ≤ 6.05·n³, i.e., f(n) ≤ 6.05·g(n) ≤ c·g(n).

Hence T(n) = 5n³ + 2n² + n + 10⁶ is O(n³).
Asymptotic Notations (O)

Example 3 (third method): is 2n + 7 = O(n)?

Proof:
Here f(n) = 2n + 7 and g(n) = n.

By the Big-O definition, f(n) is O(n) if f(n) ≤ c·g(n) = c·n for some c > 0 and all n ≥ n0.

Let us check this condition: if 2n + 7 ≤ c·n, then 2 + 7/n ≤ c.

For all n ≥ n0 = 7, 2 + 7/n ≤ 2 + 1 = 3, so c = 3 works:
f(n) ≤ 3·n, i.e., f(n) ≤ 3·g(n) ≤ c·g(n).

Hence T(n) = 2n + 7 is O(n).
Asymptotic Notations (O)

Example 4 (fourth method): is 3n + 2 = O(n)?

Put n = 2: f(2) = 3·2 + 2 = 8 = 4·2 = 4n
Put n = 3: f(3) = 3·3 + 2 = 11 < 4·3 = 4n
Put n = 4: f(4) = 3·4 + 2 = 14 < 4·4 = 4n

Thus f(n) ≤ 4n for all n ≥ 2, so with c = 4 and n0 = 2, f(n) = O(n).
Asymptotic Notations (O)

Example 5: is 5n² = O(n)?

This would require some c and n0 such that 5n² ≤ c·n for all n ≥ n0.
Dividing both sides by n gives 5n ≤ c, which cannot hold for all n ≥ n0, because 5n grows without bound while c is a fixed constant.
For instance, with c = 5: at n = 1 we have 5·1² = 5 ≤ 5·1 = 5, but at n = 5 we have 5·5² = 125 ≤ 5·5 = 25, which is false.

Hence 5n² is NOT O(n); it is, however, O(n²).
Asymptotic Notations (Ω)

When we say that an algorithm takes at least a certain amount of time, without providing an upper bound, we use big-Ω notation (Ω is the Greek letter "omega").

We then say that the running time is "big-Ω of g(n)". Big-Ω notation is used for asymptotic lower bounds, since it bounds the growth of the running time from below for large enough input sizes.
Asymptotic Notations (Ω)

Example 1:
Definition: if f(n) ≥ c·g(n) for some c > 0 and all n ≥ n0, then f(n) = Ω(g(n)).

Is f(n) = 3n + 1 = Ω(n)?

Choose c = 2 and n0 = 5. Then 3n + 1 ≥ 2n whenever n ≥ 5, because dividing by n gives 3 + 1/n ≥ 2, which holds for every n ≥ 5 (e.g., 3 + 1/5 = 3.2 ≥ 2 and 3 + 1/6 ≈ 3.17 ≥ 2).

Therefore f(n) = 3n + 1 = Ω(n).
Asymptotic Notations (Ω)

Example (value-checking method): is f(n) = 3n + 2 = Ω(n)?

Put n = 2: f(2) = 3·2 + 2 = 8 ≥ 3·2 = 3n
Put n = 3: f(3) = 3·3 + 2 = 11 ≥ 3·3 = 3n

Thus f(n) ≥ 3n for all n ≥ 2, so with c = 3 and n0 = 2, f(n) = Ω(n).
Asymptotic Notations (Ω)

Example: is 3n + 2 = Ω(n)?

Proof:
Here f(n) = 3n + 2 and g(n) = n.

By the Big-Omega (Ω) definition, f(n) is Ω(n) if f(n) ≥ c·g(n) = c·n for some c > 0 and all n ≥ n0.

Let us check this condition: if 3n + 2 ≥ c·n, then 3 + 2/n ≥ c.
Since 3 + 2/n ≥ 3 for every n ≥ 1, the choice c = 3 and n0 = 1 works:
f(n) ≥ 3·n, i.e., f(n) ≥ 3·g(n) = c·g(n).

Hence T(n) = 3n + 2 is Ω(n).
Asymptotic Notations (Ω)

Example 2: is 5n² = Ω(n)?

Proof:
Here f(n) = 5n² and g(n) = n.

By the Big-Omega (Ω) definition, f(n) is Ω(n) if f(n) ≥ c·g(n) = c·n for some c > 0 and all n ≥ n0.

Let us check this condition: if 5n² ≥ c·n, then 5n ≥ c.
For all n ≥ n0 = 1 we have 5n ≥ 5, so c = 5 works: 5n² ≥ 5n for n ≥ 1, i.e., f(n) ≥ 5·g(n) = c·g(n).

Hence T(n) = 5n² is Ω(n).
Asymptotic Notations (Ω)

Example 3: is 5n² = Ω(n²)?

This means we have to show that, for some c and n0, 5n² ≥ c·n² for all n ≥ n0.

Put c = 5 and n0 = 1: 5n² ≥ 5·n² holds for every n ≥ 1.

Thus, for c = 5 and n0 = 1, f(n) = 5n² ≥ c·n² = c·g(n).

Hence 5n² = Ω(n²).
Asymptotic Notations (Θ)

Example

Θ combines the O and Ω bounds: c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

[Figure: running time vs. n; beyond n0, the curve f(n) stays between c1·g(n) and c2·g(n), so f(n) = Θ(g(n)).]
Example Θ

From the two earlier examples, for f(n) = 3n + 2:

Is 3n + 2 = O(n)?  f(2) = 8 = 4·2, f(3) = 11 < 4·3, f(4) = 14 < 4·4; thus f(n) ≤ 4n for all n ≥ 2, hence f(n) = O(n).

Is 3n + 2 = Ω(n)?  f(2) = 8 ≥ 3·2, f(3) = 11 ≥ 3·3; thus f(n) ≥ 3n for all n ≥ 2, hence f(n) = Ω(n).
Example 1: Θ

c1·g(n) ≤ f(n) ≤ c2·g(n):

3n ≤ 3n + 2 ≤ 4n for all n ≥ 2  (c1 = 3, c2 = 4, n0 = 2)

Hence 3n + 2 = Θ(n).
Example 2: Θ

Is 5n² = Θ(n)? That would require c1·n ≤ 5n² ≤ c2·n for all n ≥ n0.

The lower bound holds (5n² ≥ 1·n for all n ≥ 1, so 5n² = Ω(n)), but the upper bound fails: 5n² ≤ c2·n would require 5n ≤ c2 for all large n, which is impossible (see Example 5 under Big-O).

Hence 5n² ≠ Θ(n); the tight bound is 5n² = Θ(n²).
Extra
Θ notation (Theta)
(Tight Bound)
• Example 2: n²/2 - n/2 = Θ(n²)
• This means we have to prove that
  – c1·n² ≤ n²/2 - n/2 ≤ c2·n²
• That is, we have to show both
  n²/2 - n/2 ≤ c2·n²   (1)
  and
  n²/2 - n/2 ≥ c1·n²   (2)
• From (1):
  – ½n² - ½n ≤ ½n² for all n ≥ 0, so c2 = ½ works.
  – Thus ½n² - ½n = O(n²)   (3)
• From (2):
  – For n ≥ 2 we have ½n ≤ ¼n², so ½n² - ½n ≥ ½n² - ¼n² = ¼n², and c1 = ¼ works.
  – Thus ½n² - ½n = Ω(n²)   (4)
• Hence, from (3) and (4), ½n² - ½n = Θ(n²).

Θ notation (Theta)
(Tight Bound)
• Example 4: show that 6n³ ≠ Θ(n²).
• Suppose, for the purpose of contradiction, that c2 and n0 exist such that 6n³ ≤ c2·n² for all n ≥ n0.
  – Dividing by n² yields n ≤ c2/6,
  – which cannot possibly hold for arbitrarily large n, since c2 is a constant.
  – Also, lim(n→∞) [6n³ / n²] = lim(n→∞) [6n] = ∞, which is not a non-zero constant.
Examples of Θ notation
– Example 5: n ≠ Θ(n²), since c1·n² ≤ n ≤ c2·n² requires n ≤ 1/c1 (the left inequality only holds for small n).
– Example 6: 6n³ ≠ Θ(n²), since c1·n² ≤ 6n³ ≤ c2·n² requires n ≤ c2/6 (the right inequality only holds for small n).
– Example 7: n ≠ Θ(log n), since c1·log n ≤ n ≤ c2·log n would require c2 ≥ n/log n for all n ≥ n0, which is impossible.
Examples of Θ notation
• Example 8: Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).
• We must exhibit positive constants c1, c2 and n0 such that
  c1·(f(n) + g(n)) ≤ max(f(n), g(n)) ≤ c2·(f(n) + g(n)) for all n ≥ n0.
• Select c2 = 1: since f(n) and g(n) are nonnegative for large enough n, max(f(n), g(n)) ≤ f(n) + g(n) = c2·(f(n) + g(n)).
• Select c1 = ½: max(f(n), g(n)) is always at least the average of f(n) and g(n), so max(f(n), g(n)) ≥ (f(n) + g(n))/2 = c1·(f(n) + g(n)).
• Thus max(f(n), g(n)) = Θ(f(n) + g(n)).
Intuition for Asymptotic Notation

• Big-O
  ◼ f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
• Big-Omega
  ◼ f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
• Big-Theta
  ◼ f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
Little-o, Little-ω notations
• So far,
  – Θ(g) is the set of functions that go to infinity essentially at the same speed as g,
  – O(g) goes to infinity no faster than g, and
  – Ω(g) goes to infinity at least as fast as g.
• Sometimes, we want to say that a function grows strictly faster or slower than g.
• That is what the lower-case letters are for.
o-notation: upper bound but not asymptotically tight
⚫ The bound provided by O-notation may or may not be asymptotically tight.
⚫ The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not.
⚫ The o-notation denotes an upper bound that is not asymptotically tight. Formally, define o(g(n)) as the set
⚫ o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0}.
⚫ For example, 2n = o(n²), but 2n² ≠ o(n²).
⚫ The definitions of O-notation and o-notation are similar. The main difference:
  ◆ In f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ c·g(n) holds for SOME constant c > 0.
  ◆ In f(n) = o(g(n)), the bound 0 ≤ f(n) < c·g(n) holds for ALL constants c > 0.
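A short worked check of that "for all c" requirement (added for clarity, not on the original slides): 2n = o(n²) because, for any c > 0, choosing n0 > 2/c gives, for all n ≥ n0,

\[
2n < c\,n^2 \iff 2 < c\,n \iff n > 2/c .
\]

By contrast, 2n² ≠ o(n²): taking c = 1, the required inequality 2n² < n² fails for every n > 0.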


o-notation: upper bound but not asymptotically tight
• Intuitively, in the o-notation the function f(n) becomes insignificant relative to g(n) as n approaches infinity. That is (when the limit exists),
  in O-notation: 0 ≤ lim(n→∞) f(n)/g(n) < ∞,
  in o-notation:  lim(n→∞) f(n)/g(n) = 0.
o-notation: upper bound but not
asymptotically tight
• So o(g(n)) is the set of functions that, in the
long run, become insignificant compared to g.
– The way we say this formally is that o(g(n)) is
the set of f(n) such that …
• So, if we want g(n) to be twice as big as f(n)
we choose a certain n0.
• If we want it to be 4 times as big, we may
have to go to larger values of n, but
eventually it will be, and so on.
• we can express this in terms of limits -- etc.
ω-notation: lower bound but not asymptotically tight
⚫ ω-notation is to Ω-notation as o-notation is to O-notation.
⚫ The ω-notation denotes a lower bound that is not asymptotically tight. Formally, define ω(g(n)) as the set
• ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0}.
• One way to define it: f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).
ω-notation: lower bound but not asymptotically tight
• For example, n²/2 = ω(n), but n²/2 ≠ ω(n²).
⚫ The relation f(n) = ω(g(n)) implies that (if the limit exists)
  in Ω-notation: 0 < lim(n→∞) f(n)/g(n) ≤ ∞,
  in ω-notation:  lim(n→∞) f(n)/g(n) = ∞.
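Putting the limit rules from the last two slides side by side (a standard characterization, stated here for reference and assuming the limit exists):

\[
\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0 \;\Rightarrow\; f(n) = o(g(n)), \qquad
\lim_{n\to\infty} \frac{f(n)}{g(n)} = c \ \ (0 < c < \infty) \;\Rightarrow\; f(n) = \Theta(g(n)), \qquad
\lim_{n\to\infty} \frac{f(n)}{g(n)} = \infty \;\Rightarrow\; f(n) = \omega(g(n)).
\]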
What is the algorithm's efficiency?
• The algorithm's efficiency is a function of the number of elements to be processed; the general format is f(n) = efficiency. Common efficiency functions include:

  f(n) = n
  f(n) = log n
  f(n) = n log n
  f(n) = n²
  f(n) = n(n+1)/2
What is the algorithm’s
efficiency?
• When comparing two different
algorithms that solve the same problem,
we often find that one algorithm is an
order of magnitude more efficient than
the other.
The basic concept
• If the efficiency function is linear, the algorithm contains no loops or recursion. In this case, the algorithm's efficiency depends only on the speed of the computer.
• If the algorithm contains loops or recursion (any recursion may always be converted to a loop), it is called nonlinear. In this case the efficiency function depends strongly on the number of elements to be processed.
Linear Loops
• The efficiency depends on how many times the body of the loop is repeated. In a linear loop, the loop update (of the controlling variable) either adds or subtracts.
• For example:
  for (i = 0; i < 1000; i++)
      the loop body
• Here the loop body is repeated 1000 times.
• For the linear loop, the efficiency is directly proportional to the number of iterations; it is: f(n) = n
Logarithmic Loops
• In a logarithmic loop, the controlling variable is multiplied or divided in each iteration.
• For example:
  Multiply loop:
  for (i = 1; i <= 1000; i *= 2)
      the loop body

  Divide loop:
  for (i = 1000; i >= 1; i /= 2)
      the loop body
• For the logarithmic loop, the efficiency is determined by the following formula: f(n) = log n
Linear Logarithmic Nested Loop
  for (i = 1; i <= 10; i++)
      for (j = 1; j <= 10; j *= 2)
          the loop body
• The outer loop in this example adds, while the inner loop multiplies.
• The total number of iterations in the linear logarithmic nested loop is equal to the product of the numbers of iterations of the outer and inner loops (10 log 10 in our example).
• For the linear logarithmic nested loop, the efficiency is determined by the following formula: f(n) = n log n
Quadratic Nested Loop
  for (i = 1; i <= 10; i++)
      for (j = 1; j <= 10; j++)
          the loop body
• Both loops in this example add.
• The total number of iterations in the quadratic nested loop is equal to the product of the numbers of iterations of the outer and inner loops (10 x 10 = 100 in our example).
• For the quadratic nested loop, the efficiency is determined by the following formula: f(n) = n²
Dependent Quadratic Nested Loop
  for (i = 1; i <= 10; i++)
      for (j = 1; j <= i; j++)
          the loop body
• The number of iterations of the inner loop depends on the outer loop: it averages (n+1)/2, and the total over all outer iterations is the sum of the first n terms of an arithmetic progression, n(n+1)/2.
• The total number of iterations in the nested loop is therefore the number of outer iterations times the average number of inner iterations (10 x 5.5 = 55 in our example).
• For the dependent quadratic nested loop, the efficiency is determined by the following formula:
  f(n) = n(n+1)/2
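A self-contained sketch (added here, not part of the original slides) that counts loop-body executions for each loop pattern above, so the formulas f(n) = n, log n, n², and n(n+1)/2 can be checked empirically; the value n = 1000 is an arbitrary choice:

#include <stdio.h>

int main(void) {
    long n = 1000;
    long count;

    /* Linear loop: body runs n times, f(n) = n */
    count = 0;
    for (long i = 0; i < n; i++) count++;
    printf("linear:              %ld\n", count);

    /* Logarithmic loop: body runs about log2(n) times, f(n) = log n */
    count = 0;
    for (long i = 1; i <= n; i *= 2) count++;
    printf("logarithmic:         %ld\n", count);

    /* Quadratic nested loop: body runs n * n times, f(n) = n^2 */
    count = 0;
    for (long i = 1; i <= n; i++)
        for (long j = 1; j <= n; j++) count++;
    printf("quadratic:           %ld\n", count);

    /* Dependent quadratic nested loop: body runs n(n+1)/2 times */
    count = 0;
    for (long i = 1; i <= n; i++)
        for (long j = 1; j <= i; j++) count++;
    printf("dependent quadratic: %ld  (n(n+1)/2 = %ld)\n", count, n * (n + 1) / 2);

    return 0;
}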

References:
1. Cormen, Leiserson, and Rivest, "Introduction to Algorithms", PHI.
2. Baase, "Computer Algorithms: Introduction to Design & Analysis", Addison-Wesley.
3. Horowitz, Sahni, and Rajasekaran, "Fundamentals of Computer Algorithms", Universities Press.
