04_Algorithm_Analysis_Asymptotic Notation_Growth of Functions

The document discusses the design and analysis of algorithms, focusing on algorithm performance, asymptotic notations, and the importance of understanding theoretical bases for practical applications. It highlights the significance of analyzing algorithms to predict performance, compare different algorithms, and avoid performance bugs. Additionally, it introduces various asymptotic notations such as Big-Oh, Big-Omega, and Theta, which help categorize algorithms based on their growth rates.


Design and Analysis of Algorithms

Algorithm Analysis and the


Asymptotic Notations
Suggested Readings: Introduction to Algorithms, Cormen et al.
Programmer needs to develop a working solution.
Client wants to solve the problem efficiently.
Theoretician wants to understand.
Student might play any or all of these roles someday.

Reasons to analyze algorithms
Predict performance
Compare algorithms
Provide guarantees
Understand theoretical basis

Primary practical reason: avoid performance bugs.

The client gets poor performance because the programmer
did not understand the performance characteristics.
Programmer’s Main Challenge
Q. Will my program be able to solve a large practical input?

Why is my program so slow? Why does it run out of memory?
Example: 3-SUM

3-SUM. Given N distinct integers, how many triples sum to exactly zero?

A: 30 -40 -20 -10 40 0 10 5

    a[i]  a[j]  a[k]  sum
1     30   -40    10    0
2     30   -20   -10    0
3    -40    40     0    0
4    -10     0    10    0
3-SUM: brute-force algorithm

public class ThreeSum
{
    public static int count(int[] a)
    {
        int N = a.length;
        int count = 0;
        for (int i = 0; i < N; i++)
            for (int j = i+1; j < N; j++)
                for (int k = j+1; k < N; k++)    // check each triple
                    if (a[i] + a[j] + a[k] == 0)
                        count++;
        return count;
    }

    public static void main(String[] args)
    {
        In in = new In(args[0]);
        int[] a = in.readAllInts();
        StdOut.println(count(a));
    }
}
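The In and StdOut classes above are helpers from the course's accompanying library, so the program as shown needs that library on the classpath. A fully self-contained sketch of the same triple-counting loop, using the example array from the previous slide (the class name ThreeSumDemo is ours), might look like:

```java
public class ThreeSumDemo {
    // Count triples (i, j, k) with i < j < k whose values sum to exactly zero.
    public static int count(int[] a) {
        int N = a.length;
        int count = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++)
                for (int k = j + 1; k < N; k++)     // check each triple
                    if (a[i] + a[j] + a[k] == 0)
                        count++;
        return count;
    }

    public static void main(String[] args) {
        int[] a = {30, -40, -20, -10, 40, 0, 10, 5};
        System.out.println(count(a));   // prints 4, the four triples in the table above
    }
}
```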
Measuring the running time

Q. How to time a program?


A. Manual
Computing the Running Time
Experimental Approach
Q. How to time a program?
A. Automatic

• Write a program that implements the algorithm


• Run the program with data sets of varying sizes
• Determine the actual running time using a system call to
measure time (e.g. system(date))
Empirical analysis

Run the program for various input sizes and measure running
time.

     N    time (seconds) †
   250        0
   500        0
 1,000        0.1
 2,000        0.8
 4,000        6.4
 8,000       51.1
16,000        ?
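One way to produce such a table is a doubling experiment: time the brute-force count on random inputs whose size doubles at each step. A sketch follows; the class name, random seed, value range, and size cap are our choices, and the measured times will of course differ from the table above on different hardware.

```java
import java.util.Random;

public class DoublingTest {
    // Same cubic triple-counting loop as ThreeSum.
    public static int count(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                for (int k = j + 1; k < a.length; k++)
                    if (a[i] + a[j] + a[k] == 0)
                        count++;
        return count;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);              // fixed seed for repeatability
        for (int N = 250; N <= 1000; N *= 2) {    // doubling input sizes
            int[] a = new int[N];
            for (int i = 0; i < N; i++)
                a[i] = rnd.nextInt(2_000_000) - 1_000_000;
            long start = System.nanoTime();
            count(a);
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%6d %8.3f%n", N, seconds);
        }
    }
}
```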
Data analysis

Plot running time T (N) vs. input size N.


Issues with experimental approach
• It is necessary to implement and test the algorithm in order to
determine its running time.
• Experiments can be done only on a limited set of inputs, and
may not be indicative of the running time for other inputs.
• The same hardware and software should be used in order to
compare two algorithms. – condition very hard to achieve!
Use a Theoretical Approach

Based on a high-level description of the algorithm, rather than a
language-dependent implementation

Makes possible an evaluation of the algorithms that is


independent of the hardware and software environments
• → Generality
Mathematical models for running time

Total running time: sum of cost × frequency for all operations.


• Need to analyze program to determine set of operations.
• Cost depends on machine, compiler.
• Frequency depends on algorithm, input data.

In principle, accurate mathematical models are available.
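As a small instance of the cost × frequency model: in the brute-force 3-SUM loop, the equality test runs once per triple, i.e. C(N,3) = N(N-1)(N-2)/6 ≈ N³/6 times, so its contribution to the total time is (cost of one test) × N³/6. A quick sketch confirming that frequency count (helper names are ours):

```java
public class TripleCount {
    // Count how often the innermost if-test executes for input size N.
    public static long frequency(int N) {
        long count = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++)
                for (int k = j + 1; k < N; k++)
                    count++;
        return count;
    }

    // Closed form: N choose 3 = N(N-1)(N-2)/6.
    public static long choose3(long N) {
        return N * (N - 1) * (N - 2) / 6;
    }

    public static void main(String[] args) {
        // Both print 161700 for N = 100.
        System.out.println(frequency(100));
        System.out.println(choose3(100));
    }
}
```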


Cost of basic operations
Challenge. How to estimate constants.
operation example nanoseconds †

integer add a+b 2.1

integer multiply a*b 2.4

integer divide a/b 5.4

floating-point add a+b 4.6

floating-point multiply a*b 4.2

floating-point divide a/b 13.5

sine Math.sin(theta) 91.3

arctangent Math.atan2(y, x) 129

... ... ...


† Running OS X on Macbook Pro 2.2GHz with 2GB RAM
Cost of basic operations
Observation. Most primitive operations take constant time.

operation example nanoseconds †

variable declaration int a c1

assignment statement a=b c2

integer compare a<b c3

array element access a[i] c4

array length a.length c5

1D array allocation new int[N] c6·N

2D array allocation new int[N][N] c7·N²

... ... ...

Caveat. Non-primitive operations often take more than constant time.

novice mistake: abusive string concatenation


Pseudo Code Notation
Expressions: use standard mathematical symbols
• use ← for assignment
• use = for the equality relationship
Method Declarations: Algorithm name(param1, param2)

Programming Constructs:
• decision structures: if ... then ... [else ..]
• while-loops while ... do
• repeat-loops: repeat ... until ...
• for-loop: for ... do
• array indexing: A[i]
Methods
• calls: object method(args)
• returns: return value
Use comments /* this is a comment */ or // this is also a comment
Instructions have to be basic enough and feasible!
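As an illustration of these conventions, a simple algorithm (finding the maximum of an array — a standard textbook example, not taken from these slides) might be written:

```
Algorithm arrayMax(A, n)
    // Return the largest element of array A of n integers
    currentMax ← A[0]
    for i ← 1 to n - 1 do
        if currentMax < A[i] then
            currentMax ← A[i]
    return currentMax
```

Each line is a basic, feasible instruction: one assignment, one comparison, or one array access.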
Asymptotic Notations
• Categorize algorithms based on asymptotic growth rate e.g.
linear, quadratic, polynomial, exponential
• Ignore small constant and small inputs
• Estimate upper bound and lower bound on growth rate of time
complexity function
• Describe running time of algorithm as n grows to ∞.
• Describes behavior of function within the limit.
Limitations
• Not always useful for analysis on fixed-size inputs.
• All results are for sufficiently large inputs.
Asymptotic Notation
Goal: to simplify analysis by getting rid of unneeded information
• like “rounding” 1,000,001≈1,000,000

We want to say in a formal way 3n² ≈ n²


Asymptotic Notations
Θ, O, Ω, o, ω
Θ to mean "order exactly"
O to mean "order at most"
Ω to mean "order at least"
o to mean "strictly lower order" (an upper bound that is not tight)
ω to mean "strictly higher order" (a lower bound that is not tight)

Each defines a set of functions which, in practice, is used to compare
the growth rates of two functions.
Big-Oh Notation (O)
For a given function g(n) ≥ 0, O(g(n)) denotes the set of functions

O(g(n)) = { f(n) : there exist positive constants c and n0
            such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

f(n) = O(g(n)) OR f(n) ∈ O(g(n)) means that function
g(n) is an asymptotic upper bound for f(n).

Intuitively:
The set of all functions whose rate of growth is the same as or lower
than that of g(n).
Big-Oh Notation (O)

f(n) ∈ O(g(n))

g(n) is an asymptotic upper bound for f(n).


Big-Oh Notation (O) - Examples
Example 1: Prove that 2n² ∈ O(n³)
Proof:
We have f(n) = 2n², and g(n) = n³
Is f(n) ∈ O(g(n))?
We have to find constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0

f(n) ≤ c·g(n)
⇒ 2n² ≤ c·n³
⇒ 2 ≤ c·n
This holds if we take c = 1 and n0 = 2, OR c = 2 and n0 = 1;
then
2n² ≤ c·n³
Hence f(n) ∈ O(g(n)) with c = 1 and n0 = 2
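The chosen constants can also be spot-checked numerically. This is a sanity check over a finite range, not a proof; the class name is illustrative.

```java
public class BigOhCheck {
    // Check 2n^2 <= c*n^3 with c = 1 from n0 = 2 up to nMax.
    public static boolean holds(int nMax) {
        for (long n = 2; n <= nMax; n++)
            if (2 * n * n > n * n * n)      // would violate f(n) <= c*g(n)
                return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(1000));    // prints true
    }
}
```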
Big-Oh Notation (O) - Examples
Example 2: Prove that n² ∈ O(n²)
Proof:
We have f(n) = n², and g(n) = n²
Is f(n) ∈ O(g(n))?
We have to find constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0

f(n) ≤ c·g(n) ⇒ n² ≤ c·n² ⇒ 1 ≤ c

This holds if we take c = 1 and n0 = 1; then
n² ≤ c·n²
Hence f(n) ∈ O(g(n)) with c = 1 and n0 = 1
Big-Oh Notation (O) - Examples
Example 3: Prove that (1/3)n² − 3n ∈ O(n²)
Proof:
We have f(n) = (1/3)n² − 3n, and g(n) = n²
To show that f(n) ∈ O(g(n)) we have to find constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0

(1/3)n² − 3n ≤ c·n² if c ≥ 1/3 − 3/n, which holds for c = 1/3 and n ≥ 1

Hence f(n) ∈ O(g(n)) with c = 1/3 and n0 = 1

Big-Oh Notation (O) - Examples
Example 4: Prove that n³ ∉ O(n²)
Proof:
We have f(n) = n³, and g(n) = n²
Suppose f(n) ∈ O(g(n)). Then there exist constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0

0 ≤ n³ ≤ c·n²
⇒ n ≤ c
Since c is a fixed constant while n grows without bound, n ≤ c cannot
hold for all n ≥ n0.
Hence our assumption is wrong, and n³ ≤ c·n² for all n ≥ n0 is not true for
any combination of c and n0.
Therefore, f(n) ∉ O(g(n))
Some More Examples
1. n² + n³ = O(n⁴)
2. n²/log n = O(n²)
3. 5n + log n = O(n)
4. n·log n = O(n^100)
5. 2^n · n^100 = O(3^n)
6. 3^n = O(n!)
7. n + 1 = O(n)
8. 2^(n+1) = O(2^n)
9. (n+1)! = O(n·n!)
10. 1 + c + c² + … + c^n = O(c^n) for c > 1
11. 1 + c + c² + … + c^n = O(1) for 0 < c < 1
Big-Omega Notation (Ω) – Lower Bounds
For a given function g(n), Ω(g(n)) denotes the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n0
            such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

f(n) = Ω(g(n)) OR f(n) ∈ Ω(g(n)) means that function
g(n) is an asymptotic lower bound for f(n).

Intuitively:
The set of all functions whose rate of growth is the same as or higher
than that of g(n).
Big-Omega Notation (𝛀)

f(n) ∈ Ω(g(n))

g(n) is an asymptotic lower bound for f(n).
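As a quick numeric illustration of the definition, one can spot-check the standard fact that n³ ∈ Ω(n²) with c = 1 and n0 = 1. As before, this is a finite-range sanity check rather than a proof, and the class name is illustrative.

```java
public class BigOmegaCheck {
    // Check 0 <= c*n^2 <= n^3 with c = 1 from n0 = 1 up to nMax.
    public static boolean holds(int nMax) {
        for (long n = 1; n <= nMax; n++)
            if (n * n > n * n * n)      // c*g(n) > f(n) would violate the lower bound
                return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(1000));    // prints true
    }
}
```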


Θ-Notation
Given any function g(n), we define Θ(g(n)) to be the set of
functions that are asymptotically equivalent to g(n). Formally:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0
            such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

This is written as f(n) ∈ Θ(g(n)), i.e., f(n) and g(n) are
asymptotically equivalent.

This means that they have essentially the same growth rate for
large n.
Theta Notation

f(n) ∈ Θ(g(n))

We say that g(n) is an asymptotically tight bound for f(n).


Theta Notation
Example
Consider the function

f(n) = 8n² + 2n − 3

Our informal rule of keeping the largest term and ignoring the constants
suggests that f(n) ∈ Θ(n²). Let's see why this bears out formally.
We need to show two things for f(n) = 8n² + 2n − 3:
1) Lower bound: f(n) = 8n² + 2n − 3 grows asymptotically at least as
fast as n²,
2) Upper bound: f(n) grows no faster asymptotically than n²
Theta Notation
Example (…continued)

f(n) = 8n² + 2n − 3
1) Lower bound: f(n) grows asymptotically at least as fast as n².
For this, we need to show that there exist positive constants c1 and n0
such that f(n) ≥ c1·n² for all n ≥ n0.
Consider the reasoning:
f(n) = 8n² + 2n − 3
     ≥ 8n² − 3 = 7n² + (n² − 3) ≥ 7n²
Thus c1 = 7. We implicitly assumed that 2n ≥ 0 and n² − 3 ≥ 0.
These are not true for all n, but if n ≥ √3, then both are true.
Therefore, selecting n0 ≥ √3, we have f(n) ≥ c1·n² for all n ≥ n0.
Theta Notation
Example (…continued)

f(n) = 8n² + 2n − 3
2) Upper bound: f(n) grows asymptotically no faster than n².
For this, we need to show that there exist positive constants c2 and n0
such that f(n) ≤ c2·n² for all n ≥ n0.
Consider the reasoning:
f(n) = 8n² + 2n − 3
     ≤ 8n² + 2n
     ≤ 8n² + 2n²
     = 10n²
Thus c2 = 10. We implicitly assumed that 2n ≤ 2n². This is not true for
all n, but it is true for n ≥ 1.
Therefore, selecting n0 ≥ 1, we have f(n) ≤ c2·n² for all n ≥ n0.
Theta Notation
Example (…continued)

f(n) = 8n² + 2n − 3
From the lower bound we have n0 ≥ √3.
From the upper bound we have n0 ≥ 1.

Combining the two, we let n0 be the larger of the two, i.e., n0 ≥ √3.
In conclusion, if we let c1 = 7, c2 = 10 and n0 = √3,
we have

7n² ≤ 8n² + 2n − 3 ≤ 10n² for all n ≥ √3

Thus, we have established that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0
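The final sandwich inequality can likewise be spot-checked for integer n ≥ 2 (the smallest integer above √3). Again a sanity check over a finite range, not a proof; the class name is ours.

```java
public class ThetaCheck {
    // Check 7n^2 <= 8n^2 + 2n - 3 <= 10n^2 for integer n from 2 up to nMax.
    public static boolean sandwiched(int nMax) {
        for (long n = 2; n <= nMax; n++) {
            long f = 8 * n * n + 2 * n - 3;
            if (f < 7 * n * n || f > 10 * n * n)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sandwiched(100000));   // prints true
    }
}
```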
Usefulness of Notation
It is not always possible to determine the behaviour of an algorithm
using Θ-notation.
For example, given a problem with n inputs, we may have an
algorithm that solves it in c1·n² time when n is even and c2·n time
when n is odd. OR
We may prove that an algorithm never uses more than c1·n² time
and never less than c2·n time.
In either case we can claim neither Θ(n) nor Θ(n²) as the order
of the time usage of the algorithm.
Big-O and Ω notation allow us to give at least partial
information.
