Faculty of Computers and
Information Technology
CS/IT 341
ALGORITHMS ANALYSIS AND DESIGN
Lecture 3
Asymptotic Growth Rate (1)
• Changing the hardware/software environment affects T(n) by a constant factor, but does not alter the growth rate of T(n).
• The exact complexity function of an algorithm is usually complicated. What matters for analysis is how the complexity function grows with increasing input size n; the growth rate is a suitable measure for comparing algorithms.
• Asymptotic notations such as big-O, big-Omega, and big-Theta are used to describe complexity, because different implementations of an algorithm may differ in efficiency by constant factors.
• The big-Oh, O(), notation gives an upper bound on the growth rate of a function.
• The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the growth rate of g(n).
• We can use the big-Oh notation to rank functions according to their growth rates.
Asymptotic Growth Rate (2)
Two reasons why we are interested in asymptotic growth rates:
• Practical purposes: for large problems, when we expect to have big computational requirements.
• Theoretical purposes: concentrating on growth rates frees us from issues that are unimportant asymptotically:
  • fixed costs (e.g. switching the computer on!), which may dominate for a small problem size but are largely irrelevant for large ones
  • machine and implementation details
• The growth rate is a compact and easy-to-understand function.
Properties of Growth-Rate Functions
Example: T(n) = 5n + 3
Estimated running time for different values of n:
n = 10 => 53 steps
n = 100 => 503 steps
n = 1,000 => 5,003 steps
n = 1,000,000 => 5,000,003 steps
What about the “+3” and the “5” in 5n + 3?
• As n gets large, the “+3” becomes insignificant.
• The “5” is inaccurate anyway, since different operations require varying amounts of time, so it carries no significant information.
As n grows, the number of steps grows in linear proportion to n for this function “Sum”.
What is fundamental is that the time is linear in n.
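The step-count table above is just T(n) = 5n + 3 evaluated at each n, which is trivial to verify (a Python sketch):

```python
def T(n):
    # step count for the "Sum" function from the slide
    return 5 * n + 3

# reproduces the table: 53, 503, 5003, 5000003
assert [T(n) for n in (10, 100, 1000, 1_000_000)] == [53, 503, 5003, 5000003]
```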
Important Functions
These functions often appear in algorithm analysis:
Performance Classification

f(n)     Classification
1        Constant: run time is fixed and does not depend on n. Most instructions are executed once, or only a few times, regardless of the amount of information being processed.
log n    Logarithmic: when n increases, so does run time, but much more slowly. Common in programs which solve large problems by transforming them into smaller problems.
n        Linear: run time varies directly with n. Typically, a small amount of processing is done on each element.
n log n  When n doubles, run time slightly more than doubles. Common in programs which break a problem down into smaller sub-problems, solve them independently, then combine the solutions.
n^2      Quadratic: when n doubles, run time increases fourfold. Practical only for small problems; typically the program processes all pairs of inputs (e.g. in a doubly nested loop).
n^3      Cubic: when n doubles, run time increases eightfold.
2^n      Exponential: when n doubles, run time squares. This is often the result of a natural, “brute force” solution.
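The doubling behavior claimed in the table can be checked directly (a quick Python sketch):

```python
n = 10

assert (2 * n)**2 == 4 * n**2      # quadratic: doubling n quadruples the work
assert (2 * n)**3 == 8 * n**3      # cubic: doubling n gives an eightfold increase
assert 2**(2 * n) == (2**n)**2     # exponential: doubling n squares the work
```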
A Comparison of Growth-Rate Functions
(Figures comparing the growth-rate functions are omitted here.)
Size does matter: what happens if we increase the input size N?
Approaches to Asymptotic Growth Rate
(Nested Loops)
i) Top-Down Approach
ii) Bottom-Up Approach
Example: Top-down vs. Bottom-up

for i ← 1 to n do
    for j ← 1 to m do
        a ← b

Bottom-up solution (start from the inner loop):
    Inner loop:
        for j ← 1 to m do .......... m
            a ← b .......... m
        Inner(m) = m + m = O(m)
    Outer loop:
        for i ← 1 to n do .......... n
            Inner(m) .......... O(m) per iteration
    T(n) = n + Inner(m)·n = n + O(m)·n
         = O(nm)
         = O(n^2) if m >= n

Top-down solution (count each line over the whole run):
    for i ← 1 to n do .......... n
        for j ← 1 to m do .......... nm
            a ← b .......... nm
    T(n) = n + 2mn
         = O(nm)
         = O(n^2) if m >= n
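The top-down tally n + 2mn can be checked by instrumenting the loop nest (a Python sketch; the bounds n and m are arbitrary):

```python
def count_steps(n, m):
    """Count executions of each line of the nested-loop example."""
    outer = inner = body = 0
    for i in range(1, n + 1):
        outer += 1          # outer for line: n times
        for j in range(1, m + 1):
            inner += 1      # inner for line: n*m times
            body += 1       # a <- b: n*m times
    return outer + inner + body   # n + 2*n*m

assert count_steps(4, 5) == 4 + 2 * 4 * 5
```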
Example: Loop Analysis (Bottom-Up Approach)

Step 1: innermost while loop (lines 5 and 6):
    while(j) = Σ_{k=0}^{j} 1 = j + 1

Step 2: inner for loop (lines 3 and 4):
    for(i) = Σ_{j=1}^{2i} while(j) = Σ_{j=1}^{2i} (j + 1)
           = [2i·(2i + 1)/2] + 2i        (quadratic series)
           = 2i^2 + 3i

Step 3: outer for loop (line 1):
    T(n) = Σ_{i=1}^{n} for(i) = Σ_{i=1}^{n} (2i^2 + 3i)
         = 2·Σ_{i=1}^{n} i^2 + 3·Σ_{i=1}^{n} i
         = 2·[(2n^3 + 3n^2 + n)/6] + 3·[n(n + 1)/2]
         = O(n^3)
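The analyzed code itself is not reproduced on the slide, but a loop nest matching the sums above (for i = 1..n, for j = 1..2i, while k ≤ j with k starting at 0) can be simulated to confirm the closed form — a Python sketch under that assumption:

```python
def count_ops(n):
    """Count iterations of the innermost while loop."""
    total = 0
    for i in range(1, n + 1):
        for j in range(1, 2 * i + 1):
            k = 0
            while k <= j:     # runs j + 1 times
                total += 1
                k += 1
    return total

def closed_form(n):
    # 2*[(2n^3 + 3n^2 + n)/6] + 3*[n(n+1)/2], from the derivation above
    return 2 * (2 * n**3 + 3 * n**2 + n) // 6 + 3 * n * (n + 1) // 2

assert all(count_ops(n) == closed_form(n) for n in range(1, 8))
```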
Data Structures Review
• A data structure is a logical or mathematical model of a particular organization of data.
• Data structures let the input and output be represented in a way that can be handled efficiently and effectively.
• Data may be organized in different ways:
  • Array
  • Linked list
  • Stack
  • Queue
  • Graph/Tree
Arrays

Linear Array (Customer):
1  Jamal
2  Sana
3  Saeed
4  Farooq
5  Salman
6  Danial

Two-Dimensional Array (Customer, Salesperson):
Jamal   Tony
Sana    Tony
Saeed   Nadia
Farooq  Owais
Salman  Owais
Danial  Nadia
Example: Linear Search Algorithm
• Given a linear array A containing n elements, locate the position of an item ‘x’ or indicate that ‘x’ does not appear in A.
• The linear search algorithm solves this problem by comparing ‘x’, one by one, with each element in A. That is, we compare ‘x’ with A[1], then A[2], and so on, until we find the location of ‘x’.
LinearSearch(A, x)                 Number of times executed
    i ← 1                          1
    while (i ≤ n and A[i] ≠ x)     n
        i ← i + 1                  n
    if i ≤ n                       1
        return true                1
    else
        return false               1

T(n) = 2n + 3
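The pseudocode translates directly into Python (a sketch; 0-based indexing replaces the slide's 1-based arrays):

```python
def linear_search(A, x):
    """Return True if x appears in A, scanning left to right."""
    i = 0
    while i < len(A) and A[i] != x:
        i += 1
    return i < len(A)

assert linear_search([4, 8, 15, 16], 15) is True
assert linear_search([4, 8, 15, 16], 23) is False
```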
Best/Worst Case
Best case: ‘x’ is located in the first position of the array, so the loop condition is tested only once and the loop body never executes:
T(n) = 1 + 1 + 0 + 1 + 1
     = 4
     = O(1)
Worst case: ‘x’ is located in the last position of the array or is not there at all:
T(n) = 1 + n + n + 1 + 1
     = 2n + 3
     = O(n)
Average Case
Assume that it is equally likely for ‘x’ to appear at any position in array A. Then the number of comparisons can be any of the numbers 1, 2, 3, ..., n, and each occurs with probability p = 1/n.
T(n) = 1·(1/n) + 2·(1/n) + ... + n·(1/n)
     = (1 + 2 + 3 + ... + n)·(1/n)
     = [n(n + 1)/2]·(1/n) = (n + 1)/2
     = O(n)
This agrees with our intuitive feeling that the average number of comparisons needed to find the location of ‘x’ is approximately equal to half the number of elements in the list A.
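The (n + 1)/2 average is easy to verify numerically (Python sketch):

```python
n = 100
# average comparisons over all n equally likely positions of x
avg = sum(range(1, n + 1)) / n
assert avg == (n + 1) / 2   # 50.5 for n = 100
```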
Linked List

Head → A → B → C

• A series of connected nodes
• Each node contains a piece of data and a pointer to the next node

Operation   Average Case   Worst Case
Insert      O(1)           O(1)
Delete      O(1)           O(1)
Search      O(N)           O(N)
Stack

• LIFO (last in, first out)
• Implemented using a linked list or an array

Operation   Average Case   Worst Case
Push        O(1)           O(1)
Pop         O(1)           O(1)
IsEmpty     O(1)           O(1)
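A minimal Python sketch of a stack backed by a list, with the tail of the list as the top:

```python
class Stack:
    def __init__(self):
        self._items = []          # list tail is the top of the stack

    def push(self, x):
        self._items.append(x)     # O(1) amortized

    def pop(self):
        return self._items.pop()  # O(1), removes the most recent item (LIFO)

    def is_empty(self):
        return not self._items    # O(1)

s = Stack()
s.push(1); s.push(2); s.push(3)
assert s.pop() == 3               # last in, first out
assert s.pop() == 2
assert not s.is_empty()
```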
Queue

• FIFO (first in, first out)
• Implemented using a linked list or an array

Operation   Average Case   Worst Case
Enqueue     O(1)           O(1)
Dequeue     O(N)           O(N)

(Dequeue is O(N) for a simple array implementation that shifts the remaining elements forward; with a linked list or a circular buffer it is O(1).)
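With Python's collections.deque, both queue operations are O(1) at either end — a short sketch:

```python
from collections import deque

q = deque()
q.append(1); q.append(2); q.append(3)   # enqueue at the tail, O(1)
assert q.popleft() == 1                  # dequeue at the head, O(1), FIFO order
assert q.popleft() == 2
assert list(q) == [3]
```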
Asymptotic Algorithm Analysis
• The asymptotic analysis of an algorithm determines the running time in big-Oh notation.
• To perform the asymptotic analysis:
  • We find the worst-case number of primitive operations executed as a function of the input size n
  • We express this function with big-Oh notation
• Example: an algorithm executes T(n) = 2n^2 + n elementary operations. We say that the algorithm runs in O(n^2) time.
• The growth rate is not affected by constant factors or lower-order terms, so these terms can be dropped.
• The 2n^2 + n time bound is said to "grow asymptotically" like n^2.
• This gives us an approximation of the complexity of the algorithm and ignores lots of (machine-dependent) details.
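Dropping constants and lower-order terms is justified because T(n)/n^2 approaches a constant as n grows (a quick Python check):

```python
def T(n):
    return 2 * n**2 + n

# the ratio T(n)/n^2 tends to the leading coefficient 2
ratios = [T(n) / n**2 for n in (10, 1_000, 100_000)]
assert ratios[0] == 2.1
assert abs(ratios[-1] - 2) < 1e-4
```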
Algorithm Efficiency
Measuring the efficiency of an algorithm:
• do its analysis, i.e. determine its growth rate
• compare efficiencies of different algorithms for the same problem
As inputs get larger, any algorithm of a smaller order will be more efficient than an algorithm of a larger order.
(Figure: time in steps plotted against input size, with 0.05N^2 = O(N^2) overtaking 3N = O(N) at the crossover point N = 60.)
Running Time vs. Time Complexity
• Running time is how long it takes a program to run.
• Time complexity is a description of the asymptotic behavior of the running time as the input size tends to infinity.
• The exact running time might be 2036n^2 + 17453n + 18464, but you can say that the running time "is" O(n^2), because that is the formal (idiomatic) way to describe complexity classes in big-O notation.
• Strictly, the running time is not a complexity class: it is either a duration, or a function which gives you the duration. "Being O(n^2)" is a mathematical property of that function, not a full characterization of it.
Example:
Running Time to Sort an Array of 2000 Integers

Computer Type   Desktop   Server   Mainframe   Supercomputer
Time (sec)      51.915    11.508   0.431       0.087

Running time by array size:

Array Size   Desktop   Server
125          12.5      2.8
250          49.3      11.0
500          195.8     43.4
1000         780.3     172.9
2000         3114.9    690.5
Analysis of Results
f(n) = a·n^2 + b·n + c, where a = 0.0001724, b = 0.0004, and c = 0.1

n      f(n)    a·n^2   % of n^2 term
125    2.8     2.7     94.7
250    11.0    10.8    98.2
500    43.4    43.1    99.3
1000   172.9   172.4   99.7
2000   690.5   689.6   99.9
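The fitted quadratic can be evaluated to reproduce the table and show the n^2 term dominating (a Python sketch):

```python
a, b, c = 0.0001724, 0.0004, 0.1

def f(n):
    return a * n**2 + b * n + c

for n in (125, 250, 500, 1000, 2000):
    share = a * n**2 / f(n) * 100   # percentage contributed by the n^2 term
    print(n, round(f(n), 1), round(share, 1))

assert round(f(2000), 1) == 690.5
assert round(a * 2000**2 / f(2000) * 100, 1) == 99.9
```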
Complexity Examples (1)
What does the following algorithm compute?

procedure who_knows(a1, a2, …, an: integers)
    m := 0
    for i := 1 to n − 1
        for j := i + 1 to n
            if |ai − aj| > m then m := |ai − aj|
{m is the maximum difference between any two numbers in the input sequence}

Comparisons: (n − 1) + (n − 2) + (n − 3) + … + 1
           = n(n − 1)/2 = 0.5n^2 − 0.5n
Time complexity is O(n^2).
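A direct Python transcription, with a counter confirming the n(n − 1)/2 comparison count (a sketch):

```python
def who_knows(a):
    """Maximum difference between any two numbers, checking all pairs."""
    n = len(a)
    m = 0
    comparisons = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1
            if abs(a[i] - a[j]) > m:
                m = abs(a[i] - a[j])
    return m, comparisons

m, comps = who_knows([3, 9, 1, 7])
assert m == 8                      # |9 - 1|
assert comps == 4 * 3 // 2         # n(n - 1)/2 pairs for n = 4
```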
Complexity Examples (2)
Another algorithm solving the same problem:

procedure max_diff(a1, a2, …, an: integers)
    min := a1
    max := a1
    for i := 2 to n
        if ai < min then min := ai
        else if ai > max then max := ai
    m := max − min

Comparisons: at most 2(n − 1)
Time complexity is O(n).
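The linear version in Python, checked against a brute-force pairwise scan (both are hypothetical transcriptions of the pseudocode above):

```python
from itertools import combinations

def max_diff(a):
    """Maximum difference in one pass: track min and max, return max - min."""
    lo = hi = a[0]
    for x in a[1:]:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return hi - lo

data = [3, 9, 1, 7]
brute = max(abs(x - y) for x, y in combinations(data, 2))
assert max_diff(data) == brute == 8
```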
Model of Computation
Drawbacks:
• the assumption that each basic operation (adding, multiplying, comparing, etc.) takes constant time is a simplification

Finally, what about our model?
• With all these weaknesses, our model is not so bad, because:
  • we want to compare algorithms, not give an absolute analysis of any one algorithm
  • we have to deal with large inputs, not small ones
  • the model seems to work well for describing the computational power of modern non-parallel machines

Can we do an exact measure of efficiency?
• An exact, rather than asymptotic, measure of efficiency can sometimes be computed, but it usually requires certain assumptions concerning the implementation.
Prerequisite Review: Mathematics
Exponent Review
x^a · x^b = x^(a+b)
x^a / x^b = x^(a−b)
(x^a)^b = x^(ab)
x^0 = 1
x^n + x^n = 2x^n
2^n + 2^n = 2·2^n = 2^(n+1)
(xy)^a = x^a · y^a
x^(−m) = 1/x^m
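These identities are easy to spot-check numerically (a Python sketch with arbitrary sample values):

```python
x, y, a, b, n, m = 2, 5, 4, 2, 6, 3

assert x**a * x**b == x**(a + b)      # 2^4 · 2^2 = 2^6
assert x**a // x**b == x**(a - b)     # 2^4 / 2^2 = 2^2
assert (x**a)**b == x**(a * b)        # (2^4)^2 = 2^8
assert x**0 == 1
assert x**n + x**n == 2 * x**n
assert 2**n + 2**n == 2**(n + 1)      # 2^6 + 2^6 = 2^7
assert (x * y)**a == x**a * y**a      # 10^4 = 2^4 · 5^4
assert x**-m == 1 / x**m              # 2^-3 = 1/8
```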
Summation Review

Σ_{i=1}^{N} i — the sum of the numbers from 1 to N, e.g. 1 + 2 + 3 + … + N

Σ_{i=1}^{N} x_i^2 — the sum of the squares of the list elements.
Suppose our list has 5 numbers, and they are 1, 3, 2, 5, 6.
Then the resulting summation is 1^2 + 3^2 + 2^2 + 5^2 + 6^2 = 75.

The first constant rule:
Σ_{i=x}^{y} a = (y − x + 1)·a, or Σ_{i=1}^{N} a = N·a
e.g. Σ_{i=0}^{6} 2 = 14

The second constant rule:
Σ_{i=1}^{N} a·x_i = a · Σ_{i=1}^{N} x_i
e.g. Σ_{i=1}^{N} 6·x_i·y = 6y · Σ_{i=1}^{N} x_i

The distributive rule:
Σ_{i=1}^{N} (x_i + y_i) = Σ_{i=1}^{N} x_i + Σ_{i=1}^{N} y_i
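Numerical spot-checks for the rules above (a Python sketch; the sample lists are arbitrary):

```python
N = 5
xs = [1, 3, 2, 5, 6]
ys = [4, 0, 2, 7, 1]
a = 6

# first constant rule: summing a constant N times gives N*a
assert sum(a for _ in range(N)) == N * a

# second constant rule: a constant factors out of a summation
assert sum(a * x for x in xs) == a * sum(xs)

# distributive rule: a sum of sums splits term by term
assert sum(x + y for x, y in zip(xs, ys)) == sum(xs) + sum(ys)

# the squared-terms example from above
assert sum(x**2 for x in xs) == 75
```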
Series Review

1 + x + x^2 + … + x^n = (1 − x^(n+1)) / (1 − x), for x ≠ 1

1 + x + x^2 + … = 1 / (1 − x), for |x| < 1
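Both formulas check out numerically (Python; the infinite series is truncated, so the comparison uses a small tolerance):

```python
x, n = 0.5, 10

# finite geometric series vs. its closed form
finite = sum(x**k for k in range(n + 1))          # 1 + x + ... + x^n
assert abs(finite - (1 - x**(n + 1)) / (1 - x)) < 1e-12

# truncated infinite series vs. 1/(1 - x), valid since |x| < 1
approx_infinite = sum(x**k for k in range(200))
assert abs(approx_infinite - 1 / (1 - x)) < 1e-12
```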
The End
Questions?