Analysis of Algorithms
• An algorithm is a finite sequence of steps that a computer
follows to solve a problem.
• When you face a problem, the first thing you need to do
is to find an algorithm that solves it.
• Once you determine that the algorithm is correct, the next step is
to find out the resources (time and space) the algorithm will
require. This is known as algorithm analysis.
• Thus, algorithm analysis measures the efficiency of the
algorithm.
• If an algorithm requires more resources than your
computer has, it is useless.
What is efficiency?
• Efficiency can be measured by the time or space the
algorithm uses.
• An efficient algorithm is one that has
– Small running time (time complexity): The time taken
by a program is the sum of its compile time and its run
time. The time complexity of an algorithm is given by the
number of steps the algorithm takes to compute the
function it was written for.
– Small space used (space complexity): The space
complexity of an algorithm is the amount of memory it
needs to run.
We will consider only time complexity.
What is a good algorithm?
• A good algorithm is like a sharp knife: it does exactly what it is
supposed to do with a minimum amount of applied effort.
How to Measure the Running Time of an Algorithm?
1. Experimental study (run programs and time them; see the sketch after this list)
2. Growth function: asymptotic algorithm analysis
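• A minimal sketch of the experimental approach in C++, timing a function with std::chrono; the workload sumUpTo and the input sizes are illustrative assumptions:

#include <chrono>
#include <iostream>

// Hypothetical workload to time: sums 1..n with one addition per step, so O(n).
long long sumUpTo(long long n) {
    long long sum = 0;
    for (long long i = 1; i <= n; i++)
        sum += i;
    return sum;
}

int main() {
    for (long long n : {100000LL, 1000000LL, 10000000LL}) {
        auto start = std::chrono::steady_clock::now();
        volatile long long result = sumUpTo(n);  // volatile keeps the call from being optimized away
        auto stop = std::chrono::steady_clock::now();
        std::cout << "n = " << n << ": "
                  << std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count()
                  << " us (result " << result << ")\n";
    }
    return 0;
}

• The numbers you measure depend on the machine, language, and compiler, which is exactly what the factors below describe, and why asymptotic analysis is used instead.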
Factors affecting running time
• Speed: Speed of the machine running the program.
• Language: Language in which the program was written. For example,
programs written in assembly language generally run faster than those
written in C or C++, which in turn tend to run faster than those written in
Java.
• Compiler: Efficiency of the compiler that created the program.
• The size of the input: processing 1000 records will take more time than
processing 10 records.
• Organization of the input: if the item we are searching for is at the top
of the list, it will take less time to find it than if it is at the bottom.
Types of Algorithm Analysis
Worst-case (usually):
• T(n) = maximum time of the algorithm on any input of
size n.
Average-case (sometimes):
• T(n) = expected time of the algorithm over all inputs of
size n.
• Needs an assumption about the statistical distribution of inputs.
Best-case:
• Cheat with a slow algorithm that works fast on some
input (see the linear-search sketch below).
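• Illustration: a minimal linear-search sketch (the function and names are assumptions for this example):

// Returns the index of key in a[0..n-1], or -1 if key is absent.
int linearSearch(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   // stop as soon as the key is found
    return -1;
}

// Best case:    key == a[0]                 -> 1 comparison,         O(1)
// Worst case:   key is last or absent       -> n comparisons,        O(n)
// Average case: key equally likely anywhere -> about n/2 comparisons, O(n)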
General Rules for Estimation
• Simple statements: We assume the statement does not
contain a function call. It takes a fixed amount of time to execute.
• Sequence of simple statements: It takes an amount of
execution time equal to the sum of the execution times of the
individual statements. If the execution times of the individual
statements are known, they are simply added up.
• Decision: For estimating the performance, the then and else
parts of the algorithm are considered independently. The
performance estimate of the decision is taken to be the
larger of the two individual estimates.
General Rules for Estimation
• Loops: The running time of a loop is at most the running time
of the statements inside of that loop times the number of
iterations.
Ex: for (i = 0; i < N; i++)
This loop will run N times.
• Nested loops: The running time of a statement in the
innermost loop is its own running time
multiplied by the product of the sizes of all the loops.
Ex:
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        sequence of simple statements
    }
}
The inner statements will run N*N times.
The Running Time of Algorithms
• Each operation in an algorithm (or a program) has a cost:
each operation takes a certain amount of time.
A sequence of two operations:
Total Cost = c1 + c2
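• For instance, a hypothetical two-statement sequence (count and sum are illustrative variables), in the Cost/Times layout used on the following slides:

                     Cost   Times
count = count + 1;   c1     1
sum = sum + count;   c2     1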
The Running Time of Algorithms (cont.)
Example: Simple If-Statement
                    Cost   Times
if (n < 0)          c1     1
    absval = -n;    c2     1
else
    absval = n;     c3     1

Total Cost = c1 + max(c2, c3)   (the larger branch, per the decision rule)
The Running Time of Algorithms (cont.)
Example: Simple Loop
                      Cost   Times
i = 1;                c1     1
sum = 0;              c2     1
while (i <= n) {      c3     n+1
    i = i + 1;        c4     n
    sum = sum + i;    c5     n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
The time required for this algorithm is proportional to n.
The Running Time of Algorithms (cont.)
Example: Nested Loop
                          Cost   Times
i = 1;                    c1     1
sum = 0;                  c2     1
while (i <= n) {          c3     n+1
    j = 1;                c4     n
    while (j <= n) {      c5     n*(n+1)
        sum = sum + i;    c6     n*n
        j = j + 1;        c7     n*n
    }
    i = i + 1;            c8     n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
The time required for this algorithm is proportional to n^2.
Big-O notation
• It is the most commonly used notation for specifying asymptotic
complexity, i.e., the rate of growth of a function.
• It refers to an upper bound of functions.
• For example:
// quadratic
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        // do constant-time stuff
    }
}
• Ans: The outer loop runs n times and the inner loop runs i
times on each pass, so the body executes
0 + 1 + ... + (n-1) = n(n-1)/2 times. O(n^2 + n) is not an
acceptable answer, since we must drop the lower-order term.
The upper bound is O(n^2). Why? Because n^2
has the largest growth rate.
Example 6
for (int i = 0; i < n; i++) {
    for (int j = 0; j < 2; j++) {
        // do stuff
    }
}
• Ans: The outer loop is n, the inner loop is 2, thus we
have 2n; dropping the constant gives O(n).
Example 7
for (int i = 1; i < n; i *= 2) {
    cout << i << endl;
}
• Ans: This is not n iterations: instead of being
incremented, i is doubled each pass, so it reaches n after
about log2(n) iterations (e.g., for n = 16, i takes the
values 1, 2, 4, 8). Thus the loop is O(log n).
Example 8
for (int i = 0; i < n; i++) {         // linear
    for (int j = 1; j < n; j *= 2) {  // log(n)
        // do constant-time stuff
    }
}
• Ans: O(n * log(n))
Typical Complexities
Complexity    Notation   Description
constant      O(1)       Constant number of operations, not depending on the input data size; e.g., n = 1 000 000 → 1-2 operations
logarithmic   O(log n)   Number of operations proportional to log2(n), where n is the size of the input data; e.g., n = 1 000 000 000 → 30 operations
linear        O(n)       Number of operations proportional to the input data size; e.g., n = 10 000 → 5 000 operations
Typical Complexities
Complexity    Notation                Description
quadratic     O(n^2)                  Number of operations proportional to the square of the size of the input data; e.g., n = 500 → 250 000 operations
cubic         O(n^3)                  Number of operations proportional to the cube of the size of the input data; e.g., n = 200 → 8 000 000 operations
exponential   O(2^n), O(k^n), O(n!)   Exponential number of operations, fast growing; e.g., n = 20 → 1 048 576 operations
Time Complexity and Speed
Complexity    10       20     50        100     1 000    10 000    100 000
O(1)          <1s      <1s    <1s       <1s     <1s      <1s       <1s
O(log(n))     <1s      <1s    <1s       <1s     <1s      <1s       <1s
O(n)          <1s      <1s    <1s       <1s     <1s      <1s       <1s
O(n*log(n))   <1s      <1s    <1s       <1s     <1s      <1s       <1s
O(n^2)        <1s      <1s    <1s       <1s     <1s      2s        3-4 min
O(n^3)        <1s      <1s    <1s       <1s     20 s     5 hours   231 days
O(2^n)        <1s      <1s    260 days  hangs   hangs    hangs     hangs
O(n!)         <1s      hangs  hangs     hangs   hangs    hangs     hangs
O(n^n)        3-4 min  hangs  hangs     hangs   hangs    hangs     hangs
Practical Examples
• O(n): printing a list of n items to the screen, looking
at each item once.
• O(log n): taking a list of items, cutting it in half
repeatedly until there's only one item left.
• O(n^2): taking a list of n items, and comparing every
item to every other item.
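• A minimal C++ sketch of the O(log n) halving idea (the starting size is an arbitrary assumption):

#include <iostream>

int main() {
    int n = 1000000;   // pretend list of a million items
    int steps = 0;
    while (n > 1) {    // cut the remaining range in half each pass
        n = n / 2;
        steps++;
    }
    std::cout << steps << " halvings\n";   // prints 19; log2(1 000 000) ≈ 19.9
    return 0;
}

• This is why O(log n) algorithms such as binary search stay fast even on huge inputs: a million items need only about 20 halvings.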
How to Determine Complexities
Example 1
• Sequence of statements:
statement 1;
statement 2;
...
statement k;
• The total time is the sum of the times of the individual
statements; if each statement takes constant time, a fixed
sequence of k statements also takes constant time, O(1).
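• Nested loops: a sketch of the two loops that the next bullet analyzes (N and M are the loop bounds):

for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        sequence of statements
    }
}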
• The outer loop executes N times. Every time the outer loop executes, the
inner loop executes M times. As a result, the statements in the inner loop
execute a total of N * M times. Thus, the complexity is O(N * M).
• In a common special case where the stopping condition of the inner loop
is j < N instead of j < M (i.e., the inner loop also executes N times), the
total complexity for the two loops is O(N^2).
Complexity Examples
int FindMaxElement(int array[], int n)   // n added: the array size was undefined
{
    int max = array[0];
    for (int i = 0; i < n; i++)
    {
        if (array[i] > max)
        {
            max = array[i];
        }
    }
    return max;
}
• Runs in O(n), where n is the number of elements in the array.
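• A brief usage sketch (the array values are hypothetical):

int data[] = {3, 41, 52, 26, 38, 57, 9, 49};
int largest = FindMaxElement(data, 8);   // one pass over n = 8 elements -> 57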
Omega notation
• It refers to a lower bound of functions.
• Example:
– 5n^2 is Ω(n) because 5n^2 ≥ 5n ≥ n for n ≥ 1.
• Example:
f(n) = n + 5n^0.5 and g(n) = n have the same rate of growth,
because
n ≤ (n + 5n^0.5) ≤ 6n for n ≥ 1
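• A compact restatement of both examples (a sketch in LaTeX; c and n_0 are the constants from the standard definitions):

\[ f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0:\ f(n) \ge c\,g(n) \text{ for all } n \ge n_0 \]
\[ 5n^2 \ge 5n \ge 1 \cdot n \text{ for } n \ge 1 \ \Rightarrow\ 5n^2 \in \Omega(n) \]
\[ 1 \cdot n \le n + 5\sqrt{n} \le 6n \text{ for } n \ge 1 \ \Rightarrow\ n + 5\sqrt{n} \in \Theta(n) \]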
Theta notation