Chapter 1 Part 2
INTRODUCTION
Pseudocode is close to English, yet precise enough for a computing agent to carry out.
Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n - 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
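As a concrete sketch (not from the original slides; the C++ function name and types are assumptions), the pseudocode translates directly into C++:

#include <cstddef>
#include <vector>

// Possible C++ rendering of the arrayMax pseudocode; assumes A is non-empty.
int arrayMax(const std::vector<int>& A) {
    int currentMax = A[0];                       // currentMax ← A[0]
    for (std::size_t i = 1; i < A.size(); ++i)   // for i ← 1 to n - 1 do
        if (A[i] > currentMax)                   // if A[i] > currentMax then
            currentMax = A[i];                   // currentMax ← A[i]
    return currentMax;                           // return currentMax
}

For example, arrayMax({5, 20, 2, 23, 8, 10, 1}) would return 23.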
PSEUDOCODE DETAILS
[Figure: a single problem may be solved by several candidate algorithms (Algorithm 4, 5, 6, 7).]
Time complexity: the amount of time taken by an algorithm to run as a function of the length of the input.
Space complexity: the amount of space or memory taken by an algorithm to run as a function of the length of the
input.
TIME/SPACE TRADE-OFF: we may have to sacrifice one at the cost of the other.
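As a sketch of this trade-off (an illustration of mine, not from the slides), computing Fibonacci numbers naively uses almost no extra memory but exponential time, while memoizing results spends O(n) extra space to bring the time down to O(n):

#include <vector>

// Time-heavy, space-light: recomputes every subproblem
// (exponential time, only the recursion stack as extra space).
long long fibSlow(int n) {
    return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
}

// Space-heavy, time-light: remembers subresults in a table
// (O(n) time, O(n) extra memory). The caller supplies memo of
// size n + 1 filled with -1, e.g. std::vector<long long> memo(n + 1, -1).
long long fibFast(int n, std::vector<long long>& memo) {
    if (n < 2) return n;
    if (memo[n] != -1) return memo[n];
    return memo[n] = fibFast(n - 1, memo) + fibFast(n - 2, memo);
}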
Efficiency of an algorithm can be analyzed at two different stages: before implementation (a priori analysis) and after implementation (a posteriori analysis).
In a priori analysis, efficiency is estimated theoretically, assuming that all other factors (for example, processor speed) are constant and have no effect on the implementation.
In a posteriori analysis, actual statistics, such as running time and space required, are collected from an implementation.
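For this "after implementation" stage, running time can be measured empirically. A minimal sketch in C++ (the loop is just a placeholder workload, not code from the slides):

#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::steady_clock::now();

    long long sum = 0;                        // placeholder workload being measured
    for (long long i = 0; i < 10000000; ++i)
        sum += i;

    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "sum = " << sum << ", elapsed = " << ms << " ms\n";
    return 0;
}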
The space complexity of an algorithm or program is the amount of memory it needs to run to completion. Analyzing space complexity is useful because:
We may be interested to know in advance whether sufficient memory is available to run the program.
It can be used to estimate the size of the largest problem that a program can solve.
The time complexity of an algorithm or a program is the amount of time it needs to run to completion.
The actual running time depends on many factors: the implementation of the algorithm, the programming language, the optimizing capabilities of the compiler used, the CPU speed, other hardware characteristics/specifications, and so on.
To measure the time complexity accurately, we have to count all sorts of operations performed in an algorithm.
If we know the time for each one of the primitive operations performed in a given computer, we can easily
compute the time taken by an algorithm to complete its execution.
Worst-case analysis: the maximum amount of time that an algorithm requires to solve a problem of size n.
This gives an upper bound for the time complexity of an algorithm.
Best-case analysis: the minimum amount of time that an algorithm requires to solve a problem of size n. The best-case behavior of an algorithm is usually not very useful.
Average-case analysis: the average amount of time that an algorithm requires to solve a problem of size n. Sometimes it is difficult to determine the average-case behavior of an algorithm.
Suppose you are given an array A, say A = [5, 20, 2, 23, 8, 10, 1], and an integer x, and you have to find whether x exists in A:
for i ← 1 to length of A do
    if A[i] = x then
        return TRUE
return FALSE
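A C++ version of this search (an illustrative sketch; the function name is an assumption). The best case is one comparison (x at A[0]); the worst case is n comparisons (x at the last position or absent):

#include <cstddef>
#include <vector>

// Linear search: returns true if x occurs in A, false otherwise.
bool contains(const std::vector<int>& A, int x) {
    for (std::size_t i = 0; i < A.size(); ++i)  // worst case: all n elements inspected
        if (A[i] == x)
            return true;                        // best case: hit on the first element
    return false;                               // x is not in the array
}

On the example array above, contains(A, 23) returns true after four comparisons, while contains(A, 99) returns false only after all seven.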
HOW TO ESTIMATE THE RUN-TIME COMPLEXITY OF AN ALGORITHM?
When we analyze algorithms, we should employ mathematical techniques (a theoretical model) that analyze algorithms independently of any particular implementation or hardware.
Before we can analyze an algorithm, we must have a model of the implementation technology that will be used,
including a model for the resources of that technology and their costs.
We shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology, and understand that our algorithms will be implemented as computer programs.
Under the RAM model, we measure the run time of an algorithm by counting up the number of significant (basic, primitive) operations in the algorithm.
Then, we will express the efficiency of algorithms using growth functions.
Each memory access takes exactly one time step, and we have as much memory as we need.
Primitive operations are basic computations performed by an algorithm, identifiable in pseudocode.
Examples: evaluating an expression, assigning a value to a variable, indexing into an array, calling a method, returning from a method.
Running time of a selection statement (if, switch) is the time for the condition evaluation + the maximum of the
running times for the individual clauses in the selection.
The running time of a for loop is at most the running time of the statements inside the for loop (including tests)
times the number of iterations.
Always assume that the loop executes the maximum number of iterations possible.
Running time of a function call is 1 for setup + the time for any parameter calculations + the time required for
the execution of the function body.
By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm as a function of the input size. For example:
#include <iostream>
using namespace std;

int main() {
    int n, i;
    int no = 0;                   // 1 for the assignment statement
    cout << "Enter an integer";   // 1 for the output statement
    cin >> n;                     // 1 for the input statement
    for (i = 0; i < n; i++)       // 1 assignment, n + 1 tests, n increments
        no = no + 1;              // n iterations of 2 units each: an assignment and an addition
    return 0;                     // 1 for the return statement
}
Total: 1 + 1 + 1 + (1 + (n + 1) + n) + 2n + 1 = 4n + 6 primitive operations, which is O(n).
Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case scenarios of an algorithm.
Asymptotic analysis is input-bound: if there is no input to the algorithm, it is concluded to work in constant time.
Other than the "input" all other factors are considered constant.
In asymptotic notation, we use only the most significant terms to represent the time complexity of an algorithm.
Asymptotic Notation identifies the behavior of an algorithm as the input size changes.
Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.
These are some basic function growth classifications used in various notations.
Logarithmic Function - log n
Linear Function - an + b
Quadratic Function - an^2 + bn + c
Polynomial Function - a_z*n^z + ... + a_1*n^1 + a_0*n^0, where z is some constant
Exponential Function - a^n, where a is some constant
The list starts at the slowest growing function (logarithmic, fastest execution time) and goes on to the fastest
growing (exponential, slowest execution time).
We disregard constants and lower-order terms because, as the input size (or n in our f(n) example) increases to infinity (mathematical limits), the lower-order terms and constants are of little to no importance.
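For example (an illustration added here, using the 4n + 6 count derived earlier): at n = 10 the program performs 46 operations; at n = 1000 it performs 4006, where the "+6" contributes only about 0.15%; at n = 1,000,000 the constant is negligible. We therefore keep only the dominant term and write O(n).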
BIG-O
Big-O, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth for a given
function.
It provides us with an asymptotic upper bound for the growth rate of runtime of an algorithm.
Say f(n) is your algorithm's runtime, and g(n) is an arbitrary time complexity you are trying to relate to your algorithm. f(n) is O(g(n)) if, for some real constants c (c > 0) and n0, f(n) <= c·g(n) for every input size n (n > n0).
Example: let f(n) = 10n + 5 and g(n) = n. To show that f(n) is O(g(n)), we must find constants c and n0 such that
f(n) <= c·g(n) for all n >= n0,
i.e., 10n + 5 <= c·n for all n >= n0.
Try c = 15. Then we need to show that 10n + 5 <= 15n.
Solving for n we get: 5 <= 5n, or 1 <= n.
So f(n) = 10n + 5 <= 15·g(n) for all n >= 1 (c = 15, n0 = 1).
O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n^2) < O(n^3) < ... < O(2^n) < O(3^n) < ... < O(n^n)
Big-Omega, commonly written as Ω, is an Asymptotic Notation for the best case, or a floor growth rate for a
given function.
It provides us with an asymptotic lower bound for the growth rate of runtime of an algorithm.
f(n) is Ω(g(n)) if, for some real constants c (c > 0) and n0 (n0 > 0), f(n) >= c·g(n) for every input size n (n > n0).
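A worked example in the same style as the Big-O one above (the numbers are my own illustration): let f(n) = 10n + 5 and g(n) = n. Since 10n + 5 >= 10n for all n >= 1, we have f(n) >= 10·g(n) with c = 10 and n0 = 1, so f(n) is Ω(n).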
Theta, commonly written as Θ, is an Asymptotic Notation to denote the asymptotically tight bound on the growth
rate of runtime of an algorithm.
f(n) is Θ(g(n)) if, for some real constants c1, c2, and n0 (c1 > 0, c2 > 0, n0 > 0), c1·g(n) <= f(n) <= c2·g(n) for every input size n (n > n0).
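Combining the two examples above (again an added illustration): for f(n) = 10n + 5, we have 10·n <= 10n + 5 <= 15·n for all n >= 1, so with c1 = 10, c2 = 15, and n0 = 1, f(n) is Θ(n).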