MODULE-I
Algorithms
An algorithm is a type of effective method: a definite list of well-defined instructions for completing a task which, given an initial state, proceeds through a well-defined series of successive states and eventually terminates in an end state.
Algorithm Specification
Algorithm Classification
There are various ways to classify algorithms. They are as follows:
Classification by implementation
Recursion or iteration: A recursive algorithm is one that invokes (makes
reference to) itself repeatedly until a certain condition is met, a method
common to functional programming. Iterative algorithms use repetitive
constructs like loops and sometimes additional data structures like stacks to
solve the given problems. Some problems are naturally suited for one
implementation or the other. For example, the Towers of Hanoi is well
understood in its recursive implementation.
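As a small illustration (a C sketch, not from the text), here is the same computation, the sum 1 + 2 + ... + n, written both ways:

/* Recursive version: the function invokes itself on a smaller
   input until the base case n == 0 is reached. */
int sum_rec(int n) {
    if (n == 0) return 0;        /* base case */
    return n + sum_rec(n - 1);   /* recursive call */
}

/* Iterative version: a loop replaces the recursive calls. */
int sum_iter(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}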
For a given problem, there are many ways to solve it. The
different methods are listed below.
1. Divide and Conquer.
2. Greedy Algorithm.
3. Dynamic Programming.
4. Branch and Bound.
5. Backtracking Algorithms.
6. Randomized Algorithm.
Now let us discuss each method briefly.
1. Divide and Conquer
Divide and conquer method consists of three steps.
a. Divide the original problem into a set of sub-problems.
b. Solve every sub-problem individually, recursively.
c. Combine the solutions of the sub-problems into a solution of the whole original problem.
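As an illustrative sketch (not from the text), all three steps can be seen in a divide-and-conquer function that finds the maximum of a[lo..hi]:

/* Divide and conquer: find the maximum of a[lo..hi]. */
int range_max(const int a[], int lo, int hi) {
    if (lo == hi) return a[lo];            /* a single element: trivially solved */
    int mid = lo + (hi - lo) / 2;          /* (a) divide into two sub-problems */
    int left  = range_max(a, lo, mid);     /* (b) solve each sub-problem recursively */
    int right = range_max(a, mid + 1, hi);
    return left > right ? left : right;    /* (c) combine the two solutions */
}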
2. Greedy Approach
Greedy algorithms seek to optimize a function by making choices that are
locally best, without looking at the global problem. The result is a good solution
but not necessarily the best one. A greedy algorithm does not always guarantee
the optimal solution, but it generally produces solutions that are very close
in value to the optimal one.
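For instance, making change with as few coins as possible by always taking the largest coin that fits is a greedy strategy. A minimal C sketch (the denomination set is an assumption for the example; greedy happens to be optimal for denominations like 25, 10, 5, 1, but not for every coin system):

/* denom[] holds k coin values sorted in decreasing order. */
int greedy_change(int amount, const int denom[], int k) {
    int coins = 0;
    for (int i = 0; i < k; i++) {
        coins += amount / denom[i];  /* locally best: take as many of the largest coin as possible */
        amount %= denom[i];
    }
    return coins;                    /* a good answer, but not guaranteed optimal in general */
}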
3. Dynamic Programming.
5. Backtracking Algorithm.
Backtracking algorithms try each possibility until they find the right one. It is a depth-first
search of the set of possible solutions. During the search, if an alternative
doesn't work, the search backtracks to the choice point, the place which
presented different alternatives, and tries the next alternative. If there are no
more choice points, the search fails.
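A minimal backtracking sketch in C (illustrative, not from the text): decide whether some subset of a[i..n-1] sums to target. Each call is a choice point with two alternatives; when one fails, control returns to the choice point and the other is tried.

#include <stdbool.h>

bool subset_sum(const int a[], int n, int i, int target) {
    if (target == 0) return true;                /* a solution is found */
    if (i == n) return false;                    /* no more choice points: this branch fails */
    if (subset_sum(a, n, i + 1, target - a[i]))  /* alternative 1: include a[i] */
        return true;
    return subset_sum(a, n, i + 1, target);      /* backtrack, try alternative 2: skip a[i] */
}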
6. Randomized Algorithm.
COMPUTATIONAL PROCEDURE
Algorithms that are definite and effective are also called computational
procedures. One important example of computational procedures is the
operating system of a digital computer. This procedure is designed to control the
execution of jobs, in such a way that when no jobs are available, it does not
terminate but continues in a waiting state until a new job is entered.
PROGRAM
A program is the expression of an algorithm in a programming language.
PSEUDOCODE CONVENTIONS
We can describe an algorithm in many ways. We can use a natural language like
English, although if we select this option, we must make sure that the resulting
instructions are definite.
We can present most of our algorithms using a pseudocode that resembles C.
1. Comments begin with // and continue until the end of the line.
Eg: node = record
{
    datatype_1 data_1;
    :
    datatype_n data_n;
    node *link;
}
4. Assignment of values to variables is done using the assignment statement
<variable> := <expression>;
Eg: count:= count+1;
5. There are two Boolean values, true and false. In order to produce these values,
the logical operators and, or and not and the relational operators
<, <=, =, !=, >= and > are provided.
6. Elements of multidimensional arrays are accessed using [ and ]. For example,
if A is a two-dimensional array, the (i, j)th element of the array is denoted
A[i, j]. Array indices start at zero.
7. The following looping statements are employed: for, while and repeat-until.
The while loop takes the following form:
while (condition) do
{
    <statement 1>
    :
    <statement n>
}
8. A conditional statement has the following forms:
if <condition> then <statement>
if <condition> then <statement 1> else <statement 2>
RECURSIVE ALGORITHMS
Linear Recursion
A linear recursive function is a function that makes only a single call to itself
each time the function runs (as opposed to one that would call itself multiple
times during its execution). The factorial function is a good example of linear
recursion. Another example of a linear recursive function would be one to
compute the square root of a number using Newton's method (assume
EPSILON to be a very small number close to 0):
#define EPSILON 1e-7   /* assume EPSILON to be a very small number close to 0 */

double my_sqrt(double x, double a) {     /* a is the current guess for sqrt(x) */
    double difference = a*a - x;         /* how far a^2 is from x */
    if (difference < 0.0) difference = -difference;
    if (difference < EPSILON) return a;  /* close enough: a approximates sqrt(x) */
    else return my_sqrt(x, (a + x/a)/2.0);   /* Newton step: average a and x/a */
}
Tail recursive
Tail recursion is a form of linear recursion. In tail recursion, the recursive call is
the last thing the function does. Often, the value of the recursive call is returned.
As such, tail recursive functions can often be easily implemented in an iterative
manner; by taking out the recursive call and replacing it with a loop, the same
effect can generally be achieved. In fact, a good compiler can recognize tail
recursion and convert it to iteration in order to optimize the performance of the
code.
A good example of a tail recursive function is a function to compute the GCD,
or Greatest Common Divisor, of two numbers:
int gcd(int m, int n)
{
    int r;
    if (m < n) return gcd(n, m);   /* ensure m >= n */
    r = m % n;
    if (r == 0) return n;
    else return gcd(n, r);         /* tail call: the last thing the function does */
}
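Since the recursive call above is in tail position, it can be converted to a loop mechanically. The following iterative version (a sketch of what a tail-call-optimizing compiler effectively produces) computes the same result:

int gcd_iter(int m, int n)
{
    while (n != 0) {       /* the loop replaces the tail-recursive calls */
        int r = m % n;     /* same computation as in gcd() */
        m = n;             /* gcd(m, n) becomes gcd(n, r) */
        n = r;
    }
    return m;              /* when the remainder reaches 0, the answer is in m */
}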
Binary Recursive
Some recursive functions don't just have one call to themselves; they have two (or
more). Functions with two recursive calls are referred to as binary recursive
functions. The mathematical combinations operation is a good example of a
function that can quickly be implemented as a binary recursive function. The
number of combinations, often represented as nCk, where we are choosing k
elements out of a set of n elements, can be implemented as follows:
int choose(int n, int k)
{
    if (k == 0 || n == k) return 1;                  /* base cases: nC0 = nCn = 1 */
    else return choose(n-1, k) + choose(n-1, k-1);   /* Pascal's identity */
}
Exponential recursion
Mutual Recursion
Mutually recursive functions are functions that call each other. A classic example is a pair of functions to decide whether a number is even or odd:

int is_odd(unsigned int n);   /* forward declaration: the two functions call each other */

int is_even(unsigned int n)
{
    if (n == 0) return 1;
    else return is_odd(n - 1);
}

int is_odd(unsigned int n)
{
    return (!is_even(n));
}
Recursive Algorithms
A recursive algorithm calls itself on a smaller input until it reaches a base case:
for the factorial function, factorial(0) = 1 and factorial(n) = n * factorial(n-1).
In other words,
factorial(0) => 1
factorial(3)
=> 3 * factorial(2)
=> 3 * 2 * factorial(1)
=> 3 * 2 * 1 * factorial(0)
=> 3 * 2 * 1 * 1
=> 6
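A direct C rendering of this definition (a minimal sketch):

/* factorial(0) = 1; factorial(n) = n * factorial(n-1) */
unsigned long factorial(unsigned int n) {
    if (n == 0) return 1;            /* base case */
    return n * factorial(n - 1);     /* recursive case */
}

Calling factorial(3) produces exactly the chain of calls traced above and returns 6.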
The Towers of Hanoi puzzle (TOH) was first posed by a French professor,
Édouard Lucas, in 1883. Although commonly sold today as a children's toy, it
is often discussed in discrete mathematics or computer science books because it
provides a simple example of recursion. In addition, its analysis is
straightforward and it has many variations of varying difficulty.
The object of the Towers of Hanoi problem is to specify the steps required to
move the disks (or, as we will sometimes call them, rings) from pole r (r = 1, 2,
or 3) to pole s (s = 1, 2, or 3; s ≠ r), observing the following rules:
i. Only one disk at a time may be moved.
ii. A larger disk may never be placed on top of a smaller one.
[Figure: The Towers of Hanoi problem]
Solution: The algorithm to solve this problem exemplifies the recursive
paradigm. We imagine that we know a solution for n − 1 disks ("reduce to a
previous case"), and then we use this solution to solve the problem for n disks.
Thus, to move n disks from pole 1 to pole 3, we would:
1. Move n − 1 disks (the imagined known solution) from pole 1 to pole 2.
However we do this, the nth disk on pole 1 will never be in our way, because
any valid sequence of moves with only n − 1 disks will still be valid if there is
an nth (larger) disk always sitting at the bottom of pole 1 (why?).
2. Move disk n from pole 1 to pole 3.
3. Use the same method as in Step 1 to move the n − 1 disks now on pole 2 to
pole 3.
Algorithm
Algorithm TowersOfHanoi(n, x, y, z)
// Move the top n disks from tower x to tower y, with tower z as intermediate.
{
    if (n >= 1) then
    {
        TowersOfHanoi(n-1, x, z, y);
        write ("move top disk from tower", x, "to top of tower", y);
        TowersOfHanoi(n-1, z, y, x);
    }
}
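A runnable C version of this algorithm (a sketch; pole names are passed as characters here):

#include <stdio.h>

void towers(int n, char x, char y, char z) {
    if (n >= 1) {
        towers(n - 1, x, z, y);   /* move n-1 disks from x to z, using y */
        printf("move top disk from tower %c to top of tower %c\n", x, y);
        towers(n - 1, z, y, x);   /* move those n-1 disks from z to y, using x */
    }
}

For example, towers(3, '1', '3', '2') prints the 2^3 − 1 = 7 moves needed for three disks.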
Space Complexity
Space complexity of an algorithm is the amount of memory needed by the
program for its completion. The space needed by a program has the following
components:
1. Instruction Space
Space needed to store the compiled version of the program. It depends on:
i. The compiler used
ii. The options specified at the time of compilation
e.g., whether optimization is specified, whether an overlay option is used, etc.
iii. The target computer
e.g., whether hardware for floating-point arithmetic is present or not.
2. Data Space
Space needed to store constant and variable values. It has two
components:
i. Space for constants:
e.g., the value 3 in Program 1.1
ii. Space for simple variables:
e.g., variables a, b, c in Program 1.1
3. Environment Stack Space
Space needed to resume execution of partially completed functions; for each
call it holds the return address and the values of formal parameters and local
variables.
Program 1.1
int add (int a, int b, int c)
{
return (a+b+c)/3;
}
Program 1.2
int Radd(int a[], int n)
{
    if (n > 0)
        return Radd(a, n - 1) + a[n - 1];
    else
        return 0;
}
Example
Refer Program 1.1
One word each for the variables a, b and c. The space needed does not depend
on the instance characteristics. Hence Sp(TC) = 0.
Example
Program 1.3
int Aadd(int *a, int n)
{
    int s = 0, i;
    for (i = 0; i < n; i++)
        s += a[i];
    return s;
}
One word each for the variables n, i and s. The space for a[] is the address of
a[0], so it requires one word. Nothing depends on the instance characteristics.
Hence Sp(TC) = 0.
Example
Refer Program 1.2
The space depends on the instance characteristic n. The recursive stack space
includes space for the formal parameters, local variables and the return address:
one word each for a[], n, the return address and the returned value, i.e., 4 words
per call. The total recursive stack space needed is therefore 4n.
Hence Sp(TC) = 4n.
Time Complexity
1. Comments:
Not executable, hence step count = 0.
2. Declarative Statements:
Statements that define or characterize variables and constants (int, long, enum, ...),
statements defining data types (class, struct, union, template),
access-control statements (public, private, protected, friend) and
function qualifiers (void, virtual).
All of the above are non-executable, hence step count = 0.
4. Iterative Statements:
while <expr> do
do .. while (<expr>)
Step count = the step count assignable to <expr>.
5. Switch Statements:
switch (<expr>) {
case cond1: <statement1>
case cond2: <statement2>
.
.
default: <statement>
}
The switch header has a step count equal to the cost of <expr>; the step count
of each condition includes its own cost plus the cost of all preceding conditions.
6. If-else Statements:
if (<expr>) <statement1>;
else <statement2>;
The step count of the if-else itself is the cost of <expr>; <statement1> and
<statement2> are counted separately.
7. Function invocation:
Every function invocation has step count = 1, unless it has parameters passed
by value whose size depends on the instance characteristics; in that case, the
step count is the sum of the sizes of these values.
If the function being invoked is recursive, consider its local variables also.
9. Function Statements:
Step count = 0; the cost is already assigned to the invoking statements.
10. Jump Statements:
continue, break and goto have step count = 1.
Example
Refer Program 1.2
Introducing a counter for each executable line we can rewrite the program as
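A sketch of what the instrumented program looks like under the usual global-counter convention (the variable count is introduced only for the analysis):

int count = 0;                  /* global step counter */

int Radd(int a[], int n)
{
    count++;                    /* for the if conditional */
    if (n > 0) {
        count++;                /* for the return and the recursive invocation */
        return Radd(a, n - 1) + a[n - 1];
    }
    count++;                    /* for the else return */
    return 0;
}

This gives tRadd(0) = 2 and tRadd(n) = 2 + tRadd(n-1) for n > 0, which is solved below.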
Case 1: n = 0
tRadd(0) = 2
Case 2: n > 0
tRadd(n) = 2 + tRadd(n-1)
         = 2 + 2 + tRadd(n-2)
         = 2*2 + tRadd(n-2)
         .
         .
         = 2n + tRadd(0)
         = 2n + 2
Example
Program 1.4
int Madd(int a[][], int b[][], int c[][], int m, int n)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            c[i][j] = a[i][j] + b[i][j];
}
Introducing a counter for each executable line we can rewrite the program as
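A sketch of the instrumented version under the same convention (N is an assumed compile-time column bound, needed only because C requires a fixed column dimension):

#define N 100                       /* assumed column bound for the sketch */
int count = 0;                      /* global step counter */

void Madd(int a[][N], int b[][N], int c[][N], int m, int n)
{
    for (int i = 0; i < m; i++) {
        count++;                    /* for the i for-loop test */
        for (int j = 0; j < n; j++) {
            count++;                /* for the j for-loop test */
            c[i][j] = a[i][j] + b[i][j];
            count++;                /* for the assignment */
        }
        count++;                    /* for the last test of the j loop */
    }
    count++;                        /* for the last test of the i loop */
}

On exit, count has increased by 2mn + 2m + 1, so tMadd(m, n) = 2mn + 2m + 1.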
The step count alone does not reflect the complexity of a statement; that is
captured by the steps per execution (s/e).
Asymptotic Notations
Asymptotic notation describes how the complexity of an algorithm behaves as
n grows large. For example, consider two programs with complexities
c1n^2 + c2n and c3n respectively. For small values of n, which is faster
depends upon the values of c1, c2 and c3. But there will also be an n beyond
which c3n is better than c1n^2 + c2n. This value of n is called the break-even
point. If this point is zero, c3n is always faster (or at least as fast).
Common asymptotic functions are given below.
Function      Name
1             Constant
log n         Logarithmic
n             Linear
n log n       n log n
n^2           Quadratic
n^3           Cubic
2^n           Exponential
n!            Factorial
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤
c g(n) for all n ≥ n0 }
It is an upper bound on a function; hence it denotes the worst-case
complexity of an algorithm. We can represent it graphically as
Fig 1.1
Linear Functions
Example 1.6
f(n) = 3n + 2
When n ≥ 2, 3n + 2 ≤ 3n + n = 4n
Hence f(n) = O(n), here c = 4 and n0 = 2
When n ≥ 1, 3n + 2 ≤ 3n + 2n = 5n
Hence f(n) = O(n), here c = 5 and n0 = 1
Hence there can be different (c, n0) pairs satisfying the definition for a given function.
Example 1.7
f(n) = 3n + 3
When n ≥ 3, 3n + 3 ≤ 3n + n = 4n
Hence f(n) = O(n), here c = 4 and n0 = 3
Example 1.8
f(n) = 100n + 6
When n ≥ 6, 100n + 6 ≤ 100n + n = 101n
Hence f(n) = O(n), here c = 101 and n0 = 6
Quadratic Functions
Example 1.9
f(n) = 10n^2 + 4n + 2
When n ≥ 2, 10n^2 + 4n + 2 ≤ 10n^2 + 5n
When n ≥ 5, 5n ≤ n^2, so 10n^2 + 4n + 2 ≤ 10n^2 + n^2 = 11n^2
Hence f(n) = O(n^2), here c = 11 and n0 = 5
Example 1.10
f(n) = 1000n^2 + 100n − 6
f(n) ≤ 1000n^2 + 100n for all values of n.
When n ≥ 100, 100n ≤ n^2, so f(n) ≤ 1000n^2 + n^2 = 1001n^2
Hence f(n) = O(n^2), here c = 1001 and n0 = 100
Exponential Functions
Example 1.11
f(n) = 6·2^n + n^2
When n ≥ 4, n^2 ≤ 2^n
So f(n) ≤ 6·2^n + 2^n = 7·2^n
Hence f(n) = O(2^n), here c = 7 and n0 = 4
Constant Functions
Example 1.12
f(n) = 10
f(n) = O(1), because f(n) ≤ 10*1
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤
f(n) for all n ≥ n0 }
It is a lower bound on a function; hence it denotes the best-case complexity of
an algorithm. We can represent it graphically as
Fig 1.2
Example 1.13
f(n) = 3n + 2
3n + 2 > 3n for all n ≥ 1.
Hence f(n) = Ω(n), here c = 3 and n0 = 1.
Similarly we can solve all the examples specified under Big 'Oh'.
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }
It bounds a function from both above and below; hence it denotes a tight
bound. We can represent it graphically as
Fig 1.3
Example 1.14
f(n) = 3n + 2
f(n) = Θ(n), because f(n) = O(n) for n ≥ 2 and f(n) = Ω(n) for n ≥ 1.
Little 'oh' (o) defines a strict upper bound, one that is not asymptotically
tight. The main difference from Big Oh is that Big Oh requires f(n) ≤ c g(n)
to hold for some constant c, while little oh requires it to hold for every
constant c > 0.
Recurrence Relations
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
Substitution Method
• Use mathematical induction to derive an answer.
• Derive a function of n (or other variables used to express the size of the
problem) that is not a recurrence, so we can establish an upper and/or
lower bound on the recurrence.
• We may get an exact solution, or may just get upper or lower bounds on the
solution.
Steps
1. Guess the form of the solution.
2. Use mathematical induction to find the constants and show that the solution works.
Example
Show that the recurrence relation T(n) = 2T(n/2) + n is O(n lg n).
Example
Show that the recurrence relation T(n) = c + T(n/2) is O(lg n).
Example
Show that the recurrence relation T(n) = T(n-1) + n is O(n^2).
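As a quick check of the last claim, repeated expansion gives:

T(n) = T(n-1) + n
     = T(n-2) + (n-1) + n
     = ...
     = T(0) + 1 + 2 + ... + n
     = T(0) + n(n+1)/2
     = O(n^2)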
Example
Show that the recurrence relation T(n) = 2T(n/2) + n is O(n lg n).
• Guess: T(n) = O(n lg n)
– Induction goal: T(n) ≤ cn lg n, for some c and all n ≥ n0
– Induction hypothesis: T(n/2) ≤ c(n/2) lg(n/2)
• Proof of induction goal:
T(n) = 2T(n/2) + n ≤ 2c(n/2) lg(n/2) + n
     = cn lg(n/2) + n
     = cn lg n − cn + n ≤ cn lg n
which holds if −cn + n ≤ 0, i.e., c ≥ 1.
Recursion Tree Method
Steps
• Convert the recurrence into a tree.
• Each node represents the cost of a single subproblem somewhere in the
set of recursive function invocations.
• Sum the costs within each level of the tree to obtain a set of per-level
costs.
• Sum all the per-level costs to determine the total cost of all levels of the
recursion.
Example 1.18
T(n) = 3T(n/4) + Θ(n^2)
T(n) = 3T(n/4) + cn^2
The cost at depth i of the recursion tree is (3/16)^i cn^2, and the leaves contribute Θ(n^(log_4 3)):
T(n) = cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + ... + (3/16)^(log_4 n − 1) cn^2 + Θ(n^(log_4 3))
     = Σ (i = 0 to log_4 n − 1) (3/16)^i cn^2 + Θ(n^(log_4 3))
     < Σ (i = 0 to ∞) (3/16)^i cn^2 + Θ(n^(log_4 3))
     = (1 / (1 − 3/16)) cn^2 + Θ(n^(log_4 3))
     = (16/13) cn^2 + Θ(n^(log_4 3))
     = O(n^2)
Example
W(n) = 2W(n/2) + Θ(n^2)
The cost at depth i of the recursion tree is 2^i (n/2^i)^2 = n^2/2^i, and there are 2^(lg n) = n leaves:
W(n) = Σ (i = 0 to lg n − 1) n^2/2^i + 2^(lg n) W(1)
     = n^2 Σ (i = 0 to lg n − 1) (1/2)^i + O(n)
     < n^2 Σ (i = 0 to ∞) (1/2)^i + O(n)
     = (1 / (1 − 1/2)) n^2 + O(n)
     = 2n^2 + O(n)
W(n) = O(n^2)
Example
W(n) = W(n/3) + W(2n/3) + Θ(n)
• The longest path from the root to a leaf is: n → (2/3)n → (2/3)^2 n → ... → 1
• The subproblem size hits 1 when 1 = (2/3)^i n, i.e., when i = log_(3/2) n
• The cost of the problems at each level is n
• Total cost:
W(n) ≤ Σ (i = 0 to log_(3/2) n) n = n (log_(3/2) n + 1) = n (lg n / lg(3/2)) + n = O(n lg n)
W(n) = O(n lg n)
Example
T(n) = T(n/4) + T(n/2) + n^2
T(n) = n^2 + 8T(n/2)
PROFILING
Profiling, or performance measurement, is the process of executing a correct
program on data sets and measuring the time and space it takes to compute the
results.
NONDETERMINISTIC ALGORITHMS
Nondeterministic algorithms are specified with the help of three functions:
1. Choice(S) arbitrarily chooses one of the elements of the set S.
2. Failure() signals an unsuccessful completion.
3. Success() signals a successful completion.
The computing times for Choice, Failure and Success are taken to be O(1).
A machine capable of executing a nondeterministic algorithm in this way
is called a nondeterministic machine. Although nondeterministic machines do
not exist in practice, they provide strong intuitive reasons to
conclude that certain problems cannot be solved by fast deterministic
algorithms.
Example
Consider the problem of searching for an element x in a given set of elements
A[1:n], n ≥ 1. We are required to determine an index j such that A[j] = x, or
j = 0 if x is not in A.
1. j := Choice(1, n);
2. if A[j] = x then { write(j); Success(); }
3. write(0); Failure();
AMORTIZED COMPLEXITY
Determining the step count for each operation in a sequence of operations is
often quite difficult; instead, we obtain an upper bound on the step count for
the sequence by adding together the worst-case step counts of the individual
operations.
Example: Consider the method insert of Program 2.10. This method inserts an
element into a sorted array, and its step count ranges from a low of 4 to a high
of 2n + 4, where n is the number of elements already in the array. Suppose we
perform 5 insert operations beginning with n = 0. Further, suppose that the
actual step counts for these insert operations are 4, 4, 6, 10 and 8, respectively.
The actual step count for the sequence of insert operations is 4 + 4 + 6 + 10 + 8
= 32. If we did not know the actual step count for the individual operations, we
could obtain an upper bound on the actual step count for the operation sequence
using one of the following two approaches.
1. Since the worst-case step count for an insert operation is 2n + 4,
Σ (i = 0 to 4) (2i + 4) = 4 + 6 + 8 + 10 + 12 = 40 is an upper bound on the step
count for the sequence of 5 inserts.
2. The maximum number of elements already in the array at the time an
insert operation begins is 4. Therefore, the worst-case step count of an
insert operation is 2*4 + 4 = 12. Therefore, 5*12 = 60 is an upper bound on
the step count for the sequence of 5 inserts.
In the preceding example, the upper bound obtained by the first approach is
closer to the actual step count for the operation sequence. We say that the count
obtained by the first approach is a tighter (i.e., closer to the real count) upper
bound than that obtained by the second approach.
The amortized costs must satisfy
Σ (1 ≤ i ≤ n) amortized(i) ≥ Σ (1 ≤ i ≤ n) actual(i)    (1)
for all n, where amortized(i) and actual(i), respectively, denote the amortized and actual
complexities of the ith operation in a sequence of n operations.
You may view the amortized cost of an operation as being the amount you
charge the operation rather than the amount the operation costs. You can charge
an operation any amount you wish so long as the amount charged to all
operations in the sequence is at least equal to the actual cost of the operation
sequence.
Define the potential function P by
P(i) = amortized(i) − actual(i) + P(i−1)    (2)
That is, the ith operation causes the potential function to change by the
difference between the amortized and actual costs of that operation. If we sum
Equation (2) for 1 ≤ i ≤ n, we get
Σ (1 ≤ i ≤ n) P(i) = Σ (1 ≤ i ≤ n) (amortized(i) − actual(i) + P(i−1))
or
Σ (1 ≤ i ≤ n) (P(i) − P(i−1)) = Σ (1 ≤ i ≤ n) (amortized(i) − actual(i))
or
P(n) − P(0) = Σ (1 ≤ i ≤ n) (amortized(i) − actual(i))
Under the assumption that P(0) = 0, the potential P(i) is the amount by which
the first i operations have been overcharged (i.e., they have been charged more
than their actual cost).
1. Aggregate Method
In the aggregate method, we determine an upper
bound UpperBoundOnSumOfActualCosts(n) for the sum of the actual
costs of the n operations. The amortized cost of each operation is set
equal to UpperBoundOnSumOfActualCosts(n)/n. You may verify that
this assignment of amortized costs satisfies Equation (1) and is, therefore,
valid.
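Applied to the five-insert sequence above: UpperBoundOnSumOfActualCosts(5) = 40, so each insert is assigned the amortized cost 40/5 = 8. The amortized costs then sum to 5 · 8 = 40 ≥ 32, the actual total, so Equation (1) holds.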
2. Accounting Method
In this method, we assign amortized costs to the operations (possibly by
guessing what assignment will work), compute the P(i)s using Equation
(2), and show that
P(n) − P(0) ≥ 0    (3)
3. Potential Method
Here, we start with a potential function (possibly obtained using good
guesswork) that satisfies Equation (3), and compute the amortized
complexities using Equation (2).