
MODULE-I


INTRODUCTION AND COMPLEXITY

Algorithms

An algorithm is an effective method consisting of a definite list of well-defined instructions for completing a task; given an initial state, it proceeds through a well-defined series of successive states, eventually terminating in an end-state. The concept of an algorithm originated as a means of recording procedures for solving mathematical problems such as finding the common divisor of two numbers or multiplying two numbers.

Algorithm Specification

The criteria for any set of instructions to be an algorithm are as follows:


 Input : Zero or more quantities are externally supplied.
 Output : At least one quantity is produced.
 Definiteness : Each instruction should be clear and unambiguous.
 Finiteness : The algorithm terminates after a finite number of steps for all cases.
 Effectiveness : Each instruction must be basic enough for a person to carry it out using pen and paper; that is, each instruction must be not only definite but also feasible.

Algorithm Classification
There are various ways to classify algorithms. They are as follows:

Classification by implementation
Recursion or iteration: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition is met, a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited to one implementation or the other. For example, the Towers of Hanoi problem is well understood in its recursive implementation.

Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
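As a small illustration of the two styles (a sketch in C, not from the original notes), here is the same computation, the sum 1 + 2 + ... + n, written both recursively and iteratively:

/* Recursive version: reduces the problem to a smaller instance of itself. */
int sum_recursive(int n)
{
    if (n == 0) return 0;               /* base case */
    return n + sum_recursive(n - 1);    /* recursive case */
}

/* Equivalent iterative version: a loop and an accumulator
   replace the implicit call stack. */
int sum_iterative(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}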
Logical: An algorithm may be viewed as controlled logical deduction.
This notion may be expressed as:
Algorithm = logic + control.
The logic component expresses the axioms that may be used in the
computation and the control component determines the way in which
deduction is applied to the axioms. This is the basis for the logic
programming paradigm. In pure logic programming languages the control
component is fixed and algorithms are specified by supplying only the
logic component. The appeal of this approach is the elegant semantics: a
change in the axioms has a well defined change in the algorithm.
Serial or parallel or distributed: Algorithms are usually discussed with
the assumption that computers execute one instruction of an algorithm at
a time. Those computers are sometimes called serial computers. An
algorithm designed for such an environment is called a serial algorithm,
as opposed to parallel algorithms or distributed algorithms. Parallel
algorithms take advantage of computer architectures where several
processors can work on a problem at the same time, whereas distributed
algorithms utilise multiple machines connected with a network. Parallel
or distributed algorithms divide the problem into more symmetrical or
asymmetrical sub problems and collect the results back together. The
resource consumption in such algorithms is not only processor cycles on
each processor but also the communication overhead between the
processors. Sorting algorithms can be parallelized efficiently, but their
communication overhead is expensive. Iterative algorithms are generally
parallelizable. Some problems have no parallel algorithms, and are called
inherently serial problems.
Deterministic or non-deterministic: Deterministic algorithms solve the
problem with exact decision at every step of the algorithm whereas non-
deterministic algorithm solves problems via guessing although typical
guesses are made more accurate through the use of heuristics.
Exact or approximate: While many algorithms reach an exact solution,
approximation algorithms seek an approximation that is close to the true
solution. Approximation may use either a deterministic or a random
strategy. Such algorithms have practical value for many hard problems.

ALGORITHM DESIGN TECHNIQUES

For a given problem, there are many ways to solve it. The different methods are listed below.
1. Divide and Conquer.

2. Greedy Algorithm.
3. Dynamic Programming.
4. Branch and Bound.
5. Backtracking Algorithms.
6. Randomized Algorithm.
Now let us discuss each method briefly.
1. Divide and Conquer
Divide and conquer method consists of three steps.
a. Divide the original problem into a set of sub-
problems.
b. Solve every sub-problem individually, recursively.
c. Combine the solutions of the sub-problems into a
solution of the whole original problem.
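As a sketch of this pattern (an illustrative C implementation, not part of the original text), merge sort divides an array in half, solves each half recursively, and combines the two sorted halves by merging:

#include <string.h>

/* Merge sort, a classic divide-and-conquer algorithm.
   Sorts a[lo..hi]; tmp[] is scratch space of the same length as a[]. */
void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi) return;                /* one element: already sorted */
    int mid = (lo + hi) / 2;
    merge_sort(a, tmp, lo, mid);         /* divide: sort the left half */
    merge_sort(a, tmp, mid + 1, hi);     /* divide: sort the right half */

    /* combine: merge the two sorted halves into tmp[], then copy back */
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], &tmp[lo], (hi - lo + 1) * sizeof(int));
}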
2. Greedy Approach

Greedy algorithms seek to optimize a function by making choices which are the best locally but do not look at the global problem. The result is a good solution but not necessarily the best one. A greedy algorithm does not always guarantee the optimal solution; however, it generally produces solutions that are very close in value to the optimal one.
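A minimal sketch of the greedy idea (an illustration; the coin denominations are assumed for the example): making change with coins {25, 10, 5, 1}, where always taking the largest coin that fits happens to be optimal.

/* Greedy coin change: repeatedly take the largest coin that fits.
   This is optimal for canonical systems such as {25, 10, 5, 1};
   for arbitrary denominations the greedy answer may be suboptimal,
   illustrating that locally best choices need not be globally best. */
int min_coins_greedy(int amount)
{
    int coins[] = {25, 10, 5, 1};
    int count = 0;
    for (int i = 0; i < 4; i++) {
        count  += amount / coins[i];   /* take as many of this coin as fit */
        amount %= coins[i];
    }
    return count;
}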

3. Dynamic Programming.

Dynamic programming is a technique for efficient solution. It is a method of solving problems that exhibit the properties of overlapping subproblems and optimal substructure, and it takes much less time than naive methods.
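A small sketch of the overlapping-subproblems idea (an illustration, not from the original notes): computing Fibonacci numbers with a memo table, so each subproblem is solved only once.

/* Memoized Fibonacci: each subproblem fib(i) is computed once and cached,
   turning an exponential recursion into a linear-time computation.
   The table is zero-initialized; 0 means "not yet computed". */
long long fib_memo[93];                       /* fib(92) still fits in long long */

long long fib(int n)
{
    if (n <= 1) return n;                     /* base cases: fib(0)=0, fib(1)=1 */
    if (fib_memo[n] != 0) return fib_memo[n]; /* overlapping subproblem: reuse */
    fib_memo[n] = fib(n - 1) + fib(n - 2);
    return fib_memo[n];
}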
4. Branch and Bound Algorithm.
In a branch and bound algorithm, a given problem that cannot be bounded has to be divided into at least two new restricted subproblems. Branch and bound algorithms can be slow: in the worst case they require effort that grows exponentially with problem size. But in some cases the method converges with much less effort. Branch and bound algorithms are methods for global optimization in non-convex problems.

5. Backtracking Algorithm.

Backtracking algorithms try each possibility until they find the right one. A backtracking algorithm performs a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the search backtracks to the choice point, the place which presented different alternatives, and tries the next alternative; if there are no more choice points, the search fails. A small sketch follows.
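A minimal backtracking sketch (hypothetical, for illustration): deciding whether some subset of an array sums to a given target, backing up whenever a partial choice cannot lead to a solution.

/* Backtracking subset-sum: at each index there are two alternatives
   (include or exclude a[i]); if neither leads to a solution, the search
   returns to the previous choice point. */
int subset_sum(int a[], int n, int i, int target)
{
    if (target == 0) return 1;                   /* success: subset found */
    if (i == n) return 0;                        /* no alternatives left: backtrack */
    if (subset_sum(a, n, i + 1, target - a[i]))  /* alternative 1: include a[i] */
        return 1;
    return subset_sum(a, n, i + 1, target);      /* alternative 2: exclude a[i] */
}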
6. Randomized Algorithm.

A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation.
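As a minimal sketch (the function name and the use of rand() as the source of random bits are assumptions for illustration), random bits can be used to choose a pivot, as in randomized quicksort:

#include <stdlib.h>

/* Randomized pivot selection: the random bits returned by rand()
   influence the computation, making adversarial worst-case inputs
   unlikely regardless of the input ordering. */
int random_pivot(int a[], int lo, int hi)
{
    int idx = lo + rand() % (hi - lo + 1);   /* uniform index in [lo, hi] */
    return a[idx];
}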

DIFFERENCE BETWEEN ALGORITHM, COMPUTATIONAL PROCEDURE AND PROGRAM

COMPUTATIONAL PROCEDURE

Algorithms that are definite and effective are also called computational
procedures. One important example of computational procedures is the
operating system of a digital computer. This procedure is designed to control the
execution of jobs, in such a way that when no jobs are available, it does not
terminate but continues in a waiting state until a new job is entered.

PROGRAM

A program is the expression of an algorithm in a programming language. Sometimes words such as procedure, function and subroutine are used synonymously for program.

The study of algorithms

An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. The study of algorithms includes many important and active areas of research. There are four distinct areas of study:

1. How to devise algorithms: Creating an algorithm is an art which may never be fully automated. There are several techniques with which you can devise new and useful algorithms. Dynamic programming is one such technique. Some of the techniques are especially useful in fields other than computer science, such as operations research and electrical engineering.
2. How to validate algorithms: Once an algorithm is devised, it is necessary to show that it computes the correct answer for all possible legal inputs. This process is referred to as algorithm validation. It is sufficient to state the algorithm in any precise way; it need not be expressed as a program. The purpose of validation is to assure us that the algorithm will work correctly, independently of the issues concerning the programming language it will eventually be written in. Once the validity of the method has been shown, a
program can be written and a second phase begins. This phase is referred to as
program proving or program verification. A proof of correctness requires that
the solution be stated in two forms. One form is usually as a program which is
annotated by a set of assertions about the input and output variables of the
program. These assertions are often expressed in predicate calculus. The second
form is called a specification, and this may also be expressed in the predicate
calculus. A proof consists of showing that these two forms are equivalent in that
for every given legal input, they describe the same output. A complete proof of
program correctness requires that each statement of the programming language
be precisely defined and all basic operations be proved correct.

3. How to analyze algorithms: This field of study is called analysis of algorithms. As an algorithm is executed, it uses the computer's central processing unit (CPU) to perform operations and its memory (both immediate and auxiliary) to hold the program and data. Analysis of algorithms or
performance analysis refers to the task of determining how much computing
time and storage an algorithm requires. An important result of this study is that
it allows you to make quantitative judgments about the value of one algorithm
over another. Another result is that it allows you to predict whether the software
will meet any efficiency constraints that exist. Questions such as how well an
algorithm performs in the best case, in the worst case, or on the average are
typical.
4. How to test a program: - testing a program consists of two phases: debugging
and profiling (or performance measurement). Debugging is the process of
executing programs on sample data sets to determine whether faulty results
occur and, if so, to correct them. In cases in which we cannot verify the
correctness of output on sample data, the following strategy can be employed:
let more than one programmer develop programs for the same problem, and
compare the outputs produced by these programs. If the outputs match, then
there is a good chance that they are correct. A proof of correctness is much
more valuable than a thousand tests, since it guarantees that the program will
work correctly for all possible inputs. Profiling or performance measurement is
the process of executing a correct program on data sets and measuring the time
and space it takes to compute the results. These timing figures are useful in that
they may confirm a previously done
analysis and point out logical places to perform useful optimization.

DESIGN AND ANALYSIS OF ALGORITHMS Page 5


MODULE-I

PSEUDOCODE CONVENTIONS
We can describe an algorithm in many ways. We can use a natural language like English, although if we select this option, we must make sure that the resulting instructions are definite. We can also present most of our algorithms using a pseudocode that resembles C.
1. Comments begin with // and continue until the end of the line.

Eg: count := count + 1; // count is global; it is initially zero.


2. Blocks are indicated with matching braces: { and }. A compound statement can be represented as a block. The body of a procedure also forms a block. Statements are delimited by ;

Eg: for j := 1 to n do
{
    count := count + 1;
    c[i, j] := a[i, j] + b[i, j];
    count := count + 1;
}
3. An identifier begins with a letter. The data types of variables are not explicitly declared. The types will be clear from the context. Whether a variable is global or local to a procedure will also be evident from the context. Compound data types can be formed with records.

Eg: node=record
{
datatype_1 data_1;
:
datatype_n data_n;
node *link;
}
4. Assignment of values to variables is done using the assignment statement

<variable> := <expression>;
Eg: count:= count+1;
5. There are two Boolean values, true and false. In order to produce these values, the logical operators and, or, and not and the relational operators <, <=, =, !=, >=, and > are provided.

Eg: if (j>1) then k:=i-1; else k:=n-1;


6. Elements of multidimensional arrays are accessed using [ and ]. For example, if A is a two-dimensional array, the (i, j)th element of the array is denoted A[i, j]. Array indices start at zero.


7. The following looping statements are employed: for,while and repeat until.
The while loop takes the following form.

while (condition) do
{
    <statement 1>
    ...
    <statement n>
}
8. A conditional statement has the following forms:

If < condition > then <statement>


If<condition> then <statement 1> else <statement 2>
Here < condition > is a Boolean expression and <statement>,<statement 1>, and
< statement 2> are arbitrary statements.
9. Input and output are done using the instructions read and write. No format is
used to specify the size of input or output quantities.

Eg: write ("n is even");


10. There is only one type of procedure: Algorithm. An algorithm consists of a
heading and a body. The heading takes the form

Algorithm Name(<parameter list>)

RECURSIVE ALGORITHMS

A function that calls itself repeatedly until some condition is satisfied is called a recursive function. An algorithm that does this is called a recursive algorithm. Using recursion, we reduce a complex problem to its simplest case; the recursive function only knows how to solve that simplest case directly.
TYPES OF RECURSION:

Linear Recursion
A linear recursive function is a function that only makes a single call to itself
each time the function runs (as opposed to one that would call itself multiple
times during its execution). The factorial function is a good example of linear
recursion. Another example of a linear recursive function would be one to
compute the square root of a number using Newton's method (assume
EPSILON to be a very small number close to 0):
double my_sqrt(double x, double a) {
    /* a is the current guess; stop when a*a is within EPSILON of x */
    double difference = a*a - x;
    if (difference < 0.0) difference = -difference;
    if (difference < EPSILON) return(a);
    else return(my_sqrt(x, (a+x/a)/2.0));
}
Tail recursive
Tail recursion is a form of linear recursion. In tail recursion, the recursive call is
the last thing the function does. Often, the value of the recursive call is returned.
As such, tail recursive functions can often be easily implemented in an iterative
manner; by taking out the recursive call and replacing it with a loop, the same
effect can generally be achieved. In fact, a good compiler can recognize tail
recursion and convert it to iteration in order to optimize the performance of the
code.
A good example of a tail recursive function is a function to compute the GCD, or Greatest Common Divisor, of two numbers:
int gcd(int m, int n)
{
    int r;
    if (m < n) return gcd(n, m);
    r = m % n;
    if (r == 0) return(n);
    else return(gcd(n, r));
}
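Since the recursive call is the last action the function performs, it can be converted mechanically into a loop. A sketch of the iterative equivalent (an illustration, not part of the original notes):

/* Iterative equivalent of the tail-recursive gcd: the recursive call
   is replaced by updating the parameters and looping. */
int gcd_iterative(int m, int n)
{
    if (m < n) { int t = m; m = n; n = t; }  /* ensure m >= n */
    while (n != 0) {
        int r = m % n;    /* same remainder the recursive call would receive */
        m = n;
        n = r;
    }
    return m;
}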
Binary Recursive
Some recursive functions don't just have one call to themselves; they have two (or more). Functions with two recursive calls are referred to as binary recursive functions. The mathematical combinations operation is a good example of a function that can quickly be implemented as a binary recursive function. The number of combinations, often represented as nCk, where we are choosing k elements out of a set of n elements, can be implemented as follows:
int choose(int n, int k)
{
    if (k == 0 || n == k) return(1);
    else return(choose(n-1, k) + choose(n-1, k-1));
}
Exponential recursion

An exponential recursive function is one that, if you were to draw out a representation of all the function calls, would have an exponential number of calls in relation to the size of the data set (exponential meaning that if there were n elements, there would be O(a^n) function calls, where a is a positive constant). A good example of an exponentially recursive function is a function to compute all the permutations of a data set. Let's write a function to take an array of n integers and print out every permutation of it.
void print_array(int arr[], int n)
{
    int i;
    for (i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
}

void print_permutations(int arr[], int n, int i)
{
    int j, swap;
    if (i == n - 1) {               /* a complete permutation has been built */
        print_array(arr, n);
        return;
    }
    for (j = i; j < n; j++) {
        swap = arr[i]; arr[i] = arr[j]; arr[j] = swap;
        print_permutations(arr, n, i + 1);
        swap = arr[i]; arr[i] = arr[j]; arr[j] = swap;
    }
}
To run this function on an array arr of length n, we'd do print_permutations(arr,
n, 0) where the 0 tells it to start at the beginning of the array.
Nested Recursion

In nested recursion, one of the arguments to the recursive function is the recursive function itself! These functions tend to grow extremely fast. A good example is the classic mathematical function, Ackermann's function. It grows very quickly (even for small values of m and n, Ackermann(m, n) is extremely large), and it cannot be computed with only definite iteration (a completely defined for() loop, for example); it requires indefinite iteration (recursion, for example).
Ackerman's function
int ackerman(int m, int n)
{
if (m == 0) return(n+1);
else if (n == 0) return(ackerman(m-1,1));
else return(ackerman(m-1,ackerman(m,n-1)));
}

Mutual Recursion

A recursive function doesn't necessarily need to call itself. Some recursive functions work in pairs or even larger groups. For example, function A calls function B, which calls function C, which in turn calls function A. A simple example of mutual recursion is a pair of functions to determine whether an integer is even or odd.
int is_even(unsigned int n)
{
    if (n == 0) return 1;
    else return(is_odd(n-1));
}

int is_odd(unsigned int n)
{
    return (!is_even(n));
}

Recursive Algorithms

A recursive algorithm is an algorithm which calls itself with "smaller (or simpler)" input values, and which obtains the result for the current input by applying simple operations to the returned value for the smaller (or simpler) input. More generally, if a problem can be solved utilizing solutions to smaller versions of the same problem, and the smaller versions reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem. For example, the elements of a recursively defined set, or the value of a recursively defined function, can be obtained by a recursive algorithm. Recursive programs require more memory and computation compared with iterative algorithms, but they are simpler and in many cases a natural way of thinking about the problem.

For example, consider the factorial of a number n:

n! = n*(n-1)*(n-2)*...*2*1, and 0! = 1.

A function to calculate the factorial can then be written as:


int factorial(int n)
{
if (n == 0)
return 1;
else
return (n * factorial(n-1));
}

factorial(0) => 1

factorial(3)
=> 3 * factorial(2)
=> 3 * 2 * factorial(1)
=> 3 * 2 * 1 * factorial(0)
=> 3 * 2 * 1 * 1
=> 6

This corresponds very closely to what actually happens on the execution stack in the computer's memory.

EXAMPLES OF RECURSIVE ALGORITHMS:

The Towers of Hanoi

The Towers of Hanoi puzzle (TOH) was first posed by a French professor, Édouard Lucas, in 1883. Although commonly sold today as a children's toy, it is often discussed in discrete mathematics or computer science books because it provides a simple example of recursion. In addition, its analysis is straightforward and it has many variations of varying difficulty.
The object of the Towers of Hanoi problem is to specify the steps required to move the disks (or, as we will sometimes call them, rings) from pole r (r = 1, 2, or 3) to pole s (s = 1, 2, or 3; s ≠ r), observing the following rules:

i) Only one disk at a time may be moved.

ii) At no time may a larger disk be on top of a smaller one.

The most common form of the problem has r = 1 and s = 3.


Figure: The Towers of Hanoi problem
Solution: The algorithm to solve this problem exemplifies the recursive paradigm. We imagine that we know a solution for n − 1 disks ("reduce to a previous case"), and then we use this solution to solve the problem for n disks.
Thus to move n disks from pole 1 to pole 3, we would:
1. Move n − 1 disks (the imagined known solution) from pole 1 to pole 2.
However we do this, the nth disk on pole 1 will never be in our way because
any valid sequence of moves with only n −1 disks will still be valid if there is
an nth (larger) disk always sitting at the bottom of pole 1 (why?).
2. Move disk n from pole 1 to pole 3.
3. Use the same method as in Step 1 to move the n −1 disks now on pole 2 to
pole 3

Algorithm

Algorithm TowersOfHanoi(n, x, y, z)
// Move the top n disks from tower x to tower y, using tower z as intermediate.
{
    if (n >= 1) then
    {
        TowersOfHanoi(n-1, x, z, y);
        write ("move top disk from tower", x, "to top of tower", y);
        TowersOfHanoi(n-1, z, y, x);
    }
}
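A direct C translation of the algorithm above might look as follows (a sketch; passing the pole names as characters and the main() driver are illustrative assumptions):

#include <stdio.h>

/* Move the top n disks from pole x to pole y, using pole z as intermediate. */
void towers_of_hanoi(int n, char x, char y, char z)
{
    if (n >= 1) {
        towers_of_hanoi(n - 1, x, z, y);  /* step 1: move n-1 disks out of the way */
        printf("move top disk from tower %c to top of tower %c\n", x, y);
        towers_of_hanoi(n - 1, z, y, x);  /* step 3: move them onto the target pole */
    }
}

int main(void)
{
    towers_of_hanoi(3, '1', '3', '2');    /* the common case r = 1, s = 3 */
    return 0;
}

The number of moves for n disks satisfies T(n) = 2T(n-1) + 1 with T(0) = 0, which solves to T(n) = 2^n - 1.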

Space and Time Complexity

Two important ways to characterize the effectiveness of an algorithm are its space complexity and its time complexity.

Space Complexity
The space complexity of an algorithm is the amount of memory needed by the program for its completion. The space needed by a program has the following components:

1. Instruction Space
Space needed to store the compiled version of the program. It depends on:
i. the compiler used;
ii. the options specified at the time of compilation (e.g., whether optimization is specified, whether an overlay option is used); and
iii. the target computer (e.g., whether floating-point arithmetic is performed in hardware or in software).

2. Data Space
Space needed to store constant and variable values. It has two
components:
i. Space for constants and simple variables:
   e.g., the value 3 and the variables a, b, c in Program 1.1

Program 1.1
int add (int a, int b, int c)
{
return (a+b+c)/3;
}

ii. Space for component variables like arrays, structures, dynamically


allocated memory.

e.g., variables a in program 1.2

Program 1.2
int Radd (int a[], int n)
1 {
2 If (n>0)
3 return Radd (a, n-1) + a[n-1];
4 else
5 return 0;
6 }

3. Environment stack space


Environment stack is used to store information to resume execution of
partially completed functions. When a function is invoked, following data
are stored in Environment stack.
i. Return address.
ii. Value of local and formal variables.
iii. Binding of all reference and constant reference parameters.

Space needed by the program can be divided into two parts.


i. Fixed part independent of instance characteristics. E.g., code space,
simple variables, fixed size component variables etc.
ii. Variable part: space for component variables whose size depends on the particular instance, values of local and formal variables, etc.
Hence we can write the space complexity as
S(P) = c + Sp (instance characteristics)

Example
Refer Program 1.1
One word for variables a,b,c. No instance characteristics. Hence Sp(TC) = 0

Example
Program 1.3
int Aadd (int *a, int n)
1 {
2 int s=0;
3 for (i=0; i<n; i++)
4 s+ = a[i];
5 return s;
6 }
One word for variables n and i. Space for a[] is address of a[0]. Hence it
requires one word. No instance characteristics. Hence Sp(TC) = 0

Example
Refer Program 1.2
Instance characteristics depend on the value of n. The recursion stack space includes space for formal parameters, local variables, and the return address: one word each for a[], n, the return address, and the returned value. Hence each pass needs 4 words, and the total recursion stack space needed is 4n.
Hence Sp(TC) = 4n.

Time Complexity

Time complexity of an algorithm is the amount of time needed by the program for its completion. The time taken is the sum of the compile time and the execution time. Compile time does not depend on the instance characteristics, hence we can ignore it.

Program step: A program step is a syntactically or semantically meaningful segment of a program whose execution time is independent of the instance characteristics. We can calculate complexity in terms of the following kinds of statements:

1. Comments:
No executables, hence step count = 0

2. Declarative Statements:
These define or characterize variables and constants (int, long, enum, ...), enable data types (class, struct, union, template), determine access (public, private, protected, friend), or characterize functions (void, virtual). All of the above are non-executable, hence step count = 0.

3. Expressions and Assignment Statements:


Simple expressions: Step count = 1. But if an expression contains a function call, the step count is the cost of invoking the function. This will be large if parameters are passed by value, because the values of the actual parameters must be assigned to the formal parameters.

Assignment statements: The general form is <variable> = <expr>. Step count = step count of <expr>, unless the size of <variable> is a function of the instance characteristics, e.g., a = b, where a and b are structures. In that case, step count = size of <variable> + step count of <expr>.

4. Iterative Statements:


While <expr> do
Do .. While <expr>
Step count = Number of step count assignable to <expr>

For (<init-stmt>; <expr1>; <expr2>)


Step count = 1, unless the <init-stmt>, <expr1>,<expr2> are function of
instance characteristics. If so, first execution of control part has step
count as sum of count of <init-stmt> and <expr1>. For remaining
executions, control part has step count as sum of count of <expr1> and
<expr2>.

5. Switch Statements:
Switch (<expr>) {
Case cond1 : <statement1>
Case cond2 : <statement2>
.
.
Default : <statement>
}

Switch (<expr>) has step count = cost of <expr>


Cost of Cond statements is its cost plus cost of all preceding statements.

6. If-else Statements:
If (<expr>) <statement1>;
Else <statement2>;
Step count of If and Else is the cost of <expr>.

7. Function invocation:
All function invocations have step count = 1, unless the function has parameters passed by value whose size depends on the instance characteristics. If so, the step count is the sum of the sizes of these values. If the function being invoked is recursive, consider its local variables also.

8. Memory management Statements:


new object, delete object, sizeof(object), Step count =1.

9. Function Statements:
Step count = 0, cost is already assigned to invoking statements.

10. Jump Statements:
continue, break, and goto have step count = 1.
return <expr>: Step count = 1 if <expr> is not a function of the instance characteristics; if it is, its cost must be considered as well.

Example
Refer Program 1.2
Introducing a counter for each executable line we can rewrite the program as

int Radd (int a[], int n)
{
    count++;          // for the if conditional
    if (n > 0)
    {
        count++;      // for the return
        return Radd(a, n-1) + a[n-1];
    }
    else
    {
        count++;      // for the return
        return 0;
    }
}
Case 1: n = 0
tRadd = 2

Case 2: n > 0
tRadd = 2 + tRadd(n-1)
      = 2 + 2 + tRadd(n-2)
      = 2·2 + tRadd(n-2)
      ...
      = 2n + tRadd(0)
      = 2n + 2

Example
Program 1.4
int Madd (int a[][], int b[][], int c[][], int m, int n)
1 {
2 For (int i=0; i<m; i++)
3 For (int j=0; j<n; j++)
4 c[i][j] = a[i][j] + b[i][j];
5 }

Introducing a counter for each executable line we can rewrite the program as

int Madd (int a[][], int b[][], int c[][], int m, int n)
{
    for (int i = 0; i < m; i++)
    {
        count++;      // for i
        for (int j = 0; j < n; j++)
        {
            count++;  // for j
            c[i][j] = a[i][j] + b[i][j];
            count++;  // for assignment
        }
        count++;      // for last j
    }
    count++;          // for last i
}
Step count is 2mn + 2m +1.

The step count alone does not reflect the complexity of a statement; this is captured by the steps per execution (s/e).

Refer Program 1.2


Line | s/e             | Frequency    | Total steps
     |                 | n=0   n>0    | n=0   n>0
  1  | 0               | 1     1      | 0     0
  2  | 1               | 1     1      | 1     1
  3  | 1 + tRadd(n-1)  | 0     1      | 0     1 + tRadd(n-1)
  4  | 0               | 1     0      | 0     0
  5  | 1               | 1     0      | 1     0
Total number of steps:                  2     2 + tRadd(n-1)

Refer Program 1.3


Line | s/e | Frequency | Total steps
  1  | 0   | 1         | 0
  2  | 1   | 1         | 1
  3  | 1   | n+1       | n+1
  4  | 1   | n         | n
  5  | 1   | 1         | 1
  6  | 0   | 1         | 0
Total number of steps: 2n + 3


Refer Program 1.4


Line | s/e | Frequency | Total steps
  1  | 0   | 1         | 0
  2  | 1   | m+1       | m+1
  3  | 1   | m(n+1)    | m(n+1)
  4  | 1   | mn        | mn
  5  | 0   | 1         | 0
Total number of steps: 2mn + 2m + 1

Asymptotic Notations

The step count is used to compare the time complexity of two programs that compute the same function, and also to predict the growth in run time as the instance characteristics change. Determining the exact step count is difficult, and it is not necessary either. Since the values are not exact quantities, we need only comparative statements like c1·n^2 ≤ tp(n) ≤ c2·n^2.

For example, consider two programs with complexities c1·n^2 + c2·n and c3·n respectively. For small values of n, the comparison depends upon the values of c1, c2 and c3. But there will also be an n beyond which the complexity c3·n is better than that of c1·n^2 + c2·n. This value of n is called the break-even point. If this point is zero, c3·n is always faster (or at least as fast). Common asymptotic functions are given below.

Function | Name
1        | Constant
log n    | Logarithmic
n        | Linear
n log n  | n log n
n^2      | Quadratic
n^3      | Cubic
2^n      | Exponential
n!       | Factorial

Big ‘Oh’ Notation (O)

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤
cg(n) for all n ≥ n0 }


It is the upper bound of any function; hence it denotes the worst-case complexity of any algorithm. We can represent it graphically as

Fig 1.1

Find the Big 'Oh' for the following functions:

Linear Functions
Example 1.6
f(n) = 3n + 2

General form is f(n) ≤ cg(n)

When n ≥ 2, 3n + 2 ≤ 3n + n = 4n
Hence f(n) = O(n), here c = 4 and n0 = 2

When n ≥ 1, 3n + 2 ≤ 3n + 2n = 5n
Hence f(n) = O(n), here c = 5 and n0 = 1

Hence we can have different c,n0 pairs satisfying for a given function.

Example 1.7
f(n) = 3n + 3
When n ≥ 3, 3n + 3 ≤ 3n + n = 4n
Hence f(n) = O(n), here c = 4 and n0 = 3

Example 1.8
f(n) = 100n + 6
When n ≥ 6, 100n + 6 ≤ 100n + n = 101n
Hence f(n) = O(n), here c = 101 and n0 = 6

Quadratic Functions
DESIGN AND ANALYSIS OF ALGORITHMS Page 20
MODULE-I

Example 1.9
f(n) = 10n^2 + 4n + 2
When n ≥ 2, 10n^2 + 4n + 2 ≤ 10n^2 + 5n
When n ≥ 5, 5n ≤ n^2, so 10n^2 + 4n + 2 ≤ 10n^2 + n^2 = 11n^2
Hence f(n) = O(n^2), here c = 11 and n0 = 5

Example 1.10
f(n) = 1000n^2 + 100n − 6
f(n) ≤ 1000n^2 + 100n for all values of n.
When n ≥ 100, 100n ≤ n^2, so f(n) ≤ 1000n^2 + n^2 = 1001n^2
Hence f(n) = O(n^2), here c = 1001 and n0 = 100

Exponential Functions
Example 1.11
f(n) = 6·2^n + n^2
When n ≥ 4, n^2 ≤ 2^n
So f(n) ≤ 6·2^n + 2^n = 7·2^n
Hence f(n) = O(2^n), here c = 7 and n0 = 4

Constant Functions
Example 1.12
f(n) = 10
f(n) = O(1), because f(n) ≤ 10*1

Omega Notation (Ω)

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

It is the lower bound of any function; hence it denotes the best-case complexity of any algorithm. We can represent it graphically as

Fig 1.2
Example 1.13


f(n) = 3n + 2
3n + 2 > 3n for all n.
Hence f(n) = Ω(n)
Similarly we can solve all the examples specified under Big 'Oh'.

Theta Notation (Θ)


Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
If f(n) = Θ(g(n)), then for all values of n to the right of n0, f(n) lies on or above c1·g(n) and on or below c2·g(n). Hence it is an asymptotically tight bound for f(n).

Fig 1.3
Example 1.14
f(n) = 3n + 2
f(n) = Θ(n) because f(n) = O(n) with c2 = 4 and f(n) = Ω(n) with c1 = 3, for n ≥ 2.

Similarly we can solve all the examples specified under Big 'Oh'.

Little ‘Oh’ Notation (o)


o(g(n)) = { f(n) : for every positive constant c > 0, there exists an n0 > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0 }

It defines a strict (non-tight) upper bound. The main difference from Big Oh is that Big Oh requires the bound to hold for some constant c, whereas Little Oh requires it to hold for every constant c > 0.

Little Omega (ω)


ω(g(n)) = { f(n) : for every positive constant c > 0, there exists an n0 > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0 }
It defines a strict (non-tight) lower bound. The main difference from Ω is that Ω requires the bound to hold for some constant c, whereas ω requires it to hold for every constant c > 0.


Recurrence Relations

A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs, and one or more base cases.

E.g., the recurrence for Merge-Sort:

T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1

• Useful for analyzing recursive algorithms
• Makes it easier to compare the complexity of two algorithms
• Methods for solving recurrences
– Substitution method
– Recursion tree method
– Master method
– Iteration method

Substitution Method
 Use mathematical induction to derive an answer
 Derive a function of n (or other variables used to express the size of the
problem) that is not a recurrence so we can establish an upper and/or
lower bound on the recurrence
 May get an exact solution or may just get upper or lower bounds on the
solution

Steps
 Guess the form of the solution


 Use mathematical induction to find constants (or show that they can be found) and to prove that the answer is correct

Example
Show that the recurrence relation T(n) = 2T(n/2) + n is O(n lgn).

Guess the solution as T(n) = O(n lgn), i.e., T(n) ≤ c·n·lgn for some constant c > 0.
Assume the bound holds for n/2, i.e., T(n/2) ≤ c(n/2) lg(n/2). Substituting in T(n), we get
T(n) ≤ 2c(n/2) lg(n/2) + n
     = cn lg(n/2) + n
     = cn lgn − cn lg2 + n
     = cn lgn − cn + n
     ≤ cn lgn, for c ≥ 1.
To prove this using mathematical induction, we also have to show that the solution holds for the boundary condition. We select the boundary condition as n ≥ 2, because for n = 1, T(1) ≤ c·1·lg1 = 0, which is false according to the definition of T(n).

Example
Show that the recurrence relation
• T(n) = c + T(n/2) is O(lgn)

• Guess: T(n) = O(lgn)


– Induction goal: T(n) ≤ d lgn, for some d and n ≥ n0
– Induction hypothesis: T(n/2) ≤ d lg(n/2)
• Proof of induction goal:
T(n) = T(n/2) + c ≤ d lg(n/2) + c
     = d lgn − d + c ≤ d lgn
if −d + c ≤ 0, i.e., d ≥ c

Example
Show that the recurrence relation T(n) = T(n-1) + n is O(n^2).

• Guess: T(n) = O(n^2)
  – Induction goal: T(n) ≤ cn^2, for some c and n ≥ n0
  – Induction hypothesis: T(k) ≤ ck^2 for all k < n; in particular, T(n−1) ≤ c(n−1)^2
• Proof of induction goal:
T(n) = T(n−1) + n ≤ c(n−1)^2 + n
     = cn^2 − (2cn − c − n) ≤ cn^2
if 2cn − c − n ≥ 0, i.e., c ≥ n/(2n−1), i.e., c ≥ 1/(2 − 1/n)
  – For n ≥ 1, 2 − 1/n ≥ 1, so any c ≥ 1 will work

Example
Show that the recurrence relation T(n) = 2T(n/2) + n is O(n lgn).
• Guess: T(n) = O(n lgn)
  – Induction goal: T(n) ≤ cn lgn, for some c and n ≥ n0
  – Induction hypothesis: T(n/2) ≤ c(n/2) lg(n/2)
• Proof of induction goal:
T(n) = 2T(n/2) + n ≤ 2c(n/2) lg(n/2) + n
     = cn lgn − cn + n ≤ cn lgn
if −cn + n ≤ 0, i.e., c ≥ 1

Recursion Tree Method

 The main disadvantage of the substitution method is that it is always difficult to come up with a good guess
 The recursion tree method allows you to make a good guess for the substitution method
 It allows you to visualize the process of iterating the recurrence

Steps
 Convert the recurrence into a tree.
 Each node represents the cost of a single sub problem somewhere in the
set of recursive function invocations
 Sum the costs within each level of the tree to obtain a set of per-level
costs
 Sum all the per-level costs to determine the total cost of all levels of the
recursion

Example 1.18
T(n) = 3T(n/4) + Θ(n^2), which we rewrite as T(n) = 3T(n/4) + cn^2.


• The subproblem size for a node at depth i is n/4^i.
  – When the subproblem size is 1, n/4^i = 1, so i = log4(n).
  – The tree has log4(n) + 1 levels (0, 1, 2, ..., log4(n)).
• The cost at each level of the tree (levels 0, 1, 2, ..., log4(n)−1):
  – The number of nodes at depth i is 3^i.
  – Each node at depth i has a cost of c(n/4^i)^2.
  – The total cost over all nodes at depth i is 3^i · c(n/4^i)^2 = (3/16)^i cn^2.
• The cost at depth log4(n):
  – The number of nodes is 3^(log4 n) = n^(log4 3).
  – Each contributes cost T(1).
  – The total cost is n^(log4 3) · T(1) = Θ(n^(log4 3)).

T(n) = cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + ... + (3/16)^(log4(n)−1) cn^2 + Θ(n^(log4 3))
     = Σ (i = 0 to log4(n)−1) (3/16)^i cn^2 + Θ(n^(log4 3))
     < Σ (i = 0 to ∞) (3/16)^i cn^2 + Θ(n^(log4 3))
     = cn^2 · 1/(1 − 3/16) + Θ(n^(log4 3))
     = (16/13) cn^2 + Θ(n^(log4 3))
     = O(n^2)

Prove that T(n) = O(n^2) is an upper bound by the substitution method, i.e., T(n) ≤ dn^2 for some constant d > 0:

T(n) = 3T(n/4) + cn^2
     ≤ 3d(n/4)^2 + cn^2
     = (3/16) dn^2 + cn^2
     ≤ dn^2, whenever d ≥ (16/13)c.

Example
W(n) = 2W(n/2) + Θ(n^2)

• Subproblem size at level i is n/2^i.
• Subproblem size hits 1 when 1 = n/2^i, i.e., i = lgn.
• Cost of a subproblem at level i is (n/2^i)^2; the number of nodes at level i is 2^i.
• Total cost:

W(n) = Σ (i = 0 to lgn−1) 2^i (n/2^i)^2 + 2^(lgn) W(1)
     = n^2 Σ (i = 0 to lgn−1) (1/2)^i + n
     < n^2 Σ (i = 0 to ∞) (1/2)^i + O(n)
     = 2n^2 + O(n)

Hence W(n) = O(n^2).

Example
W(n) = W(n/3) + W(2n/3) + O(n)

• The longest path from the root to a leaf is n → (2/3)n → (2/3)^2 n → ... → 1.
• The subproblem size hits 1 when 1 = (2/3)^i n, i.e., i = log3/2(n).
• The cost of the problems at each level is at most n.
• Total cost:

W(n) ≤ n + n + ... + n = n · log3/2(n) = n · lgn / lg(3/2)

Hence W(n) = O(n lgn).

Example
T(n) = T(n/4) + T(n/2) + n^2

(Drawing the recursion tree, the root costs n^2 and the next level costs (1/4)^2 n^2 + (1/2)^2 n^2 = (5/16)n^2; the per-level costs form a geometric series with ratio 5/16, so summing them gives T(n) = O(n^2).)

Solving Recurrences with the Iteration Method

In the iteration method we iteratively "unfold" the recurrence until we "see the pattern". The iteration method does not require making a good guess like the substitution method (but it is often more involved than using induction).


Example: Solve T(n) = 8T(n/2) + n^2, with T(1) = 1.

T(n) = n^2 + 8T(n/2)
     = n^2 + 8(8T(n/2^2) + (n/2)^2)
     = n^2 + 8^2 T(n/2^2) + 8(n^2/4)
     = n^2 + 2n^2 + 8^2 T(n/2^2)
     = n^2 + 2n^2 + 8^2 (8T(n/2^3) + (n/2^2)^2)
     = n^2 + 2n^2 + 8^3 T(n/2^3) + 8^2 (n^2/4^2)
     = n^2 + 2n^2 + 2^2 n^2 + 8^3 T(n/2^3)
     = ...
     = n^2 + 2n^2 + 2^2 n^2 + ... + 2^(lgn−1) n^2 + 8^(lgn) T(1)
     = n^2 (2^(lgn) − 1) + n^3
     = n^2 (n − 1) + n^3
     = Θ(n^3)

PROFILING

Profiling, or performance measurement, is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results.

NONDETERMINISTIC ALGORITHMS

The notation for algorithms that we have been using has the property that the result of every operation is uniquely defined. Algorithms with this property are termed deterministic algorithms. Such algorithms agree with the way programs are executed on a computer. In a theoretical framework, we can remove this restriction on the outcome of every operation. We can allow algorithms to contain operations whose outcomes are not uniquely defined but are limited to a specified set of possibilities. The machine executing such operations is allowed to choose any one of these outcomes, subject to a termination condition to be defined later. This leads to the concept of nondeterministic algorithms. To specify such algorithms, we introduce three functions.

1. Choice(S) arbitrarily chooses one of the elements of the set S.
2. Failure() signals an unsuccessful completion.
3. Success() signals a successful completion.

The assignment statement x := Choice(1, n) could result in x being assigned any one of the integers in the range [1, n]. There is no rule specifying how this choice is to be made. The Failure() and Success() signals are used to define a computation of the algorithm. These statements cannot be used to effect a return. Whenever there is a set of choices that leads to a successful completion, then one such set of choices is always made and the algorithm terminates successfully. A nondeterministic algorithm terminates unsuccessfully if and only if there exists no set of choices leading to a success signal. The computing times for Choice, Failure, and Success are taken to be O(1). A machine capable of executing a nondeterministic algorithm in this way is called a nondeterministic machine. Although nondeterministic machines do not exist in practice, they provide strong intuitive reasons to conclude that certain problems cannot be solved by fast deterministic algorithms.
Example
Consider the problem of searching for an element x in a given set of elements A[1:n], n > 1. We are required to determine an index j such that A[j] = x, or j = 0 if x is not in A.

1. j := Choice(1, n);
2. if A[j] = x then { write(j); Success(); }
3. write(0); Failure();

From the way a nondeterministic computation is defined, it follows that 0 can be the output if and only if there is no j such that A[j] = x.
The complexity of the nondeterministic search algorithm is O(1). Since A is not ordered, every deterministic search algorithm is of complexity Ω(n).

DETERMINISTIC ALGORITHM FOR n > p

There is a deterministic algorithm for selection whose run time is O((n/p) log log p + log n). The basic idea of this algorithm is the same as that of the sequential algorithm. The sequential algorithm partitions the input into groups (of size, say, 5), finds the median of each group, and recursively computes the median (call it M) of these group medians. Then the rank rM of M in the input is computed, and all elements of the input that are either ≤ M or > M are dropped, depending on whether i > rM or i ≤ rM, respectively. Finally, an appropriate selection is performed recursively from the remaining keys. The run time of this sequential algorithm is O(n).

AMORTIZED COMPLEXITY

The complexity of a method or operation is the actual complexity of the method/operation. The actual complexity of an operation is
determined by the step count for that operation, and the actual complexity of a
sequence of operations is determined by the step count for that sequence. The
actual complexity of a sequence of operations may be determined by adding
together the step counts for the individual operations in the sequence. Typically,
determining the step count for each operation in the sequence is quite difficult,
and instead, we obtain an upper bound on the step count for the sequence by
adding together the worst-case step count for each operation.

Example: Consider the method insert of Program 2.10. This method inserts an element into a sorted array, and its step count ranges from a low of 4 to a high of 2n+4, where n is the number of elements already in the array. Suppose we perform 5 insert operations beginning with n = 0. Further, suppose that the actual step counts for these insert operations are 4, 4, 6, 10, and 8, respectively.
The actual step count for the sequence of insert operations is 4 + 4 + 6 + 10 + 8
= 32. If we did not know the actual step count for the individual operations, we
could obtain an upper bound on the actual step count for the operation sequence
using one of the following two approaches.

1. Since the worst-case step count for an insert operation is 2n+4, Σ (0 ≤ i ≤ 4) (2i+4) = 4 + 6 + 8 + 10 + 12 = 40 is an upper bound on the step count for the sequence of 5 inserts.
2. The maximum number of elements already in the array at the time an insert operation begins is 4. Therefore, the worst-case step count of an insert operation is 2*4+4 = 12. Therefore, 5*12 = 60 is an upper bound on the step count for the sequence of 5 inserts.

In the preceding example, the upper bound obtained by the first approach is
closer to the actual step count for the operation sequence. We say that the count
obtained by the first approach is a tighter (i.e., closer to the real count) upper
bound than that obtained by the second approach.
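Program 2.10 itself is not reproduced in these notes; the following is a hypothetical sketch of an insert method consistent with the step counts described above (the name and parameters are assumptions for illustration):

/* Insert x into the sorted array a[0..(*n)-1], shifting larger elements
   one slot to the right. The best case (x is largest) touches no elements;
   the worst case shifts all n elements, matching the 4 to 2n+4 range above. */
void insert(int a[], int *n, int x)
{
    int i;
    for (i = *n - 1; i >= 0 && a[i] > x; i--)
        a[i + 1] = a[i];     /* shift an element greater than x */
    a[i + 1] = x;
    (*n)++;
}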

When determining the complexity of a sequence of operations, we can, at times,


obtain tighter bounds using amortized complexity rather than worst-case
complexity. Unlike the actual and worst-case complexities of an operation
which are closely related to the step count for that operation, the amortized
complexity of an operation is an accounting artifact that often bears no direct
relationship to the actual complexity of that operation. The amortized
complexity of an operation could be anything. The only requirement is that the
sum of the amortized complexities of all operations in the sequence be greater
than or equal to the sum of the actual complexities. That is
(1)  Σ (1 ≤ i ≤ n) amortized(i) ≥ Σ (1 ≤ i ≤ n) actual(i)

where amortized(i) and actual(i), respectively, denote the amortized and actual complexities of the ith operation in a sequence of n operations. Because of this


requirement on the sum of the amortized complexities of the operations in any
sequence of operations, we may use the sum of the amortized complexities as an
upper bound on the complexity of any sequence of operations.

You may view the amortized cost of an operation as being the amount you
charge the operation rather than the amount the operation costs. You can charge
an operation any amount you wish so long as the amount charged to all
operations in the sequence is at least equal to the actual cost of the operation
sequence.

Relative to the actual and amortized costs of each operation in a sequence


of n operations, we define a potential function P(i) as below
(2) P(i) = amortized(i) - actual(i) + P(i-1)

That is, the ith operation causes the potential function to change by the
difference between the amortized and actual costs of that operation. If we sum
Equation (2) for 1 <= i <= n, we get
Σ (1 ≤ i ≤ n) P(i) = Σ (1 ≤ i ≤ n) (amortized(i) − actual(i) + P(i−1))
or
Σ (1 ≤ i ≤ n) (P(i) − P(i−1)) = Σ (1 ≤ i ≤ n) (amortized(i) − actual(i))
or
P(n) − P(0) = Σ (1 ≤ i ≤ n) (amortized(i) − actual(i))

From Equation (1), it follows that


(3) P(n) - P(0) >= 0

Under the assumption that P(0) = 0, the potential P(i) is the amount by which
the first i operations have been overcharged (i.e., they have been charged more
than their actual cost).

Generally, when we analyze the complexity of a sequence of n operations, n can


be any nonnegative integer. Therefore, Equation (3) must hold for all
nonnegative integers.

The preceding discussion leads us to the following three methods to arrive at


amortized costs for operations:

1. Aggregate Method
In the aggregate method, we determine an upper
bound UpperBoundOnSumOfActualCosts(n) for the sum of the actual
costs of the n operations. The amortized cost of each operation is set
equal to UpperBoundOnSumOfActualCosts(n)/n. You may verify that this assignment of amortized costs satisfies Equation (1) and is, therefore,
valid.

2. Accounting Method
In this method, we assign amortized costs to the operations (probably by
guessing what assignment will work), compute the P(i)s using Equation
(2), and show that P(n)-P(0) >= 0.

3. Potential Method
Here, we start with a potential function (probably obtained using good
guess work) that satisfies Equation (3), and compute the amortized
complexities using Equation (2).

The aggregate method is often difficult to use because it is often quite difficult to obtain a bound on the aggregate actual cost that is smaller than the bound obtained by using the worst-case cost of each operation in the sequence.
The accounting method is intuitive and often results in tight bounds on the
complexity of a sequence of operations. The potential method is often the
hardest to use (because of the difficulty of determining the proper potential
function to use), but for some applications remains the only way to obtain tight
complexity bounds.
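As a concrete illustration of the aggregate method (a sketch, not part of the original notes), consider appending n elements to a dynamic array that doubles its capacity whenever it is full. A single push may trigger a copy of all existing elements, but the total cost of n pushes is at most about 3n element operations, so the amortized cost per push is O(1).

#include <stdlib.h>

/* Dynamic array with doubling. A push that triggers a resize costs
   O(size), but resizes are rare enough that n pushes cost O(n) in total. */
typedef struct {
    int *data;
    int size;
    int capacity;
} DynArray;

void push(DynArray *d, int value)
{
    if (d->size == d->capacity) {
        int newcap = (d->capacity == 0) ? 1 : 2 * d->capacity;
        int *bigger = malloc(newcap * sizeof(int));
        for (int i = 0; i < d->size; i++)   /* the occasional expensive copy */
            bigger[i] = d->data[i];
        free(d->data);
        d->data = bigger;
        d->capacity = newcap;
    }
    d->data[d->size++] = value;             /* the cheap common case */
}

With the potential method, the potential function P(i) = 2*size − capacity assigns each push an amortized cost of at most 3, giving the same O(1) bound.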
