
Chandigarh College of Engineering, Jhanjeri

Department of Computer Science & Engineering

Program Name: B.Tech


Course Code: BTCS 403-18
Course Name: Design & Analysis of
Algorithms

Prepared by: Ms. Navjot Kaur


Outline
► PTU Syllabus of Unit-I
► Topic Overview
► Key objectives or learning outcomes
► Summary
► References



PTU Syllabus of Unit-I

► Module 1:
Introduction
Characteristics of algorithm. Analysis of algorithm: Asymptotic analysis of complexity
bounds – best, average and worst-case behavior; Performance measurements of
Algorithm, Time and space trade-offs, Analysis of recursive algorithms through
recurrence relations: Substitution method, Recursion tree method and Master's theorem.

[8 hrs] (CO1)



Key objectives or learning outcomes

1. For a given algorithm, analyse its worst-case running time using asymptotic
analysis and justify the correctness of the algorithm;
2. Explain when an algorithmic design situation calls for which design
paradigm (greedy, divide and conquer, backtracking, etc.);
3. Model a given engineering problem using a tree or graph, and write the
corresponding algorithm to solve it;
4. Demonstrate the ways to analyse approximation and randomized algorithms
(expected running time, probability of error); and
5. Examine the necessity of the class of NP problems and explain the use of
heuristic techniques.



CO Introduction

CO NUMBER   TOPICS                                                             LEVEL
CO1         Characteristics of algorithm. Analysis of algorithm                I
CO2         Asymptotic analysis of complexity bounds – best, average and      II
            worst-case behavior
CO3         Performance measurements of Algorithm, Time and space trade-offs  III
CO4         Analysis of recursive algorithms through recurrence relations:    IV
            Substitution method
CO5         Recursion tree method and Master's theorem                        V


UNIT-1
Algorithm:

► The word Algorithm means "a set of finite rules or instructions to be
followed in calculations or other problem-solving operations",
or
"a procedure for solving a mathematical problem in a finite number
of steps that frequently involves recursive operations".
Some characteristics of an algorithm include:

● Definiteness: Each step of an algorithm must be precise and unambiguous; a step such as "repeat a
bunch of times" is not allowed.
● Finiteness: The algorithm must terminate after a finite number of steps.
● Effectiveness: Every instruction must be basic and simple.
● Correctness: The algorithm must solve the problem accurately.
● Efficiency: The algorithm should use time and resources optimally.
● Clarity: The algorithm should be easy to understand and implement.
An algorithm is a set of step-by-step instructions that a computer executes to solve a problem or perform a
task. Algorithms can be used to solve mathematical problems as well as real-life problems.
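
As a concrete illustration (a minimal C sketch of ours, not part of the original slides), the following function finds the largest element of an array and exhibits the characteristics listed above: every step is definite, the loop is finite, and each instruction is basic.

#include <stdio.h>

/* Returns the largest element of A[0..n-1]; assumes n >= 1. */
int find_max(const int A[], int n)
{
    int max = A[0];              /* definiteness: start from the first element */
    for (int i = 1; i < n; i++)  /* finiteness: exactly n-1 iterations */
        if (A[i] > max)
            max = A[i];          /* effectiveness: a basic comparison and copy */
    return max;                  /* correctness: the answer for any valid input */
}

int main(void)
{
    int A[] = {3, 9, 2, 7};
    printf("%d\n", find_max(A, 4));  /* prints 9 */
    return 0;
}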
Algorithm Types
• Approximate algorithm: An algorithm whose exact result is infinite and
repeating (non-terminating), so the result must be approximated.
Ex: computing the decimal value of √2.

• Probabilistic algorithm: An algorithm whose solution to a problem is
uncertain. Ex: tossing a coin.

• Infinite algorithm: An algorithm which is not finite.
Ex: a complete solution of a chessboard, division by zero.

• Heuristic algorithm: An algorithm that uses rules of thumb to obtain more
output from fewer inputs (a good, though not necessarily optimal, result).
Ex: many business applications.
Issues in the study of algorithms
1. How to devise algorithms
2. How to express algorithms
3. How to validate algorithms
4. How to analyze algorithms
5. How to test a program
   i) Debugging
   ii) Profiling (or performance measuring)
Specification of algorithms
• Natural language
• Pseudocode
• Flow chart
• Program (using a programming language)
PSEUDO-CODE FOR EXPRESSING ALGORITHMS

• Comments begin with // and continue until the end of the line.
• A block of statements (compound statement) is represented using { and },
for example in an if statement, a while loop, a function, etc.
Example
{
Statement 1;
Statement 2;
.........
}
• An identifier begins with a letter. Example: sum, sum5, a; but not 5sum, 4a, etc.
• Assignment of values to variables is done using the assignment
operator := or ←. There are two Boolean values, TRUE and
FALSE. Logical operators: AND, OR, NOT.
• Relational operators: <, >, ≥, ≤, =, ≠. Arithmetic operators: +, -, *, /, %.
• The conditional statement if-then or if-then-else is written in the
following form:
• If (condition) then (statement)
• If (condition) then (statement-1) else (statement-2)
• If the condition is true, the corresponding block of statements is
executed.
Example
if(a>b) then
{
write("a is big");
}
else
{
write("b is big");
}
Review of Algorithm Analysis

► In the field of computer science and computational theory, the analysis of algorithms plays a critical role in understanding the efficiency and
performance of algorithms.
► Algorithms are the step-by-step procedures used for solving problems, and their analysis involves evaluating their time complexity, space
complexity, and sometimes other factors like correctness and stability. Here's a review of the different aspects of algorithm analysis:
► 1. Time Complexity Analysis:
• Purpose: Time complexity analysis quantifies how an algorithm's running time scales with the input's size. It helps us understand how
efficient an algorithm is in terms of time.
• Methods: Common methods for time complexity analysis include counting basic operations (e.g., comparisons, additions), using Big O
notation (e.g., O(n), O(log n)), and analyzing worst-case, average-case, and best-case scenarios.
• Applications: Time complexity analysis is crucial for selecting the most efficient algorithm for a specific problem and predicting the
algorithm's performance on large inputs.

2. Space Complexity Analysis:


• Purpose: Space complexity analysis evaluates the amount of memory or storage space an algorithm requires to solve a problem. It helps us
assess the algorithm's memory efficiency.
• Methods: Similar to time complexity, space complexity can be analyzed using Big O notation (e.g., O(n), O(log n)). It involves counting the
memory usage in terms of variables, data structures, and recursion stack.
• Applications: Space complexity analysis is essential for optimizing memory usage in resource-constrained environments and for
understanding how an algorithm's space requirements may affect its performance.
► 3. Correctness Analysis:
• Purpose: Correctness analysis ensures that an algorithm produces the correct output for all valid inputs. It involves
proving that the algorithm satisfies its intended specifications.
• Methods: Correctness can be established through formal proof methods like mathematical induction, loop
invariants, and pre/post-conditions. Testing and verification tools are also used to check correctness.
• Applications: Correctness analysis is crucial for building reliable and trustworthy software systems. It helps
identify and rectify algorithmic errors before they lead to failures.
► 4. Stability Analysis (in sorting algorithms):
• Purpose: Stability analysis is specific to sorting algorithms and determines whether the sorted output's original
order of equal elements is preserved.
• Methods: Sorting algorithms are assessed for stability by examining how they handle equal elements. Stable sorts
maintain the relative order of equal elements, while unstable sorts may change it.
• Applications: Stability is essential in scenarios where you want to maintain the original order of elements with
equal keys, such as sorting by multiple criteria.
► 5. Complexity Classes (P, NP, NP-Complete, NP-Hard):
• Purpose: Complexity classes classify problems based on their computational difficulty. P contains problems solvable
in polynomial time, while NP includes problems whose solutions can be verified in polynomial time. NP-Complete
and NP-Hard problems are important subclasses of NP.
• Methods: Problems are categorized into these classes based on their inherent difficulty and relationships to other
problems. Reductions (polynomial-time transformations) are used to prove problems' membership in these classes.
• Applications: Complexity classes help us understand the fundamental limits of computation and are essential for
identifying problems that are likely hard to solve efficiently.
► In summary,
► the analysis of algorithms is a critical discipline within computer science that enables us to evaluate and compare
algorithms' performance and efficiency. It helps us make informed decisions when selecting algorithms for specific
tasks, optimize existing algorithms, and design new ones. Correctness analysis ensures algorithms produce the correct
results, while complexity analysis classifies problems based on their computational difficulty. Stability analysis is
crucial for sorting algorithms, especially in applications where the order of equal elements matters.
A priori analysis                           A posteriori analysis

Done before running the algorithm           Done after running the algorithm
on a specific system                        on a system

Hardware independent                        Hardware dependent

Gives an approximate analysis               Gives the actual statistics of the
                                            algorithm

Based on the number of times each           Does not count statement executions;
statement is executed                       relies on actual measurements
Performance analysis
Performance Analysis: An algorithm is said to be efficient and fast if it
takes less time to execute and consumes less memory space at run time.
1. SPACE COMPLEXITY:
The space complexity of an algorithm is the amount of memory space
required by the algorithm during the course of its execution. There are
three types of space:
a) Instruction space: space for the executable program.
b) Data space: space required to store all constant and variable data.
c) Environment space: space required to store the environment information
needed to resume a suspended function.
2. TIME COMPLEXITY:
The time complexity of an algorithm is the total amount of time required by
the algorithm to complete its execution.
Asymptotic Notations
► When it comes to analysing the complexity of any algorithm in terms of time and space, we can never provide an exact
number to define the time required and the space required by the algorithm, instead we express it using some standard
notations, also known as Asymptotic Notations.
► When we analyse any algorithm, we generally get a formula to represent the amount of time required for execution or
the time required by the computer to run the lines of code of the algorithm, number of memory accesses, number of
comparisons, temporary variables occupying memory space etc. This formula often contains unimportant details that
don't really tell us anything about the running time.
► Let us take an example:
► if some algorithm has a time complexity of T(n) = n² + 3n + 4, which is a quadratic expression.
► For large values of n, the 3n + 4 part becomes insignificant compared to the n² part.
For n = 1000, n² will be 1000000 while 3n + 4 will be only 3004.
Also, when we compare the execution times of two algorithms,
the constant coefficients of higher-order terms are also
neglected.
An algorithm that takes a time of 200n² will be faster than some
other algorithm that takes n³ time, for any value of n larger than
200. Since we're only interested in the asymptotic behavior of
the growth of the function, the constant factor can be ignored
too.
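
To make the crossover concrete, the short C sketch below (our own illustration; the sample values of n are arbitrary) tabulates 200n² against n³ around n = 200; beyond that point the n³ algorithm is always slower.

#include <stdio.h>

int main(void)
{
    /* Compare 200*n^2 and n^3 near the crossover point n = 200. */
    long long ns[] = {100, 200, 400};
    for (int i = 0; i < 3; i++) {
        long long n = ns[i];
        printf("n=%lld: 200n^2=%lld, n^3=%lld\n", n, 200*n*n, n*n*n);
    }
    return 0;
}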
What is Asymptotic Behaviour
► The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).
► Remember studying about Limits in High School, this is the same.
► The only difference being, here we do not have to find the value of any expression where n is approaching any finite number or infinity, but in case of Asymptotic notations, we use the same model
to ignore the constant factors and insignificant parts of an expression, to device a better way of representing complexities of algorithms, in a single coefficient, so that comparison between
algorithms can be done easily.
► Let's take an example to understand this:
► If we have two algorithms with the following expressions representing the time required by them for execution, then:
► Expression 1: (20n2 + 3n - 4)
► Expression 2: (n3 + 100n - 2)
► Now, as per asymptotic notations, we should just worry about how the function will grow as the value of n(input) will grow, and that will entirely depend on n2 for the Expression 1, and on n3 for
Expression 2. Hence, we can clearly say that the algorithm for which running time is represented by the Expression 2, will grow faster than the other one, simply by analysing the highest power
coeeficient and ignoring the other constants(20 in 20n2) and insignificant parts of the expression(3n - 4 and 100n - 2).
► The main idea behind casting aside the less important part is to make things manageable.
► All we need to do is, first analyse the algorithm to find out an expression to define it's time requirements and then analyse how that expression will grow as the input(n) will grow.
► Types of Asymptotic Notations
► We use three types of asymptotic notations to represent the growth of any algorithm, as input increases:
• Big Theta (Θ)
• Big Oh (O)
• Big Omega (Ω)
Tight Bounds: Theta
► When we say tight bounds, we mean that the time complexity represented by the Big-Θ notation is like the average value or
range within which the actual time of execution of the algorithm will lie.
► For example, if for some algorithm the time complexity is represented by the expression 3n² + 5n, and we use the Big-Θ
notation to represent it, then the time complexity would be Θ(n²), ignoring the constant coefficient and removing the
insignificant part, which is 5n.
► Here, in the example above, a complexity of Θ(n²) means that the average time for any input n will remain between k1 ·
n² and k2 · n², where k1 and k2 are two constants, thereby tightly binding the expression representing the growth of the algorithm.
► A function f(n) is said to be Θ(g(n)), denoted as f(n) = Θ(g(n)), if and only if there exist positive constants C₁, C₂, and k such
that for all values of n greater than or equal to k, the inequality C₁ * |g(n)| ≤ |f(n)| ≤ C₂ * |g(n)| holds. This definition
represents a tight bound on the growth rate of f(n) with respect to g(n). It means that f(n) grows at least as fast as C₁ * g(n)
and does not exceed C₂ * g(n) for sufficiently large values of n.
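► As a quick sanity check of the definition (a sketch of ours; the constants C₁ = 3, C₂ = 4 and k = 5 are our own choice), the program below verifies that 3n² ≤ 3n² + 5n ≤ 4n² for sampled n ≥ 5, which is exactly the Θ(n²) inequality, since 5n ≤ n² once n ≥ 5.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* f(n) = 3n^2 + 5n is Theta(n^2): 3n^2 <= f(n) <= 4n^2 for all n >= 5. */
    for (long long n = 5; n <= 1000000; n *= 10) {
        long long f = 3*n*n + 5*n;
        assert(3*n*n <= f && f <= 4*n*n);
    }
    printf("Theta bounds hold for all sampled n\n");
    return 0;
}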
Upper Bounds: Big-O
► This notation is known as the upper bound of the algorithm, or the worst case of an algorithm.
► It tells us that a certain function will never exceed a specified time for any value of input n.
► The question is why we need this representation when we already have the big-Θ notation, which represents the tightly
bound running time for any algorithm. Let's take a small example to understand this.
► Consider the Linear Search algorithm, in which we traverse the elements of an array one by one to search for a given number.
► In the worst case, starting from the front of the array, we find the element or number we are searching for at the end, which
leads to a time complexity of n, where n represents the total number of elements.
► But it can happen that the element we are searching for is the first element of the array, in which case the time
complexity will be 1.
► Now in this case, saying that the big-Θ or tight-bound time complexity for linear search is Θ(n) means that the time
required will always be related to n, as this is the right way to represent the average time complexity. But when we use the
big-O notation, we say that the time complexity is O(n), which means that the time complexity will never exceed n,
defining the upper bound, and hence that it can be less than or equal to n, which is the correct representation.
► This is the reason you will most often see the Big-O notation used to represent the time complexity of an
algorithm: it makes more sense.
► A function f(n) is said to be O(g(n)), denoted as f(n) = O(g(n)), if and only if there exist positive constants C and k
such that for all values of n greater than or equal to k, the inequality |f(n)| ≤ C * |g(n)| holds.
► This definition describes the upper bound on the growth rate of f(n) with respect to g(n). In simple terms, it means
that f(n) does not grow faster than C * g(n) for sufficiently large values of n.
Lower Bounds: Omega
► Big Omega notation is used to define the lower bound of any algorithm, or we can say the best case of any algorithm.
► It indicates the minimum time required by an algorithm over all input values; hence the best case of the algorithm.
► In simple words, when we represent the time complexity of an algorithm in the form of big-Ω, we mean that the algorithm will
take at least this much time to complete its execution. It can certainly take more time than this.
► A function f(n) is said to be Ω(g(n)), denoted as f(n) = Ω(g(n)), if and only if there exist positive constants C and k such that for
all values of n greater than or equal to k, the inequality |f(n)| ≥ C * |g(n)| holds.
► This definition represents the lower bound on the growth rate of f(n) with respect to g(n). It means that f(n) does not grow slower
than C * g(n) for sufficiently large values of n.
Examples:
► for(i=0; i < N; i++)
► {
► statement;
► }
► The time complexity for the above algorithm will be Linear. The running time of the loop is directly proportional
to N. When N doubles, so does the running time.
► for(i=0; i < N; i++)
► {
► for(j=0; j < N;j++)
► {
► statement;
► }
► }
► This time, the time complexity for the above code will be Quadratic. The running time of the two loops is
proportional to the square of N. When N doubles, the running time increases by a factor of four (N * N grows to 2N * 2N).
► while(low <= high)
► {
► mid = (low + high) / 2;
► if (target < list[mid])
► high = mid - 1;
► else if (target > list[mid])
► low = mid + 1;
► else break;
► }
► This is an algorithm that repeatedly breaks a set of numbers into halves to search for a particular value (we will study it in detail
later). This algorithm has Logarithmic Time Complexity. The running time of the algorithm is
proportional to the number of times N can be divided by 2 (N is high - low here), because the algorithm
divides the working area in half with each iteration.
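► For completeness, here is the loop above embedded in a runnable C function (the wrapper and the names are ours); it performs O(log N) iterations because the working area halves each time.

#include <stdio.h>

/* Iterative binary search over a sorted array; returns the index of
   target in list[0..n-1], or -1 if it is absent. */
int binary_search(const int list[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;   /* halve the working area */
        if (target < list[mid])
            high = mid - 1;
        else if (target > list[mid])
            low = mid + 1;
        else
            return mid;               /* found */
    }
    return -1;                        /* not found */
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23};
    printf("%d\n", binary_search(a, 6, 16));  /* prints 4 */
    return 0;
}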
Analysis of an Algorithm

• The goal of the analysis of an algorithm is to compare algorithms in terms of
running time and memory usage.
• The running time of an algorithm depends on how long it takes a
computer to run the lines of code of the algorithm.
The running time of an algorithm depends on:
1. Speed of the computer
2. Programming language
3. Compiler and translator
Examples: binary search, linear search
Space Complexity of Algorithms
► Whenever a solution to a problem is written, some memory is required to complete it. For any
algorithm, memory may be used for the following:
• Variables (including constant values and temporary values)
• Program instructions
• Execution
► Space complexity is the amount of memory used by the algorithm (including the input values
of the algorithm) to execute and produce the result.
► Sometimes Auxiliary Space is confused with Space Complexity, but Auxiliary Space is only the
extra or temporary space used by the algorithm during its execution.
► Space Complexity = Auxiliary Space + Input space
Space complexity

There are two types of space complexity:

a) Constant space complexity
b) Linear (variable) space complexity

1. Constant space complexity: the space needed by the algorithm is fixed
and does not depend on the input size.
2. Linear (variable) space complexity: the space needed by the algorithm
depends on the input size. For example:
• Size of the variable 'n' = 1 word
• Array of n values = n words
• Loop variable = 1 word
• Sum variable = 1 word
Example:

int sum(int A[], int n)     // array A[] → n words, n → 1 word
{
    int sum = 0, i;         // sum → 1 word, i → 1 word
    for (i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

Space: n + 1 + 1 + 1 = n + 3 words
(Slide: worked examples — Algorithm-1, Algorithm-2 and Algorithm-3, a recursive procedure.)
Memory Usage during Execution

► While executing, an algorithm uses memory space for three reasons:

• Instruction Space
The amount of memory used to save the compiled version of the instructions.

• Environmental Stack
Sometimes an algorithm (function) may be called inside another algorithm (function). In such a situation, the
current variables are pushed onto the system stack, where they wait for further execution, and then the call to the
inner algorithm (function) is made.
For example, if a function A() calls function B() inside it, then all the variables of function A() are stored
on the system stack temporarily while function B() is called and executed inside function A().

• Data Space
The amount of space used by the variables and constants.
► But while calculating the Space Complexity of an algorithm, we usually consider only the Data Space and
neglect the Instruction Space and Environmental Stack.
Calculating the Space Complexity
► To calculate the space complexity, we need to know the amount of memory used by each type of datatype
variable. This generally varies between operating systems, but the method for calculating the space
complexity remains the same.

Type                                                       Size
bool, char, unsigned char, signed char, __int8             1 byte
__int16, short, unsigned short, wchar_t, __wchar_t         2 bytes
float, __int32, int, unsigned int, long, unsigned long     4 bytes
double, __int64, long double, long long                    8 bytes

► int sum(int a, int b, int c)
► {
► int z = a + b + c;
► return(z);
► }
► In the above code, the variables a, b, c and z are all integer types, so they take up 4 bytes each; the total memory requirement is (4(4) + 4) =
20 bytes, where the additional 4 bytes are for the return value. Because this space requirement is fixed for the above example, it is called Constant Space
Complexity.
► int sum(int a[], int n)
► {
► int x = 0; // 4 bytes for x
► for(int i = 0; i < n; i++) // 4 bytes for i
► {
► x = x + a[i];
► }
► return(x);
► }
• In the above code, 4*n bytes of space are required for the array a[] elements.
• 4 bytes each are needed for x, n, i and the return value.
► Hence the total memory requirement is (4n + 16), which increases linearly with the input value
n; hence it is called Linear Space Complexity.
► Similarly, we can have quadratic and other more complex space complexities as the complexity of an algorithm
increases.
► But we should always try to write algorithm code in such a way that we keep the space complexity to a minimum.
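► As an illustration of keeping auxiliary space low (the example is ours), the two functions below compute the same sum, but the recursive version needs O(n) stack space for its n nested calls, while the iterative version needs only O(1) auxiliary space.

#include <stdio.h>

/* O(n) auxiliary space: each of the n recursive calls occupies
   a stack frame until the recursion unwinds. */
int sum_recursive(const int a[], int n)
{
    if (n == 0) return 0;
    return a[n - 1] + sum_recursive(a, n - 1);
}

/* O(1) auxiliary space: only x and i, regardless of n. */
int sum_iterative(const int a[], int n)
{
    int x = 0;
    for (int i = 0; i < n; i++)
        x = x + a[i];
    return x;
}

int main(void)
{
    int a[] = {1, 2, 3, 4};
    printf("%d %d\n", sum_recursive(a, 4), sum_iterative(a, 4));  /* 10 10 */
    return 0;
}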
Time complexity
1. Constant time complexity: if a program requires a fixed amount of
time for all input values, it has constant time complexity.

Example: int sum(int a, int b)
{
return a + b;
}
2. Linear time complexity: if the time required changes as the input
values increase.
• comment = 0 steps
• assignment statement = 1 step
• condition statement = 1 step
• loop condition for n times = n+1 steps
• body of the loop = n steps

Example: int sum(int A[], int n)
{
int sum = 0, i;
for (i = 0; i < n; i++)
sum = sum + A[i];
return sum;
}

Statement                   cost     repetition    total
int sum = 0, i;             1        1             1
for (i = 0; i < n; i++)     1+1+1    1+(n+1)+n     2n+2
sum = sum + A[i];           2        n             2n
return sum;                 1        1             1
                                     Total         4n+4
TIME COMPLEXITY

The time T(P) taken by a program P is the sum of its
compile time and its run time (execution time).

Statement                   S/e    Frequency    Total
1. Algorithm Sum(a,n)       0      -            0
2. {                        0      -            0
3.   s = 0.0;               1      1            1
4.   for i = 1 to n do      1      n+1          n+1
5.     s = s + a[i];        1      n            n
6.   return s;              1      1            1
7. }                        0      -            0

Total                                           2n+3
KINDS OF ANALYSIS

1.Worst-case: (usually)
• T(n) = maximum time of algorithm on any input of size n.

2.Average-case: (sometimes)
• T(n) = expected time of algorithm over all inputs of
size n.
• Need assumption of statistical distribution of inputs.

3.Best-case:
• T(n) = minimum time of algorithm on any input of size n.

COMPLEXITY:
Complexity refers to the rate at which the required storage or time grows as a
function of the problem size.
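
Linear search gives a concrete picture of the three kinds of analysis (a minimal C sketch of ours): the loop runs once in the best case (key at index 0), n times in the worst case (key at the end or absent), and about n/2 times on average if the key is equally likely to be anywhere.

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if absent.
   Best case: 1 comparison; worst case: n; average case: about n/2. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {7, 3, 9, 1};
    printf("%d\n", linear_search(a, 4, 7));  /* best case: found at index 0 */
    printf("%d\n", linear_search(a, 4, 1));  /* worst case: found at index 3 */
    printf("%d\n", linear_search(a, 4, 5));  /* worst case: absent, -1 */
    return 0;
}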
Analysis of recursive algorithms through recurrence relations

► The analysis of the complexity of a recurrence relation involves finding the asymptotic upper bound on
the running time of a recursive algorithm.

► This is usually done by finding a closed-form expression for the number of operations performed by the
algorithm as a function of the input size, and then determining the order of growth of the expression as
the input size becomes large.

► Here are the general steps to analyze the complexity of a recurrence relation:

► Substitute the input size into the recurrence relation to obtain a sequence of terms.

► Identify a pattern in the sequence of terms, if any, and simplify the recurrence relation to obtain a
closed-form expression for the number of operations performed by the algorithm.

► Determine the order of growth of the closed-form expression by using techniques such as the Master
Theorem, or by finding the dominant term and ignoring lower-order terms.

► Use the order of growth to determine the asymptotic upper bound on the running time of the
algorithm, which can be expressed in terms of big O notation.
Analysis of recursive algorithms through recurrence
relations:
► The solution of recurrences is important because it provides
information about the running time of a recursive algorithm. By
solving a recurrence, we can determine the asymptotic upper bound
on the number of operations performed by the algorithm, which is
crucial for evaluating the efficiency and scalability of the algorithm
Recurrence relation for the time
complexity of binary search
► The recurrence relation for the time complexity of binary search is T(n) =
T(n/2) + k, where k is a constant. At every iteration we divide the array into two
halves, reducing the problem size by 2; k is the constant time required for the
comparison at each iteration and for making a decision accordingly.
► The binary search algorithm searches for an element by comparing it with the middle-most
element of the array. Then, the following three cases are possible:
Case 01:
► If the element being searched for is the middle-most element, its index is returned.
Case 02:
► If the element being searched for is greater than the middle-most element,
the search continues in the right sub-array of the middle-most element.
Case 03:
► If the element being searched for is smaller than the middle-most element,
the search continues in the left sub-array of the middle-most element.
This iteration keeps repeating on the sub-arrays until the desired element is found
or the size of the sub-array reduces to zero.
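► A recursive C sketch of the three cases (our own rendering of the standard algorithm) makes the recurrence visible: each call does constant work k and then recurses on one half, giving T(n) = T(n/2) + k.

#include <stdio.h>

/* Recursive binary search on data[low..high]; returns the index of item
   or -1. Each call does O(1) work and recurses on half the range, so
   T(n) = T(n/2) + k, which solves to O(log n). */
int bsearch_rec(const int data[], int low, int high, int item)
{
    if (low > high) return -1;   /* sub-array size reduced to zero */
    int mid = (low + high) / 2;  /* constant work: k */
    if (data[mid] == item)
        return mid;                                      /* Case 01 */
    if (item > data[mid])
        return bsearch_rec(data, mid + 1, high, item);   /* Case 02 */
    return bsearch_rec(data, low, mid - 1, item);        /* Case 03 */
}

int main(void)
{
    int data[] = {1, 4, 6, 9, 12};
    printf("%d\n", bsearch_rec(data, 0, 4, 9));  /* prints 3 */
    return 0;
}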
► ALGORITHM: BinarySearch(Data, l, r, Loc, Mid, Item)
► Data is a sorted array with lower bound LB(0) and upper bound UB(size-1), and Item is the given item of
information to be searched for. The variables l, r and mid denote, respectively, the beginning, the end and the
middle locations of the array. This algorithm finds the location Loc of Item in Data, or sets Loc to NULL.
1) Set l := 0, r := size-1 and mid := int((l+r)/2).
2) Repeat steps 3 and 4 while l ≤ r and Data[mid] ≠ Item.
3) If Item < Data[mid], then:
► Set r := mid-1
► else
► Set l := mid+1
4) Set mid := int((l+r)/2).
5) If Data[mid] = Item, then set Loc := mid,
► else
► set Loc := NULL.
6) Exit.
Substitution Method:
► 1. We make a guess for the solution, and
► 2. then we use mathematical induction to prove the guess correct or incorrect.
► For example, consider the recurrence T(n) = 2T(n/2) + n.

► We guess the solution as T(n) = O(n log n). Now we use induction to prove our guess.

► We need to prove that T(n) ≤ cn log n. We can assume that this holds for all values smaller than n. Then:

► T(n) = 2T(n/2) + n
≤ 2(c(n/2) log(n/2)) + n
= cn log(n/2) + n
= cn log n – cn log 2 + n
= cn log n – cn + n
≤ cn log n     (taking log base 2, for every c ≥ 1)
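► The guess can also be checked numerically (the program below is our illustration, taking T(1) = 1): for n a power of two, computing T(n) = 2T(n/2) + n exactly gives n log₂ n + n, so the ratio T(n)/(n log₂ n) tends to 1.

#include <stdio.h>
#include <math.h>

/* Exact value of the recurrence for n a power of two, with T(1) = 1. */
long long T(long long n)
{
    if (n == 1) return 1;
    return 2 * T(n / 2) + n;
}

int main(void)
{
    for (long long n = 2; n <= (1LL << 20); n <<= 4) {
        double ratio = (double)T(n) / ((double)n * log2((double)n));
        printf("n=%lld  T(n)=%lld  T(n)/(n log2 n)=%.4f\n", n, T(n), ratio);
    }
    return 0;
}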
Recurrence Tree Method:
► In this method, we draw a recurrence tree and calculate the time
taken by every level of the tree. Finally, we sum the work done at all
levels. To draw the recurrence tree, we start from the given
recurrence and keep drawing till we find a pattern among levels.
► The pattern is typically arithmetic or geometric series. For example, consider the recurrence relation

► T(n) = T(n/4) + T(n/2) + cn²

              cn²
            /     \
       T(n/4)     T(n/2)

► If we further break down the expressions T(n/4) and T(n/2),
we get the following recursion tree.

              cn²
            /     \
       cn²/16     cn²/4
       /    \     /    \
  T(n/16) T(n/8) T(n/8) T(n/4)

► Breaking down further gives us the following.

              cn²
            /     \
       cn²/16     cn²/4
      /     \     /     \
 cn²/256 cn²/64 cn²/64 cn²/16
   / \    / \    / \    / \
► To find the value of T(n), we need to calculate the sum of the tree
nodes level by level. If we sum the above tree level by level,

► we get the series T(n) = cn²(1 + 5/16 + (5/16)² + ….)
The series is a geometric progression with ratio 5/16.

► To get an upper bound, we can sum the infinite series. We get the sum
as cn²/(1 – 5/16) = (16/11)cn², which is O(n²).
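► The 16/11 factor can be confirmed numerically (an illustration of ours, taking c = 1 and T(n) = 1 for n ≤ 1): the ratio T(n)/n² settles near 16/11 ≈ 1.45 as n grows, confirming the O(n²) bound.

#include <stdio.h>

/* T(n) = T(n/4) + T(n/2) + n^2, with T(n) = 1 for n <= 1 (taking c = 1). */
double T(long long n)
{
    if (n <= 1) return 1.0;
    return T(n / 4) + T(n / 2) + (double)n * (double)n;
}

int main(void)
{
    for (long long n = 1024; n <= 1048576; n *= 32)
        printf("n=%lld  T(n)/n^2 = %.4f\n", n, T(n) / ((double)n * (double)n));
    /* The printed ratios stay close to 16/11 = 1.4545... */
    return 0;
}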
Master Method:
► The Master Method is a direct way to get the solution. The master method works only for
the following type of recurrences, or for recurrences that can be transformed into this type:
► T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1

There are the following three cases:

• If f(n) = O(n^c) where c < log_b a, then T(n) = Θ(n^(log_b a))

• If f(n) = Θ(n^c) where c = log_b a, then T(n) = Θ(n^c log n)

• If f(n) = Ω(n^c) where c > log_b a, then T(n) = Θ(f(n))

How does this work?
► The master method is mainly derived from the recurrence tree method. If we draw the
recurrence tree of T(n) = aT(n/b) + f(n), we can see that the work done at the root is f(n),
and the work done at all the leaves is Θ(n^c) where c = log_b a. The height of the recurrence tree
is log_b n.
► In the recurrence tree method, we calculate the total work done. If the work done at the leaves is
polynomially more, then the leaves are the dominant part, and our result is the work done at the
leaves (Case 1).
► If the work done at the leaves and at the root is asymptotically the same, then our result is the height
multiplied by the work done at any level (Case 2). If the work done at the root is asymptotically more,
then our result is the work done at the root (Case 3).
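► Three quick applications (standard textbook examples, not from the slides) show how the cases fire:
• Merge sort: T(n) = 2T(n/2) + n has a = 2, b = 2, log_b a = 1 and f(n) = Θ(n¹), so Case 2 gives T(n) = Θ(n log n).
• T(n) = 8T(n/2) + n²: log₂ 8 = 3 and c = 2 < 3, so Case 1 gives T(n) = Θ(n³).
• T(n) = 2T(n/2) + n²: c = 2 > log₂ 2 = 1, so Case 3 gives T(n) = Θ(n²).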
Reference Books
1. Algorithm Design, 1st Edition, Jon Kleinberg and Éva Tardos, Pearson.
2. Algorithm Design: Foundations, Analysis, and Internet Examples, 2nd Edition, Michael T. Goodrich
and Roberto Tamassia, Wiley.
3. Algorithms: A Creative Approach, 3rd Edition, Udi Manber, Addison-Wesley, Reading, MA.
Suggested Books:
1. Introduction to Algorithms, 4th Edition, Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and
Clifford Stein, MIT Press/McGraw-Hill.
2. Data Structures and Algorithms in C++, Weiss, 4th Edition, Pearson.
3. Fundamentals of Computer Algorithms, E. Horowitz, Sartaj Sahni, Galgotia Publications.
Thank you
