Overview of Various Data Structures
Data structures are systematic ways to organize, manage, and store data to
enable efficient access and modification. Common types include arrays, linked
lists, stacks, queues, trees, graphs, and hash tables.
Concept of Algorithm
An algorithm is a finite, well-defined sequence of steps that takes some input
and produces an output to solve a specific problem. The resources an algorithm
needs are commonly analyzed in three cases:
1. Best Case: Minimum resources needed (e.g., sorted input for a sorting
algorithm).
2. Worst Case: Maximum resources needed (e.g., reverse-sorted input for a
sorting algorithm).
3. Average Case: Expected resources needed for random inputs.
Asymptotic notation describes the limiting behavior of algorithms for large inputs,
simplifying the complexity comparison.
Examples of Complexity:
● O(1): constant time, e.g., accessing an array element by index.
● O(log n): logarithmic time, e.g., binary search.
● O(n): linear time, e.g., linear search.
● O(n log n): e.g., merge sort.
● O(n^2): quadratic time, e.g., bubble sort.
● O(2^n): exponential time, e.g., naive recursive Fibonacci.
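To make these growth rates concrete, the short Python sketch below (purely illustrative; it is not part of the original notes) tabulates a few of these functions for growing input sizes:

import math

# Tabulate common growth rates so the differences become visible.
for n in (10, 100, 1000):
    print(f"n={n:5}  log2(n)={math.log2(n):5.1f}  "
          f"n*log2(n)={n * math.log2(n):9.0f}  n^2={n**2:8}")

Even at n = 1000, n^2 is already a million, which is why the choice of algorithm often matters more than raw machine speed.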
In essence, algorithms are not just instructions for computers but the driving
force behind technological advancements, making systems efficient, reliable, and
capable of addressing real-world challenges.
Algorithm Specification
Algorithm specification refers to the process of clearly defining the steps and requirements of an
algorithm to ensure its correctness and usability. It provides a structured description of how the
algorithm operates, what it requires, and what it produces. This specification is essential for both
understanding the algorithm and implementing it effectively.
1. Time Complexity
Time complexity measures the amount of time an algorithm takes to complete as a function of
the input size n. It is expressed using asymptotic notation, such as O(n), Ω(n), or Θ(n).
1. Best Case: The minimum time the algorithm takes (e.g., already sorted data for a
sorting algorithm).
2. Worst Case: The maximum time the algorithm takes (e.g., reverse-sorted data for
sorting).
3. Average Case: The expected time taken for random input.
Example: For linear search, the best case is O(1) (the target is the first element), the
worst case is O(n) (the target is absent or at the last position), and the average case is O(n).
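As a concrete sketch (linear search is a standard illustration, not code given in the notes), the Python snippet below shows where the best and worst cases come from:

def linear_search(arr, target):
    # Scan left to right; return the index of target, or -1 if absent.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 4))   # best case: found at index 0 after one comparison, O(1)
print(linear_search(data, 99))  # worst case: all n elements checked, O(n)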
2. Space Complexity
Space complexity measures the amount of memory an algorithm uses as a function of the
input size n. It includes the following components:
● Fixed part: space for the code, constants, and simple variables, independent of the input size.
● Variable part: space that depends on n, such as dynamically allocated memory and the recursion call stack.
Example: An iterative sum of n numbers uses O(1) auxiliary space, while an equivalent
recursive version uses O(n) space for its call stack.
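A minimal sketch of this contrast (both functions are illustrative, assuming the usual Python call stack) is:

def sum_iterative(n):
    # A fixed number of variables regardless of n: O(1) auxiliary space.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_recursive(n):
    # Each call adds a stack frame; depth n means O(n) auxiliary space.
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

print(sum_iterative(100), sum_recursive(100))  # both print 5050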
By understanding time and space complexities, developers can design systems that are both
effective and resource-efficient.
UNIT 2
Substitution Method to Solve Recurrence Relations
Recursion is a method of solving a problem where the solution depends on solutions to smaller
subproblems of the same problem. Recursive functions (a function that calls itself) are used to
solve problems based on recursion. The main challenge with recursion is finding the time
complexity of a recursive function. In this section, we will learn how to find the time complexity
of recursive functions using the Substitution Method.
Whenever a function makes a recursive call to itself, its running time can be described by a
recurrence relation. A recurrence relation is simply a mathematical equation that gives the
value of any term in terms of some previous, smaller terms. For example,
T(n) = T(n-1) + n
This is a recurrence relation because the value of the nth term is given in terms of its previous,
i.e. (n-1)th, term.
Types of Recurrence Relation:
There are different types of recurrence relations. Some of them are:
1. Linear Recurrence Relation: In a linear recurrence relation, every term depends linearly
on previous terms. An example of a linear recurrence relation is
T(n) = T(n-1) + T(n-2)
2. Divide and Conquer Recurrence Relation: This is the type of recurrence relation that is
obtained from a divide-and-conquer algorithm. An example of such a recurrence relation is
T(n) = 3T(n/2) + 9n
3. First Order Recurrence Relation: This is the type of recurrence relation in which every term
depends on just the previous term. An example of this type of recurrence relation is
T(n) = T(n-1)^2
4. Higher Order Recurrence Relation: This is the type of recurrence relation where one term
depends not just on one previous term but on multiple previous terms. If it depends on the two
previous terms it is called second order; similarly, for three previous terms it is called third
order, and so on. An example of a third-order recurrence relation is
T(n) = 2T(n-1) + 3T(n-2) + 4T(n-3)
So far we have seen different recurrence relations, but how do we find the time taken by a
recursive algorithm? To calculate the time, we need to solve the recurrence relation. For
solving recurrences we have three well-known methods:
● Substitution Method
● Recursion Tree Method
● Master Theorem
Substitution Method:
The Substitution Method is a widely used method for solving recurrences. There are two types
of substitution method:
1. Forward Substitution
2. Backward Substitution
1. Forward Substitution:
It is called Forward Substitution because here we substitute the recurrence of a term into the
next terms. It uses the following steps to find the running time from a recurrence:
● Take the recurrence and its initial condition.
● Expand the first few terms T(1), T(2), T(3), ... moving forward.
● Observe the pattern and guess a closed-form solution.
● Verify the guess (for example, by mathematical induction).
Now we will use these steps to solve a problem. The problem is:
T(n) = T(n-1) + n, n > 1
T(n) = 1, n = 1
Expanding forward gives T(1) = 1, T(2) = T(1) + 2 = 3, T(3) = T(2) + 3 = 6,
T(4) = T(3) + 4 = 10, and so on. These are the triangular numbers, so we guess
T(n) = n(n+1)/2, which gives T(n) = O(n^2).
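As a quick sanity check on the guess (a minimal sketch; the function T below simply mirrors the recurrence), we can expand the recurrence in Python and compare it with the closed form:

def T(n):
    # The recurrence T(n) = T(n-1) + n with initial condition T(1) = 1.
    if n == 1:
        return 1
    return T(n - 1) + n

# Compare the expanded terms with the guessed closed form n(n+1)/2.
for n in range(1, 8):
    print(n, T(n), n * (n + 1) // 2)  # the two columns always agree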
2. Backward Substitution:
It is called Backward Substitution because here we substitute the recurrence of a term into
previous terms. It uses the following steps to find the running time from a recurrence:
● Take the main recurrence and try to write recurrences of previous terms
● Take just previous recurrence and substitute into main recurrence
● Again take one more previous recurrence and substitute into main recurrence
● Do this process until you reach the initial condition
● After this, substitute the value from the initial condition and get the solution
Now we will use these steps to solve the same problem, T(n) = T(n-1) + n with T(1) = 1:
1. Take the main recurrence and try to write recurrences of previous terms:
T(n-1) = T(n-2) + (n-1), T(n-2) = T(n-3) + (n-2), and so on.
2. Take the just-previous recurrence and substitute it into the main recurrence:
T(n) = T(n-2) + (n-1) + n
3. Again take one more previous recurrence and substitute into the main recurrence:
T(n) = T(n-3) + (n-2) + (n-1) + n
4. Similarly we can find T(n-3), T(n-4), ... and substitute into T(n). Eventually we will
get the following: T(n) = T(1) + 2 + 3 + 4 + ... + (n-1) + n
5. After this, substitute the value from the initial condition and get the solution:
T(n) = 1 + 2 + 3 + ... + n = n(n+1)/2, so T(n) = O(n^2)
The Substitution method is a useful technique to solve recurrence relations, but it also has some
limitations. Some of the limitations are:
● It is not guaranteed that we will find the solution, as the substitution method is
based on guesses.
● It doesn't provide guidance on how to make an accurate guess, often relying on
intuition or trial and error.
● It may only yield a specific or approximate solution rather than the most general or
precise one.
Recursion Tree Method
Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it. The tree grows
downward, forming a hierarchical structure. The branching factor of each node depends on
the number of recursive calls made within the function. Additionally, the depth of the tree
corresponds to the number of recursive calls before reaching the base case.
Base Case
The base case serves as the termination condition for a recursive function. It defines the
point at which the recursion stops and the function starts returning values. In a recursion
tree, the nodes representing the base case are usually depicted as leaf nodes, as they do
not result in further recursive calls.
Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the function.
Each child node corresponds to a separate recursive call, resulting in the creation of new
subproblems. The values or parameters passed to these recursive calls may differ, leading
to variations in the subproblems' characteristics.
Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive function.
Starting from the initial call at the root node, we follow the branches to reach subsequent
calls until we encounter the base case. As the base cases are reached, the recursive calls
start to return, and their respective nodes in the tree are marked with the returned values.
The traversal continues until the entire tree has been traversed.
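To see this execution flow concretely, here is a small illustrative sketch (the depth parameter and the indentation are ours, added only for the printout) that traces the calls and returns of a factorial computation:

def factorial(n, depth=0):
    # Indentation mirrors the depth of the node in the recursion tree.
    pad = "  " * depth
    print(pad + f"call factorial({n})")
    if n < 2:                        # base case: a leaf node
        print(pad + "return 1")
        return 1
    result = n * factorial(n - 1, depth + 1)
    print(pad + f"return {result}")  # the value propagates back up the tree
    return result

factorial(4)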
Introduction
○ Think of a program that determines a number's factorial. This function takes a
number N as input and returns the factorial of N as a result. This function's
pseudo-code (shown here as runnable Python for concreteness) will resemble:
# find the factorial of a number
def factorial(n):
    # Base case: the factorial of 0 or 1 is 1
    if n < 2:
        return 1
    # Recursive step: factorial(5) => 5 * factorial(4) ...
    return n * factorial(n - 1)
factorial(5) [ 120 ]
|
5 * factorial(4) ==> 120
|
4 * factorial(3) ==> 24
|
3 * factorial(2) ==> 6
|
2 * factorial(1) ==> 2
|
1
○ The function above exemplifies recursion. We invoke a function to determine a
number's factorial. Then this function calls itself with a smaller value of the
same number. This continues until we reach the base case, in which there are
no more function calls.
○ Recursion is a technique for handling complicated problems when the outcome
depends on the outcomes of smaller instances of the same problem.
○ If we think about functions, a function is said to be recursive if it keeps calling
itself until it reaches the base case.
○ Any recursive function has two primary components: the base case and the
recursive step. We stop going into the recursive step once we reach the base
case. Base cases must be properly defined and are crucial for preventing
endless recursion: a recursion that never reaches the base case is called
infinite recursion, and a program that never reaches its base case will
eventually cause a stack overflow.
Recursion Types
Generally speaking, there are two different forms of recursion:
○ Linear Recursion
○ Tree Recursion
Linear Recursion
○ A function that calls itself just once each time it executes is said to be linearly
recursive. A nice illustration of linear recursion is the factorial function. The name
"linear recursion" refers to the fact that a linearly recursive function takes a linear
amount of time to execute.
○ Take a look at the pseudo-code below:
def doSomething(n):
    # base case to stop recursion
    if n == 0:
        return
    # ... some instructions here ...
    # recursive step: a single call with a smaller value
    doSomething(n - 1)
○ If we look at the function doSomething(n), it accepts a parameter named n and
does some calculations before calling the same procedure once more, but with a
lower value.
○ When the method doSomething() is called with the argument value n, let's say
that T(n) represents the total amount of time needed to complete the
computation. We can formulate a recurrence relation for this: T(n) = T(n-1) + K,
where K serves as a constant. The constant K is included because the function
takes some time to allocate or de-allocate memory for variables and to perform
its mathematical operations; since this work is small and independent of n, we
lump it into K.
○ This recursive program's time complexity is simple to calculate since, in the
worst case, the method doSomething() is called n + 1 times (once for each value
n, n-1, ..., 0). Unrolling the recurrence gives T(n) = T(0) + nK. Formally
speaking, the function's time complexity is O(n); a call-counting sketch follows
below.
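A quick way to confirm the O(n) claim is to count the calls; the counter below is an instrumented copy of the pseudo-code, not part of the original function:

calls = 0

def doSomething(n):
    # Linear recursion: exactly one recursive call per invocation.
    global calls
    calls += 1
    if n == 0:
        return
    doSomething(n - 1)

doSomething(10)
print(calls)  # prints 11, i.e. n + 1 calls, which is O(n)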
Tree Recursion
○ When you make more than one recursive call in your recursive case, it is
referred to as tree recursion. A good illustration of tree recursion is the
Fibonacci sequence. Tree-recursive functions typically run in exponential time;
their time complexity is not linear.
○ Take a look at the pseudo-code below,
def doSomething(n):
    # base case to stop recursion
    if n < 2:
        return n
    # ... some instructions here ...
    # recursive step: two calls with smaller values
    return doSomething(n - 1) + doSomething(n - 2)
○ The only difference between this code and the previous one is that this one
makes one more call to the same function with a lower value of n.
○ Let's put T(n) = T(n-1) + T(n-2) + K as the recurrence relation for this function. K
serves as a constant once more.
○ When more than one call to the same function with smaller values is performed,
this sort of recursion is known as tree recursion. The intriguing question now is:
how much time does this function take?
○ Take a guess based on the recursion tree for this function, in which every call
branches into two smaller calls; a call-counting sketch is shown below.
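Before drawing the tree, we can estimate the answer by counting invocations; count_calls below is a hypothetical helper that mirrors the shape of doSomething() (one call plus the calls of its two children):

def count_calls(n):
    # Number of invocations doSomething(n) would make.
    if n < 2:
        return 1
    return 1 + count_calls(n - 1) + count_calls(n - 2)

for n in (5, 10, 15, 20):
    print(n, count_calls(n))  # 15, 177, 1973, 21891

The call count grows exponentially (it is bounded above by 2^n), which matches the claim that tree recursion runs in exponential time.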
1. The size of the problem at each level is all that matters for determining the value of
a node. The problem size is n at level 0, n/2 at level 1, n/4 at level 2, and so on.
2. In general, we define the height of the tree as equal to log(n), where n is the size of the
problem, and the height of this recursion tree is equal to the number of levels in the tree. This
is true because, as we just established, a divide-and-conquer recurrence halves the problem
at each step, and getting from problem size n to problem size 1 requires taking log(n) steps.
For example, for n = 16:
log(16) base 2 = log(2^4) base 2 = 4 levels
3. At each level, the second term in the recurrence (the non-recursive cost) is regarded as the cost of the root at that level.
Although the word "tree" appears in the name of this strategy, you don't need to be an
expert on trees to comprehend it.
How to Use a Recursion Tree to Solve Recurrence Relations?
In the recursion tree technique, the cost of a subproblem is the amount of time needed to
solve that subproblem. Therefore, if you notice the word "cost" linked with the recursion
tree, it simply refers to the amount of time needed to solve a certain subproblem.
Example
T(n) = 2T(n/2) + K
Solution
A problem size n is divided into two sub-problems each of size n/2. The cost of combining
the solutions to these sub-problems is K.
Each problem size of n/2 is divided into two sub-problems each of size n/4 and so on.
At the last level, the sub-problem size will be reduced to 1. In other words, we finally hit the
base case.
We know that when we continuously divide a number by 2, there comes a time when the
number is reduced to 1. The same holds for the problem size n: suppose that after k divisions
by 2, n becomes equal to 1, which implies (n / 2^k) = 1.
Here n / 2^k is the problem size at the last level, and it is always equal to 1.
Now we can easily calculate the value of k from the above expression by taking log() of both
sides. Below is a clearer derivation:
n = 2^k
○ log(n) = log(2^k)
○ log(n) = k * log(2)
○ k = log(n) / log(2)
○ k = log(n) base 2
Let's first determine the number of nodes in the last level. From the recursion tree, we can
deduce this: the number of nodes doubles at every level, so level i has 2^i nodes, and the
last level, level k = log(n) base 2, has 2^(log(n) base 2) = n nodes.
The cost of the last level is calculated separately because it is the base case and no
merging is done at the last level, so the cost to solve a single subproblem at this level is
some constant value. Let's take it as O(1). The total cost of the last level is therefore
n * O(1) = O(n).
Now look at the costs of the internal levels: the root costs K, the next level has two nodes
costing K each (2K), then 4K, and so on. These per-level costs form a geometric progression
(a, ar, ar^2, ar^3, ...), whose sum over the first k terms is S(k) = a(r^k - 1) / (r - 1), where a is
the first term and r is the common ratio. Here a = K, r = 2, and there are log(n) base 2 internal
levels, so their total cost is K(2^(log(n) base 2) - 1) = K(n - 1) = O(n). Adding the O(n) cost of
the last level still gives O(n), so the recurrence T(n) = 2T(n/2) + K solves to T(n) = O(n).
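As a sanity check (a minimal sketch; the base cost T(1) = 1 is an assumption made just for this experiment), we can evaluate the recurrence directly and watch the T(n)/n ratio stay bounded:

def T(n, K=1):
    # The recurrence T(n) = 2*T(n/2) + K, with an assumed base cost T(1) = 1.
    if n <= 1:
        return 1
    return 2 * T(n // 2, K) + K

for n in (2**4, 2**8, 2**12):
    print(n, T(n), T(n) / n)  # the ratio approaches 2, consistent with O(n)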