
Overview of Various Data Structures

Data structures are systematic ways to organize, manage, and store data to
enable efficient access and modification. Common types include:

1. Primitive Data Structures: Basic structures like integers, floats, characters, etc.
2. Non-Primitive Data Structures:
○ Linear:
■ Arrays
■ Linked Lists
■ Stacks
■ Queues
○ Non-Linear:
■ Trees (Binary Trees, Binary Search Trees, AVL Trees, etc.)
■ Graphs (Directed, Undirected, Weighted, etc.)
○ Hash-based:
■ Hash Tables
○ Advanced:
■ Heaps
■ Tries
■ Disjoint Sets
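
As a quick illustrative aside (not part of the original outline), several of these structures have direct counterparts in Python's built-ins and standard library; the snippet below is a minimal sketch using made-up sample values:

from collections import deque   # double-ended queue: backs stacks and queues
import heapq                    # binary min-heap operations on a plain list

arr = [10, 20, 30]              # list as an array: contiguous, indexed access
stack = [1, 2]                  # list as a stack (LIFO)
stack.append(3)                 # push
top = stack.pop()               # pop -> 3

queue = deque([1, 2])           # deque as a queue (FIFO)
queue.append(3)                 # enqueue
front = queue.popleft()         # dequeue -> 1

table = {"alice": 30}           # dict as a hash table
table["bob"] = 25               # O(1) average insert/lookup

heap = [5, 1, 4]
heapq.heapify(heap)             # turn the list into a min-heap in O(n)
smallest = heapq.heappop(heap)  # -> 1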

Concept of Algorithm

An algorithm is a finite, well-defined sequence of computational steps to solve a problem or perform a task. It is characterized by:

1. Input: Takes zero or more inputs.
2. Output: Produces one or more outputs.
3. Finiteness: Terminates after a finite number of steps.
4. Definiteness: Each step is clearly defined.
5. Effectiveness: Performable within a reasonable time using basic
operations.

Role of Algorithms in Computing

Algorithms are the backbone of computational systems. They:

1. Define the process for solving computational problems.
2. Optimize resource usage, such as time and space.
3. Enable scalability of software solutions.
4. Provide the foundation for machine learning, cryptography, and data
processing.

Algorithm Specification

Algorithms can be expressed using:

1. Pseudo-code: High-level structured language.
2. Flowcharts: Graphical representation using symbols.
3. Programming Languages: Implementations in languages like Python, C++,
etc.

Performance Analysis of Algorithms

Performance analysis evaluates an algorithm based on:

1. Time Complexity: The amount of time an algorithm takes as a function of input size.
2. Space Complexity: The amount of memory an algorithm requires as a function of input size.

Components:

1. Best Case: Minimum resources needed (e.g., sorted input for a sorting
algorithm).
2. Worst Case: Maximum resources needed (e.g., reverse-sorted input for a
sorting algorithm).
3. Average Case: Expected resources needed for random inputs.

Growth of Functions: Asymptotic Notation

Asymptotic notation describes the limiting behavior of algorithms for large inputs,
simplifying the complexity comparison.

1. Big O (O): Upper bound; commonly used for worst-case growth.
○ Example: O(n^2) means at most quadratic growth.
2. Big Omega (Ω): Lower bound; commonly used for best-case growth.
○ Example: Ω(n) means at least linear growth.
3. Big Theta (Θ): Tight bound; the same rate bounds the function from above and below.
○ Example: Θ(n log n) means growth is bounded by n log n from above and below.
4. Little o (o): Strict upper bound; grows strictly slower than the given function.
○ Example: o(n^2) indicates growth strictly less than quadratic.
5. Little omega (ω): Strict lower bound; grows strictly faster than the given function.
○ Example: ω(n) indicates growth strictly faster than linear.
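
For reference, these notations have standard formal definitions (added here for completeness):

f(n) = O(g(n))  ⇔  there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0
f(n) = Ω(g(n))  ⇔  there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0
f(n) = Θ(g(n))  ⇔  f(n) is both O(g(n)) and Ω(g(n))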

Examples of Complexity:

1. Constant Time (O(1)): Accessing an array element.
2. Logarithmic Time (O(log n)): Binary search.
3. Linear Time (O(n)): Traversing a list.
4. Quadratic Time (O(n^2)): Bubble sort.
5. Exponential Time (O(2^n)): Solving the traveling salesman problem.
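
To make the quadratic case concrete, here is a minimal bubble sort sketch in Python (an illustrative addition, not from the original notes); the nested passes over the input are what give the O(n^2) behaviour:

def bubble_sort(arr):
    # Repeatedly sweep the list, swapping adjacent out-of-order pairs.
    n = len(arr)
    for i in range(n - 1):            # up to n-1 passes
        for j in range(n - 1 - i):    # each pass bubbles the largest value rightward
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]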

Role of Algorithms in Computing: Detailed Explanation

Algorithms play a pivotal role in computing, acting as the backbone of all computational systems. They are crucial in various aspects of technology, from solving complex problems to enabling efficient and scalable software. Here's a deeper explanation of their significance:

1. Define the Process for Solving Computational Problems:
Algorithms provide a step-by-step procedure to solve problems systematically. Whether it's sorting data, searching for information, or performing arithmetic operations, algorithms guide the process to ensure accuracy and consistency. For instance, Dijkstra's algorithm helps find the shortest path in a graph, ensuring reliable solutions in navigation systems.
2. Optimize Resource Usage (Time and Space):
Efficient algorithms are designed to minimize the use of computational
resources. For example:
○ Time Optimization: Algorithms like Quick Sort or Merge Sort reduce
the time required to sort large datasets compared to simpler
methods like Bubble Sort.
○ Space Optimization: In memory-constrained environments, algorithms like Dynamic Programming reuse previously computed results, avoiding redundant calculations and conserving memory (see the memoization sketch after this list).
3. Enable Scalability of Software Solutions:
Scalable algorithms ensure that systems can handle increased loads or
larger datasets without significant performance degradation. For instance:
○ Search engines use scalable indexing algorithms to process billions
of web pages efficiently.
○ Load-balancing algorithms help distribute requests across servers in
large-scale web applications, ensuring smooth user experiences
even with high traffic.
4. Provide the Foundation for Machine Learning, Cryptography, and Data
Processing:
Algorithms are the cornerstone of many advanced fields:
○ Machine Learning: Algorithms like Gradient Descent optimize
models to make accurate predictions, while clustering algorithms like
k-Means analyze and categorize data.
○ Cryptography: Algorithms such as RSA and AES secure data by
encrypting and decrypting sensitive information, ensuring privacy in
digital communication.
○ Data Processing: Algorithms like MapReduce process and analyze
massive datasets in distributed computing environments, enabling
insights in areas like big data analytics.
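
As a concrete illustration of the Dynamic Programming point above, here is a minimal memoization sketch in Python (an assumed example using Fibonacci numbers, not taken from the original notes); caching previously computed results turns an exponential computation into a linear one:

from functools import lru_cache

@lru_cache(maxsize=None)            # cache previously computed results
def fib(n):
    if n < 2:                       # base cases
        return n
    return fib(n - 1) + fib(n - 2)  # each subproblem is computed only once

print(fib(40))  # 102334155; the naive version would make hundreds of millions of calls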

In essence, algorithms are not just instructions for computers but the driving
force behind technological advancements, making systems efficient, reliable, and
capable of addressing real-world challenges.

Algorithm Specification
Algorithm specification refers to the process of clearly defining the steps and requirements of an
algorithm to ensure its correctness and usability. It provides a structured description of how the
algorithm operates, what it requires, and what it produces. This specification is essential for both
understanding the algorithm and implementing it effectively.

Components of Algorithm Specification

1. Input:
Clearly specify the data or parameters that the algorithm will take as input.
Example: "The algorithm takes a sorted array of integers and a target value."
2. Output:
Define the expected result or outcome of the algorithm.
Example: "The algorithm outputs the index of the target value if found; otherwise, it
returns -1."
3. Logic/Steps:
Provide a step-by-step description of the procedure the algorithm follows to achieve the desired output.
Example (binary search, matching the input and output above):
○ Maintain two pointers, low and high, at the ends of the array.
○ Compare the middle element with the target.
○ If it matches, return its index; otherwise discard the half that cannot contain the target and repeat.
○ If the search range becomes empty, return -1.
4. Preconditions:
State the assumptions or conditions that must be satisfied before running the algorithm.
Example: "The input array must be sorted in ascending order."
5. Postconditions:
Define the conditions that will hold true after the algorithm completes its execution.
Example: "The output index will correspond to the position of the target in the array."
6. Performance Analysis:
Include an analysis of the algorithm's efficiency in terms of time and space complexity.
Example: "Time Complexity: O(log⁡n); Space Complexity: O(1)"

Methods of Algorithm Specification

1. Natural Language:
Algorithms are described in plain language, making them easy to understand but
sometimes ambiguous.
2. Pseudo-Code:
A high-level representation that combines natural language with programming constructs
like loops and conditions.
3. Flowcharts:
A graphical representation of the algorithm using symbols to illustrate the flow of
operations.
4. Mathematical Representation:
When the algorithm is derived from mathematical concepts, it may be expressed using
formulas or equations.

Importance of Algorithm Specification

1. Clarity: Helps programmers and stakeholders understand the algorithm's purpose and
operation.
2. Implementation: Serves as a blueprint for coding the algorithm in any programming
language.
3. Verification: Ensures that the algorithm meets its objectives and adheres to the defined
constraints.
4. Optimization: Identifying inefficiencies in the specification allows for improvements
before implementation.

Performance Analysis (Time and Space Complexities)

Performance analysis evaluates an algorithm's efficiency and resource usage. The two primary
metrics used are time complexity and space complexity.

1. Time Complexity
Time complexity measures the amount of time an algorithm takes to complete as a function of the input size n. It is expressed using asymptotic notation, such as O(n), Ω(n), or Θ(n).

Types of Time Complexity:

1. Best Case: The minimum time the algorithm takes (e.g., already sorted data for a
sorting algorithm).
2. Worst Case: The maximum time the algorithm takes (e.g., reverse-sorted data for
sorting).
3. Average Case: The expected time taken for random input.

Example:

Consider Linear Search in an array:

● Best Case: O(1) (target is the first element).
● Worst Case: O(n) (target is the last element or absent).
● Average Case: O(n) (assuming all positions are equally likely).
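
A minimal linear search sketch (an illustrative Python version, not from the original notes) makes these cases easy to see:

def linear_search(arr, target):
    # Scan left to right; the position of the target decides the cost.
    for i, value in enumerate(arr):
        if value == target:
            return i        # best case: i == 0, a single comparison
    return -1               # worst case: n comparisons, target absent

print(linear_search([7, 3, 9, 3], 9))   # 2
print(linear_search([7, 3, 9, 3], 5))   # -1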

Common Time Complexities:

1. Constant Time (O(1)):
The time taken is independent of input size.
Example: Accessing an array element by index.
2. Logarithmic Time (O(log n)):
Time grows logarithmically with input size.
Example: Binary Search.
3. Linear Time (O(n)):
Time grows proportionally with input size.
Example: Iterating through an array.
4. Quadratic Time (O(n^2)):
Time grows quadratically with input size.
Example: Bubble Sort, Selection Sort.
5. Exponential Time (O(2^n)):
Time doubles with each additional input element.
Example: Solving the Traveling Salesman Problem with brute force.

2. Space Complexity
Space complexity measures the amount of memory an algorithm uses as a function of the
input size n. It includes:

1. Fixed Part: Memory required for constants, variables, and instructions.
2. Variable Part: Memory required for dynamic inputs and outputs.

Components:

1. Auxiliary Space: Temporary or extra space used during execution.
2. Input Space: Memory required to store input data.

Example:

Consider Merge Sort:

● Requires extra memory for temporary arrays during merging.
● Space Complexity: O(n).
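
To see where that O(n) auxiliary space comes from, here is a compact merge sort sketch (an illustrative Python version, not from the original notes); the temporary lists built during merging account for the extra memory:

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # temporary sublists: this is where the
    right = merge_sort(arr[mid:])    # O(n) auxiliary space shows up
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])          # append any leftover elements
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]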

Comparing Time and Space Complexities

1. Trade-off: Many algorithms trade time against space. For instance, Merge Sort guarantees O(n log n) time but needs O(n) extra memory, while in-place Quick Sort uses little extra memory but can degrade to O(n^2) time in the worst case.
2. Choosing the Best Algorithm: Based on system constraints (e.g., memory availability or speed requirements), a balance between time and space complexity is often sought.

Asymptotic Notations for Performance

1. Big O (O): Upper bound of an algorithm (worst-case performance).
Example: In Binary Search, O(log n).
2. Big Omega (Ω): Lower bound of an algorithm (best-case performance).
Example: In Linear Search, Ω(1).
3. Big Theta (Θ): Tight bound (the upper and lower bounds coincide).
Example: In Bubble Sort, Θ(n^2).

Importance of Performance Analysis

1. Efficiency: Helps identify the most efficient algorithm for a problem.
2. Scalability: Ensures the algorithm performs well for large inputs.
3. Optimization: Provides insights for refining and improving algorithms.

By understanding time and space complexities, developers can design systems that are both
effective and resource-efficient.

UNIT 2
Substitution Method to solve Recurrence
Relations
Recursion is a method of solving a problem where the solution depends on solutions to smaller subproblems of the same problem. Recursive functions (functions that call themselves) are used to solve such problems. The main challenge with recursion is finding the time complexity of a recursive function. In this article, we will learn how to find the time complexity of recursive functions using the Substitution Method.

What is a Recurrence Relation?

Whenever a function makes a recursive call to itself, its running time can be described by a recurrence relation. A recurrence relation is simply a mathematical equation that gives the value of any term in terms of some previous, smaller terms. For example,

T(n) = T(n-1) + n

This is a recurrence relation because the value of the nth term is given in terms of the previous, i.e., the (n-1)th, term.
Types of Recurrence Relation:

There are different types of recurrence relations in mathematics. Some of them are:

1. Linear Recurrence Relation: Every term is a linear combination of previous terms. An example of a linear recurrence relation is

T(n) = T(n-1) + T(n-2) + T(n-3)

2. Divide and Conquer Recurrence Relation: The type of recurrence relation that arises from a divide-and-conquer algorithm. An example of such a recurrence relation is

T(n) = 3T(n/2) + 9n

3. First Order Recurrence Relation: The type of recurrence relation in which every term depends only on the immediately previous term. An example of this type is

T(n) = T(n-1)^2

4. Higher Order Recurrence Relation: The type of recurrence relation where a term depends not on just one but on multiple previous terms. If it depends on the two previous terms it is called second order; for three previous terms it is called third order, and so on. An example of a third-order recurrence relation is

T(n) = 2T(n-1)^2 + KT(n-2) + T(n-3)

So far we have seen different recurrence relations, but how do we find the time taken by a recursive algorithm? To calculate the time, we need to solve the recurrence relation. There are three famous methods for solving recurrences:

● Substitution Method
● Recursion Tree Method
● Master Theorem

Now in this article we are going to focus on Substitution Method.

Substitution Method:

The Substitution Method is a very popular method for solving recurrences. There are two types of substitution:

1. Forward Substitution
2. Backward Substitution

1. Forward Substitution:

It is called Forward Substitution because we substitute the recurrence of each term into the next term. It uses the following steps to find the running time:

● Pick the recurrence relation and the given initial condition.
● Put the value from the previous recurrence into the next recurrence.
● Observe and guess the pattern and the running time.
● Prove that the guessed result is correct using mathematical induction.

Now we will use these steps to solve a problem. The problem is-

T(n) = T(n-1) + n, n>1

T(n) = 1, n=1

Now we will go step by step-

1. Pick the recurrence and the given initial condition:

T(n) = T(n-1) + n, n>1
T(n) = 1, n=1

2. Put the value from the previous recurrence into the next recurrence:

T(1) = 1
T(2) = T(1) + 2 = 1 + 2 = 3
T(3) = T(2) + 3 = 1 + 2 + 3 = 6
T(4) = T(3) + 4 = 1 + 2 + 3 + 4 = 10

3. Observe and guess the pattern and the time:

The guessed pattern is T(n) = 1 + 2 + 3 + .... + n = (n * (n+1))/2, so the time complexity will be O(n^2).

4. Prove that the guessed result is correct using mathematical induction:

● Prove T(1) is true:

T(1) = 1 * (1+1)/2 = 2/2 = 1, and from the definition of the recurrence we know T(1) = 1. Hence T(1) is true.

● Assume T(n-1) to be true:

Assume T(n-1) = ((n-1) * (n-1+1))/2 = (n * (n-1))/2 to be true.

● Then prove T(n) is true:

T(n) = T(n-1) + n, from the recurrence definition.
Now, T(n-1) = (n * (n-1))/2, so
T(n) = T(n-1) + n = (n * (n-1))/2 + n = (n * (n-1) + 2n)/2 = n * (n+1)/2.
This matches our guess T(n) = n(n+1)/2, hence T(n) is true. Therefore our guess was correct, and the time will be O(n^2).
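
As a quick sanity check (an added sketch, not part of the original derivation), we can evaluate the recurrence directly in Python and compare it with the closed form n(n+1)/2:

def T(n):
    # Evaluate T(n) = T(n-1) + n with T(1) = 1, iteratively.
    total = 1
    for i in range(2, n + 1):
        total += i
    return total

for n in (1, 5, 10, 100):
    assert T(n) == n * (n + 1) // 2   # the guessed closed form holds
print("closed form verified")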

2. Backward Substitution:

It is called Backward Substitution because we expand the recurrence by substituting previous terms into it. It uses the following steps to find the running time:

● Take the main recurrence and write the recurrences of the previous terms.
● Substitute the immediately previous recurrence into the main recurrence.
● Substitute one more previous recurrence into the main recurrence.
● Repeat this process until you reach the initial condition.
● Substitute the value from the initial condition and get the solution.
Now we will use these steps to solve a problem. The problem is-

T(n) = T(n-1) + n, n>1
T(n) = 1, n=1

Now we will go step by step-

1. Take the main recurrence and write the recurrences of the previous terms:

T(n) = T(n-1) + n
T(n-1) = T(n-2) + n - 1
T(n-2) = T(n-3) + n - 2

2. Substitute the immediately previous recurrence into the main recurrence:

Put T(n-1) into T(n). So, T(n) = T(n-2) + n-1 + n

3. Substitute one more previous recurrence into the main recurrence:

Put T(n-2) into T(n). So, T(n) = T(n-3) + n-2 + n-1 + n

4. Repeat this process until you reach the initial condition:

Similarly, we can substitute T(n-3), T(n-4), ...... and so on into T(n). Eventually we get: T(n) = T(1) + 2 + 3 + 4 + ......... + n-1 + n

5. Substitute the value from the initial condition and get the solution:

Put T(1) = 1, so T(n) = 1 + 2 + 3 + 4 + .............. + n-1 + n = n(n+1)/2. So the time will be O(n^2).

Limitations of Substitution method:

The Substitution method is a useful technique to solve recurrence relations, but it also has some
limitations. Some of the limitations are:

● It is not guaranteed that we will find the solution as substitution method is based on
guesses.
● It doesn't provide guidance on how to make an accurate guess, often relying on
intuition or trial and error.

● It may only yield a specific or approximate solution rather than the most general or
precise one.

● The substitution method isn't universally applicable to all recurrence relations, especially those with complex or variable forms that do not simplify under substitution.

Recursion Tree Method


Recursion is a fundamental concept in computer science and mathematics that allows
functions to call themselves, enabling the solution of complex problems through iterative
steps. One visual representation commonly used to understand and analyze the execution
of recursive functions is a recursion tree. In this article, we will explore the theory behind
recursion trees, their structure, and their significance in understanding recursive algorithms.

What is a Recursion Tree?


A recursion tree is a graphical representation that illustrates the execution flow of a
recursive function. It provides a visual breakdown of recursive calls, showcasing the
progression of the algorithm as it branches out and eventually reaches a base case. The
tree structure helps in analyzing the time complexity and understanding the recursive
process involved.

Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it. The tree grows
downward, forming a hierarchical structure. The branching factor of each node depends on
the number of recursive calls made within the function. Additionally, the depth of the tree
corresponds to the number of recursive calls before reaching the base case.

Base Case
The base case serves as the termination condition for a recursive function. It defines the
point at which the recursion stops and the function starts returning values. In a recursion
tree, the nodes representing the base case are usually depicted as leaf nodes, as they do
not result in further recursive calls.

Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the function.
Each child node corresponds to a separate recursive call, resulting in the creation of new subproblems. The values or parameters passed to these recursive calls may differ, leading to variations in the subproblems' characteristics.

Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive function.
Starting from the initial call at the root node, we follow the branches to reach subsequent
calls until we encounter the base case. As the base cases are reached, the recursive calls
start to return, and their respective nodes in the tree are marked with the returned values.
The traversal continues until the entire tree has been traversed.

Time Complexity Analysis


Recursion trees aid in analyzing the time complexity of recursive algorithms. By examining
the structure of the tree, we can determine the number of recursive calls made and the work
done at each level. This analysis helps in understanding the overall efficiency of the
algorithm and identifying any potential inefficiencies or opportunities for optimization.

Introduction
○ Think of a program that determines a number's factorial. This function takes a
number N as an input and returns the factorial of N as a result. This function's
pseudo-code will resemble,
# find the factorial of a number
def factorial(n):
    # Base case: the factorial of 0 or 1 is 1
    if n < 2:
        return 1
    # Recursive step: factorial(5) => 5 * factorial(4) ...
    return n * factorial(n - 1)

# How the function calls are made:
#
#   factorial(5)            [ 120 ]
#     |
#     5 * factorial(4)  ==> 120
#       |
#       4 * factorial(3)  ==> 24
#         |
#         3 * factorial(2)  ==> 6
#           |
#           2 * factorial(1)  ==> 2
#             |
#             1
○ The function above is an example of recursion. We invoke a function to determine a number's factorial; this function then calls itself with a smaller value of the same number. This continues until we reach the base case, at which point no more function calls are made.
○ Recursion is a technique for handling complicated problems whose outcome depends on the outcomes of smaller instances of the same problem.
○ In terms of functions, a function is said to be recursive if it keeps calling itself until it reaches the base case.
○ Any recursive function has two primary components: the base case and the recursive step. We stop taking the recursive step once we reach the base case. Base cases must be properly defined and are crucial for preventing endless recursion: infinite recursion is recursion that never reaches the base case, and if a program never reaches its base case, a stack overflow will eventually occur.

Recursion Types
Generally speaking, there are two different forms of recursion:

○ Linear Recursion
○ Tree Recursion

Linear Recursion
○ A function that calls itself just once each time it executes is said to be linearly
recursive. A nice illustration of linear recursion is the factorial function. The name
"linear recursion" refers to the fact that a linearly recursive function takes a linear
amount of time to execute.
○ Take a look at the pseudo-code below:
def doSomething(n):
    # base case to stop recursion
    if n == 0:
        return
    # ... some constant-time instructions here ...
    # recursive step
    doSomething(n - 1)
○ If we look at the function doSomething(n), it accepts a parameter named n and does some calculations before calling the same procedure once more with a smaller value.
○ Let's say that T(n) represents the total amount of time needed to complete the computation when doSomething() is called with the argument value n. We can formulate a recurrence relation for this: T(n) = T(n-1) + K. Here K is a constant, included because the function takes some small, fixed amount of time to allocate or de-allocate memory for a variable or perform a mathematical operation.
○ This recursive program's time complexity is simple to calculate since, in the worst case, the method doSomething() is called n times. Formally speaking, the function's time complexity is O(n).

Tree Recursion
○ When you make a recursive call in your recursive case more than once, it is
referred to as tree recursion. An effective illustration of Tree recursion is the
fibonacci sequence. Tree recursive functions operate in exponential time; they
are not linear in their temporal complexity.
○ Take a look at the pseudo-code below,
def doSomething(n):
    # base case to stop recursion
    if n < 2:
        return n
    # ... some constant-time instructions here ...
    # recursive step: two calls per invocation
    return doSomething(n - 1) + doSomething(n - 2)
○ The only difference between this code and the previous one is that this one
makes one more call to the same function with a lower value of n.
○ The recurrence relation for this function is T(n) = T(n-1) + T(n-2) + K, where K again serves as a constant.
○ When more than one call to the same function with smaller values is performed, this sort of recursion is known as tree recursion. The intriguing question now is: how time-consuming is this function?
○ Try to guess by sketching the recursion tree for this function: every call spawns two further calls, so the tree roughly doubles in width at each level (the call-counting sketch after this list makes the growth concrete).

○ It may occur to you that it is challenging to estimate the time complexity by looking directly at a recursive function, particularly when it is a tree recursion. The Recursion Tree Method is one of several techniques for calculating the time complexity of such functions. Let's examine it in further detail.
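
Before doing so, a quick illustrative sketch (added here in Python, not part of the original text) makes the exponential blow-up visible by counting the calls the tree-recursive function above performs:

calls = 0

def doSomething(n):
    global calls
    calls += 1                # count every invocation
    if n < 2:
        return n
    return doSomething(n - 1) + doSomething(n - 2)

doSomething(20)
print(calls)                  # 21891 calls for n = 20; the count roughly
                              # doubles each time n grows by one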

What Is Recursion Tree Method?


○ Recurrence relations like T(N) = T(N/2) + N or the two we covered earlier in the
kinds of recursion section are solved using the recursion tree approach. These
recurrence relations often use a divide and conquer strategy to address
problems.
○ It takes time to combine the answers to the smaller subproblems that are created when a larger problem is broken down into smaller subproblems.
○ For instance, the recurrence relation for Merge Sort is T(N) = 2 * T(N/2) + O(N). The time needed to combine the answers to the two subproblems of size N/2 is O(N), which is true at the implementation level as well.
○ For instance, since the recurrence relation for binary search is T(N) = T(N/2) + 1,
we know that each iteration of binary search results in a search space that is cut
in half. Once the outcome is determined, we exit the function. The recurrence
relation has +1 added because this is a constant time operation.
○ Consider the recurrence relation T(n) = 2T(n/2) + Kn. Here Kn denotes the time required to combine the answers to the two subproblems of size n/2.
○ Let's depict the recursion tree for the aforementioned recurrence relation: the root costs Kn and splits into two subproblems of size n/2, each of which again splits in half, and so on down to size 1.
We may draw a few conclusions from studying this recursion tree, including:

1. The size of the problem at each level is all that matters for determining the value of a node. The problem size is n at level 0, n/2 at level 1, n/4 at level 2, and so on.

2. In general, we define the height of the tree as log(n), where n is the size of the problem, and the height of this recursion tree is equal to the number of levels in the tree. This is true because, as we just established, such recurrence relations use the divide-and-conquer strategy, and getting from problem size n down to problem size 1 simply requires log(n) halving steps.

○ Consider the value N = 16, for instance. If we are permitted to divide N by 2 at each step, how many steps are required to reach N = 1? Since we are dividing by two at each step, the answer is 4, which is the value of log(16) base 2:

log(16) base 2
= log(2^4) base 2
= 4 * log(2) base 2, since log(a) base a = 1
= 4

3. At each level, the second (non-recursive) term of the recurrence is regarded as the cost of the root of each subtree at that level.

Although the word "tree" appears in the name of this strategy, you don't need to be an
expert on trees to comprehend it.
How to Use a Recursion Tree to Solve Recurrence Relations?

In the recursion tree technique, the cost of a subproblem is the amount of time needed to solve it. Therefore, whenever you see the phrase "cost" linked with the recursion tree, it simply refers to the amount of time needed to solve a certain subproblem.

Let's understand all of these steps with a few examples.

Example

Consider the recurrence relation,

T(n) = 2T(n/2) + K

Solution

The given recurrence relation shows the following properties,

A problem size n is divided into two sub-problems each of size n/2. The cost of combining
the solutions to these sub-problems is K.

Each problem size of n/2 is divided into two sub-problems each of size n/4 and so on.

At the last level, the sub-problem size will be reduced to 1. In other words, we finally hit the
base case.

Let's follow the steps to solve this recurrence relation,

Step 1: Draw the Recursion Tree

[Figure omitted: recursion tree for T(n) = 2T(n/2) + K, where each node splits into two children of half the size.]

Step 2: Calculate the Height of the Tree

We know that when we continuously divide a number by 2, there comes a time when the number is reduced to 1. The same holds for the problem size n: suppose that after k divisions by 2, n becomes equal to 1, which implies (n / 2^k) = 1.

Here n / 2^k is the problem size at the last level and it is always equal to 1.

Now we can easily calculate the value of k from the above expression by taking log() of both sides. Below is a clearer derivation:

n = 2^k
log(n) = log(2^k)
log(n) = k * log(2)
k = log(n) / log(2)
k = log(n) base 2

So the height of the tree is log(n) base 2.

Step 3: Calculate the cost at each level

○ Cost at Level-0 = K; two subproblems are merged once.
○ Cost at Level-1 = K + K = 2*K; two subproblems are merged twice.
○ Cost at Level-2 = K + K + K + K = 4*K; two subproblems are merged four times, and so on....

Step 4: Calculate the number of nodes at each level

Let's first determine the number of nodes in the last level. From the recursion tree, we can deduce the following:

○ Level-0 has 1 (2^0) node
○ Level-1 has 2 (2^1) nodes
○ Level-2 has 4 (2^2) nodes
○ Level-3 has 8 (2^3) nodes

So the level log(n) should have 2^(log(n)) nodes i.e. n nodes.

Step 5: Sum up the cost of all the levels

○ The total cost can be written as:
○ Total Cost = Cost of all levels except the last level + Cost of the last level
○ Total Cost = Cost for level-0 + Cost for level-1 + Cost for level-2 + .... + Cost for level-(log(n) - 1) + Cost for the last level

The cost of the last level is calculated separately because it is the base case and no merging is done at the last level, so the cost to solve a single problem at this level is some constant value. Let's take it as O(1).

Let's put the values into the formula:

○ T(n) = K + 2*K + 4*K + .... (log(n) times) + O(1) * n
○ T(n) = K(1 + 2 + 4 + .... log(n) times) + O(n)
○ T(n) = K(2^0 + 2^1 + 2^2 + .... log(n) terms) + O(n)

If you look closely at the above expression, it forms a geometric progression (a, ar, ar^2, ar^3, ......). The sum of the first m terms of a GP is S = a(r^m - 1) / (r - 1), where a is the first term and r is the common ratio. Here a = K, r = 2, and m = log(n), so the sum is K(2^log(n) - 1) = K(n - 1), and the total becomes T(n) = K(n - 1) + O(n) = O(n).
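
As a quick numerical check (an added sketch, not part of the original article), we can evaluate T(n) = 2T(n/2) + K directly in Python and confirm that the result grows linearly with n:

def T(n, K=1):
    # T(n) = 2*T(n/2) + K, taking the base-case cost T(1) = 1
    if n == 1:
        return 1
    return 2 * T(n // 2, K) + K

for n in (2, 4, 8, 16, 1024):
    print(n, T(n))   # prints 3, 7, 15, 31, 2047 — i.e. 2n - 1, which is O(n)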
