PPS UNIT-6

The document provides an overview of recursion, including its definition, types (direct and indirect), and examples such as calculating factorials and the Fibonacci series. It discusses the advantages and disadvantages of recursion, memory allocation, and real-world applications, including algorithms like Quick Sort and Merge Sort. Additionally, it introduces the Ackermann function and its implementation in C, emphasizing the divide and conquer strategy in sorting algorithms.

UNIT-6

RECURSION
Recursion: Recursion is the technique of making a function call itself. This technique
provides a way to break complicated problems down into simpler problems that are
easier to solve.
Recursion cannot be applied to every problem, but it is especially useful for tasks that can
be defined in terms of similar subtasks. For example, recursion may be applied to
sorting, searching, and traversal problems.
A physical world example would be to place two parallel mirrors facing each other. Any
object in between them would be reflected recursively.
Types of Recursion: There are primarily two types of C recursion:
1. Direct Recursion: In this type of recursion, a function calls itself directly. It is the
most common form of recursion, where a function invokes itself during its execution.
2. Indirect Recursion: In this type of recursion, multiple functions call each other in a
circular manner. A function calls another function, which in turn may call the first one or
another function, creating a chain of recursive calls.
How does recursion work?

Syntax:
A general recursive function has two parts: a base case, which stops the recursion, and a
recursive case, in which the function calls itself on a smaller sub-problem.
In the following example, recursion is used to calculate the factorial of a
number
#include <stdio.h>

int fact(int);

int main()
{
    int n, f;
    printf("Enter the number whose factorial you want to calculate: ");
    scanf("%d", &n);
    f = fact(n);
    printf("factorial = %d\n", f);
    return 0;
}

int fact(int n)
{
    if (n == 0 || n == 1)   /* base case: 0! = 1! = 1 */
    {
        return 1;
    }
    else
    {
        return n * fact(n - 1);
    }
}

We can understand the above recursive method calls by tracing them for n = 4:
fact(4) = 4 × fact(3) = 4 × 3 × fact(2) = 4 × 3 × 2 × fact(1) = 4 × 3 × 2 × 1 = 24

What is Fibonacci Series?


The Fibonacci series is a sequence where each number is the sum of the two numbers that
come before it. The series usually starts with 0 and 1. The Fibonacci numbers form
the integer sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. In mathematics, it’s
expressed using the recurrence relation:
F(n) = F(n-1) + F(n-2)
With the starting conditions:
F(0) = 0
F(1) = 1
This simple recurrence can be used to build the entire sequence. The Fibonacci
series forms the foundation of many mathematical concepts and real-world applications,
so learning to write a program for it is important.

Program to find the nth term of the Fibonacci Series using Recursion.
#include <stdio.h>

int fibonacci(int);

int main()
{
    int n, f;
    printf("Enter the value of n: ");
    scanf("%d", &n);
    f = fibonacci(n);
    printf("%d\n", f);
    return 0;
}

int fibonacci(int n)
{
    if (n == 0)
    {
        return 0;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
}
Memory allocation of Recursive method
Each recursive call creates a new stack frame for that function in memory. Once a value is
returned by the function, its frame is removed from the stack. Since all the variables
declared inside a function are stored on the stack, a separate stack frame
is maintained for each recursive call. Once the value is returned from the corresponding
call, that frame is destroyed. Recursion therefore involves considerable complexity in resolving
and tracking the values at each recursive call, since the stack must hold the
variables of every call that is still active.
Let us consider the following example to understand the memory allocation of the
recursive functions.
Explanation: Let us examine this recursive function for n = 4. First, a stack frame is
created for each call, and each call prints its value of n, until n becomes 0. Once the
termination condition is reached, the frames are destroyed one by one as each call
returns 0 to its caller.

Advantages of Recursion:
 Simplicity: Code becomes cleaner and shorter for problems that are naturally
recursive.
 Elegant solution: Helpful for solving difficult algorithmic and formula-based
problems.
 Tractable approach: Recursive functions are useful for traversing trees and graphs
because those structures are naturally recursive.
 Code reusability: Recursive functions in C can be used repeatedly within the
program for different inputs, promoting code reusability.
Disadvantages of Recursion:
 Complexity: It gets challenging to read and decipher the code.
 Stack consumption: The copies of recursive functions take up a significant
amount of memory.
 Performance: Recursive calls can be slower and less efficient than iterative
solutions due to function call overhead and repeated calculations.
 Limited applicability: Not all problems are well-suited for C recursion, and
converting iterative solutions into recursive ones may not always be practical or
beneficial.

Real Applications of C Recursion


This section explores various real-life applications of C recursion.
 Traversing Data Structures Like Trees & Graphs: One can leverage recursive
methods to thoroughly explore and access the nodes of a tree or graph.
 Divide-and-Conquer Algorithms: Algorithms like binary search leverage
recursion to split a problem into smaller sub-problems.
 Fractal Generation: Recursive algorithms can be used to create fractal shapes
and patterns. For instance, the Mandelbrot set is generated by repeatedly applying
a recurrence to complex numbers.
 Algorithms for Backtracking: Backtracking algorithms handle problems that
require a series of decisions, each of which depends on the prior one, and are
naturally expressed with recursion.

Ackermann function
The Ackermann function, named after the German mathematician Wilhelm Ackermann,
is a recursive mathematical function that takes two non-negative integers as inputs and
produces a non-negative integer as its output. Recursion is used to implement the
Ackermann function in C.

Its arguments are never negative and it always terminates. It is defined by three
conditions:

A(m, n) = n + 1                     if m = 0
A(m, n) = A(m - 1, 1)               if m > 0 and n = 0
A(m, n) = A(m - 1, A(m, n - 1))     if m > 0 and n > 0

Let’s take an example and solve the Ackermann function for the values 1 and 2.
Solve A(1, 2).
Answer:
The given problem is A(1, 2).
Here m = 1 and n = 2, i.e. m > 0 and n > 0

Hence applying third condition of Ackermann function


A(1, 2) = A(0, A(1, 1)) ———- (1)

Now, Let’s find A(1, 1) by applying third condition of Ackermann function


A(1, 1) = A(0, A(1, 0)) ———- (2)

Now, Let’s find A(1, 0) by applying second condition of Ackermann function


A(1, 0) = A(0, 1) ———- (3)

Now, Let’s find A(0, 1) by applying first condition of Ackermann function


A(0, 1) = 1 + 1 = 2

Now put this value in equation 3


Hence A(1, 0) = 2

Now put this value in equation 2


A(1, 1) = A(0, 2) ———- (4)

Now, Let’s find A(0, 2) by applying first condition of Ackermann function


A(0, 2) = 2 + 1 = 3

Now put this value in equation 4


Hence A(1, 1) = 3
Now put this value in equation 1
A(1, 2) = A(0, 3) ———- (5)
Now, Let’s find A(0, 3) by applying first condition of Ackermann function
A(0, 3) = 3 + 1 = 4
Now put this value in equation 5
Hence A(1, 2) = 4

Implementation of the Ackermann function


#include <stdio.h>

int ack(int m, int n)
{
    if (m == 0) {                 /* first condition */
        return n + 1;
    }
    else if (n == 0) {            /* second condition: m > 0, n == 0 */
        return ack(m - 1, 1);
    }
    else {                        /* third condition: m > 0, n > 0 */
        return ack(m - 1, ack(m, n - 1));
    }
}

int main() {
    int A;
    A = ack(1, 2);
    printf("%d\n", A);
    return 0;
}
Output: 4

Ackermann algorithm:
Ackermann(m, n)
{next and goal are arrays indexed from 0 to m, initialized so that next[0] through next[m]
are 0, goal[0] through goal[m - 1] are 1, and goal[m] is -1}
repeat
    value ← next[0] + 1
    transferring ← true
    current ← 0
    while transferring do begin
        if next[current] = goal[current] then goal[current] ← value
        else transferring ← false
        next[current] ← next[current] + 1
        current ← current + 1
    end while
until next[m] = n + 1
return value {the value of A(m, n)}
end Ackermann

Quick sort
Quick Sort is a divide and conquer algorithm. It selects a pivot element and divides the
array into two subarrays:
 Step 1: Pick an element from the array and call it the pivot element.
 Step 2: Divide the unsorted array elements into two sub-arrays.
 Step 3: Elements with values less than the pivot go into the first sub-array; the
remaining elements, with values greater than the pivot, go into the second
sub-array.
How it works:
1. Choose a value in the array to be the pivot element.
2. Order the rest of the array so that lower values than the pivot element are on the
left, and higher values are on the right.
3. Swap the pivot element with the first element of the higher values so that the
pivot element lands in between the lower and higher values.
4. Do the same operations (recursively) for the sub-arrays on the left and right side
of the pivot element.
Example: Quicksort algorithm
Step 1: We start with an unsorted array.
[ 11, 9, 12, 7, 3]

Step 2: We choose the last value 3 as the pivot element.


[ 11, 9, 12, 7, 3]

Step 3: The rest of the values in the array are all greater than 3, and must be on the right
side of 3. Swap 3 with 11.
[ 3, 9, 12, 7, 11]

Step 4: Value 3 is now in the correct position. We need to sort the values to the right of
3. We choose the last value 11 as the new pivot element.
[ 3, 9, 12, 7, 11]
Step 5: The value 7 must be to the left of pivot value 11, and 12 must be to the right of it.
Move 7 and 12.
[ 3, 9, 7, 12, 11]

Step 6: Swap 11 with 12 so that lower values 9 and 7 are on the left side of 11, and 12 is
on the right side.
[ 3, 9, 7, 11, 12]

Step 7: 11 and 12 are in the correct positions. We choose 7 as the pivot element in sub-
array [ 9, 7], to the left of 11.
[ 3, 9, 7, 11, 12]

Step 8: We must swap 9 with 7.


[ 3, 7, 9, 11, 12]
And now, the array is sorted.


EXAMPLE-2:
After the first partition:
 The pivot is in its fixed (final) position.
 All the elements to its left are less than the pivot.
 All the elements to its right are greater than the pivot.
 Now, divide the array into 2 sub-arrays, the left and right parts.
 Take the left partition and apply quick sort to it.
After recursively sorting both partitions:
 The pivot is in its fixed position.
 All the elements to its left are smaller and in sorted order.
 All the elements to its right are greater and in sorted order.
 The final sorted list combines the two sub-arrays: 2, 3, 4, 5, 6, 7
Time Complexity: The speed of an algorithm is often captured by its time
complexity, which expresses the relationship between the input size and the time required
to complete the task.


 Best and Average Case: In the best and average scenarios, where the pivot
element consistently divides the array into proportionate halves, the
time complexity is O(n log n). This is because the array is halved at each level
of recursion (log n levels), and each level processes all n elements.
 Worst Case: The worst case occurs when the pivot selection results in one
partition containing all but one of the elements, leading to O(n^2). This situation
commonly happens when the pivot is the smallest or largest element in the array,
as it would be in an already sorted or reverse-sorted array.
Space Complexity: Space complexity refers to the amount of memory space required
by an algorithm during its execution.

 In-Place Sorting: Quick Sort is an in-place sorting algorithm. However, it
requires space for the stack frames of the recursive calls. In the best case, this
leads to a space complexity of O(log n), representing the height of the balanced
partition tree.
 Worst Case: In the worst case, with unbalanced partitions, the space complexity
can degrade to O(n), where the recursive call stack grows to one frame per
array element.
Implementation of Quick Sort
#include <stdio.h>

// Function to swap two elements
void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}

// Partition using the last element as the pivot
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j <= high - 1; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}

void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Function to print the array
void printArray(int arr[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main()
{
    int arr[] = { 12, 17, 6, 25, 1, 5 };
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}
Output:
Sorted array:
1 5 6 12 17 25
Merge Sort Algorithm
Merge Sort is one of the most popular sorting algorithms that is based on the principle of
Divide and Conquer Algorithm.
Here, a problem is divided into multiple sub-problems. Each sub-problem is solved
individually. Finally, sub-problems are combined to form the final solution.

The time complexity of Merge Sort is O(n log n).

Divide and Conquer Strategy


Using the Divide and Conquer technique, we divide a problem into subproblems. When
the solution to each subproblem is ready, we 'combine' the results from the subproblems
to solve the main problem.

Suppose we had to sort an array A. A subproblem would be to sort a sub-section of this
array starting at index p and ending at index r, denoted as A[p..r].

Divide

If q is the half-way point between p and r, then we can split the subarray A[p..r] into two
arrays A[p..q] and A[q+1..r].

Conquer

In the conquer step, we try to sort both the subarrays A[p..q] and A[q+1..r]. If we haven't
yet reached the base case, we again divide both these subarrays and try to sort them.

Combine

When the conquer step reaches the base case and we get two sorted subarrays A[p..q] and
A[q+1..r] for array A[p..r], we combine the results by merging the two sorted subarrays
into a single sorted array A[p..r].
