Recursion 1.

The document is a lesson plan on recursion that includes: 1) An outline with 4 topics: recursive functions and algorithms, recursion and stack, recursion versus iteration, and tail-recursive functions. 2) A section explaining that recursion involves dividing a problem into sub-problems, solving the sub-problems, and combining results in a divide-and-conquer approach. 3) A definition of recursive functions including base cases that produce results without recurring, and recursive cases that call the function itself to break inputs into simpler forms until the base case is reached.

Uploaded by Farida Mammadli

Teacher: Ellada İbrahimova

Student: Mahishov Elman

Group: 652.21E

Specialty: Computer engineering

Subject: Data Structures and Algorithms

Topic: Recursion
Plan

01 Recursive functions and algorithms

02 Recursion and stack

03 Recursion versus iteration

04 Tail-recursive functions
A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.
A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself).
For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, each recursive call must simplify the input problem in such a way that the base case is eventually reached.

Base Case

Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case.
As a first example, let's write a function pow(x, n) that raises x to the natural power of n. In other words, it multiplies x by itself n times.

Consider two ways to implement it.

1. Iterative way: a for loop:


2. Recursive way: simplifying the task and calling the function itself:
Note that the recursive version is fundamentally different. When pow(x, n) is called, execution is divided into two branches:

1. If n == 1, then everything is simple. This branch is called the base of the recursion because it immediately leads to the obvious result: pow(x, 1) equals x.

2. We can represent pow(x, n) as: x * pow(x, n - 1). Which in mathematics


is written as: xn = x * xn-1. This branch is a recursion step: we reduce the
problem to a simpler action (multiplication by x) and a simpler analogous
problem (pow with smaller n). The next steps make the task easier and
easier until n reaches 1.
The pow function is said to call itself recursively down to n == 1. For example, the recursive calculation of pow(2, 4) consists of the following steps:

1. pow(2, 4) = 2 * pow(2, 3)
2. pow(2, 3) = 2 * pow(2, 2)
3. pow(2, 2) = 2 * pow(2, 1)
4. pow(2, 1) = 2
Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call-stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail-recursion optimization leading to little overhead. For some problems, however, implementing an algorithm using iteration may not be easily achievable.
Compare the templates to compute x_n defined by x_n = f(n, x_{n-1}) from x_base:

For an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x.
For example, a factorial function may be implemented
iteratively in C by assigning to a loop index variable
and accumulator variable, rather than by passing
arguments and returning values by recursion:
Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up
any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast,
the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it
builds up deferred multiplication operations that must be performed after the final recursive call
completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls,
a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially
iterative, equivalent to using imperative language control structures like the "for" and "while" loops.
The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it branches directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
Thank you for your
attention
