DAA Assignments

ASSIGNMENT NO 1

Question 1: What is an algorithm? How do we represent it?

Answer: An algorithm is a process or a set of rules required to perform
calculations or other problem-solving operations, especially by a computer.

Formally, an algorithm is a finite set of instructions carried out in a
specific order to perform a specific task. It is not the complete program or
code; it is just the solution (logic) of a problem, which can be represented
as an informal description, a flowchart, or pseudocode.

Characteristics of an Algorithm:

Input: An algorithm takes zero or more input values.

Output: An algorithm produces one or more outputs.

Unambiguity: An algorithm should be unambiguous, which means that its
instructions should be clear and simple.

Finiteness: An algorithm should contain a limited, countable number of
instructions and terminate after a finite number of steps.

Effectiveness: An algorithm should be effective, as each instruction
contributes to the overall process.

Language independence: An algorithm must be language-independent, so that its
instructions can be implemented in any language with the same output.
Dataflow of an Algorithm

Problem: A problem can be a real-world problem, or any instance of a
real-world problem, for which we need to create a program (a set of
instructions). That set of instructions is known as an algorithm.

Algorithm: An algorithm is designed for the problem as a step-by-step
procedure.

Input: After designing the algorithm, the required inputs are provided to it.

Processing unit: The input is given to the processing unit, which produces
the desired output.

Output: The output is the outcome or result of the program.

An algorithm can be represented in three different ways:

1. Natural language: An algorithm can be expressed in a natural language like
English, but this is usually hard to follow and is not the best way to
express an algorithm.

2. Flow charts: Flow charts express an algorithm using diagrams.

Eg. Flowchart: check whether a number is even or odd.

3. Pseudocode: This is the best way to express an algorithm. In pseudocode,
we describe the algorithm in steps. It does not have the syntax of any
programming language, so it cannot be interpreted or compiled.

Eg. Algorithm to check whether a number is even or odd.

Step 1: Read Number
Step 2: Calculate remainder = Number % 2
Step 3: If remainder == 0
            then print "number is even"
        else
            print "number is odd"
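The same check as a minimal C function (a sketch; isEven is an illustrative name, returning 1 for even and 0 for odd):

```c
// Compute remainder = n % 2 (Step 2) and test it against 0 (Step 3).
int isEven(int n) {
    int remainder = n % 2;
    return remainder == 0;
}
```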
Question 2: Write rules for pseudocode conventions.

Answer:

Pseudocode: Pseudocode is one of the ways of representing an algorithm. It
can be easily understood by someone who has basic knowledge of programming.

Pseudocode does not have a specific syntax, unlike a program written in the
syntax of a particular language. Hence, pseudocode cannot be executed on a
computer; rather, it eases the work of the programmer because it is easy to
understand.

Difference between Algorithm and Pseudocode:

1. An algorithm is a well-defined sequence of instructions that provides a
solution to the given problem, while pseudocode is one method used to
represent an algorithm.

2. An algorithm has specific characteristics that describe the process, while
pseudocode is not restricted in this way; its only objective is to represent
an algorithm realistically.

3. An algorithm can be written in plain English or general language, while
pseudocode is written with a hint of programming concepts such as control
structures.

Pseudocode Conventions (as used in the pseudocode in this document):

1. Write one statement per line, each describing a single action.

2. Capitalize keywords such as READ, WRITE, IF, ELSE, WHILE, and ENDWHILE.

3. Use indentation to show the body of a control structure, and close each
structure explicitly (e.g., ENDWHILE).

4. Use = for assignment and standard operators (+, -, *, /, %) for
arithmetic.

5. Keep statements language-independent: describe the logic, not the syntax
of any particular programming language.
Question 3: Explain in brief about performance analysis.

Answer:

Performance Analysis:

In computer science, there are often multiple algorithms to solve a problem.
When we have more than one algorithm for a problem, we need to select the
best one; performance analysis helps us do this. When there are multiple
alternative algorithms, we analyze them and pick the one best suited to our
requirements. The formal definition is as follows:

Definition: "Performance analysis of an algorithm is the process of making an
evaluative judgement about the algorithm."

or

"Performance analysis of an algorithm means predicting the resources the
algorithm requires to perform its task."

That means when we have multiple algorithms to solve a problem, we need to
select a suitable one. We compare algorithms that solve the same problem in
order to select the best among them. To compare algorithms, we use a set of
parameters such as the memory required, the execution speed, ease of
understanding, and ease of implementation.

When we analyse an algorithm, we consider only the space and time required by
that particular algorithm and ignore all the remaining factors. Based on
this, performance analysis of an algorithm can also be defined as follows:

Definition: Performance analysis of an algorithm is the process of
calculating the space and time required by that algorithm.

The performance of a program is the amount of computer memory and time needed
to run it.

Time Complexity: The time needed by an algorithm, expressed as a function of
the size of the problem, is called the time complexity of the algorithm. The
time complexity of a program is the amount of computer time it needs to run
to completion.

The limiting behaviour of the complexity as size increases is called the asymptotic
time complexity. It is the asymptotic complexity of an algorithm, which ultimately
determines the size of problems that can be solved by the algorithm.

Space Complexity:

The space complexity of a program is the amount of memory it needs to run to
completion.

The space needed by a program has the following components:

Instruction space: the space needed to store the compiled version of the
program's instructions. The amount of instruction space needed depends on
factors such as:

i. the compiler used to compile the program into machine code;

ii. the compiler options in effect at the time of compilation;

iii. the target computer.

Data space: the space needed to store all constant and variable values. Data
space has two components:

i. space needed by constants and simple variables in the program;

ii. space needed by dynamically allocated objects such as arrays and class
instances.

Environment stack space: the environment stack is used to save the
information needed to resume execution of partially completed functions.


Question 4: What is a recursive algorithm? Write an algorithm for factorial.

Answer:

A recursive algorithm calls itself with smaller input values and returns the result
for the current input by carrying out basic operations on the returned value for the
smaller input. Generally, if a problem can be solved by applying solutions to
smaller versions of the same problem, and the smaller versions shrink to readily
solvable instances, then the problem can be solved using a recursive algorithm.

To build a recursive algorithm, you will break the given problem statement into
two parts. The first one is the base case, and the second one is the recursive step.

· Base Case: The simplest instance of the problem, consisting of a condition
that terminates the recursive function. The base case evaluates the result
when the given condition is met.

· Recursive Step: It computes the result by making recursive calls to the
same function, but with inputs decreased in size or complexity.

For example, consider this problem statement: Print sum of n natural numbers
using recursion. This statement clarifies that we need to formulate a function that
will calculate the summation of all natural numbers in the range 1 to n. Hence,
mathematically you can represent the function as:

F(n) = 1 + 2 + 3 + ... + (n-2) + (n-1) + n

It can further be simplified as:

F(n) = n + F(n-1), with F(1) = 1

You can break this function down into two parts: the base case F(1) = 1, and
the recursive step F(n) = n + F(n-1).
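The two parts can be written as a short C function (a sketch; sumN is an illustrative name):

```c
// Sum of the first n natural numbers by recursion:
// base case F(1) = 1, recursive step F(n) = n + F(n-1).
int sumN(int n) {
    if (n == 1)
        return 1;            // base case
    return n + sumN(n - 1);  // recursive step
}
```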

Algorithm for Factorial of a Number:

Step 1: Start
Step 2: Declare variables n, fact, i
Step 3: Read n from the user
Step 4: Initialize fact = 1 and i = 1
Step 5: Repeat while i <= n
        5.1 fact = fact * i
        5.2 i = i + 1
Step 6: Print fact
Step 7: Stop

Pseudocode:
Read number
Fact = 1
i=1
WHILE i<=number
Fact=Fact*i
i=i+1
ENDWHILE
WRITE Fact

We first take input from the user and store that value in the variable
"number". Then we initialize the variable "Fact" to 1 and the variable "i" to
1. We repeat the next two steps while i is less than or equal to number:

multiply Fact by the current value of i;

increment i by 1.

At last, we print the value of Fact.

Let's take an example. Let the input be 5. The computation our algorithm
performs is 5x4x3x2x1. So, the factorial of 5 is 120 (5x4x3x2x1).
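The algorithm above is iterative. Since the question asks about recursive algorithms, the same computation as a recursive C function, following the base-case / recursive-step structure described earlier (a sketch):

```c
// Recursive factorial: base case n <= 1, recursive step n * factorial(n-1).
long factorial(int n) {
    if (n <= 1)
        return 1;                 // base case
    return n * factorial(n - 1);  // recursive step
}
```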
Question 5: Explain Recursive tree with example

Answer:

The Recursion Tree Method resolves recurrence relations by converting them into
recursive trees, where each node signifies the cost at different recursion levels.
This visual representation simplifies understanding. Recursion, vital in computer
science and mathematics, enables functions to self-reference. Recursion trees
visually depict the iterative execution of recursive functions, aiding comprehension
in problem-solving.

Types of Recursion

Broadly speaking, there are two types of recursion namely,

Linear Recursion

Tree Recursion

1. Linear Recursion

A linear recursive function makes only a single call to itself each time it
runs, producing one straight chain of calls; the factorial function is a good
example. Such a function takes time linear in its recursion depth, which is
why it is called linear recursion.

2. Tree Recursion

Tree recursion is the term for making more than one recursive call in the
recursive case. The Fibonacci function is a good example of tree recursion.
The time complexity of a tree-recursive function is not linear; it can be
exponential.

Consider the pseudo-code written below,

function fib(n) {
    if (n < 2)
        return n;
    return fib(n-1) + fib(n-2); // recursive step
}

Here fib(n) is a function that calls itself twice. Both calls are made to the
same function with a smaller value of n.

Let's write the recurrence relation for this function as well:
T(n) = T(n-1) + T(n-2) + k, where k is some constant.

This kind of recursion, where more than one call is made to the same function
with smaller values, is called tree recursion. Now the interesting part: what
is the time complexity of this function?

Drawing the recursion tree for this function gives a hint.

You may realize that it's hard to determine the time complexity by directly looking
into a recursive function, especially when it's a tree recursion. There are a lot of
ways to determine the time complexity of such functions, one of them is Recursion
Tree Method.
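For reference, the tree-recursive Fibonacci in C, a direct translation of the pseudocode above:

```c
// Tree recursion: each call makes two further recursive calls, so the
// number of calls grows roughly exponentially with n.
int fib(int n) {
    if (n < 2)
        return n;                    // base case
    return fib(n - 1) + fib(n - 2);  // two recursive calls
}
```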

· The recursion tree method is used to solve recurrence relations like
T(N) = T(N/2) + N, or the two we discussed above in the types-of-recursion
section. Generally, these recurrence relations follow the divide and conquer
approach.

· When a problem is divided into smaller subproblems, some time is also
needed to combine the solutions of those subproblems into the answer to the
original problem.

· Consider the recurrence relation T(n) = 2T(n/2) + Kn, where Kn is the time
needed to combine the solutions of the two subproblems of size n/2.

· Drawing the recursion tree for this recurrence: the root costs Kn, its two
children cost Kn/2 each (again Kn per level), and so on. Every level
contributes Kn, and there are about log2(n) levels before the subproblems
reach size 1, so the total is Kn * log2(n), i.e., T(n) = O(n log n).
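The recurrence T(n) = 2T(n/2) + Kn can also be evaluated numerically (a sketch, taking K = 1 and T(1) = 0; for n a power of two the result is exactly n * log2(n), the recursion-tree answer):

```c
// T(n) = 2*T(n/2) + n with T(1) = 0 evaluates to exactly n*log2(n)
// when n is a power of two, matching the recursion-tree analysis.
long T(long n) {
    if (n == 1)
        return 0;               // base case: no combine cost
    return 2 * T(n / 2) + n;    // two subproblems plus combine cost n
}
```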
Question 6: Solve substitution method for recurrence relation.

Answer:

The substitution method is based on the idea of guessing a solution for the
recurrence relation and then proving it by induction.

For example, suppose you have the recurrence relation T(n) = 2T(n/2) + n,
with T(1) = 0. This recurrence relation describes the running time of merge
sort. You may guess that the solution is T(n) = n log n, based on your
intuition or previous knowledge. To verify your guess, you need to show that
it satisfies the recurrence relation for all n. This can be done by using
induction, which involves two steps: the base case and the induction step.

1 The base case

The base case is the simplest case where the recurrence relation holds.
Usually, this is when n equals the smallest value for which the recurrence is
defined. In the merge sort recurrence, the base case is n = 1. You need to
show that your guessed solution matches the given value for the base case:
T(1) = 1 log 1 = 0, which equals the given value T(1) = 0. Therefore, the
base case is satisfied.

2 The induction step

The induction step is where you assume that the recurrence relation holds for
some n and then show that it also holds for a larger n. Usually, this is done
by substituting the recurrence relation into your guessed solution and
simplifying it. For the merge sort recurrence, the induction step is to
assume that T(n) = n log n for some n and then show that T(2n) = 2n log 2n.
To do this, substitute T(2n) = 2T(n) + 2n and simplify as follows:

T(2n) = 2T(n) + 2n
      = 2(n log n) + 2n
      = 2n(log n + 1)
      = 2n log(n * 2)
      = 2n log 2n

Therefore, the induction step is satisfied.

Example 2:

Consider the recurrence T(n) = 1 + T(n-1) ...(1), with T(1) = 1.
To find T(n), we first need T(n-1). Substituting n-1 in place of n gives
T(n-1) = 1 + T(n-2) ...(2)
Similarly, T(n-2) = 1 + T(n-3) ...(3)
Substituting the value of T(n-1) into equation (1) gives T(n) = 2 + T(n-2).
Now substituting the value of T(n-2) gives T(n) = 3 + T(n-3).
After k substitutions the equation looks like T(n) = k + T(n-k).
We can substitute to any depth, but the recursion stops when n-k = 1, that
is, when k = n-1. Substituting this value of k:
T(n) = n-1 + T(n - n + 1)
T(n) = n-1 + T(1)
We know from the given equation that T(1) = 1, so T(n) = n.
Therefore, the solution of the recurrence is T(n) = n, i.e., O(n).
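The recurrence T(n) = 1 + T(n-1) with T(1) = 1 can also be evaluated directly to confirm T(n) = n (T2 is an illustrative name):

```c
// T(n) = 1 + T(n-1) with T(1) = 1: the substitution method predicts T(n) = n.
long T2(int n) {
    if (n == 1)
        return 1;           // given base value
    return 1 + T2(n - 1);   // one unit of work plus the smaller problem
}
```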
ASSIGNMENT NO 2

Question 1. Write a Program for Binary Search.


Answer:

Binary Search Algorithm

binarySearch(arr, x, low, high)
    if low > high
        return False
    else
        mid = (low + high) / 2
        if x == arr[mid]
            return mid
        else if x > arr[mid]          // x is on the right side
            return binarySearch(arr, x, mid + 1, high)
        else                          // x is on the left side
            return binarySearch(arr, x, low, mid - 1)

Program:

#include <stdio.h>

int binarySearch(int array[], int x, int low, int high) {
    if (high >= low) {
        int mid = low + (high - low) / 2;

        // If found at mid, then return it
        if (array[mid] == x)
            return mid;

        // Search the left half
        if (array[mid] > x)
            return binarySearch(array, x, low, mid - 1);

        // Search the right half
        return binarySearch(array, x, mid + 1, high);
    }
    return -1;
}

int main(void) {
    int array[] = {3, 4, 5, 6, 7, 8, 9};
    int n = sizeof(array) / sizeof(array[0]);
    int x = 4;
    int result = binarySearch(array, x, 0, n - 1);
    if (result == -1)
        printf("Not found");
    else
        printf("Element is found at index %d", result);
    return 0;
}

Time Complexities
Best case complexity: O(1)

Average case complexity: O(log n)

Worst case complexity: O(log n)

Space complexity: O(1) for an iterative version; the recursive version above
uses O(log n) stack space.

Question 2. Explain MinMax Algorithm (Divide and Conquer)


Answer:
In MinMax using divide and conquer, we write a recursive function that
accepts the array and its start and end indices as input parameters,
i.e., findMinMax(int X[], int l, int r).

Base case 1: If the array size is 1, we return that single element as both the
maximum and minimum.

Base case 2: If the array size is 2, we compare both elements and return the
maximum and minimum.

Divide part: We calculate the mid index, i.e., mid = l + (r - l)/2.

Conquer part:
1. We recursively calculate and store the maximum and minimum for the left
part, i.e., leftMinMax[2] = findMinMax(X, l, mid).

2. We recursively calculate and store the maximum and minimum for the right
part, i.e., rightMinMax[2] = findMinMax(X, mid + 1, r).

Combine part: Now we find the overall maximum and minimum by comparing the
min and max of both halves. For this, we need to perform only two
comparisons. Finally, we store max and min in extra memory maxMin[2] and
return it.
MinMax Algorithm using Divide Conquer:

int[] findMinMax(int X[], int l, int r)
{
int max, min
if (l == r)
{
max = X[l]
min = X[l]
}
else if (l + 1 == r)
{
if (X[l] < X[r])
{
max = X[r]
min = X[l]
}
else
{
max = X[l]
min = X[r]
}
}
else
{
int mid = l + (r - l)/2
int leftMinMax[2] = findMinMax(X, l, mid)
int rightMinMax[2] = findMinMax(X, mid + 1, r)
if (leftMinMax[0] > rightMinMax[0])
max = leftMinMax[0]
else
max = rightMinMax[0]
if (leftMinMax[1] < rightMinMax[1])
min = leftMinMax[1]
else
min = rightMinMax[1]
}
int maxMin[2] = {max, min}
return maxMin
}
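The pseudocode above returns an int[2], which C functions cannot do directly; a runnable C sketch using output parameters instead (names are illustrative):

```c
// Divide-and-conquer min-max: results returned via out-parameters.
void findMinMax(int X[], int l, int r, int *min, int *max) {
    if (l == r) {                        // base case 1: one element
        *min = *max = X[l];
    } else if (l + 1 == r) {             // base case 2: two elements
        if (X[l] < X[r]) { *min = X[l]; *max = X[r]; }
        else             { *min = X[r]; *max = X[l]; }
    } else {
        int mid = l + (r - l) / 2;       // divide
        int lmin, lmax, rmin, rmax;
        findMinMax(X, l, mid, &lmin, &lmax);      // conquer left half
        findMinMax(X, mid + 1, r, &rmin, &rmax);  // conquer right half
        *min = (lmin < rmin) ? lmin : rmin;       // combine: comparison 1
        *max = (lmax > rmax) ? lmax : rmax;       // combine: comparison 2
    }
}
```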
Question 3. Derive the recurrence relation for Quicksort.
Answer:

Analysis of quicksort

We first need to define the recurrence relation to analyse the recursive
function. Suppose T(n) is the time complexity of quicksort for input size n.
To get the expression for T(n), we add the time complexities of each step of
the recursive algorithm:
Divide step: The time complexity of this step equals the time complexity of
the partition algorithm, i.e., O(n).

Conquer step: We solve two subproblems recursively, where the subproblem
sizes depend on the position of the pivot after the partition step. Suppose i
elements are on the left of the pivot and n - i - 1 elements are on the right
after partition.

· Input size of left subarray = i

· Input size of right subarray = n - i - 1

Conquer step time complexity = time to sort the left subarray recursively +
time to sort the right subarray recursively = T(i) + T(n - i - 1)

Combine step: This is a trivial step; there is no operation in the combine
part of quicksort, so its time complexity is O(1).

T(n) = O(n) + T(i) + T(n - i - 1) + O(1)
     = T(i) + T(n - i - 1) + O(n)
     = T(i) + T(n - i - 1) + cn

Recurrence relation of quicksort:

· T(n) = c, if n = 1

· T(n) = T(i) + T(n - i - 1) + cn, if n > 1

Worst-case time complexity analysis of quicksort

The quicksort worst case occurs when the pivot is always the largest or
smallest element in each call of the partition algorithm. In that situation
the partition is highly unbalanced: one subarray has n - 1 elements and the
other is empty. Note: the worst case always occurs for sorted or
reverse-sorted input.

So, to calculate the worst-case time complexity, we put i = n - 1 in the
above equation for T(n):
T(n) = T(n - 1) + T(0) + cn = T(n - 1) + cn

Analysis using the recursion tree method

In this method, we draw the recursion tree and sum the partitioning cost at
each level:
cn + c(n-1) + c(n-2) + ... + 2c + c = c(n + (n-1) + (n-2) + ... + 2 + 1)
= c(n(n+1)/2) = O(n²)

So the worst-case time complexity of quicksort is O(n²).
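The worst case can be observed empirically: a quicksort with a last-element (Lomuto) pivot performs exactly n(n-1)/2 element comparisons on already-sorted input. A sketch (the partition scheme and the function names here are illustrative, not the program from the next question):

```c
// Quicksort with Lomuto partition (last element as pivot), instrumented to
// count element-vs-pivot comparisons. On sorted input every partition is
// maximally unbalanced, so the count reaches n(n-1)/2.
static long comparisons;

static void swapInts(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static int partition(int a[], int low, int high) {
    int pivot = a[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        comparisons++;                  // one comparison per iteration
        if (a[j] < pivot) {
            i++;
            swapInts(&a[i], &a[j]);
        }
    }
    swapInts(&a[i + 1], &a[high]);
    return i + 1;
}

static void quicksortCount(int a[], int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);
        quicksortCount(a, low, p - 1);
        quicksortCount(a, p + 1, high);
    }
}

long worstCaseComparisons(int n) {
    int a[64];                          // assumes n <= 64 for this sketch
    for (int i = 0; i < n; i++)
        a[i] = i;                       // sorted input: the worst case
    comparisons = 0;
    quicksortCount(a, 0, n - 1);
    return comparisons;
}
```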


Question 4. Write an algorithm for Merge Sort.
Answer:

Algorithm
In the following algorithm, arr is the given array, beg is the index of the
first element, and end is the index of the last element of the array.

MERGE_SORT(arr, beg, end)
if beg < end
    set mid = (beg + end)/2
    MERGE_SORT(arr, beg, mid)
    MERGE_SORT(arr, mid + 1, end)
    MERGE (arr, beg, mid, end)
end of if
END MERGE_SORT

The important part of the merge sort is the MERGE function. This function
performs the merging of two sorted sub-arrays that are A[beg…mid] and
A[mid+1…end], to build one sorted array A[beg…end]. So, the inputs of the
MERGE function are A[], beg, mid, and end.

The implementation of the MERGE function is given as follows -

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; // temporary arrays

    /* copy data to temp arrays */
    for (i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0; /* initial index of first sub-array */
    j = 0; /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }

    /* copy any remaining elements */
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}
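The MERGE_SORT pseudocode translates directly to C. A self-contained, runnable sketch (mergeHalves and mergeSort are illustrative names; the merge logic is a compact version of the function given earlier, repeated so the example stands alone):

```c
// Merge two sorted halves a[beg..mid] and a[mid+1..end] into a[beg..end].
void mergeHalves(int a[], int beg, int mid, int end) {
    int n1 = mid - beg + 1, n2 = end - mid;
    int L[n1], R[n2];                      // C99 variable-length arrays
    for (int i = 0; i < n1; i++) L[i] = a[beg + i];
    for (int j = 0; j < n2; j++) R[j] = a[mid + 1 + j];
    int i = 0, j = 0, k = beg;
    while (i < n1 && j < n2)
        a[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) a[k++] = L[i++];
    while (j < n2) a[k++] = R[j++];
}

// Sort a[beg..end]: divide, sort each half recursively, then merge.
void mergeSort(int a[], int beg, int end) {
    if (beg < end) {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        mergeHalves(a, beg, mid, end);
    }
}
```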

Time Complexity
Case Time Complexity
Best Case O(n*logn)
Average Case O(n*logn)
Worst Case O(n*logn)
Question 5. Write a program for Quick sort.
Answer:

Algorithm: QuickSort
QUICKSORT (array A, start, end)
{
if (start < end)
{
p = partition(A, start, end)
QUICKSORT (A, start, p - 1)
QUICKSORT (A, p + 1, end)
}
}

Algorithm: Partition
PARTITION (array A, start, end)
{
    pivot = A[end]
    i = start - 1
    for j = start to end - 1
    {
        if (A[j] < pivot)
        {
            i = i + 1
            swap A[i] with A[j]
        }
    }
    swap A[i+1] with A[end]
    return i+1
}

Program:

#include <stdio.h>

void quicksort(int number[25], int first, int last){
    int i, j, pivot, temp;

    if(first < last){
        pivot = first;
        i = first;
        j = last;

        while(i < j){
            while(number[i] <= number[pivot] && i < last)
                i++;
            while(number[j] > number[pivot])
                j--;
            if(i < j){
                temp = number[i];
                number[i] = number[j];
                number[j] = temp;
            }
        }

        temp = number[pivot];
        number[pivot] = number[j];
        number[j] = temp;

        quicksort(number, first, j - 1);
        quicksort(number, j + 1, last);
    }
}

int main(){
    int i, count, number[25];

    printf("How many elements are u going to enter?: ");
    scanf("%d", &count);

    printf("Enter %d elements: ", count);
    for(i = 0; i < count; i++)
        scanf("%d", &number[i]);

    quicksort(number, 0, count - 1);

    printf("Order of Sorted elements: ");
    for(i = 0; i < count; i++)
        printf(" %d", number[i]);

    return 0;
}
