DAA Assignments
Characteristics of an Algorithm:
Input: An algorithm takes zero or more input values.
Finiteness: An algorithm must have finiteness. Here, finiteness means that the algorithm should contain a limited number of instructions, i.e., the instructions should be countable and the algorithm should terminate.
Dataflow of an algorithm:
Input: After designing an algorithm, the required and desired inputs are provided to the algorithm.
Processing unit: The input is given to the processing unit, which produces the desired output.
An algorithm can be described in a natural language like English, but natural language is usually ambiguous and hard to follow, so it is not the best way to express an algorithm. In pseudocode, we instead explain the algorithm in steps; pseudocode does not follow the syntax of any compiled language.
Answer:
1. Pseudocode does not have a specific syntax, unlike a program that is written using the syntax of a particular language. Hence, pseudocode cannot be executed on a computer; rather, it eases the work of a programmer as it can be easily understood.
2. An algorithm has some specific characteristics that describe the process. Pseudocode, on the other hand, is not restricted in this way; its only objective is to represent an algorithm in a realistic manner.
Pseudocode Conventions:
Question 3: Explain in brief about performance analysis.
Answer:
Performance Analysis:
When we want to analyse an algorithm, we consider only the space and time required by that particular algorithm and ignore all the remaining elements. Based on this information, performance analysis of an algorithm can be defined as follows:
Time Complexity:
The time complexity of a program is the amount of computer time it needs to run to completion.
The limiting behaviour of the complexity as size increases is called the asymptotic
time complexity. It is the asymptotic complexity of an algorithm, which ultimately
determines the size of problems that can be solved by the algorithm.
Space Complexity:
Instruction space: Instruction space is the space needed to store the compiled
version of the program instructions.
Data space: Data space is the space needed to store all constant and variable
values.
Answer:
A recursive algorithm calls itself with smaller input values and returns the result
for the current input by carrying out basic operations on the returned value for the
smaller input. Generally, if a problem can be solved by applying solutions to
smaller versions of the same problem, and the smaller versions shrink to readily
solvable instances, then the problem can be solved using a recursive algorithm.
To build a recursive algorithm, you will break the given problem statement into
two parts. The first one is the base case, and the second one is the recursive step.
For example, consider this problem statement: print the sum of the first n natural numbers using recursion. This statement clarifies that we need to formulate a function that will calculate the summation of all natural numbers in the range 1 to n. Hence, mathematically you can represent the function as:
sum(n) = n + sum(n-1), with base case sum(1) = 1.
Step 1: Start
Step 2: Read number
Step 3: Initialize fact = 1
Step 4: Initialize i = 1
Step 5: Repeat steps 5.1 and 5.2 while i <= number
5.1 fact=fact*i
5.2 i=i+1
Step 6: Print fact
Step 7: Stop
Pseudocode:
Read number
Fact = 1
i=1
WHILE i<=number
Fact=Fact*i
i=i+1
ENDWHILE
WRITE Fact
We first take input from the user and store that value in a variable named number. Then we initialize a variable Fact with value 1 (i.e., Fact = 1) and a variable i with value 1 (i.e., i = 1). We repeat the next two steps (Fact = Fact * i and i = i + 1) while i is less than or equal to number, and finally print Fact.
Answer:
The Recursion Tree Method resolves recurrence relations by converting them into
recursive trees, where each node signifies the cost at different recursion levels.
This visual representation simplifies understanding. Recursion, vital in computer
science and mathematics, enables functions to self-reference. Recursion trees
visually depict the iterative execution of recursive functions, aiding comprehension
in problem-solving.
Types of Recursion
Linear Recursion
Tree Recursion
1. Linear Recursion
A linear recursive function is a function that only makes a single call to itself each
time the function runs. The factorial function is a good example of linear recursion.
A linearly recursive function takes linear time to complete its execution, which is why it is called linear recursion.
2. Tree Recursion
Tree Recursion is just a phrase to describe when you make a recursive call more
than once in your recursive case. The fibonacci function is a good example of Tree
recursion. The time complexity of tree recursive function is not linear, they run in
exponential time.
int fib(int n) {
    if (n < 2)
        return n;               // base case
    return fib(n-1) + fib(n-2); // recursive step
}
Here the function fib(n) calls itself two times. Both calls are made to the same function with a smaller value of n.
Let's write the recurrence relation for this function as well: T(n) = T(n-1) + T(n-2) + k. Again, k is some constant here.
These types of recursion are called tree recursion where more than one call is made
to the same function with smaller values. Now the interesting part, what is the time
complexity of this function?
Take the recursion tree for the same function as a hint and make a guess.
You may realize that it's hard to determine the time complexity by directly looking
into a recursive function, especially when it's a tree recursion. There are a lot of
ways to determine the time complexity of such functions, one of them is Recursion
Tree Method.
· When a problem is divided into smaller subproblems, there is also some time
needed to combine the solutions to those subproblems to form the answer to the
original problem.
· Let's draw the recursion tree for the recurrence relation stated above.
Question 6: Solve a recurrence relation using the substitution method.
Answer:
The substitution method is based on the idea of guessing a solution for the
recurrence relation and then proving it by induction.
For example, suppose you have the recurrence relation T(n) = 2T(n/2) + n, with T(1) = 1. This recurrence relation describes the running time of the merge sort algorithm. You may guess that the solution is T(n) = n log n + n (with log taken to base 2), based on your intuition or previous knowledge. To verify your guess, you need to show that it satisfies the recurrence relation for all n. This can be done by using induction, which involves two steps: the base case and the induction step.
The base case is the simplest case where the recurrence relation holds. Usually, this is when n is equal to the smallest value for which the recurrence relation is defined. For example, in the merge sort recurrence relation, the base case is when n = 1. You need to show that your guessed solution matches the given value for the base case. In this case, T(1) = 1 log 1 + 1 = 0 + 1 = 1, which is equal to the given value of T(1) = 1. Therefore, the base case is satisfied.
The induction step is where you assume that the recurrence relation holds for some n and then show that it also holds for a larger n. Usually, this is done by substituting the recurrence relation into your guessed solution and simplifying it. For example, in the merge sort recurrence relation, the induction step is to assume that T(n) = n log n + n for some n and then show that T(2n) = 2n log 2n + 2n.
To do this, you can substitute T(2n) = 2T(n) + 2n into your guessed solution and simplify it as follows:
T(2n) = 2T(n) + 2n
= 2(n log n + n) + 2n
= 2n log n + 2n + 2n
= 2n(log n + 1) + 2n
= 2n log 2n + 2n
This matches the guessed solution evaluated at 2n, so the induction step holds.
Example 2:
Consider the recurrence T(n) = 1 + T(n-1) ... (1), with T(1) = 1. To find T(n), we first have to find T(n-1).
To get the value of T(n-1), we substitute n-1 in place of n:
T(n-1) = 1 + T(n-2) ... (2)
Similarly, T(n-2) = 1 + T(n-3) ... (3)
By substituting the value of T(n-1) in the equation (1), we get T(n) = 2 + T(n-2).
Now substitute the value of T(n-2) in the above equation.
By substituting we will get T(n) = 3 + T(n-3).
The above equation looks like T(n) = k + T(n-k).
We can continue substituting in this way as far as we like, but the recursion bottoms out when n - k = 1.
Now, k = n-1.
Substitute the value k in the above equation.
T(n) = n-1 + T(n – n + 1).
T(n) = n-1 + T(1).
We know from the given equation that T(1) = 1.
T(n) = n - 1 + 1 = n.
Therefore, the efficiency of the above recurrence equation T(n) = n.
Assignment 2
binarySearch(arr, x, low, high)
    if low > high
        return False
    else
        mid = (low + high) / 2
        if x == arr[mid]
            return mid
        else if x < arr[mid]
            return binarySearch(arr, x, low, mid - 1)
        else
            return binarySearch(arr, x, mid + 1, high)
Program:
#include <stdio.h>

int binarySearch(int array[], int x, int low, int high) {
  while (low <= high) {
    int mid = low + (high - low) / 2;
    if (array[mid] == x)
      return mid;
    if (array[mid] < x)
      low = mid + 1;
    else
      high = mid - 1;
  }
  return -1;
}

int main(void) {
  int array[] = {3, 4, 5, 6, 7, 8, 9};
  int n = sizeof(array) / sizeof(array[0]);
  int x = 4;
  int result = binarySearch(array, x, 0, n - 1);
  if (result == -1)
    printf("Not found");
  else
    printf("Element found at index %d", result);
  return 0;
}
Time Complexities
Best case complexity: O(1)
Average case complexity: O(log n)
Worst case complexity: O(log n)
Base case 1: If the array size is 1, we return that single element as both the
maximum and minimum.
Base case 2: If the array size is 2, we compare both elements and return the
maximum and minimum.
Suppose T(n) is the time complexity of quicksort for input size n. To get the
expression for T(n), we add the time complexities of each step in the above
recursive code:
Divide step: The time complexity of this step is equal to the time complexity of the partition algorithm, which is O(n) for an input of size n.
Conquer step: We solve two subproblems recursively, where the subproblem sizes depend on the value of the pivot chosen during the partition algorithm. Suppose i elements are on the left of the pivot and n - i - 1 elements are on the right of the pivot after partition.
Conquer step time complexity = Time complexity to sort the left subarray recursively + time complexity to sort the right subarray recursively = T(i) + T(n - i - 1).
Combine step: This is a trivial step; there is no operation in the combine part of quick sort, so it takes O(1) time.
· T(n) = c, if n = 1
· T(n) = T(i) + T(n - i - 1) + cn, if n > 1
Quick sort's worst case occurs when the pivot is always the largest or smallest element, so the partition is highly unbalanced: one subarray has n - 1 elements and the other subarray is empty. Note: the worst case always occurs for a sorted or reverse-sorted array. Think!
So, for calculating quick sort's time complexity in the worst case, we put i = n - 1 in the recurrence above, giving T(n) = T(n - 1) + cn, which solves to O(n^2).
Algorithm
In the following algorithm, arr is the given array, beg is the starting element, and
end is the last element of the array.
MERGE_SORT(arr, beg, end)
  if beg < end
    set mid = (beg + end) / 2
    MERGE_SORT(arr, beg, mid)
    MERGE_SORT(arr, mid + 1, end)
    MERGE(arr, beg, mid, end)
  end if
END MERGE_SORT
The important part of the merge sort is the MERGE function. This function
performs the merging of two sorted sub-arrays that are A[beg…mid] and
A[mid+1…end], to build one sorted array A[beg…end]. So, the inputs of the
MERGE function are A[], beg, mid, and end.
while (j<n2)
{
a[k] = RightArray[j];
j++;
k++;
}
}
Time Complexity
Case Time Complexity
Best Case O(n*logn)
Average Case O(n*logn)
Worst Case O(n*logn)
Question 5. Write a program for Quick sort.
Answer:
Algorithm: QuickSort
QUICKSORT (array A, start, end)
{
if (start < end)
{
p = partition(A, start, end)
QUICKSORT (A, start, p - 1)
QUICKSORT (A, p + 1, end)
}
}
Algorithm: Partition
PARTITION (array A, start, end)
{
pivot = A[end]
i = start-1
for j = start to end - 1 {
    if (A[j] < pivot) {
        i = i + 1
        swap A[i] with A[j]
    }
}
swap A[i+1] with A[end]
return i+1
}
Program:
#include<stdio.h>
void quicksort(int number[], int first, int last) {
  int i, j, pivot, temp;
  if (first < last) {
    pivot = first;
    i = first;
    j = last;
    while (i < j) {
      while (number[i] <= number[pivot] && i < last)
        i++;
      while (number[j] > number[pivot])
        j--;
      if (i < j) {
        temp = number[i];
        number[i] = number[j];
        number[j] = temp;
      }
    }
    temp = number[pivot];
    number[pivot] = number[j];
    number[j] = temp;
    quicksort(number, first, j - 1);
    quicksort(number, j + 1, last);
  }
}
int main() {
  int number[25], count, i;
  scanf("%d", &count);
  for (i = 0; i < count; i++)
    scanf("%d", &number[i]);
  quicksort(number, 0, count - 1);
  for (i = 0; i < count; i++)
    printf(" %d", number[i]);
  return 0;
}