
Design and Analysis of Algorithms – I UNIT

2 marks:
1. What is an Algorithm? What are the criteria for writing an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a
required output for any legitimate input in a finite amount of time.
The criteria for writing an algorithm:
• Input: An algorithm has zero or more inputs and should work for a specified range of inputs.
• Output: An algorithm should produce the correct result and at least one output.
• Definiteness: Each instruction should be clear and unambiguous.
• Effectiveness: Each instruction should be basic enough to be carried out and should help transform the given input into the desired output.
• Finiteness: The algorithm should terminate after a finite number of steps.
2. What are the methods of specifying an algorithm?
• Natural language
• Pseudocode
• Flowchart
3. List the steps of Algorithm design and analysis process.
• Understand the problem
• Decide on computational means, exact vs. approximate problem solving, data structures, and algorithm design techniques
• Design an algorithm
• Prove the algorithm's correctness
• Analyze the algorithm
• Code the algorithm
4. What is an exact algorithm and an approximation algorithm? Give an example.

An algorithm that solves a problem exactly and produces the correct result is called an exact algorithm.
If the problem is so complex that an exact solution cannot be obtained in a reasonable amount of time, we choose an approximation algorithm, i.e., one that produces an approximate answer. For example, extracting a square root can only be done approximately, and the traveling salesman problem is often solved approximately for large instances.
5. List the important Problem Types.
• Sorting
• Searching
• String processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems
6. Define the different methods for measuring algorithm efficiency.
• Analysis framework.
• Asymptotic notations and its properties.
• Mathematical analysis for recursive algorithms.
• Mathematical analysis for non-recursive algorithms.
7. Write the Euclid algorithm to find the GCD of 2 numbers.
ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ← m mod n
m←n
n←r
return m
8. What are combinatorial problems? Give an example.
These are problems that ask, explicitly or implicitly, to find a combinatorial object such as a
permutation, a combination, or a subset that satisfies certain constraints. A desired combinatorial
object may also be required to have some additional property such as a maximum value or a
minimum cost.
E.g., the traveling salesman problem and the graph-coloring problem.

9. What are following data structures?


a) Single linked list
In singly linked list, each node except the last one contains a single pointer to the next element.

b) Doubly linked list


In doubly linked list, every node except the first and the last contains pointers to both its successor
and its predecessor.

c) Stack
It is a list in which insertions and deletions can be done only at the end. This end is called the top.
The structure operates in the “last-in-first-out” (LIFO) fashion.
d) Queue
It is a list from which elements are deleted from one end of the structure, called the front (this
operation is called dequeue), and new elements are added to the other end, called the rear (this
operation is called enqueue).
e) Graph
A graph is a collection of points called vertices, some of which are connected by line segments
called edges. Some of the graph problems are graph traversal, shortest path algorithm, topological
sort, traveling salesman problem and the graph-coloring problem and so on.
f) Tree
A tree is a non-linear, hierarchical data structure in which elements are arranged in a tree-like structure. The topmost node is called the root node. Each node contains some data, which can be of any type. A tree in which every node has no more than two children is known as a binary tree.

10.Explain the terms (w.r.t graph):


a) Directed graph
A graph in which every edge has a direction, i.e., the vertices in the definition of every edge form an ordered pair.

b) Undirected graph
A graph in which edges do not have any direction, i.e., the vertices in the definition of every edge form an unordered pair.

c) Adjacency matrix
The adjacency matrix of a graph with n vertices is an n × n boolean matrix with one row and one column for each of the graph's vertices, in which the element in the ith row and the jth column is equal to 1 if there is an edge from the ith vertex to the jth vertex, and equal to 0 if there is no such edge.

d) Adjacency lists
The adjacency lists of a graph or a digraph is a collection of linked lists, one for
each vertex, that contain all the vertices adjacent to the list's vertex (i.e., all the
vertices connected to it by an edge). Usually, such lists start with a header
identifying a vertex for which the list is compiled.

e) Weighted graph
A weighted graph is a graph with numbers assigned to its edges. These numbers are called weights
or costs.
f) Path
A path is an open walk in which neither edges nor vertices are allowed to repeat; its length (the number of edges in it) must be greater than 0.
g) Cycle
A cycle, also known as a closed path, is a closed walk of positive length in which neither edges nor intermediate vertices are repeated; only the starting vertex and the ending vertex are the same.

11. Explain the terms (w.r.t trees)


a) Free tree : A tree (more accurately, a free tree) is a connected acyclic graph.
b) Forest : A graph that has no cycles but is not necessarily connected is called a forest: each of its
connected components is a tree.
c) Rooted tree : A rooted tree is a tree data structure in which one vertex (node) is designated as the
root. This root serves as the starting point and a point of reference for the rest of the tree.
d) Ordered tree : An ordered tree is a rooted tree in which all the children of each vertex are
ordered. A binary tree can be defined as an ordered tree in which every vertex has no more than two
children and each child is designated as either a left child or a right child of its parent; a binary tree
may also be empty.
e) Binary search tree : A binary search tree is a rooted binary tree data structure with the key of
each internal node being greater than all the keys in the respective node's left subtree and less than
the ones in its right subtree.

12. Define Sets and Dictionaries.


Sets : A set is an unordered collection of distinct items called elements of the set. A specific set is defined either by an explicit listing of its elements,
e.g., S = {2, 3, 5, 7},
or by specifying a property that all of the set's elements, and only they, must satisfy,
e.g., S = {n : n is a prime number and n < 10}.
Dictionary : In computing, the operations we need to perform for a set or a multiset most often are
searching for a given item, adding a new item, and deleting an item from the collection. A data
structure that implements these three operations is called the dictionary.

13. Define the two types of efficiencies used in algorithm.


• Time efficiency, indicating how fast the algorithm runs, and
• Space efficiency, indicating how much extra memory it uses

14. What are Best case and Worst case in algorithm?


The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input of size n for which the algorithm runs the fastest among all possible inputs of that size. For example, for sequential search, Cbest(n) = 1.
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size. For sequential search, Cworst(n) = n.
15. Why order growth necessary in algorithm analysis?
The order of growth of an algorithm is an approximation of the time required to run a computer
program as the input size increases. The order of growth ignores the constant factor needed for
fixed operations and focuses instead on the operations that increase proportional to input size. For
example, a program with a linear order of growth generally requires double the time if the input
doubles.
16. What are asymptotic notation? Why it is required?
Asymptotic notation is a notation, which is used to take meaningful statement about the efficiency
of a program. It focuses on the algorithm's overall performance trend rather than exact execution
times, which can vary depending on hardware and implementation details.
17. Find the time complexity for the given algorithm (two nested loops, where the outer loop runs m times and the inner loop runs n times).

• The inner loop executes n times for each of the m outer-loop iterations.
• This results in a total of m * n iterations for the combination of both loops.
• Therefore, the overall time complexity is O(m * n), indicating that the algorithm's running time grows proportionally to the product of m and n.
• If m = n, then O(m * n) becomes O(n * n), which simplifies to O(n²).
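As a rough illustration (the original algorithm is not reproduced above, so this is only a sketch of the nested-loop structure the answer describes; the names nested_loop_work, m, n, and count are placeholders):

def nested_loop_work(m, n):
    # Outer loop runs m times; inner loop runs n times per outer iteration.
    count = 0
    for i in range(m):
        for j in range(n):
            count += 1          # basic operation, executed once per inner iteration
    return count                # equals m * n, hence O(m * n) time

# Example: nested_loop_work(3, 4) returns 12.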
18. What is Big O notation? Give an example.
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0.
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Example : Prove that 3n + 2 ∈ O(n).
Let f(n) = 3n + 2 and g(n) = n.
We need f(n) ≤ c·g(n) for all n ≥ n0. Choose c = 4 and n0 = 2: then 3n + 2 ≤ 4n for all n ≥ 2
(e.g., at n = 3: 3·3 + 2 = 11 ≤ 4·3 = 12).
∴ f(n) ∈ O(g(n))
19. What is Big Omega notation? Give an example.
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some
positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that
t (n) ≥ cg(n) for all n ≥ n0.
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Example : Let f(n) = 3n + 2 and g(n) = n.
We need f(n) ≥ c·g(n) for all n ≥ n0. Choose c = 1 and n0 = 1: then 3n + 2 ≥ n for all n ≥ 1
(e.g., at n = 1: 3·1 + 2 = 5 ≥ 1).
∴ f(n) ∈ Ω(g(n))
20. Define Big Theta notation. Give an example.
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and
below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Example : Let f(n) = 3n + 2 and g(n) = n.
Upper bound: with c1 = 4, 3n + 2 ≤ 4n for all n ≥ 2 (at n = 3: 11 ≤ 12).
Lower bound: with c2 = 1, 3n + 2 ≥ n for all n ≥ 1 (at n = 1: 5 ≥ 1).
Both bounds hold for all n ≥ 2, so f(n) ∈ Θ(g(n))
21. Define Little Oh notation. Give an example.
Let f (n) and g(n) be functions that map positive integers to positive real numbers. Given two
functions f(n) and g(n), we say that f(n) is o(g(n)) if for any positive constant c, there exists a
positive constant n₀ such that:
0 ≤ f(n) < c • g(n) for all n ≥ n₀
Example : Given f(n) = 2n + 3 and g(n) = n², to show that f(n) = o(g(n)) we must find, for every positive constant c, a constant n₀ such that 0 ≤ 2n + 3 < c·n² for all n ≥ n₀.
For instance, for c = 1 we can take n₀ = 4: 0 ≤ 2n + 3 < n² for all n ≥ 4; for smaller values of c, a correspondingly larger n₀ works.
Thus, f(n) = 2n + 3 is o(n²)

22. What is recurrence relation? Give an example.


A recurrence relation is an equation that defines a sequence based on a rule that gives the next term
as a function of the previous term(s) for some function f.
Example : Recurrence relation for the number of multiplications M(n) made by the recursive algorithm for computing n!:
M(n) = M(n − 1) + 1 for n > 0, with M(0) = 0

23. Prove the following statements.


a) 100n + 5 = O(n²)
100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1.
Thus, 100n + 5 = O(n²).
b) n² + 5n + 7 = Θ(n²)
n² ≤ n² + 5n + 7 ≤ n² + 5n² + 7n² = 13n² for all n ≥ 1, so the function is bounded above and below by constant multiples of n².
Thus, n² + 5n + 7 = Θ(n²) with c1 = 13, c2 = 1, and n0 = 1.
c) n² + n = O(n³)
n² + n ≤ n² + n² = 2n² ≤ 2n³ for all n ≥ 1.
Thus, n² + n = O(n³).
d) ½ n(n − 1) = Θ(n²)
½ n(n − 1) = ½ n² − ½ n ≤ ½ n² for all n ≥ 1, and ½ n² − ½ n ≥ ½ n² − ¼ n² = ¼ n² for all n ≥ 2.
Thus, ½ n(n − 1) = Θ(n²) with c1 = ½, c2 = ¼, and n0 = 2.
e) 5n² + 3n + 20 = O(n²)
5n² + 3n + 20 ≤ 5n² + 3n² + 20n² = 28n² for all n ≥ 1.
Thus, 5n² + 3n + 20 = O(n²).
f) ½ n² + 3n = Θ(n²)
½ n² ≤ ½ n² + 3n ≤ ½ n² + 3n² = (7/2) n² for all n ≥ 1.
Thus, ½ n² + 3n = Θ(n²) with c1 = 7/2, c2 = ½, and n0 = 1.
g) n³ + 4n² = Ω(n²)
n³ + 4n² ≥ n² + 4n² = 5n² ≥ n² for all n ≥ 1.
Thus, n³ + 4n² = Ω(n²).
24. Algorithm Sum(n)
S←0
for i ← 1 to n do
S← S+i
return S
a) What does this algorithm compute? The sum of the first n natural numbers
b) What is its basic operation? Addition
c) How many times is the basic operation executed? n times
d) What is the efficiency class of this algorithm? O(n)

Long Answers Questions (THREE, FOUR OR FIVE Marks Questions)


1. What is an Algorithm? Explain the various criteria for writing an algorithm with an example.
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a
required output for any legitimate input in a finite amount of time.
An algorithm should satisfy the following criteria:
• Input: An algorithm has zero or more inputs and should work for a specified range of inputs.
• Output: An algorithm should produce the correct result and at least one output.
• Definiteness: Each instruction should be clear and unambiguous.
• Effectiveness: Each instruction should be basic enough to be carried out and should help transform the given input into the desired output.
• Finiteness: The algorithm should terminate after a finite number of steps.
Example : Euclid algorithm to find the GCD of 2 numbers.
ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ← m mod n
m←n
n←r
return m

2. Explain Euclid Algorithm with example to find the GCD of two numbers.
Euclid's algorithm computes the greatest common divisor of two integers, denoted gcd(m, n) and defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero.
Step 1 : If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2 : Divide m by n and assign the value of the remainder to r.
Step 3 : Assign the value of n to m and the value of r to n. Go to Step 1.
ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ← m mod n
m←n
n←r
return m
Ex: gcd(60, 24) can be computed as follows:
gcd(60, 24) = gcd(24, 60 mod 24) = gcd(24, 12)
gcd(24, 12) = gcd(12, 24 mod 12) = gcd(12, 0)
gcd(12, 0) = 12
Therefore, gcd(60, 24) = 12.
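A minimal Python sketch of the pseudocode above (the function name gcd_euclid is illustrative, not part of the original material):

def gcd_euclid(m, n):
    # Repeat while n is nonzero: replace (m, n) with (n, m mod n).
    while n != 0:
        m, n = n, m % n
    return m

# Example trace from above: gcd_euclid(60, 24) returns 12.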

3. Explain Consecutive integer checking methods to find the GCD of two numbers.
Consecutive integer checking algorithm for computing gcd(m, n)
Step 1 : Assign the value of min{m, n} to t.
Step 2 : Divide m by t. If the remainder of this division is 0, go to Step 3; otherwise, go to Step 4.
Step 3 : Divide n by t. If the remainder of this division is 0, return the value of t as the answer and
stop; otherwise, proceed to Step 4.
Step 4 : Decrease the value of t by 1. Go to Step 2.
Ex: For the numbers 60 and 24, the algorithm will try first 24, then 23, and so on, until it reaches
12, where it stops.
Unlike Euclid’s algorithm, this algorithm, in the form presented, does not work correctly when one
of its input numbers is zero. This example illustrates why it is so important to specify the set of an
algorithm’s inputs explicitly and carefully.
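A Python sketch of the consecutive integer checking method described above, assuming both inputs are positive (as noted, the method in this form fails if an input is zero); the name gcd_consecutive is illustrative:

def gcd_consecutive(m, n):
    # Start from t = min(m, n) and decrease t until it divides both numbers.
    t = min(m, n)
    while m % t != 0 or n % t != 0:
        t -= 1
    return t

# Example: gcd_consecutive(60, 24) returns 12 after trying 24, 23, ..., 13, 12.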

4. Explain Algorithm design and analysis process with flow diagram.


Understanding the Problem:
• The first thing we need to do before designing an algorithm is to understand the given problem completely. Read the problem's description carefully.
• The input (instance) to the problem and the range of inputs are fixed in this phase.
Decide on :
(a) Ascertaining the Capabilities of the Computational Device
(b) Choosing between Exact and Approximate Problem Solving
(c) Deciding on Appropriate Data Structures
(d) Algorithm Design Techniques
Designing an algorithm
• Now design the algorithm and specify it using one of the following notations.
• Once an algorithm has been designed, it needs to be specified in some fashion. Different notations are used for specifying algorithms:
➢ Natural language ➢ Pseudocode ➢ Flowchart
Proving an Algorithm’s Correctness:
• We have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time. For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex.
• A common technique for proving correctness is to use mathematical induction, because an algorithm’s iterations provide a natural sequence of steps needed for such proofs.
Analyzing an Algorithm:
• Analyzing an algorithm deals with the efficiency measurement.
• There are two kinds of algorithm efficiency:
➢ time efficiency, indicates how fast the algorithm runs
➢ space efficiency, indicates how much extra memory it uses.
Coding an Algorithm:
• The final step is to implement the algorithm as a computer program. As a practical matter, the validity of programs is established by testing.

5. Explain any FIVE Problem types.


(i) Sorting : The sorting problem is to rearrange the items of a given list in ascending order. We
usually need to sort lists of numbers, characters from an alphabet, character strings etc.
Eg : We can choose to sort student records in alphabetical order of names or by student number.
Such a specially chosen piece of information is called a key.
(ii) Searching : The searching problem deals with finding a given value, called a search key, in a
given set. There are plenty of algorithms to choose from. There is no single algorithm that fits all
situations. Some algorithms work faster than others but require more memory; some are very fast
but applicable to sorted arrays; and so on.
Examples :sequential search, binary search
(iii) String processing : A string is a sequence of characters from an alphabet. Strings of interest include text strings, which comprise letters, numbers, and special characters; bit strings, which comprise zeros and ones; and gene sequences, which can be modeled by strings of characters from the four-character alphabet {A, C, G, T}.
Eg : string matching.
(iv) Graph problems : A graph can be thought of as a collection of points called vertices, some of
which are connected by line segments called edges. Some of the graph problems are graph traversal,
shortest path algorithm, topological sort, traveling salesman problem and the graph-coloring
problem and so on.
(v) Geometric problems : Geometric algorithms deal with geometric objects such as points, lines, and polygons. They are used in computer graphics, robotics, and tomography. The closest-pair problem and the convex-hull problem come under this category.

6. Explain following
a. Graph problem
A graph can be thought of as a collection of points called vertices, some of which are connected by
line segments called edges. Some of the graph problems are graph traversal, shortest path algorithm,
topological sort, traveling salesman problem and the graph-coloring problem and so on.
b. Combinatorial problems
These are problems that ask, explicitly or implicitly, to find a combinatorial object—such as a
permutation, a combination, or a subset—that satisfies certain constraints. A desired combinatorial
object may also be required to have some additional property such as a maximum value or a
minimum cost. The traveling salesman problem and the graph coloring problem are examples of
combinatorial problems.
c. Geometrical problems
Geometric algorithms deal with geometric objects such as points, lines, and polygons. They are used in computer graphics, robotics, and tomography. The closest-pair problem and the convex-hull problem come under this category.

7. Explain the fundamentals of data structure.


Data structures are typically classified into two categories: linear data structures and non-linear data structures. In a linear data structure the information is sequential; in a non-linear data structure the data is not arranged in a sequence but is hierarchical, often tree- or graph-based.
Linear Data Structures:
(i) Array: A sequence of n items of the same data type that are stored contiguously in computer
memory and made accessible by specifying a value of the array’s index. The index is an integer
either between 0 and n-1 or between 1 and n.

(ii) Linked List : Sequence of zero or more elements called nodes each containing two kinds of
information: some data and one or more links called pointers to other nodes of the linked list.
In singly linked list, each node except the last one contains a single pointer to the next element.

In doubly linked list, every node except the first and the last contains pointers to both its successor
and its predecessor.
(iii) Stacks : It is a list in which insertions and deletions can be done only at the end. This end is
called the top. The structure operates in the “last-in-first-out” (LIFO) fashion.
(iv) Queue : It is a list from which elements are deleted from one end of the structure, called the front (this operation is called dequeue), and new elements are added to the other end, called the rear (this operation is called enqueue). A queue operates in the “first-in-first-out” (FIFO) fashion. (A short Python sketch of stack and queue operations follows this answer.)
Non-Linear DS:
(v) Graph : A collection of points, called vertices or nodes, some of which are connected by line segments called edges. Formally, a graph G = (V, E) is defined by a pair of two sets: a finite set V of items called vertices and a set E of pairs of these items called edges.
A graph is called “undirected” if every edge in it is undirected. A graph whose every edge is
directed is called directed.

(vi) Trees : A tree is a non-linear and hierarchical data structure where the elements are arranged in
a tree-like structure. In a tree, the topmost node is called the root node. Each node contains some
data, and data can be of any type. A tree with no more than two children is known as a binary tree.

Tree Binary Tree

(vii) Sets : A set is an unordered collection of distinct items called elements of the set. Related notions include:
(a) Universal set U: the set of all elements under consideration
(b) Subset: a set whose elements all belong to another set
(c) Bit vector: representation of a subset S of a universal set U as a bit string, in which the ith bit is 1 if the ith element of U belongs to S and 0 otherwise.
Eg : U = {1, 2, 3, 4, 5, 6, 7}
S = {2, 4, 6}
Bit vector = 0 1 0 1 0 1 0

(viii) Dictionaries: In computing, the operations we need to perform for a set or a multiset most
often are searching for a given item, adding a new item, and deleting an item from the collection. A
data structure that implements these three operations is called the dictionary.

There are quite a few ways a dictionary can be implemented. They range from an unsophisticated
use of arrays (sorted or not) to much more sophisticated techniques such as hashing and balanced
search trees.
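The following is a minimal Python sketch of the stack (LIFO) and queue (FIFO) operations described in items (iii) and (iv) above, using a Python list and collections.deque; the variable names are illustrative:

from collections import deque

stack = []                 # stack: insertions and deletions at one end (the top)
stack.append(10)           # push
stack.append(20)
print(stack.pop())         # pop -> 20 (last in, first out)

queue = deque()            # queue: add at the rear, delete from the front
queue.append(10)           # enqueue
queue.append(20)
print(queue.popleft())     # dequeue -> 10 (first in, first out)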
8. Write a note on Graph data structure.
A graph is a collection of points, called vertices or nodes, some of which are connected by line segments called edges. Formally, a graph G = (V, E) is defined by a pair of two sets: a finite set V of items called vertices and a set E of pairs of these items called edges.
A graph is called “undirected” if every edge in it is undirected. A graph whose every edge is
directed is called directed. Directed graphs are also called digraphs. A graph with weights is called
a weighted graph.
If a pair of vertices (u, v) is not the same as the pair (v, u), we say that the edge (u, v) is directed
from the vertex u, called the edge's tail, to the vertex v, called the edge's head.
A graph with every pair of its vertices connected by an edge is called complete. A graph with relatively few possible edges missing is called dense. A graph with few edges relative to the number of its vertices is called sparse.
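A short Python sketch of the two standard graph representations described in the 2-mark questions (adjacency matrix and adjacency lists), here for a small undirected graph on vertices 0..3; the edge list used is illustrative:

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # illustrative undirected edges
n = 4

# Adjacency matrix: n x n boolean matrix, entry [i][j] = 1 if edge (i, j) exists.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = 1
    matrix[v][u] = 1       # symmetric because the graph is undirected

# Adjacency lists: one list per vertex containing all adjacent vertices.
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[0][2])        # 1: there is an edge between vertices 0 and 2
print(adj[2])              # [0, 1, 3]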

9. Write a note on following data structures.


a. Tree
b. Sets
c. Dictionary
(Refer Q.No. 7)

10. Explain Space complexity and Time complexity with example.


Time Complexity: The time complexity of an algorithm quantifies the amount of time taken by an
algorithm to run as a function of the length of the input. To estimate the time complexity, we need to
consider the cost of each fundamental instruction and the number of times the instruction is
executed.
Space Complexity: The space complexity of an algorithm quantifies the amount of space taken by
an algorithm to run as a function of the length of the input. It is the amount of memory needed for
the completion of an algorithm. To estimate the memory requirement we need to focus on two
parts:
A fixed part: It is independent of the input size.
A variable part: It is dependent on the input size.
Eg : Addition of two integers.
ALGORITHM ADD(A, B)
//Description: Perform arithmetic addition of two numbers
//Input: Two integers A and B
//Output: variable C, which holds the addition of A and B
C←A+B
return C
The addition of two integers requires one addition operation, so the time complexity of this algorithm is constant: T(n) = O(1).
The addition of two integers requires one extra memory location to hold the result. Thus the space
complexity of this algorithm is constant, hence S(n) = O(1).
11. Write an algorithm find sum of two matrixes also calculate its time complexity.
ALGORITHM MatrixAddition(A[0…n-1, 0…m-1], B[0…n-1, 0…m-1])
//Adds two n-by-m matrices
// Input: Two matrices A and B of dimensions n x m
// Output: Matrix C = A + B
for i ← 0 to n -1 do
for j ← 0 to m-1 do
C[i,j] ← A[i,j] + B[i,j]
return C
The algorithm's basic operation is to access each element using two loops and add each element of matrix A to the corresponding element of matrix B, designated by indices i and j, storing the result in matrix C.
The basic operation is executed n x m times where n is the number of rows and m is the number of
columns.
Therefore, the time complexity is O(n * m).
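A plain-Python version of the MatrixAddition pseudocode above (no external libraries; matrices are represented as lists of lists):

def matrix_addition(A, B):
    # Adds two n-by-m matrices element by element: C[i][j] = A[i][j] + B[i][j].
    n, m = len(A), len(A[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            C[i][j] = A[i][j] + B[i][j]   # basic operation, executed n * m times
    return C

# Example: matrix_addition([[1, 2], [3, 4]], [[5, 6], [7, 8]]) -> [[6, 8], [10, 12]]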

12. Write an algorithm find the sum of n numbers also calculate its space and time complexity.
ALGORITHM SumOfNNumbers(A,n)
//Addition of n numbers
//Input: Array A of n numbers
//Output: Sum of the elements in A
sum ← 0
for i ← 0 to n-1 do
sum ← sum + A[i]
return sum
The time complexity is O(n), where n is the number of elements: each element in the array is added to the sum exactly once.
The space complexity is O(1) since the algorithm uses a constant amount of extra space (the sum
variable) regardless of the size of the input array.
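The same algorithm in Python, illustrating the O(n) time and O(1) extra space point made above (function name sum_of_n_numbers is illustrative):

def sum_of_n_numbers(A):
    # One addition per element: O(n) time; only the accumulator is extra: O(1) space.
    total = 0
    for x in A:
        total += x
    return total

# Example: sum_of_n_numbers([3, 1, 4, 1, 5]) returns 14.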

13. Explain the following w.r.t algorithm efficiency.


a. Measuring Input Size:
The input size in algorithms is typically denoted by a variable, often represented as "n." It can be
the number of elements in an array, the number of vertices or edges in a graph, or any other relevant
parameter.
Measuring input size is crucial for analyzing how the algorithm's performance scales with
increasing problem sizes.
b. Unit for Measuring Runtime:
The runtime of an algorithm is measured in terms of the number of basic operations (such as
comparisons, assignments, or arithmetic operations) performed during its execution.
The unit for measuring runtime is often denoted as "Big O" notation (O(f(n))), where f(n) represents
an upper bound on the growth rate of the algorithm's runtime concerning the input size.
c. Order of Growth:
The order of growth of an algorithm, expressed in Big O notation, describes the algorithm's performance as the input size approaches infinity.
Common orders of growth include O(1) (constant time), O(log n) (logarithmic time), O(n) (linear time), O(n log n) (linearithmic time), O(n²) (quadratic time), and so on.
Evaluating the order of growth helps compare the scalability of different algorithms and choose the most efficient one for a given problem size.

14.Explain Worst case, Best case and average case with example.
Worst-case : The worst-case efficiency of an algorithm is its efficiency for the worst-case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all
possible inputs of that size.
Eg: For sequential search, the worst case is when there are no matching elements or the first
matching element happens to be the last one on the list.
Cworst(n) = n
Best-case : The best-case efficiency of an algorithm is its efficiency for the best-case input of size
n, which is an input of size n for which the algorithm runs the fastest among all possible inputs of
that size.
For example, the best-case inputs for sequential search are lists of size n with their first element
equal to a search key.
Cbest(n) = 1
Average Case : The average-case efficiency of an algorithm is its efficiency for a “random” input. To analyze an algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n.
Eg: Consider again sequential search. Assuming a successful search with the key equally likely to be in any of the n positions, the average number of comparisons is
Cavg(n) = (n+1) / 2

15.Write an algorithm to perform sequential search and also calculate its Worst case, Best
case and average case complexity.
ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K
// or −1 if there are no matching elements
A[n] ← K
i←0
while A[i] ≠ K do
i←i+1
if i < n return i
else return −1
Best case : If key element K matches with the first element.
Cbest(n) = 1
Worst case : When there are no matching elements or the first matching element happens to be the
last one on the list.
Cworst(n) = n
Average case : Assuming a successful search with the key equally likely to be in any of the n positions,
Cavg(n) = (n+1) / 2
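A Python sketch of the sentinel version of sequential search given above; the input list is copied before appending the sentinel so the caller's array is not modified (a detail the pseudocode leaves implicit):

def sequential_search(A, K):
    # Append the search key as a sentinel so the loop needs no bounds check.
    A = list(A) + [K]
    i = 0
    while A[i] != K:
        i += 1
    return i if i < len(A) - 1 else -1   # -1 means K was not in the original array

# Best case: key in position 0 (1 comparison); worst case: key absent (n comparisons).
print(sequential_search([7, 3, 9, 5], 9))   # 2
print(sequential_search([7, 3, 9, 5], 4))   # -1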

16.Explain Big O notation with example.


A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0.
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
O(g(n)) is the set of all functions with a lower or same order of growth as g(n).

Example : Prove that 3n + 2 ∈ O(n).

Let f(n) = 3n + 2 and g(n) = n.
Choose c = 4 and n0 = 2: then 3n + 2 ≤ 4n for all n ≥ 2 (e.g., at n = 3: 3·3 + 2 = 11 ≤ 4·3 = 12).
∴ f(n) ∈ O(g(n))
17.Explain Big Omega notation with example.
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some
positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that
t (n) ≥ cg(n) for all n ≥ n0.
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Ω (g(n)) is the set of all functions with a higher or same order of growth as g(n).

Example : Let f(n) = 3n + 2 and g(n) = n.

Choose c = 1 and n0 = 1: then 3n + 2 ≥ n for all n ≥ 1 (e.g., at n = 1: 3·1 + 2 = 5 ≥ 1).
∴ f(n) ∈ Ω(g(n))
18. Explain Big Theta notation with example.
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and
below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Θ(g(n)) is the set of all functions that have the same order of growth as g(n).

Example : Let f(n) = 3n + 2 and g(n) = n.

Upper bound: with c1 = 4, 3n + 2 ≤ 4n for all n ≥ 2 (at n = 3: 11 ≤ 12).
Lower bound: with c2 = 1, 3n + 2 ≥ n for all n ≥ 1 (at n = 1: 5 ≥ 1).
Both bounds hold for all n ≥ 2, so f(n) ∈ Θ(g(n))

19.Explain asymptotic notations Big O, Big Ω and Big θ that are used to compare the order of
growth of an algorithm with example.
Refer Q.No. 16, 17, 18

20.Define Big O notation and prove


a) 100n + 5 = O(n²)
To prove this, we need to find constants c and n0 such that 100n + 5 ≤ c·n² for all n ≥ n0.
Let c = 105 and n0 = 1.
For all n ≥ 1, 100n + 5 ≤ 100n + 5n = 105n ≤ 105n².
Thus, 100n + 5 = O(n²).
b) 5n² + 3n + 20 = O(n²)
Let c = 28 and n0 = 1.
For all n ≥ 1, 5n² + 3n + 20 ≤ 5n² + 3n² + 20n² = 28n².
Thus, 5n² + 3n + 20 = O(n²).
c) n² + n = O(n³)
Let c = 2 and n0 = 1.
For all n ≥ 1, n² + n ≤ 2n² ≤ 2n³.
Thus, n² + n = O(n³).
d) 3n + 2 = O(n)
Let c = 5 and n0 = 1.
For all n ≥ 1, 3n + 2 ≤ 3n + 2n = 5n.
Thus, 3n + 2 = O(n).
e) 1000n² + 100n − 6 = O(n²)
Let c = 1100 and n0 = 1.
For all n ≥ 1, 1000n² + 100n − 6 ≤ 1000n² + 100n² = 1100n².
Thus, 1000n² + 100n − 6 = O(n²).

16. Define Big Omega notation and prove


a) n³ ∈ Ω(n²)
To prove this, find constants c and n0 such that n³ ≥ c·n² for all n ≥ n0.
Let c = 1 and n0 = 1.
For all n ≥ 1, n³ ≥ n².
Thus, n³ ∈ Ω(n²).
b) 2n + 3 = Ω(n)
Let c = 1 and n0 = 1.
For all n ≥ 1, 2n + 3 ≥ n.
Thus, 2n + 3 ∈ Ω(n).
c) ½ n(n − 1) ∈ Ω(n²)
Let f(n) = ½ n(n − 1) and g(n) = n².
To prove this, we need to find positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.

1. Simplify f(n): ½ n(n − 1) = ½ n² − ½ n.
2. For n ≥ 2 we have ½ n ≤ ¼ n², so ½ n² − ½ n ≥ ½ n² − ¼ n² = ¼ n².
3. Set c = ¼ and n0 = 2.
Then for all n ≥ 2, ½ n(n − 1) ≥ ¼ n², which confirms that f(n) is in Ω(n²). Thus, ½ n(n − 1) ∈ Ω(n²).
d) n³ + 4n² = Ω(n²)
Let c = 1 and n0 = 1.
For all n ≥ 1, n³ + 4n² ≥ n².
Thus, n³ + 4n² ∈ Ω(n²).

17. Define Big Theta notation and prove


a) n² + 5n + 7 = Θ(n²)
To prove this, we must bound the function both above and below by constant multiples of n².
Upper bound: for all n ≥ 1, n² + 5n + 7 ≤ n² + 5n² + 7n² = 13n².
Lower bound: for all n ≥ 1, n² + 5n + 7 ≥ n².
Thus, with c1 = 13, c2 = 1, and n0 = 1, n² + 5n + 7 = Θ(n²).
b) ½ n² + 3n = Θ(n²)
Upper bound: for all n ≥ 1, ½ n² + 3n ≤ ½ n² + 3n² = (7/2) n².
Lower bound: for all n ≥ 1, ½ n² + 3n ≥ ½ n².
Thus, with c1 = 7/2, c2 = ½, and n0 = 1, ½ n² + 3n = Θ(n²).

c) ½ n(n − 1) ∈ Θ(n²)
Let f(n) = ½ n(n − 1) and g(n) = n². We want to show that f(n) is in Θ(n²).
To prove this, we need to find positive constants c1, c2, and n0 such that c2·g(n) ≤ f(n) ≤ c1·g(n) for all n ≥ n0.
Lower Bound:
½ n(n − 1) = ½ n² − ½ n ≥ ½ n² − ¼ n² = ¼ n² for all n ≥ 2.
Upper Bound:
½ n(n − 1) = ½ n² − ½ n ≤ ½ n² ≤ n² for all n ≥ 0.
Therefore, ½ n(n − 1) belongs to Θ(n²) with c2 = ¼, c1 = 1, and n0 = 2.

18. Explain with example mathematical analysis of non-recursive algorithm.


General Plan for Analyzing Time Efficiency of Nonrecursive Algorithms :
(i).Decide on a parameter (or parameters) indicating an input's size.
(ii) Identify the algorithm's basic operation. (As a rule, it is located in its inner- most loop.)
(iii) Check whether the number of times the basic operation is executed depends only on the size of
an input. If it also depends on some additional property, the worst-case, average-case, and, if
necessary, best-case efficiencies have to be investigated separately.
(iv) Set up a sum expressing the number of times the algorithm's basic operation is executed.
(v) Using standard formulas and rules of sum manipulation, either find a closed- form formula for
the count or, at the very least, establish its order of growth.
Example : Refer Q.No. 19
19. Write an algorithm to Find the largest element in an array and also perform mathematical
analysis.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ←1 to n − 1 do
if A[i] > maxval
maxval ← A[i]
return maxval
Analysis:
o Parameter to be considered is n (size of input).
o The operations that are going to be executed most often are in the algorithm’s for loop.
There are two operations in the loop’s body:
➢ the comparison A[i]> maxval
➢ the assignment maxval←A[i].
Since the comparison is executed on each repetition of the loop, we should consider it to be the
algorithm’s basic operation.
o There is no need to distinguish among the worst, average, and best cases here since the number of
comparisons will be the same for all arrays of size n.
o Let C(n) denote the no. of times comparisons made.
The algorithm makes one comparison on each execution of the loop
And this is repeated once for each value of the loop’s variable i
The variable i is within the bounds 1 and n − 1.
Therefore, we get
C(n) = Σ_{i=1}^{n−1} 1 = n − 1
(based on the summation formula Σ_{i=m}^{u} 1 = u − m + 1 for m ≤ u).
Hence C(n) ∈ Θ(n).
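The same algorithm in Python, with the comparison as the basic operation (executed n − 1 times):

def max_element(A):
    # Scans the array once, keeping the largest value seen so far.
    maxval = A[0]
    for i in range(1, len(A)):
        if A[i] > maxval:        # basic operation: one comparison per iteration
            maxval = A[i]
    return maxval

# Example: max_element([3, 9, 2, 7]) returns 9 after 3 comparisons.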

20. Write an algorithm to Checking for Unique elements in an array and also perform
mathematical analysis.

ALGORITHM UniqueElements(A[0..n − 1])


//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct and “false” otherwise
for i←0 to n−2 do
for j←i+1 to n−1 do
if A[i] = A[j] return false
return true

Analysis:
o Parameter to be considered is n (size of input).
o Since the innermost loop contains a single operation (the comparison of two elements), we should
consider it as the algorithm’s basic operation.
o The number of element comparisons depends not only on n but also on whether there are equal
elements in the array and, if there are, which array positions they occupy. We will limit our
investigation to the worst case only. An inspection of the innermost loop reveals that there are two
kinds of worst-case inputs:
➢ arrays with no equal elements
➢ arrays in which the last two elements are the only pair of equal elements.
o Here, one comparison is made for each repetition of the innermost loop, i.e., for each value of the loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop, i.e., for each value of the loop variable i between its limits 0 and n − 2.
Accordingly, we get
Cworst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2 ∈ Θ(n²).
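A direct Python translation of UniqueElements, in which the element comparison in the inner loop is the basic operation:

def unique_elements(A):
    # Compares every pair (i, j) with i < j; worst case n(n-1)/2 comparisons.
    n = len(A)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if A[i] == A[j]:       # basic operation
                return False
    return True

# Example: unique_elements([1, 2, 3]) is True; unique_elements([1, 2, 1]) is False.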

21. Write an algorithm to perform matrix multiplication and also perform mathematical
analysis.
ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ←0 to n – 1 do
for j ←0 to n – 1 do
C[i, j ] ← 0.0
for k←0 to n − 1 do
C[i, j ] ← C[i, j ] + A[i, k] * B[k, j]
return C

Analysis:
o Parameter to be considered is n (size of input).
o There are two arithmetical operations in the innermost loop here: multiplication and addition. We
consider multiplication as the basic operation.
o Let M(n) be the total number of multiplications executed by the algorithm.
o There is just one multiplication executed on each repetition of the algorithm’s innermost loop, which is governed by the variable k ranging from the lower bound 0 to the upper bound n − 1. Summing over the three nested loops,
M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1 = n³, so the time complexity is Θ(n³).
22. Write a non-recursive algorithm to Count the number of bits in a number. And also
perform mathematical analysis.
ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ←1
while n > 1 do
count ← count + 1
n ← ⌊n / 2⌋
return count

Analysis:

o The most frequently executed operation here is not inside the while loop but rather the
comparison n > 1 that determines whether the loop’s body will be executed.
o Loop variable takes on only a few values between its lower and upper limits; therefore, we have
to use an alternative way of computing the number of times the loop is executed. Since the value of
n is about halved on each repetition of the loop, the answer should be about log₂ n.
o The exact formula for the number of times the comparison n > 1 will be executed is ⌊log₂ n⌋ + 1.
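A Python sketch of the nonrecursive bit-count algorithm above, using floor division for ⌊n/2⌋ (the function name count_binary_digits is illustrative):

def count_binary_digits(n):
    # Counts the digits in the binary representation of a positive integer n.
    count = 1
    while n > 1:
        count += 1
        n = n // 2            # floor of n/2
    return count

# Example: count_binary_digits(13) returns 4, since 13 is 1101 in binary,
# and the comparison n > 1 is executed floor(log2(13)) + 1 = 4 times.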

23. List the steps for analyzing the time efficiency of recursive algorithm.
General Plan for Analyzing Time Efficiency of Recursive Algorithms
(i) Decide on a parameter (or parameters) indicating an input's size.
(ii) Identify the algorithm's basic operation.
(iii) Check whether the number of times the basic operation is executed can vary on different inputs
of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be
investigated separately.
(iv) Set up a recurrence relation, with an appropriate initial condition, for the number of times the
basic operation is executed.
(v) Solve the recurrence or at least ascertain the order of growth of its solution.

23. Explain with example mathematical analysis of recursive algorithm.


Refer Q.No. 24
24. Write an algorithm to find the factorial of a number using recursion and also perform
mathematical analysis.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0
return 1
else
return F(n − 1) * n

Analysis:
The basic operation is multiplication. Let M(n) be the number of multiplications needed to compute F(n). The recurrence relation is
M(n) = M(n − 1) + 1 for n > 0, with M(0) = 0.
Solving by backward substitution: M(n) = M(n − 1) + 1 = M(n − 2) + 2 = ... = M(0) + n = n.
Therefore, M(n) = n ∈ Θ(n).
25, Write an algorithm to perform Towers of Hanoi using recursion and also perform
mathematical analysis.
The problem has an elegant recursive solution,

o To move n>1 disks from peg 1 to peg 3 (with peg 2 as auxiliary),


➢ we first move recursively n − 1 disks from peg 1 to peg 2 (with peg 3 as auxiliary)
➢ then move the largest disk directly from peg 1 to peg 3
➢ finally, we move recursively n − 1 disks from peg 2 to peg 3 (using peg 1 as auxiliary).
o if n = 1, we simply move the single disk directly from the source peg to the destination peg.
Analysis:
The basic operation is moving one disk. Let M(n) be the number of moves required for n disks. The recurrence relation is
M(n) = 2M(n − 1) + 1 for n > 1, with M(1) = 1.
Solving by backward substitution: M(n) = 2M(n − 1) + 1 = 2²M(n − 2) + 2 + 1 = ... = 2ⁿ⁻¹M(1) + (2ⁿ⁻¹ − 1) = 2ⁿ − 1.
Therefore, M(n) = 2ⁿ − 1 ∈ Θ(2ⁿ).
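A recursive Python sketch of the Tower of Hanoi solution described above; it returns the number of moves performed, which matches the recurrence M(n) = 2M(n − 1) + 1 (the peg labels are illustrative):

def hanoi(n, source, destination, auxiliary):
    # Moves n disks from source to destination using auxiliary; returns the move count.
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return 1
    moves = hanoi(n - 1, source, auxiliary, destination)      # step 1
    print(f"Move disk {n} from {source} to {destination}")    # step 2
    moves += 1
    moves += hanoi(n - 1, auxiliary, destination, source)     # step 3
    return moves

# Example: hanoi(3, 1, 3, 2) prints 7 moves, and 2**3 - 1 = 7.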
26. State the recursive algorithm to count the bits of a decimal number in its binary
representation. Give its mathematical analysis.
ALGORITHM CountBits(n)
//Input: Decimal number n
//Output: Number of bits in the binary representation of n
if n = 0, return 0
else return 1 + CountBits(⌊n / 2⌋)
Analysis:
Let T(n) represent the time complexity of the algorithm for an input n.
The recurrence relation is given by: T(n) = T(⌊n/2⌋) + 1
The algorithm divides the problem into subproblems by recursively calling itself with the input
divided by 2. The base case is when n becomes 0, and the recursion stops. The time complexity of
this algorithm is logarithmic, and it can be expressed as T(n) = O(log n).

27.Consider the following algorithm.


Algorithm GUESS(A[] [])
for i ← 0 to n-1
for j ← 0 to i
A[i][j] ← 0
i) What does the algorithm compute?
ii)What it its basic operation?
iii) What is the efficiency of this algorithm?
(i) The algorithm initializes the lower triangular part of a square matrix 'A' with zeros. It iterates
over the lower triangular elements (including the diagonal) and sets them to zero.
(ii) The basic operation of this algorithm is the assignment operation.
(iii) Let 'n' be the size of the square matrix 'A'. The algorithm uses two nested loops to iterate over
the lower triangular elements, and for each element, it performs a constant-time operation (setting
the element to zero).
The time complexity can be expressed as follows:

T(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{i} O(1)

The double summation represents the two nested loops: the inner loop runs from 0 to i, and the outer loop runs from 0 to n − 1. The constant-time operation is O(1) for each element assignment.
Simplifying the expression:

T(n) = Σ_{i=0}^{n−1} (i + 1) · O(1)

T(n) = O(Σ_{i=0}^{n−1} (i + 1))

T(n) = O(n(n + 1)/2)

Therefore, the time complexity is O(n²).
27. Solve the following recurrence relation.
a) x(n) = x(n-1) + 5 for n >1 , x(1) =0
b) x(n) = 3x(n-1) for n > 1 , x(1)=4
c) x(n) = x(n-1) + n for n > 0, x(0) = 0

a) x(n) = x(n-1) + 5 for n > 1, x(1) = 0

The initial condition is x(1) = 0.
Solution (backward substitution):
x(n) = x(n − 1) + 5
     = x(n − 2) + 2·5
     = x(n − 3) + 3·5
     ...
     = x(1) + 5(n − 1)
Since x(1) = 0, this simplifies to: x(n) = 5(n − 1).

b) x(n) = 3x(n-1) for n > 1

The initial condition is x(1) = 4.
Solution (backward substitution):
x(n) = 3x(n − 1)
     = 3²x(n − 2)
     = 3³x(n − 3)
     ...
     = 3^(n−1) x(1)
Since x(1) = 4: x(n) = 4 · 3^(n−1).
c) x(n) = x(n-1) + n for n > 0, x(0) = 0
The initial condition is x(0) = 0.
Solution (backward substitution):
x(n) = x(n − 1) + n
     = x(n − 2) + (n − 1) + n
     = x(n − 3) + (n − 2) + (n − 1) + n
     ...
     = x(0) + 1 + 2 + ... + n
Since x(0) = 0:
x(n) = 1 + 2 + ... + n = n(n + 1)/2.
28. Find the time complexity of the algorithm below (an iterative Fibonacci algorithm whose single loop runs from 2 to n).

Analysis:
The algorithm has a single loop that iterates from 2 to n.
Inside the loop, there are constant-time operations (addition and assignments) that do not depend on
the input size.
The loop runs n - 1 times (from 2 to n).
Time Complexity:
Let's denote the time complexity as T(n). The time complexity is determined by the loop, which
iterates n - 1 times.
T(n) = O(n)
Therefore, the time complexity of the given Fibonacci algorithm is linear, denoted by O(n).
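Since the algorithm itself is not reproduced above, the following is one plausible iterative Fibonacci routine matching the description (a single loop from 2 to n with constant-time work per iteration); treat the exact details as an assumption:

def fibonacci(n):
    # Iterative Fibonacci: F(0) = 0, F(1) = 1, loop from 2 to n.
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):   # runs n - 1 times, hence O(n)
        prev, curr = curr, prev + curr
    return curr

# Example: fibonacci(10) returns 55.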
