
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
REGULATION R2021

II YEAR - IV SEMESTER

CS3401 – ALGORITHMS

COURSE OBJECTIVES
• To understand and apply the algorithm analysis techniques on searching and sorting algorithms
• To critically analyze the efficiency of graph algorithms
• To understand different algorithm design techniques
• To solve programming problems using state space tree
• To understand the concepts behind NP Completeness, Approximation algorithms and randomized algorithms.

UNIT I INTRODUCTION

Algorithm analysis: Time and space complexity - Asymptotic Notations and its
properties Best case, Worst case and average case analysis – Recurrence relation:
substitution method - Lower bounds – searching: linear search, binary search and
Interpolation Search, Pattern search: The naïve string-matching algorithm - Rabin-Karp
algorithm - Knuth-Morris-Pratt algorithm. Sorting: Insertion sort – heap sort

UNIT II GRAPH ALGORITHMS

Graph algorithms: Representations of graphs - Graph traversal: DFS – BFS - applications - Connectivity, strong connectivity, bi-connectivity - Minimum spanning tree: Kruskal’s and Prim’s algorithm - Shortest path: Bellman-Ford algorithm - Dijkstra’s algorithm - Floyd-Warshall algorithm - Network flow: Flow networks - Ford-Fulkerson method – Matching: Maximum bipartite matching

UNIT III ALGORITHM DESIGN TECHNIQUES

Divide and Conquer methodology: Finding maximum and minimum - Merge sort -
Quick sort Dynamic programming: Elements of dynamic programming — Matrix-chain
multiplication - Multi stage graph — Optimal Binary Search Trees. Greedy Technique:
Elements of the greedy strategy - Activity-selection problem –- Optimal Merge pattern
— Huffman Trees.

UNIT IV STATE SPACE SEARCH ALGORITHMS

Backtracking: n-Queens problem - Hamiltonian Circuit Problem - Subset Sum Problem – Graph colouring problem. Branch and Bound: Solving 15-Puzzle problem - Assignment problem - Knapsack Problem - Travelling Salesman Problem
UNIT V NP-COMPLETE AND APPROXIMATION ALGORITHM

Tractable and intractable problems: Polynomial time algorithms – Venn diagram representation - NP-algorithms - NP-hardness and NP-completeness – Bin Packing problem - Problem reduction: TSP – 3-CNF problem. Approximation Algorithms: TSP - Randomized Algorithms: concept and application - primality testing - randomized quick sort - Finding kth smallest number

45 PERIODS

COURSE OUTCOMES: Upon completion of the course, the students will be able to
CO1: Analyze the efficiency of algorithms using various frameworks
CO2: Apply graph algorithms to solve problems and analyze their efficiency.
CO3: Make use of algorithm design techniques like divide and conquer, dynamic
programming and greedy techniques to solve problems
CO4: Use the state space tree method for solving problems.
CO5: Solve problems using approximation algorithms and randomized algorithms

TEXT BOOKS:
1. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, "Introduction to Algorithms", 3rd Edition, Prentice Hall of India, 2009.
2. Ellis Horowitz, Sartaj Sahni, Sanguthevar Rajasekaran, "Computer Algorithms/C++", 2nd Edition, Orient Blackswan, 2019.

REFERENCES:
1. Anany Levitin, "Introduction to the Design and Analysis of Algorithms", 3rd Edition, Pearson Education, 2012.
2. Alfred V. Aho, John E. Hopcroft and Jeffrey D. Ullman, "Data Structures and Algorithms", Reprint Edition, Pearson Education, 2006.
3. S. Sridhar, "Design and Analysis of Algorithms", Oxford University Press, 2014.
UNIT I INTRODUCTION

Algorithm analysis: Time and space complexity - Asymptotic Notations and its
properties Best case, Worst case and average case analysis – Recurrence relation:
substitution method - Lower bounds – searching: linear search, binary search and
Interpolation Search, Pattern search: The naïve string-matching algorithm - Rabin-Karp
algorithm - Knuth-Morris-Pratt algorithm. Sorting: Insertion sort – heap sort

ALGORITHM ANALYSIS: TIME AND SPACE COMPLEXITY (C214.1, PO-1,2,3,4,5,11,12)
Algorithm analysis is the process of determining the time and space complexity of an
algorithm, which are measures of the algorithm's efficiency. Time complexity refers to
the amount of time it takes for an algorithm to run as a function of the size of the input,
and is typically expressed using big O notation. Space complexity refers to the amount of
memory required by an algorithm as a function of the size of the input, and is also
typically expressed using big O notation.
To analyze the time complexity of an algorithm, we need to consider the number of
operations performed by the algorithm, and how the number of operations changes as the
size of the input increases. This can be done by counting the number of basic operations
performed in the algorithm, such as comparisons, assignments, and function calls. The
number of basic operations is then used to determine the algorithm's time complexity
using big O notation.
To analyze the space complexity of an algorithm, we need to consider the amount of
memory used by the algorithm, and how the amount of memory used changes as the size
of the input increases. This can be done by counting the number of variables used by the
algorithm, and how the number of variables used changes as the size of the input
increases. The amount of memory used is then used to determine the algorithm's space
complexity using big O notation.
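As a minimal illustration (a hypothetical function, not part of the original notes), consider summing the elements of an array in C:

/* Hypothetical example: sum the elements of an array.
 * Time: the loop body executes n times and everything else is constant,
 * so the operation count is c0*n + c1, i.e. O(n) time.
 * Space: only two extra variables (sum, i) are used regardless of n,
 * so the extra space is O(1). */
long sumArray(const int a[], int n)
{
    long sum = 0;                  /* one assignment: constant time        */
    for (int i = 0; i < n; i++)    /* n+1 comparisons, n increments: O(n)  */
        sum += a[i];               /* n additions and assignments: O(n)    */
    return sum;                    /* one return: constant time            */
}

Counting the loop's operations gives a linear function of n, hence O(n) time; counting the variables gives a constant, hence O(1) extra space.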
It's important to note that analyzing the time and space complexity of an algorithm is a
way to evaluate the efficiency of an algorithm and trade-off between time and space, but
it is not a definitive measure of the actual performance of the algorithm, as it depends on
the specific implementation of the algorithm, the computer and the input.
Time and Space Complexity
Time complexity is a measure of how long an algorithm takes to run as a function of the size of the input. It is typically expressed using big O notation, which describes the upper bound on the growth of the time required by the algorithm. For example, an algorithm with a time complexity of O(n) has a running time that grows at most linearly with the input size n.
There are different types of time complexities:
• O(1) or constant time: the algorithm takes the same amount of time to run
regardless of the size of the input.
• O(log n) or logarithmic time: the algorithm's running time increases
logarithmically with the size of the input.
• O(n) or linear time: the algorithm's running time increases linearly with the size of
the input.
• O(n log n) or linearithmic time: the algorithm's running time grows in proportion to n multiplied by log n; this is typical of efficient comparison-based sorts such as merge sort and heap sort.
• O(n^2) or quadratic time: the algorithm's running time increases quadratically
with the size of the input.

• O(2^n) or exponential time: the algorithm's running time increases exponentially


with the size of the input.
Space complexity, on the other hand, is a measure of how much memory an algorithm
uses as a function of the size of the input. Like time complexity, it is typically expressed
using big O notation. For example, an algorithm with a space complexity of O(n) uses
more memory as the input size (n) increases. Space complexities are generally
categorized as:
• O(1) or constant space: the algorithm uses the same amount of memory regardless
of the size of the input.
• O(n) or linear space: the algorithm's memory usage increases linearly with the
size of the input.
• O(n^2) or quadratic space: the algorithm's memory usage increases quadratically
with the size of the input.
• O(2^n) or exponential space: the algorithm's memory usage increases
exponentially with the size of the input.

Part A
Q.no Questions CO PO
1. What is time complexity? C214.1 1,2
2. Define space complexity in an algorithm. C214.1 1,3
3. What is Big O notation? C214.1 1,3
4. Compare O(n) and O(n²) time complexity with examples. C214.1 1,3
5. What is the difference between worst-case, best-case, and average-case time complexity? C214.1 1,3
Part B
1. Explain different time complexities (O(1), O(log n), O(n), O(n log n), O(n²), O(2^n)) with real-world examples. C214.1 1,2,3,5,11
2. Discuss the trade-offs between time and space complexity with examples. C214.1 1,3,5
Part C
1. Explain time and space complexity with Big O notation. Provide examples for different complexities and analyze their efficiency. C214.1 1,2,3,4,5,11,12

ASYMPTOTIC NOTATION AND ITS PROPERTIES: BEST CASE, WORST CASE AND AVERAGE CASE ANALYSIS (C214.1, PO-1,2,3,4,5,11,12)

Asymptotic notation is a mathematical notation used to describe the behavior of an algorithm as the size of the input (usually denoted by n) becomes arbitrarily large. The most commonly used asymptotic notations are big O, big Ω, and big Θ.
• Big O notation (O(f(n))) provides an upper bound on the growth of a function. It is commonly used to describe the worst-case time or space complexity of an algorithm. For example, an algorithm with a time complexity of O(n^2) has a running time that grows at most proportionally to n^2, where n is the size of the input.
• Big Ω notation (Ω(f(n))) provides a lower bound on the growth of a function. It is commonly used to describe the best-case time or space complexity of an algorithm. For example, an algorithm with a space complexity of Ω(n) has a memory usage that grows at least proportionally to n, where n is the size of the input.
• Big Θ notation (Θ(f(n))) provides a tight bound on the growth of a function: the function grows at the same rate as f(n), up to constant factors. For example, an algorithm with a time complexity of Θ(n log n) has a running time that is both O(n log n) and Ω(n log n), where n is the size of the input.
It's important to note that asymptotic notation only describes the behavior of the function for large values of n, and does not provide information about the exact behavior of the function for small values of n. Also, when the upper and lower bounds coincide, the notation simplifies to O(f(n)) = Ω(f(n)) = Θ(f(n)).

Additionally, these notations can be used to compare the efficiency of different


algorithms, where a lower order of the function is considered more efficient. For
example, an algorithm with a time complexity of O(n) is more efficient than an
algorithm with a time complexity of O(n^2).
It's also worth mentioning that asymptotic notation is not only limited to time and space
complexity but can be used to express the behavior of any function, not just algorithms.
There are three asymptotic notations that are used to represent the time complexity of an
algorithm. They are:
• Θ Notation (theta)
• Big O Notation
• Ω Notation
Before learning about these three asymptotic notation, we should learn about the best,
average, and the worst case of an algorithm.
Best case, Average case, and Worst case
An algorithm can take different amounts of time for different inputs: it may take 1 second for some input and 10 seconds for another.
For example: we have one array named "arr" and an integer "k", and we need to find whether the integer "k" is present in the array "arr" or not. If the integer is there, then return 1, otherwise return 0. Try to make an algorithm for this question.
The following information can be extracted from the above question:
• Input: Here our input is an integer array of size "n" and we have one integer "k"
that we need to search for in that array.
• Output: If the element "k" is found in the array, then we have to return 1; otherwise, we have to return 0.
Now, one possible solution for the above problem can be linear search i.e. we will
traverse each and every element of the array and compare that element with "k". If it is
equal to "k" then return 1, otherwise, keep on comparing for more elements in the array
and if you reach at the end of the array and you did not find any element, then return 0.
/*
 * @type of arr: integer array
 * @type of n: integer (size of integer array)
 * @type of k: integer (integer to be searched)
 */
int searchK(int arr[], int n, int k)
{
    // for-loop to iterate over each element in the array
    for (int i = 0; i < n; ++i)
    {
        // check if the ith element is equal to "k" or not
        if (arr[i] == k)
            return 1; // return 1, if you find "k"
    }
    return 0; // return 0, if you didn't find "k"
}

/*
 * [Explanation]
 * i = 0            --> will be executed once
 * i < n            --> will be executed n+1 times
 * i++              --> will be executed n times
 * if (arr[i] == k) --> will be executed n times
 * return 1         --> will be executed once (if "k" is there in the array)
 * return 0         --> will be executed once (if "k" is not there in the array)
 */
Each statement in code takes constant time, let's say "C", where "C" is some constant. So, whenever we declare an integer, change the value of some variable, or compare two variables, it takes constant time. So, if a statement takes "C" amount of time and is executed "N" times, it will take C*N amount of time. Now, think of the following inputs to the above algorithm that we have just written:
NOTE: Here we assume that each statement takes 1 second to execute.
• If the input array is [1, 2, 3, 4, 5] and you want to find if "1" is present in the array
or not, then the if-condition of the code will be executed 1 time and it will find that the
element 1 is there in the array. So, the if-condition will take 1 second here.
• If the input array is [1, 2, 3, 4, 5] and you want to find if "3" is present in the array
or not, then the if-condition of the code will be executed 3 times and it will find that the
element 3 is there in the array. So, the if-condition will take 3 seconds here.
• If the input array is [1, 2, 3, 4, 5] and you want to find if "6" is present in the array
or not, then the if-condition of the code will be executed 5 times and it will find that the
element 6 is not there in the array and the algorithm will return 0 in this case. So, the if-
condition will take 5 seconds here.
As we can see that for the same input array, we have different time for different values of
"k". So, this can be divided into three cases:
• Best case: This is the lower bound on running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the above example, our array was [1, 2, 3, 4, 5] and we are finding if "1" is present in the array or not. So here, after only one comparison, we will know that the element is present in the array. So, this is the best case of our algorithm.

• Average case: We calculate the running time for all possible inputs, sum all the
calculated values and divide the sum by the total number of inputs. We must know (or
predict) distribution of cases.
• Worst case: This is the upper bound on running time of an algorithm. We must
know the case that causes the maximum number of operations to be executed. In our
example, the worst case can be if the given array is [1, 2, 3, 4, 5] and we try to find if
element "6" is present in the array or not. Here, the if-condition of our loop will be
executed 5 times and then the algorithm will give "0" as output.
So, we learned about the best, average, and worst case of an algorithm. Now, let's get
back to the asymptotic notation where we saw that we use three asymptotic notation to
represent the complexity of an algorithm i.e. Θ Notation (theta), Ω Notation, Big O
Notation.
NOTE: In the asymptotic analysis, we generally deal with large input size.
Θ Notation (theta)
The Θ Notation is used to express a tight bound on an algorithm's running time i.e. it defines an upper bound and a lower bound, and your algorithm's running time will lie between these two levels. So, if a function is g(n), then the theta representation is shown as Θ(g(n)) and the relation is shown as:
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤
c2g(n) for all n ≥ n0 }
The above expression can be read as theta of g(n) is defined as set of all the functions
f(n) for which there exists some positive constants c1, c2, and n0 such that c1*g(n) is
less than or equal to f(n) and f(n) is less than or equal to c2*g(n) for all n that is greater
than or equal to n0.
For example:
if f(n) = 2n² + 3n + 1 and g(n) = n²
then for c1 = 2, c2 = 6, and n0 = 1, we can say that f(n) = Θ(n²)
Ω Notation
The Ω notation denotes the lower bound of an algorithm i.e. the time taken by the
algorithm can't be lower than this. In other words, this is the fastest time in which the
algorithm will return a result.

It is the time taken by the algorithm when provided with its best-case input. So, if a function is g(n), then the omega representation is shown as Ω(g(n)) and the relation is shown as:
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n
≥ n0 }
The above expression can be read as omega of g(n) is defined as set of all the functions
f(n) for which there exist some constants c and n0 such that c*g(n) is less than or equal
to f(n), for all n greater than or equal to n0.
if f(n) = 2n² + 3n + 1 and g(n) = n²
then for c = 2 and n0 = 1, we can say that f(n) = Ω(n²)
Big O Notation
The Big O notation defines the upper bound of any algorithm i.e. your algorithm can't take more time than this. In other words, we can say that the big O notation denotes the maximum time taken by an algorithm, or the worst-case time complexity of an algorithm. Hence, big O notation is the most used notation for the time complexity of an algorithm. So, if a function is g(n), then the big O representation of g(n) is shown as O(g(n)) and the relation is shown as:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n
≥ n0 }
The above expression can be read as Big O of g(n) is defined as a set of functions f(n)
for which there exist some constants c and n0 such that f(n) is greater than or equal to 0
and f(n) is smaller than or equal to c*g(n) for all n greater than or equal to n0.
if f(n) = 2n² + 3n + 1 and g(n) = n²
then for c = 6 and n0 = 1, we can say that f(n) = O(n²)
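As a quick check of these constants: 2n² + 3n + 1 ≤ 2n² + 3n² + n² = 6n² for all n ≥ 1, since 3n ≤ 3n² and 1 ≤ n² once n ≥ 1, and f(n) ≥ 0 holds trivially. The same style of bounding also justifies the constants c1 = 2 and c2 = 6 used in the Θ example above.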

Big O notation examples of algorithms

Big O notation is the most used notation to express the time complexity of an algorithm. In this section, we will find the big O notation of various algorithms.
Example 1: Finding the sum of the first n numbers.
In this example, we have to find the sum of the first n numbers. For example, if n = 4, then our output should be 1 + 2 + 3 + 4 = 10. If n = 5, then the output should be 1 + 2 + 3 + 4 + 5 = 15. Let's try various solutions to this problem and compare them.
O(1) solution
// function taking input "n"
int findSum(int n)
{
return n * (n+1) / 2; // this will take some constant time c1
}
In the above code, there is only one statement, and we know that a statement takes constant time to execute. The basic idea is that if a statement takes constant time, then it will take the same amount of time for all input sizes, and we denote this as O(1).
O(n) solution
In this solution, we will run a loop from 1 to n and we will add these values to a variable
named "sum".
// function taking input "n"
int findSum(int n)
{
    int sum = 0;                 // --> takes some constant time "c1"
    for (int i = 1; i <= n; ++i) // --> the comparison and increment take place n times (c2*n), and the creation of i takes some constant time
        sum = sum + i;           // --> this statement is executed n times i.e. c3*n
    return sum;                  // --> takes some constant time "c4"
}
/*
 * Total time taken = time taken by all the statements to execute
 * Here we have 3 constant-time statements i.e. "sum = 0", "i = 1", and "return sum",
   so we can add all the constants and replace them with some new constant "c"
 * Apart from this, we have two statements running n times i.e. "i <= n (in fact n+1 times)"
   and "sum = sum + i" i.e. c2*n + c3*n = c0*n
 * Total time taken = c0*n + c
 */
The big O notation of the above code is O(c0*n) + O(c), where c and c0 are constants. So, the overall time complexity can be written as O(n).
O(n²) solution
In this solution, we will increment the value of the sum variable "i" times i.e. for i = 1, the sum variable will be incremented once i.e. sum = 1. For i = 2, the sum variable will be incremented twice. So, let's see the solution.
// function taking input "n"
int findSum(int n)
{
    int sum = 0;                     // --> constant time
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= i; ++j)
            sum++;                   // --> runs n * (n + 1) / 2 times in total
    return sum;                      // --> constant time
}
/*
 * Total time taken = time taken by all the statements to execute
 * The statement that is executed most of the time is "sum++" i.e. n * (n + 1) / 2 times
 * So, the total complexity will be: c1*n² + c2*n + c3 [c1 is for the constant terms of n²,
   c2 is for the constant terms of n, and c3 is for the rest of the constant time]
 */
The big O notation of the above algorithm is O(c1*n²) + O(c2*n) + O(c3). Since we take the highest order of growth in big O, our expression reduces to O(n²).

So far, we have seen three solutions for the same problem. Now, which algorithm would you prefer to use when finding the sum of the first "n" numbers? We would prefer the O(1) solution, because the time taken by the algorithm will be constant irrespective of the input size.

Part A
Q.no Questions CO PO
1. What is asymptotic notation? C214.1 1
2. Define Big O notation with an example. C214.1 1,2
3. What is the difference between O(n) and O(n²) complexities? C214.1 1,3
4. Explain the significance of Big Ω notation. C214.1 1,3
5. What do you mean by Best-case complexity? C214.1 1,3
6. How is average-case complexity determined? C214.1 1,2
7. Define Worst-case complexity with an example. C214.1 1,3
Part B
1. Explain the different asymptotic notations: Big O, Big Ω, and Big Θ with suitable examples. C214.1 1,2,3,5,11
2. Compare Best case, Worst case, and Average case complexities with examples. C214.1 1,3,5,11
3. Analyze the time complexity of Linear Search and Binary Search. C214.1 1,2,3,4
Part C
1. Explain asymptotic analysis in detail. Discuss different asymptotic notations with examples and compare their significance. C214.1 1,2,3,4,5,11,12

RECURRENCE RELATION (C214.1, PO-1,2,3,4,5,11,12)


A recurrence relation is a mathematical equation that describes the relation between the
input size and the running time of a recursive algorithm. It expresses the running time of
a problem in terms of the running time of smaller instances of the same problem.
A recurrence relation for a divide-and-conquer algorithm typically has the form T(n) = aT(n/b) + f(n) where:
• T(n) is the running time of the algorithm on an input of size n
• a is the number of recursive calls made by the algorithm
• n/b is the size of the input passed to each recursive call
• f(n) is the time required to perform the non-recursive operations (dividing and combining)
The recurrence relation can be used to determine the time complexity of the algorithm
using techniques such as the Master Theorem or Substitution Method.
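For example (a minimal worked sketch, not from the original notes), consider the Merge Sort recurrence T(n) = 2T(n/2) + cn. With the substitution method, we guess T(n) ≤ k*n*log n and verify:
T(n) = 2T(n/2) + cn ≤ 2*k*(n/2)*log(n/2) + cn = k*n*log n - k*n + cn ≤ k*n*log n, whenever k ≥ c,
so T(n) = O(n log n). The Master Theorem gives the same answer directly, since a = 2, b = 2 and f(n) = cn = Θ(n^(log_b a)).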
For example, let's consider the problem of computing the nth Fibonacci number. A
simple recursive algorithm for solving this problem is as follows:

Fibonacci(n)
   if n <= 1
      return n
   else
      return Fibonacci(n-1) + Fibonacci(n-2)
The recurrence relation for this algorithm is T(n) = T(n-1) + T(n-2) + O(1), which describes the running time of the algorithm in terms of the running time of the two smaller instances of the problem with input sizes n-1 and n-2. The Master Theorem does not apply to this subtract-and-recurse form, but by the substitution method it can be shown that the time complexity of this algorithm is O(2^n), which is very inefficient for large input sizes.
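A direct C translation of the pseudocode above (a minimal sketch):

// Recursive Fibonacci: T(n) = T(n-1) + T(n-2) + O(1), i.e. O(2^n) time
int fibonacci(int n)
{
    if (n <= 1)
        return n; // base cases: fib(0) = 0, fib(1) = 1
    return fibonacci(n - 1) + fibonacci(n - 2);
}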

Part A
Q.no Questions CO PO
1. What is a recurrence relation? C214.1 1,2,3
2. Define the Master Theorem with an example. C214.1 1,2
3. Write the recurrence relation for Merge Sort. C214.1 1,3
Part B
1. Solve the recurrence relation T(n) = 2T(n/2) + O(n) using the Master Theorem and discuss its significance. C214.1 1,2,3,5
Part C
1. Explain different methods for solving recurrence relations, including Substitution, Recursion Tree, and Master Theorem, with examples. C214.1 1,2,3,4,5,11,12

SEARCHING (C214.1, PO-1,2,3,4,5,11,12)


Searching is the process of fetching a specific element in a collection of elements. The
collection can be an array or a linked list. If you find the element in the list, the process
is considered successful, and it returns the location of that element.
The following search strategies are extensively used to find a specific item in a list. However, the algorithm chosen is determined by the list's organization.

1. Linear Search
2. Binary Search
3. Interpolation Search

Linear Search
Linear search, often known as sequential search, is the most basic search technique. In
this type of search, we go through the entire list and try to fetch a match for a single
element. If we find a match, then the address of the matching target element is returned.
On the other hand, if the element is not found, then it returns a NULL value. Following
is a step-by-step approach employed to perform Linear Search Algorithm.

The procedures for implementing linear search are as follows:

Step 1: First, read the search element (target element) in the array.
Step 2: In the second step, compare the search element with the first element in the array.
Step 3: If both are matched, display "Target element is found" and terminate the Linear Search function.
Step 4: If both are not matched, compare the search element with the next element in the array.
Step 5: In this step, repeat steps 3 and 4 until the search (target) element is compared with the last element of the array.
Step 6: If the last element in the list does not match, the Linear Search function will be terminated, and the message "Element is not found" will be displayed.

Algorithm and Pseudocode of Linear Search Algorithm
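The pseudocode listing itself is not reproduced in these notes; a minimal sketch consistent with the steps above:

procedure linear_search(arr, n, target)
   for i = 0 to n - 1
      if arr[i] = target
         return i        // target found at index i
   return -1             // target not found
end procedure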

Example of Linear Search Algorithm


Consider an array of size 7 with elements 13, 9, 21, 15, 39, 19, and 27, with indices starting at 0 and ending at size minus one, i.e. 6.
Search element = 39

Step 1: The searched element 39 is compared to the first element of the array, which is 13.

The match is not found; you now move on to the next element and compare again.
Step 2: Now, search element 39 is compared to the second element of the array, 9.

Step 3: Now, search element 39 is compared with the third element, which is 21.

Again, the elements do not match; you move on to the next element.
Step 4: Next, search element 39 is compared with the fourth element, which is 15.

Step 5: Next, search element 39 is compared with the fifth element, 39.
A perfect match is found, so display that the element is found at index 4.
The Complexity of Linear Search Algorithm


Three different cases arise while analyzing the Linear Search Algorithm; they are mentioned as follows.
1. Best Case
2. Worst Case
3. Average Case
Best Case Complexity
• The element being searched could be found in the first position.
• In this case, the search ends with a single successful comparison.
• Thus, in the best-case scenario, the linear search algorithm performs O(1)
operations.
Worst Case Complexity
• The element being searched may be at the last position in the array or not at all.
• In the first case, the search succeeds in ‘n’ comparisons.
• In the next case, the search fails after ‘n’ comparisons.
• Thus, in the worst-case scenario, the linear search algorithm performs O(n)
operations.
Average Case Complexity
On average, the element to be searched will be somewhere around the middle of the array, so the average case of the Linear Search Algorithm is O(n).
Space Complexity of Linear Search Algorithm
The linear search algorithm takes up no extra space; its space complexity is O(1), since only a constant number of variables is needed regardless of the n elements in the array.
Application of Linear Search Algorithm
The linear search algorithm has the following applications:
• Linear search can be applied to both single-dimensional and multi-dimensional
arrays.
• Linear search is easy to implement and effective when the array contains only a
few elements.
• Linear Search is also efficient when the search is performed to fetch a single element in an unordered list.
Code Implementation of Linear Search Algorithm
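The code listing referenced above is not reproduced in these notes; a minimal C implementation consistent with the steps described (the function name linearSearch and the driver values are illustrative, reusing the example array from this section):

#include <stdio.h>

/* Linear search: returns the index of 'key' in a[0..n-1], or -1 if absent. */
int linearSearch(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* match found at index i */
    return -1;          /* key is not present */
}

int main()
{
    int a[] = {13, 9, 21, 15, 39, 19, 27};
    int n = sizeof(a) / sizeof(a[0]);
    int pos = linearSearch(a, n, 39);
    if (pos != -1)
        printf("Target element is found at index %d\n", pos);
    else
        printf("Element is not found\n");
    return 0;
}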

Binary Search
Binary search is the search technique that works efficiently on sorted lists. Hence, to
search an element into some list using the binary search technique, we must ensure that
the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two halves, and the item is compared with the middle element of the list. If the match is found, then the location of the middle element is returned. Otherwise, we search in one of the halves depending upon the result produced by the comparison.
NOTE: Binary search can be implemented only on sorted array elements. If the list elements are not arranged in a sorted manner, we have to sort them first.

Algorithm
Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search
Step 1: set beg = lower_bound, end = upper_bound, pos = -1
Step 2: repeat steps 3 and 4 while beg <= end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
           set pos = mid
           print pos
           go to step 6
        else if a[mid] > val
           set end = mid - 1
        else
           set beg = mid + 1
        [end of if]
[end of loop]
Step 5: if pos = -1
           print "value is not present in the array"
        [end of if]
Step 6: exit
Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched
   Set lowerBound = 1
   Set upperBound = n
   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist.
      set midPoint = lowerBound + (upperBound - lowerBound) / 2
      if A[midPoint] < x
         set lowerBound = midPoint + 1
      if A[midPoint] > x
         set upperBound = midPoint - 1
      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while
end procedure

Working of Binary search

To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach. Let the elements of the array be -

Let the element to search be K = 56.

We have to use the below formula to calculate the mid of the array -
mid = (beg + end)/2
So, in the given array -
beg = 0
end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid index of the array.
The search proceeds by comparing K with the middle element and repeating on the appropriate half until the element to search is found; the algorithm then returns the index of the matched element.

Binary Search complexity
Now, let's see the time complexity of Binary search in the best case, average case, and worst case. We will also see the space complexity of Binary search.
1. Time Complexity

Case            Time Complexity
Best Case       O(1)
Average Case    O(log n)
Worst Case      O(log n)

Best Case Complexity - In Binary search, the best case occurs when the element to search is found in the first comparison, i.e., when the first middle element itself is the element to be searched. The best-case time complexity of Binary search is O(1).
Average Case Complexity - The average case time complexity of Binary search is O(log n).
Worst Case Complexity - In Binary search, the worst case occurs when we have to keep reducing the search space till it has only one element. The worst-case time complexity of Binary search is O(log n).
2. Space Complexity
The space complexity of the iterative form of binary search is O(1); the recursive form shown below uses O(log n) stack space for the recursive calls.
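A minimal iterative sketch in C (illustrative; the function name binarySearchIter is not from the original notes):

int binarySearchIter(const int a[], int n, int val)
{
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2; /* same as (beg+end)/2 but avoids overflow */
        if (a[mid] == val)
            return mid;        /* found: return the index */
        else if (a[mid] < val)
            beg = mid + 1;     /* search the right half */
        else
            end = mid - 1;     /* search the left half */
    }
    return -1;                 /* not present */
}

Only a constant number of extra variables (beg, end, mid) is used, which is why the iterative version needs O(1) space.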

Implementation of Binary Search

Program: Write a program to implement Binary search in C language.

#include <stdio.h>
int binarySearch(int a[], int beg, int end, int val)
{
    int mid;
    if (end >= beg)
    {
        mid = (beg + end)/2;
        /* if the item to be searched is present at the middle */
        if (a[mid] == val)
        {
            return mid+1;
        }
        /* if the item to be searched is greater than the middle, then it can only be in the right subarray */
        else if (a[mid] < val)
        {
            return binarySearch(a, mid+1, end, val);
        }
        /* if the item to be searched is smaller than the middle, then it can only be in the left subarray */
        else
        {
            return binarySearch(a, beg, mid-1, val);
        }
    }
    return -1;
}
int main() {
    int a[] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
    int val = 40; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = binarySearch(a, 0, n-1, val); // store result
    printf("The elements of the array are - ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is - %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}
Output
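The elements of the array are - 11 14 25 30 40 41 52 57 70
Element to be searched is - 40
Element is present at 5 position of array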

Interpolation Search
Interpolation search is an improved variant of binary search. This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be sorted and uniformly distributed.
Binary search has a huge advantage of time complexity over linear search. Linear search has worst-case complexity of Ο(n) whereas binary search has Ο(log n).

There are cases where the location of target data may be known in advance. For example, in the case of a telephone directory, if we want to search the telephone number of Morphius, linear search and even binary search will seem slow, as we could directly jump to the memory space where the names starting with 'M' are stored.
Position Probing in Interpolation Search
Interpolation search finds a particular item by computing the probe position. Initially, the
probe position is the position of the middle most item of the collection.

If a match occurs, then the index of the item is returned. To split the list into two parts, we use the following method −

mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])

where −
A = list
Lo = lowest index of the list
Hi = highest index of the list
A[n] = value stored at index n in the list
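For instance, with the sorted list used in the program below, A = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44} and X = 33:
mid = 0 + ((9 - 0) / (44 - 10)) * (33 - 10) = (9/34) * 23 ≈ 6.08, which truncates to 6, and A[6] = 33, so the target is found in a single probe.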

If the middle item is smaller than the target item, then the probe position is recalculated in the sub-array to the right of the middle item. Otherwise, the item is searched in the sub-array to the left of the middle item. This process continues on the sub-array until the size of the sub-array reduces to zero.
The runtime complexity of the interpolation search algorithm is Ο(log(log n)), as compared to the Ο(log n) of binary search, in favorable situations.
Algorithm
As it is an improvement over the existing binary search algorithm, we list the steps to search the 'target' data value index, using position probing −
Step 1 − Start searching data from the middle of the list.
Step 2 − If it is a match, return the index of the item, and exit.
Step 3 − If it is not a match, compute the probe position.
Step 4 − Divide the list using the probing formula and find the new mid.
Step 5 − If data is greater than middle, search in the higher sub-list.
Step 6 − If data is smaller than middle, search in the lower sub-list.
Step 7 − Repeat until match.

Pseudocode
A → Array list
N → Size of A
X → Target Value

Procedure Interpolation_Search()
   Set Lo → 0
   Set Mid → -1
   Set Hi → N-1

   While X does not match
      if Lo equals to Hi OR A[Lo] equals to A[Hi]
         EXIT: Failure, Target not found
      end if

      Set Mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])

      if A[Mid] = X
         EXIT: Success, Target found at Mid
      else
         if A[Mid] < X
            Set Lo to Mid+1
         else if A[Mid] > X
            Set Hi to Mid-1
         end if
      end if
   End While
End Procedure
Implementation of interpolation search in C

#include <stdio.h>
#define MAX 10

// array of items on which interpolation search will be conducted
int list[MAX] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44 };

int find(int data) {
   int lo = 0;
   int hi = MAX - 1;
   int mid = -1;
   int comparisons = 1;
   int index = -1;

   while (lo <= hi) {
      printf("\nComparison %d\n", comparisons);
      printf("lo : %d, list[%d] = %d\n", lo, lo, list[lo]);
      printf("hi : %d, list[%d] = %d\n", hi, hi, list[hi]);

      comparisons++;

      // probe the mid point
      mid = lo + (((double)(hi - lo) / (list[hi] - list[lo])) * (data - list[lo]));
      printf("mid = %d\n", mid);

      // data found
      if (list[mid] == data) {
         index = mid;
         break;
      } else {
         if (list[mid] < data) {
            // if data is larger, data is in the upper half
            lo = mid + 1;
         } else {
            // if data is smaller, data is in the lower half
            hi = mid - 1;
         }
      }
   }

   printf("\nTotal comparisons made: %d", --comparisons);
   return index;
}

int main() {
   // find location of 33
   int location = find(33);

   // if element was found
   if (location != -1)
      printf("\nElement found at location: %d", (location + 1));
   else
      printf("Element not found.");

   return 0;
}
If we compile and run the above program, it will produce the following result −
Output
Comparison 1
lo : 0, list[0] = 10
hi : 9, list[9] = 44
mid = 6

Total comparisons made: 1
Element found at location: 7

Time Complexity
• Best case - O(1)
The best case occurs when the target is found exactly at the first expected position computed using the formula. As we only perform one comparison, the time complexity is O(1).
• Worst case - O(n)
The worst case occurs when the given data set is not uniformly distributed (for example, exponentially distributed).
• Average case - O(log(log n))
If the data set is sorted and uniformly distributed, then it takes O(log(log n)) time, as on average log(log n) comparisons are made.

Space Complexity
O(1) as no extra space is required.
Part A
Q.no Questions CO PO
1. What is searching in data structures? C214.1 1
2. Define Linear Search. C214.1 1,2
3. Write the time complexity of Binary Search in the worst case. C214.1 1,3
4. Compare Linear Search and Binary Search. C214.1 1,2,3
5. What is the prerequisite for applying Binary Search? C214.1 1,3
6. What is the best-case time complexity of Linear Search? C214.1 1,2
7. Define Interpolation Search. C214.1 1,3
8. What is the space complexity of Binary Search? C214.1 1,3
9. When should we use Linear Search over Binary Search? C214.1 1,4
10. What is the worst-case time complexity of Interpolation Search? C214.1 1,3
Part B
1. Explain the Linear Search algorithm with an example. Write a program for Linear Search in C. C214.1 1,2,3,5,11
2. Explain the Binary Search algorithm with step-by-step execution and a program in C. C214.1 1,3,5,11
3. Compare the time complexity of Linear Search, Binary Search, and Interpolation Search. C214.1 1,2,3,4,11
4. Explain the Interpolation Search algorithm with an example. Write its implementation in C. C214.1 1,3,5,11
Part C
1. Compare Linear Search, Binary Search, and Interpolation Search with a detailed complexity analysis. C214.1 1,2,3,4,5,11,12
2. Implement Binary Search and Linear Search. Compare their time complexities experimentally with different input sizes and plot a graph. C214.1 1,2,3,4,5,11
3. How does Interpolation Search differ from Binary Search? Analyze their best, worst, and average cases with examples. C214.1 1,2,3,4,5,11,12

PATTERN SEARCH (C214.1, PO-1,2,3,4,5,11,12)

Pattern searching algorithms are used to find a pattern or substring within another, bigger string. There are different algorithms, and the main goal in designing these types of algorithms is to reduce the time complexity. The traditional approach may take a lot of time to complete the pattern searching task for a longer text.
Here we will see different algorithms to get better performance of pattern matching. In this section we are going to cover:
• Aho-Corasick Algorithm
• Anagram Pattern Search
• Bad Character Heuristic
• Boyer Moore Algorithm
• Efficient Construction of Finite Automata
• kasai’s Algorithm
• Knuth-Morris-Pratt Algorithm
• Manacher’s Algorithm
• Naive Pattern Searching
• Rabin-Karp Algorithm
• Suffix Array
• Trie of all Suffixes
• Z Algorithm
Naive Pattern Searching
Naïve pattern searching is the simplest method among the pattern searching algorithms. It checks each position of the main string against the pattern. This algorithm is helpful for smaller texts. It does not need any pre-processing phase, finds the substring in a single pass over the string, and does not occupy extra space to perform the operation.
The time complexity of the Naïve Pattern Search method is O(m*n), where m is the size of the pattern and n is the size of the main string.

Input and Output
Input:
Main String: "ABAAABCDBBABCDDEBCABC", pattern: "ABC"
Output:
Pattern found at position: 4
Pattern found at position: 10
Pattern found at position: 18

Algorithm
naive_algorithm(pattern, text)
Input − the text and the pattern
Output − locations where the pattern is present in the text
Start
   pat_len := pattern size
   str_len := string size
   for i := 0 to (str_len - pat_len), do
      for j := 0 to pat_len, do
         if text[i+j] ≠ pattern[j], then
            break
      if j == pat_len, then
         display the position i, as the pattern is found there
End

Implementation in C
#include <stdio.h>
#include <string.h>

int main()
{
   char txt[] = "tutorialsPointisthebestplatformforprogrammers";
   char pat[] = "a";
   int M = strlen(pat);
   int N = strlen(txt);
   for (int i = 0; i <= N - M; i++) {
      int j;
      for (j = 0; j < M; j++)     // try to match the pattern at shift i
         if (txt[i + j] != pat[j])
            break;
      if (j == M)                 // the whole pattern matched at shift i
         printf("Pattern matches at index %d\n", i);
   }
   return 0;
}
Output
Pattern matches at index 6
Pattern matches at index 25
Pattern matches at index 39

Part A
Q.no Questions CO PO
1. What is the Rabin-Karp Algorithm? C214.1 1,2,3
2. Define the concept of hashing in Rabin-Karp. C214.1 1,2
3. What is the worst-case time complexity of the Rabin-Karp Algorithm? C214.1 1,3
4. How does KMP improve over Naïve Pattern Searching? C214.1 1,2,3
Part B
1. Explain the Rabin-Karp Algorithm with an example. Write a C program for Rabin-Karp pattern matching. C214.1 1,2,3,5
2. Explain the Knuth-Morris-Pratt (KMP) Algorithm with an example. Write a C program for KMP pattern searching. C214.1 1,2,3,4,5
Part C
1. Compare Naïve Pattern Search, Rabin-Karp, and KMP Algorithms in terms of time complexity and practical applications. Implement all three in C and analyze their efficiency. C214.1 1,2,3,4,5,11,12

SORTING: INSERTION SORT (C214.1, PO-1,2,3,4,5,11,12)

Insertion sort works similarly to the sorting of playing cards in your hands. It is assumed that the first card is already sorted in the card game, and then we select an unsorted card. If the selected unsorted card is greater than the first card, it will be placed on the right side; otherwise, it will be placed on the left side. Similarly, all unsorted cards are taken and put in their exact place.

The same approach is applied in insertion sort. The idea behind insertion sort is to take one element at a time and insert it into its correct position within the already-sorted part of the array. Although it is simple to use, it is not appropriate for large data sets, as the time complexity of insertion sort in the average case and worst case is O(n²), where n is the number of items. Insertion sort is less efficient than other sorting algorithms like heap sort, quick sort, merge sort, etc.

Algorithm
The simple steps for achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted.
Step 2 - Pick the next element, and store it separately in a key.
Step 3 - Now, compare the key with all elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the current element, then move to the next element. Else, shift greater elements in the array towards the right.
Step 5 - Insert the value.
Step 6 - Repeat until the array is sorted.

Working of Insertion sort Algorithm
Now, let's see the working of the insertion sort algorithm.
To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be easier to understand insertion sort via an example.

Let the elements of array are -


Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So,
for now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with swapping, insertion sort will also check 25 against all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements
that are 31 and 8.

Both 31 and 8 are not sorted. So, swap them.


After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that
are 31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.


Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Insertion sort complexity


1. Time Complexity
Case            Time Complexity
Best Case       O(n)
Average Case    O(n²)
Worst Case      O(n²)

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order, i.e. suppose you have to sort the array elements in ascending order, but the elements are in descending order. The worst-case time complexity of insertion sort is O(n²).
2. Space Complexity
Space Complexity    O(1)
Stable              Yes
o The space complexity of insertion sort is O(1), because only one extra variable is required to hold the key during shifting.
Implementation of insertion sort
Program: Write a program to implement insertion sort in C language.

#include <stdio.h>

void insert(int a[], int n) /* function to sort an array with insertion sort */
{
    int i, j, temp;
    for (i = 1; i < n; i++) {
        temp = a[i];
        j = i - 1;
        while (j >= 0 && temp <= a[j]) /* move the elements greater than temp one position ahead of their current position */
        {
            a[j+1] = a[j];
            j = j - 1;
        }
        a[j+1] = temp;
    }
}

void printArr(int a[], int n) /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    insert(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);

    return 0;
}
Output:
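Before sorting array elements are -
12 31 25 8 32 17
After sorting array elements are -
8 12 17 25 31 32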
Heap Sort

Heap sort processes the elements by creating a min-heap or max-heap using the elements of the given array. A min-heap or max-heap represents an ordering of the array in which the root element is the minimum or maximum element of the array.

Heap sort basically performs two main operations repeatedly -

o Build a heap H, using the elements of the array.
o Repeatedly delete the root element of the heap formed in the first phase.
A heap is a complete binary tree, and a binary tree is a tree in which each node can have at most two children. A complete binary tree is a binary tree in which all the levels except the last level, i.e., the leaf level, are completely filled, and all the nodes are left-justified.
Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to
eliminate the elements one by one from the heap part of the list, and then insert them into
the sorted part of the list.
Algorithm
HeapSort(arr)
   BuildMaxHeap(arr)
   for i = length(arr) to 2
      swap arr[1] with arr[i]
      heap_size[arr] = heap_size[arr] - 1
      MaxHeapify(arr, 1)
End

BuildMaxHeap(arr)
   heap_size(arr) = length(arr)
   for i = length(arr)/2 to 1
      MaxHeapify(arr, i)
End

MaxHeapify(arr, i)
   L = left(i)
   R = right(i)
   if L ≤ heap_size[arr] and arr[L] > arr[i]
      largest = L
   else
      largest = i
   if R ≤ heap_size[arr] and arr[R] > arr[largest]
      largest = R
   if largest != i
      swap arr[i] with arr[largest]
      MaxHeapify(arr, largest)
End
Working of Heap sort Algorithm
In heap sort, there are basically two phases involved in the sorting of elements. They are as follows -
o The first step includes the creation of a heap by adjusting the elements of the array.
o After the creation of the heap, remove the root element of the heap repeatedly by shifting it to the end of the array, and then restore the heap structure with the remaining elements.

First, we have to construct a heap from the given array and convert it into max heap.

After converting the given heap into max heap, the array elements are –

Next, we have to delete the root element (89) from the max heap. To delete this node, we
have to swap it with the last node, i.e. (11). After deleting the root element, we again
have to heapify it to convert it into max heap.
After swapping the array element 89 with 11, and converting the heap into max-heap, the
elements of array are -

In the next step, again, we have to delete the root element (81) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (54). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 81 with 54 and converting the heap into max-heap, the
elements of array are -

In the next step, we have to delete the root element (76) from the max heap again. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 76 with 9 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (54) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (14). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 54 with 14 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (22) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (11). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 22 with 11 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (14) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 14 with 9 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (11) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 11 with 9, the elements of array are -
Now, heap has only one element left. After deleting it, heap will be empty.

After completion of sorting, the array elements are -

Time complexity of Heap sort in the best case, average case, and worst case
1. Time Complexity
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case time complexity of heap sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of heap sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order, i.e. suppose you have to sort the array elements in ascending order, but the elements are in descending order. The worst-case time complexity of heap sort is O(n log n).
The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and worst case), because the height of a complete binary tree having n elements is log n: each of the n root deletions triggers a heapify that costs O(log n).
2. Space Complexity

Space Complexity    O(1)
Stable              No

o The space complexity of Heap sort is O(1).

Implementation of Heapsort
Program: Write a program to implement heap sort in C language.

#include <stdio.h>
/* function to heapify a subtree. Here 'i' is the
   index of the root node in array a[], and 'n' is the size of the heap. */
void heapify(int a[], int n, int i)
{
    int largest = i; // initialize largest as root
    int left = 2 * i + 1; // left child
    int right = 2 * i + 2; // right child
    // if left child is larger than root
    if (left < n && a[left] > a[largest])
        largest = left;
    // if right child is larger than the largest so far
    if (right < n && a[right] > a[largest])
        largest = right;
    // if root is not largest
    if (largest != i) {
        // swap a[i] with a[largest]
        int temp = a[i];
        a[i] = a[largest];
        a[largest] = temp;
        heapify(a, n, largest);
    }
}
/* function to implement the heap sort */
void heapSort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);
    // one by one extract an element from the heap
    for (int i = n - 1; i >= 0; i--) {
        /* move current root element to the end */
        // swap a[0] with a[i]
        int temp = a[0];
        a[0] = a[i];
        a[i] = temp;

        heapify(a, i, 0);
    }
}
/* function to print the array elements */
void printArr(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
        printf("%d ", arr[i]);
}
int main()
{
    int a[] = {48, 10, 23, 43, 28, 26, 1};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    heapSort(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Output
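Before sorting array elements are -
48 10 23 43 28 26 1
After sorting array elements are -
1 10 23 26 28 43 48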

Part A
Q.no Questions CO PO
1. What is Insertion Sort? C214.1 1
2. Define the best and worst-case time complexity of Insertion Sort. C214.1 1,2
3. What is Heap Sort? C214.1 1
4. How does Heap Sort work? C214.1 1,2,3
5. Compare Insertion Sort and Heap Sort in terms of efficiency. C214.1 1,3
6. What are the advantages of Heap Sort over Insertion Sort? C214.1 1,5
7. Explain why Insertion Sort is better for small datasets. C214.1 1,2
8. Why is Heap Sort an in-place sorting algorithm? C214.1 1,2
9. What is the space complexity of Insertion Sort? C214.1 1,2
10. State an application of Heap Sort in real-world scenarios. C214.1 1,5
Part B
1. Explain the Insertion Sort algorithm with an example. Write a C program for Insertion Sort. C214.1 1,2,3,5,11
2. Explain Heap Sort with a step-by-step example. Write a C program for Heap Sort. C214.1 1,3,4,5
3. Compare the working of Insertion Sort and Heap Sort with respect to time complexity and efficiency in different scenarios. C214.1 1,2,3,4,11
Part C
1. Implement Insertion Sort and Heap Sort in C. Compare their execution time for different input sizes and analyze the results. C214.1 1,2,3,4,5,11,12
2. Discuss the advantages and disadvantages of Insertion Sort and Heap Sort in terms of memory usage, performance, and real-world applications. C214.1 1,2,3,4,5,11