
DATA STRUCTURES

UNIT I
Introduction to Linear Data Structures: Definition and importance of linear data structures,
Abstract data types (ADTs) and their implementation, Overview of time and space complexity analysis
for linear data structures. Searching Techniques: Linear & Binary Search, Sorting Techniques: Bubble
sort, Selection sort, Insertion Sort.

Introduction to Linear Data Structures:

Data Structures: A data structure is a fundamental concept in computer science. It provides a way to organize, manage, and store data efficiently.
Data structures are an integral part of computer programs and are used to arrange data in memory.

Classification of Data Structure:


Data structures have many different uses in daily life, and different data structures are used to solve different mathematical and logical problems.
We can classify Data Structures into two categories:
1. Primitive Data Structures
2. Non-Primitive Data Structures
Primitive Data Structures
1. Primitive Data Structures consist of numbers and characters and come built into the programming language.
2. These data structures can be manipulated or operated on directly by machine-level instructions.
3. Basic data types like Integer, Float, Character, and Boolean come under the Primitive Data Structures.
These data types are also called simple data types, as they contain values that can't be divided further.

Non-Primitive Data Structures
1. Non-Primitive Data Structures are those data structures derived from Primitive Data
Structures.
2. These data structures can't be manipulated or operated directly by machine-level instructions.
3. These data structures focus on forming a set of data elements that is either homogeneous (same
data type) or heterogeneous (different data types).
4. Based on the structure and arrangement of data, we can divide these data structures into two
sub- categories -
a. Linear Data Structures
b. Non-Linear Data Structures

Major Operations we can perform on Data Structures:
The major or the common operations that can be performed on the data structures are:
o Searching: We can search for any element in a data structure.
o Sorting: We can sort the elements of a data structure either in an ascending or
descending order.
o Insertion: We can also insert the new element in a data structure.
o Updation: We can also update the element, i.e., we can replace the element
with another element.
o Deletion: We can also perform the delete operation to remove the element
from the data structure.

Advantages of Data structures:
The following are the advantages of a data structure:
o Efficiency: If the choice of a data structure for implementing a particular ADT is
proper, it makes the program very efficient in terms of time and space.
o Reusability: Data structures provide reusability, meaning that multiple client
programs can use the same data structure.
o Abstraction: The data structure specified by an ADT also provides the level of
abstraction. The client cannot see the internal working of the data structure, so it
does not have to worry about the implementation part. The client can only see the
interface.

1) Linear data structure: A data structure in which the data elements are arranged sequentially (linearly), with each element attached to its previous and next adjacent elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.

Based on memory allocation, the Linear Data Structures are further classified into two types:
 Static data structure: Static data structure has a fixed memory size. It is easy to access
the elements in a static data structure.
An example of this data structure is an array.
 Dynamic data structure: In a dynamic data structure, the size is not fixed; it can be updated during runtime. This makes it efficient with respect to the memory (space) complexity of the code.
Examples of this data structure are queues, stacks, linked lists, etc.
2) Non-linear data structure: A data structure in which the data elements are not placed sequentially or linearly is called a non-linear data structure.
In a non-linear data structure, we can’t traverse all the elements in a single run.
Examples of non-linear data structures are trees and graphs.

Types of Linear Data Structures:


The following is the list of Linear Data Structures that we generally use:

1. Array:
o An array is a collection of items stored at contiguous memory locations.
o Elements in an array are of the same type.
o Accessing elements is efficient because you can calculate their position by adding an offset
to the base memory location.
o Example: With zero-based indexing, the fourth element is accessed using the index notation arr[3] (see the short sketch below).
o Arrays have a fixed size.
o Types of arrays include One-Dimensional Arrays (1D Arrays), Two-Dimensional Arrays (2D Arrays), and Multi-Dimensional Arrays.
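As a quick illustration of index-based access (a minimal sketch; the array contents are just sample values):

#include <stdio.h>

int main(void)
{
    int arr[5] = {10, 20, 30, 40, 50};   /* five ints stored contiguously */
    /* with zero-based indexing, the fourth element is arr[3] */
    printf("Fourth element = %d\n", arr[3]);   /* prints 40 */
    /* the address of arr[i] is the base address plus i * sizeof(int) */
    return 0;
}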

Some Applications of Array:

a. We can store a list of data elements belonging to the same data type.
b. Array acts as an auxiliary storage for other data structures.
c. An array can also store the data elements of a binary tree with a fixed number of nodes.
d. Array also acts as storage of matrices.
2. Queue:
o A queue is a linear structure that follows the First In First Out (FIFO) order.
o Elements are inserted at the rear and removed from the front.
o Think of it like a queue of consumers waiting for a resource—the first consumer served is the
one who arrived first.
o Queues can be implemented using linked lists or arrays.
o Types of queues include circular queues, priority queues, and double-ended queues (deques).

3. Stack:
o A stack is a linear data structure where elements are inserted and deleted only from one
end (the top).
o It follows the Last In First Out (LIFO) principle.
o The last element inserted is the first to be removed.
o Stacks can be implemented using linked lists or arrays.
o Common operations are push (insertion) and pop (deletion).

4. Linked List:
o A linked list is a collection of nodes, where each node contains data and a reference to the
next node.
o Linked lists allow dynamic memory allocation.
o Types of linked lists include singly linked lists, doubly linked lists, and circular linked lists.
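A minimal sketch of a singly linked list node in C (the struct name and the sample values are illustrative, not taken from a specific library):

#include <stdio.h>
#include <stdlib.h>

struct Node
{
    int data;            /* the value stored in this node */
    struct Node* next;   /* reference (pointer) to the next node */
};

int main(void)
{
    /* build a two-node list: 10 -> 20 -> NULL */
    struct Node* head = (struct Node*)malloc(sizeof(struct Node));
    head->data = 10;
    head->next = (struct Node*)malloc(sizeof(struct Node));
    head->next->data = 20;
    head->next->next = NULL;

    /* traverse and print each node's data */
    for (struct Node* p = head; p != NULL; p = p->next)
        printf("%d ", p->data);

    /* release the dynamically allocated nodes */
    free(head->next);
    free(head);
    return 0;
}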

Remember, the choice between these data structures depends on the specific use case and the desired
behaviour. For example:

 Use a queue when you need elements in the order they were added.
 Use a stack when you want to reverse the order of elements.
 Use an array when you need a fixed-size collection of elements.
 Use a linked list when you require dynamic memory allocation and flexibility.

Importance of linear data structures:


Linear data structures play a crucial role in computer science, as they organize and manipulate data in a sequential (linear) fashion. Here are some key points:
1. Sequential Organization:
o In linear data structures, elements are arranged sequentially, one after the other.
o Each element has a unique predecessor (except for the first element) and a unique
successor (except for the last element).
o Examples of linear data structures include:
 Arrays: A collection of elements stored in contiguous memory locations.
 Linked Lists: A collection of nodes, each containing an element and a reference to
the next node.
 Stacks: A collection of elements with Last-In-First-Out (LIFO) order.
 Queues: A collection of elements with First-In-First-Out (FIFO) order.
2. Advantages of Linear Data Structures:
o Efficient Data Access:
 Elements can be easily accessed by their position in the sequence.
 For example, arrays offer constant-time access to elements using their index.
o Dynamic Sizing:
 Linear data structures can dynamically adjust their size as elements are added or
removed.
 Unlike arrays, structures like linked lists, stacks, and queues can grow or shrink
dynamically.
o Ease of Implementation:
 Linear data structures can be easily implemented using arrays or linked lists.
3. Use Cases:
o Arrays are widely used for their constant-time access and fixed-size storage.
o Linked lists allow efficient insertion and deletion operations.
o Stacks are essential for managing function calls, undo functionality, and expression
evaluation.
o Queues find applications in scheduling, task management, and breadth-first search
algorithms.

Applications of Linear Data Structures:


Linear data structures are used in various real-world applications. Here are a few examples:
 Database Systems: Linear data structures such as arrays and linked lists are fundamental
components in database systems. Arrays are used for efficient indexing and sequential access,
while linked lists facilitate dynamic storage and efficient insertion and deletion.
 Text Editors: Linear data structures are used in text editors to implement features like undo
and redo functionality. A stack data structure is commonly used to store the operations
performed, allowing users to revert or redo their actions.
 Web Browsers: Linear data structures are utilized in web browsers to store and manage
browser history. Linked lists or arrays are often employed to maintain a chronological list of
visited web pages, enabling users to navigate backward or forward.

 Task Management: Linear data structures like queues are applied in task management
systems. Queues allow tasks to be added in a First-In-First-Out (FIFO) manner, ensuring that
the oldest task is processed first.
Non-Linear Data Structures:
Non-Linear Data Structures are data structures in which the data elements are not arranged in sequential order. Insertion and removal of data are not feasible in a linear manner, and there exists a hierarchical relationship between the individual data items.
Types of Non-Linear Data Structures
The following is the list of Non-Linear Data Structures that we generally use:
1. Trees
A Tree is a Non-Linear Data Structure and a hierarchy containing a collection of nodes such that
each node of the tree stores a value and a list of references to other nodes (the "children").
The Tree data structure is a specialized method to arrange and collect data in the computer to
be utilized more effectively. It contains a central node, structural nodes, and sub-nodes connected via
edges. We can also say that the tree data structure consists of roots, branches, and leaves connected.

Tree

Graphs:
A graph is a non-linear data structure that consists of vertices (also
known as nodes) and edges. Here are the key components:
1. Vertices (V): These are the fundamental units of the graph. Each vertex
represents an entity or an object. Vertices can be labelled or unlabelled.
2. Edges (E): Edges connect pairs of vertices. They represent relationships or
connections between nodes. Edges can be directed (with a specific direction)
or undirected (bidirectional).

Abstract data types (ADTs) and their implementation
Abstract Data Type (ADT):
Before learning about Abstract Data Types, recall the built-in data types we have already learned. Data types such as int, float, double, long, etc. are in-built data types, and we can perform basic operations on them such as addition, subtraction, division, multiplication, etc.
There may be situations where we need operations for a user-defined data type; these operations have to be defined by us, as and when we require them.
We can create such data structures along with their operations. These data structures are not in-built and are known as Abstract Data Types (ADTs).

Abstract Data type (ADT) Definition: ADT is a type (or class) for objects
whose behaviour is defined by a set of values and a set of operations.
Or

Set of values and set of operations for a specific behaviour is called the ADT
(Abstract Data Type)
An ADT only specifies what operations are to be performed, not how these operations will be implemented. It does not specify how data will be organized in memory or which algorithms will be used to implement the operations.
ADTs can be implemented using various data structures. Commonly used ADTs are the List ADT, Stack ADT, and Queue ADT.

There are different ways to implement a list ADT:

 Array Implementation:
o Use an array to store elements in contiguous memory locations.
o Define an array of a fixed maximum length.
o Allocate storage for all elements before runtime.
o Index elements within the array to simulate the list.
o This approach is straightforward but has limitations on dynamic resizing.
 Linked List Implementation:
o Use singly linked lists or doubly linked lists.
o Each node contains the data and a pointer to the next (and possibly previous) node.

o Allows dynamic resizing and efficient insertions/deletions.
o Requires additional memory for pointers.
 Hash Table Implementation (for Sets):
o Create an array of “buckets” (lists).
o To add an element, hash it to find the appropriate bucket and insert it.
o To remove an element, locate the bucket and remove it if present.
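A minimal sketch of the bucket idea for a set of integers, using chaining; the hash function, bucket count, and names below are illustrative assumptions, not a standard library API:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define BUCKETS 8   /* illustrative bucket count */

struct SetNode { int key; struct SetNode* next; };
struct HashSet { struct SetNode* bucket[BUCKETS]; };

/* simple illustrative hash: map the key to one of the buckets */
static int hash(int key) { return ((key % BUCKETS) + BUCKETS) % BUCKETS; }

/* add key to the appropriate bucket if it is not already present */
void setAdd(struct HashSet* s, int key)
{
    int b = hash(key);
    for (struct SetNode* p = s->bucket[b]; p != NULL; p = p->next)
        if (p->key == key) return;          /* already in the set */
    struct SetNode* n = (struct SetNode*)malloc(sizeof(struct SetNode));
    n->key = key;
    n->next = s->bucket[b];                 /* insert at the head of the bucket list */
    s->bucket[b] = n;
}

/* check whether key is present by scanning only its bucket */
bool setContains(struct HashSet* s, int key)
{
    for (struct SetNode* p = s->bucket[hash(key)]; p != NULL; p = p->next)
        if (p->key == key) return true;
    return false;
}

int main(void)
{
    struct HashSet s = { {NULL} };          /* all buckets start empty */
    setAdd(&s, 15);
    setAdd(&s, 7);
    printf("%d %d\n", setContains(&s, 15), setContains(&s, 8));   /* prints 1 0 */
    return 0;
}

Removal works the same way: locate the bucket with the hash function and unlink the node if it is present.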

1. List ADT:
 A list typically stores data elements in a linear order.
 The head structure contains information like the count, pointers, and an address
of a compare function (used for data comparison).
 Each data node points to a data structure and has a self-referential pointer to the
next node in the list.

A list ADT encapsulates a collection of data elements together with operations to access, insert, remove, and replace them.
A list ADT can be implemented using different underlying structures, such as arrays, linked lists, or dynamic arrays. Some of the common operations associated with a list ADT are:

 get (index) – Return an element from the list at a given index.


 insert (index, element) – Insert an element at a given index of the list.
 remove (index) – Remove the element at a given index from the list.
 replace (index, element) – Replace an element at a given index by another element.
 size ( ) – Return the number of elements in the list.
 isEmpty ( ) – Return true if the list is empty, otherwise return false.
 isFull( ) – Return true if the list is full, otherwise return false.

Example: Array Implementation


Here’s a simple example of implementing a list ADT using an array:
First we create a structure with a pointer member, a size member, and a length member. Then we allocate memory with the malloc( ) function. Based on the size of the array, we read the elements into the array by accessing the members of the structure.
#include <stdio.h>
#include <stdlib.h>
struct Array
{
    int* A;       /* pointer to the dynamically allocated block */
    int size;     /* capacity of the array */
    int length;   /* number of elements currently stored */
};
int main( )
{
    struct Array arr;
    int i;
    printf("Enter Size of an Array: ");
    scanf("%d", &arr.size);
    arr.A = (int*)malloc(sizeof(int) * arr.size);
    arr.length = 0;
    printf("Enter Number of Elements: ");
    scanf("%d", &arr.length);    /* should not exceed arr.size */
    printf("Enter All Elements: \n");
    for (i = 0; i < arr.length; i++)
    {
        scanf("%d", &arr.A[i]);
    }
    printf("Elements from the array:");
    for (i = 0; i < arr.length; i++)
    {
        printf("\n%d", arr.A[i]);
    }
    free(arr.A);
    return 0;
}

1) Array as ADT (Abstract Data Type).

Operations on Arrays:
We can perform different operations on arrays, such as:
1) Display( ),
2) Append(n), and
3) Insert(index, n)
Example:
First, we create an array of size 10 and length 0, as there is no element in the array yet.
Then, we fill this array with some elements, i.e., 8, 3, 7, 12, 6, 9.

Now, we can develop all 3 functions one by one as below.


Display Operation on Array –
This function displays each and every element on the screen. We visit every element present in the array using a for loop.

Now, our next operation is Append( ).

Add(n) / Append(n) Operation in Array –
Append(n) adds the given element to the end of the array. In our example array, we want to append 10 after the last element, i.e., we assign arr[length] = n. Since array indexing starts from 0, length is the position of the last element + 1. The code is shown below.

Insert (index, n) Operation in Array–


In this function, we will pass an element and an index position where we want to insert any particular
element in our array. For this, we have to check if our array has space for that element or not. So first
we check for if (size > length), in our case, yes, so then we will perform shifting of elements. To insert
at any particular index, first, we have to shift all the elements to the right side so that space will be
created at the index where our new element needs to be inserted. Let's take the example of Insert(4, 14).
First we shift 10 from index 6 to 7, then 9 from index 5 to 6, and then 6 from index 4 to 5.
Finally, all the elements to the right have been shifted, and we insert the new element at index 4.

Code:
C program to implement the List ADT by using Arrays:
#include <stdio.h>
#include <stdlib.h>
struct Array
{
    int* A;
    int size;
    int length;
};
void Display(struct Array arr)
{
    printf("\nElements are\n");
    for (int i = 0; i < arr.length; i++)
        printf("%d ", arr.A[i]);
}
void Append(struct Array* arr, int x)
{
    if (arr->length < arr->size)          /* append only if there is free space */
        arr->A[arr->length++] = x;
}
void Insert(struct Array* arr, int index, int x)
{
    if (index >= 0 && index <= arr->length && arr->length < arr->size)
    {
        for (int i = arr->length; i > index; i--)   /* shift elements to the right */
        {
            arr->A[i] = arr->A[i - 1];
        }
        arr->A[index] = x;
        arr->length++;
    }
}
int main( )
{
    struct Array arr;
    printf("Enter Size of an Array: ");
    scanf("%d", &arr.size);
    arr.A = (int*)malloc(sizeof(int) * arr.size);
    arr.length = 0;
    printf("Enter Number of Elements: ");
    scanf("%d", &arr.length);
    printf("Enter All Elements: \n");
    for (int i = 0; i < arr.length; i++)
    {
        scanf("%d", &arr.A[i]);
    }
    printf("Elements from the array:");
    for (int i = 0; i < arr.length; i++)
    {
        printf("\n%d", arr.A[i]);
    }
    Append(&arr, 10);
    Insert(&arr, 0, 12);
    Display(arr);
    free(arr.A);
    return 0;
}
2. Stack ADT
A stack is a linear data structure that allows data to be accessed from the top only. Its two fundamental operations are push (to insert data at the top of the stack) and pop (to remove data from the top of the stack). A small array-based sketch is given after the list of operations below.

Some of the most essential operations defined in Stack ADT are listed below.
o push( ): When we insert an element in a stack then the operation is known
as a push. If the stack is full then the overflow condition occurs.
o pop( ): When we delete an element from the stack, the operation is known
as a pop. If the stack is empty means that no element exists in the stack,
this state is known as an underflow state.
o isEmpty( ): It determines whether the stack is empty or not.
o isFull( ): It determines whether the stack is full or not.
o peek( ): It returns the element at the top of the stack without removing it.
o count( ): It returns the total number of elements available in a stack.
o change( ): It changes the element at the given position.
o display( ): It prints all the elements available in the stack.
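A minimal array-based sketch of the core stack operations in C (the capacity, names, and error handling are illustrative assumptions):

#include <stdio.h>
#define MAX 100          /* illustrative fixed capacity */

int stack[MAX];
int top = -1;            /* -1 means the stack is empty */

void push(int x)
{
    if (top == MAX - 1) { printf("Overflow\n"); return; }   /* stack full */
    stack[++top] = x;
}

int pop(void)
{
    if (top == -1) { printf("Underflow\n"); return -1; }    /* stack empty */
    return stack[top--];
}

int peek(void) { return (top == -1) ? -1 : stack[top]; }    /* top element, not removed */

int main(void)
{
    push(10); push(20); push(30);
    printf("%d\n", pop());    /* prints 30 - last in, first out */
    printf("%d\n", peek());   /* prints 20 */
    return 0;
}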
3. Queue ADT
A queue is a linear data structure that allows data to be accessed from both ends. There
are two main operations in the queue: Insertion: this operation inserts data to the back
of the queue. And Deletion: this operation is used to remove data from the front of the
queue.
1. A queue can be defined as an ordered list in which insert operations are performed at one end, called REAR, and delete operations are performed at the other end, called FRONT.
2. A queue is referred to as a First In First Out (FIFO) list.
3. For example, people waiting in line for a rail ticket form a queue.

Some of the most essential operations defined in Queue ADT are listed below; a small array-based sketch follows the list.
o Enqueue: The Enqueue operation is used to insert the element at the rear end of
the queue. It returns void.
o Dequeue: It performs the deletion from the front-end of the queue. It also returns
the element which has been removed from the front-end. It returns an integer
value.
o Peek: This is the third operation that returns the element, which is pointed by the
front pointer in the queue but does not delete it.
o Queue overflow (isfull): It shows the overflow condition when the queue is
completely full.
o Queue underflow (isempty): It shows the underflow condition when the Queue
is empty, i.e., no elements are in the Queue.
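A minimal array-based sketch of enqueue and dequeue in C (a simple linear queue with illustrative names and capacity; a circular queue would reuse the slots freed at the front):

#include <stdio.h>
#define MAX 100                 /* illustrative fixed capacity */

int queue[MAX];
int front = 0, rear = -1;       /* the queue is empty when rear < front */

void enqueue(int x)
{
    if (rear == MAX - 1) { printf("Queue overflow\n"); return; }
    queue[++rear] = x;          /* insert at the rear end */
}

int dequeue(void)
{
    if (rear < front) { printf("Queue underflow\n"); return -1; }
    return queue[front++];      /* remove from the front end */
}

int main(void)
{
    enqueue(1); enqueue(2); enqueue(3);
    printf("%d ", dequeue());   /* prints 1 - first in, first out */
    printf("%d\n", dequeue());  /* prints 2 */
    return 0;
}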
Advantages of ADT in Data Structures
The advantages of ADT in Data Structures are:
 Provides abstraction, which simplifies the complexity of the data structure and
allows users to focus on the functionality.
 Enhances program modularity by allowing the data structure implementation to be
separate from the rest of the program.
 Enables code reusability as the same data structure can be used in multiple
programs with the same interface.
 Promotes the concept of data hiding by encapsulating data and operations into a
single unit, which enhances security and control over the data.
 Supports polymorphism, which allows the same interface to be used with different
underlying data structures, providing flexibility and adaptability to changing
requirements.

Disadvantages of ADT in Data Structures


There are some potential disadvantages of ADT in Data Structures:
 Overhead: Using ADTs may result in additional overhead due to the need for
abstraction and encapsulation.
 Limited control: ADTs can limit the level of control that a programmer has
over the data structure, which can be a disadvantage in certain scenarios.
 Performance impact: Depending on the specific implementation, the performance of an ADT may be lower than that of a custom data structure designed for a specific application.

Overview of time and space complexity analysis


What is an Algorithm?
An algorithm is a step-by-step procedure to solve a particular problem. It contains a finite set of instructions that are carried out in a specific order to perform a specific task. It is not the complete program or code; it is just the solution (logic) of a problem.
Characteristics of an Algorithm
The following are the characteristics of an algorithm:
o Input: An algorithm has some input values. We can pass 0 or some input value to
an algorithm.
o Output: We will get 1 or more output at the end of an algorithm.
o Unambiguity: An algorithm should be unambiguous which means that the
instructions in an algorithm should be clear and simple.
o Finiteness: An algorithm should have finiteness. Here, finiteness means that
the algorithm should contain a limited number of instructions, i.e., the instructions
should be countable.
o Effectiveness: An algorithm should be effective as each instruction in an
algorithm affects the overall process.
o Language independent: An algorithm must be language-independent so that
the instructions in an algorithm can be implemented in any of the languages with
the same output.

Dataflow of an Algorithm:
o Problem: A problem can be a real-world problem or any instance of one. To solve the problem, we need to create a program or a set of instructions. The set of instructions / steps to solve the problem is known as an algorithm.
o Algorithm: An algorithm will be designed for a problem which is a step by step
procedure.
o Input: After designing an algorithm, the required and the desired inputs are
provided to the algorithm.
o Processing unit: The input will be given to the processing unit, and the
processing unit will produce the desired output.
o Output: The output is the outcome or the result of the program.

Measuring the algorithms:


We have two types of measurements to judge the algorithms.
o Time Complexity
o Space Complexity
1) Time Complexity: Time complexity of an algorithm is the amount of time (CPU time) it needs
to complete the task (to execute a particular algorithm).
The following factors will be considered for time complexity:
No. of operations, comparisons, loops, pointer references, function calls to outside.
The time taken for an algorithm is of two types:
a) Compilation Time
b) Run Time
a) Compile Time:
 Compilation time is the time taken to compile an algorithm.
 While compiling it checks for syntax and semantic errors, and links with the
standard libraries.
b) Run Time:
 It is the time to execute the compiled program.
 The run time of an algorithm depends upon the number of instructions present in the algorithm.
 Run time is calculated only for executable statements and not for declaration
statements.
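As a rough illustration of how the number of executed statements drives run time (a hedged sketch with arbitrary sample values): a single loop over n elements performs on the order of n operations, while a nested loop performs on the order of n*n operations.

#include <stdio.h>

int main(void)
{
    int n = 5, count1 = 0, count2 = 0;

    /* single loop: the body executes n times -> time grows linearly with n */
    for (int i = 0; i < n; i++)
        count1++;

    /* nested loop: the body executes n*n times -> time grows quadratically with n */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count2++;

    printf("single loop: %d operations, nested loop: %d operations\n", count1, count2);
    return 0;
}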

2) Space Complexity: Space complexity of an algorithm is the amount of memory it needs to run
to complete the task.
The following factors will be considered for space complexity:
No. of variables, data structures, allocations, function calls.
Generally, the space occupied by a program is as follows:
 A fixed amount of memory space allotted to the data types used.
 The space occupied by the code and the variables of the program.
 Additional space that increases or decreases depending on whether the program uses iterative or recursive procedures (a small sketch contrasting the two follows below).
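A small sketch contrasting the space used by an iterative and a recursive version of the same task (summing 1..n); the iterative version uses a fixed number of variables, while the recursive version keeps one stack frame per call (function names are illustrative):

#include <stdio.h>

/* iterative sum: a fixed number of variables regardless of n -> O(1) extra space */
int sumIterative(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

/* recursive sum: one stack frame for each of the n calls -> O(n) extra space */
int sumRecursive(int n)
{
    if (n == 0)
        return 0;
    return n + sumRecursive(n - 1);
}

int main(void)
{
    printf("%d %d\n", sumIterative(10), sumRecursive(10));   /* prints 55 55 */
    return 0;
}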

Asymptotic Analysis:
Asymptotic analysis of an algorithm refers to defining the mathematical
foundation/framing of its run-time performance.
Using asymptotic analysis, we can conclude the best case, average case,
and worst case scenario of an algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the
algorithm, it is concluded to work in a constant time. Other than the "input" all
other factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.
For example, the running time of one operation may be computed as f(n) = n, while for another operation it may be computed as g(n) = n^2.
This means the running time of the first operation increases linearly with n, while the running time of the second operation grows quadratically (much faster) as n increases. Similarly, the running times of both operations will be nearly the same if n is small.
Time complexity of an algorithm is classified in to three types:
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.

Asymptotic Notations:
Different types of asymptotic notations are used to represent the
complexity of an algorithm. Following asymptotic notations are used to calculate
the running time complexity of an algorithm.
1. O − Big Oh Notation
2. Ω − Omega Notation
3. θ − Theta Notation
Execution time of an algorithm depends on the instruction set,
processor speed, disk I/O speed, etc. Hence, we estimate the efficiency of an
algorithm asymptotically.

Time function of an algorithm is represented by T(n), where n is the input size.

1. Big Oh, O: Asymptotic Upper Bound


The notation Ο(n) is the formal way to express the upper bound of an algorithm's
running time. It is the most commonly used notation. It measures the worst case time
complexity or the longest amount of time an algorithm can possibly take to complete.

A function f(n) is said to be of the order of g(n), written f(n) = O(g(n)), if there exist a positive integer n0 and a positive constant k such that
f(n) ⩽ k.g(n) for all n > n0.
Hence, the function g(n) is an upper bound for the function f(n), since k.g(n) eventually grows at least as fast as f(n).

Example
Let us consider the function
f(n) = 4.n^3 + 10.n^2 + 5.n + 1. Considering g(n) = n^3,
f(n) ⩽ 5.g(n) for all sufficiently large values of n (in fact, for all n ⩾ 11).
Hence, the complexity of f(n) can be represented as O(g(n)), i.e. O(n^3).
2. Omega, Ω: Asymptotic Lower Bound
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity, or the minimum amount of time an algorithm can possibly take to complete.
We say that f(n) = Ω(g(n)) when there exists a constant c such that f(n) ⩾ c.g(n) for all sufficiently large values of n. Here n is a positive integer. It means the function g is a lower bound for the function f; after a certain value of n, f will never go below c.g(n).

Example
Let us consider the function
f(n) = 4.n^3 + 10.n^2 + 5.n + 1. Considering g(n) = n^3,
f(n) ⩾ 4.g(n) for all values of n > 0.
Hence, the complexity of f(n) can be represented as Ω(g(n)), i.e. Ω(n^3).

3. Theta, θ: Asymptotic Tight Bound
The notation θ(n) is the formal way to express both the lower bound and the upper
bound of an algorithm's running time. Someone may confuse the theta notation as the
average case time complexity; while theta notation could be almost accurately used to
describe the average case, other notations could be used as well.
We say that f(n) = θ(g(n)) when there exist constants c1 and c2 such that c1.g(n) ⩽ f(n) ⩽ c2.g(n) for all sufficiently large values of n. Here n is a positive integer.
This means the function g is a tight bound for the function f.

Example
Let us consider the function f(n) = 4.n^3 + 10.n^2 + 5.n + 1.
Considering g(n) = n^3,
4.g(n) ⩽ f(n) ⩽ 5.g(n) for all sufficiently large values of n.
Hence, the complexity of f(n) can be represented as θ(g(n)), i.e. θ(n^3).

Common Asymptotic Notations


Following is a list of some common asymptotic notations −
constant O(1)
logarithmic O(log n)
linear O(n)
n log n O(n log n)
quadratic O(n^2)
cubic O(n^3)
polynomial n^O(1)
exponential 2^O(n)

Searching: Searching is the process of finding some particular element in the list. If the element is
present in the list, then the process is called successful, and the process returns the location of that
element; otherwise (if not found), the search is called unsuccessful.

There are two popular search methods:
o Linear Search
o Binary Search.

1) Linear Search:
Linear search is also called the sequential search algorithm. It is the simplest searching technique.
Linear search is the process of searching for the required element from one end of the list to the other in sequential order, i.e., checking the elements one by one. It checks every element of the list until the required element is found.
Algorithm for Linear Search in C
Consider the key variable as the element we are searching for.
1. Check each element in the list by comparing it to the key.

2. If any element is equal to the key, return its index.

3. If we reach the end of the list without finding the element equal to the key, return some value to
represent that the element is not found.

To understand the working of the linear search algorithm, let's take an unsorted array, for example 70, 40, 30, 11, 57, 41, 25, 14, 52 (the same array used in the program below).

Let the element to be searched be K = 41 (K, or key, is just a variable name).

Now, start from the first element and compare K with each element of the array. The value of K, i.e., 41, does not match the first element of the array, so we move to the next element and repeat the same process until the element is found.

When the element is found, the algorithm returns the index (position) of the matched element.

Linear Search complexity:


Now, let's see the time complexity of linear search in the best case, average case, and
worst case. We will also see the space complexity of linear search.
1. Time Complexity

Case Time Complexity

Best Case O(1)

Average Case O(n)

Worst Case O(n)


o Best Case Complexity - In Linear search, best case occurs when the element we
are finding is at the first position of the array. The best-case time complexity of
linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is
O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are looking for is present at the end of the array, or when the target element is not present in the array at all and we have to traverse the entire array. The worst-case time complexity of linear search is O(n).
The time complexity of linear search is O(n) because every element in the array is compared at most once.
2.Space Complexity
Space Complexity O(1)

o The space complexity of linear search is O(1).

Advantages of Linear Search:

 Linear search can be used irrespective of whether the array is sorted or not. It can
be used on arrays of any data type.
 Does not require any additional memory.
 It is a well-suited algorithm for small datasets.
Drawbacks of Linear Search:
 Linear search has a time complexity of O(N), which in turn makes it slow for
large datasets. Not suitable for large arrays.

C program to implement the linear search:


#include <stdio.h>
int linearSearch(int a[ ], int n, int val)
{ int i;
// Going through the array sequentially
for(i=0;i<n;i++)
{
if (a[i] == val)
return i+1;
}
return -1;
}
int main( )
{
int i;
int a[ ] = {70, 40, 30, 11, 57, 41, 25, 14, 52}; // given array
int val;
int n = sizeof(a) / sizeof(a[0]); // size of array
int res;
printf("\nEnter the element to be search:");
scanf("%d",&val);
res = linearSearch(a, n, val); // Store result
printf("The elements of the array are - ");
for (i = 0; i < n; i++)
printf("%d ", a[i]);
printf("\nElement to be searched is - %d", val);
if (res == -1)
printf("\nElement is not present in the array");
else
printf("\nElement is present at %d position of array", res);
return 0;

}
Output:
Enter the element to be searched: 41
The elements of the array are - 70 40 30 11 57 41 25 14 52
Element to be searched is - 41
Element is present at 6 position of array

Binary Search:
Binary search is a search technique that works efficiently on sorted lists. To search for an element in a list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach. In this approach the list is divided
into two halves, and the item is compared with the middle element of the list. If the match is found
then, the location of the middle element is returned. If not matched, it will search into either of the
halves (right part or left part) depending upon the result produced through the match.
NOTE: Binary search can be implemented on sorted array elements. If the list elements are not
arranged in a sorted manner, we need to sort them first.

In this algorithm,
1) Divide the search space into two halves by finding the middle index “mid”.

2) Compare the middle element of the search space with the key.
3) If the key is found / matched with the middle element, the process is terminated.
4) If the key is not found at middle element, choose which half will be used as the next search
space.
 If the key is smaller than the middle element, then the left side is used for next search.
 If the key is larger than the middle element, then the right side is used for next search.
5) This process is continued until the key is found or the total search space is complete.

Working of Binary search

To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to
understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method (with looping statement)
o Recursive method (with recursion concept)
The recursive method of binary search follows the divide and conquer approach.
Let the elements of the array be a sorted list of 9 elements (indices 0 to 8), and let the element to search for be K = 56.

We use the following formula to calculate the mid of the array -
mid = (beg + end)/2 (or) mid = (low + high)/2

So, in the given array -
beg = 0
end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.
The key K is compared with the element at the mid index; depending on the result, the search continues in the left or right half with new beg and end values, and the mid is recalculated for that half. This is repeated until the element is found, and the algorithm then returns the position of the matched element.

Binary Search complexity


Now, let's see the time complexity of Binary search in the best case, average case, and
worst case. We will also see the space complexity of Binary search.
1.Time Complexity

Case Time Complexity

Best Case O(1)

Average Case O(log n)

Worst Case O(log n)

o Best Case Complexity - In Binary search, the best case occurs when the element to search is found in the first comparison, i.e., when the first middle element itself is the element to be searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we have to keep reducing the search space till it has only one element. The worst-case time complexity of Binary search is O(log n).
2.Space Complexity

Space Complexity O(1)

o The space complexity of binary search is O(1).


Advantages of Binary Search:
 Binary search is faster than linear search, especially for large arrays.
 More efficient than other searching algorithms with a similar time complexity
 Binary search is well-suited for searching large datasets that are stored in
external memory, such as on a hard drive or in the cloud.
Drawbacks of Binary Search:
 The array should be sorted.
 Binary search requires that the data structure being searched be stored in
contiguous memory locations.
 Binary search requires that the elements of the array be comparable, meaning that they must be able to be ordered.

C Program to implement Binary search:

#include <stdio.h>
int binarySearch(int a[ ], int beg, int end, int val)
{
int mid;
if(end >= beg)
{ mid = (beg + end)/2;
/* if the item to be searched is present at middle */
if(a[mid] == val)
{
return mid+1;
}
/* if the item to be searched is greater than the middle element, it can only be in the right subarray */
else if(a[mid] < val)
{
return binarySearch(a, mid+1, end, val);
}
/* if the item to be searched is smaller than the middle element, it can only be in the left subarray */
else

{
return binarySearch(a, beg, mid-1, val);
}
}
return -1;
}
int main( )
{
int a[ ] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
int val = 40; // value to be searched
int n = sizeof(a) / sizeof(a[0]); // size of array
int res = binarySearch(a, 0, n-1, val); // Store result
printf("The elements of the array are : ");
for (int i = 0; i < n; i++)
printf("%d ", a[i]);
printf("\nElement to be searched is : %d", val);
if (res == -1)
printf("\nElement is not present in the array");
else
printf("\nElement is present at %d position of array", res);
return 0;
}
Output:
The elements of the array are : 11 14 25 30 40 41 52 57 70
Element to be searched is : 40
Element is present at 5 position of array
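The program above uses the recursive method; as noted earlier, binary search can also be implemented iteratively with a loop. A minimal iterative sketch, assuming the same sorted array and the same 1-based position convention as the program above:

#include <stdio.h>

/* iterative binary search: returns the 1-based position, or -1 if not found */
int binarySearchIter(int a[ ], int n, int val)
{
    int beg = 0, end = n - 1;
    while (beg <= end)
    {
        int mid = (beg + end) / 2;
        if (a[mid] == val)
            return mid + 1;       /* found at this position */
        else if (a[mid] < val)
            beg = mid + 1;        /* search the right half */
        else
            end = mid - 1;        /* search the left half */
    }
    return -1;                    /* search space exhausted */
}

int main(void)
{
    int a[ ] = {11, 14, 25, 30, 40, 41, 52, 57, 70};
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", binarySearchIter(a, n, 40));   /* prints 5 */
    return 0;
}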

Sorting Techniques:
A Sorting Algorithm/ Technique is used to rearrange a given array or list of elements into either
ascending order or descending order.
Here ascending or descending will be done according to a comparison operator on the elements.
We have different types of sorting techniques; they are as follows:
1) Bubble sort
2) Selection sort
3) Insertion sort
4) Merge sort
5) Quick sort
6) Heap sort
and a few more.
(In your syllabus, only the Bubble sort, Selection sort, and Insertion sort topics are covered.)
Bubble Sort: (also called as Exchange Sort):
Bubble sort is a simple, comparison-based sorting algorithm: each pair of adjacent elements is compared, and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets, as its average and worst case complexity are O(n^2), where n is the number of items.

Bubble Sort algorithm / Working Process:


Algorithm: We assume list is an array of n elements. We further assume that swap function swaps the
values of the given array elements.
begin BubbleSort(list)
for each pass from 1 to n-1
for all adjacent elements of list
if list[i] > list[i+1]
swap(list[i], list[i+1])
end if
end for
end for
return list
end BubbleSort
Step 1: Compare list[i] and list[i+1] and arrange them in the desired order. This is continued until we compare list[n-1] and list[n].
Step 2: After the first pass, the largest element in the list has moved to the end. So we repeat Step 1 with one less comparison.
..........
Step n-1: Here, we compare only the two elements list[1] and list[2] and arrange them.
Finally, the list is in increasing order.
We take an unsorted array for our example. After multiple iterations, the elements are arranged in ascending order.
// C program for implementation of Bubble sort
#include <stdio.h>
// Swap function
void swap(int* arr, int i, int j)
{
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
// A function to implement bubble sort
void bubbleSort(int arr[ ], int n)
{
    int i, j;
    for (i = 0; i < n - 1; i++)
    {
        // Last i elements are already in place
        for (j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1])
                swap(arr, j, j + 1);
    }
}
// Function to print an array
void printArray(int arr[ ], int size)
{
int i;
for (i = 0; i < size; i++)
printf("%d ", arr[i]);
printf("\n");
}

// Driver code
int main( )
{
int arr[ ] = { 5, 1, 4, 2, 8 };
int N = sizeof(arr) / sizeof(arr[0]);
printf("Array elements before sorting: ");
printArray(arr, N);
bubbleSort(arr, N);
printf("Sorted array: ");
printArray(arr, N);
return 0;
}

OUTPUT:
Array elements before sorting: 5 1 4 2 8
Sorted array: 1 2 4 5 8

Bubble sort complexity
Now, let's see the time complexity of bubble sort in the best case, average case, and
worst case. We will also see the space complexity of bubble sort.
1.Time Complexity
Case Time Complexity

Best Case O(n)

Average Case O(n^2)

Worst Case O(n^2)


o Best Case Complexity - It occurs when there is no sorting required, i.e. the array
is already sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of bubble sort is O(n^2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order, i.e., you have to sort the array in ascending order but its elements are in descending order. The worst-case time complexity of bubble sort is O(n^2).
2.Space Complexity

Space Complexity O(1)

Stable YES

o The space complexity of bubble sort is O(1), because only a single extra variable is required for swapping.
Advantages of Bubble Sort:
 Bubble sort is easy to understand and implement.
 It does not require any additional memory space.
 It is a stable sorting algorithm, meaning that elements with the same key value
maintain their relative order in the sorted output.
Disadvantages of Bubble Sort:
 Bubble sort has a time complexity of O(n^2), which makes it very slow for large data sets.
 Bubble sort is a comparison-based sorting algorithm, which means that it requires a comparison operator to determine the relative order of elements in the input data set. This can limit the efficiency of the algorithm in certain cases.

Selection Sort:
Selection sort is a simple sorting algorithm. The selection Sort algorithm is used to arrange a
list of elements in a particular order either in Ascending or in Descending.
In selection sort, the first element in the list is selected and compared repeatedly with all the remaining elements in the list. If any element is smaller than the selected element (for ascending order), the two are swapped. Then the element at the second position in the list is selected and compared with all the remaining elements. If any element is smaller than the selected element, the two are swapped. This procedure is repeated till the entire list is sorted.
This sorting algorithm is a comparison-based algorithm in which the list is divided into two
parts, the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is
empty and the unsorted part is the entire list.

This process starts from the leftmost element and moves the unsorted array boundary one element to the right at each step.

Step by Step Process:


The selection sort algorithm is performed using following steps...
 Step 1: Select the first element of the list (i.e., Element at first position in the list).
 Step 2: Compare the selected element with all other elements in the list.
 Step 3: For every comparison, if any element is smaller than selected element (for
Ascending order), then these two are swapped.
 Step 4: Repeat the same procedure with next position in the list till the entire list is sorted.
Selection sort complexity
Now, let's see the time complexity of selection sort in best case, average case, and in
worst case. We will also see the space complexity of the selection sort.
1. Time Complexity
Case Time Complexity

Best Case O(n^2)

Average Case O(n^2)

Worst Case O(n^2)


o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case time complexity of selection sort is O(n^2).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of selection sort is O(n^2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order, i.e., you have to sort the array in ascending order but its elements are in descending order. The worst-case time complexity of selection sort is O(n^2).
2.Space Complexity

Space Complexity O(1)

Stable NO
o The space complexity of selection sort is O(1), because only a single extra variable is required for swapping.

Advantages of Selection Sort Algorithm


 Simple and easy to understand.
 Works well with small datasets.
Disadvantages of the Selection Sort Algorithm
 Selection sort has a time complexity of O(n^2) in the worst and average case.
 Does not work well on large datasets.
 Does not preserve the relative order of items with equal keys which mean it is not
stable.

Example program for Selection Sort:
#include <stdio.h>
void selection(int arr[ ], int n)
{
int i, j, small;

for (i = 0; i < n-1; i++) // One by one move boundary of unsorted subarray
{
small = i; //minimum element in unsorted array

for (j = i+1; j < n; j++)


if (arr[j] < arr[small])
small = j;
// Swap the minimum element with the first element
int temp = arr[small];
arr[small] = arr[i];
arr[i] = temp;
}
}
void printArr(int a[ ], int n) /* function to print the array */
{
int i;
for (i = 0; i < n; i++)
printf("%d ", a[i]);
}
int main()
{
int a[ ] = { 12, 31, 25, 8, 32, 17 };
int n = sizeof(a) / sizeof(a[0]);
printf("Before sorting array elements are : \n");
printArr(a, n);
selection(a, n);
printf("\nAfter sorting array elements are : \n"); printArr(a,
n);
return 0;
}
OUTPUT:
Before sorting array elements are:
12 31 25 8 32 17
After sorting array elements are:
8 12 17 25 31 32

Insertion Sort:
Sorting is the process of arranging a list of elements in a particular order (Ascending or Descending).
The insertion sort algorithm arranges a list of elements in a particular order. In insertion sort, every iteration moves one element from the unsorted portion to the sorted portion, until all the elements in the list are sorted.
Step by Step Process:
The insertion sort algorithm is performed using following steps...

 Step 1: Assume that first element in the list is in sorted portion of the list and remaining all
elements are in unsorted portion.
 Step 2: Consider first element from the unsorted list and insert that element into the sorted list
in order specified.
 Step 3: Repeat the above process until all the elements from the unsorted list are moved
into the sorted list.
( or )

Step 1 − If it is the first element, it is already sorted. return 1;


Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that is greater than the value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted

Example program for Insertion Sort:
#include <stdio.h>
int main( )
{ int n, i, j, temp;
int arr[64];
printf("Enter number of elements:\n");
scanf("%d", &n);
printf("Enter %d integers:\n", n);
for (i = 0; i < n; i++)
{
scanf("%d", &arr[i]);
}
for (i = 1 ; i <= n - 1; i++)
{
j = i;
while ( j > 0 && arr[j-1] > arr[j])
{
temp = arr[j];
arr[j] = arr[j-1];
arr[j-1] = temp;
j--;
}
}
printf("Sorted list in ascending order:\n");
for (i = 0; i <= n - 1; i++)
{
printf("%d\n", arr[i]);
}
return 0;
}
OUTPUT:
Enter number of elements:
5
Enter 5 integers:
7
59
12
1
64
Sorted list in ascending order:
1
7
12
59
64
Insertion sort complexity
Now, let's see the time complexity of insertion sort in best case, average case, and in
worst case. We will also see the space complexity of insertion sort.
1.Time Complexity
Case Time Complexity
Best Case O(n)
Average Case O(n^2)
Worst Case O(n^2)
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average case time complexity of insertion sort is O(n^2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order, i.e., you have to sort the array in ascending order but its elements are in descending order. The worst-case time complexity of insertion sort is O(n^2).
2.Space Complexity
Space Complexity O(1)

Stable YES

o The space complexity of insertion sort is O(1), because only a single extra variable is required for swapping.
Advantages of Insertion Sort Algorithm
 Simple and easy to understand.
 Works well with small or nearly sorted datasets.
 It is a stable sorting algorithm: items with equal keys keep their relative order.
Disadvantages of the Insertion Sort Algorithm
 Insertion sort has a time complexity of O(n^2) in the worst and average case.
 Does not work well on large datasets.
